Do you need to secure 14 signatures and present technical documentation just to run a script on your production database? Does it take a group of enterprise architects to approve a change to your application? Or do you have no restrictions whatsoever, yet are so afraid of touching the production server that you avoid it in case the slightest change brings everything down?
You are not alone… the word ‘Production’ has become synonymous with ‘Hallowed’ and a culture of fear dominates all changes made to a working production environment. An upcoming production deployment can feel like the Big Bad Wolf is coming to blow your house down.
Teams who are afraid of deploying changes to production generally exhibit these symptoms:
- Heavily gated approval processes
- Testing teams reviewing the build only after all the work is done
- Long intervals between production pushes (greater than 90 days)
- Limited number of people who are capable of executing the deployment
There are others, but the above are the usual signs of teams that are not confident in their ability to successfully push to production.
Facing the Big Bad Wolf
Nothing beats practice. Would you be able to take whatever you are working on now and have it deployed to production two weeks from today? Why not? (Hint: there are likely many reasons you can’t!)
The key here lies in taking steps towards Continuous Delivery. If you are regularly pushing to production, your team will develop the mindset of getting things ready for deployment more often. Each time you make that short sprint to production, your team will optimize the development, testing, regression, and deployment automation. You’ll find out which parts of your deployment pipeline aren’t working, or are causing problems in the target environment.
You’ll also feel better about each production deployment. If you have already deployed to production a dozen times before your new site even goes live, the big launch day won’t be as scary.
So how do we do it?
- Include operations in the development of the deployment pipeline and in each production deployment. They are better placed to offer insight into areas of optimization that will maintain network security and support ongoing management of the infrastructure. This also begins to remove the need for ‘specialty’ skills (such as Sitecore development expertise) for deployments.
- Introduce automation for continuous testing. Unit tests, automated regression tests, anything that can be repeated each time and will give the test team confidence that the next push is good to go.
- Decide on a schedule. Make sure this is a schedule your team will be comfortable maintaining over a long period of time. Most operations teams execute the typical ‘maintenance patch day’ once a month for production systems. Consider whether this would work for your team as well.
- Automate as much of the ‘specialty’ knowledge for the deployment as possible. For example, make sure that you could ask anybody to do the deployment, not just a Sitecore developer. Consider tooling that can deploy packages built by the development team without manual intervention or use of the Sitecore interface.
A few example tools: TeamCity, TDS, Unicorn, Octopus Deploy
- Introduce Blue/Green deployments. Mike Edwards spoke to this at SUGCON EU this past year. You can even download his slides. This practice will really help you and the business feel more comfortable continuously deploying to production environments.
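To make the automation point concrete, here is a minimal sketch of a one-command deployment trigger that anyone on the team could run. It assumes an Octopus Deploy server is already set up; the project name, environment name, server URL, and API key below are all placeholders, and the `octo create-release` flags come from the Octopus CLI.

```python
# Sketch of a one-command production deployment trigger via the Octopus CLI.
# All names, URLs, and keys are placeholders; substitute your own values.
import subprocess

def build_release_command(project, environment, server, api_key):
    """Build the `octo create-release` invocation that creates a release
    and deploys it to the given environment in a single step."""
    return [
        "octo", "create-release",
        "--project", project,
        "--deployTo", environment,
        "--server", server,
        "--apiKey", api_key,
        "--progress",  # stream deployment progress to the console
    ]

def deploy(project, environment, server, api_key):
    # No Sitecore expertise required: the packages were already built
    # and pushed by the CI server (e.g. TeamCity + TDS/Unicorn).
    subprocess.run(
        build_release_command(project, environment, server, api_key),
        check=True,
    )

if __name__ == "__main__":
    print(" ".join(build_release_command(
        "MySitecoreSite", "Production",
        "https://octopus.example.com", "API-XXXXXXXX")))
```

Wrapping the CLI call like this also gives you one obvious place to add logging or a confirmation prompt as your process matures.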
Cheating at Blue/Green deployments for Sitecore
As you all know, I’m very fond of taking baby steps and continuously improving. There is the ‘ultimate’ solution, which you’ve seen at numerous conferences and user groups, and then there’s the thing you have time to do right now. Here are a few cheats that will help you get closer to Blue/Green right now. We all like saving time and money!
- Forget about Core and Master. In my experience, the impact of deployments on these databases to the end user is so minimal that unless you have a hard requirement to keep authoring up at all times without any issues, it’s often just cheaper to use the same Core and Master databases for both your Blue and Green instances. This simplifies your data replication.
- Consider an ‘outage’ on the authoring instance. You can have a duplicate authoring instance if you really need to, but you can save on licensing and VM costs by just taking authoring down during deployments. If you have a very long deployment window, however, you might need that extra authoring instance up for emergency investigations or publishing.
- Use a second publishing target database. If you have two ‘web’ databases (let’s call them Blue and Green for now) you don’t have to worry about figuring out database replication and the like. Just use Sitecore’s native publication mechanism to make sure the target environment is up to date based on what is in Master.
- EXTREME CHEAT: Use a DR or Load Balanced instance to maintain uptime. This isn’t anywhere near Blue/Green, but if you have multiple instances (say a disaster recovery offline instance or two content delivery instances) you can configure your load balancer to only target a single instance at a time. If you use separate publishing targets for the two instances, you can maintain uptime while you work on the other offline instance. Toggle back and forth as needed!
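As a concrete illustration of the second-publishing-target cheat: registering a ‘Green’ web database comes down to a connection string plus a database definition. This is a sketch only; the names below are placeholders, and in a real solution you would copy your full existing `web` database definition (with all its child elements) rather than the abbreviated node shown here, then create a publishing target item under /sitecore/system/Publishing targets whose ‘Target database’ field is set to the new database name.

```xml
<!-- App_Config/ConnectionStrings.config: point the new target at its own catalog -->
<add name="webgreen"
     connectionString="Data Source=myserver;Initial Catalog=Sitecore_WebGreen;User ID=sc;Password=secret" />

<!-- Include patch: register the database with Sitecore. In practice, duplicate
     your full <database id="web"> definition and change the id. -->
<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <databases>
      <database id="webgreen" singleInstance="true"
                type="Sitecore.Data.Database, Sitecore.Kernel">
        <param desc="name">$(id)</param>
        <!-- ...remaining child elements copied from the 'web' database... -->
      </database>
    </databases>
  </sitecore>
</configuration>
```

Once the publishing target item exists, it shows up as a checkbox in the Publish dialog, and Sitecore’s native publishing keeps the offline database up to date from Master with no database replication to configure.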
I hope this helped. Best of luck with your next production Sitecore deployment! Remember to continuously improve, and…
Who’s afraid of the big bad wolf? No, no, no not me.