A few years back, five people were hammering the terminal from 11 in the evening until 6 in the morning to pull off a successful release deployment.
Today we push a button which deploys everything.
Now it's your turn!
Let me guide you on the way to a boring life with a single-button-click deployment of your release.
This is a bit of a heavy topic, so even though I am only touching on things, this is a rather large article.
Automation involves many things; here are some of the key benefits for all of us:
- We know that the deployment is done correctly
- The deployment process is documented and visually described
- Anyone can deploy the release
So, what are the five people who hammered the keyboard all night doing today?
- They now have a happy family life!
- They are in the office during the day to solve important issues!
- They have more time for improvement work.
- They are standing by during release night in case anything breaks.
OK, OK, I know. Let's stop the sales pitch and get to the hard core.
Automation: start with what hurts most.
Start with deployment of applications, and then continue with provisioning of new server instances.
But if other things steal your time, choose that area instead to start your automation journey.
One of the most important things with automation is visualization: make the process visual and understandable by anyone.
A visual process is documentation in itself, but it is also something that we can communicate to non-developers, like managers.
Being able to communicate and show the process helps when we need to invest in something new that improves the process.
If you can point out in your visual process diagram that one step is manual due to the lack of a tool to automate it, this will help you motivate investment in new solutions and tools that make the process fully automated.
If you can select an automation tool that provides a visual process designer, it will help you a lot.
And if the tool visualizes the execution of the processes, it will make testing the process easier.
More importantly, visual execution is something the users of the process can look at to see progress and where in the process we currently are.
A user-friendly automation tool must make it easy for the users:
- to find the reason why a process failed
- to see in which step the process failed
- to easily get access to all information related to what was done in the step:
  - the input arguments to the step
  - what the step does
  - the execution outputs and exit statuses
If a step is an executable script or program written by you, make sure to sum up any exception with all the needed information, in a form that makes it easy to understand.
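As a minimal sketch of that last point (the function and step names are mine, not from any specific tool), a step written in Python could capture the inputs, outputs and exit status and sum them up on failure:

```python
import subprocess
import sys

def run_step(name, command, arguments):
    """Run one process step and collect everything a user needs to
    understand it afterwards: inputs, outputs and exit status."""
    result = subprocess.run(
        [command, *arguments],
        capture_output=True,
        text=True,
    )
    summary = {
        "step": name,
        "input_arguments": arguments,
        "stdout": result.stdout.strip(),
        "stderr": result.stderr.strip(),
        "exit_status": result.returncode,
    }
    if result.returncode != 0:
        # Sum up the failure in one place, in an easy-to-read form.
        print(f"STEP FAILED: {summary}", file=sys.stderr)
    return summary
```

A hypothetical call would then be `run_step("stop service", "systemctl", ["stop", "myapp"])`; the point is that the summary format is the same for every step.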
Avoid complex steps
Do not fall into the trap of doing complex things in one step. I know it is tempting, but it will cause you pain in the end.
Complex steps are difficult to troubleshoot; instead, break them down into smaller functional steps.
A common example is a deployment script that stops the service, deploys the new version and then starts the service again.
Just by reading that text we see that we have three distinct steps:
- stop service
- deploy new version
- start service
Doing too many things in one step causes many problems:
- an unclear process
- difficulties in understanding
- no way to understand the cause of a failure in the process
In other words: do not bundle things, keep it clear and simple.
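The three steps above can be sketched as separate building blocks composed into one process (a minimal illustration; the function names and the log-based bookkeeping are mine):

```python
# Each step does exactly one thing; the log records how far we got,
# so a failure points at a single, small step.
def stop_service(service, log):
    log.append(f"stop {service}")

def deploy_version(service, version, log):
    log.append(f"deploy {service} {version}")

def start_service(service, log):
    log.append(f"start {service}")

def deploy(service, version):
    """The deployment process, composed of three distinct steps."""
    log = []
    stop_service(service, log)
    deploy_version(service, version, log)
    start_service(service, log)
    return log
```

Because each step is its own unit, it can be reused in other processes and troubleshot in isolation.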
I have been lucky enough to test and play with many tools: Nolio, Rational Automation Framework, IPsoft, UrbanCode Deploy and a handful of others, all of them with different pros and cons. Selecting tools is difficult, so prepare your requirements before you start the selection process.
This is not a sales pitch! If you are interested, this page provides a list of many automation tools: http://electric-cloud.com/wiki/display/releasemanagement/Deployment+Automation+Tools
When you select a tool, find one that bundles the most free automation steps for the areas you work in.
Every step bundled in the product that you need will save you time.
Also, select a tool that can be easily extended and customised in a natural way.
And lastly, make sure that the tool can easily be integrated with any other automation tool you use in the company.
Processes: best practices
You need to group what you will automate into component types, or areas if you like. An example of a component type is:
- Java application (war/ear)
A component is something you need to control and optionally also need to deploy (install).
Most components will need to be stopped, started, deployed and configured.
Some automation tools provide process library support; this is a good thing.
Being able to create generic processes and store them in a library, to be used anywhere as-is or as a step in a more complex process, is flexible and a time saver.
Create a library for each component type or area you want to automate: first the building blocks like stop, start etc., and then more advanced processes that are still generic.
When you have the building blocks, you can create more complex but still generic processes: for example, downloading a component version from a repository such as Ivy, stopping the service, deploying the new version and starting the service again, all using your process building blocks.
As we will manage many types of applications, on more than one operating system, we need some generic processes that we can use in other processes.
Examples of generic processes are:
- Move a file or a directory
- Copy a file or a directory
- Check if a file or directory exists
- Sleep for x seconds
- Encrypt and decrypt secrets
- Get the OS type
Generic processes must be operating system agnostic, so we can use them independently of the operating system.
These types of steps will be used very frequently.
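A few of the generic steps above can be sketched with nothing but the Python standard library, which already behaves the same on Linux, Windows and macOS (the function names are my own):

```python
import platform
import shutil
from pathlib import Path

def get_os_type():
    # Returns e.g. "Linux", "Windows" or "Darwin", on any OS.
    return platform.system()

def copy_path(source, destination):
    """Copy a file or a directory, regardless of operating system."""
    src = Path(source)
    if src.is_dir():
        shutil.copytree(src, destination)
    else:
        shutil.copy2(src, destination)

def path_exists(path):
    """Check if a file or directory exists."""
    return Path(path).exists()
```

Because these helpers hide the OS details, higher-level processes built on top of them stay operating system agnostic for free.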
When you automate, you will very quickly run into the problem of managing environmental differences between development, test and production environments.
It can be anything from different users being used, to a different target directory for the same component, to secret information like usernames and passwords.
A good practice is to break out all environmental information and not make it part of your process; instead, pass this information as arguments to your process or make it accessible to the process.
This will make your processes agnostic to environment differences, and easy to reuse.
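As a sketch of that practice (the environment names, users and directories here are invented for illustration), the environment-specific values live outside the process, and the process only sees what it is given:

```python
# Environment-specific values kept outside the process itself,
# e.g. one entry per environment in a config store.
ENVIRONMENTS = {
    "test": {"user": "testdeploy", "target_dir": "/opt/myapp-test"},
    "prod": {"user": "deploy", "target_dir": "/opt/myapp"},
}

def deploy_component(component, version, env_config):
    """A generic process: everything environment-specific arrives as
    an argument, so the same process runs in test and in prod."""
    return (f"deploying {component} {version} as {env_config['user']} "
            f"into {env_config['target_dir']}")
```

The same `deploy_component` process is then reused unchanged: only the `env_config` argument differs between environments.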
I do recommend that your company has standards, as that makes operations easier.
Remember that secrets have to be stored in a safe way: if possible in an encrypted format, or in a vault of some kind.
I will touch on this topic a bit later as well.
Access to automation
Make your automation available to everyone.
This is one of your success factors: do not leave people outside, bring them in and let them benefit from the tool and contribute if they like.
Depending on the size of your organisation, this might not be as simple as just saying it: not everyone should have access to production, and not everyone should be able to access all environments.
A flexible access system is important when you select an automation tool.
Access rights can become extremely complex very fast, as there are many angles to them.
But think of it from the following perspectives:
- Functional role: operator, deployer, viewer
- Environmental access: test, production
- Instance access: access to a component instance
On top of this, people want to contribute and maintain their own processes, so you have automation tool access challenges as well.
You will have teams that need access to the tool; some teams need access to the automation processes for all your components, others only to a subset.
A system is owned by a team, but parts of that system are operated and managed by other teams.
Let's assume team is the general unit of ownership.
For each system, we have different environments: test, prod etc.
For each environment, we have different roles: operator, deployer.
With this kind of model, it is possible to provide a very dynamic and flexible access model.
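The team → system → environment → role model above can be sketched as a nested structure with a single lookup function (all team, system and user names here are hypothetical):

```python
# team -> system -> environment -> role -> members
ACCESS_MODEL = {
    "payments-team": {
        "payment-system": {
            "test": {"operator": {"alice", "bob"}, "deployer": {"alice"}},
            "prod": {"operator": {"alice"}, "deployer": set()},
        }
    }
}

def has_access(model, team, system, environment, role, user):
    """Check whether a user holds a role for a system in one environment."""
    try:
        return user in model[team][system][environment][role]
    except KeyError:
        # Unknown team/system/environment/role means no access.
        return False
```

The nesting mirrors ownership: granting someone the deployer role in test says nothing about prod, which is exactly the flexibility the model is meant to give.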
A release might be very complex, so let's have a look at a complex situation.
You might have multiple releases that sometimes include a shared resource, and dependencies between applications. Consider the scenario that we are upgrading a database that more than one application is using: we stop the applications before the database is changed, to secure consistency.
Components in the release have dependencies, like sharing the same server.
An application is not the same as a component: applications need multiple components to work, and those components have interdependencies when it comes to installation or start-up order.
For example, we have a SAS application that has three parts, A, B and C, which must be started in the order A, then B, and last C.
If we restart B, this means we need to stop C as well, and then start B and then C.
If we look at a Java application running on an application server, it has a bunch of components in its stack, on top of the operating system, in order to work:
- App server binary
- App server definition and configuration
- Application war/ear file
- Log file management
- Start and stop scripts used when rebooting the server
When we install an application stack, we need an application deployment process that can deploy the components in dependency order: the app server binary needs the JDK to be in place before it is deployed, and the application war cannot be deployed unless the server is defined.
Not all components will be part of your release, so you also need to be able to handle the relation, or context, in which a component is used.
If the JDK is used by multiple app servers on a Linux host and you want to upgrade the JDK, you must first stop all the app servers, then deploy the new version of the JDK, and then start up the app servers you previously stopped.
In other words, you must have a deployment process for the JDK that is aware of its environment and can manage it in a good way.
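The JDK scenario can be sketched as a small process that is aware of its dependents (a simplified illustration; the action log stands in for real stop/deploy/start steps):

```python
def deploy_shared_component(component, version, dependents, actions):
    """Deploy a shared component (e.g. a JDK) that several app servers
    depend on: stop every dependent first, deploy the new version,
    then start the dependents again in the same order."""
    for app_server in dependents:
        actions.append(f"stop {app_server}")
    actions.append(f"deploy {component} {version}")
    for app_server in dependents:
        actions.append(f"start {app_server}")
    return actions
```

The list of dependents comes from the environment, not from the process: the same process works whether one app server uses the JDK or ten do.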
Your monitoring will detect deviations like full disks, out-of-memory conditions and processes that die.
Incident automation is important to help analyse the cause, gather information and, if possible, also resolve the incident.
You need a tight integration with your incident and monitoring tools.
Full stack automation
DevOps, Continuous Integration and Continuous Deployment create new demands on our ability to provide a full stack setup that can be used to automatically test a new version of an application.
Let's limit the scope a bit here and say that we are running on a Linux or Windows server, leaving out Docker containers so as not to make it too complex.
You need a tight integration of your provisioning tools, and most companies do not have one tool for this but several that need to be integrated together.
We have tools here like Puppet, Ansible, Chef etc. If you have them, integrate with them.
There are two angles to this: provisioning a new system that does not exist, and provisioning an already existing system.
In CI/CD, the demand for a test system that will live for just one hour during automated testing is an example of where full stack provisioning is needed.
To make this work, you have to integrate with the build servers your developers are using, like Jenkins for example.
You need to decide which automation tool leads and controls the total process. The important thing here is that the application is installed, automated testing is started, the result is fed back to the developer, and the infrastructure is decommissioned when the test is done.
One flow: Jenkins triggers the automation tool, which sets up the infrastructure and triggers automated testing; the tests set a status for this version in the version control system; and the automation system decommissions the server.
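That flow can be sketched as a small orchestration function, with the provisioning, testing, status and decommissioning steps passed in as callables (all names are mine; the `try/finally` makes the point that the short-lived test system must go away even if the tests crash):

```python
def ci_test_flow(version, provision, run_tests, set_status, decommission):
    """Set up infrastructure, run automated tests, feed the result
    back, and always decommission the infrastructure afterwards."""
    server = provision(version)
    try:
        passed = run_tests(server, version)
        set_status(version, "passed" if passed else "failed")
        return passed
    finally:
        # Runs whether the tests passed, failed or raised an error.
        decommission(server)
```

In a real setup, `provision` would call your provisioning tool and `set_status` would talk to your version control system; the controlling automation tool owns this outer flow.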
Automated testing is vital as part of CI/CD, but I will not go into this topic as it requires its own article.
To wrap this up
I have touched on the things that are most important when you start implementing your automation.
Focus on deployment, as deployment is the most frequent activity, and it is important that it is done the same way in all environments.
When you build automation, create libraries of building blocks that each do a single thing and can be used to build up processes; this makes the steps easy to maintain while still providing the flexibility of having variations of the processes.
Do not hardcode things: store directory locations etc. outside of the process, so you have generic processes that can be used for any environment.
Break your process down into small steps, each with its own automation and clear inputs and outputs.
Use your process steps to build the complex processes.
A correction to a step will then be done in one place instead of in all your processes.
If you want tips or to have a dialogue, feel free to contact me and I will guide you.