How to use DevOps workflows without risks
The DevOps methodology was introduced a decade ago as an answer to the unpredictability and risks of the Git-based software delivery flow under the Waterfall model. However, DevOps has risks of its own, and these have to be accounted for and mitigated. Today we explain how to use DevOps workflows without risks.
The Waterfall software delivery model is very straightforward. New project requirements are carved in stone, the code is written, tested, pushed to release and run in production. The stages cannot change places or happen simultaneously, which limits the efficiency of software development dramatically.
Waterfall model shortcomings and limitations
The standard Git-based workflow looks predictable, but in practice it is slow and unreliable. Developers write new code and submit it to the central repository for testing. The QA engineers then issue requests for testing environments to be configured so they can test the code, unless the company is large enough to afford keeping a testing server farm online at all times. If this is not the case, the system administrators or Ops engineers have to provide these environments and configure them for the task.
However, this has to be done either manually, according to the requirements of every specific testing situation, or through a semi-automated, script-based pipeline. These scripts have to be constantly adjusted and updated to remain relevant as product development progresses. In either case, the burden falls on the system administrators, who also have to keep the infrastructure running, sort out issues, deploy hotfixes in production, and so on.
As a result, even if the new code does not contain any bugs, it takes quite some time to go through the testing routine. Then it must be pushed to the staging server to check that the new code does not break existing functionality. Then a new app version must be released to production, and a backup point must be made beforehand, so the production environment has to be stopped.
Even if there are no post-release crashes, you will have some downtime. And if some bugs surface only after the release (because the conditions of the production environment differ quite a lot from the testing and staging conditions), you have to roll back and sort them out. This means the code goes back to the developers, who fix the bugs or write new code. Rinse and repeat.
This situation with developing software and running it in production was so dire that products were updated once or twice a year, and major enterprise software like MS Office was updated only once every couple of years. The core problem was “throwing the responsibility over the wall” between the devs, the QA engineers and the system administrators, which resulted in a culture of playing it safe: all the parties involved were afraid to innovate so as not to carry the blame for the results. The devs wrote the code, and then it was someone else’s headache to make it work in production.
DevOps transformation — automate the routine, optimize the performance
To overcome the pitfalls described above, a group of software engineers suggested a new software delivery approach back in 2009. They called it DevOps, and it was a practical implementation of the Agile methodology. Its main goal was to remove all the unnecessary waste from the software development process and introduce streamlined software delivery pipelines to make the process more predictable.
There is a popular misconception that to enable DevOps you need to combine the Devs with QA and Ops, sit them in one room, have them teach each other to code and deploy, remove the silos of tasks and responsibilities, and let the DevOps magic commence. This is actually quite far from reality.
The core idea of DevOps is that any product is developed for a limited period of time and then runs in production for a nearly unlimited time, so the needs of the production environment are paramount. In DevOps workflows, Ops engineers know they will have to run the software, product or service, so they make sure it is built well from the very beginning. They ensure the Devs and the QA specialists have as few bottlenecks in their workflows as possible, to minimize the number and severity of issues during software development.
Ideally, when the Devs receive the task to develop a new application or product feature, all three of the parties involved sit down and decide HOW IT SHOULD RUN IN PRODUCTION. Based on that, they decide whether it should be part of a monolithic application or a separate microservice, and how it should be scaled, updated, rebooted and shut down. Based on these software design decisions, the Devs configure the so-called IaC approach and CI/CD pipelines to automate the process of code building, testing, staging and releasing. The infrastructure must be cloud-based, though, as virtualization is the crucial prerequisite for CI/CD functionality.
The Devs write automated unit and integration tests BEFORE they write the code itself, so the QA engineers don’t have to do low-level testing of each new batch of code. The Devs have to keep the unit test codebase relevant, yes, but it saves a ton of time and effort in the long run. The developers are then able to produce the code in neat small batches and do the basic testing themselves. The key point here is that all a developer needs to test a new batch of code is to launch a scripted command, provided once by a DevOps engineer, which can then be reused as many times as needed.
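To make this more concrete, here is a minimal sketch of what such a scripted helper could look like, assuming it is packaged as a docker-compose file; the service names, the pytest command and the APP_URL variable are hypothetical placeholders rather than part of any specific project:

```yaml
# docker-compose.test.yml, a hypothetical helper a DevOps engineer prepares once
services:
  app:
    build: .                     # build the application image from the local Dockerfile
    environment:
      APP_ENV: test
  tests:
    build: .
    command: pytest -q           # assumes the project keeps its test suite runnable via pytest
    depends_on:
      - app                      # start the application before the test run
    environment:
      APP_URL: http://app:8000   # hypothetical endpoint the tests call
```

From then on, every developer tests a new batch of code with the same one-liner, something like `docker compose -f docker-compose.test.yml run --rm tests`, without touching the environment configuration at all.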
Once the code testing is done, the CI/CD pipeline pushes all the code to the staging server, where the latest application version can undergo high-level testing by the QA engineers. If everything is good there, the tested code is pushed to production through rolling updates, so the end users experience no downtime. Once the code is deployed to production, the same CI/CD pipeline logic is used to automate various aspects of cloud infrastructure management, which makes the DevOps monitoring job much easier. In addition, this allows using machine learning models for so-called predictive analytics, where the system components are automatically scaled up and down based on demand or rebooted if they fail, enabling a self-healing architecture.
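As an illustration of how zero-downtime rolling updates can be expressed declaratively, here is a minimal Kubernetes Deployment sketch; the image name, port and health-check path are assumptions made up for the example:

```yaml
# deployment.yaml, an illustrative rolling-update policy for a stateless web service
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0          # never remove an old pod before its replacement is ready
      maxSurge: 1                # add at most one extra pod during the rollout
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.example.com/web:1.2.3   # placeholder image reference
          ports:
            - containerPort: 8080
          readinessProbe:        # traffic is only routed to pods that pass this check
            httpGet:
              path: /healthz     # hypothetical health endpoint
              port: 8080
```

Because maxUnavailable is set to 0, Kubernetes retires an old pod only after its replacement reports ready, which is exactly what keeps end users from noticing the update.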
IaC, CI and CD – what are these?
We have mentioned terms like IaC, CI and CD quite a few times already, so now it is time to explain them.
IaC stands for Infrastructure as Code, one of the basic DevOps paradigms. All the settings and parameters for all the environments your Devs, QA and Ops engineers use (IDEs, testing, staging, production, all modules for scaling and monitoring) are codified in text files, the so-called “manifests”. These manifests are used by Terraform, an open-source configuration orchestration tool from HashiCorp, to spin up any required number of instances for any environment needed in your workflows. The manifests are written in a simple declarative language (HashiCorp’s HCL), can be stored on GitHub and can be versioned by any team member just like any other code, hence the name of the approach.
All a developer needs to do to test a new batch of code is commit it to GitHub. The CI/CD tools will automatically provision the required testing environment and run the code against the pre-configured unit and integration tests. Should any parameters need updating, all it takes is changing a few values in a text file. This helps save up to 85% of the testing time in software development, and yes, it holds true for long-term projects as well.
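Terraform manifests themselves are written in HashiCorp’s declarative HCL, so we won’t reproduce them here; as a YAML-flavoured illustration of the same idea of keeping environment settings in versioned text files, here is a minimal, hypothetical Ansible playbook for configuring a testing server (the inventory group name, the port and the package list are assumptions for the example):

```yaml
# provision-test-env.yml, a rough sketch of a declarative testing-server definition
- name: Configure a disposable testing server
  hosts: test_servers            # hypothetical inventory group
  become: true
  vars:                          # the "few values in a text file" you would actually edit
    app_port: 8080
    runtime_packages:
      - python3
      - postgresql
  tasks:
    - name: Install the runtime packages
      ansible.builtin.apt:
        name: "{{ runtime_packages }}"
        state: present
        update_cache: true

    - name: Make sure the database service is running
      ansible.builtin.service:
        name: postgresql
        state: started
        enabled: true
```

Changing the environment then really is a matter of editing a couple of values under vars and committing the file, just like any other piece of code.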
CI stands for Continuous Integration. It means that instead of developing features in long-lived separate branches, which causes huge conflicts during the git merge procedure, the developers write code in small, clean batches that are quickly tested, as testing environments are provisioned in seconds thanks to IaC. If the code passes the tests, it is pushed to the staging server automatically. This is possible thanks to tools like GitLab CI, CircleCI, Jenkins and Ansible. The approach applies to both software development and cloud infrastructure management, and CI helps reduce time-to-market for new products or features by at least 35%.
CD stands for Continuous Delivery. This is the practice of taking the output of one operation and turning it into the input for the next one. For example, once a developer commits code to a GitHub repository, a webhook is triggered, a testing environment is provisioned, the code is tested, and if the tests succeed, the code moves on down the pipeline. Thanks to this, essentially any code commit can be turned into the latest application build, which is automatically tested, pushed to the staging server and later released to production.
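Put together, such a chain could be sketched in GitLab CI roughly as follows; the stage names, the wrapper scripts and the main branch name are illustrative assumptions, not a prescription:

```yaml
# .gitlab-ci.yml, a minimal sketch of a commit-to-production pipeline
stages:
  - test
  - staging
  - production

unit_tests:
  stage: test
  script:
    - ./run_tests.sh             # hypothetical wrapper around the unit and integration suites

deploy_staging:
  stage: staging
  script:
    - ./deploy.sh staging        # hypothetical deployment script
  environment:
    name: staging

deploy_production:
  stage: production
  script:
    - ./deploy.sh production
  environment:
    name: production
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
      when: manual               # keep a human approval gate before production
```

The manual gate on the production job is a common compromise: every commit is tested and staged automatically, while the final push to production still requires a deliberate click.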
In addition, CD tools like Jenkins, Ansible, SaltStack, etc. can do the same for any infrastructure operation: scaling up or down, data processing, backups and so on. In short, applying CI/CD pipelines shortens the time-to-market for new product features, simplifies infrastructure management and improves the cost-efficiency of your IT operations in general.
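Scaling is a good example: a Kubernetes HorizontalPodAutoscaler, sketched below against the hypothetical web Deployment from the earlier example, grows and shrinks the number of running pods based on CPU demand without anyone touching the infrastructure by hand:

```yaml
# hpa.yaml, an illustrative autoscaling policy for the Deployment sketched above
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                    # the hypothetical Deployment from the rolling-update example
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70 # add pods when average CPU load exceeds 70%
```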
DevOps risks and limitations
As good as it sounds, the DevOps methodology still has some limitations.
- When you go for open-source tools, you depend on the goodwill of the community to keep them updated and relevant
- When you partner with IT services providers to get DevOps services, you depend on their internal processes regarding releases and updates
- When you develop an app, it must work across all operating systems and browsers and be able to integrate with various other apps to form a holistic software ecosystem, as this greatly increases its value
- When you work with customers from the EU or the US, you must comply with various regulatory acts, like the GDPR, the PATRIOT Act, etc.
- If your application has an API, it must remain compatible with the third-party APIs it is supposed to interact with
As you can see, these limitations are not critical, yet they must be handled to avoid any hurdles.
DevOps workflows without risks
Below are the ways to negate or overcome the limitations mentioned above:
- Use the most popular open-source tools like Terraform, Kubernetes, Jenkins, Ansible, Docker, etc. They are actively maintained by large communities, and Docker containers can run on any OS, so your app behaves the same wherever it is deployed, which covers most of the compatibility concerns.
- Branch out the CI/CD processes, so that a single branch serves a separate project. This way, you can adjust the configurations to the needs of every app (see the sketch after this list)
- Integrate your communication tools with third-party security update channels. If there is a security update for one of your tools, you want to get a notification about it at once
- Not all APIs are equally useful. Being able to integrate your product with some shiny new tool via API does not mean you should do it. Keep your CI/CD pipeline a lean, mean automation machine.
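To illustrate the branching point from the list above, here is a hypothetical excerpt from a .gitlab-ci.yml where each long-lived branch drives its own deployment job; the branch names and the deploy script are placeholders:

```yaml
# .gitlab-ci.yml excerpt: one deployment job per branch/project
stages:
  - deploy

deploy_app_a:
  stage: deploy
  script:
    - ./deploy.sh app-a                  # placeholder deployment script
  rules:
    - if: $CI_COMMIT_BRANCH == "app-a"   # runs only on the app-a branch

deploy_app_b:
  stage: deploy
  script:
    - ./deploy.sh app-b
  rules:
    - if: $CI_COMMIT_BRANCH == "app-b"
```

This way each project keeps its own configuration, and the pipelines of different apps do not interfere with each other.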
In short, some common sense helps you get the most out of DevOps best practices and work around these CI/CD limitations.
Conclusions
Undergoing a DevOps transformation is a great way to improve the cost-efficiency and performance of IT operations for your business. The only challenge here is finding a good IT services provider to help you implement these glorious tools and workflows in your company. IT Svit can help with this, as we are one of the leading Managed Services Providers and will gladly support your company in this endeavor. Should you want to reap the CI/CD benefits and use DevOps workflows without risks, we would be glad to assist!