Review of DevOps implementation results in 2018
It is quite hard to assess the state of a DevOps process when you are a part of it. IT Svit is a Managed Services Provider and one of the leaders of the IT outsourcing market in Ukraine, and we have been delivering DevOps-as-a-Service for the last 5 years.
During this period, DevOps practices and workflows have become part of our business DNA, and we can no longer imagine running IT operations any other way. However, a wide variety of our customers from the automotive, marketing, finance, retail, banking, logistics, telecom, and other industries are still doing business the way it has been conducted for decades. Big companies have big inertia, and the bigger the enterprise, the longer it takes to perform a digital transformation and leverage its benefits in full.
Long-established businesses stick to running dedicated servers or on-prem private virtual environments for as long as possible. They fear the potential business interruption and the monetary and reputational losses they associate with implementing new technology and reforming IT operations. Quite the opposite, startups, which form the majority of our customers, go for the cloud from the very beginning and adopt DevOps practices with ease: they simply do not have siloed tech, tasks, responsibilities, and workflows to disrupt. This makes them much more flexible, and they can respond to ever-changing customer preferences much faster, allowing startups to compete with long-standing leaders and beat them on their own field.
Naturally, this means global enterprises have to transform in order to remain competitive, and they have to adopt DevOps best practices. However, necessity does not always lead to a solution: many enterprises find themselves close to failing their DevOps initiatives. In this review, we take a closer look at how a DevOps transformation must evolve in order to succeed. Why should you read this post? Hopefully, it will help you evaluate the state of DevOps adoption in your company, identify the possible challenges, and apply the suggested solutions.
DevOps trends of 2018
As an MSP, IT Svit is at the forefront of DevOps evolution. We experience the trends listed below in our daily operations, and we are very pleased when our findings and insights are backed by prominent experts from other industry-leading companies. Here is what we think of the current direction of DevOps evolution:
- Enterprise DevOps is a reality. We monitor the market diligently, and we see many signs of industry-wide DevOps acceptance. MSPs frequently publish new case studies covering their collaboration with enterprises that need DevOps implementation. Global companies report significant savings and an increase in software delivery quality for their products and services. Cloud service providers announce their earnings and declare huge sums gained from projects for industry-leading enterprises. Forrester has proclaimed 2018 to be the year of enterprise DevOps adoption. However, while 80% of enterprise C-suite executives surveyed by Puppet and DORA (DevOps Research and Assessment) stated their companies are actively implementing DevOps workflows, only 50% were able to demonstrate tangible benefits of the transition and showcase any solved challenges. We describe the reasons for this discrepancy below.
- DevSecOps gains traction. We have already mentioned the importance of the DevSecOps approach to building software delivery pipelines. This point of view is confirmed by Splunk vice-president for IT markets, Rick Fitz. He says that security is becoming a standard requirement for enterprise software development, as the security requirements and best practices followed by security ops must be taught to the whole DevOps team. Security checks will be shifted to the very beginning of product development, instead of being bolted on right before the release.
- Serverless computing is on the rise. It is hard to overestimate the importance of being able to run your code without having to configure the underlying infrastructure. AWS Lambda, Azure Functions, Google Cloud Functions and similar services are steadily rising in popularity, as we covered in our state of DevOps adoption in 2017 report. Mike Kavis, managing director at Deloitte Consulting, said that the ability to abstract your code from the servers is the best part of using serverless computing in 2018. His opinion is backed by the Markets and Markets research on Functions-as-a-Service evolution, which predicts roughly 33% annual growth for this branch of cloud services, reaching $7.7 bn by 2021.
- SRE becomes the mainstream DevOps role. Site Reliability Engineering covers the areas of architectural flexibility, system automation, site reliability and developer empowerment to ensure smoother, more productive app delivery and a positive end-user experience, says Rick Fitz from Splunk. SREs can be described as Ops+, engineers who are as familiar with Python and Ruby as they are with CI/CD tools and configuration management. These specialists are currently at the pinnacle of DevOps progress, as Ops talents gain development skills to be able to react to issues in production faster.
- KPI balance becomes crucial. DevOps progress is propelled by KPIs, as using DevOps tools enables measuring things like “time from code commit to code deploy”, “lead time per change”, “mean time to repair” and “change fail rate” (see the sketch after this list for how such metrics can be derived from deployment records). Working with these indicators helps measure the success of DevOps initiatives, or highlights room for improvement or the need for a total redesign of the software delivery process. Nicole Forsgren, CEO @ DORA, has emphasized the importance of keeping all these parameters effective, as the true top performers deliver high performance in all aspects of software development, so everything works in tandem, not just one KPI or the other.
- Business success is recognized as the most valuable KPI. There is no point in minimizing software delivery time if the features delivered do not add actual business value to your products or services. Jez Humble, CTO @ DORA, was very precise on that, saying that unless the business measures the exact impact of every new feature, it risks discovering that two-thirds of the features do not add any value, or even decrease it. Every business must use its machine-generated Big Data to analyze the efficiency of its DevOps endeavors and projects. Tim Buntel, VP of product @ XebiaLabs, contributes to this idea by stating that in 2018 the emphasis of data analytics shifts from mere ROI measurement to a minute examination of the ways each new DevOps investment or process improvement affects the bottom line, as well as business productivity as a whole.
- Experimentation fuels DevOps evolution. If KPIs are the engine of DevOps evolution, experimentation is the fuel that drives it. If DevOps teams have the right and the opportunity to experiment without any detrimental effect on product delivery time or performance in production, they are able to innovate much faster. Quite the contrary, if all experimentation is considered an unneeded risk, how likely is the team to innovate? The whole point of the DevOps approach is minimizing risks, but risks MUST be taken, and the point of experimentation is not to blame someone but to highlight system inefficiency and reorganize it to remove the possibility of the same failure in the future. By definition, innovation is doing something that has not been done before, so it is a risk the team must take. Fail fast, fail often and improve as a result: this is the motto of DevOps in 2018, says Jez Humble of DORA.
- CALMS is important. The definition and meaning of DevOps have constantly evolved through nearly a decade of its existence. In 2018, Nicole Forsgren from DORA says that DevOps implementation is represented by 5 major characteristics: culture, automation, lean, measurement, sharing, or CALMS. Businesses that are serious about succeeding with their digital transformation and reaping its fruits must keep close tabs on all of these components, as the whole cannot work if some part is neglected.
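To make the KPI discussion above more tangible, here is a minimal sketch of how metrics like lead time, change fail rate and mean time to repair could be derived from a simple list of deployment records. The data structure, field names and sample values are hypothetical illustrations, not the output of any specific tool.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List, Optional

@dataclass
class Deployment:
    committed_at: datetime                   # when the change was committed
    deployed_at: datetime                    # when the change reached production
    failed: bool = False                     # did the change cause a failure in production?
    restored_at: Optional[datetime] = None   # when service was restored, if it failed

def lead_time(deploys: List[Deployment]) -> timedelta:
    """Mean time from code commit to code deploy."""
    return sum((d.deployed_at - d.committed_at for d in deploys), timedelta()) / len(deploys)

def change_fail_rate(deploys: List[Deployment]) -> float:
    """Share of deployments that caused a failure in production."""
    return sum(d.failed for d in deploys) / len(deploys)

def mean_time_to_repair(deploys: List[Deployment]) -> timedelta:
    """Mean time from a failed deployment to restored service."""
    failed = [d for d in deploys if d.failed and d.restored_at]
    return sum((d.restored_at - d.deployed_at for d in failed), timedelta()) / len(failed)

# Two hypothetical deployments: one clean, one that failed and was fixed in 45 minutes
history = [
    Deployment(datetime(2018, 11, 5, 9, 0), datetime(2018, 11, 5, 11, 30)),
    Deployment(datetime(2018, 11, 6, 14, 0), datetime(2018, 11, 6, 15, 0),
               failed=True, restored_at=datetime(2018, 11, 6, 15, 45)),
]
print(lead_time(history), change_fail_rate(history), mean_time_to_repair(history))
```

The point is not the code itself but the habit: once deployments are recorded automatically, these numbers can be tracked continuously instead of being estimated once a quarter.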
Possible reasons for DevOps implementation issues
We now come to the fact we mentioned earlier: quite a lot of businesses struggle to complete their transition to DevOps practices in full, or fail to spread the initial success of pilot projects throughout the whole organization. Why does it happen? The authors of the recent DORA State of DevOps 2018 report note that C-level executives and managers tend to paint a somewhat rosy picture of the real state of DevOps implementation in their departments. When team members say they are experimenting with DevOps tools, practices and workflows (and don’t mention any particular challenges), the managers tend to report that their teams are actively using them in full.
Such a discrepancy in the data leads to the aforementioned fact that while more than 80% of the 30,000 enterprise managers and executives surveyed by Puppet reported their companies are doing DevOps, only 50% of them were able to showcase real success stories.
That said, the results of DevOps implementation across the IT industry are not as shiny as C-suite directors had expected.
The correct course of DevOps implementation
Below is the recommended course of DevOps implementation for mid-sized to enterprise businesses. As we explained above, startups are much more nimble and implement DevOps workflows with little to no trouble.
There are actually 6 stages of a successful DevOps implementation:
- Audit of the existing practices
- Normalization of the technology stack
- Standardization of processes to reduce variability
- Expansion of DevOps practices across the company
- Automation of infrastructure provisioning and configuration
- Enabling self-service capabilities
Below we describe each stage in more detail.
Stage 1 — Assessing and consolidating the existing processes and technologies
When the main stakeholders (Devs, Ops, QA, security, and management) are encouraged to begin their digital transformation journey, the first step is to assess where they stand. A complete audit of the existing infrastructure, workflows, tools, and business practices is needed.
Once the existing state of the infrastructure and the software ecosystem in place is audited in full, there are 5 major patterns to establish. These patterns are not a one-time task, but a foundation on which your digital journey will be based:
- Create reusable deployment patterns to ensure the same logic and scenario can be used for a wide variety of operations, shortening the time to market for a new feature by 24x on average (a minimal sketch of such a pattern follows this list)
- Introduce infrastructure and configuration management tools like Kubernetes, Terraform, Puppet, Ansible, Chef, etc. to unify the process of infrastructure management. The highest-performing businesses use such tools 27 times more often than the lowest performers
- Let the teams configure monitoring and alerting for the services they run. The team should be able to identify any issue and deal with it on their own, without escalating the task to another team. The best teams have such a practice in place 24 times more often than low performers
- Create a codebase of automated unit tests to reuse the testing patterns. When the QA team does not have to reinvent the wheel for every new batch of code, the time from “code commit to code deployment” shortens by 44x on average
- Let DevOps recommend new tools for other departments. Management has to accept the fact that the DevOps team responsible for running the software and infrastructure is best suited to select the appropriate toolkit updates. This helps the IT department remain the driver of innovation in the organization. In high-performing companies, DevOps teams have a say in choosing the tooling for other departments 44% of the time, as opposed to 1% in the lowest performers.
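As an illustration of the first pattern, here is a minimal sketch of a reusable deployment routine: one parametrized function used for every service instead of a hand-crafted procedure per app. It assumes the services run as Kubernetes Deployments whose container name matches the service name, and the registry host, service name and tag are hypothetical placeholders rather than a prescribed setup.

```python
#!/usr/bin/env python3
"""A reusable deployment pattern: one parametrized routine used for every service (sketch)."""
import subprocess

def deploy(service: str, image_tag: str, environment: str = "staging") -> None:
    # Hypothetical conventions: each service is a Kubernetes Deployment named after itself,
    # its container carries the same name, and images live in an internal registry.
    image = f"registry.example.com/{service}:{image_tag}"
    subprocess.run(
        ["kubectl", "--namespace", environment, "set", "image",
         f"deployment/{service}", f"{service}={image}"],
        check=True,
    )
    # Wait until the rollout completes so a failed deployment surfaces immediately
    subprocess.run(
        ["kubectl", "--namespace", environment, "rollout", "status",
         f"deployment/{service}"],
        check=True,
    )

if __name__ == "__main__":
    deploy("billing-api", "v1.4.2", environment="staging")
```

Because the same routine is reused everywhere, adding a new service to the pipeline is a one-line change rather than a new runbook.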
That said, these 5 patterns can be slightly adapted to the needs of your particular organization, yet their core must remain unchanged, as they are crucial for the success of the further DevOps implementation.
Stage 2 — Normalization of the technology stack
As a business grows and completes various projects, its technology stack grows with it. However, new technologies appear all the time, and many legacy tools are utterly outdated nowadays. Normalization helps simplify the software delivery ecosystem and accomplish two main goals:
- Software development teams utilize version control. Over the years, our experience has shown that adopting version control is the crucial prerequisite for building efficient CI/CD pipelines. This is confirmed by the Puppet/DORA findings we covered in our report on the state of DevOps adoption in 2018. Teams that utilize version control systems like GitHub or Bitbucket benefit from significantly higher code delivery speed and IT performance rates, and lower change failure rates.
- Development teams work with a standard set of operating systems. It is common to see large chunks of corporate IT infrastructure running Windows Server 2008, while others run Windows Server 2012, and yet others run Windows Server 2016. This means that different apps must be built, tested and run using different sets of tools. Eliminating even one such variable simplifies IT infrastructure management a lot, so limiting the number of OSs, tools, and platforms in use to the absolute minimum is crucial.
The following steps contribute to the success of this stage:
- Build your apps on a standard technology set
- Store the code and configuration files in version control systems
- Test new configuration settings on staging servers before updating production (see the configuration-check sketch below)
- Share the source code between teams in your organization
The best way to implement these steps is to standardize the toolset based on the needs of all your apps, not just the several most important ones. Operate your production environments using proven technologies and reliable processes, and establish clear guidelines for adding new tools to the set to encourage experimentation.
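To make the "test on staging before production" step concrete, here is a minimal sketch of a pre-promotion check for a configuration file kept in version control. The file name, required keys and allowed values are hypothetical examples; the idea is simply that a broken config is rejected by a script before it ever reaches a server.

```python
#!/usr/bin/env python3
"""Sanity-check a versioned configuration file before promoting it to an environment (sketch)."""
import json
import sys

# Hypothetical set of keys every environment config must define
REQUIRED_KEYS = {"database_url", "cache_ttl_seconds", "log_level"}
ALLOWED_LOG_LEVELS = {"DEBUG", "INFO", "WARNING", "ERROR"}

def validate(path: str) -> list:
    """Return a list of human-readable problems found in the config file."""
    problems = []
    with open(path) as handle:
        config = json.load(handle)
    missing = REQUIRED_KEYS - config.keys()
    if missing:
        problems.append(f"missing keys: {sorted(missing)}")
    if config.get("log_level") not in ALLOWED_LOG_LEVELS:
        problems.append(f"unexpected log_level: {config.get('log_level')!r}")
    if not isinstance(config.get("cache_ttl_seconds"), int):
        problems.append("cache_ttl_seconds must be an integer")
    return problems

if __name__ == "__main__":
    # Usage: python check_config.py config/staging.json
    issues = validate(sys.argv[1])
    if issues:
        print("Refusing to promote configuration:", *issues, sep="\n  - ")
        sys.exit(1)
    print("Configuration looks sane; roll it out to staging first.")
```

A check like this can run in the CI pipeline on every change to the config directory, so the staging rollout only starts when the file passes.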
Stage 3 — Standardization of the processes to reduce variability
Variability is the reality, and the curse, of many legacy infrastructures and software ecosystems. As we mentioned above, over time many tools, and the apps that use them, become obsolete, but decommissioning them all at once is impossible, as it would be detrimental to the business, or simply too expensive.
Therefore, the next step of successful DevOps implementation is rebuilding the outdated parts of your system and processes using the standardized tools and workflows to reduce variability.
The following practices help to succeed at this stage:
- Reuse the same deployment patterns to build standardized copies of legacy apps and services
- Redesign the app architecture to meet the needs of your business using the new technology
- Use version control tools to store system configurations for your servers (a minimal sketch of this practice follows the list)
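Here is a minimal sketch of that last practice: capturing a server's configuration directory into a Git repository so every change is tracked and reviewable. The paths and repository layout are hypothetical placeholders, and the sketch assumes the git CLI is installed on the host.

```python
#!/usr/bin/env python3
"""Snapshot a configuration directory into a Git repository (minimal sketch)."""
import shutil
import subprocess
from datetime import datetime
from pathlib import Path

# Hypothetical locations: the live config and a local clone of the config repository
SOURCE_DIR = Path("/etc/myapp")          # placeholder path of the managed service config
REPO_DIR = Path("/var/lib/config-repo")  # placeholder local clone of the Git repository

def snapshot_config() -> None:
    target = REPO_DIR / SOURCE_DIR.name
    if target.exists():
        shutil.rmtree(target)            # replace the previous snapshot
    shutil.copytree(SOURCE_DIR, target)
    # Stage and commit whatever changed; Git keeps the full history of edits
    subprocess.run(["git", "-C", str(REPO_DIR), "add", "--all"], check=True)
    message = f"Config snapshot {datetime.utcnow():%Y-%m-%d %H:%M} UTC"
    result = subprocess.run(["git", "-C", str(REPO_DIR), "commit", "-m", message])
    if result.returncode != 0:
        print("Nothing changed since the last snapshot.")

if __name__ == "__main__":
    snapshot_config()
```

In a mature setup the flow goes the other way, with the repository as the source of truth and configuration pushed to servers, but even this simple snapshot gives you history, diffs and accountability for every change.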
Once your business uses a greatly simplified infrastructure to run modernized apps built with up-to-date tools, you are ready to begin the next phase. Many global enterprises are currently at this stage, yet they need to spread the success achieved during these pilot projects across their whole organizations.
Stage 4 — Expanding the DevOps success through the company
This is an essential step if you want the small pockets of DevOps success achieved in several pilot projects in a Center of Excellence to become an unstoppable transformative wave, reorganizing and revitalizing your whole organization. The key to success at this stage is for the managers to step aside. The point is, there are too many managers in most companies, far more than are actually needed to operate under the Lean model. These managers contribute little (or nothing) to the successful operation of the company, but they retain the function of approving (or rejecting) the requests coming from their subordinates.
If such managers are well-disposed, they will support the DevOps initiatives and allow the DevOps teams to do their jobs without additional layers of approval. What these managers should do once they are stripped of their approval functions is for the C-suite to decide. If, however, such managers decide to retain their approval rights, the whole point of the DevOps transformation will be lost and the process will struggle and stall at every step. To overcome this challenge, two more practices must be introduced in the organization:
- DevOps specialists can do their work without requesting approval from any manager outside their team (this must later apply to all members of all departments)
- New apps and services are built reusing safe and proven deployment patterns, with the gradual introduction of new tools and tech built according to stages 2 and 3.
Note that we are not suggesting firing all the middle-level managers. Just make sure they don’t interfere with experimentation in their departments. As we said before, innovation is a risk, and there will be no innovation without free experimentation. When an experimental environment can be provisioned with three commands and configured in under 5 minutes, the cost of an error is negligible. The benefits of success, however, are innumerable.
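As a rough illustration of what "provisioned with three commands" can look like, here is a minimal sketch that spins up an isolated experimental environment using Terraform workspaces. It assumes an existing Terraform configuration with an env_name variable in the current directory; the variable and workspace names are hypothetical.

```python
#!/usr/bin/env python3
"""Spin up a disposable experiment environment with three Terraform commands (sketch)."""
import subprocess
import sys

def run(*cmd: str) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def provision_experiment(name: str) -> None:
    # Assumes the current directory holds a Terraform configuration with an `env_name` variable
    run("terraform", "init", "-input=false")
    run("terraform", "workspace", "new", name)
    run("terraform", "apply", "-auto-approve", f"-var=env_name={name}")

if __name__ == "__main__":
    provision_experiment(sys.argv[1] if len(sys.argv) > 1 else "experiment-1")
```

Tearing the experiment down is equally cheap (terraform destroy in the same workspace), which is exactly what makes failed experiments affordable.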
Stage 5 — Automation of infrastructure provisioning and configuration
Many consider this stage to be the actual beginning of the DevOps journey, but that is not true. This stage becomes possible only after complete success has been reached in all previous stages: the existing infrastructure has been audited and consolidated, the technology stack has been normalized, superfluous processes and tools have been replaced with standardized practices, and the DevOps culture has spread across the organization.
Once your DevOps team is used to automating infrastructure provisioning and configuration for their immediate needs, they can spread the same practices throughout all the other departments and branches of your company. Undoubtedly, any department can benefit from rapid delivery of the required software and server environments, be it R&D, finance, sales, marketing or HR. When all the routine tasks are automated, your whole company becomes much more productive.
This stage is characterized by the following practices:
- All system provisioning and configuration tasks are automated (a minimal provisioning sketch follows this list)
- All app and system configuration files are stored in version control systems and are easily accessible to the members of the DevOps team
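As one possible shape of such automation, here is a minimal sketch that provisions a tagged virtual machine through the AWS API with boto3. The region, AMI ID, instance type and tags are hypothetical placeholders, and the same idea applies to any other cloud or on-prem API; the important part is that the task is a script, not a ticket.

```python
"""Provision a virtual machine through the AWS API (minimal sketch using boto3)."""
import boto3

def provision_instance(name: str, owner: str) -> str:
    """Launch a small EC2 instance tagged with its purpose and owner; return its ID."""
    ec2 = boto3.client("ec2", region_name="eu-central-1")   # region is a placeholder
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder AMI ID; use your approved base image
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [
                {"Key": "Name", "Value": name},
                {"Key": "Owner", "Value": owner},
            ],
        }],
    )
    return response["Instances"][0]["InstanceId"]

if __name__ == "__main__":
    print(provision_instance("analytics-sandbox", "data-team"))
```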
What should you automate first? Automate the processes you perform most frequently and across most parts of your infrastructure. This way you can save up to 80% of the time spent on repetitive and mundane tasks, freeing up your team for the things that really matter, like improving your IT infrastructure or finding new ways to deliver value to your staff and customers. This stage benefits greatly from the introduction of the following practices:
- Introduce self-service capabilities for the team
- Automate the security configurations through version control
While these are mostly self-explanatory, it is important to check your applications against lists like the OWASP Top 10 to minimize the possible attack surface. This should be done periodically, to ensure the ongoing development and adjustment of your IT infrastructure does not open new potential security holes. Enabling self-service capabilities through the use of scripted scenarios that can be launched by any member of the team is the final step on the way to establishing a truly digital company.
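A tiny sketch of what a scripted, repeatable security check can look like is shown below: it verifies that a web endpoint returns a few common security headers. This is nowhere near a full OWASP Top 10 assessment; the endpoint URL is a placeholder, and the point is only that such hygiene checks can be codified and run on a schedule (cron, a CI job, etc.).

```python
"""Periodic security hygiene check (sketch): verify common HTTP security headers."""
from urllib.request import urlopen

# Headers most hardening guides expect on public endpoints
EXPECTED_HEADERS = [
    "Strict-Transport-Security",
    "X-Content-Type-Options",
    "Content-Security-Policy",
]

def check_headers(url: str) -> list:
    """Return the list of expected security headers missing from the response."""
    with urlopen(url, timeout=10) as response:
        present = {name.lower() for name, _ in response.getheaders()}
    return [header for header in EXPECTED_HEADERS if header.lower() not in present]

if __name__ == "__main__":
    # Placeholder URL; point this at your own staging endpoint
    missing = check_headers("https://staging.example.com")
    if missing:
        print("Missing security headers:", ", ".join(missing))
    else:
        print("All expected security headers are present.")
```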
Stage 6 – Enabling self-service capabilities for your team
This is the stage we previously described in our article on how enterprises can move to DevOps practices and workflows. By the time this stage begins, your IT department has assessed, normalized and automated the majority of the existing processes, workflows, and workloads in your software delivery pipeline. This is the time to let the rest of the staff benefit from this approach, instead of treating the IT department as a cost center that exists to execute orders from other departments.
Your software developers, ITSM specialists, security and compliance team, finance and marketing, recruiting and HR, R&D department: anyone in need of a digital environment must be able to provision it on their own, following simple guidelines and executing automated setup and configuration scripts stored in the version control system.
It is important to note that creating such a comprehensive self-service catalog at this stage is impossible if the work on creating self-service scenarios was not done during the previous stages. First, the DevOps teams automate their own tasks, then they help automate the most crucial business processes related to software delivery in production, and then they help automate the mundane tasks across the whole business operations pipeline and across all the departments.
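What such a self-service entry point might look like in its simplest form is sketched below. The catalog contents and script names are hypothetical; the idea is simply that any employee can list and launch pre-approved, version-controlled scenarios without opening a ticket.

```python
#!/usr/bin/env python3
"""Minimal self-service entry point (sketch): list and run pre-approved scenarios."""
import argparse
import subprocess

# Hypothetical catalog: each entry points to a reviewed script kept in the same repository
CATALOG = {
    "dev-environment":   "scenarios/provision_dev_environment.sh",
    "analytics-sandbox": "scenarios/provision_analytics_sandbox.sh",
    "demo-stand":        "scenarios/provision_demo_stand.sh",
}

def main() -> None:
    parser = argparse.ArgumentParser(description="Company self-service catalog")
    parser.add_argument("action", choices=["list", "run"])
    parser.add_argument("scenario", nargs="?", help="scenario name for the 'run' action")
    args = parser.parse_args()

    if args.action == "list":
        for name in sorted(CATALOG):
            print(name)
    elif args.scenario in CATALOG:
        # Every scenario is a reviewed script stored in version control
        subprocess.run(["bash", CATALOG[args.scenario]], check=True)
    else:
        parser.error(f"unknown scenario: {args.scenario!r}; use 'list' to see what is available")

if __name__ == "__main__":
    main()
```

In practice this entry point is usually wrapped in a web portal or chatbot, but the underlying mechanics stay the same: versioned scenarios, executed on demand, with no manual hand-offs.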
This stage also includes redesigning the core product architecture to benefit from the updated tools and workflows. This means splitting the monolithic product into microservices, introducing Kubernetes clusters to run them, and making use of Docker containers to ensure app portability and cross-platform ubiquity of deployment and management in production.
This is the way top-performing teams organize their DevOps implementation — and we recommend you follow the same route to achieve the best results.
Hiring external help to shorten the DevOps adoption time
The process described above can be performed by any enterprise with a strong internal IT department. The only downside is the length of the process, as old habits die hard. Therefore, many companies opt for external IT consulting services from Managed Services Providers, who help establish Centers of Excellence and implement the DevOps culture in the company much faster. IT Svit is one such company, and we have ample experience delivering the expected results and providing full-scale digital transformation to companies of all sizes.
Case 1 — Allianz Insurance UK DevOps adoption
Allianz has well over a century of history and a huge, widely distributed architecture. Its TCO was huge, and the company decided to reduce it, as well as to optimize the performance of its IT infrastructure in general. They hired an external DevOps contractor to perform the digital transformation within tight time frames. The contractor helped Allianz audit their existing infrastructure and remove all redundancies and bottlenecks, established CI/CD pipelines and proposed an outline for ongoing infrastructure improvement.
As a result, the Allianz IT team was able to significantly shorten their development cycles and reduce the operational overhead thanks to automated setup and configuration tools. The company moved from the Waterfall to the Agile software delivery model, and their SDLC became much more reliable, less error-prone and much more predictable.
Case 2 — IT infrastructure update for a Latin America insurance provider
Another insurance provider, from Latin America, was having frequent issues with their legacy IT infrastructure, which began to hinder their normal operations and the continuous development of their products. They also decided to undergo a digital transformation as part of moving their products and services to the cloud.
The MSP sent their specialists, who established a Center of Excellence on premises and taught a small team of IT engineers to work according to DevOps best practices. The existing apps were then split into microservices, so that instead of a monolithic product there is a set of independent modules interacting through a clean API. These apps run in Docker containers in the MS Azure cloud, allowing the team to benefit from the cloud platform capabilities and rebuild their software delivery pipelines around Infrastructure as Code.
After the project was finished, the insurance provider found itself running a flexible, scalable and resilient Azure cloud infrastructure. Most of the functionality of the legacy infrastructure was rebuilt to become cloud-native, and software updates are now performed as rolling updates. This ensures there is no product downtime and the customers invariably have a positive end-user experience. The TCO was significantly reduced as a result, as was the time-to-market for new products and features.
Case 3 — AWS cloud transition for a healthcare concern
A global healthcare conglomerate from the US, with more than 80,000 employees and more than $30Bn in annual revenue, was using a legacy infrastructure spanning multiple dedicated data centers in multiple countries across the globe to deliver its 2 SaaS platforms to millions of daily customers. Once the company decided to implement a Machine Learning solution to improve the efficiency of its Big Data analytics, it realized the existing IT infrastructure was inadequate for the task.
The corporation opted for DevOps-as-a-Service from an external MSP. After an in-depth analysis of the existing IT infrastructure and workloads, the contractor came up with a solution to migrate to AWS and use the cloud platform capabilities to the fullest. This would also ensure the ability to build a Big Data analytics solution atop the scalable cloud infrastructure layers.
As a result, the healthcare provider was able to cut their OPEX by at least 45%, while ensuring enterprise-grade security and compliance of all components, as well as the ability to add multiple AWS modules and offerings from the AWS Marketplace to further facilitate product development and improvement based on Big Data analytics results.
Final thoughts on the results of DevOps implementation in 2018
We hope this analysis was helpful in describing the main trends of DevOps evolution through 2018, the challenges enterprises currently face, and the possible ways to overcome these challenges. It also serves as a detailed guideline for a transition to DevOps carried out by your internal staff.
Should you decide to hire an external consultancy to help your IT department perform the task, IT Svit is ready to help!