Is the idea of a unified continuous integration and continuous delivery pipeline a dream?
When I joined WorkSafeBC’s team responsible for cloud operations and engineering process optimization, I shared my dream of a single pipeline through which every product could be continuously integrated and delivered.
According to Lukas Klose, flow (in software engineering jargon) is “the state in which a software system creates value at a steady and predictable rate.” I think that’s one of the biggest challenges and opportunities, especially in the area of complex emerging solutions. I strive for a continuous delivery model that builds the right things and satisfies our users with a consistent, efficient, high-quality solution. Finding ways to break our systems down into smaller pieces that are valuable in their own right lets the team deliver value incrementally. This requires a change of mindset in both business and engineering departments.
Continuous integration and continuous delivery (CI/CD) pipeline
A CI/CD pipeline is a DevOps practice for delivering code changes more frequently, consistently, and reliably. It helps agile development teams improve quality and deliver faster by improving key performance indicators (KPIs): increasing deployment frequency and reducing change lead time, change failure rate, and mean time to recovery. The only prerequisites are a solid development process, a quality mindset, a commitment to requirements from inception to deprecation, and a comprehensive pipeline (as shown below).
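To make those KPIs concrete, here is a minimal sketch of how the four metrics could be computed from deployment records. The records, dates, and field layout are hypothetical, invented for illustration; real numbers would come from your pipeline’s release history.

```python
from datetime import datetime, timedelta

# Hypothetical deployment records: (commit time, deploy time, caused a failure?)
deployments = [
    (datetime(2019, 7, 1, 9), datetime(2019, 7, 1, 15), False),
    (datetime(2019, 7, 2, 10), datetime(2019, 7, 3, 11), True),
    (datetime(2019, 7, 4, 8), datetime(2019, 7, 4, 12), False),
    (datetime(2019, 7, 5, 9), datetime(2019, 7, 5, 10), False),
]

# Change lead time: average delay from commit to deployment.
lead_times = [deploy - commit for commit, deploy, _ in deployments]
avg_lead_time = sum(lead_times, timedelta()) / len(lead_times)

# Change failure rate: share of deployments that caused a failure.
failure_rate = sum(1 for _, _, failed in deployments if failed) / len(deployments)

# Deployment frequency: deployments per day over the observed window.
window_days = (deployments[-1][1] - deployments[0][1]).days or 1
frequency = len(deployments) / window_days

print(f"average lead time: {avg_lead_time}")
print(f"failure rate:      {failure_rate:.0%}")
print(f"deploys per day:   {frequency:.1f}")
```

A unified pipeline makes these numbers comparable across products, which is what turns them into KPIs rather than isolated statistics.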
It simplifies engineering processes and products to stabilize the infrastructure environment, optimize workflows, and create consistent, repeatable, automated tasks. As illustrated by Dave Snowden’s Cynefin sensemaking model, this allows us to turn complex, unsolvable problems into complicated, solvable ones, reducing maintenance costs and improving quality and reliability.
Part of streamlining the process is minimizing the three types of waste: Muri (overburden), Mura (variation), and Muda (waste).
- Muri (overburden): avoid over-engineering, features with no business value, and excessive documentation.
- Mura (variation): improve approval and validation processes (e.g., security sign-offs); promote shift-left policies that move unit testing, vulnerability scanning, and code quality checks earlier in the pipeline; and improve risk assessment.
- Muda (waste): avoid waste such as technical debt, defects, and detailed up-front documentation.
It may seem that 80% of the focus falls on tools: an engineering system that lets teams collaborate to plan, develop, test, and monitor their solutions. However, a successful transformation and engineering system is roughly 5% product, 15% process, and 80% people.
There are many products to choose from. For example, Azure DevOps provides rich support for continuous integration (CI), continuous delivery (CD), and extensibility, and integrates with open source and commercial off-the-shelf (COTS) software-as-a-service (SaaS) solutions such as Stryker, SonarQube, WhiteSource, Jenkins, and Octopus. It’s always tempting for engineers to focus on products, but remember that they’re only 5% of the journey.
The biggest challenge is breaking decades of rules, regulations, and processes that have hardened into comfort zones: “We’ve always done it this way; why do we need to change?”
Friction between development and operations leads to fragmented, duplicated CI/CD pipelines. Developers want access to everything so they can keep iterating, keep users engaged, and keep releasing quickly. Operations wants to lock everything down to protect the business, its users, and quality. These contradictions inadvertently make it difficult to automate the process, which in turn delays release cycles beyond expectations.
Let’s explore pipelining using a snippet from a recent whiteboard discussion.
Supporting the pipelines can be difficult and costly; the problem is compounded by inconsistent versioning and traceability, so continually streamlining the development process and pipelines is a challenge.
I advocate a few principles to make every product use a common pipeline:
- Automate everything that can be automated
- Build once
- Maintain continuous integration and continuous delivery
- Keep streamlining and improving
- Maintain one build definition
- Maintain one release pipeline definition
- Scan for vulnerabilities early and often, and fail fast
- Test early and often, and fail fast
- Keep releases traceable and monitored
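The “fail fast” principles above can be expressed as a quality gate: a step that ends the run the moment a threshold is breached. Here is a minimal sketch; the thresholds, function name, and sample numbers are hypothetical, and a real gate would read its inputs from the test and scan steps of the pipeline.

```python
# Hypothetical gate thresholds; real values come from team policy.
MIN_COVERAGE = 80.0       # minimum percent line coverage
MAX_CRITICAL_VULNS = 0    # critical scanner findings tolerated

def quality_gate(coverage, critical_vulns):
    """Return a list of gate violations; an empty list means the build may proceed."""
    violations = []
    if coverage < MIN_COVERAGE:
        violations.append(f"coverage {coverage:.1f}% is below {MIN_COVERAGE:.0f}%")
    if critical_vulns > MAX_CRITICAL_VULNS:
        violations.append(f"{critical_vulns} critical vulnerabilities found")
    return violations

# In a real pipeline these numbers come from the test and scan steps,
# and a non-empty result would end the run immediately (fail fast).
problems = quality_gate(coverage=72.5, critical_vulns=1)
for problem in problems:
    print("GATE FAILED:", problem)
```

Running the gate during the build, rather than at release time, is what makes the failure cheap.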
But if I had to pick just one, the most important rule would be to keep it simple. If you can’t explain the reason (what and why) and the process (how) of your pipelines, you probably don’t understand your software process. Most of us don’t want the best, ultramodern, revolutionary pipeline; we just need one that is functional, valuable, and an enabler for engineering. Tackle the 80% first: the culture, the people, and their mindset. Invite your CI/CD knights in shining armor, slap your TLA (two/three-lettered acronym) abbreviations on their shields, and join the force of practical and empirical engineering.
Unified pipeline
Let’s walk through one of our whiteboard sessions.
Each application uses one build definition to define its CI/CD pipeline, which triggers pull request pre-merge validation and continuous integration builds. Generate a release build with debug information and upload it to the symbol server. This enables developers to debug locally and against remote production environments without worrying about which builds and symbols to load; the symbol server performs that magic for us.
Performing as much validation as possible during the build (shifting left) allows teams working on new features to fail as fast as possible, continuously improves overall product quality, and provides valuable evidence to code reviewers on pull requests. Do you prefer a pull request with a mountain of commits? Or one with a handful of commits, backed by evidence of bug checks, test coverage, code quality checks, and surviving Stryker mutants? Personally, I vote for the latter.
Do not use build transformations to generate multiple environment-specific builds. Produce one build and apply release-time transformation, tokenization, and XML/JSON value substitution instead. In other words, shift environment-specific configuration to the right, into the release pipeline.
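A minimal sketch of release-time tokenization may help here. The template, token names, and values below are invented for illustration; the point is that the single build artifact carries `__NAME__` placeholders, and each release stage substitutes its own values without rebuilding.

```python
import json

# Hypothetical tokenized template shipped inside the single build artifact.
template = '{"apiUrl": "__API_URL__", "logLevel": "__LOG_LEVEL__"}'

def substitute(text, tokens):
    """Replace __NAME__ placeholders with stage-specific values at release time."""
    for name, value in tokens.items():
        text = text.replace(f"__{name}__", value)
    return text

# Each release stage supplies its own token values; the build never changes.
prod_config = json.loads(substitute(template, {
    "API_URL": "https://api.example.com",
    "LOG_LEVEL": "warning",
}))
print(prod_config)
```

Because every environment receives bits from the same build, what you tested in development is byte-for-byte what you ship to production; only the configuration differs.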
Securely store release configuration data and make it available to both development and operations teams based on the trust level and sensitivity of the data. Use open source key-management tools, Azure Key Vault, AWS Key Management Service, or other products; remember, you have many handy tools in your toolbox.
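Whichever vault you pick, application code is simpler if it only ever sees an injected value rather than talking to the vault directly. The sketch below assumes the common pattern of the pipeline exposing vault-backed secrets to a stage as environment variables; the secret name and value are hypothetical.

```python
import os

def get_secret(name):
    """Fetch a secret that the release pipeline injected into this stage.

    Many pipelines expose vault-backed secrets to tasks as environment
    variables; the vault itself (Key Vault, KMS, an open source tool)
    stays hidden behind that contract.
    """
    value = os.environ.get(name)
    if value is None:
        raise KeyError(f"secret {name!r} was not provided to this stage")
    return value

# Simulated injection, for demonstration only; a real pipeline sets this.
os.environ["DB_PASSWORD"] = "example-only"
print(len(get_secret("DB_PASSWORD")))
```

Failing loudly on a missing secret keeps misconfigured stages from limping along with defaults, in line with the fail-fast principle.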
Use user groups instead of individual users, moving approver management from multiple stages across multiple pipelines to simple group membership.
Instead of duplicating pipelines to give each team access to its area of interest, create one pipeline and grant teams access to their specific delivery stages.
Last, but not least, embrace pull requests to improve insight into and transparency of the codebase, improve overall quality and collaboration, and release pre-validated builds into selected environments, such as the development environment.
This is a more formal view of the whole whiteboard.
So, what are your thoughts on and experiences with CI/CD pipelines? Is my dream of managing them through one unified pipeline a pipe dream?
Via: opensource.com/article/19/…
By Willy-Peter Schaub, lujun9972
This article was translated by LCTT and first published at Linux China.