DevOps and the Theory of Constraints

Tasktop is a vendor that specializes in this integrated value stream, offering a solution for ‘Value stream management for software development’.

As CEO Mik Kersten describes in his blog:

“in the world of Agile and DevOps, there is no one platform. Instead, we are witnessing a large-scale disaggregation of the tool chain towards the best-of-breed solutions. For large-scale Agile and DevOps transformations to succeed, we must shift our thinking from end-to-end platform to tool chain modularity.”

DevOps Integration Hub

As they suggest in their blog, one of the key bottlenecks constricting the throughput of DevOps teams is the lack of integration among the different tools used across the software development lifecycle.

They have therefore proposed a general solution category, a ‘DevOps Integration Hub’, as a particular fit for this need, and offer a DevOps Integration suite that can be used to build linked permutations such as:

  • A modern webhook-, REST- and JSON-based integration layer for the Tasktop integration platform, allowing any DevOps tool to be connected into the software lifecycle, such as linking Selenium-based test execution to Agile planning tools.
  • Jenkins integration for flowing build information to user stories or requirements, as well as GitHub and Gerrit integration for automating change-set traceability.
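As a rough illustration of the kind of integration layer described above, a handler might receive a build webhook and map it onto a user-story update. This is a minimal sketch: the payload shape, field names and story record here are invented assumptions, not Tasktop's or Jenkins' actual API.

```python
import json

def handle_build_webhook(payload: dict) -> dict:
    """Map a (hypothetical) Jenkins build webhook payload onto a
    user-story update record for an Agile planning tool."""
    build = payload["build"]
    return {
        "story_id": build["parameters"]["STORY_ID"],
        "comment": f"Build #{build['number']} finished: {build['status']}",
        "link": build["full_url"],
        "passed": build["status"] == "SUCCESS",
    }

# Example webhook POST body (all field names are illustrative assumptions)
event = json.loads("""{
  "build": {
    "number": 42,
    "status": "SUCCESS",
    "full_url": "https://jenkins.example.com/job/app/42/",
    "parameters": {"STORY_ID": "US-1234"}
  }
}""")
print(handle_build_webhook(event))
```

The point of such a layer is that the mapping logic lives in one hub, so any tool that can emit JSON over a webhook can participate in the lifecycle.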

They describe how the 1,000-strong Bosch team has achieved a unified toolchain that funnels 2,500 feature requests a month from electric-car innovation programs into a single dashboard where developers can prioritize and action them.

Furthermore, by tying Jenkins together with the automation that burns the ECUs’ flash memory, they have enabled the team to run a true end-to-end Continuous Deployment lifecycle.

It’s a powerful model when you consider how directly it enables the sharing of best practices: high-velocity processes developed by a real team, encoded into the application patterns used to achieve them, become a building block that other teams can adopt.

Doing so enables you to view the organization through a business-performance-centric lens and apply optimization best practices such as ‘Value Stream Mapping’, identifying the right tools and techniques to apply at the right process points and thus improving developer productivity within a context of Business Value generation. In the DevOps article Lean Value Stream Mapping for DevOps, the IBM author describes how they use this kind of process-optimization goal as a way of better organizing software development and innovation.
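The arithmetic behind Value Stream Mapping is simple to sketch: sum the value-adding time and the waiting time across each stage of the stream, and the ratio gives the process cycle efficiency. The stage names and durations below are invented for illustration, not taken from the article.

```python
# Value Stream Mapping sketch: each stage has value-add time and wait
# time, both in hours. All figures here are illustrative assumptions.
stages = [
    ("analysis",  4, 16),   # (name, value-add hours, wait hours)
    ("develop",  24,  8),
    ("test",      8, 40),
    ("deploy",    1, 24),
]

value_add = sum(v for _, v, _ in stages)
lead_time = sum(v + w for _, v, w in stages)
efficiency = value_add / lead_time

print(f"Lead time: {lead_time}h, value-add: {value_add}h, "
      f"process cycle efficiency: {efficiency:.0%}")
```

Mapping the stream this way makes the bottleneck visible: in this made-up example the test stage's 40 hours of waiting dominates the lead time, which is exactly the kind of process point where a tool or automation would be applied.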

Maturity models

A number of maturity models are available to map the details of this evolution. Thoughtworks offers this Forrester paper, which describes it within a context of executive strategy and perceptions of IT and sets the scene for where many organizations currently are. HP, IBM, InfoQ and Arun Gupta of AWS each offer a DevOps and CD maturity model.

Continuous Delivery

CloudBees Core is a fully-featured, cloud native CD solution that can be hosted on-premise or in the public cloud. It provides a shared, centrally managed, self-service experience for all your development teams.

Wikipedia describes continuous delivery (CD or CDE) as a software engineering approach in which teams produce software in short cycles, ensuring that the software can be reliably released at any time and, when releasing the software, doing so manually. It aims at building, testing, and releasing software with greater speed and frequency. The approach helps reduce the cost, time, and risk of delivering changes by allowing for more incremental updates to applications in production. A straightforward and repeatable deployment process is important for continuous delivery.

SearchITOperations describes continuous deployment as a strategy for software releases wherein any code commit that passes the automated testing phase is automatically released into the production environment, making changes that are visible to the software’s users.
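The distinction between the two definitions comes down to who pulls the release lever: in continuous deployment the automated test gate alone decides whether a commit reaches production. A minimal sketch of that control flow, with the test and deploy functions as placeholders:

```python
def run_automated_tests(commit: str) -> bool:
    """Placeholder for the pipeline's automated testing phase."""
    # Stand-in rule for the example: commits labelled "broken" fail.
    return not commit.startswith("broken")

def deploy_to_production(commit: str) -> str:
    """Placeholder for the release step."""
    return f"deployed {commit}"

def continuous_deployment(commit: str) -> str:
    # Continuous deployment: every commit that passes the automated
    # tests is released with no manual approval step in between.
    if run_automated_tests(commit):
        return deploy_to_production(commit)
    return f"rejected {commit}"

print(continuous_deployment("abc123"))
print(continuous_deployment("broken42"))
```

Continuous delivery would insert a manual approval between the test gate and `deploy_to_production`; everything else in the pipeline is the same.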

Technology Metrics and Toolchains – Continuous Containers

As the base unit for improvement design, this analysis can also identify and measure technology metrics, and from these inform which permutation of DevOps tools to use.

For example, Lori MacVittie of F5 provides a great blog post on developing KPIs around system-level performance relevant to DevOps goals, formulating a set of metrics for:

  • MTTR – Mean Time To Recover
  • LTTC – Lead Time To Change
  • MTTL – Mean Time to Launch
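KPIs like these fall out of timestamped pipeline and incident events. A sketch of how the first two might be computed (the event records, dates and field layout below are invented for illustration):

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident log: (failure detected, service recovered)
incidents = [
    (datetime(2024, 1, 3, 10, 0), datetime(2024, 1, 3, 10, 45)),
    (datetime(2024, 1, 9, 14, 0), datetime(2024, 1, 9, 15, 30)),
]

# Hypothetical change log: (commit time, running-in-production time)
changes = [
    (datetime(2024, 1, 2, 9, 0), datetime(2024, 1, 2, 13, 0)),
    (datetime(2024, 1, 5, 11, 0), datetime(2024, 1, 5, 12, 0)),
]

def mean_hours(pairs):
    """Average interval length, in hours, over (start, end) pairs."""
    return mean((end - start).total_seconds() / 3600 for start, end in pairs)

mttr = mean_hours(incidents)   # Mean Time To Recover
lttc = mean_hours(changes)     # Lead Time To Change
print(f"MTTR: {mttr:.2f}h, LTTC: {lttc:.2f}h")
```

MTTL would be computed the same way over (feature approved, feature launched) pairs; the only thing that changes per KPI is which two events bound the interval.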

Lori identifies the core benefit of containers versus virtual machines in terms of spin-up times, and how this is especially beneficial in high-traffic, high-demand scenarios such as software-defined networking. From this she identifies a number of variables that define a performance algorithm.
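The spin-up argument can be made concrete with back-of-the-envelope arithmetic. The figures below are illustrative assumptions, not measured values or Lori's numbers, and the model naively assumes sequential provisioning:

```python
# Illustrative spin-up arithmetic: how many extra instances can be
# brought online within a fixed traffic-spike response window?
# All figures are assumptions for the sake of the example.
SPIKE_WINDOW_S = 120      # time budget to absorb a traffic spike
VM_SPINUP_S = 60          # assumed VM boot time
CONTAINER_SPINUP_S = 1    # assumed container start time

vm_instances = SPIKE_WINDOW_S // VM_SPINUP_S
container_instances = SPIKE_WINDOW_S // CONTAINER_SPINUP_S
print(f"In {SPIKE_WINDOW_S}s: {vm_instances} VMs "
      f"vs {container_instances} containers")
```

Even with parallel provisioning the shape of the result holds: spin-up latency bounds how quickly capacity can respond to demand, which is why it appears as a variable in this kind of performance algorithm.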

This type of analysis identifies the constraints in technical operations that can slow overall throughput, and therefore which types of tools and automations to apply.

Conclusion: Maturity-model Driven Cloud Native Agility

The Cloud Native mix of microservices, containers and Continuous Deployment offers a combination of architecture and tools that can bring these types of improvements to any enterprise software team.

David Linthicum describes how containers such as Docker enable continuous operations. Rancher also explores this relationship in their white paper How to Supercharge your CI Pipeline.

However, in 4 Myths about Containers and Continuous Delivery, Todd DeCapua explains that this doesn’t happen magically: the use of these technologies does not automatically translate into improved development throughput. Instead, he too recommends the use of maturity models, citing the InfoQ Continuous Delivery maturity model, to identify your current capabilities and from there plan a roadmap in which the tools are deployed to achieve specific process improvements.

About the Author: CBPN Editor
