Categories
IT Service Management

GitOps: A Path to More Self-service IT – ACM Queue

Some technological trends are consistent over time:

  • Dematerialization: hardware that accomplishes the same functionality becomes smaller and lighter.

  • Virtualization: hardware stacks are collapsed into software ones, which enables the configuration and implementation of such stacks in code.

  • Infrastructure as Code (IaC): the corollary of the previous item; once stacks live in software, their configuration can be written, versioned, and reviewed like any other code (a minimal sketch in Python follows this list).

  • Automation: frequently performed tasks are automated to reduce the need for human intervention.
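
To make the Infrastructure as Code bullet concrete, here is a minimal sketch in plain Python (real tools such as Terraform, Chef, or Puppet use their own declarative languages; the resource names and fields below are made up for illustration). The idea is simply that the desired stack is declared as data kept under version control, and a program works out the changes needed to reach it:

    # Toy illustration of "infrastructure as code": a declared stack and a
    # function that computes the changes needed to make reality match it.
    # Resource names and fields are hypothetical, not from any real tool.

    DESIRED_STACK = {
        "web-vm": {"type": "vm", "cpus": 2, "memory_gb": 4},
        "app-db": {"type": "database", "engine": "postgres", "storage_gb": 50},
    }

    def plan(desired: dict, current: dict) -> list[str]:
        """Compare the declared stack with the current state and list the changes needed."""
        actions = []
        for name, spec in desired.items():
            if name not in current:
                actions.append(f"create {name}: {spec}")
            elif current[name] != spec:
                actions.append(f"update {name}: {current[name]} -> {spec}")
        for name in current:
            if name not in desired:
                actions.append(f"delete {name}")
        return actions

    if __name__ == "__main__":
        # Pretend the environment currently has only the web VM, with less memory.
        current_state = {"web-vm": {"type": "vm", "cpus": 2, "memory_gb": 2}}
        for action in plan(DESIRED_STACK, current_state):
            print(action)

Because the declaration is just text in a repository, the same change-review workflow used for application code (diffs, pull requests, history) applies to the infrastructure as well.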

In this article, Thomas A. Limoncelli, frequent blogger and author of several books on system administration, proposes a system in which users can find the appropriate “Service Request” template in a Git repository, modify the request for the new application, and submit the change for review as a pull request.

Upon review and approval, it can be submitted to the Continuous Integration (CI) pipeline for implementation.
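
To make the mechanism concrete, here is a minimal sketch, assuming a JSON request file and a handful of required fields (both are assumptions of mine; the article does not prescribe a schema), of the kind of check a CI pipeline could run on such a pull request before anyone acts on it:

    # Illustrative CI check for a service-request file submitted as a pull request.
    # The file format, field names, and allowed environments are hypothetical.
    import json
    import sys

    REQUIRED_FIELDS = {"application", "environment", "owner", "resources"}

    def validate_request(path: str) -> list[str]:
        """Return a list of problems found in a service-request file."""
        with open(path) as f:
            request = json.load(f)
        problems = [f"missing field: {name}" for name in sorted(REQUIRED_FIELDS - set(request))]
        if request.get("environment") not in {"dev", "staging", "prod"}:
            problems.append("environment must be dev, staging, or prod")
        return problems

    if __name__ == "__main__":
        # Typical pipeline use: fail the pull request if the file is malformed, so
        # only well-formed, reviewed requests ever reach the deployment stage.
        issues = validate_request(sys.argv[1])
        for issue in issues:
            print(issue)
        sys.exit(1 if issues else 0)

Only after this kind of automated check passes and a human approves the pull request would the pipeline go on to implement the request.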

This may work for some organizations, but it isn’t a good general solution for most organizations, for several reasons:

  • Not all hardware is virtualized in all organizations.

  • Not all virtualized hardware can be integrated with the CI pipeline.

  • Not all users have the technical sophistication to submit requests in this manner.

Nevertheless, the article proposes an interesting mechanism by which some IT organizations can advance the state of the art for some of their service requests, while using and building on their existing DevOps investments.

— Read on queue.acm.org/detail.cfm

Categories
Release and Deployment Management

DevOps Changes Everything

…and DevOps changes nothing.

I was holding off writing about DevOps because:

  1. I don’t work with customers who have implemented it successfully.
  2. I have nothing new to offer.

Both points are still true. There isn’t anything new to say about DevOps except that the hype machine is still in overdrive and its loop is wearing thin.

DevOps is an improvement to Release and Deployment Management. There’s no conceptual abstraction on top that changes the way we think about releases and deployments. It arose in response to new layers of technical abstraction that enabled new capabilities. These include server virtualization (VMware), the availability of automated deployment tools (Chef, Puppet), and the rise of containers (Docker) with their supporting programming interfaces and orchestration tools.
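
As a deliberately small illustration of what that collapse into code looks like, the sketch below drives a container build and run from a script. The image name and port are made up, and a real pipeline would use the tools named above (a Dockerfile plus Chef, Puppet, or an orchestrator) rather than ad-hoc scripting:

    # Minimal, hypothetical deployment step expressed as code a developer can
    # run, review, and version; not how any particular shop actually deploys.
    import subprocess

    IMAGE = "example-app:latest"  # made-up image tag

    def deploy() -> None:
        # Build the container image from the Dockerfile in the current directory.
        subprocess.run(["docker", "build", "-t", IMAGE, "."], check=True)
        # Run it detached, mapping the container's port 8080 to the host.
        subprocess.run(["docker", "run", "-d", "-p", "8080:8080", IMAGE], check=True)

    if __name__ == "__main__":
        deploy()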

Together they allow us to make an order-of-magnitude improvement in the performance of the Release and Deployment Management and Service Validation and Testing processes. This is a very good thing, because it drastically decreases the cost of software deployment and enables more rapid experimentation when addressing new customers or improving offerings for existing ones. Moreover, developers can do this with tools (programming languages) familiar to them. This is also a good thing.

However, organizations adopting DevOps practices must understand the requirements of various stakeholders and minimize barriers to communication across silos: see Can We Stop Talking About DevOps?. Moreover, these organizations have to be in control of how applications are released (Release and Deployment Management), and they still have to know what they are automating (Service Asset and Configuration Management).

People. Process. Technology. In that order. Nothing has changed.

DevOps will fail without them, except in limited instances, or for newly developed (or recently deployed) services that receive abundant management attention and are not encumbered by legacy configurations.

In the long run, the automated deployment of production environments has a bright future, as even more IT functions are virtualized (networking, storage). In a sense, DevOps will have won; it will be pervasive. We just won’t call it DevOps any longer. We will go back to calling it Service Management.