Categories
IT Service Management Modelling

All I Really Needed to Know About Titanic’s Deck Chairs I Learned From ITIL

Headlines have a tendency towards hyperbole to grab attention, and there is no shortage of them describing the dwindling role of the CIO. Here is a recent one: IT department ‘re-arranging deckchairs on the Titanic’ as execs bypass the CIO.

As reporting of technological change increases, the probability of comparing IT to re-arranging deck chairs on the Titanic approaches 100%. 1 – Tucker’s Law 2

It is true: CIOs are challenged by technological change. The widespread adoption of cloud-based services by business units bypassing the traditional IT function is well documented. Support and adoption of consumer devices, including mobile phones, tablets, and non-standard operating systems such as Mac OS and Android, are also challenging traditional IT functional units.

Specific to the cloud example, some further examination is helpful. We don’t need to reinvent a framework, because ITIL 2011 already provides one. Service Strategy section 3.3 (p.80) describes three types of service providers:

  • Type I — Internal service provider. Embedded in an individual business unit. Have in-depth knowledge of business function. Higher cost due to duplication.
  • Type II — Shared services unit. Sharing of resources across business units. More efficient utilization.
  • Type III — External service provider. Outsourced provider of services.

Furthermore, ITIL describes the movement from one type of service provider to another as follows.

Current challenges to the CIO role come from two directions:

  1. Change from Type II to Type I, or dis-aggregation of services to the business units.
  2. Change from Type II to Type III, or outsourcing of services (presumably) to cloud providers.

In fact the CIO may be seeing both at the same time, as traditional in-house applications are replaced with cloud services and the management of those services and the information supply chain are brought back to the business unit. The combination of those two trends together could be called a value net reconfiguration, or simply, re-arranging the deck chairs on the Titanic.

Is this a necessary and permanent shift? Maybe, but probably not. I personally believe that part of the impetus is simply to bypass organizational governance standards such as enterprise architecture and security policies. Business units can get away with this for a while, but as these services suffer failures and downside risks, aggregated IT functions will have to take back control.

This does not mean the end of cloud adoption. Far from it. It means that the CIO will orchestrate the cloud providers, in order to optimize performance and manage risk. The CIO is as necessary as ever albeit with a different set of requirements.

Peter Kretzman has successfully argued that the reported demise of the CIO has it backwards: IT consumerization, the cloud, and the alleged death of the CIO.

Kretzman has also argued the dangers of uncoordinated fragmentation: IT entropy in reverse: ITSM and integrated software.

 

1 In 1990 Mike Godwin published the observation that as an online discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches 1.0. As an experiment in memetics, it taught us a lot about psychological tendencies towards hyperbole. The observation is now termed Godwin’s Law of Nazi Analogies. http://en.wikipedia.org/wiki/Godwin%27s_law
2 Consulting firm Forrester has provided us enough ammunition for Tucker’s Law as a corollary.
Categories
IT Service Management ITIL Knowledge Management Tools Trends

HP’s $10 billion SKMS

In August 2011 HP announced the acquisition of enterprise search firm, Autonomy, for $10 billion.

It is possible HP was just crazy and former CEO, Leo Apotheker, was desperate to juice up HP’s stock price. With Knowledge Management.

Within ITSM the potential value is huge. Value can be seen in tailored services and improved usage, faster resolution of Incidents, improved availability, faster on-boarding of new employees, and reduction of turnover. (Ironically, improved access to knowledge can reduce loss through employee attrition).

In 2011 Client X asked me for some background on Knowledge Management. I did prepare some background information on ITIL’s Knowledge Management that was never acted on. It seemed like too much work for too little benefit.

ITIL’s description does seem daunting. The process is riddled with abstractions like the Data —> Information —> Knowledge —> Wisdom lifecycle. It elaborates on diverse sources of data such as issue and customer history, reporting, structured and unstructured databases, and IT processes and procedures. ITIL overwhelms one with integration points between the Service Desk system, the Known Error Database, the Configuration Management Database, and the Service Catalog. Finally, ITIL defines a whole new improvement lifecycle (Analysis, Strategy, Architecture, Share/Use, and Evaluate), a continuous improvement method distinct from the CSI 7-Step Method.

Is ITIL’s method realistic? Not really. It is unnecessarily complex. It focuses too much on architecture and integrating diverse data sources. It doesn’t focus enough on use-cases and quantifying value.

What are typical adoption barriers? Here are some:

  1. Data is stored in a variety of structured, semi-structured, and unstructured formats. Unlocking this data requires disparate methods and tools.
  2.  Much of the data sits inside individual heads. Recording this requires time and effort.
  3. Publishing this data requires yet another tool or multiple tools.
  4. Rapid growth of data and complexity outpaces our ability to stay on top of it.
  5. Thinking about this requires way too much management bandwidth.

In retrospect, my approach with Client X was completely wrong. If I could, I would go back and change that conversation. What should I have done?

  1. Establish the potential benefits.
  2. Identify the most promising use cases.
  3. Quantify the value.
  4. Identify the low hanging fruit.
  5. Choose the most promising set of solutions to address the low hanging fruit and long-term growth potential.

What we need is a big, red button that says “Smartenize”. Maybe HP knew Autonomy was on to something. There is a lot of value in extracting knowledge from information, meaning from data. The rest of the world hasn’t caught up yet, but it will soon.

Categories
IT Service Management ITIL

ITIL Exam 2011 Statistics

Overview

APM Group just released its final exam performance statistics for all of 2011. The ITSMinfo blog now presents its unadulterated analysis: free, with no spam marketing registration required.

Note: Click on any image for a larger version. All numbers are rounded to the hundreds place unless otherwise obvious.

ITIL Foundation

The total number of ITIL V3 Foundation and V3 Foundation Bridge exams taken was 250,400 in 2011, a growth rate of 8% over 2010.

A total of 220,200 ITIL V3 Foundation and V3 Bridge Foundation certificates were issued in 2011, compared to 185,800 in 2010, a growth rate of 19%. The average pass rate was 88%, compared to 85% in 2010.

I estimate 827,100 ITIL V2, ITIL V3, and ITIL V3 Bridge Foundation certificates were issued since 2008.
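As a sanity check, the growth and pass-rate figures above can be recomputed from the raw totals. Here is a quick sketch using the Foundation numbers quoted from the APM Group data (note that an aggregate certificates-per-exam rate can differ slightly from an average of per-exam pass rates):

```python
# Recompute the Foundation figures quoted above from the raw totals.
def growth(current, previous):
    """Year-over-year growth, as a percentage."""
    return (current - previous) / previous * 100

def pass_rate(certificates, exams):
    """Aggregate pass rate: certificates issued per exam attempted."""
    return certificates / exams * 100

certs_2011, certs_2010 = 220_200, 185_800   # Foundation certificates issued
exams_2011 = 250_400                        # Foundation exams taken in 2011

print(round(growth(certs_2011, certs_2010)))     # 19 (% growth in certificates)
print(round(pass_rate(certs_2011, exams_2011)))  # 88 (% pass rate in 2011)
```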

ITIL Advanced Certifications

A total of 27,500 Lifecycle exams were attempted in 2011, up 83% from 2010. A total of 21,300 Lifecycle certificates were issued in 2011, up 95% from 2010. The average pass rate was 76% across all exams.

A total of 17,000 Capability exams were attempted in 2011, up 55% from 2010. There were 13,700 Capability certificates issued in 2011, up 60% from 2010. The average pass rate was 79% across all exams.

Managing Across the Lifecycle (MALC): 4,000 exams were attempted in 2011, up 128% from 2010. About 2,600 ITIL V3 Expert certificates were issued in 2011 based on the MALC exam, up 149% from 2010. The average pass rate was 62%.

Of the ITIL V3 Expert certificates issued, 5,000 were achieved via the Managers Bridge, including 2,000 in June 2011, which was the last month the Managers Bridge was offered (thereafter only retakes were allowed).

The total number of ITIL V3 Experts minted was 7,600, up 88% from 2010. Approximately 16,200 Expert certificates have been issued since 2008.

Regional Variations

Foundation

Intermediate

Categories
IT Service Management Tools

BMC and Numara: What it Means

Disclaimer: I contracted at Numara Software as an Implementation Consultant in Professional Services from July 2007 to June 2010.

Chris Dancy said it best: this is about as close to J-Lo and Marc Anthony as we get in the IT industry. On January 30, 2012 BMC Software announced the acquisition of Numara Software.

My initial reaction was shock—both have been stable mainstays of the industry. Shock gave way to disappointment. Disappointment soon gave way to cautious optimism about the future of the combined company.

Stephen Mann and David Johnson of Forrester fame have written some initial reactions. Here are mine.

Track-It

The Track-It family of products is the core of the original Blue Ocean Software, which was acquired by Intuit and then spun off again as Numara Software.

The initial indicators are that BMC intends to allow Numara to operate as an independent unit and to continue to operate Track-It as a standalone product. Track-It forms the low-end of the product line but generates high margin maintenance fees.

Track-It is profitable on its own and does not undercut sales of FootPrints Service Core or BMC Remedy. BMC may choose to scale back feature development, but it cannot make significant reductions in commitment or support without jeopardizing the highly repeatable and stable revenue stream of maintenance and support.

FootPrints Service Core

I joined Numara Software as a contractor shortly after the acquisition of FootPrints from UniPress and watched them struggle to make the transition from high-volume transactions to high-touch solutions. They did make this transition successfully, though it has become apparent in the last couple of releases that the code base is brittle. Version 12, to be released in 2012, will be a major refactoring of the code base to a new programming language. I expect development to continue along its current path and schedule unless the refactoring seriously jeopardizes backward compatibility, in which case product management should revisit the product line.

FootPrints Service Core is more directly competitive with BMC Remedy, and I have been engaged at several customers where FootPrints replaced Remedy or beat Remedy in a competitive comparison. FootPrints provides easy configuration and rapid ROI, but is flexible enough to support several business processes. Although there are built-in workspace templates, they aren’t very useful, and customers usually start from scratch. As a result, what would be customizations in Remedy are web-based configurations in FootPrints.

Nevertheless, while there is competitive overlap, FootPrints usually sells at a lower price point in smaller environments. The question is whether BMC’s sales force is prepared for the lower-touch customer service and reduced professional services requirements of FootPrints customers. It depends on whether BMC sales staff consider this a threat of reduced revenues or an opportunity to retain valid sales leads where Remedy isn’t competitive. I suspect Remedy has been squeezed by competition (primarily Service-Now and Hornbill) and will welcome a competitive solution that can be sold on-premise or SaaS.

FootPrints Asset Core

FootPrints Asset Core (formerly Numara Asset Management Platform, or NAMP) has always been an enterprise product designed for stability and scalability. It is a product based on client agents that provide hardware and software inventory, software deployment, patching, and policy deployment for Windows, Mac, and Linux devices.

Asset Core competes more directly with BMC’s BladeLogic Client Automation, so BMC will need to pay more attention to how it positions these product lines. Asset Core is weak at automated discovery and agentless inventory, which makes it very complementary to Atrium Discovery & Dependency Mapping (ADDM) in the mid-market. I anticipate BMC will strip some functionality and complexity out of Asset Core and keep it focused on the mid-market, leaving BladeLogic in the enterprise.

Mobile Device Manager

Numara Cloud is a repackaging of FootPrints Service Core, FootPrints Asset Core, and a new product called Mobile Device Manager (MDM), which Numara acquired in 2011. The strategic positioning was brilliant and the growth potential is huge. BMC does not offer much in this area, and this addition should be welcome in their product line.

Conclusions

Numara has traditionally been weak in several areas.

  • Mobile computing: no solution here, nor even a hint of future product development. BMC could capitalize on this with mobile solutions that integrate across the spectrum of IT Service Management and asset management products.
  • Social networking and chat integration: While FootPrints provides web-based messaging capability, the functionality is slow and dismal and it provides no workflow or issue integration. FootPrints provides no integration or API for social networks.
  • Configuration Management and Service Catalog: the initial release of the CMDB functionality was promising but they have failed to improve on it. Reporting, data federation, and data reconciliation functions are very poor. Product management has mostly focused on integration with Asset Core.

 

I would look for changes and improvements in these areas. BMC would be wise to focus product management on these areas in order to capitalize on the relative strengths of both organizations.

For existing or prospective customers of both organizations, I don’t expect much to change in 2012. For many organizations I don’t expect much to change through 2015 beyond “normal” product feature evolution that would have occurred anyway.

The reactions of my current and former colleagues inside Numara have been very positive. If BMC is planning major force reductions (RIFs), it has been very quiet about it. I don’t expect many RIFs beyond back-office staff, where Numara has already been very efficient. I don’t expect many reductions in development, product management, or sales because the products are either high-margin and non-strategic (Track-It) or strategic and complementary (MDM).

I am rushing this response in order to get out some initial reactions. Overall I believe they provide complementary product coverage that, if utilized and coordinated, could provide a lot of future growth for customers and BMC. I will keep an eye on things and let you know what transpires, but please feel free to provide feedback and updates.

Categories
IT Service Management

One Step Backward, Two Steps Forward

I am proud to announce that I started a new position in November 2011. It is a one-year contract in Tokyo to help transform the local shared IT service provider into a global infrastructure utility provider. My position involves the discovery and implementation of processes based on good practices from ITIL. The project involves a number of challenges that are typical of global organizations, ranging from technical to political, and involving people, process and technology. I am excited to take on these opportunities.

My blogging and social networking have been reduced as a result of the schedule change and demand loads, but I appreciate everyone’s support and I am adapting my production around my new schedule. Please stay tuned for lots of good things to come.

Categories
IT Service Management Problem Management

The Problem of CSFs

If you are unable or unwilling to appoint a Problem Manager, you are not ready for Problem Management.

That’s what I said. Or at least I think that’s what I said.

The venerable and ubiquitous Chris Dancy quoted me in January 2011 on episode 1 of the re-formed Pink Elephant Practitioner Radio podcast. He quoted me as saying “you can’t do Problem Management without a Problem Manager”. I finally listened to it last Friday.

I want to apologize to Chris. First, I apologize that I didn’t listen to his podcast earlier. I am a couple months behind on my podcast queue. Second, I apologize that I didn’t thank him personally at #PINK11 in February for the mention. I love Chris in almost every conceivable way.

I don’t fully agree with the paraphrase. I think a company can successfully implement a Problem Management process without a Problem Manager. What I really wanted to say was this: If you are unable or unwilling to appoint a Problem Manager, you probably haven’t achieved all the critical success factors you need to successfully carry out Problem Management. Unfortunately, this sentence doesn’t tweet well, so I abbreviated.

ITIL v3 Service Operations lists the critical success factors for Service Operations processes:

  1. Management Support
  2. Business Support
  3. Champions
  4. Staffing and retention
  5. Service Management training
  6. Suitable tools
  7. Validity of testing
  8. Measurement and reporting

All of these are necessary to successfully implement Problem Management. Organizations that lack any of these factors won’t appoint a Problem Manager. My advice to organizations, then, is very simple: appoint a Problem Manager. If they cannot do this, they are not ready for Problem Management.

In fairness, a few organizations do meet all the above CSFs and choose to implement Problem Management without a centralized point of contact. In those organizations, it is the responsibility of line managers to perform Problem Management activities inside their own groups. Organizations with the right culture can get away with this. Most organizations cannot.

For that matter, most organizations cannot muster the courage or resources to appoint a Problem Manager.

Categories
Configuration Management

Rethinking the CMS

I first started this post in response to the IT Skeptic’s CMDB is crazy talk post about 2 weeks ago. My own position derives from several observations in the real world:

  1. Few organizations are willing to perform a full spectrum CMDB implementation, due to cost constraints. (In my opinion few organizations actually require this.)
  2. Observation #1 even includes those organizations that have purchased a commercial CMDB software package. They purchased the software to achieve a more specific objective.
  3. And yet most organizations perform some level of configuration management, even if that is as simple as tracking software licenses on a spreadsheet.
  4. Most organizations would benefit from doing a little more than they are currently doing. The “what” depends on the organization.

The ITSM community needs to stop thinking about the CMS / CMDB as a process (which has defined inputs and outputs) or a thing or a database. Instead we can think about it as a broad collection of activities that help maintain information and documentation on the configuration of assets and software inside an organization. This isn’t a black or white implementation where you do it all or you don’t do any–most organizations span the spectrum of gray.

The trouble with ITIL (as if there were only one) is that the concept of a CMS is so abstract that most people cannot understand it. This is by design–it saves the authors the trouble of actually doing anything. I still have trouble describing ITIL’s take on the CMS, and I have done practical implementations in a dozen organizations.

Let’s help practitioners by enumerating and describing the various CM activities that commonly take place in the real world. We will explain the benefits and costs associated with each.

For example:

  • Automated discovery of hardware assets
  • Automated discovery of installed software assets
  • Automated discovery of network assets
  • Automated linking of software and hardware assets
  • Automated linking of hardware and network assets
  • Automatic reporting on software compliance and unused licenses
  • Linking Incidents to CI’s
  • Linking Changes to CI’s
  • Linking Problems to CI’s
  • Linking CI’s to end-users
  • Linking CI’s to end-user organizations

This list is VERY incomplete, and there is no out-of-the-box solution for any of the above. There is wide variety in the expression of CI names, CI types, attributes, and statuses for the above items. Each can be automated to different degrees.
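To make the linking activities above concrete, CIs and their relationships can be modeled as a simple typed graph. Here is a minimal sketch; the CI identifiers, types, and relationship names are illustrative, not prescribed by ITIL or any tool:

```python
# A minimal CMDB-style model: CIs are typed nodes, relationships are typed edges.
cis = {
    "srv-01": {"type": "server"},
    "app-crm": {"type": "application"},
    "inc-1001": {"type": "incident"},
}

relationships = [
    ("app-crm", "runs_on", "srv-01"),    # linking software and hardware assets
    ("inc-1001", "affects", "app-crm"),  # linking Incidents to CIs
]

def related(ci_id, relation=None):
    """Return CIs linked from ci_id, optionally filtered by relationship type."""
    return [target for source, rel, target in relationships
            if source == ci_id and (relation is None or rel == relation)]

print(related("inc-1001"))            # ['app-crm']
print(related("app-crm", "runs_on"))  # ['srv-01']
```

Each checklist item above then becomes a question of which node types, edge types, and population methods (automated discovery versus manual entry) an organization actually maintains.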

By making a checklist we can help practitioners and organizations understand what they can do, what other organizations do, and what they should consider in the future. It would be a list of available options, rather than a document of the One True Way dictated high from above. We can expand on Skep’s checklist concept.

A checklist of practical activities could also feed into a maturity assessment.

We can call it the Management of the Configuration of the Configuration Management Database Database. Or we can call it WikiCMDB for short. Stay tuned here for more details.

I am thinking out loud, so everything in this post may be wrong. I welcome your feedback.

Categories
Configuration Management IT Service Management

OBASHI and the CMDB

In September 2010 APMG unveiled its new OBASHI Certification, based on the OBASHI methodology developed in 2001 in the oil & gas industry. I won’t go into detail here, but there is at least one book available from APMG-Business Books, though apparently not on Amazon, let alone in Kindle format. There is also an OBASHI Explained white paper. Confession: I haven’t yet read the book.

This is just a first impression, and it was this: OBASHI looks a lot like the CMDB analysis I have done several times in the past. Here is a CMDB framework I have commonly seen used in industry.

At the top you can imagine are products that your company offers to its customers. Those products are provided by Users (of Services provided by IT), which may be individual users, departments, divisions, teams, etc. The Users use services which may be provided by other services or by applications. Applications run on servers and computers, which are connected by networks.

That sounds obvious, but I have found that some people find it a bit abstract until they start laying out practical examples. The important thing to remember is that the objects (rectangles on a diagram) are connected by arrows. In CMDB parlance, the objects are Configuration Items (CI’s) and the arrows are relationships. I typically label the diagrams.

The OBASHI framework seems to use the same concepts. When modeling a CMDB I usually allowed the customer a little more flexibility of CI Types and Relationships, depending on their requirements. OBASHI seems a little more rigid in the use of components and data flows.

At first I wondered what the purpose of OBASHI was, but after further thought I like it. Although it describes data flows, it is easy to envision it describing other flows, such as process flows. It is a framework for analysis that communicates complex systems effectively. It doesn’t require the full implementation of an expensive CMDB to achieve its benefits, and the information collected can readily be reused in the implementation of a CMDB.

Categories
Problem Management

The Problem of Revealed Preferences

Consumers express their rational interests through their behaviors. Economists call these revealed preferences.

IT Service Management trainers and consultants tell other companies how they should run their businesses, based on industry best practice frameworks. We seldom examine revealed preferences, but perhaps we should.

One of my first engagements as an ITSM consultant involved the planning and implementation of a Problem Management process at an organization that had committed to widespread ITIL adoption. For several years after I was an acolyte of Problem Management.

I have implemented Problem Management at a dozen customers and tried to sell it to even more. Among the latter, most resisted. The excuses often included “we are too busy putting out fires”, “we aren’t ready for that”, and “that’s not our culture”. Perhaps I wasn’t selling it well enough.

Most organizations do ad-hoc Problem Management, but few organizations have defined processes. Their reasons probably contain some legitimacy.

Organizations do need to be in control of their Incident Management process before they are ready for Problem Management. They do need to be in control of putting out fires and fulfilling service requests. Most organizations find themselves with a backlog, even those under control, and that is alright. A reporting history is a prerequisite.

Organizations must also be in control of their Changes. The resolution of known errors takes place through Change Management, and organizations in control of their Changes are better able to prioritize the resolution of their known errors. In any case, at most organizations an effective Change Management process is more beneficial than Problem Management.

I usually told customers they were ready for Problem Management only if they could devote at minimum one-quarter to one-half of an FTE to the Problem Manager role, and this person would need to have a good overview of the architecture or be well-connected throughout the organization.

In other words, the Problem Manager should be reasonably senior and independent of the Service Desk Manager. Without this the Problems will be starved of resources. Someone needs to liaise with senior management, ascertain business pain points, prioritize tasks, acquire resources, and hold people accountable. In other words, Problem Management requires commitment at senior levels, and it isn’t clear all organizations have this. Many don’t.

There is another reason that is more important. Organizations that are heavily focused on executing strategic projects won’t have the resources to execute Problem Management processes consistently. There are several situations in which this can occur. In one instance the organization had acquired another and had dozens of projects to deliver on a tight deadline. In another, the reverse situation was occurring, as the organization built functions to replace those provided by a former parent company. In another, the organization simply put a lot of focus on executing projects in support of its new strategy.

Some organizations plainly require Problem Management processes. I have seen more rigorous adherence among organizations that provide IT services to other technical organizations, such as data center outsourcers, hosting providers, or organizations that provide services such as video distribution or mapping to telcos. When services are interrupted, the customers demand root cause analyses.

But it is probably true that many organizations don’t need or aren’t ready for Problem Management. Problem Management is an investment, and like all investments should deliver returns that exceed not only its own costs, but also exceed the benefits of similar efforts the organization may undertake. So it has been revealed.

Categories
IT Service Management Tools

Implementing a Service Desk Application, Part 1

In this multi-part series on planning and implementing a Service Desk application, I start with identifying the characteristics of the tools.

Tool Characteristics

Service Desk applications usually support, at a minimum, the ITIL® V3 processes of Request Fulfillment, Incident Management, Problem Management, and Change Management. They frequently also support Release Management, Service Catalog Management, and Service Asset and Configuration Management. They less frequently support Capacity Management, Availability Management, and/or Asset Management, but the tools may support integrations with other products in these areas.

In a nutshell, a Service Desk application is a mechanism for tracking tickets. I am sometimes asked what Service Desk tools do that cannot be done in open source bug tracking applications like Bugzilla. Without commenting specifically on Bugzilla or other products, Service Desks usually have a few requirements not supported by those tools. In general Service Desks require support for multiple processes, each with different fields, statuses, priorities, workflows, and approvals, and they require integration of ticket flows between those processes. For example, an Incident can initiate a Problem, which can initiate a Change. An Incident should be held Pending until the Change is complete.
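The ticket-flow integration just described can be sketched with a toy model: an Incident spawns a Problem, the Problem spawns a Change, and the Incident is held Pending until the Change completes. The class, kind, and status names here are illustrative; real tools make these configurable.

```python
# Toy ticket flow: Incident -> Problem -> Change, with a status dependency.
class Ticket:
    def __init__(self, kind):
        self.kind = kind
        self.status = "Open"
        self.linked = []  # downstream tickets spawned from this one

    def spawn(self, kind):
        """Create a downstream ticket and link it to this one."""
        child = Ticket(kind)
        self.linked.append(child)
        return child

incident = Ticket("Incident")
problem = incident.spawn("Problem")
change = problem.spawn("Change")

# The Incident is held Pending while the Change is in flight...
incident.status = "Pending"

# ...and resolved once the Change completes.
change.status = "Complete"
if change.status == "Complete":
    incident.status = "Resolved"

print(incident.status)  # Resolved
```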

Service Desk applications usually provide some capabilities to develop workflows. They are not true workflow engines, but provide lightweight workflow development capabilities consistent with the needs of Service Desks. Usually they provide configurable priorities and statuses. Often they provide configurable fields. They usually provide the capability of one or more drop-down dependency trees for categorizing the tickets. Usually there are workflow rules around, for example, how long tickets can spend in a particular status, and what should happen if that time period is exceeded. They usually provide approval workflows.
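A time-in-status rule of the kind described above might look like the following sketch; the status names and time limits are invented for illustration:

```python
from datetime import datetime, timedelta

# A lightweight workflow rule: flag any ticket that has sat in a status
# longer than the limit configured for that status.
STATUS_LIMITS = {"Open": timedelta(hours=4), "Pending": timedelta(days=3)}

def needs_escalation(status, entered_at, now):
    """True if the ticket has exceeded the time limit for its current status."""
    limit = STATUS_LIMITS.get(status)
    return limit is not None and (now - entered_at) > limit

now = datetime(2012, 1, 1, 12, 0)
print(needs_escalation("Open", datetime(2012, 1, 1, 6, 0), now))     # True: 6h > 4h limit
print(needs_escalation("Pending", datetime(2012, 1, 1, 0, 0), now))  # False: 12h < 3d limit
```

What "escalation" means (notify a manager, bump the priority, reassign) is then just another configurable action attached to the rule.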

Service Desk applications also need to provide some reporting capabilities, which can come in a variety of manners. Some provide built-in reports. Some provide built-in capabilities for reporting configurations. Others provide templates with 3rd party tools, such as Crystal Reports or Microsoft SQL Server Reporting Service. The reporting may also include exporting of ticket information for consumption, manipulation or presentation in a tool like Microsoft Excel or Microsoft Access.

It is also important to have methods of interfacing the Service Desk application with other systems. In order to automate the population of customer information in the ticket, the Service Desk tool needs to import data from a corporate repository such as Active Directory or an HR database. Ideally this data will be imported dynamically, in real time, when the ticket is created, but some tools replicate the data into their own data stores. The most common method is to interface with Active Directory via the LDAP protocol. Service Desk systems might also provide the capability of importing other data into tickets, such as asset data from a 3rd party SQL database. Some tools allow dropdown or multiselect field choices to be imported from a data table. If the application includes a CMDB, then it should also provide methods to import or federate data from multiple 3rd party data sources.
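Whatever the transport (an LDAP query at ticket creation or batch replication), the heart of the integration is mapping directory attributes onto ticket fields. Here is a sketch using common Active Directory attribute names; the ticket-side field names are hypothetical:

```python
# Map a directory entry (e.g. the result of an LDAP search against
# Active Directory) onto the customer fields of a new ticket.
ATTRIBUTE_MAP = {
    "displayName": "customer_name",
    "mail": "customer_email",
    "telephoneNumber": "customer_phone",
    "department": "customer_department",
}

def ticket_fields(ldap_entry):
    """Translate known directory attributes into ticket field names,
    skipping attributes the directory did not return."""
    return {field: ldap_entry[attr]
            for attr, field in ATTRIBUTE_MAP.items() if attr in ldap_entry}

entry = {"displayName": "Jane Doe", "mail": "jane@example.com",
         "department": "Finance"}
print(ticket_fields(entry))
```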

Some Service Desk systems use a “fat client” that is installed on the machine. The drawback is having to install the fat client on every machine that will be used by the service desk and/or its customers. Other Service Desk applications support user interactions via a web browser, so there is nothing to deploy on the clients. Others support a hybrid model, whereby limited or customer interfaces are provided via the web, while assignee and/or administration functions are provided via the fat client.

Most tools provide email interfaces in which agents or customers can create or update tickets via email, and updates to tickets can be sent to agents and/or customers via email. In addition, many tools provide APIs and/or web services interfaces for programmatic updates to tickets. Finally, most tools provide role-based access to the tool. For example, customers have fewer rights than agents, and depending on the role, some people can approve tickets at certain times, or enter certain information depending on the status.
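Role-based access of the kind described above boils down to mapping roles to permitted actions. A minimal sketch, with invented role and action names:

```python
# Role-based access: each role grants a set of ticket actions.
ROLE_PERMISSIONS = {
    "customer": {"create", "comment"},
    "agent": {"create", "comment", "assign", "resolve"},
    "approver": {"comment", "approve"},
}

def can(role, action):
    """True if the given role is permitted to perform the action."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(can("customer", "resolve"))  # False
print(can("agent", "resolve"))     # True
```

Real tools layer status- and time-based conditions on top of this (an approver may only approve while a ticket is in a pending-approval status, for example), but the role-to-action mapping is the foundation.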

Next week I will outline some steps to prepare and plan for the selection of a tool.