Apple has a lot of work to do if it wants to compete in the self-driving car industry. Tesla already sells vehicles with semi-autonomous systems, while automakers like General Motors are already giving rides in their self-driving cars.
Meanwhile, Waymo, Google’s self-driving spin-off, is testing its autonomous Chrysler Pacifica minivans in San Francisco and plans to launch its own ride-hailing service. It won’t be the only autonomous taxi service around, however, as Uber will be joining the race for driverless cabs in 2019. Even a few Lyft-branded vehicles were making the rounds at CES 2018.
Within AI, deep learning (DL) represents the area of greatest untapped potential. (For more information on AI categories, see the sidebar, “The evolution of AI.”) This technology relies on complex neural networks that process information using various architectures, composed of layers and nodes that approximate the functions of neurons in a brain. Each set of nodes in the network performs a different pattern analysis, allowing DL to deliver far more sophisticated insights than earlier AI tools. With this increased sophistication comes a greater need for leading-edge hardware and software.
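The layered structure described above can be sketched in a few lines of code. This is a toy illustration only: the layer sizes, weights, and activation function are all invented for the example, not drawn from any real DL system.

```python
# A minimal sketch of a feedforward network: each layer of nodes computes
# weighted sums of the previous layer's outputs, then applies a nonlinear
# activation -- a crude analogue of neurons firing.

def relu(x):
    return max(0.0, x)

def layer(inputs, weights, biases, activation):
    # One node per (weight row, bias); each node sees every input.
    return [activation(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Toy network: 3 inputs -> 2 hidden nodes -> 1 output (all values invented).
hidden_w = [[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]]
hidden_b = [0.0, 0.1]
out_w = [[1.0, -1.0]]
out_b = [0.0]

x = [1.0, 2.0, 3.0]
h = layer(x, hidden_w, hidden_b, relu)
y = layer(h, out_w, out_b, lambda v: v)  # linear output layer
print(y)
```

Real DL architectures stack many such layers and learn the weights from data; the point here is only the layer-of-nodes structure the paragraph describes.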
Well aware of AI’s massive potential, leading high-tech companies have taken early steps to win in this market. But the industry is still nascent and a clear recipe for success hasn’t emerged. So how can companies capture value and see a return on their huge AI investments?
The responses fall into several camps:
1. This guy is incompetent and should be fired (fairness seeking).
2. This guy is incompetent and will be fired anyway (realism).
3. This guy is incompetent and you should help him to acquire the skills he needs for his job (compassion).
4. Talk to your manager (practical).
5. It depends on the organization and culture (consultant speak).
In general, the correct answer is to address any concerns with your immediate manager, while being willing and able to offer suggestions should he or she request them.
Source: “Incompetent new employee – should I advise HR/ethics?”, Best Practices forum, Spiceworks
But both agreed that healthcare was ripe for disruption. That is still true, but the pace is slower than we envisioned a decade ago.
One reason is the high cost of certification. Consumer-grade equipment produces interesting data for casual self-analysis. Producing data to be used in medical diagnoses requires greater confidence in the accuracy of the data and the consistency of the devices used to produce it. In the case of robotics, makers have to demonstrate in clinical trials that the equipment is safer and produces tangibly better outcomes.
Another reason for the slow pace of disruption is the need to maintain the confidentiality of patient data. Device makers collect and store patient data, but need mechanisms to authorize and interface with medical providers on behalf of their patients. Extending the value chain requires complex protocols and interfaces, yet there is little incentive for any single party to develop them.
These are some random musings on research I performed a few years ago. If I have overlooked any recent developments, please feel free to leave feedback.
For example, we may choose a software package on the basis of criteria such as supported features or functions, scalability, quality (fitness for purpose, fitness for use), security, availability, and disaster recovery. AHP provides a mechanism for weighting the criteria by interviewing several members of staff for pairwise assessments of relative importance, which can then be transformed into relative weightings using an eigenvector transformation.
The idea of using multiple criteria to assess multiple options is not new. AHP enhances the ability to weight the assessment criteria using feedback from multiple stakeholders with conflicting agendas. Rather than determining a “correct” answer it assesses the answer most consistent with the organization’s understanding of the problem.
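The eigenvector step can be illustrated with a small sketch. The 3×3 pairwise comparison matrix below is hypothetical (criteria and judgments invented for the example); the code extracts the principal eigenvector by simple power iteration, which is one standard way to compute AHP weights.

```python
# Hypothetical AHP pairwise comparison matrix for three criteria
# (say, features vs. security vs. availability). A[i][j] records how
# much more important criterion i is than criterion j, so A[j][i] is
# its reciprocal.

A = [[1.0, 3.0, 5.0],
     [1/3, 1.0, 2.0],
     [1/5, 1/2, 1.0]]

def ahp_weights(matrix, iterations=50):
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iterations):
        # Power iteration: repeated multiplication converges to the
        # principal eigenvector of a positive matrix.
        w = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(w)
        w = [x / total for x in w]  # normalize so the weights sum to 1
    return w

weights = ahp_weights(A)
print([round(w, 3) for w in weights])  # relative criterion weights
```

In practice the matrix would be assembled from stakeholder interviews, and a consistency ratio would be checked before trusting the weights; this sketch shows only the weighting mechanics.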
In “A Method to Select IT Service Management Processes for Improvement” (free access to PDF), professors from the School of Management & Enterprise at the University of Southern Queensland used AHP as part of a method for ranking ISO 20000 process priorities for improvement. This particular paper is worth exploring in much greater detail because, in my experience, the prioritization of process or service improvement initiatives can be very painful at organizations, particularly those with multiple influential stakeholders with incompatible or conflicting requirements.
Remaining time until violation of the service level target
IT organizations typically use simplified “rules of thumb” for prioritizing Incidents based on Impact and Urgency. Notably, three of these four factors are typically included in variants of the schema. Please see my discussion in Incident prioritization in detail (which also avoids the explicit use of SLAs in evaluating Incident resolution).
I don’t find the prioritization of Incidents to be a particularly strong candidate for AHP analysis. High priority incidents are relatively rare and are generally handled one at a time or by non-overlapping resources. Lower priority incidents (routine break-fixes for the Service Desk) can be handled first-come-first-served or using the relatively crude but quick methods described in ITIL.
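The crude-but-quick approach amounts to a fixed Impact × Urgency lookup table. The sketch below is illustrative only; the levels and resulting priority codes are in the style ITIL describes, not a quotation of the framework.

```python
# A hypothetical Impact x Urgency priority matrix. Priority 1 is the
# most urgent (critical); priority 5 is routine/planning work.

PRIORITY = {
    ("high",   "high"):   1,
    ("high",   "medium"): 2,
    ("high",   "low"):    3,
    ("medium", "high"):   2,
    ("medium", "medium"): 3,
    ("medium", "low"):    4,
    ("low",    "high"):   3,
    ("low",    "medium"): 4,
    ("low",    "low"):    5,
}

def incident_priority(impact, urgency):
    # A simple table lookup -- no stakeholder weighting, no eigenvectors.
    return PRIORITY[(impact, urgency)]

print(incident_priority("high", "medium"))  # -> 2
```

The simplicity is the point: for high volumes of routine incidents, a lookup like this is fast, consistent, and good enough.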
Prioritization of Problems seems a more suitable candidate for AHP because a) Problem resolution can require days or weeks, b) multiple Problems may be active and contending for resources, and c) the resolution of a Problem can deliver much greater long-term financial impact for organizations. The principles and underlying support system would be similar.
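Once criterion weights exist, ranking contending Problems reduces to a weighted sum over each Problem's criterion scores. Everything in this sketch is invented for illustration: the criteria, the weights (which would come from the AHP eigenvector step), the Problem identifiers, and the scores (which would come from stakeholder assessments).

```python
# Hypothetical AHP-derived weights for three Problem-ranking criteria.
weights = {"financial_impact": 0.6, "recurrence_rate": 0.25, "effort": 0.15}

# Hypothetical open Problems, each scored 0..1 against every criterion.
problems = {
    "PRB-101": {"financial_impact": 0.9, "recurrence_rate": 0.4, "effort": 0.2},
    "PRB-102": {"financial_impact": 0.3, "recurrence_rate": 0.8, "effort": 0.9},
    "PRB-103": {"financial_impact": 0.6, "recurrence_rate": 0.6, "effort": 0.5},
}

def score(p):
    # Weighted sum of the Problem's scores across all criteria.
    return sum(weights[c] * p[c] for c in weights)

ranked = sorted(problems, key=lambda name: score(problems[name]), reverse=True)
print(ranked)
```

Because Problems contend for scarce resolution resources over weeks, a ranking like this can be recomputed as scores or weights change, which is where AHP earns its keep over a one-off gut call.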
Other uses of AHP that merit further investigation include:
Prioritization of service and process improvement initiatives
Selection of ITSSM tools
Selection of vendors (in support of the Supplier Management function / process of Service Design) and/or cloud providers
Activity prioritization for resources in multi-queuing environments (leveling of activities across multiple processes and/or projects)
For the time being I am focusing my attention on the use of models in service management (more here). Models are useful because they help us understand correlations, understand system dynamics, make predictions, and test the effects of changes.
There are few, if any, models in regular use in service management. There may be valid reasons for this. There are few, if any, good models in regular use in business (there are many bad models, and many more that are fragile and not applicable outside narrow domains).
ITIL 2011 does make use of models, where appropriate. The service provider model is one such example that helps us understand the nature of industry changes.
More are needed, and I am making a few assumptions here:
There are robust models in use in other domains that can be applied to the management of IT operations and services
These haven’t been adopted because we are unaware of them, or
The conditions in which these models can be applied haven’t been explored.
It is time for us to explore other models that may be applicable and useful.
Oh, and Happy New Year! I wish everyone a happy and prosperous 2017.
I don’t work with customers who implemented it successfully.
I have nothing new to offer.
Both points are still true. There isn’t anything new to say about DevOps except that the hype machine is still in overdrive and the loop machine is wearing out.
DevOps is an improvement to Release and Deployment Management. There’s no conceptual abstraction on top that changes the way we think about releases and deployments. It arose in response to new layers of technical abstraction that enabled new capabilities. These include server virtualization (VMware), the availability of automated deployment tools (Chef, Puppet), and the rise of containers (Docker) and supporting programming interfaces and orchestration tools.
Together they allow us to make an order-of-magnitude improvement in the performance of the Release and Deployment Management and Service Validation and Testing processes. This is a very good thing, because it drastically decreases the cost of software deployment and enables more rapid experimentation when pursuing new customers or improving existing services. Moreover, developers can do this with tools (programming languages) familiar to them. This is also a good thing.
However, organizations adopting DevOps practices must understand the requirements of various stakeholders and minimize barriers of communication across silos: see Can We Stop Talking About DevOps?. Moreover, these organizations have to be in control of how applications are released (Release and Deployment Management), and you still have to know what you are automating (Service Asset and Configuration Management).
People. Process. Technology. In that order. Nothing has changed.
DevOps will fail without them, except in limited instances or for newly developed services (or recently deployed services) receiving abundant management attention and not encumbered by legacy configurations.
In the long run the automated deployment of production environments has a bright future, as even more functions of IT are virtualized (networking, storage). In a sense, DevOps will have won; it will be pervasive. We just won’t call it DevOps any longer. We will go back to calling it Service Management.
It’s just a hell of a lot harder than we are given credit for.
Enter FiveThirtyEight Science on Science Isn’t Broken (“It’s just a hell of a lot harder than we give it credit for”). Science is messy by design. Scientific methods are adversarial. Scientists are competitive for mind share as well as funding. Science pushes into the unknown and forces us to recognize uncomfortable new truths.
When thinking about this,1 it occurred to me that traditional modes of managing IT are failing us. That isn’t necessarily the fault of IT departments or their supporting frameworks, or not entirely. Over the last decade I have worked with over 200 IT organizations in a variety of industries and geographies. Some IT departments are better than others, but overall IT staff have become cooperative, thoughtful, and motivated to fulfill the needs of their stakeholders. This is a major shift in mindset from when we regularly disparaged “users” as “lusers”, “PEBKACs”, or “identity (or ID10T) errors”.2
All is not well;
I doubt some foul play; would the night were come!
– Hamlet (1.2.254), Hamlet, alone on the platform.
This is not to say all is well in IT Service Management or ITIL®.3
Interest in ITIL® certifications is flat overall and declining in most regions; the exceptions are Europe and Asia, the latter primarily meaning India. ITIL is largely irrelevant in the rest of the world, or rapidly becoming so.
None of the frameworks built in parallel with ITIL®, including IEC/ISO 20000 and COBIT5, have gained any traction.
Best practices and good practices are, by definition, past practices. The framework is ineffective in complex environments in which cause and effect relationships are obvious only in retrospect, and in which emergent behaviors are unpredictable.
The ITIL® framework, itself long in the tooth, was last updated in 2011 with no refresh in sight.
AXELOS and ISACA have increasingly turned their attention to information security with their RESILIA™ framework and Cybersecurity Nexus portal, respectively. This is a natural extension for ISACA and a slight departure for AXELOS.
IT is so hard in part because it is complex. In a few short years the industry has transitioned from traditional servers, to virtual servers, and increasingly to containers. Container orchestration is improving just as hybrid containers and hyperconverged infrastructure standards are appearing. IT services are increasingly delivered via a combination of in-house and cloud vendors, each of which operates with different standards and APIs. Meanwhile, security attacks are becoming increasingly sophisticated as the attackers become professionals.
Progress is arriving from outside of IT operations, in the form of Lean and Agile practices, and codified (more or less) under the umbrella of DevOps (Let’s Not Paper Over the Differences), many of which specifically reject the bureaucracy erected by the process focus of traditional frameworks.4
What will emerge in two years is anyone’s guess. If your head isn’t spinning, you aren’t paying attention. IT departments are paying attention, even if they do not have the tools for managing it. What they don’t need is 4 more processes in the next revision of ITIL® or more control points in COBIT5.
One thing we do need is a framework for managing complexity. The Cynefin Framework shows promise for helping to manage the trade-offs between discovering emergent behaviors and exploiting them. Cynefin is not a panacea, but once the nature of complexity is understood, it follows that panaceas only exist in ancient mythologies. IT departments, meanwhile, will continue muddling along, hobbled but not broken.
Science is an imperfect metaphor for IT Service Management because IT should be cooperative, not competitive. However, IT does function in economic environments that are, by nature, competitive. IT relies heavily on vendors whose interests are not necessarily aligned with their customers.↩
The fact that most people stopped referring to them as “users” is major progress, but perhaps this is because IT has stopped being the tail that wags the dog — and become the dog.↩
ITIL and RESILIA are registered trademarks of AXELOS Limited.↩
Although ITSM has shifted its focus from processes to services, the latter are largely misunderstood even by IT. Stakeholders outside of IT are largely uninterested in Service Catalogs.↩