8. Investment and Planning

8.1. Introduction

Figure 142. How to invest in your organization?

Your decision to break your organization into multiple teams is an investment decision. You are going to devote some of your resources to one team, and some to another team. Furthermore, there will be additional spending still managed at a corporate level. If the results meet your expectations, you will then likely proceed with further investments managed by the same or similar organization structure. How do you decide on and manage these separate streams of investment? What is your approach for structuring them? How do you ensure that they are returning your desired results?

People are competitive. Your multiple teams will start to contend for investment. This is unavoidable. They will want their activities adequately supported, and their understanding of “adequate” may be different from yours—​and will also vary between each other. They will be watching that the other teams don’t get “more than their share” and are using their share effectively. You start to see that the teams need to be constantly reminded of the big picture, in order to keep their discussions and occasional disagreements constructive.

You now have a dedicated, full-time CFO and you are increasingly subject to standard accounting and budgeting rules. But annual budgeting seems at odds with how you have chosen to run your company to date. What alternatives are there? You begin to see that your approach to financial management affects every aspect of your company, including the effectiveness of your product teams.

You also begin to see your vendor relationships (e.g., your cloud providers) as a form of investment. As your use of their products deepens, it becomes more difficult to switch from them, and so you spend more time evaluating before committing. You establish a more formalized approach. Open source changes the software vendor relationship to some degree, but it’s still a portfolio of commitments and relationships requiring management.

Project management is often seen as necessary for financial planning, especially regarding the efforts most critical to the business. It is seen as essential because of the desire to coordinate, manage, and plan. Having a vision isn’t worth much without some ability to forecast how long it will take and what it will cost, and to monitor progress against the forecast in an ongoing way. Project management is often defined as the execution of a given scope of work within constraints of time and budget. But questions arise. We are already executing work, without this concept of “project.” We have discussed Scrum in Chapter 4, Kanban in Chapter 5, and various organizational levels and delivery models in the introduction to Part III. Here we will examine this idea of “scope” in more detail. How would we know it in advance, so that the “constraints of time and budget” are reasonable?

As we have seen in our discussions of product management, when implementing truly new products (including digital products), estimating time and budget is challenging because the necessary information is not available. In fact, creating information (which, per Lean Product Development, requires tight feedback loops) is the actual work of the “project.” Therefore, in the new Agile world, there is some uncertainty as to the role of and even need for traditional project management. This chapter will examine some of the reasons for project management’s persistence and how it is adapting to the new product-centric reality.

In the project management literature and tradition, much focus is given to the execution aspect of project management: its ability to manage complex, interdependent work across resource limitations. We discussed project management and execution in Chapter 7. In Chapter 8, we are interested in the structural role of project management as a way of managing investments. Project management may be institutionalized through establishing an organizational function known as the Project Management Office (PMO), which may use a concept of project portfolio as a means of constraining and managing the organization’s multiple priorities. What is the relationship of the traditional PMO to the new, product-centric, digital world? [1]

8.1.1. Chapter 8 outline

In this chapter, we will cover:

  • IT financial management

    • Historical IT financial practices

    • Next generation IT finance

  • IT sourcing and contract management

    • Basic concerns

    • Outsourcing and cloudsourcing

    • Software licensing

    • The role of industry analysts

    • Software development and contracts

  • Structuring the investment

    • Features versus components

    • Epics and new products

  • Larger-scale planning and estimating

    • Why plan?

    • Planning larger efforts

  • Why project management?

    • A traditional IT project

    • How is a project different from simple “work management”?

    • The “iron triangle”

    • Project practices

    • The future of project management

  • Topics

    • Critical chain

    • The Agile project frameworks

  • Conclusion

8.1.2. Chapter 8 learning objectives

  • Describe traditional and next generation IT finance concerns

  • Describe various topics and issues in digital and IT sourcing

  • Identify and describe techniques for structuring digital investment portfolios

  • Describe basic project management practices

  • Critically evaluate the role and limitations of project management as a delivery vehicle

8.2. IT financial management

Computers initially were used to automate manual operations, and the benefits were relatively easy to forecast. As organizations use computers more for strategic purposes and management information enhancements, the benefits become harder to forecast. In a growing number of companies, the individuals most qualified to forecast the most significant costs and benefits are product managers, financial specialists, marketing specialists, and not the I/S technical specialists . . .
— Terence Quinlan
IT Financial Management Association

Financial health is an essential dimension of business health. And digital technology has been one of the fastest-growing budget items in modern enterprises. Its relationship to enterprise financial management has been a concern since the first computers were acquired for business purposes.

Important
Financial management is a broad, complex, and evolving topic and its relationship to IT and digital management is even more so. This brief section can only cover a few basics. However, it is important for you to have an understanding of the intersection of Agile and Lean IT with finance, as your organization’s financial management approach can determine the effectiveness of your digital strategy. See the cited sources and Further Reading at the end of this chapter.

The objectives of IT finance include:

  • Providing a framework for tracking and accounting for digital income and expenses

  • Supporting financial analysis of digital strategies (business models and operating models, including sourcing)

  • Supporting the digital and IT-related aspects of the corporate budgetary and reporting processes, including internal and external policy compliance

  • Supporting (where appropriate) internal cost recovery from business units consuming digital services

  • Supporting accurate and transparent comparison of IT financial performance to peers and market offerings (benchmarking)

A company scaling up today would often make different decisions from a company that scaled up 40 years ago. This is especially apparent in the matter of how to finance digital/IT systems development and operations. The intent of this section is to explore both the traditional approaches to IT financial management and the emerging Agile/Lean responses.

This section has the following outline:

  • Historical IT financial practices

    • Annual budgeting and project funding

    • Cost accounting and chargeback

  • Next generation IT finance

    • Lean Accounting & Beyond Budgeting

    • Lean Product Development

    • Internal “venture” funding

    • Value stream orientation

    • Internal market economics

    • Service brokerage

8.2.1. Historical IT financial practices

Historically, IT financial management has been defined by two major practices:

  • An annual budgeting cycle, in which project funding is decided

  • Cost accounting, sometimes with associated internal transfers (chargebacks) as a primary tool for understanding and controlling IT expenses

Both of these practices are being challenged by Agile and Lean IT thinking.

Annual budgeting and project funding
IT organizations typically adhere to annual budgeting and planning cycles—which can involve painful rebalancing exercises across an entire portfolio of technology initiatives, as well as a sizable amount of rework and waste. This approach is anathema to companies that are seeking to deploy agile at scale. Some businesses in our research base are taking a different approach. Overall budgeting is still done yearly, but road maps and plans are revisited quarterly or monthly, and projects are reprioritized continually. [68]
— Comella-Dorda et al
An Operating Model for Company-Wide Agile Development

In the common practice of the annual budget cycle, companies embark once a year on a detailed planning effort to forecast revenues and how they will be spent. Much emphasis is placed on the accuracy of such forecasts, despite the near-impossibility of achieving it. (If estimating one large software project is challenging, how much more challenging to estimate an entire enterprise’s income and expenditures?)

The annual budget has two primary components: capital expenditures and operating expenditures, commonly called CAPEX and OPEX. The rules about what must go where are fairly rigid and determined by accounting standards with some leeway for the organization’s preferences.

Software development can be capitalized, as it is an investment from which the company hopes to benefit in the future. Operations is typically expensed. Capitalized activities may be accounted for over multiple years (therefore becoming a reasonable candidate for financing and multi-year amortization). Expensed activities must be accounted for in-year.
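To make the distinction concrete, here is a minimal sketch in Python; the dollar figures and the three-year straight-line amortization schedule are assumptions for illustration, not prescribed accounting treatments.

# Hypothetical illustration: capitalized development amortized over several years
# versus operational spending expensed in-year. Figures and the three-year
# straight-line schedule are assumptions, not prescribed accounting treatments.

def straight_line_amortization(capitalized_cost: float, years: int) -> list[float]:
    """Spread a capitalized cost evenly across the amortization period."""
    return [round(capitalized_cost / years, 2) for _ in range(years)]

dev_capex = 300_000.0   # capitalized software development
cloud_opex = 120_000.0  # expensed cloud operations, recognized in the current year

print("CAPEX recognized per year:", straight_line_amortization(dev_capex, years=3))
# CAPEX recognized per year: [100000.0, 100000.0, 100000.0]
print("OPEX recognized this year:", cloud_opex)
# OPEX recognized this year: 120000.0

The point is simply that capitalized development is recognized across several reporting periods, while expensed operations hit the current year in full.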

One can only “go to the well” once a year. As such, extensive planning and negotiation traditionally take place around the IT project portfolio. Business units and their IT partners debate the priorities for the capital budget, assess resources, and finalize a list of investments. Project managers are identified and tasked with marshaling the needed resources for execution.

This annual cycle receives much criticism in the Agile and Lean communities. From a Lean perspective, projects can be equated to large “batches” of work. Using annual projects as a basis for investment can result in misguided attempts to plan large batches of complex work in great detail so that resource requirements can be known well in advance. The history of the Agile movement is in many ways a challenge and correction of this thinking, as we have discussed throughout this book.

The execution model for digital/IT acquisition adds further complexity. Traditionally, project management has been the major funding vehicle for capital investments, distinct from the operational expense. But with the rise of cloud computing and product-centric management, companies are moving away from traditional capital projects. New products are created with greater agility, in response to market need, and without the large capital investments of the past in physical hardware.

This does not mean that traditional accounting distinctions between CAPEX and OPEX go away. Even with expensed cloud infrastructure services, software development may still be capitalized, as may software licenses.

Cost accounting and chargeback
Note
The term “cost accounting” is not the same as just “accounting for costs,” which is always a concern for any organization. Cost accounting is defined as “the techniques for determining the costs of products, processes, projects, etc. in order to report the correct amounts on the financial statements, and assisting management in making decisions and in the planning and control of an organization . . . For example, cost accounting is used to compute the unit cost of a manufacturer’s products in order to report the cost of inventory on its balance sheet and the cost of goods sold on its income statement. This is achieved with techniques such as the allocation of manufacturing overhead costs and through the use of process costing, operations costing, and job-order costing systems." [4]

Information technology is often consumed as a “shared service,” which requires internal financial transfers. What does this mean?

Here is a traditional example. An IT group purchases a mainframe computer for $1,000,000. This mainframe is made available to various departments who are charged for it. Because the mainframe is a shared resource, it can run multiple workloads simultaneously. For the year, we see the following usage:

  • 30% Accounting

  • 40% Sales operations

  • 30% Supply chain

In the simplest direct allocation model, the departments would pay for the portion of the mainframe that they used (a minimal allocation sketch follows the questions below). But things are always more complex than that:

  • What if the mainframe has excess capacity? Who pays for that?

  • What if Sales Operations stops using the mainframe? Do Accounting and Supply Chain have to make up the loss? What if Accounting decides to stop using it because of the price increase? In public utilities, this is known as a "death spiral" and the problem was noted as early as 1974 by Richard Nolan [198 p. 179].

  • The mainframe requires power, floor space, and cooling. How are these incorporated into the departmental charges?

  • Ultimately, the Accounting organization (and perhaps Supply Chain) are back-office cost centers as well. Does it make sense for them to be allocated revenues from company income, only to have those revenues then re-directed to the IT department?
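The following minimal sketch illustrates the direct allocation model, and the “death spiral” raised in the questions above; the $1,000,000 cost and usage shares come from the example, while the departure of Sales Operations is a hypothetical scenario.

# Minimal sketch of direct allocation and the "death spiral" risk. The $1,000,000
# mainframe and the usage shares come from the example above; the departure of
# Sales Operations is a hypothetical scenario.

MAINFRAME_COST = 1_000_000.0

def allocate(cost: float, usage_shares: dict[str, float]) -> dict[str, float]:
    """Allocate a fixed cost in proportion to each department's usage share."""
    total = sum(usage_shares.values())
    return {dept: round(cost * share / total, 2) for dept, share in usage_shares.items()}

year_1 = {"Accounting": 0.30, "Sales operations": 0.40, "Supply chain": 0.30}
print(allocate(MAINFRAME_COST, year_1))
# {'Accounting': 300000.0, 'Sales operations': 400000.0, 'Supply chain': 300000.0}

# If Sales Operations leaves and the full cost must still be recovered, the
# charges to the remaining departments rise: the start of a "death spiral".
year_2 = {"Accounting": 0.30, "Supply chain": 0.30}
print(allocate(MAINFRAME_COST, year_2))
# {'Accounting': 500000.0, 'Supply chain': 500000.0}

Because the cost is fixed and must still be fully recovered, each consumer that leaves raises the charges for those who remain, encouraging further departures.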

Historically, cost accounting has been the basis for much IT financial management (see e.g., ITIL Service Strategy, [266], p.202; [216]). Such approaches traditionally seek full absorption of unit costs; that is, each “unit” of inventory ideally represents the total cost of all its inputs: materials, labor, and overhead such as machinery and buildings.

In IT/digital service organizations, there are three basic sources of cost: “cells, atoms, and bits.” That is:

  • People (i.e. their time)

  • Hardware

  • Software

However, these are “direct” costs: costs that, for example, a product or project manager can see in full.

Another class of cost is “indirect.” The IT service might be charged $300 per square foot for data center space. This provides access to rack space, power, cooling, security, and so forth. This charge represents the bills the Facilities organization receives from power companies, mechanicals vendors, security services, and so forth — not to mention the mortgage!

Finally, the service may depend on other services. Perhaps instead of a dedicated database server, the service subscribes to a database service that gives them a high-performance relational database, but where they do not pay directly for the hardware, software, and support services the database is based on. Just to make things more complicated, the services might be external (public cloud) or internal (private cloud or other offerings).

Those are the major classes of cost. But how do we understand the “unit of inventory” in an IT services context? A variety of concepts can be used, depending on the service in question:

  • Transactions

  • Users

  • Network ports

  • Storage (e.g., gigabytes of disk)

In internal IT organizations (see "Defining consumer, customer, and sponsor"), this cost accounting is then used to transfer funds from the budgets of consuming organizations to the IT budget. Sometimes this is done via simple allocations (Marketing pays 30%, Sales pays 25%, etc.), and sometimes this is done through more sophisticated approaches, such as defining unit costs for services.

For example, the fully absorbed unit cost for a customer account transaction might be determined to be $0.25; this ideally represents the total cost of the hardware, software, and staff divided by the expected transactions. Establishing the models for such costs, and then tracking them, can be a complex undertaking, requiring correspondingly complex systems.

IT managers have known for years that overly detailed cost accounting approaches can result in consuming large fractions of IT resources. As AT&T financial manager John McAdam noted:

“Utilizing an excessive amount of resources to capture data takes away resources that could be used more productively to meet other customer needs. Internal processing for IT is typically 30-40% of the workload. Excessive data capturing only adds to this overhead cost.” [182]

There is also the problem that unit costing of this nature creates false expectations. Establishing an internal service pricing scheme implies that if the utilization of the service declines, so should the related billings. But if

  1. the hardware, software, and staff costs are already sunk, or relatively inflexible, and

  2. the IT organization is seeking to recover costs fully

the per-transaction cost will simply have to increase if the number of transactions goes down. James R. Huntzinger discusses the problem of excess capacity distorting unit costs, and states “it is an absolutely necessary element of accurate representation of the facts of production that some provisions be made for keeping the cost of wasted time and resources separate from normal costs” [131]. Approaches for doing this will be discussed below.
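The following minimal sketch illustrates the problem, and the kind of separation Huntzinger recommends; the fixed cost and planned capacity are assumptions chosen to reproduce the $0.25 unit cost example above.

# Hypothetical illustration of fully absorbed unit cost versus a unit cost that
# keeps excess (unused) capacity as a separate line item. The fixed cost and
# planned capacity are assumptions chosen to reproduce the $0.25 rate above.

FIXED_COSTS = 250_000.0       # sunk hardware, software, and staff cost per period
PLANNED_CAPACITY = 1_000_000  # transactions the platform was sized for

def fully_absorbed_unit_cost(actual_transactions: int) -> float:
    """Recover all fixed costs from whatever volume actually occurs."""
    return FIXED_COSTS / actual_transactions

def normalized_unit_cost(actual_transactions: int) -> tuple[float, float]:
    """Charge a stable rate based on planned capacity, and report the cost
    of unused capacity separately instead of burying it in the rate."""
    unit_cost = FIXED_COSTS / PLANNED_CAPACITY
    excess_capacity_cost = unit_cost * (PLANNED_CAPACITY - actual_transactions)
    return unit_cost, excess_capacity_cost

print(fully_absorbed_unit_cost(1_000_000))  # 0.25 per transaction at planned volume
print(fully_absorbed_unit_cost(500_000))    # 0.5, the rate doubles as volume halves
print(normalized_unit_cost(500_000))        # (0.25, 125000.0), rate stable, waste visible

Keeping the cost of unused capacity as its own line item leaves the unit rate stable and makes the waste visible, rather than passing it on through ever-rising charges.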

8.2.2. Next generation IT finance

What accounting should do is produce an unadulterated mirror of the business — an uncompromisable truth on which everyone can rely. …​ Only an informed team, after all, is truly capable of making intelligent decisions.
— Orest Fiume and Jean Cunningham
as quoted by James Huntzinger

Criticisms of traditional approaches to IT finance have increased as digital transformation accelerates and companies turn to Agile operating models. Rami Sirkia and Maarit Laanti (in a paper used as the basis for the Scaled Agile Framework's financial module) describe the following shortcomings:

  • Long planning horizons and detailed cost estimates that must frequently be updated

  • Emphasis on planning accuracy and variance analysis

  • Context-free concern over budget overruns (even if a product is succeeding in the market, variances are viewed unfavorably)

  • Bureaucratic re-approval processes for project changes

  • Inflexible and slow resource re-allocation [252]

What do critics of cost accounting, allocated chargebacks, and large-batch project funding suggest as alternatives to the historical approaches? There are some limitations evident in many discussions of Lean Accounting, notably an emphasis on manufactured goods. However, a variety of themes and approaches relevant to IT services have emerged, which we will discuss below:

  • Beyond Budgeting

  • Internal “venture” funding

  • Value stream orientation

  • Lean Accounting

  • Lean Product Development

  • Internal market economics

  • Service brokerage

Beyond budgeting
Setting a numerical target and controlling performance against it is the foundation stone of the budget contract in today’s organization. But, as a concept, it is fundamentally flawed. It is like telling a motor racing driver to achieve an exact time for each lap . . . it cannot predict and control extraneous factors, any one of which could render the target totally meaningless. Nor does it help to build the capability to respond quickly to new situations. But, above all, it doesn’t teach people how to win.
— Jeremy Hope and Robin Fraser
Beyond Budgeting Questions and Answers

Beyond Budgeting is the name of a 2003 book by Jeremy Hope and Robin Fraser. It is written in part as an outcome of meetings and discussions between a number of large, mostly European firms dissatisfied with traditional budgeting approaches. Beyond Budgeting’s goals are described as:

releasing capable people from the chains of the top-down performance contract and enabling them to use the knowledge resources of the organization to satisfy customers profitably and consistently beat the competition

In particular, Beyond Budgeting critiques the concept of the “budget contract.” A simple “budget” is merely a “financial view of the future . . . [a] 'most likely outcome' given known information at the time . . . " A “budget contract” by comparison is intended to “delegate the accountability for achieving agreed outcomes to divisional, functional, and departmental managers.” It includes concerns and mechanisms such as

  • Targets

  • Rewards

  • Plans

  • Resources

  • Coordination

  • Reporting

and is intended to "commit a subordinate or team to achieving an agreed outcome.”

Beyond Budgeting identifies various fallacies with this approach, including:

  • Fixed financial targets maximize profit potential

  • Financial incentives build motivation and commitment (see discussion on Motivation)

  • Annual plans direct actions that maximize market opportunities

  • Central resource allocation optimizes efficiency

  • Central coordination brings coherence

  • Financial reports provide relevant information for strategic decision-making

Beyond the poor outcomes that these assumptions generate, some 20% to 30% of senior executives' time is spent on the annual budget contract. Overall, the Beyond Budgeting view is that the budget contract is

a relic from an earlier age. It is expensive, absorbs far too much time, adds little value, and should be replaced by a more appropriate performance management model. [124 p. 4], emphasis added.

Readers of this textbook should at this point notice that many of the Beyond Budgeting concerns reflect an Agile/Lean perspective. The fallacies of efficiency and central coordination have been discussed throughout this book. However, if an organization’s financial authorities remain committed to these as operating principles, the digital transformation will be difficult at best.

Beyond Budgeting proposes a number of principles for understanding and enabling organizational financial performance. These include:

  • Event-driven over annual planning

  • On-demand resources over centrally allocated resources

  • Relative targets (“beating the competition”) over fixed annual targets

  • Team-based rewards over individual rewards

  • Multi-faceted, multi-level, forward-looking analysis over analyzing historical variances

Internal “venture” funding
A handful of companies are even exploring a venture-capital-style budgeting model. Initial funding is provided for minimally viable products (MVPs), which can be released quickly, refined according to customer feedback, and relaunched in the marketplace . . . And subsequent funding is based on how those MVPs perform in the market. Under this model, companies can reduce the risk that a project will fail, since MVPs are continually monitored and development tasks reprioritized . . . [68]
— Comella-Dorda et al
An Operating Model for Company-Wide Agile Development

As we have discussed previously, product and project management are distinct. Product management, in particular, has more concern for overall business outcomes. If we take this to a logical conclusion, the product portfolio becomes a form of the investment portfolio, managed not in terms of schedule and resources, but rather in terms of business results.

This implies the need for some form of internal venture funding model, to cover the initial investment in a minimum viable product. If and when this internal investment bears fruit, it may become the basis for a value stream organization, which can then serve as a vehicle for direct costs and an internal services market (see below). McKinsey reports the following case:

A large European insurance provider restructured its budgeting processes so that each product domain is assigned a share of the annual budget, to be utilized by chief product owners. (Part of the budget is also reserved for requisite maintenance costs). Budget responsibilities have been divided into three categories: a development council consisting of business and IT managers meets monthly to make go/no-go decisions on initiatives. Chief product owners are charged with the tactical allocation of funds—making quick decisions in the case of a new business opportunity, for instance—and they meet continually to rebalance allocations.

Meanwhile, product owners are responsible for ensuring execution of software development tasks within 40-hour work windows and for managing maintenance tasks and backlogs; these, too, are reviewed on a rolling basis. As a result of this shift in approach, the company has increased its budgeting flexibility and significantly improved market response times. [68]

With a rolling backlog and stable funding that decouples annual allocation from ongoing execution, the venture-funded product paradigm is likely to continue growing. A product management mindset activates a variety of useful practices, as we will discuss in the next section.

Options as a portfolio strategy

In governing for effectiveness and innovation, one technique is that of options. Related to the idea of options is parallel development. In investing terms, purchasing an option gives one the right, but not the obligation, to purchase a stock (or other asset) for a given price at a given time. Options are an important tool for investors to develop strategies to compensate for market uncertainty.

What does this have to do with developing digital products?

Product development is so uncertain that sometimes it makes sense to try several approaches at once. This, in fact, was how the program to develop the first atomic bomb was successfully managed. Parallel development is analogous to an options strategy. Small, sustained investments in different development “options” can optimize development payoff in certain situations (see [220], Chapter 4). When taken to a logical conclusion, such an options strategy starts to resemble the portfolio management approaches of venture capitalists. Venture capitalists invest in a broad variety of opportunities, knowing that most, in fact, will not pan out. See discussion of internal venture funding as a business model.
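A minimal sketch of the underlying arithmetic follows; the payoffs, probabilities, and investment sizes are invented, and this is simple expected-value reasoning rather than formal option pricing (note the caution from Ivancsich below).

# Hypothetical illustration of parallel development as an options-like strategy.
# Payoffs, probabilities, and investment sizes are invented for the example;
# this is expected-value arithmetic, not formal option pricing.

P_SUCCESS = 0.3          # each independent approach succeeds with 30% probability
PAYOFF = 1_000_000.0     # value of having at least one approach succeed
BIG_BET = 300_000.0      # cost of committing fully to a single approach
SMALL_BET = 60_000.0     # cost of a small exploratory investment in one approach

def expected_value_single_bet() -> float:
    """Commit everything to one approach up front."""
    return P_SUCCESS * PAYOFF - BIG_BET

def expected_value_parallel(n_options: int) -> float:
    """Fund several small explorations; the payoff arrives if at least one succeeds."""
    p_at_least_one = 1 - (1 - P_SUCCESS) ** n_options
    return p_at_least_one * PAYOFF - n_options * SMALL_BET

print(round(expected_value_single_bet()))   # 0
print(round(expected_value_parallel(3)))    # 477000
print(round(expected_value_parallel(5)))    # 531930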

It is arguable that the venture-funded model has created different attitudes and expectations towards governance in West Coast “unicorn” culture. However, it would be dangerous to assume that this model is universally applicable. A firm is more than a collection of independent sub-organizations; this is an important line of thought in management theory, starting with Coase’s “The Nature of the Firm” [63].

The idea that “Real Options” were a basis for Agile practices was proposed by Chris Matts [181]. Investment banker turned Agile coach Franz Ivancsich strongly cautions against taking options theory too far, noting that to price them correctly you have to determine the complete range of potential values for the underlying asset [145].

Lean product development
Because we never show it on our balance sheet, we do not think of [Design in Process] as an asset to be managed, and we do not manage it.
— Don Reinertsen
Managing the Design Factory

The Lean Product Development thought of Don Reinertsen was discussed extensively in Chapter 5. His emphasis on employing an economic framework for decision-making is relevant to this discussion as well. In particular, his concept of cost of delay is poorly addressed in much IT financial planning, with its concerns for full utilization, efficiency, and variance analysis. Other Lean accounting thinkers share this concern, e.g.:

the cost-management system in a lean environment must be more reflective of the physical operation. It must not be confined to monetary measures but must also include nonfinancial measures, such as quality and throughput times. [131]

Another useful Reinertsen concept is that of design in process or DIP. This is an explicit analog to the well-known Lean concept of work in process (WIP). Reinertsen makes the following points [219 p. 13]:

  • DIP is larger and more expensive to hold than WIP

  • It has much lower turn rates

  • It has much higher holding costs (e.g., due to obsolescence)

  • DIP’s liabilities are ignored due to weaknesses in accounting standards

These concerns give powerful economic incentives for managing throughput and flow, continuously re-prioritizing for the maximum economic benefit and driving towards the Lean ideal of single-piece flow.
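One common way to operationalize this continuous economic re-prioritization, associated with Reinertsen's work, is to sequence work by cost of delay divided by duration (sometimes called CD3, or weighted shortest job first). A minimal sketch, with hypothetical features and figures:

# Minimal sketch: sequencing work by cost of delay divided by duration
# (CD3, or "weighted shortest job first"). Names and figures are hypothetical.

features = [
    # (name, cost of delay per week, estimated duration in weeks)
    ("Checkout redesign",   8_000.0, 4),
    ("Fraud rule update",  15_000.0, 2),
    ("Reporting refresh",   3_000.0, 3),
]

def cd3(feature: tuple[str, float, int]) -> float:
    """Cost of delay divided by duration: higher values should be done first."""
    _, cost_of_delay, duration = feature
    return cost_of_delay / duration

for name, cost_of_delay, duration in sorted(features, key=cd3, reverse=True):
    print(f"{name}: CD3 = {cost_of_delay / duration:,.0f} per week")
# Fraud rule update: CD3 = 7,500 per week
# Checkout redesign: CD3 = 2,000 per week
# Reporting refresh: CD3 = 1,000 per week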

Lean accounting
It was not enough to chase out the cost accountants from the plants; the problem was to chase cost accounting from my people’s minds.
— Taiichi Ohno

There are several often-cited motivations for cost accounting [131 p. 13]:

  • Inventory valuation (not applicable for intangible services)

  • Informing pricing strategy

  • Management of production costs

IT service costing has long presented additional challenges to traditional cost accounting. As IT Financial Management Association president Terry Quinlan notes, “Many factors have contributed to the difficulty of planning EDP expenditures at both application and overall levels. A major factor is the difficulty of measuring fixed and variable cost.” [216 p. 6].

This raises the broader question: should traditional cost accounting be the primary accounting technique used at all? Cost accounting in Lean and Agile organizations is often controversial. Lean pioneer Taiichi Ohno of Toyota thought it was a flawed approach. Huntzinger [131] identifies a variety of issues:

  • Complexity

  • Un-maintainability

  • Supplies information “too late” (i.e., does not support fast feedback)

  • “Overhead” allocations result in distortions

Shingo Prize winner Steve Bell observes:

It is usually impossible to tie . . . abstract cost allocations and the resulting variances back to the originating activities and the value they may or may not produce; thus, they can’t help you improve. But they can waste your time and distract you from the activities that produce the desired outcomes . . . [20 p. 110].

The trend in Lean accounting has been to simplify. A guiding ideal is seen in the Wikipedia article on Lean Accounting:

The “ideal” for a manufacturing company is to have only two types of transactions within the production processes; the receipt of raw materials and the shipment of finished product.

Concepts such as value stream orientation, internal market economics, and service brokering all can contribute towards achieving this ideal.

Value stream orientation
Collecting costs into traditional financial accounting categories, like labor, material, overhead, selling, distribution, and administrative, will conceal the underlying cost structure of products . . . The alternative to traditional methods . . . is the creation of an environment that moves indirect costs and allocation into direct costs.[131]
— James R. Huntzinger
Lean Cost Management

As discussed above, Lean thinking discourages the use of any concept of overhead, sometimes disparaged as “peanut butter” costing. Rather, direct assignment of all costs to the maximum degree is encouraged, in the interest of accounting simplicity.

We discussed a venture-funded product model above, as an alternative to project-centric approaches. Once a product has proven its viability and becomes an operational concern, it becomes the primary vehicle for those concerns previously handled through cost accounting and chargeback. The term for a product that has reached this stage is “value stream.” As Huntzinger notes, “Lean principles demand that companies align their operations around products and not processes.” [131 p. 19].

By combining a value stream orientation with organizational principles such as frugality, internal market economics, and decentralized decision-making (see e.g., [124 p. 12]), both Lean and Beyond Budgeting argue that more customer-satisfying and profitable results will ensue. The fact that the product, in this case, is digital (not manufactured), and the value stream centers around product development (not production) does not change the argument.

Internal market economics
value stream and product line managers, like so much in the lean world, are “fractal.”
— Womack and Jones
Lean Thinking
Coordinate cross-company interactions through “market-like” forces.
— Jeremy Hope and Robin Fraser
Beyond Budgeting Questions and Answers

IT has long been viewed as a “business within a business.” In the internal market model, services consume other services ad infinitum [185]. Sometimes the relationship is hierarchical (an application team consuming infrastructure services) and sometimes it is peer to peer (an application team consuming another’s services, or a network support team consuming email services, which in turn require network services).

The increasing range of sourcing options, including various cloud offerings, makes it more and more important that internal digital services be comparable to external markets. This, in turn, puts constraints on traditional IT cost recovery approaches, which often result in charges with no seeming relationship to reasonable market prices.

There are several reasons for this. One commonly cited reason is that internal IT costs include support services, and therefore cannot fairly be compared to simple retail prices (e.g., for a computer as a good).

Another, more insidious reason is the rolling in of unrelated IT overhead to product prices. We have quoted James Huntzinger’s work above in various places on this topic. Dean Meyer has elaborated this topic in greater depth from an IT financial management perspective, calling for some organizational “goods” to be funded as either Ventures (similar to the discussion above) or “subsidies” (for enterprise-wide benefits such as technical standardization) [185 p. 92].

As discussed above, a particularly challenging form of IT overhead is excess capacity. The saying “the first person on the bus has to buy the bus” is often used in IT shared services, but is problematic. A new, venture-funded startup cannot operate this way — expecting the first few customers to fund the investment fully! Nor can this work in an internal market, unless heavy-handed political pressure is brought to bear. This is where internal venture funding is required.

Meyer presents a sophisticated framework for understanding and managing an internal market of digital services. This is not a simple undertaking; for example, correctly setting service prices can be surprisingly complex.

Service brokerage

Finally, there is the idea that digital or IT services should be aggregated and “brokered” by some party (perhaps the descendant of a traditional IT organization). In particular, this helps with capacity management, which can be a troublesome source of internal pricing distortions. This has been seen not only in IT services, but in Lean attention to manufacturing; when unused capacity is figured into product cost accounting, distortions occur [131], Chapter 7, “Church and Excess Capacity.”

Applying Meyer’s principles, excess capacity would be identified as a Subsidy or a Venture as a distinct line item.

But cloud services can assist even further. Excess capacity often results from the available quantities in the market — e.g., one purchases hardware in large-grained capital units. But when more flexibly priced, expensed, on-demand compute services are available, it is feasible to allocate and de-allocate capacity on demand, eliminating the need to account for excess capacity.
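A minimal sketch of the underlying arithmetic, with invented prices and demand figures: capacity bought in large fixed units must be sized for the peak and paid for every period, while on-demand capacity is paid for only as it is consumed.

# Hypothetical comparison of fixed capacity bought in large units versus
# on-demand capacity. Prices, demand figures, and unit sizes are invented.

monthly_demand = [40, 55, 70, 120, 65, 50]   # compute units needed each month

FIXED_UNIT_SIZE = 50             # owned capacity is only purchasable in blocks of 50 units
FIXED_COST_PER_UNIT = 80.0       # monthly cost per unit of owned capacity, sunk whether used or not
ON_DEMAND_COST_PER_UNIT = 100.0  # monthly cost per unit actually consumed

def fixed_capacity_cost(demand: list[int]) -> float:
    """Provision for the peak, rounded up to whole blocks; pay every month."""
    blocks = -(-max(demand) // FIXED_UNIT_SIZE)   # ceiling division
    return blocks * FIXED_UNIT_SIZE * FIXED_COST_PER_UNIT * len(demand)

def on_demand_cost(demand: list[int]) -> float:
    """Pay only for what is consumed; no excess capacity to account for."""
    return sum(demand) * ON_DEMAND_COST_PER_UNIT

print(fixed_capacity_cost(monthly_demand))  # 72000.0 (150 units held for 6 months)
print(on_demand_cost(monthly_demand))       # 40000.0 (400 units actually consumed)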

8.3. IT sourcing and contract management

IT sourcing is the set of concerns related to identifying suppliers (sources) of the necessary inputs to deliver digital value. Contract management is a critical, subsidiary concern, where digital value intersects with law.

The basic classes of inputs include:

  • People (with certain skills and knowledge)

  • Hardware

  • Software

  • Services (which themselves are composed of people, hardware, and/or software)

Practically speaking, these inputs are handled by two different functions:

  • People (in particular, full time employees) are handled by a Human Resources function.

  • Hardware, software, and services are handled by a Procurement function. Other terms associated with this are Vendor Management, Contract Management, and Supplier Management. We will not attempt to clarify or rationalize these areas in this section.

We discussed hiring and managing digital specialists in the previous chapter.

8.3.1. Basic concerns

A small company may establish binding agreements with vendors relatively casually. For example, when the founder first chose a cloud platform on which to build their product, they clicked on a button that said “I accept” at the bottom of a lengthy legal document they didn’t necessarily read closely. This “clickwrap” agreement (see Clickwrap example) is a legally binding contract, which means that the terms and conditions it specifies are enforceable by the courts.

Figure 143. Clickwrap example

A startup may be inattentive to the full implications of its contracts for various reasons:

  • The founder does not understand the importance and consequences of legally binding agreements

  • The founder understands but feels they have little to lose (for example, they have incorporated as a limited liability company, meaning the founder’s personal assets are not at risk)

  • The service is perceived to be so broadly used that an agreement with it must be safe (if 50 other startups are using a well known cloud provider and prospering, why would a startup founder spend precious time and money on overly detailed scrutiny of its agreements?)

All of these assumptions bear some risk (and many startups have been burned on such matters), but there are many other, perhaps more pressing risks for the founder and startup.

However, by the time the company has scaled to the team of teams level, contract management is almost certainly a concern of the Chief Financial Officer. The company has more resources (“deeper pockets”), and so lawsuits start to become a concern. The complexity of the company’s services may require more specialized terms and conditions. Standard “boilerplate” contracts thus are replaced by individualized agreements. Concerns over intellectual property, the ability to terminate agreements, liability for damages, and other topics require increased negotiation and counter-proposal of contractual language. See the case study on the 9-figure true-up for a grim scenario.

At this point, the company may have hired its own legal professional; certainly, legal fees are going up, whether as services from external law firms or internal staff.

Contract and vendor management is more than just establishing the initial agreement. The ongoing relationship must be monitored for its consistency with the contract. If the vendor agrees that its service will be available 99.999% of the time, the availability of that service should be measured, and if it falls below that threshold, contractual penalties may need to be applied.

In larger organizations, where a vendor might hold multiple contracts, a concept of “vendor management” emerges. Vendors may be provided “scorecards” that aggregate metrics which describe their performance and the overall impression of the customer. Perhaps key stakeholders are internally surveyed as to their impression of the vendor’s performance and whether they would be likely to recommend them again. Low scores may penalize a vendor’s chances in future RFI/RFP processes; high scores may enhance them. Finally, advising on sourcing is one of the main services an Enterprise Architecture group may provide.

Case study: Choosing a telecommunications provider

When Company X was a startup, its telecommunications needs were limited, as were its options. The founder had one choice for Internet access, the local cable company. Even when the company moved to a larger space, as a single team startup, its options were limited.

However, it is now a company of 50, and moving yet again to a new headquarters where there are a variety of options for network carriers. The company is known to be growing, and three telecommunications companies (“carriers”) have been sending sales representatives periodically to inquire if their services might be needed.

With the move to a new facility, some systematic effort must be undertaken to choose an appropriate provider. This becomes a sub-project in its own right and is part of the larger program required to complete the move effectively.

As part of this project, a formal “Request for Information” (RFI) is sent to all the potential carriers. Part of this RFI consists of a lengthy series of questions, such as:

  • What kinds of circuits are available?

  • What is their territory?

  • How much data can they handle?

  • What do they cost (at a high level)?

  • How are they secured?

  • How stable are they (how often are they “down”)?

  • Are co-location services available (can the carrier host the company’s servers in its data centers?)

  • What other services does the carrier provide?

The responses to these questions are “scored” (assigned a numeric weighting), and the two top-scoring vendors are issued a Request for Quote (RFQ). The RFQ goes into much more detail in terms of the contract the carrier is willing to offer. After extensive discussions and negotiations, Company X’s contract team awards the business to the carrier they believe will provide the greatest value.
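A minimal sketch of such a weighted scoring step follows; the criteria, weights, and ratings are hypothetical.

# Hypothetical RFI scoring sketch: weighted criteria, top scorers advance to the RFQ.
# Criteria, weights, and ratings are invented for illustration.

weights = {"cost": 0.30, "capacity": 0.25, "stability": 0.25, "security": 0.20}

# Each carrier's RFI answers, already rated 1-5 by the evaluation team.
responses = {
    "Carrier A": {"cost": 4, "capacity": 3, "stability": 5, "security": 4},
    "Carrier B": {"cost": 5, "capacity": 4, "stability": 3, "security": 3},
    "Carrier C": {"cost": 2, "capacity": 5, "stability": 3, "security": 5},
}

def weighted_score(ratings: dict[str, int]) -> float:
    """Sum of each rating multiplied by its criterion weight."""
    return sum(weights[criterion] * rating for criterion, rating in ratings.items())

ranked = sorted(responses, key=lambda carrier: weighted_score(responses[carrier]), reverse=True)
for carrier in ranked:
    print(carrier, round(weighted_score(responses[carrier]), 2))
# Carrier A 4.0
# Carrier B 3.85
# Carrier C 3.6

shortlist = ranked[:2]   # the two top-scoring vendors receive the RFQ
print("Issue RFQ to:", shortlist)
# Issue RFQ to: ['Carrier A', 'Carrier B']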

The same approach is used to establish relationships with cloud vendors, software providers, and consultants. Because the approach is so consistent, it is considered a repeatable “process.” See the chapter on process management.

8.3.2. Outsourcing and cloudsourcing

… cloud computing has some fundamental characteristics that distinguish it from traditional outsourcing …​ many cloud services are merely passive providers of computing resources, utilized by users to perform their own processing.
— Millard et al.
Cloud Computing Law

The first significant vendor relationship the startup may engage with is for cloud services. The decision whether, and how much, to leverage cloud computing remains (as of 2016) a topic of much industry discussion. JP Morgenthal reports on a 2016 discussion with industry analysts that identified the following pros and cons [190]:

Table 16. Cloud sourcing pros and cons
Pro (arguments for cloudsourcing):

  • Operational costs

  • Poor availability of skilled workforce for implementing internal cloud

  • Better capital management (i.e., through expensed cloud services)

  • Difficulty in providing elastic scalability

  • Agility (faster provisioning of commercial cloud instances)

  • Building and operating data centers is expensive

  • Limiting innovation (e.g., web-scale applications may require current cloud infrastructure)

  • Private clouds are a poor imitation of the public cloud experience

  • Poor capacity management / resource utilization

Con (arguments against cloudsourcing):

  • Data gravity (scale of data too voluminous to easily migrate the apps and data to the cloud)

  • Security (perception that cloud is not as secure)

  • Emerging scalable solutions for private clouds

  • Lack of equivalent SaaS offerings for applications being run in-house

  • Significant integration requirements between in-house apps and new ones deployed to cloud

  • Lack of ability to support migration to cloud

  • Vendor licensing (see the 9-figure true-up)

  • Network latency (slow response of cloud-deployed apps)

  • Poor transparency of cloud infrastructure

  • Risk of cloud platform lock-in

Cloud can reduce costs when deployed in a sophisticated way; if a company has purchased more capacity than it needs, cloud can be more economical (review the section on virtualization economics). However, ultimately as Abbott and Fisher point out,

Large, rapidly growing companies can usually realize greater margins by using wholly owned equipment than they can by operating in the cloud. This difference arises because IaaS operators, while being able to purchase and manage their equipment cost-effectively, are still looking to make a profit on their services [2 p. 474].

Minimally, cloud services need to be controlled for consumption. Cloud providers will happily allow virtual machines to run indefinitely, and charge for them. An ecosystem of cloud brokers and value-add resellers is emerging to assist organizations with optimizing their cloud dollars.

8.3.3. Software licensing

Case study: The 9-figure “true-up”

A large enterprise had a long relationship with a major software vendor, who provided a critical software product used widely for many purposes across the enterprise.

The price for this product was set based on the power of the computer running it. A license would cost less for a computer with 4 cores and 1 gigabyte of RAM than it would for a computer with 16 cores and 8 gigabytes of RAM. The largest computers required the most expensive licenses.

As described previously, the goal of virtualization is to use one powerful physical computer to consolidate more lightly-loaded computers as “virtual machines.” This can provide significant savings.

Over the course of 3 years, the enterprise described here virtualized about 5,000 formerly physical computers, each of which had been running the vendor’s software.

However, a deadly wrinkle emerged in the software vendor’s licensing terms. The formerly physical computers were, in general, smaller machines. The new virtual farms were clusters of 16 of the most powerful computers available on the market. The vendor held that EACH of the 5,000 instances of its software running on the virtual machines was liable for the FULL licensing fee applicable to the most powerful machine!

Even though each of the 5,000 virtual machines could not possibly have been using the full capacity of the virtual farm, the vendor insisted (and was upheld) that the contract did not account for that, and there was no way of knowing whether any given VM had been using the full capacity of the farm at some point.

The dispute escalated to the CEOs of each company, but the vendor held firm. The enterprise was obliged to pay a “true-up” charge of over $100 million (9 figures).
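The scale of such a charge is easy to reconstruct in outline. The following sketch uses purely hypothetical license fees (the case does not disclose actual prices); only the count of 5,000 virtualized instances comes from the account above.

# Purely hypothetical arithmetic showing how a per-machine-size licensing clause
# can explode under virtualization. All prices are invented; only the 5,000
# virtualized instances come from the case described above.

instances = 5_000

license_small_physical = 5_000.0    # assumed annual fee for a small physical server
license_largest_machine = 30_000.0  # assumed fee tier for the most powerful machine

cost_expected = instances * license_small_physical   # what the enterprise had budgeted for
cost_assessed = instances * license_largest_machine  # each VM billed at the top tier

print(f"Expected:          ${cost_expected:,.0f}")                   # Expected:          $25,000,000
print(f"Assessed:          ${cost_assessed:,.0f}")                   # Assessed:          $150,000,000
print(f"True-up exposure:  ${cost_assessed - cost_expected:,.0f}")   # True-up exposure:  $125,000,000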

This is not an isolated instance. Major software vendors have earned billions in such charges and continue to audit aggressively for these kinds of scenarios. This is why contracts and licenses should never be taken lightly. Even startups could be vulnerable if licensed commercial software is used in unauthorized ways, for example in a cloud environment.

As software and digital services are increasingly used by firms large and small, the contractual rights of usage become more and more critical. We mentioned a “clickwrap” licensing agreement above. Software licensing, in general, is a large and detailed topic, one presenting a substantial financial risk to the unwary firm, especially when cloud and virtualization are concerned.

Software licensing is a subset of software asset management, which is itself a subset of IT asset management, discussed in more depth in the material on process management and IT lifecycles. Software asset management in many cases relies on the integrity of a digital organization’s package management; the package manager should represent a definitive inventory of all the software in use in the organization.

Free and open-source software (sometimes abbreviated FOSS) has become an increasingly prevalent and critical component of digital products. While technically “free,” the licensing of such software can present risks. In some cases, open-source products have unclear licensing that puts them at risk of legal conflicts which may impact users of the technology [174]. In other cases, the open-source license may discourage commercial or proprietary use; for example, the GNU General Public License (GPL) requirement for disclosing derivative works causes concern among enterprises [288].

8.3.4. The role of industry analysts

When a company is faced by a sourcing question of any kind, one initial reaction is to research the market alternatives. But research is time consuming, and markets are large and complex. Therefore, the role of industry or market analyst has developed.

In the financial media, one often hears from “industry analysts” who are concerned with whether a company is a good investment in the stock market. While there is some overlap, the industry analysts we are concerned with here are more focused on advising prospective customers of a given market’s offerings.

Because sourcing and contracting are infrequent activities, especially for smaller companies, it is valuable to have such services. Because they are researching a market and talking to vendors and customers on a regular basis, analysts can be helpful to companies in the purchasing process.

However, analysts take money from the vendors they are covering as well, leading to occasional charges of conflict of interest [cite]. How does this work? There are a couple of ways.

First, the analyst firm engages in objective research of a given market segment. They do this by developing a common set of criteria for who gets included, and a detailed set of questions to assess their capabilities.

For example, an analyst firm might define a market segment of “Cloud Infrastructure as a Service” vendors. Only vendors supporting the basic NIST guidelines for Infrastructure as a Service are invited. Then, the analyst might ask a question such as “Do you support Software-Defined Networking (e.g., Network Function Virtualization)?” Companies that answer “yes” will be given a higher score than companies that answer “no.” The number of questions on a major research report might reach 300 or more.

Once the report is completed, and the vendors are ranked (analyst firms typically use a two-dimensional ranking, such as the Gartner Magic Quadrant or Forrester Wave), it is made available to end users for a fee. Fees for such research might range from $500 to $5000 or more, depending on how specialized the topic, how difficult the research, and the ability of prospective customers to pay.

Note
Large companies, e.g., those in the Fortune 500, typically would purchase an “enterprise agreement,” often defined as a named “seat” for an individual, who can then access entire categories of research.

Customers may have further questions for the analyst who wrote the research. They may be entitled to some portion of analyst time as part of their license, or they may pay extra for this privilege.

Beyond selling the research to potential customers of a market, the analyst firm has a complex relationship with the vendors they are covering. In our example of a major market research report, the analyst firm’s sales team also reaches out to the vendors who were covered. The conversation goes something like this:

“Greetings. You did very well in our recent research report. Would you like to be able to give it away to prospective customers, with your success highlighted? If so, you can sponsor the report for $50,000.”

Because the analyst report is seen as having some independence, it can be an attractive marketing tool for the vendor, who will often pay (after some negotiating) for the sponsorship. In fact, vendors have so many opportunities along these lines they often find it necessary to establish a function known as “Analyst Relations” to manage all of them.

8.3.5. Software development and contracts

Customer collaboration over contract negotiation.
— Agile Manifesto
For both suppliers and buyers of Information Technology (IT) projects, one issue repeatedly arises: how to get out of the trap of fixed pricing without the disadvantages of time and materials contracts.
— Andreas Opelt et al.
Agile Contracts: Creating and Managing Successful Projects with Scrum
What do lawyers assume is the nature of software projects? First, it is common that they view it as similar to a construction project—relatively predictable—rather than the highly uncertain and variable research and development that it usually is. Second, that in the project (1) there is a long delay before something can be delivered that is well done, with (2) late and weak feedback, (3) long payment cycles, and (4) great problems for the customer if the project is stopped at any arbitrary point in time. These assumptions are invalidated in agile development.
— Arbogast et al.
Agile Contracts Primer

Software is often developed and delivered per contractual terms. Contracts are legally binding agreements, typically developed with the assistance of lawyers. As noted in [16 p. 5], “Legal professionals are trained to act, under legal duty, to advance their client’s interests and protect them against all pitfalls, seen or unseen.” The idea of “customer collaboration over contract negotiation” may strike them as the height of naïveté.

However, Agile and Lean influences have made substantial inroads in contracting approaches.

Arbogast et al. describe the general areas of contract concern:

  • Risk, exposure, and liability

  • Flexibility

  • Clarity of obligations, expectations, and deliverables

They argue that “An agile-project contract may articulate the same limitations of liability (and related terms) as a traditional-project contract, but the agile contract will better support avoiding the very problems that a lawyer is worried about.” (p. 12)

So, what is an "agile" contract?

There are two major classes of contracts:

  • Time and materials

  • Fixed price

In a time and materials contract, the contracting firm simply pays the provider until the work is done. This means that the risk of the project overrunning its schedule or budget resides primarily with the firm hiring out the work. While this can work, there is often a desire on the part of the firm to reduce this risk. If you are hiring someone because they claim they are experts and can do the work better, cheaper, and/or quicker than your own staff, it seems reasonable that they should be willing to shoulder some of the risks.

In a fixed price contract, the vendor providing the service will make a binding commitment that (for example) “we will get the software completely written in 9 months for $3 million.” Penalties may be enforced if the software is late, and it’s up to the vendor to control development costs. If the vendor does not understand the work correctly, they may lose money on the deal.

Reconciling Agile with fixed-price contracting approaches has been a challenging topic [202]. The desire for control over a contractual relationship is historically one of the major drivers of waterfall approaches. However, since requirements cannot be fully known in advance, this is problematic.

When a contract is signed based on waterfall assumptions, the project management process of change control is typically used to govern any alterations to the scope of the effort. Each change order typically implies some increase in cost to the customer. Because of this, the perceived risk mitigation of a fixed price contract may become a false premise.

This problem has been understood for some time. Scott Ambler argued in 2005 that “It’s time to abandon the idea that fixed bids reduce risk. Clients have far more control over a project with a variable, gated approach to funding in which working software is delivered on a regular basis” [12]. Andreas Opelt states, “For agile IT projects it is, therefore, necessary to find an agreement that supports the balance between a fixed budget (maximum price range) and agile development (scope not yet defined in detail) …”

How is this done? Opelt and his co-authors further argue that the essential question revolves around the project “iron triangle":

  • Scope

  • Cost

  • Deadline

The approach they recommend is determining which of these elements is the “fixed point” and which is estimated. In traditional waterfall projects, the scope is fixed, while costs and deadline must be estimated (a problematic approach when product development is required).

In Opelt’s view, in Agile contracting, costs and deadline are fixed, while the scope is “estimated” — understood to have some inevitable variability. "… you are never exactly aware of what details will be needed at the start of a project. On the other hand, you do not always need everything that had originally been considered to be important” [202].

Their recommended approach supports the following benefits:

  • Simplified adaptation to change

  • Non-punitive changes in scope

  • Reduced knowledge decay (large “batches” of requirements degrade in value over time)

This is achieved through:

  • Defining the contract at the level of product or project vision (epics or high-level stories; see discussion of Scrum) — not detailed specification

  • Developing high-level estimation

  • Establishing agreement for sharing the risk of product development variability

This last point, which Opelt et al. term “riskshare,” is key. If the schedule or cost expands beyond the initial estimate, both the supplier and the customer pay, according to some agreed percentage split, which they recommend be between 30% and 70%. If the supplier’s share is too low, the contract essentially becomes time and materials. If the customer’s share is too low, the contract starts to resemble traditional fixed-price.
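To make the riskshare arithmetic concrete, here is a minimal sketch in Python; the function name, the 50/50 split, and the dollar figures are illustrative assumptions, not values prescribed by Opelt et al.:

```python
def riskshare_settlement(estimated_cost, actual_cost, supplier_share=0.5):
    """Split a cost overrun between supplier and customer.

    supplier_share is the supplier's agreed portion of any overrun
    (hypothetical here); Opelt et al. recommend somewhere between 30% and 70%.
    """
    overrun = max(0.0, actual_cost - estimated_cost)
    supplier_absorbs = overrun * supplier_share
    customer_pays = estimated_cost + overrun * (1 - supplier_share)
    return supplier_absorbs, customer_pays

# Illustrative figures only: a $3M estimate that lands at $3.6M actual cost.
supplier_absorbs, customer_pays = riskshare_settlement(3_000_000, 3_600_000, 0.5)
print(f"Supplier absorbs ${supplier_absorbs:,.0f}; customer pays ${customer_pays:,.0f}")
# Supplier absorbs $300,000; customer pays $3,300,000
```

As the supplier’s share approaches zero, the customer absorbs nearly the whole overrun (effectively time and materials); as it approaches one, the supplier absorbs it (effectively fixed price).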

Incremental checkpoints are also essential; for example, the supplier/customer interactions should be high bandwidth for the first few sprints, while culture and expectations are being established and the project is developing a rhythm.

Finally, the ability for either party to exit gracefully and with minimal penalty is needed. If the initiative is testing market response (à la Lean Startup) and the product hypothesis is falsified, there is little point in continuing the work from the customer’s point of view. And if the product vision turns out to be far more work than either party estimated, the supplier should be able to walk away (or at least insist on comprehensive re-negotiation).

These ideas are a departure from traditional contract management. As Opelt asks, “How can you sign a contract from which one party can exit at any time?” Recall however that (if Agile principles are applied) the customer is receiving working software continuously through the engagement (e.g., after every sprint).

In conclusion, as Arbogast et al. argue, “Contracts that promote or mandate sequential lifecycle development increase project risk … an agile approach … reduces risk because it limits both the scope of the deliverable and extent of the payment [and] allows for inevitable change” [16 p. 13].

8.4. Structuring the investment

Directors should monitor the progress of approved IT proposals to ensure that they are achieving objectives in required timeframes using allocated resources.
— ISO/IEC 38500:2008

Now that we understand the coordination problem better, and have discussed finance and sourcing, we are prepared to make longer term commitments to a more complicated organizational structure. As we stated in the chapter introduction, one way of looking at these longer term commitments is as investments. We start them, we decide to continue them, or we decide to halt (exit) them. In fact, we could use the term “portfolio” to describe these various investments; this is not a new concept in IT management.

Note
The first comparison of IT investments to a portfolio was in 1974, by Richard Nolan in Managing the Data Resource Function [198].

Whatever the context for your digital products (external or internal), they are intended to provide value to your organization and ultimately your end customer. Each of them is a “bet” on how to realize this value (review the Spotify DIBB model) and represents a form of product discovery. As you deepen your abilities to understand investments, you may find yourself applying business case analysis techniques more rigorously, but as always, retaining a Lean Startup experimental mindset is advisable.

As you strengthen a hypothesis in a given product or feature structure, you increasingly formalize it: a clear product vision supported by dedicated resources. We’ll discuss the IT portfolio concept further in Chapter 12. In your earliest stages of differentiating your portfolio, you may first think about features versus components.

8.4.1. Features versus components

feature component matrix
Figure 144. Features versus components

As you consider your options for partitioning your product, in terms of the AKF scaling cube, a useful and widely-adopted distinction is that between “features” and “components” (see Features versus components).

Features are what your product does. They are what the customers perceive as valuable: “scope as viewed by the customer,” in Mark Kennaley’s terms [151 p. 169]. They may be “flowers,” defined by the value they provide externally and encouraged to evolve with some freedom. You may be investing in new features using Lean Startup, the Spotify DIBB model, or some other hypothesis-driven approach.

Components are how your product is built, such as database and Web components. In other words, they are a form of infrastructure (but infrastructure you may need to build yourself, rather than just spin up in the cloud). They are more likely to be “cogs” — more constrained and engineered to specifications. Mike Cohn defines a component team as “a team that develops software to be delivered to another team on the project rather than directly to users” [67 p. 183].

Feature teams are dedicated to a clearly defined functional scope (such as “item search” or “customer account lookup”), while component teams are defined by their technology platform (such as “database” or “rich client”). Component teams may become shared services, which need to be carefully understood and managed (more on this to come). A component’s failure may affect multiple feature teams, which makes components inherently riskier.

It may be easy to say that features are more important than components, but this can be carried too far. Do you want each feature team choosing its own database product? This might not be the best idea; you’ll have to hire specialists for each database product chosen. Allowing feature teams to define their own technical direction can result in brittle, fragmented architectures, technical debt, and rework. Software product management needs to be a careful balance between these two perspectives. The Scaled Agile Framework suggests that components are relatively

  • more technically focused

  • more generally re-usable

than features. SAFe also recommends a ratio of roughly 20-25% component teams to 75-80% feature teams [235].

Mike Cohn suggests the following advantages for feature teams [67 pp. 183-184]:

  • They are better able to evaluate the impact of design decisions

  • They reduce hand-off waste (a coordination problem)

  • They present less schedule risk

  • They maintain focus on delivering outcomes

He also suggests [67 pp. 186-187] that component teams are justified when:

  • Their work will be used by multiple teams

  • They reduce the sharing of specialists across teams

  • The risk of multiple approaches outweighs the disadvantages of a component team

Ultimately, the distinction between “feature versus component” is similar to the distinction between “application” and “infrastructure.” Features deliver outcomes to people whose primary interests are not defined by digital or IT. Components deliver outcomes to people whose primary interests are defined by digital or IT.

8.4.2. Epics and new products

In the last chapter, we talked of one product with multiple feature and/or component teams (see One company, one product). Features and components as we are discussing them here are large enough to require separate teams (with new coordination requirements). At an even larger scale, we have new product ideas, perhaps first seen as epics in a product backlog.

one product
Figure 145. One company, one product

Eventually, larger and more ambitious initiatives lead to a key organizational state transition: from one product to multiple products. Consider our hypothetical startup company. At first, everyone on the team is supporting one product and dedicated to its success. There is little sense of contention with “others” in the organization. This changes with the addition of a second product team with different incentives (see One company, multiple products). Concerns for fair allocation and a sense of internal competition naturally arise out of this diversification. Fairness is deeply wired into human (and animal) brains, and the creation of a new product with an associated team provokes new dynamics in the growing company.

multi product
Figure 146. One company, multiple products

Because resources are always limited, it is critical that the demands of each product be managed using objective criteria, requiring formalization. This was a different problem when you were a tight-knit startup; you were constrained, but everyone knew they were “in it together.” Now you need some ground rules to support your increasingly diverse activities. This leads to new concerns:

  • Managing scope and preventing unintended creep or drift from the product’s original charter

  • Managing contention for enterprise or shared resources

  • Execution to timeframes (e.g., the critical trade show)

  • Coordinating dependencies (e.g., achieving larger, cross-product goals)

  • Maintaining good relationships when a team’s success depends on another team’s commitment

  • Accountability for results

Structurally, we might decide to separate a portfolio backlog from the product backlog. What does this mean?

  • The portfolio backlog is the list of potential new products that the organization might invest in

  • Each product team still has its own backlog of stories (or other representations of their work)

The DEEP backlog we discussed in Chapter 5 gets split accordingly (see Portfolio versus product backlog).

deep2portfolios
Figure 147. Portfolio versus product backlog
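As a minimal sketch of this structural split (the class and field names are illustrative, not a prescribed schema; assumes Python 3.9+):

```python
from dataclasses import dataclass, field

@dataclass
class Story:
    title: str
    estimate_points: int

@dataclass
class ProductBacklog:
    """Owned by a single product team: stories (or other representations of its work)."""
    product: str
    stories: list[Story] = field(default_factory=list)

@dataclass
class PortfolioBacklog:
    """Owned at the organizational level: candidate products the organization might invest in."""
    candidate_products: list[str] = field(default_factory=list)

# Illustrative contents only.
portfolio = PortfolioBacklog(candidate_products=["item search service", "loyalty program"])
search_backlog = ProductBacklog(
    product="item search service",
    stories=[Story("search by keyword", 5), Story("filter by category", 3)],
)
```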

The decision to invest in a new product should not be taken lightly. When the decision is made, the actual process is as we covered in Chapter 4: ideally, a closed-loop, iterative process of discovering a product that is valuable, usable, and feasible.

There is one crucial difference: the investment decision is formal and internal. While we started our company with an understanding of our investment context, we looked primarily to market feedback and grew incrementally from a small scale. (Perhaps there was venture funding involved, but this book doesn’t go into that).

Now, we may have a set of competing ideas that we are thinking about placing bets on. In order to make a rational decision, we need to understand the costs and benefits of the proposed initiatives. This is difficult to do precisely, but how can we rationally choose otherwise? We have to make some assumptions and estimate the likely benefits and the effort it might take to realize them.

8.5. Larger-scale planning and estimating

8.5.1. Why plan?

Fundamentally, we plan for two reasons:

  • To decide whether to make an investment

  • To ensure the investment progresses effectively and efficiently

We’ve discussed investment decision making in terms of the overall business context, the product roadmap, the product backlog, and Lean Product Development with its cost of delay. As we think about making larger-scale, multi-team digital investments, all of these practices come together to support our decision making process. Estimating the likely time and cost of one or more larger-scale digital product investments is not rocket science; it rests on the same techniques we have used at the single-team, single-product level.

With increasing scope of work and an increasing time horizon tends to come increasing uncertainty. We know that we will use fast feedback and ongoing hypothesis-driven development to control for this uncertainty. But at some point, we either make a decision to invest in a given feature or product and start the hypothesis-testing cycle, or we don’t.

Once we have made this decision, there are various techniques we can use to prioritize the work so that the most significant risks and hypotheses are addressed soonest. But in any case, when large blocks of funding are at issue, there will be some expectation of monitoring and communication. In order to monitor, we have to have some kind of baseline expectation to monitor against. Longer-horizon artifacts such as the product roadmap and release plan are usually the basis for monitoring and reporting on product or initiative progress.

In planning and execution, we seek to balance the following competing goals:

  • Delivering maximum value (outcomes)

  • Minimizing the waste of un-utilized resources (people, time, equipment, software)

Obviously, we want outcomes (digital value), but we want them within constraints. They have to come within a timeframe that makes economic sense. If we pay forty people to do work that a competitor or supplier can do with three, we have not produced a valuable outcome relative to the market. If we take twelve months to do something that someone else can do in five, again, our value is suspect. If we purchase software or hardware we don’t need (or before we need it) and as a result our initiative’s total costs go up relative to alternatives, we again may not be creating value. Many of the techniques suggested here are familiar from formal project management. Project management offers the most fully developed tools for these concerns, and whether or not you use a formal project structure, you will find yourself facing similar thought processes as you scale.

To meet these value goals, we need to:

  • estimate so that expected benefits can be compared to expected costs, ultimately to inform the investment decision (start, continue, stop)

  • plan so that we understand dependencies (e.g., when one team must complete a task before another team can start theirs)

Note
Projecting expected benefits is challenging. One of the most useful references for such questions is the book How to Measure Anything: Finding the Value of Intangibles in Business by Doug Hubbard [127].

Estimation sometimes causes controversy. When a team is asked for a projected delivery date, the temptation for management is to “hold them accountable” for that date and penalize them for not delivering value by then. But product discovery is inherently uncertain, and therefore such penalties can seem arbitrary. Experiments show that when animals are penalized unpredictably, they fall into a condition known as “learned helplessness,” in which they stop trying to avoid the penalties [285].

We discussed various coordination tools and techniques previously. Developing plans for understanding dependencies is one of the best known such techniques. An example of such a planning dependency would be that the database product should be chosen and configured before any schema development takes place (this might be a component team working with a feature team).

8.5.2. Planning larger efforts

…​ many large projects need to announce and commit to deadlines many months in advance, and many large projects do have interteam dependencies …​
— Mike Cohn
Agile Estimating

Agile and adaptive techniques can be used to plan larger, multi-team programs. Again, we have covered many fundamentals of product vision, estimation, and work management in earlier chapters. Here, we are interested in the concerns that emerge at a larger scale, which we can generally class into:

  • Accountability

  • Coordination

  • Risk management

Accountability

With larger blocks of funding comes higher visibility and inquiries as to progress. At a program level, differentiating between estimates and commitments becomes even more essential.

Coordination

Mike Cohn suggests that larger efforts specifically can benefit from the following coordination techniques [66]:

  • Estimation baseline (velocity)

  • Key details sooner

  • Lookahead planning

  • Feeding buffers

Estimating across multiple teams is difficult without a common scale, and Cohn proposes an approach for establishing such a baseline in terms of team velocity. He also suggests that in larger projects some details will require more advance planning (APIs are an easy example), and that some team members’ time should be devoted to planning for the next release. Finally, where dependencies exist, feeding buffers should be used: if Team A needs something from Team B by May 1, Team B should plan on delivering it by April 15.
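A feeding buffer is simple date arithmetic; here is a minimal sketch (the two-week default buffer and the 2024 dates are illustrative assumptions):

```python
from datetime import date, timedelta

def buffered_delivery_target(needed_by: date, buffer_days: int = 14) -> date:
    """Date the supplying team should target so the consuming team's need date
    is protected by a feeding buffer (buffer size is illustrative)."""
    return needed_by - timedelta(days=buffer_days)

# Cohn-style example: Team A needs the work by May 1. With a two-week feeding
# buffer, Team B targets April 17; the chapter's example rounds to April 15.
print(buffered_delivery_target(date(2024, 5, 1)))  # 2024-04-17
```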

Risk management

Finally, risk and contingency planning is essential. In developing any plan, Abbott and Fisher recommend the “5-95 rule": 5% of the time on building a good plan, and 95% of the time planning for contingencies [2 p. 105]. We’ll discuss risk management in detail in Chapter 10.

8.6. Why project management?

An ongoing work effort is generally a repetitive process that follows an organization’s existing procedures. In contrast, because of the unique nature of projects, there may be uncertainties or differences in the products, services, or results that the project creates.
— Project Management Body of Knowledge
version 5
Projects of all types and sizes are now the way that organizations accomplish their work [emphasis added].
— Stanley Portny
Project Management for Dummies 4th ed.
… the project as a vehicle of IT execution has, by and large, failed to live up to its promise of predictable delivery.
— Sriram Narayan
“Scaling Agile: Problems and Solutions”
Project management responsibilities are no longer exercised by one person. They are split across the members of the Scrum team instead.
— Roman Pichler
“Agile Product Management with Scrum”
Agile is having a profound impact on the project management profession and will cause us to fundamentally rethink many of the well-established notions of what a project manager is …
— Charles G. Cobb
The Project Manager's Guide to Mastering Agile: Principles and Practices for an Adaptive Approach

In our emergence model, we always seek to make clear why we need a new concept or practice. It is not sufficient to say, “we need project management because companies of our size use it.” Many authoritative books on Agile software development assume that some form of project management is going to be used. Other authors question the need for it or at least raise various cautions.

Project management, like many other areas of IT practice, is undergoing a considerable transformation in response to the Agile transition. However, it will likely remain an important tool for value delivery at various scales.

Fundamentally, project management is a means of understanding and building a shared mental model of a given scope of work. In particular, planning the necessary tasks gives a basis for estimating the time and cost of the work as a whole, and therefore understanding its value. Even though industry practices are changing, value remains a critical concern for the digital professional.

As the preceding quotes indicate, there are diverse opinions on the role and importance of traditional project management in the enterprise. Clearly, it is under pressure from the Agile movement. Project management professionals are advised not to deny or diminish this fact. One of the primary criticisms of project management as a paradigm is that it promotes large “batches” of work. It is possible for a modern, IT-centric organization to make considerable progress on the basis of product management plus simple, continuous work management, without the overhead of the formalized project lifecycle suggested by PMBOK.

Cloud computing is having impacts on traditional project management as well. As we will see in the section on the decline of traditional IT, projects were often used to install vendor-delivered commodity software, such as for payroll or employee expense. Increasingly, that kind of functionality is delivered by online service providers, leaving “traditional” internal IT with considerably reduced responsibilities.

Some of the IT capability may remain in the guise of an internal “service broker,” assisting with the sourcing and procurement of online services. The remainder moves into digital product management, as the only remaining need for internal IT is in the area of revenue-generating, market-facing, strategic digital products.

So, this section will examine the following questions:

  • Given the above trends, under what circumstances does formalized project management make economic sense?

  • Assuming that formalized project management is employed, how does one continue to support objectives such as fast feedback and adaptability?

8.6.1. A traditional IT project

So, what does all this have to do with IT? As we have discussed in previous chapters, project management is one of the main tools used to deliver value across specialized skill-based teams, especially in traditional IT organizations.

A “traditional” IT project would usually start with the “sponsorship” of some executive with authority to request funding. For example, suppose that the VP of Logistics under the Chief Operating Officer (COO) believes that a new supply chain system is required. With the sponsorship of the COO, she puts in a request (possibly called a “demand request” although this varies by organization) to implement this system. The assumption is that a commercial software package will be acquired and implemented. The IT department serves as an overall coordinator for this project. In many cases, the “demand request” is registered with the enterprise Project Management Office, which may report to the CIO.

Note
Why might the enterprise Project Management Office report under the CIO? IT projects in many companies represent the single largest type of internally managed capital expenditure. The other major form of project, the building project, is usually outsourced to a general contractor.

The project is initiated by establishing a charter, allocating the funding, assigning a project manager, establishing communication channels to stakeholders, and a variety of other activities. One of the first major activities of the project will be to select the product to be used. The project team (perhaps with support from the architecture group) will help lead the RFI/RFQ processes by which vendors are evaluated and selected.

Note
RFI stands for Request for Information; RFQ stands for Request for Quote.

Once the product is chosen, the project must identify the staff who will work on it, perhaps a combination of full time employees and contractors, and the systems implementation lifecycle can start.

We might call the above the systems implementation lifecycle rather than the software development lifecycle. This is because most of the hard software development was done by the third party that created the supply chain software. There may be some configuration or customization (adding new fields, screens, or reports), but this is lightweight work compared to the software engineering required to create a system of this nature.

The system requires its own hardware (servers, storage, perhaps a dedicated switch) and specifying this in some detail is required for the purchasing process to start. The capital investment may be hundreds of thousands or millions of dollars. This, in turn, requires extensive planning and senior executive approval for the project as a whole.

It would not have been much different for a fully in-house developed application, except that more money would have gone to developers. The slow infrastructure supply chain still drove much of the behavior, and correctly “sizing” this infrastructure was a challenge particularly for in-house developed software. (The vendors of commercial software would usually have a better idea of the infrastructure required for a given load). Hence, there is much attention to up-front planning. Without requirements there is no analysis or design; without design, how do you know how many servers to buy?

Ultimately, the project comes to an end, and the results (if it is a product such as a digital service) are transitioned to a “production” state. Traditional IT implementation lifecycle presents a graphical depiction.

lifecycle
Figure 148. Traditional IT implementation lifecycle

There are a number of problems with this classic model, starting with the lack of responsiveness to consumer needs (see Customer responsiveness in traditional model).

lifecycle2
Figure 149. Customer responsiveness in traditional model

This might be acceptable for a non-competitive function, but if the “digital service consumer” has other options, they may go elsewhere. Even internal users within an enterprise may be engaged in critical competitive activities that are poorly served by such a slow delivery cycle.

The decline of the “traditional” IT project

The above scenario is in decline, and along with it a way of life for many “IT” professionals. One primary reason is cloud, and in particular SaaS. Another reason is the increasing adoption of the Lean/Agile product development approach for digital services. Traditional enterprise IT “space” presents one view of the classic model.

classic
Figure 150. Traditional enterprise IT “space”

Notice the long triangles labeled “Producing focus” and “Consuming focus.” These represent the perspectives of (for example) a software vendor versus their customer. Traditionally, the R&D functions were most mature within the product companies. What was less well understood was that internal IT development was also a form of R&D. Because of the desire for scope management (predictability and control), the IT department performing systems development was often trapped in the worst of both worlds — having neither a good quality product nor high levels of certainty. For many years, this was accepted by the industry as the best that could be expected. However, the combination of Lean/Agile and cloud is changing this situation (see Shrinking space for traditional IT).

There is diminishing reason to run commodity software (e.g., payroll, expenses, HR) in-house. Cloud providers such as Workday, Concur, Salesforce, and others provide ready access to the desired functionality “as a service.” The responsiveness and excellence of such products are increasing, due to the increased tempo of market feedback (note that while a human resource management system may be a commodity for your company, it is strategic for Workday), and concerns over security and data privacy are rapidly fading.

What is left internal to the enterprise, increasingly, are those initiatives deemed “competitive” or “strategic.” Usually, this means that they are going to contribute to a revenue stream. This, in turn, means they are “products” or significant components of them (see Chapter 4, Product Management). A significant market-facing product initiative (one that still calls for some form of project management) might start with the identification of a large, interrelated set of features, perhaps termed an “epic.” Hardware acquisition is a thing of the past, due to either private or public cloud. The team starts by analyzing the overall structure of the epic, decomposing it into stories and features, and organizing them into a logical sequence.

new model
Figure 151. Shrinking space for traditional IT

Because capacity is available on-demand, new systems do not need to be as precisely “sized,” which means that implementation can commence without as much up-front analysis. Simpler architectures suffice until the real load is proven. It might then be a scramble to refactor software to take advantage of new capacity, but the overall economic effect is positive, as over-engineering and over-capacity are increasingly avoided. So, IT moves in two directions — its most forward-looking elements align to the enterprise product management roadmap, while its remaining capabilities may deliver value as a “service broker.” (More on this in the section on IT sourcing.)

Let’s return to the question of project management in this new world.

8.6.2. How is a project different from simple “work management"?

In Chapter 5, we covered a simple concept of “work management” that deliberately did not differentiate product, project, and/or process-based work. As was noted at the time, for smaller organizations, most or all of the organization would be the “project team,” so what would be the point?

A project starts off as a list of tasks that is essentially identical to a product backlog. Even in Kanban, we know who is doing what, so what is the difference? Here are the key points:

  • The project is explicitly time-bound. As a whole, it is lengthier and more flexible than the repetitive, time-boxed sprints of Scrum, but more fixed than the ongoing flow of Kanban.

  • Dependencies. You may have had a concept of one task or story blocking another, and perhaps you used a white board to outline more complex sequences of work, but project management has an explicit concept of dependencies in the tasks and powerful tools to manage them. This is essential in the most ambitious and complex product efforts.

  • Project management also has more robust tools for managing people’s time and effort, especially as they translate to project funding. While estimation and ongoing re-planning of spending can be a contentious aspect of project management, it remains a critical part of management practice in both IT and non-IT domains.

At the end of the day, people expect to be paid for their time, and investors expect to be compensated through the delivery of results. Investment capital only lasts as a function of an organization’s “burn rate”: the rate at which money is consumed for salaries and expenses. Some forecasting of status (whether of a project, organization, product, or program) is, therefore, an essential and unavoidable obligation of management unless funding is unlimited (a rare situation, to say the least).

Project accounting, at scale, is a deep area with considerable research and theory behind it. In particular, the concept of Earned Value Management is widely used to quantify the performance of a project portfolio.
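The core earned value calculations are straightforward arithmetic; the sketch below uses the standard EVM variances and indices, with purely illustrative dollar figures:

```python
def earned_value_metrics(planned_value, earned_value, actual_cost):
    """Standard Earned Value Management variances and indices.

    planned_value : budgeted cost of the work scheduled to date
    earned_value  : budgeted cost of the work actually completed to date
    actual_cost   : what has actually been spent to date
    """
    return {
        "schedule_variance": earned_value - planned_value,  # negative = behind schedule
        "cost_variance": earned_value - actual_cost,        # negative = over budget
        "spi": earned_value / planned_value,                # < 1.0 = behind schedule
        "cpi": earned_value / actual_cost,                  # < 1.0 = over budget
    }

# Illustrative: $500k of work planned by now, $400k worth completed, $450k spent.
print(earned_value_metrics(500_000, 400_000, 450_000))
# {'schedule_variance': -100000, 'cost_variance': -50000, 'spi': 0.8, 'cpi': 0.888...}
```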

8.6.3. The “iron triangle”

Iron triangle
Figure 152. Project “Iron Triangle”

The project management "Iron Triangle" represents the interaction of cost, time, scope, and quality of a project (see Project “Iron Triangle” [2]). The idea is that, in general, one or more of these factors may be a constraint. The “Pick any Two” sign is often seen in service organizations (see Pick any two [3]).

Good-Cheap-Fast
Figure 153. Pick any two

The same applies to project management and reflects the “iron triangle” of trade-offs. However, more recent thinking in the DevOps movement suggests that optimizing for continuous flow and speed tends to have beneficial impacts on quality as well. As digital pipelines increase their automation and speed of delivery, quality also increases because building and testing become more repeatable and predictable. Conversely, the idea that stability increases through injecting delay into the deployment process (e.g., through formal Change Management) is also under question (see [95]).

8.6.4. Project practices

Project management (NOT restricted to IT) is a defined area of study, theory, and professional practice. This section provides a (necessarily brief) overview of these topics.

We will first discuss the Project Management Body of Knowledge (PMBOK), which is the leading industry framework in project management, at least in the United States. (PRINCE2 is another framework, originating in the UK, which will not be covered in this edition.) We will spend some time on the critical issues of scope management, which drive some of the conflicts seen between traditional project management and Agile product management.

PMBOK details are easily obtained on the web and will not be repeated here; summaries of PMBOK and general project management overviews are readily available. It’s clear that the Agile critiques of waterfall project management have been taken seriously by the PMBOK thought leaders. There is now a PMI Agile certification and much explicit recognition of the need for iterative and incremental approaches to project work.

PMBOK remains extensive and complex when considered as a whole. This is necessary, as it is used to manage extraordinarily complex and costly efforts in domains such as construction, military/aerospace, government, and others. Some of these efforts (especially those involving systems engineering, over and above software engineering) do have requirements for extensive planning and control that PMBOK meets well.

However, in Agile domains that seek to be more adaptive to changing business dynamics, full use of the PMBOK framework may be unnecessary and wasteful. The accepted response is to “tailor” the guidance, omitting those plans and deliverables that are not needed.

Important
Part of the problem with extensive frameworks such as PMBOK is that knowing how and when to tailor them is hard-won knowledge that is not part of the usual formalized training. And yet, without some idea of “what matters” in applying the framework, there is great risk of wasted effort. The Agile movement in some ways is a reaction to the waste that can result from overly detailed frameworks.
Scope management

Scope management is a powerful tool and concept, at the heart of the most challenging debates around project management. PMBOK uses the following definitions [214]:

Scope. The sum of the products, services, and results to be provided as a project. See also project scope and product scope.

Scope Change. Any change to the project scope. A scope change almost always requires an adjustment to the project cost or schedule.

Scope Creep. The uncontrolled expansion of product or project scope without adjustments to time, cost, and resources.

Change Control. A process whereby modifications to documents, deliverables, or baselines associated with the project are identified, documented, approved, or rejected.

In the Lean Startup world, products may pivot and pivot again, and their resource requirements may flex rapidly based on market opportunity. Formal project change control processes are in general not used. Even in larger organizations, product teams may be granted certain leeway to adapt their “products, services, and results” and while such adaptations need to be transparent, formal project change control is not the vehicle used.

On the other hand, remember our emergence model. The simple organizational change from one to multiple products may provoke certain concerns and a new kind of contention for resources. People are inherently competitive and also have a sense of fairness. A new product team that seems to be unaccountable for results, consuming “more than its share” of the budget while failing to meet the original vision for their existence, will cause conflict and concern among organizational leadership.

It is in the tension between product autonomy and accountability that we see project management techniques such as the work breakdown structure and project change control employed. The work breakdown structure is defined by the Project Management Body of Knowledge as

… a hierarchical decomposition of the total scope of work to be carried out by the project team to accomplish the project objectives and create the required deliverables. The WBS organizes and defines the total scope of the project, and represents the work specified in the current approved project [214].

Portny [213] recommends: “Subdivide your WBS component into additional deliverables if you think either of the following situations applies: The component will take much longer than two calendar weeks to complete. The component will require much more than 80 person-hours to complete.”

This may seem reasonable, but in iterative product development, it can be difficult to “decompose” a problem in the way project management seems to require, or to estimate it in the way Portny suggests. This can lead to two problems.

First, the WBS may be created at a seemingly appropriate level of detail, but since it is created before key information is generated, it is inevitably wrong and in need of ongoing correction. If the project management approach requires a high-effort “project change management” process, much waste may result as “approvals” are sought for each feedback cycle. This can breed increasing disregard by the development team for the project manager and his or her plan, with corresponding cultural risks of disengagement and lowered trust on all sides.

Second, we may see the creation of project plans that are too high-level, omitting information that is in fact known at the time — for example, external deadlines or resource constraints. This happens because the team develops a cultural attitude that is averse to all planning and estimation.

Project risk management

Project management is where we see the first formalization of risk management (which will be more extensively covered in Chapter 10). Briefly, risk is classically defined as the probability of an adverse event times its cost. Project managers are alert to risks to their timelines, resource estimates, and deliverables.

Risks may be formally identified in project management tooling. They may be accepted, avoided, transferred, or mitigated. Unmanaged risks to a project may result in the project as a whole reporting an unfavorable status.
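The classical quantification is simply probability times cost; here is a minimal sketch of a hypothetical risk register (all entries, probabilities, costs, and responses are invented for illustration):

```python
# A hypothetical risk register; probabilities, costs, and responses are illustrative.
risks = [
    # (description, probability, cost if it occurs, chosen response)
    ("key database specialist leaves",     0.10, 200_000, "mitigate: cross-train a backup"),
    ("vendor API delivered late",          0.30,  80_000, "transfer: contractual penalty"),
    ("cloud region outage during launch",  0.05,  50_000, "accept"),
]

for description, probability, cost, response in risks:
    exposure = probability * cost  # classical risk exposure: probability times cost
    print(f"{description}: expected exposure ${exposure:,.0f} ({response})")
```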

Project assignment

Enterprise IT organizations have evolved to use a mix of project management, processes, and ad hoc work routing to achieve their results. Often, resources (people) are assigned to multiple projects, a practice sometimes called “fractional allocation.”

In fractional allocation, a database administrator will work 25% on one project, 25% on another, and still be expected to work 50% on ongoing production support. This may appear to work mathematically, but practically it is an ineffective practice. Both Gene Kim in The Phoenix Project [153] and Eli Goldratt in Critical Chain [108] present dramatized accounts of the overburden and gridlock that can result from such approaches.

As previously discussed, human beings are notably bad at multi-tasking, and the mental “context-switching” required to move from one task to another is wasteful and ultimately not scalable. A human being fractionally allocated to more and more projects will get less and less done in total, as the transactional friction of task switching increases.
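A back-of-the-envelope sketch shows why fractional allocation does not add up in practice; the linear 10%-per-assignment switching cost below is an illustrative assumption, not a measured figure:

```python
def productive_fraction(num_assignments: int, switch_cost: float = 0.10) -> float:
    """Fraction of a person's time left for real work, assuming each additional
    concurrent assignment loses `switch_cost` of capacity to context switching.
    The 10% default is purely illustrative, not an empirical constant."""
    return max(0.0, 1.0 - switch_cost * (num_assignments - 1))

for n in range(1, 6):
    print(f"{n} concurrent assignments -> {productive_fraction(n):.0%} productive time")
# 1 -> 100%, 2 -> 90%, 3 -> 80%, 4 -> 70%, 5 -> 60%
```

Under even this generous assumption, the 25%/25%/50% allocation described above leaves the database administrator with noticeably less than a full person’s worth of output.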

Governing outsourced work

A third major reason for the continued use of project management and its techniques is governing work that has been outsourced to third parties. This is covered in detail in the section on sourcing.

8.6.5. The future of project management

Recall our three “Ps”:

  • Product

  • Project

  • Process

Taken together, the three represent a coherent set of concerns for value delivery in various forms. But in isolation, any one of them ultimately is limited. This is a particular challenge for project management, whose practitioners may identify deeply with their chosen field of expertise.

Clearly, formalized project management is under pressure. Its methods are perceived by the Agile community as overly heavyweight; its practitioners are criticized for focusing too much on success in terms of cost and schedule performance and not enough on business outcomes. Because projects are by definition temporary, project managers have little incentive to care about technical debt or operational consequences. Hence the rise of the product manager.

However, a product manager who does not understand the fundamentals of project execution will not succeed. As we have seen, modern products, especially in organizations scaling up, have dependencies and coordination needs, and to meet those needs, project management tools will continue to provide value.

Could loose coupling come to the rescue of the project plan? While this book does not go into systems architectural styles in depth, a project plan with a large number of dependencies may be an indication that the system or product being constructed also has significant interdependencies. Recall Amazon’s product strategy, including its API mandate.

Successful systems designers for years have relied on concepts such as encapsulation, abstraction, and loose coupling to minimize the dependencies between components of complex systems so that their design, construction, and operation can be managed with some degree of independence. These ideas are core to the software engineering literature. Recent expressions of these core ideas are Service-Oriented Architecture and microservices.

Systems that do not adopt such approaches are often termed “monolithic” and have a well-deserved reputation for being problematic to build and operate. Many large software failures stem from such approaches. If you have a project plan with excessive dependencies, the question at least should be asked: does my massive, tightly coupled project plan indicate I am building a monolithic, tightly coupled system that will not be flexible or responsive to change?

Again, many digital companies build tremendously robust integrated services from the combination of many quasi-independent, microservice-based “product” teams, each serving a particular function. However, when a particular organizational objective requires changes to more than one such “product,” the need for cross-team coordination emerges. Someone needs to own this larger objective, even if its actual implementation is carried out across multiple distinct teams. We will discuss this further in Chapter 9.

8.7. Topics

8.7.1. Critical chain

In his book Critical Chain, Eli Goldratt develops a sophisticated critique of project estimation and the dysfunctions it promotes.

In a project requiring contributions from multiple skilled resources, a common practice is to ask each person, “how long will this take you?” The project manager then works the resulting estimates into the overall project plan.

The problem with this is that most people estimate their time conservatively; they forecast a longer duration than they actually require. When all these “padded” estimates are added together, the project may be unacceptably long. The agreed work will then tend to expand to fill the time available. Furthermore, most people will wait until the end of their window to perform their task; a person who asks for three weeks to perform one week of work will often not start until week 3, a behavior known as Student Syndrome.

One of the reasons that people estimate conservatively is that project managers tend to be quite concerned if committed tasks are not performed on time. Failure to make the “deliverable” by the committed date may result in negative feedback to the employee’s manager and, subsequently, in poor performance reviews. When coupled with the multi-tasking problems cited above, these factors result in poor project performance, despite the array of modern project management techniques.

Goldratt suggested an alternative approach, in which the idea of “critical path” is enhanced with resource awareness. That is to say, the issue of timing and dependencies (itself a complex problem) is further enriched with the availability of resources to perform the work. (In general, the availability of assigned project resources is assumed, but this is not a wise assumption in project-centric environments.)

Estimation is handled more probabilistically, and the “critical chain” is the combination of the critical path plus the resource assigned to complete the most critical task. The theory is that a person performing such a task must be protected from distraction, and in fact, project managers must expand their tools to forecast effectively and plan the critical chain.
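Goldratt’s full treatment is beyond this chapter, but a minimal sketch of one commonly described buffer-sizing heuristic (root of the sum of squares, with invented numbers) shows how pooling individual padding into a shared buffer shortens the plan:

```python
from math import sqrt

def project_buffer_ssq(safe_estimates, aggressive_estimates):
    """Size a shared project buffer as the square root of the sum of squared
    differences between padded ("safe") and roughly-50%-confidence estimates.
    This is one commonly described critical-chain heuristic, not Goldratt's
    full method."""
    return sqrt(sum((safe - aggressive) ** 2
                    for safe, aggressive in zip(safe_estimates, aggressive_estimates)))

safe       = [15, 10, 20]  # padded, high-confidence task estimates (days)
aggressive = [ 8,  5, 12]  # roughly 50%-confidence estimates (days)

chain = sum(aggressive)                        # 25 days of planned work
buffer = project_buffer_ssq(safe, aggressive)  # about 11.7 days of shared buffer
print(f"Plan: {chain} days of work + {buffer:.1f}-day project buffer "
      f"(vs. {sum(safe)} days if every task keeps its own padding)")
```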

This leads to some complex math, in particular a known problem called the Resource-Constrained Scheduling Problem (see, e.g., http://www.iste.co.uk/data/doc_dtalmanhopmh.pdf). The fact that this problem is so notoriously difficult is indicative of the need for adaptive approaches; ultimately, rigorous analytic methods fail to cope with the complexity of such problems.

Craig Larman, in Scaling Lean and Agile Development, is sympathetic to the overall insights and goals of Critical Chain. However, with respect to the full-blown analytical approach it implies, he states:

“We have seen two very large official “project management TOC” adoption attempts (and heard of one more) in companies developing software-intensive embedded systems … The practice was clearly heavy, not agile, and not lean. In all three cases, the approach was eventually found cumbersome and not very effective, and was dropped.” [168]

8.7.2. The Agile project frameworks

As of this writing, a number of frameworks have been developed at the intersection of Agile and project management. Notable examples include:

Other Agile authors are skeptical of the need for such material [239].

8.8. Conclusion

This chapter, “Investment and Planning,” represents the middle ground between the foundation of the organizational structure and the day-to-day execution of work. Project management will likely remain a significant practice for digital professionals, although there are organizations that achieve significant results without it (i.e., using continuous flow approaches across fixed teams).

Investments in vendor relationships and the overall approach to tracking the financials of digital work also affect both the organization and its ongoing work execution. While Agile and related practices provide new insights and directions, there are fundamental and unchanging challenges to managing these areas.

8.8.1. Discussion questions

  • As a team, compare and discuss the costs of cloud services versus acquiring and running your own servers.

  • What experience do you or your team members have with project management? How effective did you find it?

  • Imagine yourself in an organization that recognized product management and work management, but had no concept of project management.

    • Discuss a scenario where project management would be a reasonable technique to introduce.

    • Discuss a scenario where project management would not make sense.

  • Do microservices/continuous delivery/DevOps render traditional PM obsolete? Discuss.

8.8.2. Research & practice

  • Review the marketing literature of the following companies. What do you understand of their products for IT financial management? Why would you need one?

    • Apptio

    • Nicus

  • Review all the text of a clickwrap license (e.g., when you upgrade a popular piece of software). Do you see anything surprising?

  • Find a free version of a Gartner Magic Quadrant, Forrester Wave, or similar analyst report. Study it. What was the incentive for the product company to make this report available to you? Does that make you suspicious of its conclusions? Why or why not?

  • Compare Microsoft Project with one of the following. What are the pros and cons?

    • Rally

    • Jira

    • VersionOne

    • Asana

    • LeanKit

  • Develop a feature or release plan for your product, using the Abbott and Fisher 5-95 rule.

  • Compare and contrast this Cohn article in favor of project management with the skeptical articles by Narayan, Arnold, and Memon.


1. Image credit https://www.flickr.com/photos/42931449@N07/5299199423 and www.planetofsuccess.com/blog/, downloaded 2016-12-22, commercial use permitted
2. Image credit https://commons.wikimedia.org/w/index.php?curid=4282986, “By I, John Manuel Kennedy T., CC BY-SA 3.0,” downloaded 2016-10-31, fair use
3. Image credit https://www.flickr.com/photos/centralasian/4534292595, downloaded 2016-10-31, commercial use permitted