By Rod Newnham

Having delivered multiple projects and programmes over a 35-year career, I am convinced that many of the challenges we encounter as programme or project managers arise at the very outset of an initiative.

Expectations of date, scope, cost and benefit are set far too early in the life cycle, and failure (measured as the disappointment of key stakeholders) becomes almost inevitable.

The reality is that no two change initiatives are identical, and the outcome of a large-scale organisational or system change cannot be predicted with confidence until some degree of analysis and design has been done. In other words, even if there is an argument for estimating by reference to similar projects delivered historically, there are typically many variables that could cause the profile of the change initiative to be very different from historical experience.

Examples of key differentiators between programmes/projects include:
– Maturity of the client business/function
– Robustness of the technologies being implemented/integrated
– The state of existing reference or transactional data
– The stability of the legacy systems
– Complexity of the enterprise architecture
– Availability of key skills and levels of proficiency
– The strength of the leaders of change
– Degree of support from the CEO, CIO and other executive stakeholders
– The strength of the working relationships between IT and the operational business
– Willingness of both the business and IT to adopt an ‘out of the box’ solution vs insistence on customisation
– Customer target market segment
– Number and geographical distribution of delivery teams
– The business driver for the change, i.e. the desired outcomes

What many people fail to realise is that a significant amount of time needs to be spent before the key metrics (schedule, scope, cost and benefit) can be confirmed with any degree of confidence. Moreover, this work requires specialist, skilled resources, and it requires funding.

The diagram below represents the activities and stages that should be carried out in order to achieve increasing degrees of confidence. What must be recognised is that, while prioritisation and justification will be assessed on the understanding available at each of the decision points, it is only after the achievability stage that any real confidence should be placed in the delivery (capital) and support (operating) costs.

The core products of the achievability stage are:
– Detailed functional requirements
– Detailed non-functional requirements
– Environments strategy
– High-level data, application, integration and infrastructure architecture
– Organisation impact analysis (structure, recruitment and training)
– Draft cutover approach
– Draft service design
– Draft service transition
– Restated business case
– High-level delivery schedule
– Risk and issue logs

Detailed design might still highlight schedule and/or cost variance. After detailed design (of systems and/or processes), however, there should be a high level of confidence in all aspects of the delivery.

Achievability could consume 10% to 20% of the total capital budget, and detailed design a further 10% to 20%. Including prototype or proof-of-concept development in the detailed design phase will push up the cost of that phase.

What is important here is that, as each decision point is reached, revising the priority of a change initiative, or even deciding that it is no longer viable, is a perfectly healthy use of capital. This is, of course, only the case if the factors causing the revision of priority or viability could not reasonably have been foreseen at an earlier stage.

All senior stakeholders of organisational and IT change should be educated as to what level of expectation to set at each of the decision points identified in the diagram above. This education is the responsibility of the portfolio management function, and it needs to be reinforced at regular intervals.