Recently, a discussion started in a LinkedIn group about recruiters, HR, and management using the term "Systems Engineer" indiscriminately. The conclusion was that the discipline of Systems Engineering and the role of the Systems Engineer in development and transformation efforts are poorly understood by most people, and perhaps by many claiming to be Systems Engineers. Having built a Systems Engineering group from 5 to 55 members, I can attest to this conclusion.
Currently, I am working on a second book, with the working title "Systems Engineering, System Architecture, and Enterprise Architecture". In it, I'm attempting to distill 45+ years of experience and observation across many efforts, from minor report revisions, to the Lunar Module, F-14, B-2, and X-29 aircraft development efforts, to statewide IT outsourcing efforts. This post contains excerpts of several concepts from the manuscript.
The Archetypal Pattern for Product/System/Service Development and Transformation
At a high level, there is an architectural process pattern for Product/System/Service development and transformation. I discuss this pattern in my current book, Organizational Economics: The Formation of Wealth, and it is a key pattern for my next book. This pattern is shown in Figure 1.
Figure 1--The Three-Legged Stool Pattern
As shown in Figure 1, the architectural process model posits that all development and transformation efforts are based on the interactions of three functions (or sub-processes): Systems Engineering, Design and Implementation, and Program Management. This is true whether a homeowner is replacing a kitchen faucet or NASA is building a new spacecraft. Each of these sub-processes is a role with a given set of skills.
Consequently, as shown in Figure 1, I call this process pattern the "Three-Legged Stool" pattern for development and transformation. I will discuss each sub-process as a role with requirements; that is, what I see as the needs of the process and the skills required for the role. In my next book, I will discuss in more detail how these can be fulfilled.
As shown in Figure 1, the Program Management role is to enable and support the other two roles with financial resources, and to expect results in the form of a product/system/service that meets the customer's requirements.
Systems Engineering (and System Architecture) Role
The first role is the Systems Engineer/System Architect. This role works with the customer to determine the requirements--"what is needed." I've discussed this role in several posts including Enterprise Architecture and System Architecture and The Definition of the Disciplines of Systems Engineering. Three key functions of this sub-process are:
- Requirements Identification, Management, and Verification/Validation (see various other posts as well)
- Risk Management
- Configuration Management
These are the key responsibilities of the role, though as the posts cited above show, "the devil (and the complexity of these) is in the details".
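To make these three functions concrete, here is a minimal sketch, in Python, of a requirements register that ties them together. Everything in it (the field names, statuses, and versioning rule) is an illustrative assumption of mine, not a standard or a tool I'm recommending.

```python
from dataclasses import dataclass

# A minimal, illustrative requirements register. All field names and
# conventions here are assumptions for this sketch, not a standard.

@dataclass
class Requirement:
    req_id: str             # unique identifier, e.g. "SYS-042"
    statement: str          # the "what is needed", in the customer's words
    priority: int           # 1 = highest customer priority
    version: int = 1        # configuration management: bumped on every change
    verified: bool = False  # verification: was it built right?
    validated: bool = False # validation: was the right thing built?

class RequirementsRegister:
    def __init__(self) -> None:
        self._reqs: dict[str, Requirement] = {}

    def add(self, req: Requirement) -> None:
        self._reqs[req.req_id] = req

    def revise(self, req_id: str, new_statement: str) -> None:
        # Configuration management: changes are versioned, never silent,
        # and any change invalidates prior verification and validation.
        req = self._reqs[req_id]
        req.statement = new_statement
        req.version += 1
        req.verified = req.validated = False

    def open_items(self) -> list[Requirement]:
        # Requirements management: everything not yet verified and
        # validated, listed highest customer priority first.
        return sorted(
            (r for r in self._reqs.values()
             if not (r.verified and r.validated)),
            key=lambda r: r.priority,
        )
```

The point is not the code but the discipline it represents: identified requirements, controlled changes, and explicit verification/validation status.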
The key issue with the Systems Engineering/System Architect role within a project/program/effort is that the requirements analysis procedure can become analysis paralysis. That is, the Systems Engineer (at least within a "waterfall" style effort, which assumes that all of the requirements are known upfront) will spend an inordinate amount of time "requirements gathering", holding the effort up in an attempt to ensure that all of the requirements are "known"--which is patently impossible.
I will discuss solutions to this issue in the last two sections of this post.
Design and Implementation Role
When compared with Systems Engineering, the Design and Implementation functions, procedures, methods, and role are very well understood, taught, trained, and supported with tooling. This role determines "how to meet the customer's needs", as expressed in the "what is needed" (the requirements), as shown in Figure 1. These are the product/system/service designers, developers, and implementers of the transformation; the Subject Matter Experts (SMEs) who actually create and implement. These skills are taught in community colleges, colleges, universities, trade schools, and online classes. The key sub-processes, procedures, functions, and methods are as varied as the departments in the institutions of higher learning just mentioned.
There is a significant issue with designers and implementers: they attempt to create the "best" product ever and go into a never-ending set of design cycles. Like the Systems Engineering "analysis paralysis", this burns budget and time without producing a deliverable for the customer. One part of this problem is that the SMEs too often forget that they are developing or transforming against a set of requirements (the "what's needed"). In the hundreds of small, medium, and large efforts in which I've been involved, I would say that the overwhelming percentage of the time, the SMEs never read the customer's requirements because they understand the process, procedure, function, or method far better than the customer. Therefore, they implement a product/system/service that does not do what the customer wants, but does do many functions that the customer does not want. Then the defect management process takes over to reconcile the two, which blows the budget and schedule entirely, while making the customer unhappy, to say the least. The second part of this problem is that each SME role is convinced that their role is the key to the effort. Consequently, they develop their portion to maximize its internal efficiency while completely neglecting the effectiveness of the product/system/service as a whole. While I may be overstating this part somewhat, at least half the time I've seen efforts where security, for example, attempts to create the equivalent of "write-only memory": data that can never be used because the memory cannot be read. This too burns budget and schedule while adding no value.
Again, I will discuss solutions to this issue in the last two sections of this post.
Program Management Role
As shown in Figure 1, the role, procedures, and methods of Program Management are to support and facilitate the Systems Engineering and Design and Implementation roles. This is called leadership. An excellent definition of leadership is attributed to Lao Tzu, the Chinese philosopher of approximately 2500 years ago. As I quoted in my book, Organizational Economics: The Formation of Wealth:
- "The best of all leaders is the one who helps people so that, eventually, they don’t need him.
- Then comes the one they love and admire.
- Then comes the one they fear.
- The worst is the one who lets people push him around.
Where there is no trust, people will act in bad faith. The best leader doesn’t say much, but what he says carries weight. When he is finished with his work, the people say, “It happened naturally.”[1]
[1] Attributed to Lao Tzu; no original source for the quote has been discovered.
If program managers do their job correctly, they should never be visible to the customer or suppliers; instead, they should be the conductor and coordinator of resources for the effort. Too often, project and program managers forget that this is their role and what the best type of leader is. Instead, they consider themselves the only person responsible for the success of the effort and "in control" of it. The method for this control is to manage the customer's programmatic requirements (the financial resources and schedule). This is the way it works today.
The Way This Works Today: The Program Management Control Pattern
There are two ways to resolve the "requirements analysis paralysis" and "design the best" issues: either the Program Manager resolves them, or the effort uses a process designed to move around these two landmines.
The first way is to give control of the effort to the manager. This is the "traditional" approach and the way most organizations run development and transformation efforts. The effort's manager manages the customer's programmatic requirements (budget and schedule), so the manager plans out the effort, including its schedule. This project plan is based on "the requirements", and most often the plan includes a "requirements analysis" task.
[Rant 1, sorry about this: My question has always been, "How is it possible to plan a project based on requirements when the first task is to analyze the requirements to determine the real requirements?" AND I have seen major efforts (hundreds of millions to billions of dollars) which had no real requirements identified...Huh?]
The Program or Project Manager tells the Systems Engineer and Developer/Implementer when each task is complete, because that's when the time and/or money for that task on the schedule runs out, regardless of the quality of the work products from the task. "Good" managers keep a "management reserve" in case things don't go as planned. Often, if nothing is going as planned, the manager's knee-jerk reaction is to "replan", which means creating an inch-stone schedule. I've seen and been involved in large efforts where the next level of detail would have been to schedule "bathroom breaks". This method of resolving "analysis paralysis" and "design the best" will almost inevitably cause cost and schedule overruns, unhappy customers, and defective products, because the effort's control function controls costs and schedules rather than the quality of the product.
The Program Management Control Pattern
Figure 2 shows the Program Management Control Pattern. The size of each ellipse shows the perceived importance of each of the three roles.
Figure 2--The Program Management Control Pattern
First, the entire "Three-Legged Stool" pattern is turned upside down in the Program Management Control Pattern. Rather than enabling and supporting the development or transformation process by understanding it, the Program Manager "controls" the process. In Lao Tzu's leadership taxonomy, this process pattern makes the Program Manager one of the latter, increasingly ineffective, types. It also reverses the perceived importance of who produces the value in the effort.
To be able to "control" the effort, the Program Manager requires many intermediate artifacts: schedules, budgets, and status reports. These use up the effort's resources and are non-value-added work products; the customer might look at them once, during a PMR, PDR, CDR, or other "xDR". (Rant 2: Calling these reviews Program Management Reviews, instead of some type of Design Review, Preliminary, Critical, etc., demonstrates the overwhelming perceived importance of the programmatic requirements to Program Managers.) I submit that all of these intermediate artifacts are non-value-added because, 3 months after the effort is completed, neither the customer nor anyone else will look at any of them, except if the customer is suing the development or transformation organization over the poor quality of the product. And all of these management reviews require resources from the Developers/Implementers and the Systems Engineers.
One extreme example of this management review procedure was the procedure used in the development of new aircraft for the US Air Force and Navy during the 1980s and 90s--sometimes facts are stranger than fantasy. The DoD required some type of "development review" every 3 months. Typically, these were week-long reviews with a large customer team descending on the aircraft's Prime Contractor. Program Management (perhaps rightly) considered these of ultimate importance to keeping the contract and therefore wanted everyone ready. Consequently, all hands on the effort stopped work 2 weeks prior to the review to work on status reports and presentation rehearsals. Then, after the "review", all hands would spend most of an additional week reviewing the customer's feedback and trying to replan the effort to resolve issues and reduce risk. If you add this up, the team was spending 1 month in every 3 on status reporting. And I have been part of information technology efforts, in this day of instant access to everything on a project, where essentially the same thing is happening. Think about it: these aircraft programs spent one third of their budget, and lengthened their schedules by one third, just for status--for what? Intermediate artifacts of no persistent value. Who looked at the presentations from the first Preliminary Design Review after the aircraft was put into operation? [Rant 3: Did the American citizen get value for the investment, or was this just another Program Management Entitlement Program funded by the DoD?]
Second, as shown in Figure 2, the Systems Engineering role is substantially reduced in the perception of the Program Manager. An example of this was brought home to me on a multi-billion dollar program: when I asked the chief engineer where the requirements were stored, he quoted the Program Director as saying, "We don't need no damn requirements, we're too busy doing the work." This Director underlined this thinking; he kept hiring more program management staff, schedule planners, earned value analysts, and so on, while continuously reducing, then eliminating, the entire Systems Engineering team, leaving only a few System Architects. He justified this by the need for increased control and cost reduction to meet his budget [Rant 4: and therefore to get his "management bonus"--no one ever heard of a Design or Systems Engineering bonus]. I've seen this strategy put into play on three large (more than $20M) programs with which I was associated, and over the past 10 years I've heard about it on several more within the organization I was working for and in other organizations.
Another program on which I worked as the Lead Systems Engineer had the same perception of the Systems Engineer (including the System Architect's role within the Systems Engineering discipline). It is an extreme example of all that can go wrong for lack of Systems Engineering. The effort was the development of a portal capability for the organization. It started with a meeting that had 10 management personnel and myself. They articulated a series of ill-thought-out capability statements, continued by defining a series of products that had to be used (with no identification of customer system or IT functional requirements), with a 6-week schedule, and ended with a budget that was 50 percent of what even the most optimistic budgeteers could "guesstimate". They (the three or four levels of management represented at the meeting) charged me with the equivalent of "making bricks without straw or mud, in the dark", that is, creating the portal. Otherwise, my chances of getting on the Reduction In Force (RIF) list would be drastically increased.
Given that charge, I immediately contacted the software supplier and the development team members from two successful efforts within the organization to determine whether there was any hope of accomplishing the task within the programmatic constraints. All three agreed: it could not be done in less than 6 months. Faced with this overwhelming and documented evidence, they asked me what could be done. The result was that, based on their "capability" statements and the "requirements (?)" documents from the other two projects, I was able to cobble together a System Architecture Document (SAD) that these managers could point to as visible progress. Additionally, I used a home-grown risk tool to document risks as I bumped into them, and I instituted a weekly risk watch list report, which all the managers ignored.
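That home-grown tool is long gone, but a minimal sketch of the idea behind such a watch list might look like the following. The 1-5 probability/impact scales, the exposure formula, and all names are illustrative assumptions, not a reconstruction of the original tool.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative sketch of a weekly risk watch list like the home-grown one
# described above. The 1-5 scales and field names are assumptions.

@dataclass
class Risk:
    risk_id: str
    description: str
    probability: int   # 1 (rare) .. 5 (near certain)
    impact: int        # 1 (minor) .. 5 (catastrophic)
    mitigation: str
    raised_on: date

    @property
    def exposure(self) -> int:
        # A common, assumed scoring: probability times impact.
        return self.probability * self.impact

def weekly_watch_list(risks: list[Risk], top_n: int = 10) -> str:
    """Render the top-N risks by exposure as a plain-text weekly report."""
    lines = [f"Risk Watch List -- week of {date.today()}"]
    for r in sorted(risks, key=lambda r: r.exposure, reverse=True)[:top_n]:
        lines.append(
            f"{r.risk_id}: exposure {r.exposure} "
            f"(P={r.probability}, I={r.impact}) -- {r.description}"
        )
    return "\n".join(lines)
```

Sorting by exposure is what pushes the "high probability/catastrophic impact" items to the top of the weekly report, where management could have seen them.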
At this point, one fiscal year ended, and with the new year I was able to get the whole nationwide team together, in part to gather everyone's requirements and design constraints. Additionally, I presented an implementation plan for the capabilities I understood they needed. This plan segmented the functions into an IOC build in May, followed by several additional builds. Since this management team was used to the waterfall development process, they rejected this without consideration; they wanted it all by May 15th. In turn, I gave them a plan for producing, more or less, an acceptable number of functions, and an associated risk report with a large number of high-probability/catastrophic-impact risks. They accepted the plan. The plan failed; here is an example of why.
One of the risks was getting the hardware for the staging and production systems in by March 15th. I submitted the Bill of Materials (BOM) to the PM the first week in February. The suppliers of the hardware I recommended indicated that it would be shipped within 7 days of receiving the order. When I handed the BOM to the PM, I also indicated the risk if we didn't get the systems by March 15th. On March 1st, I told him that we would have a day-for-day slippage in the schedule for every day we didn't receive the hardware. The long and the short of it was that I was called on the carpet for a wire-brushing on July 28th, when the program was held up for lack of hardware. Since I could show the high-level manager that, in fact, I had reported the risk (by then, an issue) week after week in the risk report she received, her ire finally turned on the PM, who felt he had the responsibility.
The net result of these and several other risks, induced either by the lack of requirements or by the failure to pay attention to risks, was a system that was not ready for staging until the following December. Management took it upon themselves to roll the portal into production without verification and validation testing. The final result was a total failure of the effort due to management issues coming from near the top of the management pyramid. Again, this was due to a complete lack of understanding of the role of Systems Engineering and Architecture. In fact, this is a minor sample of the errors and issues--maybe I will write a post on this entire effort as an example of what not to do.
In fact, the DoD has acknowledged the pattern shown in Figure 2 and countered it by creating Systems Engineering and Technical Assistance (SETA) contracts.
The Utility of Program Management
[Rant 5: Here's where I become a heretic to many, for my out-of-the-warehouse thinking.] In the extreme, or so it may seem, it is possible that projects don't need a project manager. I don't consider that a rant, because it is a fact. Here are two questions that make the point: "Can an excellent PM with a team of poorly skilled Subject Matter Experts (SMEs) create a top-notch product?" and "Can a poor PM with a team of excellent SMEs create a top-notch product?" The answer to the first is "Only with an exceptional amount of luck", while the answer to the second is "Yes! Unless the PM creates too much inter-team friction." In other words, the PM creates no value; at best, by reducing inter-team friction (which consumes resources unproductively) and by guiding and facilitating the use of resources, the PM preserves value and potential value.
None of the latter three types of leaders described by Lao Tzu--the ones I call, in my book, the Charismatic, the Dictator, and the Incompetent--can perform this service for the team. In other words, the PM can't say, and act as if, "The floggings will continue until morale improves".
Instead, the PM must be a leader of the first type described by Lao Tzu, what I call in my book "the coach or conductor". And any team member can be that leader. As a Lead Developer and as a Systems Engineer, I've run medium-sized projects without a program manager and been highly successful--success in this case being measured by bringing the effort in under cost and ahead of schedule while meeting or exceeding the customer's requirements. Yet none of the programs for which I was the lead systems engineer and which had a program manager whose mission was to bring the effort in on time and within budget was successful. On the other hand, I've been on two programs where the PMs listened with their ears rather than their mouths and paid attention to the system requirements; those efforts were highly successful.
The net of this is that a coaching/conducting PM can make a good team better, but cannot make a bad team good, while a PM who focuses on creating better project plans, producing better and more frequent status reports, and creating and managing to more detailed schedules will always burn budget and push the schedule to the right.
A Short Cycle Process: The Way It Could and Should Work
As noted near the start of this post, there are two ways to resolve the "requirements analysis paralysis" and "design the best" issues: Program Management Control, or the use of a process designed to move the effort around these two landmines.
This second solution uses a development or transformation process that assumes that "not all requirements are known upfront". This single change of assumption makes all the difference. The development and transformation process must, by necessity, take this assumption into account (see my post The Generalize Agile Development and Implementation Process for Software and Hardware for an outline of such a process). This takes the pressure off the customer and Systems Engineer to determine all of the requirements upfront, and off the Developer/Implementer to "design the best" product initially. That is, since not all of the requirements are assumed to be known upfront, the Systems Engineer can document, and have the customer sign off on, an initial set of known requirements early in the process (within the first couple of weeks), with the expectation that more requirements will be identified by the customer during the process. The Developer/Implementer can start to design and implement the new product/system/service based on these requirements, with the understanding that the customer and Systems Engineer will identify and prioritize more of the customer's real system requirements as the effort proceeds. Therefore, they don't have to worry about designing the "best" product the first time, simply because they realize that without all the requirements, they can't.
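In outline, such a short-cycle process might look like the following runnable sketch. The activity functions are trivial stand-ins of my own invention, not a real methodology or API; the point is the shape of the loop, with early sign-off and requirements discovered cycle by cycle.

```python
# Outline of the short-cycle flow described above. All names and the
# "two items per cycle" pacing are illustrative assumptions.

def sign_off(requirements: list[str]) -> list[str]:
    # Stand-in for the customer reviewing and approving the known set.
    print(f"Customer signed off on {len(requirements)} known requirement(s)")
    return requirements

def build_and_demo(work: list[str]) -> None:
    # Stand-in for a cycle of design, implementation, and demonstration.
    print(f"  Built and demonstrated: {', '.join(work)}")

def run_effort(known: list[str], cycles: int, per_cycle: int = 2) -> None:
    backlog = sign_off(known)  # early sign-off; the list is NOT final
    for n in range(1, cycles + 1):
        work, backlog = backlog[:per_cycle], backlog[per_cycle:]
        build_and_demo(work)
        # Seeing working functions prompts the customer to surface
        # requirements that nobody could have identified upfront.
        backlog = sign_off(backlog + [f"requirement found in cycle {n}"])

run_effort(["login", "data entry", "reporting"], cycles=3)
```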
Changing this single assumption has additional consequences for Program Management. First, there is really no way to plan and schedule the entire effort; given the assumption that not all the requirements are known upfront, any attempt by a PM to "plan and schedule" the whole effort is an "exercise in futility". What I mean is that if the requirements change at the end/start of each cycle, then the value of a schedule longer than one cycle is zero, because at the end of the cycle the plan and schedule, by definition of the process, change. With the RAD process I created, this was the most culturally difficult issue I faced in getting PMs and management to understand and accept the process. In fact, a year after I moved to a new position, the process team imposed a schedule on the process.
Second, the assumption forces the programmatic effort into a Level Of Effort (LOE) type of budgeting and scheduling procedure. Since there is no way to know which requirements are going to be the customer's highest priority in succeeding cycles, the Program Manager, together with the team, must assess the LOE to meet each of the requirements, from the highest priority down. They do this by assessing the complexity of each requirement and the level of risk in creating the solution that meets it. As soon as the team runs out of resources forecast for that cycle, they have reached the cutoff point for the cycle. They then present the set to the customer for concurrence. Once they have customer sign-off, they start the cycle. Sometimes a single Use Case-based requirement, with its design constraints, will require more resources than are available to the team during one cycle. In that case, the team, not the PM, must refactor the requirement.
For example, suppose there is a mathematically complex transaction, within a knowledge-based management system, which requires an additional level of access control, new hardware, new COTS software, new networking capabilities, new inputs and input feeds, new graphics and displays, and transformed reporting. This is sufficiently complex that no matter how many high-quality designers, developers, and implementers you put on the effort, it cannot be completed within one, or perhaps even three, months (this is the "9 women can't make a baby in a month" principle). The team must then refactor (divide up) the requirement into chunks that are doable by the team within the cycle's period, say one to three months. For example, the first cycle might define and delimit the hardware required and develop the new level of access control, and so on for the number of cycles needed to meet the requirement.
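A minimal sketch of this cutoff-and-refactor logic might look like the following; the person-week effort scale, the risk-padding heuristic, and all names are illustrative assumptions, not a prescribed method.

```python
from dataclasses import dataclass

# Illustrative sketch of the LOE cutoff described above. Effort is in
# person-weeks; the scoring and field names are assumptions for this sketch.

@dataclass
class CycleItem:
    name: str
    priority: int   # 1 = the customer's highest priority
    effort: float   # the team's LOE estimate, in person-weeks
    risk: int       # 1 (low) .. 5 (high)

def plan_cycle(candidates: list[CycleItem], capacity: float):
    """Fill one cycle from the highest priority down until resources run out."""
    selected, needs_refactoring, deferred = [], [], []
    remaining = capacity
    for item in sorted(candidates, key=lambda i: i.priority):
        padded = item.effort * (1 + 0.1 * item.risk)  # assumed risk padding
        if padded > capacity:
            # Too big for ANY single cycle: the team must split it into
            # chunks that fit, like the complex-transaction example above.
            needs_refactoring.append(item)
        elif padded <= remaining:
            selected.append(item)   # fits in what's left of this cycle
            remaining -= padded
        else:
            deferred.append(item)   # past the cutoff point for this cycle
    return selected, needs_refactoring, deferred

# Hypothetical usage: one 12 person-week cycle.
cycle, refactor, later = plan_cycle(
    [CycleItem("access control", 1, 6.0, 4),
     CycleItem("complex transaction", 2, 40.0, 5),
     CycleItem("reporting", 3, 3.0, 2)],
    capacity=12.0,
)
```

The selected set then goes to the customer for sign-off before the cycle starts, exactly as described above.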
Third, with the assumption of "not having all the requirements", the PM must pay the most attention to the requirements, their verification and validation, and risk reduction. All of these functions lie within the responsibility of the Systems Engineer, but the PM must pay attention to them to best allocate the budget and time resources.
Fourth, there is no real need for PMRs, status reports, or Earned Value metrics. The reason is simple: high customer involvement. The customer must review the progress of the effort every month at a minimum, and generally every week. This review takes the form of the developers demonstrating the functions of the product, system, or service on which they are working. And if the customer is always reviewing the actual development work, why is there a need for status reports, especially for an LOE effort?
Fifth, rolling out a new system or service incrementally has significant implications for the timing and size of the customer's ROI from the development or transformation effort. With an IOC product, system, or service, the customer can start to use it, and in using the IOC will be able to, at a minimum, identify missing requirements--in some cases, much more. For example, in one effort in which I performed the systems engineering role, during the first cycle the team created the access control system and the data input functions for a transactional website. During the second cycle, the customer inserted data into the system's data store. While doing this, the customer discovered enough errors in the data to pay for the effort. Consequently, they were delighted with the system and were able to fund additional functionality, further improving their productivity. If the effort had been based on the waterfall, the customer would have had to wait until the entire effort was complete, may not have been as satisfied with the final product (more design defects because of unknown requirements), would not have discovered the errors, and therefore would not have funded an extension to the effort. So it turned out to be a win for the customer--more functionality and greater productivity--and for the supplier--more work.
In using a short-cycle process based on assuming "unknown requirements", there will always be unfulfilled customer system requirements at the end of the development or transformation process. This is OK. It's OK for the customer because the development or transformation team spent the available budget and time creating a product, system, or service that meets the customer's highest-priority requirements, even those not initially identified; that is, the customer "got the biggest bang for the buck". It's OK for the team because a delighted customer tends to work hard at getting funding for the additional system requirements. When such a process is used in a highly disciplined manner, the customer invariably comes up with additional funding. This has been my experience on over 50 projects with which I was associated, and on many others that were reported to me as Lead Systems Engineer for a large IT organization.
Conclusions and Opinions
The following are my conclusions on this topic:
- If a development or transformation effort focuses on meeting the customer's system requirements, the effort has a much better chance of success than if the focus is on meeting the programmatic requirements.
- If the single fundamental assumption is changed from "all the requirements are known up front" to "not all the requirements are known up front", the effort has the opportunity to be successful, or much more successful, by the only metric that counts: the customer getting more of what he or she wants, which increases customer satisfaction.
- If the development or transformation effort rolls out small increments, it will increase the customer's ROI for the product, system, or service.
- Having a Program Manager whose only independent responsibility is managing resources be accountable for an effort is like having the CEO of an organization report to the CFO; you get cost-efficient, but not effective, products, systems, or services. [Final Rant: I know good PMs have value, but when a team works, it is because the PM is a leader of the first type: a coach and conductor.] Having a Program Manager who understands the "three-legged stool" pattern for development or transformation, and who executes to it, will greatly enhance the chance of success of the effort.