Thursday, June 30, 2011

Systems Engineering, Product/System/Service Implementing, and Program Management

A Pattern for Development and Transformation Efforts
Recently, a discussion started in a LinkedIn group about how recruiters, HR, and management use the term "Systems Engineer" indiscriminately.  The conclusion was that the discipline of Systems Engineering and the role of the Systems Engineer in development and transformation efforts are poorly understood by most people, and perhaps by many claiming to be Systems Engineers.  From my experience of building a Systems Engineering group from 5 to 55 people, I can attest to this conclusion.

Currently, I am working on a second book, with the working title "Systems Engineering, System Architecture, and Enterprise Architecture".  In it, I'm attempting to distill 45+ years of experience and observation of many efforts, from minor report revisions, to the creation of the Lunar Module and the F-14, B-2, and X-29 aircraft, to statewide IT outsourcing efforts.  This post contains excerpts of several concepts from that manuscript.

The Archetypal Pattern for Product/System/Service Development and Transformation
At a high level there is an architectural process pattern for Product/System/Service development and transformation.  I discuss this pattern in my current book, Organizational Economics: The Formation of Wealth, and it is one key pattern for my next book.  This pattern is shown in Figure 1.
Figure 1--The Three Legged Stool Pattern

As shown in Figure 1, the architectural process model posits that all development and transformation efforts are based on the interactions of three functions (or sub-processes): Systems Engineering, Design and Implementation, and Program Management.  This is true whether a homeowner is replacing a kitchen faucet or NASA is building a new spacecraft.  Each of these sub-processes is a role with a given set of skills.

Consequently, as shown in Figure 1, I call this process pattern the "Three-Legged Stool" pattern for development and transformation.  I will discuss each sub-process as a role with requirements; that is, what I see as the needs or requirements of the process and the skills of the role.  In my next book, I will discuss in more detail how these can be carried out.

As shown in Figure 1, the program management role is to enable and support the other two roles with financial resources, and to expect results in the form of a product/system/service that meets the customer's requirements.

Systems Engineering (and System Architecture) Role
The first role is the Systems Engineer/System Architect.  This role works with the customer to determine the requirements, "what is needed."  I've discussed this role in several posts, including Enterprise Architecture and System Architecture and The Definition of the Disciplines of Systems Engineering.  This sub-process has three key functions, and these are the key responsibilities of the role, though, as the posts cited above show, "the devil (and the complexity of these) is in the details."

The key issue with the Systems Engineering/System Architect role within a project/program/effort is that the requirements analysis procedure can become analysis paralysis.  That is, the Systems Engineer (at least within a "waterfall" style effort, which assumes that all of the requirements are known upfront) will spend an inordinate amount of time on "requirements gathering", holding the effort up in an attempt to ensure that all of the requirements are "known", which is patently impossible.

I will discuss solutions to this issue in the last two sections of this post.

Design and Implementation Role
When compared with Systems Engineering, the Design and Implementation functions, procedures, methods, and role are very well understood, taught, trained, and supported with tooling.  This role determines "how to meet the customer's needs", as expressed in the "what is needed" (the requirements), as shown in Figure 1.  These are the product/system/service designers, developers, and implementers of the transformation; the Subject Matter Experts (SMEs) who actually create and implement.  These skills are taught in community colleges, colleges, universities, trade schools, and online classes.  The key sub-processes, procedures, functions, and methods are as varied as the departments in those institutions of higher learning.

There is a significant issue with designers and implementers: they attempt to create the "best" product ever and go into a never-ending set of design cycles.  Like the Systems Engineering "analysis paralysis", this burns budget and time without producing a deliverable for the customer.  One part of this problem is that the SMEs too often forget that they are developing or transforming against a set of requirements (the "what's needed").  In the hundreds of small, medium, and large efforts in which I've been involved, I would say that the overwhelming majority of the time the SMEs never read the customer's requirements, because they understand the process, procedure, function, or method far better than the customer.  Therefore, they implement a product/system/service that does not do what the customer wants, but does do many functions that the customer does not want.  Then the defect management process takes over to reconcile the two, which blows the budget and schedule entirely, while making the customer unhappy, to say the least.

The second part of this problem is that each SME role is convinced that their role is the key to the effort.  Consequently, they develop their portion to maximize its internal efficiency while completely neglecting the effectiveness of the product/system/service as a whole.  While I may be overstating this part somewhat, at least half the time I've seen efforts where security, for example, attempts to create the equivalent of "write-only memory": the data on it can never be used because the memory cannot be read.  This too burns budget and schedule while adding no value.

Again, I will discuss solutions to this issue in the last two sections of this post.

Program Management Role
As shown in Figure 1, the role, procedures, and methods of Program Management are to support and facilitate the Systems Engineering and Design and Implementation roles.  This is called Leadership.  An excellent definition of leadership is attributed to Lao Tzu, the Chinese philosopher of approximately 2500 years ago.  As I quoted in my book, Organizational Economics: The Formation of Wealth:
  • "The best of all leaders is the one who helps people so that, eventually, they don’t need him.
  • Then comes the one they love and admire.
  • Then comes the one they fear.
  • The worst is the one who lets people push him around.
Where there is no trust, people will act in bad faith.  The best leader doesn't say much, but what he says carries weight.  When he is finished with his work, the people say, "It happened naturally.""[1]
[1] This quote is attributed to Lao Tzu, but no original source for it has been discovered.
If the program manager does his or her job correctly, they should never be visible to the customer or suppliers; instead, they should be the conductor and coordinator of resources for the effort.  Too often, project and program managers forget that this is their role and what the best type of leader is.  Instead, they consider themselves the only person responsible for the success of the effort and "in control" of it.  The method for this control is to manage the customer's programmatic requirements (the financial resources and schedule).  This is the way it works today.

The Way This Works Today: The Program Management Control Pattern
There are two ways to resolve the "requirements analysis paralysis" and the "design the best" issues: either the Program Manager resolves them, or the effort uses a process that is designed to steer around these two landmines.

The first way is to give control of the effort to the manager.  This is the "traditional" approach and the way most organizations run development and transformation efforts.  The effort's manager manages the customer's programmatic requirements (budget and schedule), so the manager plans out the effort, including its schedule.  This project plan is based on "the requirements", and most often the plan includes a "requirements analysis" task.

[Rant 1, sorry about this: My question has always been, "How is it possible to plan a project based on requirements when the first task is to analyze the requirements to determine the real requirements?"  AND, I have seen major efforts (hundreds of millions to billions) which had no real requirements identified...Huh?]

The Program or Project Manager tells the Systems Engineer and Developer/Implementer when each task is complete, because that's when the time and/or money for that task on the schedule is exhausted, regardless of the quality of the work products from the task.  "Good" managers keep a "management reserve" in case things don't go as planned.  Often, if nothing is going as planned, the manager's knee-jerk reaction is to "replan", which means creating an inch-stone schedule.  I've seen and been involved in large efforts where the next level of detail would have been to schedule "bathroom breaks".  This method of resolving "analysis paralysis" and "design the best" will almost inevitably cause cost and schedule overruns, unhappy customers, and defective products, because the effort's control function controls costs and schedules rather than quality.

The Program Management Control Pattern
Figure 2 shows the Program Management Control Pattern.  The size of each ellipse shows the perceived importance of each of the three roles.

Figure 2--The Program Management Control Pattern

First, the entire "Three-Legged Stool" pattern is turned upside down in the Program Management Control Pattern.  Rather than enabling and supporting the development or transformation process by understanding it, the Program Manager "controls" the process.  In Lao Tzu's leadership taxonomy, this process pattern makes the Program Manager one of the latter, increasingly ineffective, types.  It also reverses the perceived importance of who produces the value in the effort.

To be able to "control" the effort, the Program Manager requires many intermediate artifacts: schedules, budgets, and status reports.  These use up the effort's resources and are non-value-added work products; the customer might look at them once, during a PMR, PDR, CDR, or other "xDR".  (Rant 2: Calling these reviews Program Management Reviews, instead of some type of design review, Preliminary, Critical, etc., demonstrates the overwhelming perceived importance of the programmatic requirements to Program Managers.)  I submit that all of these intermediate artifacts are non-value-added because, three months after the effort is completed, neither the customer nor anyone else will look at any of them, except if the customer is suing the development or transformation organization over the poor quality of the product.  And all of these management reviews require resources from the Developers/Implementers and the Systems Engineers.

One extreme example of this management review procedure was the set of procedures used in the development of new aircraft for the US Air Force and Navy during the 1980s and 90s; sometimes facts are stranger than fantasy.  The DoD required some type of "Development Review" every three months.  Typically, these were week-long reviews with a large customer team descending on the aircraft's prime contractor.  Program Management (perhaps rightly) considered these of ultimate importance to keeping the contract and therefore wanted everyone ready.  Consequently, all hands on the effort stopped work two weeks prior to the review to work on status reports and presentation rehearsals.  Then, after the "review", all hands would spend most of an additional week reviewing the customer's feedback and trying to replan the effort to resolve issues and reduce risk.  If you add this up, the team was spending one month in every three on status reporting.  And I have been part of information technology efforts, in this day of instant access to everything on a project, where essentially the same thing happens.  Think about it: these aircraft programs spent roughly a third of their budget, and lengthened their schedules by a third, just for status.  For what?  Intermediate artifacts of no persistent value.  Who looked at the presentations from the first Preliminary Design Review after the aircraft was put into operation?  [Rant 3: Did the American citizen get value for the investment, or was this just another Program Management Entitlement Program funded by the DoD?]
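The "one month in every three" arithmetic is simple enough to check with a back-of-the-envelope sketch.  The week counts below are the rough figures from the story above, not measured data:

```python
# Back-of-the-envelope check of the review-overhead claim above.
# Week counts are the rough figures from the story, not measured data.

REVIEW_INTERVAL_WEEKS = 13  # a "Development Review" roughly every 3 months
PREP_WEEKS = 2              # all hands stop work for status reports/rehearsals
REVIEW_WEEKS = 1            # the week-long review itself
FOLLOWUP_WEEKS = 1          # digesting customer feedback and replanning

overhead_weeks = PREP_WEEKS + REVIEW_WEEKS + FOLLOWUP_WEEKS
fraction = overhead_weeks / REVIEW_INTERVAL_WEEKS

print(f"{overhead_weeks} of every {REVIEW_INTERVAL_WEEKS} weeks on status: "
      f"about {fraction:.0%} of budget and schedule")
# → 4 of every 13 weeks on status: about 31% of budget and schedule
```

Roughly a third of every review interval goes to status rather than to engineering, which is the point of the story.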

Second, as shown in Figure 2, the Systems Engineering role is substantially reduced in the perception of the Program Manager.  An example of this was brought home to me on a multi-billion-dollar program: when I asked the chief engineer where the requirements were stored, he quoted the Program Director as saying, "We don't need no damn requirements, we're too busy doing the work."  This Director underlined this thinking; he kept hiring more program management staff, schedule planners, earned value analysts, and so on, while continuously reducing and then eliminating the entire Systems Engineering team, leaving only a few System Architects.  He justified this by the need for increased control and for cost reduction to meet his budget.  [Rant 4: And therefore to get his "management bonus"; no one ever heard of a Design or Systems Engineering bonus.]  I've seen this strategy put into play on three large (more than $20M) programs with which I was associated, and over the past 10 years I've heard about it on several more, both within the organization I worked for and in other organizations.

I worked on another program, as the Lead Systems Engineer, that had the same perception of Systems Engineering (including the System Architect's role within the Systems Engineering discipline).  It is an extreme example of all that can go wrong through lack of Systems Engineering.  The effort was the development of a portal capability for the organization.  It started with a meeting that had 10 management personnel and myself.  They articulated a series of ill-thought-out capability statements, continued by defining a series of products that had to be used (with no identification of customer system or IT functional requirements), set a six-week schedule, and ended with a budget that was 50 percent of what even the most optimistic budgeteers could "guesstimate".  They (the three or four levels of management represented at the meeting) charged me with the equivalent of "making bricks without straw or mud, in the dark", that is, creating the portal.  Otherwise, my chances of getting on the Reduction In Force (RIF) list would be drastically increased.

Given that charge, I immediately contacted the software supplier and the development team members from two successful efforts within the organization to determine whether there was any hope of accomplishing the task within the programmatic constraints.  All three agreed that it could not be done in less than six months.  Faced with this overwhelming and documented evidence, they asked me what could be done.  Based on their "capability" statements and the "requirements(?)" documents from the other two projects, I was able to cobble together a System Architecture Document (SAD) that these managers could point to as visible progress.  Additionally, I used a home-grown risk tool to document risks as I bumped into them, and I instituted a weekly risk watch list report, which all the managers ignored.

At this point one fiscal year ended, and with the new year I was able to get the whole nationwide team together, in part to gather everyone's requirements and design constraints.  Additionally, I presented an implementation plan for the capabilities I understood they needed.  This plan segmented the functions into an IOC (Initial Operational Capability) build in May, followed by several additional builds.  Since this management team was used to the waterfall development process, they rejected this without consideration; they wanted it all by May 15th.  In turn, I gave them a plan for producing, more or less, an acceptable number of functions, and an associated risk report with a large number of high-probability/catastrophic-impact risks.  They accepted the plan.  The plan failed; here is an example of why.

One of the risks was getting the hardware for the staging and production systems in by March 15th.  I submitted the Bill of Materials (BOM) to the PM the first week in February.  The suppliers of the hardware that I recommended indicated that it would be shipped within 7 days of receiving the order.  When I handed the BOM to the PM, I also indicated the risk if we didn't have the systems by March 15th.  On March 1st, I told him that we would have a day-for-day slip in the schedule for every day we didn't receive the hardware.  The long and the short of it was that I was called on the carpet for a wire brushing on July 28th, when the program was held up for lack of hardware.  Since I could show the high-level manager that, in fact, I had reported the risk (and then the issue) week after week in the risk report she received, her ire finally turned on the PM, who actually had the responsibility.
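The home-grown risk tool mentioned above was, at its core, a watch list with a weekly status trail.  A minimal sketch of what such a tool might look like follows; the fields, dates, and example entry are my reconstruction for illustration, not the actual tool:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Risk:
    """One entry in a weekly risk watch list."""
    title: str
    probability: str      # e.g. "low", "medium", "high"
    impact: str           # e.g. "minor", "major", "catastrophic"
    mitigation: str
    status: str = "open"  # "open", "issue" (risk realized), or "closed"
    history: list = field(default_factory=list)

    def report(self, week: date, note: str) -> None:
        """Record this week's status and note, so there is a paper trail."""
        self.history.append((week, self.status, note))

# Hypothetical reconstruction of the hardware-delivery risk above;
# the dates are placeholders, not the actual program dates.
hw = Risk(
    title="Staging/production hardware not on site by March 15",
    probability="high",
    impact="catastrophic",
    mitigation="BOM submitted in early February; supplier ships within "
               "7 days of the order",
)
hw.report(date(2010, 3, 1), "Day-for-day schedule slip per day of delay")
hw.status = "issue"  # March 15 passed with no hardware
hw.report(date(2010, 3, 15), "Hardware not received; risk is now an issue")
```

The point of the weekly `report` trail is exactly the one in the story: when the schedule blew up months later, the record showed who had been warned, week after week.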

The net result of these and several other risks, induced either by the lack of requirements or by the lack of attention paid to risks, was a system that was not ready for staging until the following December.  Management took it upon themselves to roll the portal into production without verification and validation testing.  The final result was a total failure of the effort, due to management issues coming from near the top of the management pyramid and, again, due to a complete lack of understanding of the role of Systems Engineering and Architecture.  In fact, this is a minor sample of the errors and issues; maybe I will write a post on this entire effort as an example of what not to do.

In fact, the DoD has acknowledged the pattern shown in Figure 2 and countered it by creating Systems Engineering and Technical Assistance (SETA) contracts.

The Utility of Program Management
[Rant 5: Here's where I become a heretic to many, for my out-of-the-warehouse thinking.]  In the extreme, or so it may seem, it is possible that projects don't need a project manager.  I don't consider that a rant, because I consider it a fact.  Here are two questions that make the point: "Can an excellent PM with a team of poorly skilled Subject Matter Experts (SMEs) create a top-notch product?" and "Can a poor PM with a team of excellent SMEs create a top-notch product?"  The answer to the first is "only with an exceptional amount of luck", while the answer to the second is "yes, unless the PM creates too much inter-team friction."  In other words, the PM creates no value directly; by reducing inter-team friction, which otherwise uses resources unproductively, and by guiding and facilitating the use of resources, the PM preserves value and potential value.

None of the latter three types of leaders described by Lao Tzu, the ones I call in my book the Charismatic, the Dictator, and the Incompetent, can perform this service for the team.  In other words, the PM can't say, and act as if, "the floggings will continue until morale improves".

Instead, the PM must be a leader of the first type as described by Lao Tzu, what I call in my book "the coach or conductor".  And any team member can be that leader.  As a Lead Developer and as a Systems Engineer, I've run medium-sized projects without a program manager and been highly successful, success in this case being measured by bringing the effort in under cost and ahead of schedule while meeting or exceeding the customer's requirements.  Yet none of the programs for which I was the lead systems engineer, and which had a program manager whose mission was to bring in the effort on time and within budget, was successful.  On the other hand, I've been on two programs where the PM listened with his or her ears rather than his or her mouth, and both paid attention to the system requirements; those efforts were highly successful.

The net of this is that a coaching/conducting PM can make a good team better, but cannot make a bad team good, while a PM who responds by creating better project plans, producing better and more frequent status reports, and creating and managing to more detailed schedules will always burn budget and push the schedule to the right.

A Short Cycle Process: The Way It Could and Should Work
As noted near the start of this post, there are two ways to resolve the "requirements analysis paralysis" and the "design the best" issues: either Program Management Control, or the use of a process that is designed to steer the effort around these two landmines.

This second solution uses a development or transformation process that assumes that "not all requirements are known upfront".  This single change of assumption makes all the difference.  The development and transformation process must, by necessity, take this assumption into account (see my post The Generalize Agile Development and Implementation Process for Software and Hardware for an outline of such a process).  It takes the pressure off the customer and Systems Engineer to determine all of the requirements upfront, and off the Developer/Implementer to "design the best" product initially.  That is, since not all of the requirements are assumed to be known upfront, the Systems Engineer can document, and have the customer sign off on, an initial set of known requirements early in the process (within the first couple of weeks), with the expectation that more requirements will be identified by the customer during the process.  The Developer/Implementer can start to design and implement the new product/system/service based on these requirements, with the understanding that the customer and Systems Engineer will identify and prioritize more of the customer's real system requirements as the effort proceeds.  Therefore, the developers don't have to worry about designing the "best" product the first time, simply because they realize that, without all the requirements, they can't.
Changing this single assumption has additional consequences for Program Management.  First, there is really no way to plan and schedule the entire effort; the assumption that not all the requirements are known upfront means that any attempt by a PM to "plan and schedule" the whole effort is an "exercise in futility".  What I mean is that if the requirements change at the end or start of each cycle, then the value of a schedule longer than one cycle is zero, because at the end of the cycle the plan and schedule, by definition of the process, change.  With the RAD process I created, this was the most culturally difficult issue I faced in getting PMs and management to understand and accept.  In fact, a year after I moved to a new position, the process team imposed a schedule on the process.

Second, the assumption forces the programmatic effort into a Level Of Effort (LOE) type of budgeting and scheduling procedure.  Since there is no way to know which requirements are going to be the customer's highest priority in succeeding cycles, the Program Manager, together with the team, must assess the LOE needed to meet each of the requirements, from the highest priority down.  They would do this by assessing the complexity of each requirement and the level of risk in creating the solution that meets it.  As soon as the team runs out of the resources forecast for that cycle, they have reached the cutoff point for that cycle.  They would present the resulting set to the customer for concurrence, and once they have customer sign-off, they would start the cycle.  Sometimes a single Use Case-based requirement, with its design constraints, will require more resources than are available to the team during one cycle.  In that case, the team, not the PM, must refactor the requirement.

For example, suppose there is a mathematically complex transaction within a knowledge-based management system that requires an additional level of access control, new hardware, new COTS software, new networking capabilities, new inputs and input feeds, new graphics and displays, and transformed reporting.  This is definitely sufficiently complex that, no matter how many high-quality designers, developers, and implementers you put on the effort, it cannot be completed within one, or perhaps even three, months (this is the "nine women can't make a baby in a month" principle).  The team must then refactor (divide up) the requirement into chunks that are doable by the team within the cycle's period, say one to three months.  For example, the first cycle might define and delimit the hardware required and develop the new level of access control, and so on for the number of cycles needed to meet the requirement.
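The cycle-fill procedure described in the last two paragraphs can be sketched as a simple greedy selection.  The requirement names, priorities, and "level of effort" points below are illustrative assumptions, not a real estimating metric:

```python
def plan_cycle(requirements, capacity):
    """Fill one cycle from a prioritized backlog until forecast effort runs out.

    `requirements` is a list of (name, priority, effort) tuples.  A requirement
    whose effort exceeds the whole-cycle capacity must first be refactored by
    the team into cycle-sized chunks.
    """
    selected, deferred = [], []
    remaining = capacity
    # Work the backlog from the customer's highest priority down.
    for name, priority, effort in sorted(
            requirements, key=lambda r: r[1], reverse=True):
        if effort > capacity:
            raise ValueError(
                f"{name}: team must refactor into chunks of <= {capacity}")
        if effort <= remaining:
            selected.append(name)
            remaining -= effort
        else:
            deferred.append(name)  # picked up in a later cycle
    return selected, deferred

# Illustrative backlog: (name, priority, level-of-effort points)
backlog = [("access control", 9, 5), ("data input feeds", 8, 4),
           ("transformed reporting", 5, 6), ("new graphics", 3, 2)]
cycle1, later = plan_cycle(backlog, capacity=10)
print(cycle1)  # → ['access control', 'data input feeds']
print(later)   # → ['transformed reporting', 'new graphics']
```

The customer sign-off step and the team's judgment about what is "doable" obviously don't reduce to code; the sketch only captures the highest-priority-first cutoff logic.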

Third, with this assumption of "not having all the requirements", the PM must pay the most attention to the requirements, to their verification and validation, and to risk reduction.  All of these functions lie within the responsibility of the Systems Engineer, but the PM must pay attention to them to best allocate the budget and time resources.

Fourth, there is no real need for PMRs, status reports, or Earned Value metrics.  The reason is simple: high customer involvement.  The customer must review the progress of the effort every month at a minimum, and generally every week.  This review is given by the developers demonstrating the functions of the product, system, or service on which they are working.  And if the customer is always reviewing the actual development work, why is there a need for status reporting, especially on an LOE effort?

Fifth, rolling out a new system or service incrementally has significant implications for the timing and size of the customer's ROI from the development or transformation effort.  With an IOC product, system, or service, the customer can start to use it, and in using the IOC will be able to, at a minimum, identify missing requirements; in some cases, much more.  For example, in one effort in which I performed the systems engineering role, during the first cycle the team created the access control system and the data input functions for a transactional website.  During the second cycle, the customer inserted data into the data store for the system.  While doing this, the customer discovered enough errors in the data to pay for the effort.  Consequently, they were delighted with the system and were able to fund additional functionality, further improving their productivity.  If the effort had been based on the waterfall, the customer would have had to wait until the entire effort was complete, might not have been as satisfied with the final product (more design defects because of unknown requirements), would not have discovered the errors, and therefore would not have funded an extension to the effort.  So it turned out to be a win for the customer (more functionality and greater productivity) and for the supplier (more work).

In using a short-cycle process based on assuming "unknown requirements", there will always be unfulfilled customer system requirements at the end of the development or transformation process.  This is OK.  It's OK for the customer because the development or transformation team spent the available budget and time creating a product, system, or service that meets the customer's highest-priority requirements, even those not initially identified; that is, the customer "got the biggest bang for the buck".  It's OK for the team because a delighted customer tends to work hard at getting funding for the additional system requirements.  When such a process is used in a highly disciplined manner, the customer invariably comes up with additional funding.  This has been my experience on over 50 projects with which I was associated, and on many others that were reported to me as Lead Systems Engineer for a large IT organization.

Conclusions and Opinions
The following are my conclusions on this topic:
  1. If a development or transformation effort focuses on meeting the customer's system requirements, the effort has a much better chance of success than if the focus is on meeting the programmatic requirements.
  2. If the single fundamental assumption is changed from "all the requirements are known up front" to "not all the requirements are known up front", the effort has the opportunity to be successful, or much more successful, by the only metric that counts: the customer getting more of what he or she wants, which increases customer satisfaction.
  3. If the development or transformation effort rolls out small increments, it will increase the customer's ROI for the product, system, or service.
  4. Having a Program Manager, whose only independent responsibility is managing resources, be accountable for an effort is like having the CEO of an organization report to the CFO; you get cost-efficient, but not effective, products, systems, or services.  [Final Rant: I know good PMs have value, but if a team works, that is because the PM is a leader of the first type: a coach and conductor.]  Having a Program Manager who understands the "three-legged stool" pattern for development or transformation, and who executes to it, will greatly enhance the chance for success of the effort.

Monday, June 20, 2011

Transformation Benefits Measurement, the Political and Technical Hard Part of Mission Alignment and Enterprise Architecture

This post will sound argumentative (and a bit ranting; in fact, I will denote the rants in color.  Some will agree, some will laugh, and Management and Finance Engineering may become defensive), and it probably shows my experiences with management and finance engineering (Business Management Incorporated, which owns all businesses) in attempting benefits measurement.  However, I'm trying to point out the PC landmines (especially in the Rants) that I stepped on, so that other Systems Engineers, System Architects, and Enterprise Architects don't step on these particular landmines; there are still plenty of others, so find your own, then let me know.

A good many of the issues result from a poor understanding by economists and Finance Engineers of the underlying organizational economic model embodied in Adam Smith's work, which is the foundation of Capitalism.  The result of this poor understanding is an incomplete model, as I describe in Organizational Economics: The Formation of Wealth.

Transformation Benefits Measurement Issues
As Adam Smith discussed in Chapter 1, Book 1, of his magnum opus, commonly called The Wealth of Nations, a transformation of process and the insertion of tools transforms the processes of production.  Adam Smith called this process transformation "the division of labour"; today we would more commonly call it the assembly line.  At the time, 1776, when all industry was "cottage industry", this transformation of the Enterprise Architecture was revolutionary.  He illustrated it using the example of straight-pin production.  Further, he discussed the concept that tooling makes the process even more effective, since tools are process multipliers.  In the military, tools (weapons) are "force multipliers", and for the military they are a major part of the process.  Therefore, both the transformation of processes and the transformation of tooling should increase the productivity of an organization.  Increasing productivity means increasing the effectiveness of the processes of an organization in achieving its Vision or meeting the requirements of the various Missions supporting that Vision.

The current global business culture, especially finance, from Wall St. to individual CFOs and other "finance engineers", militates against reasonable benefits measurement of the transformation of processes and of the insertion and maintenance of tools.  The problem is that finance engineers do not believe in either increased process effectiveness or cost avoidance (to increase the cost efficiency of a process).

Issue #1 the GFI Process
Part of the problem is the way most organizations decide on IT investments in processes and tooling.  The traditional method is the GFI (Go For It) methodology, which involves two functions, a "beauty contest" and "backroom political dickering".  That is, every function within an organization has its own pet projects to make its function better (and thereby its management's bonuses larger).  The GFI decision support process is usually served up with strong dashes of the NIH (Not Invented Here) and LSI (Last Salesman In) syndromes.

This is like every station on an assembly line dickering for funding to better perform its function.  The more PC functions would have an air conditioned room from which to watch the automated tooling perform the task, while those less PC would have their personnel chained to the workstation using hand tools to perform their function; and not just any hand tools, but the ones management thought they needed--useful or not.  Contrast this with the way the Manufacturing Engineering units of most manufacturing companies work.  And please don't think I'm using hyperbole, because I can cite chapter and verse where I've seen it; and in after-hours discussions, cohorts from other organizations have told me the same story.

As I've discussed in A Model of an Organization's Control Function using IDEF0 Model, The OODA Loop, and Enterprise Architecture, the Enterprise Architect and System Architect can serve in the "Manufacturing Engineer" role for many types of investment decisions.  However, this is still culturally unpalatable in many organizations since it gives less wiggle room to finance engineers and managers.

Issue #2 Poorly Formalized Increased Process Effectiveness Measuring Procedures
One key reason (or at least rationale) why management and especially finance engineers find wiggle room is that organizations (management and finance engineering) are unable (unwilling) to fund the procedures and tooling to accurately determine pre- and post-transformation process effectiveness, because performing the procedures and maintaining the tools uses resources while providing no ROI--this quarter. [Better to use the money for Management Incentives than for measuring the decisions management makes.]

To demonstrate how poorly the finance engineering religion understands the concept of Increased Process Effectiveness, I will use the example of Cost Avoidance, which is not necessarily even Process Effectiveness, but is usually Cost Efficiency.  Typically, Cost Avoidance is investing in training, process design, or tooling now to reduce the cost of operating or maintaining the processes and tooling later. 
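For those who like numbers, here is a minimal sketch of the cost avoidance arithmetic (all figures are invented for illustration):

```python
# Hypothetical illustration of cost avoidance: invest in tooling now to
# reduce projected future operating costs. All figures are invented.

def cost_avoidance(projected_spend, actual_spend):
    """Avoidance = what we would have spent minus what we actually spent."""
    return projected_spend - actual_spend

# Baseline: 5 years of maintenance at $120k/year with no investment.
projected = 5 * 120_000
# With a $150k tooling investment, maintenance drops to $60k/year.
actual = 150_000 + 5 * 60_000
avoided = cost_avoidance(projected, actual)
print(avoided)  # 150000 avoided over 5 years
```

The point of the sketch is the one the finance engineers miss: the saving is real money, but it only shows up against a projection, not against last quarter's actuals.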

[Rant 1: a good basic academic definition and explanation of cost avoidance is found at  It includes this definition:
"Cost avoidance is a cost reduction that results from a spend that is lower then the spend that would have otherwise been required if the cost avoidance exercise had not been undertaken." ]
As discussed in the article just cited, in the religion of Finance Engineering, cost avoidance is considered "soft" or "intangible".  The reason finance engineers cite for not believing cost avoidance numbers is that the "savings classified as avoidance (are suspect) due to a lack of historical comparison."

[Rant 2: Of course, a Cost Avoidance saving, like the saving gained by avoiding a risk (an unknown) through a change in the design (see my post The Risk Management Process), is not considered valid because the risk never turned into an issue (a problem).]

This is as opposed to cost reduction, where the Finance Engineer can measure the results in ROI.  This makes cost reduction efforts much more palatable to Finance Engineers, managers, and Wall St. traders.  Consequently, increased cost efficiency is much more highly valued by this group than Increased Process Effectiveness.  Yet, as discussed above, the reason for tools (and process transformations) is to Increase Process Effectiveness.  So, Finance Engineering puts the "emphassus on the wrong salobul".

They are aided and abetted by (transactional and other non-leader) management.  As discussed recently on CNBC's Squawk Box, the reason the CEOs of major corporations cite for their obscenely high salaries is that they make decisions that avoid risk.

[Rant 3: Of course this ignores the fact that going into and operating a business is risky, by definition; and any company that avoids risk is on the "going out of business curve".  So most executives in US companies today are paid 7-figure salaries to put their companies on "the going out of business curve"--interesting.]

However, Cost Avoidance is one of the two ways to grow a business.  The first is to invent a new product or innovate on an existing product (e.g., the iPad) such that the company generates new business.  The second is to Increase Process Effectiveness.

Management, especially mid- and upper-level management, does not want to acknowledge the role of process transformation, or of the addition or upgrade of tooling, in increasing the effectiveness of a process, procedure, method, or function.  The reason is simple: it undermines their ability to claim that it was their own skill at managing their assets (read: employees) that made the difference, and therefore to "earn" a bonus or promotion.  Consequently, this leaves Enterprise and System Architects always attempting to "prove their worth" without using the metrics that would irrefutably prove the point.

These are the key cultural issues (problems) in selling real Enterprise Architecture and System Architecture.  And frankly, the only organizations that will accept this type of cultural change are entrepreneurial ones, and those large organizations in a panic or desperation.  These are the only ones willing to change their culture.

Benefits Measurement within the OODA Loop
Being an Enterprise and an Organizational Process Architect, as well as a Systems Engineer and System Architect, I know well that measuring the benefits of a transformation (i.e., cost avoidance) is technically difficult at best, and especially so if the only metrics "management" considers are financial.

Measuring Increased Process Effectiveness
In an internal paper I wrote in 2008, Measuring the Process Effectiveness of Deliverable of a Program [Rant 4: ignored with dignity by at least two organizations when I proposed R&D to create a benefits measurement procedure], I cited a paper: John Ward, Peter Murray, and Elizabeth Daniel, Benefits Management Best Practice Guidelines (2004, Document Number: ISRC-BM-200401, Information Systems Research Centre, Cranfield School of Management), that posits four types of metrics that can be used to measure benefits (a very good paper, by the way).
  1. Financial--Obviously
  2. Quantifiable--Metrics that the organization is currently using to measure its process(es') performance and dependability, and that will predictably change with the development or transformation; the metrics will demonstrate the benefits (or lack thereof).  This type of metric provides hard, but not financial, evidence that the transformation has benefits.  Typically, the organization knows both the minimum and maximum for the metric (e.g., 0% to 100%).
  3. Measurable--Metrics that the organization is not currently using to measure its performance, but that should measurably demonstrate the benefits of the development or transformation.  Typically, these metrics have a minimum, like 0, but no obvious maximum.  For example, I'm currently tracking the number of pages accessed per day.  I know that if no one reads a page, the metric will be zero.  However, I have no idea of the potential readership for any one post, because most of the ideas presented here are concepts that will be of utility in the future. [Rant 5: I had one VP, who was letting me know he was going to lay me off from an organization that claimed it was an advanced technology integrator, tell me that "he was beginning to understand what I had been talking about two years before"--that's from a VP of an organization claiming to be advanced in their thinking about technology integration--Huh....]  Still, from the data I have a good idea of the readership of each post, what the readership is interested in, and what falls flat on its face.  Measurable metrics will show or demonstrate the benefits, but cannot be used to forecast those benefits.  Another example is a RAD process I created in 2000.  This process was the first RAD process that I know of that the SEI considered Conformant; that is, found in conformance by an SEI Auditor.  At the time, I had no way to measure its success except by project adoption rate (0 being no projects using it).  By 2004, within the organization I worked for, which ran several hundred small, medium, and large efforts per year, over half of the efforts were using the process.
I wanted to move from measurable to quantifiable, using metrics like defects per rollout, customer satisfaction, additional customer funding, effort spent per requirement (use case), and so on, but the management considered collecting, analyzing, and storing this data to be an expense, not an investment; and since the organization was only CMMI Level 3 and not Level 4, this proved infeasible.  [Rant 6: It seems to me that weather forecasters and Wall St. market analysts are the only ones that can be paid to use measurable metrics to forecast, whether they are right, wrong, or indifferent--and the Wall St. analysts are paid a great deal even when they are wrong.]
  4. Observable--Observable is the least quantitative, which is to say the most qualitative, of the metric types.  These are metrics with no definite minimum or maximum.  Instead, they are metrics that the participants agree on ahead of time--requirements? (see my post Types of Requirements).  These metrics are really little more than any positive change that occurs after the transformation.  At worst, they are anecdotal evidence.  Unfortunately, because Financial Engineers and Managers (for reasons discussed above) are not willing to invest in procedures and tooling for better metrics like those above, unless they are forced into it by customers (e.g., ones requiring CMMI Level 5), Enterprise Architects, System Architects, and Systems Engineers must rely on anecdotal evidence, the weakest kind, to validate the benefits of a transformation.
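For the programmers among my readers, the four metric types can be sketched as a rough classification, keyed on what is known about a metric before the transformation (the encoding is my own, not from the Ward, Murray, and Daniel paper):

```python
# A rough encoding (my own, not from the cited paper) of the four
# benefit-metric types, keyed on what the organization knows about the
# metric before the transformation.

def classify_metric(is_financial, in_current_use, known_min, known_max):
    """Return which of the four benefit-metric types applies."""
    if is_financial:
        return "Financial"
    if in_current_use and known_min and known_max:
        return "Quantifiable"  # already tracked, with known min and max
    if known_min and not known_max:
        return "Measurable"    # a minimum (often 0) but no obvious maximum
    return "Observable"        # qualitative; agreed on ahead of time

print(classify_metric(True, False, False, False))   # Financial
print(classify_metric(False, True, True, True))     # Quantifiable
print(classify_metric(False, False, True, False))   # Measurable
print(classify_metric(False, False, False, False))  # Observable
```

The further down the list a metric falls, the weaker the evidence it provides--which is exactly why the anecdotal end is where most organizations leave their architects stranded.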
Metric Context Dimensions
Having metrics to measure the benefits is good if, and only if, the metrics are in context.  In my internal paper, Measuring the Process Effectiveness of Deliverable of a Program, cited above, I identified a total of four contextual dimensions, and since then I have discovered a fifth.  I give two here, to illustrate what I mean.

In several previous posts I've used the IDEF0 pattern as a model of the organization (see Figure 1 in A Model of an Organization's Control Function using IDEF0 Model, The OODA Loop, and Enterprise Architecture in particular).  One context for the metrics is whether the particular metric is measuring improvement in the process, in the mechanisms (tooling), or in the control functions; a transformation may affect all three.  If it affects two of the pattern's abstract components, or all three, the transformation may increase or decrease the benefit in each.  Then the Enterprise Architect must determine the "net benefit."

The key to this "net benefit" is to determine how well the metric(s) of each component measure the organization's movement, or change in velocity of movement, toward achieving its Vision and/or Mission.  This is a second context.  As I said, there are at least three more.
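To make the "net benefit" idea concrete, here is a sketch (with invented, normalized benefit deltas) of rolling up per-component benefits across the three IDEF0 components:

```python
# A sketch of "net benefit" across the three IDEF0 components a
# transformation may touch. The deltas are invented points on an assumed
# normalized 0-100 scale; negative deltas reduce the net.

def net_benefit(deltas):
    """Sum the per-component benefit deltas to get the net."""
    return sum(deltas.values())

# A transformation that improves the process and the tooling but adds
# some control (governance) overhead.
deltas = {"process": 30, "mechanisms": 15, "controls": -10}
print(net_benefit(deltas))  # 35
```

The sketch is deliberately naive--in practice each component's metric must also be weighted by how well it tracks movement toward the Vision or Mission, which is the second context above.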

Measuring Increased Cost Efficiency
While measuring the Benefits that accrue from a transformation is difficult (just plain hard), measuring the increased cost efficiency is simple and easy--relatively--because it is based on cost reduction, not cost avoidance.  The operative word is "relatively", since management and others will claim that their skill and knowledge reduced the cost, not the effort of the transformation team or the Enterprise Architecture team that analyzed, discovered, and recommended the transformation.  [Rant 7: More times than I can count, I have had and seen efforts where management did everything possible to kill off a transformation effort, then, when it was obvious to all that the effort was producing results, "piled on" to attempt to garner as much credit for the effort as possible.  One very minor example from my experience: in 2000, my boss at the time told me that I should not be "wasting so much time on creating a CMMI Level 3 RAD process, but instead should be doing real work."  I call this behavior the "Al Gore" or "Project Credit Piling On" Syndrome (in his election bid, Al Gore attempted to take all the credit for the Internet; having participated in its development for years prior, I and all of my cohorts resented the attempt).  Sir Arthur Clarke captured this syndrome in his Law of Revolutionary Development.
"Every revolutionary idea evokes three stages of reaction. They can be summed up as:
–It is Impossible—don’t Waste My Time!
–It is Possible, but not Worth Doing!
–I Said it was a Good Idea All Along!"]

Consequently, "proving" that the engineering and implementation of the transformation actually reduced the cost, and not the "manager's superior management abilities", is difficult at best--if it weren't the manager's ability, then why pay him or her the "management bonus"? [Rant 8: which is where the Management Protective Association kicks in to protect their own].

The Benefits Measurement Process
The two hardest activities of the Mission Alignment and Implementation Process are Observe and Orient, as defined within the OODA Loop (see A Model of an Organization's Control Function using IDEF0 Model, The OODA Loop, and Enterprise Architecture for the definitions of these tasks or functions of the OODA Loop).  To really observe the results and effects of a process transformation requires an organizational process as described, in part, by the CMMI Level 4 Key Practices or some of the requirements of the ISO 9001 standards.

As usual, I will submit to the reader that the keys (culturally and in a business sense) to getting the organization to measure the success (benefits) of its investment decisions and its policy and management decisions are twofold.  The first high-level activity is a quick (and therefore, necessarily incomplete) inventory of its Mission(s), Strategies, Processes, and tooling assets.  As I describe in Initially implementing an Asset and Enterprise Architecture Process and an AEAR, this might consist of documenting and inserting into an AEAR the data of the final configuration of each new transformation effort as it is rolled out, during an initial 3-month period; and additionally inserting current Policies and Standards (with their associated Business Rules) into the AEAR.  Second, analyze the requirements of each effort (the metrics associated with the requirements, really) to determine the effort's success metrics.  Using the Benefits Context Matrix, determine where these metrics are incomplete (in some cases), over-defined (in others), obtuse and opaque, or conflicting among themselves.  The Enterprise Architect would present the results of these analyses to management, together with recommendations for better metrics and more Process Effective transformation efforts (projects and programs).

The second high-level activity is to implement procedures and tooling to more effectively and efficiently observe and orient on the benefits through the metrics (as well as the rest of the Mission Alignment/Mission Implementation Cycles).  Both of these activities should have demonstrable results (an Initial Operating Capability, IOC) by the end of the first 3-month Mission Alignment cycle.  The IOC need not be much, but it must be implemented, not some notional or conceptual design.  This forces the organization to invest resources in measuring benefits, and perhaps in identifying in which component the benefits exist: control, process, or mechanisms.

Initially, expect the results from the Benefits Metrics to be lousy, for at least three reasons.  First, the AEAR is skeletal at best.  Second, the organization and all the participants, including the Enterprise Architect, have a learning curve with respect to the process.  Third, the initial set of benefits metrics will not really measure the benefits, or at least not effectively.

For example, I have been told, and believe it to be true, that several years ago the management of a Fortune 500 company chose IBM's MQSeries as middleware to interlink many of the "standalone" systems in its fragmented architecture.  This was a good-to-excellent decision in the age before SOA, since the average maintenance cost for a business-critical custom link was about $100 per link per month, and the company had several hundred business-critical links.  The IBM solution standardized the procedure for inter-linkage in a central communications hub using an IBM standard protocol.  Using the MQSeries communications solution required standardized messaging connectors.  Each new installation of a connector was a cost to the organization.  But, since connectors could be reused, IBM could rightly claim that the Total Cost of Ownership (TCO) for the inter-linkage would be significantly reduced.

However, since the "benefit" of migrating to the IBM solution was "Cost Reduction", not Increased Process Effectiveness [RANT 9: Cost Avoidance, in Finance Engineering parlance], Management and Finance Engineering (yes, both had to agree) directed that the company would migrate its systems.  That was good, until they identified the "Benefit Metric" on which the management would get their bonuses.  That benefit metric was "the number of new connectors installed".  While it sounds reasonable, the result was that hundreds of new connectors were installed, but few connectors were reused, because the management was not rewarded for reuse, just for new connectors.  Finance Engineering took a look at the IBM invoice and had apoplexy!  It cost more, in a situation where they had a guarantee from the supplier that it would cost less [RANT 10: And an IBM guarantee reduced risk to zero].  The result was that the benefit (increased cost efficiency) metric was changed to "the number of interfaces reusing existing connectors, or where not possible, new connectors".  Since clear identification and delineation of metrics is difficult even for Increased Cost Efficiency (Cost Reduction), it will be more so for Increased Process Effectiveness (Cost Avoidance).
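The connector story can be modeled with a few lines of code (all costs are invented; the point is the incentive, not the numbers):

```python
# An illustrative model (all numbers invented) of why the first
# "connectors installed" metric backfired: it rewarded new connectors,
# so nobody reused anything.

NEW_CONNECTOR_COST = 10_000  # assumed one-time cost per new connector
REUSE_COST = 500             # assumed cost to reuse an existing connector

def integration_cost(new_connectors, reused_connectors):
    """Total integration spend under a given mix of new vs. reused."""
    return new_connectors * NEW_CONNECTOR_COST + reused_connectors * REUSE_COST

# 100 interfaces, metric rewards new installs: everything is built new.
all_new = integration_cost(100, 0)
# Same 100 interfaces, metric rewards reuse: 20 new, 80 reused.
mostly_reused = integration_cost(20, 80)
print(all_new, mostly_reused)  # 1000000 240000
```

Same transformation, same supplier guarantee; only the benefit metric changed, and with it the behavior of the managers chasing their bonuses.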
Having effectively rained on everyone's parade, I still maintain that, with the support of the organization's leadership, the Enterprise Architect can create a Transformation Benefits Measurement procedure with good benefit (Increased Process Effectiveness) metrics in 3 to 4 cycles of the Mission Alignment Process.  And customers requiring their suppliers to follow the CMMI Level 5 Key Practices, SOA as an architectural pattern or functional design, together with Business Process Modeling and Business Activity Monitoring and Management (BAMM) tooling, will all help drive the effort.

For example, BAMM used in conjunction with SOA-based Services will enable the Enterprise Architect to determine such prosaic metrics as Process Throughput (in addition to determining bottlenecks) before and after a transformation. [RANT 11: Management and Finance Engineering are nearly psychologically incapable of allowing a team to measure a Process, System, or Service after it's been put into production, let alone measuring these before the transformation.  This is the reason I recommend that Enterprise Architecture processes, like Mission Alignment, be short cycles instead of straight-through, one-off processes like the waterfall process--each cycle allows the Enterprise Architect to measure the results and correct defects in the transformation and in the metrics.  It's also the reason I recommend that the Enterprise Architect be on the CEO's staff, rather than a hired consulting firm.] Other BAMM-derived metrics might be the cost and time used per unit produced across the process, the increase in quality (decreased defects), up-time of functions of the process, customer satisfaction, employee satisfaction (employee morale increases with successful processes), and so on.  These all help the Enterprise Architect Observe and Orient on the changes in the process due to the transformation, as part of the OODA Loop-based Mission Alignment/Mission Implementation process.

Tuesday, June 14, 2011

Types of Requirements

Definition of a Requirement
A Requirement is a measurable expression of what a customer wants and for which the customer is willing to pay.  Therefore, a requirement has three attributes:
  1. It has a description of what the customer wants or desires or some derivation or transformation of what the customer wants or desires.
  2. That want or desire is measurable in a way to ensure that the want or desire is fulfilled.
  3. The customer is willing to pay for the supplier to fulfill the want or desire.
All types of requirements have these attributes.  However, there is another class of documented wants and desires of customers that many customers and Systems Engineers misclassify as requirements.  These wants and desires lack the second attribute, a metric to determine when the requirement is met.  I call these capabilities, and their descriptions, capability statements.  Customers are particularly enamored of capability statements since "customers don't know what they want, but do know what they don't want."  The reason they are enamored of capability statements is that language is slippery and allows for a great deal of interpretation/interpolation.  This means that if the supplier presents them with a result the supplier feels meets their capability need, the customer can say NO IT DOESN'T; and the supplier has no recourse but to default on the contract or ask for "the real requirements" to implement the product against.  This is very expensive for the supplier and very enjoyable for the customer, especially if they really don't know what they want (measurably) and want the supplier to pay for the research; which is essentially what the supplier is doing in this mode of operation.  I have been on too many "opportunities" where I have seen this in action, despite my best attempts to stay the effort until at least some of "the real" requirements are known.  When the effort has been stayed by the program or project manager, and the customer begins to understand the importance of requirements as opposed to capability statements, the customer tends to pay much more attention to "getting the requirements document right".
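The three attributes, and the capability-statement trap, can be sketched as a small data structure (the field names and examples are my own illustration, not an industry standard):

```python
# A sketch of the three attributes of a requirement. A documented want
# with no metric (attribute 2 missing) is a capability statement, not a
# requirement. Names and examples are illustrative only.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Want:
    description: str              # attribute 1: what the customer wants
    metric: Optional[str] = None  # attribute 2: how fulfillment is measured
    funded: bool = False          # attribute 3: the customer will pay for it

def is_requirement(w: Want) -> bool:
    """All three attributes must be present for a true requirement."""
    return bool(w.description) and w.metric is not None and w.funded

req = Want("Report renders quickly", metric="p95 latency <= 2 s", funded=True)
cap = Want("The system shall be user friendly", funded=True)  # no metric

print(is_requirement(req), is_requirement(cap))  # True False
```

The `cap` example is exactly the slippery language discussed above: without a metric, "user friendly" lets the customer say NO IT DOESN'T forever.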

Types of Requirements and Roles
I will define and delineate the types of requirements by the roles that identify or define and document them.  There are three roles within the major discipline of Systems Engineering that identify or define requirements, as I discussed in my posts The Definition of the Disciplines of Systems Engineering and Enterprise Architecture and System Architecture, and in a presentation of the same at the 2009 SEI SATURN conference.  The roles are: the Systems Engineer, the System Architect, and the Enterprise Architect.

Systems Engineer
The Systems Engineer identifies and manages the Customer's Requirements.  The Systems Engineer should never define the customer's requirements, but should work with the customer to identify them.  Once identified, the Systems Engineer must manage the customer's requirements to validate that these requirements are met.  There are two types of Customer Requirements: Programmatic Requirements and Customer System Requirements.

Customer Programmatic Requirements
There are two types of programmatic requirements:
  • Cost--What the customer is willing to pay for a product that meets the System Requirements
  • Schedule--How long the customer is willing to wait for the product that meets the System Requirements
Program Management considers the programmatic requirements to be within their exclusive purview.  They are wrong!  To start with, the Systems Engineer must analyze the known requirements to determine whether or not the effort is even feasible within the budget and time constraints.

Too often, business development (or marketing), or for internal efforts, "management", will promise more (in very general terms) than any supplier can deliver.  I call this "listening with your mouth!"  Good Systems Engineering practice requires ears.  The Systems Engineer must work with the customer to clearly (measurably) identify the customer's real requirements, in part by listening to the customer and documenting the requirements the customer states.

Once the Systems Engineer, together with any Subject Matter Experts (SMEs), has determined that the effort is feasible, the Systems Engineer must use Requirements Analysis (which I will discuss in a future post) to determine any gaps or conflicts among the requirements.  These should be documented and, if possible, discussed with the customer.  Then the Systems Engineer, SMEs, and a Contracts expert should put together a proposal, based on the known and inferred requirements, including a notional Work Breakdown Structure (WBS) with Bases Of Estimate (BOEs), and a notional project plan.  If the proposal is accepted, the Systems Engineer/System Architect and SMEs should refine the WBS and Project Plan before turning them over to the Program Manager (unless the Program Manager is also an SME or Systems Engineer, which, in my experience, they ain't).  In this scenario, the people with the knowledge and the skills to execute the plan are the ones that create the plan; not those only interested in managing the programmatic requirements.
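The feasibility check against the two programmatic requirements can be sketched with notional WBS/BOE figures (all numbers are invented, and the schedule roll-up naively assumes the WBS elements run serially):

```python
# A sketch of the feasibility check against the two programmatic
# requirements: cost and schedule. BOE figures are invented; summing the
# weeks assumes (naively) that the WBS elements execute serially.

def is_feasible(boe_costs, boe_weeks, budget, deadline_weeks):
    """Feasible only if the rolled-up estimates fit both constraints."""
    return sum(boe_costs) <= budget and sum(boe_weeks) <= deadline_weeks

# Notional Bases of Estimate for three WBS elements.
costs = [120_000, 80_000, 50_000]
weeks = [8, 6, 4]

print(is_feasible(costs, weeks, budget=300_000, deadline_weeks=20))  # True
print(is_feasible(costs, weeks, budget=200_000, deadline_weeks=20))  # False
```

When "management" has promised the second case, listening with its mouth, the Systems Engineer's analysis is the only thing standing between the proposal and a default.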

Customer System Requirements
Of more interest to the Systems Engineer, because these are the requirements from which the product will be constructed (while meeting the programmatic requirements, which are a form of Design Constraint), are the Customer's System Requirements.  In the US DoD Mil-Spec nomenclature of the 1970s and 1980s, the customer's System Requirements are the A-Spec.  The reason I bring this up is to show that these requirements concepts are not new.  As I describe in an earlier post, Types of Customer Requirements and the System Architecture Process, there are two types of System Requirements: those the product must perform and those the product must meet.

The metrics associated with the Customer's System Requirements are the System Validation criteria.  Some Systems Engineers would argue that they can only Verify that the product meets the customer's requirements, but that misses the distinction between verification and validation.  By the best definitions I've seen, verification indicates that the components or functions meet the component specifications or the functional requirements.  Validation means that the product meets the customer's system requirements; that is, it does what the customer requires.  Many times all the parts of a product work as specified by the System Architect, but the System Architect did not specify a product that, in total, would meet the customer's system requirements.  That is the key difference; the rest is semantics.

Customer Functional Requirements
The Customer's Functional Requirements are those requirements that the product, system, or service "must perform".  The customer's functional requirements are based on how the customer envisions using the product.  For example, in IT, it would be "one way one type of user would use the application, system, or service".  This is nearly the exact definition of a Use Case (see the Glossary for the exact definition).  I've found that Use Cases are the simplest, clearest way for the Systems Engineer to communicate the customer's functional requirements to the development/transformation team.  Again, a Functional Requirement is always an action.

Design Constraints
Design Constraints are those requirements that the product, system, or service "must meet".  For example, the customer might want a display in the new Service, or a report from the new Service, to be identical to that of the current application.  Or the customer might want a given level of dependability, the term which the Reliability, Maintainability, and Serviceability organization uses as the umbrella for all the "-ilities" (e.g., reliability, maintainability, serviceability, scalability, recoverability, response time, up-time, etc.).  These are all attributes of the product, system, or service.

There are many types of Design Constraints.  For example, internal and external policies and standards all constrain the design of the product, and there can be severe consequences if a supplier ignores one or more of these Design Constraints, even if the Systems Engineer does not know of them.  The Systems Engineer has to verify that all of these Design Constraint-type requirements are met before the product is put into operation.

System Architect
The System Architect transforms the customer's System Requirements into the product's Functional Requirements; then orders and structures these Functional Requirements into a System Architecture, or Functional Design; allocates tightly coupled groups of these requirements, together with the Design Constraints, to candidate Component Requirements or Specifications; and finally performs Trade-off Studies to determine the initial Bill Of Materials (BOM).

System Functional Requirements
The System Functional Requirements (or Functional Requirements) are the set of functions the product, system, or Service must perform to meet the Customer's System Requirements.  In US DoD Mil-Spec terms, this would be the B-Spec.  The Functional Requirements are the result of a transformation of the Customer's System Requirements by the System Architect.  This transformation is the result of two fairly complex procedures: Decomposition (a form of Requirements Analysis) and Derivation (which can use a fairly formal process, including UML diagrams, but which requires significant creativity if done properly).

System Component Requirements (or Specifications)
System Component Requirements (or Specifications) are the requirements for the actual components needed to create the product, system, or Service.  In US DoD Mil-Spec terms, this would be the C-Spec.  Again, the System Architect transforms requirements, this time the Functional Requirements into the Component Requirements.  And again, the transformation is the result of two fairly complex procedures.  The first is the creation of a System Architecture, also called a Functional Design, through ordering and structuring the requirements.  Then the System Architect allocates both the grouped System Functional Requirements and the Design Constraints to potential components.  The System Architect can use the metrics associated with the System Component Requirements (specifications) within a trade-off study to determine the actual components that the product will use.
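A trade-off study, at its simplest, is a weighted scoring of candidate components against the metrics from the Component Requirements; here is a sketch with invented candidates, criteria, weights, and scores:

```python
# A sketch of a simple trade-off study: score candidate components
# against weighted criteria drawn from the Component Requirements.
# All candidates, criteria, weights, and scores are invented.

def weighted_score(scores, weights):
    """Weighted sum of a candidate's scores; higher is better."""
    return sum(scores[criterion] * w for criterion, w in weights.items())

# Criteria weights (summing to 1.0) reflect what matters most.
weights = {"meets_spec": 0.5, "dependability": 0.3, "cost": 0.2}

# Scores on a 0-10 scale against each criterion.
candidates = {
    "Component A": {"meets_spec": 9, "dependability": 7, "cost": 5},
    "Component B": {"meets_spec": 7, "dependability": 9, "cost": 8},
}

best = max(candidates, key=lambda c: weighted_score(candidates[c], weights))
print(best)  # Component B
```

Real trade-off studies add sensitivity analysis on the weights; the sketch only shows the core mechanism of scoring candidates for the Bill Of Materials.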

Enterprise Architect
The Enterprise Architect supports the organization's leadership in optimizing their investment decisions and their policies and standards, to enable the organization to achieve its Vision and Mission, as I've discussed in A Model of an Organization's Control Function using IDEF0 Model, The OODA Loop, and Enterprise Architecture and Governance, and Policy Management Processes: the Linkage with SOA.  Optimizing is not increasing the cost efficiency of the investment, but increasing the effectiveness of the processes used to achieve the mission or goal, and the cost efficiency of the tooling in supporting the effective processes.  This is exceptionally important to any organization, since it can mean the difference between thriving and not surviving.

To adequately perform his or her role, the Enterprise Architect must work with three types of requirements: Organizational Performance Requirements, Organizational (Business) Process Requirements, and Service Component Requirements.

Organizational Performance Requirements
There are three layers of performance requirements for an organization; they all derive from the organization's Vision Statement.  For example, the Vision Statement for the United States--one of the best I've seen--is the Preamble of its Constitution,
We the People of the United States, in Order to form a more perfect Union, establish Justice, insure domestic Tranquility, provide for the common defence, promote the general Welfare, and secure the Blessings of Liberty to ourselves and our Posterity, do ordain and establish this Constitution for the United States of America.
As you can see, there is a series of statements, "establish Justice", "provide for the common defence", and so on, that both provide the Vision and give a broad hint at a series of Mission Statements and at the method for measuring the achievement of the Vision through the various Missions.  These are the Vision Requirements.  The Vision Requirements enable the measurement of the success of the organization's various Missions.

To realize the Vision, organizations create various Missions.  Military organizations are best known for Missions, while many business organizations have a goal with multiple objectives fulfilling the roles of Vision and Mission.  Each Mission has Mission Requirements, that is, needs that must be fulfilled to perform the Mission.  These include requirements for the Strategies, Processes, and enabling and supporting tooling.  The Enterprise Architect can use the metrics from these requirements to determine the success of the Strategies, Processes, and tooling in supporting the Mission, and, through the Mission Requirements' linkage to the Vision, their success in achieving the organization's Vision.

The final class, Strategy Requirements, are the needs of a Strategy (guidance for achieving the Mission) in terms of the processes and tooling that enable and support it.  The products that enable and support the Strategy are supplied by those Processes and tooling.  Therefore, Strategy Requirements are the key "business requirements" imposed on both the processes and the tooling.

Organizational (Business) Process Requirements
The Organizational (Business) Process Requirements come from two sources.  The first set is the requirements of the Performance Architecture (Mission and Strategy Requirements), and the second set is derived from other Organizational Process Requirements.  These requirements fall into two categories.

Mission Alignment Requirements
Mission Alignment Requirements are needs of the Mission and Strategies that the processes and tooling must meet to achieve the organization's Vision.  Additional Mission Alignment Requirements may be imposed on one process by another process.  These tie measurably back to the Mission and Strategies.  The Enterprise Architect uses them to determine which processes and tooling to retire; which to transform, update, or upgrade; and what new processes and tooling the organization requires to move toward achieving its Mission and Vision.  This is the objective of the Mission Alignment process.

Policy and Standard Requirements
Policy and Standard Requirements are needs the processes and tooling must meet to reduce intra-organizational process friction, thereby making the processes and tooling both more effective and more cost efficient.  The Enterprise Architect uses these requirements to identify candidate policies, standards, and business rules to enact, revise, or delete, and will use this information and knowledge to support the organization's Governance process.

Service Component Requirements
As a result of the Mission Alignment process, the organization's leadership will fund individual development and transformation efforts.  Mission Alignment identifies "customer requirements" from the Enterprise level.  These are "Customer" Functional Requirements and are used by the effort's Systems Engineer.  The products of the Governance and Policy Management process are Design Constraints also used by the effort's Systems Engineer.

So, as you can see, all of these types of requirements form a requirements web, or mesh.  And all of them, if used consistently and in concert, can substantially boost the velocity of the organization toward its Mission and Vision.
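The requirements web can be sketched as a simple traceability structure: each requirement records the requirement(s) it derives from, so any process or component requirement can be traced back to the Vision.  The following Python sketch is a hypothetical illustration; the requirement names are invented for the example.

```python
# A minimal sketch of the requirements web: each requirement lists the
# requirement(s) it derives from. All names here are hypothetical.

requirements = {
    "Vision-1":    [],                        # from the Vision Statement
    "Mission-1":   ["Vision-1"],
    "Strategy-1":  ["Mission-1"],
    "Process-1":   ["Strategy-1"],            # Mission Alignment Requirement
    "Policy-1":    ["Vision-1"],              # Policy and Standard Requirement
    "Component-1": ["Process-1", "Policy-1"], # Service Component Requirement
}

def trace_to_vision(req, reqs):
    """Return True if req traces (transitively) back to a Vision requirement."""
    if req.startswith("Vision"):
        return True
    return any(trace_to_vision(parent, reqs) for parent in reqs[req])

# Any requirement that cannot be traced to the Vision is an orphan, a
# candidate for retirement in the Mission Alignment process.
orphans = [r for r in requirements if not trace_to_vision(r, requirements)]
```

The point of the sketch is the consistency check: if every requirement in the web traces to the Vision, the organization's efforts are measurably aligned; orphans signal wasted velocity.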

Wednesday, June 8, 2011

Governance, and Policy Management Processes: the Linkage with SOA

Business Rules and Process Flow
In a recent post, A Model of an Organization's Control Function using IDEF0 Model, The OODA Loop, and Enterprise Architecture, I briefly described the Governance and Policy Management processes within the context of the IDEF0 model of the organization and the OODA Loop process pattern.  One function of the Policy Management process is to create "business" (organizational) rules that measurably instantiate the policies and standards.  Once created, these rules can be automated, since they turn out to be the "if-then-else" statements in the process logic (for more on this see Types of Business Rules).  As discussed in that post, there are at least three types of Business Rules: knowledge rules, event rules, and value rules.  Each of these rules is an attributed metric; that is, it measures what is going on or changing and compares that with some limit or standard.  This makes the rules relatively easy to automate.
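As a minimal sketch of a Business Rule as an "attributed metric", the following Python fragment builds rules that compare an observed metric against a limit.  The three example rules, and their metric names and limits, are hypothetical illustrations of the knowledge, event, and value rule types, not definitions from the post referenced above.

```python
# A minimal sketch: a Business Rule as an attributed metric -- an
# observed value compared with a limit or standard. The metric names
# and limits below are hypothetical.

def make_rule(metric_name, comparator, limit):
    """Build a rule that compares an observed metric against a limit."""
    def rule(observations):
        return comparator(observations[metric_name], limit)
    return rule

# Hypothetical instances of the three rule types:
knowledge_rule = make_rule("credit_score", lambda v, lim: v >= lim, 650)
event_rule     = make_rule("queue_depth",  lambda v, lim: v > lim, 100)
value_rule     = make_rule("order_total",  lambda v, lim: v <= lim, 10_000)

obs = {"credit_score": 700, "queue_depth": 42, "order_total": 12_500}
# knowledge_rule(obs) -> True; event_rule(obs) -> False; value_rule(obs) -> False
```

Because each rule reduces to a measurement plus a comparison, changing a policy means changing an attribute (the limit), not rewriting the process logic, which is what makes automation straightforward.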

SOA and Process Flow
Many individuals and organizations still confuse Service Oriented Architecture (SOA) with Web Services on either the Internet (the Public Cloud) or an intranet (a Private Cloud).  SOA is a much larger change in IT architecture than that (i.e., "a gaggle of Web Services does not a SOA make").  As I discussed in SOA, The (Business) Process Layer, and The System Architecture, one key difference is that SOA formally separates the process flow from the functions of the process.  This separation makes it feasible to enforce a continuously changing set of rules while the system is operating, rather than updating or developing new applications and then rolling them out.  Consequently, this is one of the reasons that SOA-based systems are very agile, and one of the reasons that good Governance and Policy Management processes are so important to the success of SOA-based systems, as Gartner Group and other studies of SOA implementations have found.

Additionally, it is the reason that both the Orchestration and Choreography engines of the SOA-supporting infrastructure must be directly linked to a Rules engine, which uses a Rules Repository (within the AEAR), since both control the process flow of the Composite Application.  In this configuration, any change in a rule will immediately affect all processes using the rule, so rules modeling, verification, and validation are imperative functions of the Policy Management Process; otherwise the Services will end up in a complete muddle.
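The mechanism can be sketched in a few lines of Python: a process flow consults a shared Rules Repository at each decision point, so one rule change immediately alters every flow that uses it.  The repository, rule name, and process below are hypothetical, standing in for the Rules engine and Composite Application.

```python
# A minimal sketch: the orchestration's process flow reads its branch
# attribute from a shared Rules Repository rather than hard-coding it.
# The repository contents and the process are hypothetical.

rules_repository = {
    "approval_limit": 5_000,  # maintained by the Policy Management process
}

def order_process(order_total):
    """A composite-application flow whose branch attribute is external."""
    if order_total > rules_repository["approval_limit"]:
        return "route to manual approval"
    return "auto-approve"

before = order_process(7_500)                  # "route to manual approval"
rules_repository["approval_limit"] = 10_000    # one rule change...
after = order_process(7_500)                   # ...changes the flow immediately
```

This is also why the verification and validation warning above matters: a single bad rule change propagates to every Service that consults the repository, instantly.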

Where Rules Apply
There is yet another complication in this rules management discussion relative to SOA, System, and Enterprise Architecture: some policies and standards apply during the development or transformation process (Design Time), while others apply during Service operation (Operational).  All Design Time and Operational Policies and Standards are Design Constraints (see Why Separate Design Constraints from the Customer's System Requirements? for a discussion of Design Constraints).  Both the Systems Engineer and the System Architect must ensure that these requirements are identified and met.

Design Time
Design Time policies and standards divide into two types: those that affect how the designers and developers implement the Composite Application, and those that affect the resulting Composite Application itself.  For example, in an SEI CMMI Level 5 organization, developing a Composite Application following the dictates of the CMMI Level 3 process is a must, but whether or not the Composite Application has two-factor authentication is not a process concern.  However, if the organization has an organizational standard that all applications will have two-factor authentication, that standard will directly affect the resulting Composite Application.  So the first example standard affects "how the Composite Application is built" while the second dictates "what is built".  Good process engineering and management, systems engineering, and system architecture will ensure that all of these requirements are met.

Operational Policies and Standards are not IT-based, but "business" based, for whatever the "business" of the organization is.  They are the attributes of the "if-then-else" branching clauses in the "business" process flow, which is instantiated in the code.  While the branching statements themselves will generally remain the same (see the sidebar for the caveat), the attributes, as instantiated by the rules, can and will change.  This makes Verification and Validation of the Service (Composite Application) process flows during the development or transformation process difficult, at best.  This is the reason for the modeling, verification, and validation of the Services affected by a rules change before it is rolled into the operational environment.

Sidebar: In a four-part paper in the Journal of Enterprise Architecture, I envisioned a Self-Adapting Service Architecture in which both the form and the attributes of the branching statements adapt to changes as part of a self-rules-setting (or information abstraction) process (see my post The Hierarchy of Knowledge and Wisdom for the definitions).  A prototype of such a system is IBM's Watson computer, which beat several Jeopardy champions.  As I suggested in the fourth part of the paper, I would not expect such systems to be operational before 2021.
End of Sidebar

The Linkage
From this brief discussion, I think I've made clear that there is a close linkage between SOA and the Policies and Standards set by the Governance and Policy Management processes.  Getting this linkage in place will require the teaming of the leadership, Enterprise Architect, management, Systems Engineers, System Architect, and Subject Matter Experts (SMEs); this is a significant cultural shift throughout the organization, but organizations that achieve it will have a major competitive advantage over those that don't.

Monday, June 6, 2011

SOA Architects: "Don't Bury the Business Rules in the Database"

Why Rules Get Buried in Databases
Even though the Millennium Bug was first noted in 1982 (or perhaps earlier), it was not widely recognized until the mid-1990s.  Fundamentally, the "millennium bug" was caused by the need to minimize the amount of data storage during the 1960s and 1970s, and by the lack of understanding of System Architecture in that era.

Fragmented Architecture
Initially, the business logic was hard coded into the programming.  Partially, this was due to the fact that in the 1950s and early 1960s, computers only assisted in supporting the mathematical (e.g., accounting) functions of an organization.  So, it was rather straightforward to code in the Business Rules (the if-then-else logic, as I discuss in Types of Business Rules).  However, as the System Developers attempted to integrate these functions, and as the organization's leaders attempted to respond to a changing organizational environment and to changing governmental regulations, they found that changing the enabling and supporting IT systems was difficult and expensive (and many large organizations still do, since they are still dependent on millions of lines of code written in the 1960s and early 1970s).  A great part of the reason for this lack of agility was, and is, the hard-coded business rules in the program's logic--some of which are documented only in that logic.

Monolithic Architecture
In the early 1970s, as disk storage became somewhat less expensive, Relational Data Bases (RDBs) and Relational Data Base Management Systems (RDBMSs) started to replace earlier data storage systems.  The RDBMS enabled developers to map data elements into tables with relationships among all the data.  This mapping essentially converts data into information (see the Blog's glossary page).

The advent of the RDBMS was one of the technical enablers of the MRP, MRPII, and ERP systems, built using a Monolithic Architecture.  For example, the SAP R/3 system could not exist without an associated Oracle or other RDBMS.  The reason the Monolithic Architecture is superior to a Fragmented Architecture is that it reduces the number of interfaces that have to be maintained among programs to nearly zero--and as discussed in SOA, The (Business) Process Layer, and The System Architecture, I found that the cost per interface was $100 per month.  When you multiply that by the 1,000s or 10,000s of interfaces required in a large organization with a Fragmented Architecture, the apparent cost efficiency is enough to delight any finance engineer, be they the CFO or any of his or her minions.

Once the RDBMS systems took hold, the Database Designers (DBDs) and coders found that the easiest point to validate data input was just before insertion into the tables.  The RDBMS suppliers responded with triggers (if-then-else logic within the RDBMS).  These triggers rejected data that was out of conformance with some standard, which is the essence of a Business Rule.  However, once the DBD was given this tool, he or she started creating ever more complex triggers--triggers that embody Business Rules that may have little to do with data validation.  For example, many times when data is inserted, a trigger will change a flag or other indicator within a set of metadata (data about the data).  The flag may then change the course of the function or business process.  This type of trigger buries the Business Rules in the database.
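As a minimal sketch of this burying, the following example uses Python's built-in sqlite3 module to create a trigger that goes beyond data validation: inserting a large order silently flips a metadata flag that later alters the process flow.  The table names, column names, and the 5000 limit are hypothetical.

```python
# A minimal sketch of a Business Rule buried in the database: nothing
# in the application code mentions the limit -- it lives only inside
# the trigger. Schema and limit are hypothetical.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL);
CREATE TABLE order_meta (order_id INTEGER, needs_review INTEGER);

-- The buried rule: any order over 5000 is flagged for review.
CREATE TRIGGER flag_large_orders AFTER INSERT ON orders
WHEN NEW.total > 5000
BEGIN
    INSERT INTO order_meta (order_id, needs_review) VALUES (NEW.id, 1);
END;
""")

db.execute("INSERT INTO orders (total) VALUES (250)")    # id 1, not flagged
db.execute("INSERT INTO orders (total) VALUES (9000)")   # id 2, flagged
flagged = db.execute(
    "SELECT order_id FROM order_meta WHERE needs_review = 1"
).fetchall()
# flagged is [(2,)] -- the flag was set by the trigger, invisibly to
# the application and to the designer/coder/SME reading its code.
```

A developer searching the application code for the review rule will never find it, which is exactly the configuration management and induced-defect risk described below.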

The problem with an application built on a Monolithic Architecture is the inflexibility of the entire system, as I've discussed in The (Business) Process Layer, and The System Architecture and several other posts.  One key technical element of this inflexibility is the System Architect's and SMEs' inability to change or tailor the data structures, because the application is dependent on a single data structure (as well as a standard process, etc.).  When the triggers in the RDBMS include a significant number of Business Rules, the inflexibility increases, because the detailed designer, coder, or SME may have great difficulty even locating the Business Rule.  Therefore, he or she must write an add-on or bolt-in to modify the Business Rule, with all of the inherent configuration management problems that creates.  Further, even if the detailed designer can find the proper trigger to modify, ERP systems do not usually identify all of the functions that use the data or the trigger, so there is a significant risk of induced defects--defects in a secondary process caused by the rules change made to support the primary process or function of the transformation effort.

Service Oriented and Other Agile Architectures
As I discussed in The (Business) Process Layer, and The System Architecture, and several other posts, Service Oriented Architecture (SOA) and other agile architectures require separation of the processes and Business Rules from the functions of the process.  From the discussion above, the reasons should be obvious.  Both good Governance and Policy Management (see A Model of an Organization's Control Function using IDEF0 Model, The OODA Loop, and Enterprise Architecture, and Enterprise Architecture, Chaos Theory, Governance, Policy Management, and Mission Alignment) and agile Mission Alignment require this separation.  Without a Rules Repository, the Enterprise Architect and the System Architect are much less effective at optimizing the SOA or other agile architecture to support the organization in a constantly changing operational and technical environment.

The Prescription: an Agile Architectural Process Pattern
As I see it, the prescription for transforming an organization from function-based to process-based, and the IT architecture of its enabling and supporting systems from Fragmented and Monolithic to agile and Service Oriented, is the following.  First, understand and target agility, in addition to process effectiveness and cost efficiency.  Second, define or ensure that there are formal Enterprise Architecture (Mission Alignment, Governance, and Policy Management) and Systems Engineering/System Architecture processes that coordinate with one another.  These processes should have a three-month cycle, not a year or more, to ensure requirements agility.  Third, as I described in my posts Initially implementing an Asset and Enterprise Architecture Process and an AEAR and Asset and Enterprise Architecture Repository, start with the output of current projects to create the AEAR and exercise the processes.  Fourth, expect that the processes will be less than optimal at the start (the learning curve; see the Superiority Principle).  The risk reduction here is that since the process cycle is three months instead of a year or more, learning takes place at a much greater rate.

Addendum: When to Use Triggers When Building to An Agile Architecture
I had a further thought about triggers.  The Gartner Group recommends segmenting the complete data structure (supporting a monolithic architecture) into databases supporting the individual Service Components (e.g., Web Services).  Each of these Service Components would support either an entire organizational function or a logical portion of one.  I would suggest that triggers can be used within the databases supporting individual functions (Service Components), especially for ensuring data completeness and correctness.