Saturday, May 28, 2011

Why Separate Design Constraints from the Customer's System Requirements?

Traditionally, Requirements Identification, Analysis, and Management was based on "Shall" statements, that is, "the product (service) shall...".  Fundamentally, the "Shall Statements" were contractual obligations to be met by the contractor in creating a new product.  Systems Engineers and Requirements Analysts would begin the Requirements Identification and Management effort by "shredding" the contract to identify the requirements within the "shall" statements, attempting to identify both each requirement's description and its metric (i.e., how to know when the requirement was met).

Once the contractual "requirements" were identified, the Systems Engineer/System Architect would decompose those requirements, partially in an effort to:

1) identify any requirements gaps,
2) ensure that all requirements were single requirements and not a group of requirements, and
3)  identify evaluation procedures to ensure that the requirements were met. 

This last reason was a requirement of any Systems Engineering process because most of the contractual requirements had no method for identifying when they were met; they were statements of the capabilities the customer required.  For example, within Information Technology (IT) "the system shall be user friendly" is typical.  However, what the customer meant by "user friendly" was left to the imagination of the software developer.  This particular requirement has led to many "programmer friendly" systems being initially rolled out, a great many cost overruns to "fix the defects" (defects in the requirements), and a great deal of customer dissatisfaction.

With respect to the first two reasons for decomposition, I have seen both the contracts and the requirements stripped from the contracts of several historic aerospace programs, and poor indeed the resulting requirements were.  The consequence was that Systems Engineers, from the 1960s to the 1990s, spent a great deal of effort and time decomposing the requirements in an attempt to identify the real requirements (those that the customer wants, is willing to pay for, and that have a definable and quantifiable measurement of when each is met).

Ralph Young, in his book Effective Requirements Practices from 2001, discussed studies that found that less than 50% of the real requirements for a system were in the contractual agreement.  Further, significant numbers of the known requirements were either conflicting or simply wrong.  Since this has been the case for most efforts most of the time, you can understand why requirements decomposition (definition) became such an important function of the Systems Engineer.  The decomposition process included State Diagrams, Functional Flow Diagrams, and many other "requirements analysis" techniques that enable the Systems Engineer (Requirements Analyst) to tease out as many "missing" requirements as possible.

The Program Management Issue with Requirements (A Sidebar)
Once requirements decomposition is completed (though many times before completion), the Program Managers and Finance Engineers put a project plan together based on the Heroic Assumption that all of the customer's requirements are known.  That is the foundational assumption of the Waterfall process for System Realization, and it is patently false.  Consequently, project after program after effort fails to meet the programmatic requirements unless the Program Manager simply declares victory and leaves the mess for the operational Systems Engineers and Designers/Developers to clean up.  Generally, they do this through abbreviated validation or no validation at all.

The reason they assume that "all the requirements are known upfront" is that it is the only way they can see to create a project schedule, project plan, and earned value metrics to control the effort.  Therefore, they religiously believe that their assumption is true, when time after time it's proven false.  The definition of insanity often attributed to Einstein is "performing the same test over and over and expecting different results".

Assuming that "not all of the requirements are known upfront" makes life much more difficult for the Program Manager, unless the process and functions are designed around that assumption.  While this has been done (I, for one, have done it), it is heretical to Program Management's catholic doctrine.  Until the Program Management and Financial Engineering "disciplines" learn to put more emphasis on the Customer's System or Product requirements and less on cost and schedule, and understand that the known requirements are merely an incomplete and partially incorrect set, development and transformation programs will always miss their cost and schedule marks, in addition to having very dissatisfied customers.
(End of Sidebar)

From the mid-1960s to ~2000, I dealt with the issue of requirements identification, trying most of the methods described in Ralph Young's book cited above.  The net result was that all requirements identification methods failed for one of four reasons.
  1. The Systems Engineer didn't describe the requirements using the customer's language and ideas, and didn't rephrase or translate them into the language of the developers.  This led to many defects, finger-pointing exercises, and customer dissatisfaction.
  2. The second is described in the sidebar.  That is, the waterfall process used in most development and transformation efforts was founded on an assumption that is just plain wrong.
  3. For the types of efforts I was working on, IT transformation projects, the customer's system requirements were only partially based on the process(es) that they were supposed to support.  Most of the time they were short-sightedly based on the function that the particular customer was performing.  Consequently, "optimizing the functions, sub-optimized the process".
  4. All customer system requirements were treated the same.  I found this to be incorrect; treating them all alike led to many defects.
Types of Requirements: Capabilities, Functional Requirements, and Design Constraints
When I started analyzing the requirements in various requirements documents and systems in the early 1990s, I found that there were two classes of requirements, plus requirements that weren't requirements at all, just wishful thinking.  The two classes are those that the system "must perform" (or process functions it must enable and support) and those the system "must meet".  An example of a must-perform requirement is "The system will show project status using a standard form".  The standard form would be considered a must-meet requirement, because a form is nothing that a system "must perform", but it is a requirement that the system "must meet".  This requirement constrains the design.  The designer/developer can't create a new form; he or she must create a system that produces the standard form.  Therefore, I define it as a Design Constraint, since it constrains the design.  Likewise, a requirement that the system "must perform" shows action.  Therefore, I define this type of requirement as a customer system functional requirement, or more simply a Customer System Requirement.  Finally, there are the "wishful thinking" statements.  Many times, these are, in fact, genuine customer-needed capabilities (unknown requirements).  The reason they are not requirements is that they have no metric associated with them to enable the implementer to know when they have been met.  Therefore, rather than ignore them, I've put them into a category (or type) I call Capabilities.  These are statements the Systems Engineer must investigate to determine whether they are requirements or not.
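This taxonomy can be sketched in code.  The type names are mine, and the `must_perform`/`has_metric` flags are illustrative assumptions; in practice, a Systems Engineer makes these judgments through analysis, not boolean inputs.

```python
from dataclasses import dataclass
from enum import Enum, auto

class RequirementType(Enum):
    CUSTOMER_SYSTEM_REQUIREMENT = auto()  # the system "must perform" it
    DESIGN_CONSTRAINT = auto()            # the system "must meet" it
    CAPABILITY = auto()                   # no metric yet; investigate further

@dataclass
class Statement:
    text: str
    must_perform: bool  # does the statement describe an action the system takes?
    has_metric: bool    # is there a quantifiable way to know it has been met?

def classify(stmt: Statement) -> RequirementType:
    # A statement with no metric is not yet a requirement; it is a Capability
    # the Systems Engineer must investigate.
    if not stmt.has_metric:
        return RequirementType.CAPABILITY
    # With a metric, "must perform" statements are functional requirements,
    # while "must meet" statements constrain the design.
    if stmt.must_perform:
        return RequirementType.CUSTOMER_SYSTEM_REQUIREMENT
    return RequirementType.DESIGN_CONSTRAINT
```

Under this sketch, "The system will show project status" classifies as a Customer System Requirement, "Use the standard status form" as a Design Constraint, and the metric-free "The system shall be user friendly" as a Capability.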

In any software development or transformation effort, there are frequently more design constraints than customer system requirements, because there are so many types of design constraints and the relative number of functions is normally limited.  Typically, an organization does not radically reorganize or transform all of its processes and supporting systems concurrently; if there is a higher-risk method of moving to the going-out-of-business curve, other than fraud, I don't know it.  Instead, the organization will develop or transform a single business function or service in a single project, though it may have several projects.

On the other hand, all development and transformation efforts of an organization are subject to the policies and standards of the organization, to contractual obligations of the organization, and to external laws and regulations.  All of these constrain the design of systems.  For example, Section 508 of the US Rehabilitation Act affects the development of all governmental websites by ensuring that the visually and hearing impaired can use the site.  Again, the Clinger-Cohen Act affects both the choice of transformation efforts and the products IT can use within those projects, by not allowing software that has been sunsetted, or is being sunsetted, to be used in the operation of many businesses within the United States.  This act mandates that those systems be updated to a current version.  Obviously, these constrain the design of a new or transforming system or service of an organization, irrespective of the function of the system or service.

The organization has other design constraints, like having the organization's logo on all user interface displays, for example, or producing the same formats for the reports coming out of the new system, as from the prior system.  The organization may use Microsoft databases and therefore have expertise supporting them; in which case using Oracle or IBM products would not be feasible--except in very special circumstances (e.g., a contractual obligation).  Consequently, these "must meet" requirements are pervasive across all of an organization's development and transformation efforts.
Advantages of Separation
When I discovered this in the 1990s, I recommended to my management that we start a repository of design constraints to use on all projects of our aerospace and defense corporate organization, and I have recommended it many times since.  In my presentation I envisioned four benefits:
  • It enables and supports the reuse of Design Constraints and their evaluation methods.  This makes evaluating them both consistent and cost efficient.
  • It minimizes the risk of leaving a Design Constraint out of the requirements, since the Systems Engineer has a long list of these from the repository.  This is a much better way than trusting the Systems Engineer to remember or figure them out, or relying on private lists of design constraints kept by a single Systems Engineer.
  • If the Customer System Requirements are separated from the Design Constraints, then the System Architecture process is more effective and cost efficient because the System Architect does not have to attempt to sort these out and keep them straight during the process.  (Note: the Customer System Requirements are the requirements that are decomposed and from which the System Architecture is derived.)
  • Finally, if the organization uses an Enterprise Architecture process like the one I described in my posts A Model of an Organization's Control Function using IDEF0 Model, The OODA Loop, and Enterprise Architecture and Asset and Enterprise Architecture Repository, then understanding which systems will be affected by a change in policies or standards becomes a simple effort.
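A minimal sketch of such a repository (the class and field names are my assumptions, not any specific tool's API) would pair each Design Constraint with its reusable evaluation method and record which systems it applies to, so that a change in a policy or standard can be traced directly to the affected systems:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Set

@dataclass
class DesignConstraint:
    name: str
    source: str          # the policy, standard, law, or contract it derives from
    evaluation: str      # reusable procedure for verifying the constraint is met
    applies_to: Set[str] = field(default_factory=set)  # systems it affects

class ConstraintRepository:
    def __init__(self) -> None:
        self._constraints: Dict[str, DesignConstraint] = {}

    def add(self, c: DesignConstraint) -> None:
        self._constraints[c.name] = c

    def checklist(self) -> List[str]:
        """Every known constraint, so none is left to one engineer's memory."""
        return sorted(self._constraints)

    def affected_by(self, source: str) -> Set[str]:
        """Which systems must be re-evaluated when a policy or standard changes?"""
        return {system
                for c in self._constraints.values() if c.source == source
                for system in c.applies_to}
```

The `affected_by` query is the fourth benefit above in miniature: change a policy, and the repository answers which systems need review.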

Monday, May 23, 2011

Technology Change Management: An Activity of the Enterprise Architect

As I see it, Enterprise Architects have three responsibilities key to the success of the organization.  They are:
  1. Mission Alignment
  2. Governance and Policy Management
  3. Technology Change Management
I discussed the first two, Mission Alignment and Governance and Policy Management, in my post A Model of an Organization's Control Function using IDEF0 Model, The OODA Loop, and Enterprise Architecture.  In this post I will discuss the third, Technology Change Management (TCM).

The Mission of Technology Change Management
The Mission of TCM is to insert new technology, or replace existing technology, in order to increase process effectiveness and cost efficiency, while minimizing the impact of the technology change transformation process on the organization.  While that is a mouthful, it does identify the three key strategies of all TCM, in the following priority order:
  • First, increase the organization's process effectiveness--this is the only reason for having tools
  • Second, increase the cost efficiency of the tools--i.e., reduce the Total Cost of Ownership (TCO) of the tooling while maintaining the process effectiveness
  • Third,  transform the tool set while maintaining the process effectiveness--while this is third in priority, it is a strategy that the Enterprise Architect must keep in mind, especially when upgrading a current system to new technology.

The TCM Process and Its Activities
To meet the strategies above, the TCM process should have three high-level activities:
  • Technology Evaluation
  • Supplier Technology Roadmapping
  • Internal Technology Roadmapping
Technology Evaluation
There are two very different types of Technology Evaluation: those evaluating the potential of upgrades to current products, and those evaluating the potential of immature versions of potentially disruptive technologies.  Additionally, the Technology Evaluation activity enables and supports two distinctly different reasons for TCM: technologies that enable and support a product for sale, and technologies that have the potential to disrupt the organization's current strategies, processes, systems, or any and all of those.  This is the source of a significant conflict, an issue that currently goes unidentified within many organizations.

TCM and the CTO versus the Enterprise Architect (or Chief Architect)
The source of the conflict is over whether the technology is a Product or a support for Process.  If the technology is something the organization produces for sale (in its many forms), then it is a product.  Products include computer and networking hardware, and software for sale or rental/leasing.  The CTO is definitely responsible for new products and for new functions and components within current products.  The CTO organization should also evaluate the product's production readiness, even for internal systems.

However, if the technology enables and supports an internal process, or the tools supporting those processes, the Chief Process Officer (CPO) or Enterprise Architect should evaluate the technology with respect to its applicability to supporting the ongoing or radically altered strategies and processes.  This is especially true if the organization is a "service providing" firm, like those in the financial industry, IT systems integration, or legal and governmental organizations.  In fact, I really wonder why, for those industries that use a "unique" process to sell as a service to other organizations, there is never a CPO on the same level as the CFO, with the CTO and CIO as direct subordinates.  The reason I wonder this is that if the process is the value producer for the organization, then it should be of the highest importance and responsibility in the organization (without it, the responsibilities of the CFO are non-value added, because they support the organization's value producing engine).

New releases of current technology
Since one of the TCM strategies is to insert the technology into the organization's tooling with as little disruption as possible, the Enterprise Architect, together with the Subject Matter Experts (SMEs), will perform the evaluation activities.  In fact, large organizations may want to designate a specific SME for each of the main products supporting their processes.  These people must be evaluators, rather than the product advocates many of them turn into.  In fact, when I held a position as a technical leader in one major organization, I got into serious political trouble because I accurately identified the shortcomings of a product to the supplier, rather than lauding it.  The supplier reps reported my evaluation to my boss (who reported to the corporate CIO), and this is one of the real reasons he eventually figured out that he did not have budget for all the Enterprise Architects and so removed me.  Still, there is some poetic justice.  When a new CIO replaced the prior one, the product in question was replaced as quickly as was possible, given the size of the effort.  Further, the supplier that apparently "listened with his mouth" has been steadily losing market share.

Disruptive Technology
Another reason that the Enterprise Architects and product SMEs must be evaluators, rather than Evangelists, is that from time to time disruptive technologies change everything--that's why they are called disruptive (e.g., one of my forebears was a major manufacturer of buggy harnesses in the 1880s, just as something called the automobile was being innovated into a practical and affordable product...).  Therefore, the organization must maintain membership on standards bodies, and so on, where there may be early warning that suppliers and research organizations are creating these technologies, and then bring the technologies in for evaluation as early as possible--to evaluate the new function or functional group, and its production readiness.

Supplier Technology Roadmapping
While Mission Alignment and Governance and Policy Management are short-term, short-cycle activities, Technology Change Management has a much longer time horizon, since supplier technology plan data and information is available, though semi-fugitive, for two to three years out.  Many suppliers will release their plans under nondisclosure agreements.  Additionally, as suppliers and entrepreneurs get ready to release potentially disruptive technologies, they tend to publicize it.  Therefore, the Enterprise Architect has some significant ability to identify, track, and evaluate (for use in his or her organization) new tooling and plan for its introduction into the organization.  These plans are called supplier technology roadmaps.

Organizational Technology Implementation Roadmapping
Once the organization understands when a new version of a product, either a new release or a disruptive technology, will be production ready, the organization will require a second type of roadmap, an organizational technology implementation roadmap.  This roadmap is one process instantiating the third Strategy of TCM, minimizing the impact of transformation.  One reason for needing this roadmap is that the Enterprise Architect and SME team need time to evaluate the new release within the organization's systems and infrastructure.  Many times, new releases of software require new releases of operating systems, databases, and other software components to operate, or to operate properly.  I have found this a key consideration.  Therefore, the organizational technology implementation roadmap must account for the migration of all hardware and software within the organization, not just the single product.

Further, the roadmap must account for exceptions.  For example, if most of an organization's customers (or clients) are on a past release of software and there is a need to share data or information, then it may be a contractual obligation (or simply make sense) to stay at the same release until completion of the work, even if the rest of the organization is migrating to a new release.  If the software supplier is sunsetting a release of software (i.e., no longer supporting it) within the next 3 months, then it would make sense to change the organization's standard for that software to disallow any installations of that release and mandate a migration to the next or latest release, while waiving any migration for those projects or efforts with contractual obligations.
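The sunset rule in this example can be sketched as a simple decision function.  The 90-day window and the returned standard values are illustrative assumptions drawn from the example, not any industry standard:

```python
from datetime import date, timedelta

def release_standard(sunset: date, today: date, contractual_obligation: bool) -> str:
    """Decide the organization's standard for a software release nearing its sunset."""
    if contractual_obligation:
        # Waive the migration until the contracted work completes.
        return "waiver"
    if sunset - today <= timedelta(days=90):
        # Sunset within ~3 months: disallow new installs, mandate migration.
        return "migrate"
    return "allowed"
```

For instance, a release sunsetting in two months yields "migrate" for most projects, but "waiver" for a project contractually bound to a customer still on that release.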

Finally, the leadership of the organization will use the organizational technology implementation roadmap as one of the criteria to determine which Architectural Blueprints to fund in a particular cycle of the Mission Alignment/Mission Implementation process.

Wednesday, May 18, 2011

A Model of an Organization's Control Function using IDEF0 Model, The OODA Loop, and Enterprise Architecture

In my post The IDEF0 Model and the Organization Economics Model, I outlined the IDEF0 model, shown in Figure 1 below, and outlined the three functions contained within the Domain of the organization.
Figure 1

This post will describe the two high-level functions within the control function and link the IDEF0 model's control function with the process of the OODA Loop pattern as shown in Figure 2 (for more details on the OODA Loop see my post The OODA Loop in Mission Alignment Activities).  I chose to use both the IDEF0 pattern for an organization and the OODA Loop for the Control Process pattern because they are simple models that, in my experience, when properly applied, produce useful results.

Figure 2

The Functions of Control
As shown in Figure 2, above, there are two high-level functions by which leaders and managers control an organization: Mission Alignment, and Governance and Policy Management.

Mission Alignment
Mission Alignment is the function by which the leadership decides where to invest its limited resources to cost efficiently achieve the vision and mission of the organization.  If the organization uses Enterprise Architecture to enable and support Mission Alignment, then the definition is: the process of aligning the organization's enterprise architecture with the organization's mission, to ensure the organization's investments in processes and infrastructure optimally enable and support the organization's mission and vision.

An organization's Vision is the ultimate goal that it is attempting to achieve.  For example, the Vision of the United States Federal Government is embodied in the Preamble to the United States Constitution.  In Built to Last, Jim Collins and his team describe several businesses that have and execute against Vision Statements, in some cases for over 100 years.

While the Vision statement of an organization describes the organization's goal, final state, to-be state, or optimal state, the Mission statement(s) describe the intermediate objectives for achieving the goal.  This means that the Mission is of a more limited duration and that it must be measurably linked to the Vision.  Strategies enable and support the Mission by defining processes, procedures, and methods, or the road map, for achieving the objective of the Mission.  Consequently, the strategies should be measurably linked to the Mission--otherwise, how does the organization determine whether a strategy is successful?

An example of this hierarchy from vision to mission to strategies might be that a country's vision is a secure citizenry (which is part of the Preamble of the US Constitution).  A Security Mission during WWII was to liberate Europe from Hitler.  For the United States, Mission-supporting Strategies included growing the US Army from about 140,000 (in 1939--fewer troops than landed in Normandy on D-Day) to over 10 million, equipping all of the US forces and providing much of the equipment for the rest of the allies, and invading Europe to kill or capture Hitler.  Today, the Security Mission supporting the Vision might be to eliminate terrorist attacks on the United States.  The Strategies enabling and supporting the Mission could include creating and operating a high technology intelligence system, using highly skilled special operations troops supported with sophisticated tools, and developing and operating unmanned combat air vehicles (UCAVs).

Processes enable and support the Strategies by making them operational.  A process is defined as a set of activities ordered to achieve a goal; that goal is the Strategy that enables the Mission.  For example, a strategy of "using highly skilled special operations troops" requires processes for recruiting, training, and supporting such troops.  Each of these processes should be measurable in terms of achieving the strategy.  If not, then how can the leadership determine whether or not the investments made in the process are providing value to the organization?  Further, this will help the leadership to determine which processes the organization requires as the organization's external context changes; that is, is the Mission or Strategy still required, and if not, does the organization still require the processes supporting the strategy?
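The chain from Vision down through Mission and Strategy to Process, with a measurable link at each level, can be sketched as a simple traceability structure.  The element names and metrics below are illustrative, drawn from the WWII example above:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Element:
    name: str
    kind: str                             # "Vision", "Mission", "Strategy", or "Process"
    metric: Optional[str] = None          # how success at this level is measured
    supports: Optional["Element"] = None  # the next level up that this element enables

def trace(e: Element) -> List[str]:
    """Walk from a process up to the vision it serves; a break in the chain
    (or a missing metric) signals an unmeasurable or unaligned investment."""
    chain: List[str] = []
    current: Optional[Element] = e
    while current is not None:
        chain.append(current.name)
        current = current.supports
    return chain
```

A recruiting process would then trace up through "grow the US Army" and "liberate Europe" to "secure citizenry", making explicit whether each layer still serves the one above it as the external context changes.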

Additionally, the processes should be measured in terms of effectiveness, that is, how well the process is working.  Only when the leadership knows "how well" the current process is working can they make intelligent decisions as to where further investment is needed.  These "internal" metrics determine whether the flow through the process meets, does not meet, or exceeds demand for the product of the process.  These metrics may also determine the changes in cost efficiency of the process--notice this is the first layer where cost really enters the architecture (and most organizations really do not have effectiveness metrics coupled with cost efficiency metrics for processes; if anything, they have just cost efficiency metrics).

Tooling enables and supports the Processes by multiplying the effectiveness of the processes.  As indicated by Adam Smith, in Chapter 1 of An Inquiry into the Nature and Causes of the Wealth of Nations, or simply The Wealth of Nations, the only reason to use tools is to increase the process effectiveness of a process.  The military considers their weapons, intelligence, and logistic support to be "Force Multipliers", so that a smaller force with better tools can out think or out do a larger force. 

For example, at the start of the Battle of Gettysburg in the US Civil War, two cavalry brigades under the command of Brigadier General Buford held off three infantry divisions from two CSA Corps for more than 2 hours.  Up to then, an infantry brigade had always been able to chase away any cavalry fairly easily.  However, Buford gave his brigades two advantages: good position on the battlefield, and good tools.  The tool was the Sharps carbine.  It could shoot 5 to 8 times per minute, compared with the infantry, whose muskets could fire 2 to 3 times per minute.  The volume of fire made up for the small size of the force--this is the force (process) multiplier.  Likewise, carpenters use nail guns instead of rocks to drive nails because with nail guns they can drive nails more uniformly (increasing the quality, a measure of process effectiveness) and faster (increasing the throughput, another measure of process effectiveness).

Finance Engineering often does not consider the concept of tooling increasing the effectiveness of a process when making decisions, especially in IT, because there may not be clear and provable financial metrics for the benefits (process multiplier) directly attributable to the implementation of new tools.  For example, many accountants and CFOs will not accept "cost avoidance" as a benefit, since, if a cost is avoided, there is no way to measure what the cost would have been had it not been avoided.  Enterprise Architects can avoid this type of conundrum only by implementing a feedback loop like Business Activity Monitoring and Management (BAMM).

The problems for the Enterprise Architect are that demonstrating the benefits of BAMM and the investment decision-making feedback loop 1) takes time, two or more investment cycles, 2) may show that the metrics poorly describe what is happening, or 3) may show that the leadership and management are making poor and/or bad investment decisions.  The Enterprise Architect may be able to ameliorate the first problem by creating a 3-month investment cycle instead of the usual yearly investment cycle.  This is in line with the concepts of short cycle transformation processes as I described in my post SOA in a Rapid Implementation Environment.  The second is almost inevitable at the start.  The leadership will propose ill-conceived metrics because they fit with their best understanding.  Once there are two or three investment cycles, the leadership, guided by the Enterprise Architect, will define and delimit better metrics; this process will continue for the life of the organization.  The third is more difficult because leadership and, particularly, management do not like anyone even to question their decisions, let alone demonstrate that their decision-making ability is poor, or that their decisions are in their own best interests and not in the organization's best interest, that is, their private agenda.

The Enterprise Architect has the task of integrating the Vision, Mission, Strategies, Processes, and Tools into an "organizational functional design", that is, an Enterprise Architecture, while using it to support the organization (see my posts Initially implementing an Asset and Enterprise Architecture Process and an AEAR and Asset and Enterprise Architecture Repository for a RAD-like process for implementation).  Any Enterprise Architect that can perform this process successfully is worth every penny he or she is paid and more, since the organization will reap major dividends in terms of effectiveness, cost efficiency of meeting its Mission, and longevity and agility in moving toward its Vision.

Governance and Policy Management
Governance and Policy Management is the second function of leadership and management.  Governance and Policy Management identifies what policies and standards to set, defines each, determines when and how to enforce them, and how to mediate and/or adjudicate them.  Policies and Standards fall into two categories, constraints (the "thou must") and restraints (the "thou must not").  Consequently, for an organization, policies and standards are the equivalent of Design Constraints for a new product or process transformation project (see my post Types of Customer Requirements and the System Architecture Process).

The only rational reason for setting a policy or standard is to reduce intra-organizational process friction.  Process friction occurs when the interfaces among activities or between processes don't align, when there are conflicting policies and standards, or when Mission, Strategies, Processes, or Tools don't align.  Most importantly, no policy or standard should conflict with the Vision or Mission of the organization as a whole.  Process friction is sand in the gears of any process, and it can bring a process to a halt.  In the military, constraints and restraints on a Mission are called "Rules of Engagement".  A restraint on a military mission might be "Don't kill thy fellow soldier", that is, no friendly fire incidents.  Organizations set policies and standards for much the same reason, though normally the result is not as catastrophic.  For example, using the same version of a Web Service standard will reduce interface friction among the Web Service Components of a Composite Application in a Private Cloud.

Governance and Policy Management is really a pair of conjoint processes: Governance, and Policy Management.

Governance
As attributed to Winston Churchill, "To Govern is to Decide."  In the case of the organization, it is to decide which policies and standards to set to minimize intra-organizational process friction.  Normally, the highest level of leadership of an organization decides what policies and standards to set.  However, while in their mind's eye the new policy will not conflict with existing policies or standards, when enacted within Policy Management, it may.  Therefore, the Governance process should have a feedback loop to enable the leadership to understand the consequences, both good and bad, of the new policy.  This is another task of the Enterprise Architect.

Policy Management
The three primary activities of management within the Policy Management process are to enact, enforce, and mediate/adjudicate policies and standards.
  • Enact - Management creates new or updates old policies and standards based on the decisions of the leadership resulting from the Governance process.  Creating a policy (a standard, or for government, a law) is more than documenting a policy description.  Making the policy enforceable requires the association of organizational "business rules" that define and delimit metrics for when the policy has been violated.  In government, these business rules may be known as "regulations".  In IT architecture, these rules may be parameterized and instantiated in a rules repository.  This works especially well in Enterprise SOA-based Composite Applications.
  • Enforce - Once management has described, documented, and delimited a policy or standard and instantiated it with business rules, management must enforce the rules; otherwise the policy or standard becomes a mere admonishment to virtue (something that looks nice on paper but is never followed).  While enforcement may seem fairly straightforward and simple, in detail it turns out to be quite complex.
  • Mediate/Adjudicate - Management must also either mediate or adjudicate penalties.  Many people assume that judging is "yes or no" and that penalties are imposed in a uniform manner.  In fact, many cultural/social/political forces militate against uniform penalties, including the "management protective association" (which ensures that its members receive light penalties as long as there is no big stink; since management controls all facets of the Policy Management process, it is easy for them to protect their own).  This is the key reason that the creators of the US Constitution separated each of these activities into the three branches of government.
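The enact/enforce pattern above can be sketched in a few lines: a policy becomes enforceable only once a business rule gives it a measurable limit.  The `Rule` structure, the policy name, and the threshold below are hypothetical illustrations, not a real rules-repository API.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    """A business rule that makes a policy enforceable (the Enact step)."""
    policy: str   # the policy or standard the rule instantiates
    metric: str   # what is measured
    limit: float  # threshold that delimits a violation

def enforce(rule: Rule, observed: float) -> bool:
    """Enforce: report whether the observed metric violates the rule."""
    return observed > rule.limit

# Hypothetical example: a standard limiting Web Service response time.
rule = Rule(policy="WS response-time standard", metric="latency_ms", limit=500.0)
print(enforce(rule, observed=720.0))  # True -> escalate to mediate/adjudicate
```

Violations surfaced by `enforce` would then feed the mediate/adjudicate sub-process described above.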
There are many patterns for the Governance process, and there are many ancillary processes and sub-processes associated with Policy Management, that I have not discussed, though I may in future posts.

The IDEF0 pattern's use of the OODA Loop process pattern in Control
The IDEF0 functional architectural pattern (shown in Figure 1) and the Control architectural pattern (shown in Figure 2) show two levels of detail of an abstract architectural pattern of functions for control of the organization.  However, the Control of an organization is much more than levels of functions in an architecture.  Control requires a process pattern (also shown in Figure 2), that is, the activities and procedures that the leadership, management, and Enterprise Architect use for aligning the mission and governing the organization.  I am recommending the OODA Loop process pattern, the functions of which I discussed in my post The OODA Loop in Mission Alignment Activities.  The reason is that, like the IDEF0 pattern, it works.  The acronym OODA stands for the four activities in the loop:
  • Observe
  • Orient
  • Decide
  • Act
These terms are defined in my post cited above.
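The four activities above form a repeating control loop, in which the Act step changes the system that the next cycle will Observe.  A minimal sketch (the function names and the toy numbers are my own illustration, not from the post cited):

```python
def ooda_cycle(observe, orient, decide, act, state):
    """One pass through the OODA Loop: each activity feeds the next,
    and act() changes the system that the next cycle will observe."""
    observations = observe(state)      # measure current processes/tooling
    assessment = orient(observations)  # place measurements in the "as is" picture
    choice = decide(assessment)        # pick the change to make
    return act(choice, state)          # perform the change; new state feeds back

# Toy example: drive a metric toward a target of 4 over repeated cycles.
state = {"metric": 10}
for _ in range(3):
    state = ooda_cycle(
        observe=lambda s: s["metric"],
        orient=lambda m: m - 4,           # gap against the target
        decide=lambda gap: gap // 2,      # change half the gap per cycle
        act=lambda step, s: {"metric": s["metric"] - step},
        state=state,
    )
print(state)
```

The point of the sketch is the feedback: the output of Act is the input of the next Observe, which is exactly the role the feedback loop plays in Mission Alignment below.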
Mission Alignment
In the process of Mission Alignment, the OODA Loop is used in the following way.

Observe - The Enterprise Architect observes the current processes and tooling by measuring them during each cycle through the Mission Alignment process.  In effect, the observe activity is the feedback loop of Mission Alignment and the OODA Loop process pattern.

Orient - The Enterprise Architect orients the observations by inserting them into the "as is" architecture.  The architect pays particular attention to any processes or tooling that were changed in the last cycle, to determine if the benefits he or she predicted were, in fact, met.  This helps the architect determine whether further change is required in the processes, the tooling, or the method for measuring the benefits (both increased process effectiveness and cost efficiency).  This activity includes modeling the current or "as is" Enterprise Architecture and comparing it with models of candidate changes.  This helps the Enterprise Architect determine which candidate changes have the greatest potential for increasing process effectiveness and/or cost efficiency.

Decide - Once the Enterprise Architect determines the recommended candidates for change, he or she needs to create a blueprint for each of the recommended candidates.  A blueprint has three sections.  The first section is a technical description of the change, that is, a notional design.  Additionally, this section should include all risks identified and assessed by the Enterprise Architect--and risks are not bad things to identify and assess.  The second section of the blueprint is a benefits analysis, documenting the potential process effectiveness and cost efficiency benefits found in modeling the candidate change.  These potential benefits are much more accurate than many current "benefits analyses" because they are measurements of support for the organization's Vision and Mission, instead of mere support of an activity or even a process.  The final section is the estimated cost of the candidate change.  Since it is based on the notional design, generally it will fall into the Rough Order of Magnitude (ROM) category, though many customers, managers, and finance engineers treat the number as a "not to exceed" number.  Consequently, many Enterprise Architects err on the high side of the estimate, which kills many good candidate efforts.
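The three-section blueprint could be held as a simple record; the field names and the sample values below are my own illustration, not a prescribed format:

```python
from dataclasses import dataclass, field

@dataclass
class Blueprint:
    """One candidate change effort, with the three sections described above."""
    notional_design: str                          # section 1: technical description
    risks: list = field(default_factory=list)     # section 1: identified and assessed risks
    benefits: dict = field(default_factory=dict)  # section 2: effectiveness / cost efficiency
    rom_cost: float = 0.0                         # section 3: Rough Order of Magnitude estimate

# Hypothetical candidate effort:
bp = Blueprint(
    notional_design="Consolidate five order-entry interfaces into one Composite Application",
    risks=["legacy data conversion", "user retraining"],
    benefits={"process_effectiveness": "+15%", "link_maintenance": "-$5,000/month"},
    rom_cost=250_000.0,  # a ROM, not a "not to exceed" number
)
```

Holding the blueprint as structured data also makes the Observe step of the next cycle easier, since the predicted benefits can be compared mechanically against the measured results.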

Once the Enterprise Architect presents the blueprints for the candidate change efforts to the organization's leadership, the leader or leadership team must decide which to implement.  This is still difficult because there will never be as much funding or time (the programmatic requirements) as there are good potential efforts.

Act - Once the organization's leadership has decided which efforts to fund, the Systems Engineer and System Architect, together with the Program Manager, perform the transformation effort.  The first task, before project planning, is performed by the Systems Engineer.  Starting from the requirements on which the Enterprise Architect based the notional design, the notional design itself, and the identified risks, the Systems Engineer works with the customer to create a much more detailed set of functional requirements and design constraints.

The reason for the Systems Engineer gathering and documenting an initial set of detailed customer system requirements before the start of project planning is that the project plan should be based on those requirements.  Many projects and programs fail simply because they create the project plan first, then attempt to perform the requirements analysis second.  The result is that the plan is based on poor or non-existent requirements, forcing a complete replan after the requirements analysis is performed ("putting the cart before the horse" is never a good thing).

If you make the assumption that "not all of the real requirements are known upfront", hardly a heroic assumption, then using a RAD or rapid implementation process, such as the one I describe in SOA in a Rapid Implementation Environment, would be a good thing to do.

When performing the first OODA Loop of the Mission Alignment process, the Enterprise Architect should start with this Act activity, for two reasons.  First, the Systems Engineer has identified the customer's requirements, and these have associated metrics.  The Enterprise Architect can measure the current system to determine the same metrics for the current system, insert that data into the AEAR, and then measure the transformed system to show the success of the effort.  In doing so, the Enterprise Architect can demonstrate the operation of the OODA Loop process to the leadership, while starting the build-out of the AEAR.

Second, for any medium or large-sized organization, there is no way to collect all of the necessary data for an Enterprise Architecture before the data is out of date.  The reason is that it typically takes 6 months or more to create the AEAR framework and to determine and insert the necessary data into it.  Since most medium and large-sized organizations are constantly inserting new systems, software, and hardware, and retiring old ones, by the time the Enterprise Architect has enough data to start performing the Orient activity, the "as is" data has become the "as was" data; that is, the tempo of change is faster than the data about all current systems can be collected.  Any Enterprise Architect who is intent on making recommendations based on a complete, correct, consistent, and current set of assets is doomed to failure.  Many organizations, including those in the US Federal government, have found this to be the case--creating a complete Enterprise Architecture upfront is an exercise in futility.  Consequently, the organization's leadership often kills an Enterprise Architecture project before it can prove to be useful.

There are only two ways I know of that an Enterprise Architect can be successful: either start building the Enterprise Architecture with one sub-organization, or start with the current projects.

If an Enterprise Architect starts with a single sub-organization, it must be small enough that the Enterprise Architect can gather all of the data for the AEAR in 3 months or less.  There are two reasons for this.  First, customers tend to lose interest if there are no results within that time frame.  Second, if a RAD-like process is used, the Enterprise Architect can be synced up with the next implementation cycle.

I would not recommend this method for initializing the Mission Alignment process, because it will take two or three cycles before the process is really capable of producing good assessments and blueprints; this is not terribly impressive to the customer (the leadership).

Instead, I would recommend starting with data gathered for current projects and efforts.  The reason is that, for whatever reason, the leadership has already intuitively weighed the benefits and costs of each effort, so they are attuned to any measurements of the before and after transformation; that is, they have a vested interest in the results.  The Enterprise Architect should be ready to defend the methodology of analysis, especially if the results are poor.  Remember that, first, the initial set of metrics typically does not measure what the leadership intended.  Second, there is a process learning curve--it may take 6 months before the users employ the process and system with facility.  Third, many of the efforts to transform processes, activities, and systems are based on an inaccurate understanding of them.  Fourth, because there is no Mission Alignment process--the Enterprise Architect is just starting it--the decisions of which projects and other efforts to fund are based on beauty contests, backroom politicking, and private management agendas.

This project-based method for building out the architectural data of the AEAR assumes that within 3 to 4 years, at least 98 percent of the processes and tooling will be touched by a project or other effort.  This is a near certainty because both software and hardware technology are changing so rapidly that it is hard not to transform and/or update nearly every facet of an organization's architecture.  This means that as systems are transformed, the AEAR is updated and becomes more and more valuable to the organization.  All the while, the Enterprise Architect is performing his or her role instead of trying to build out the AEAR.

Governance and Policy Management
Any one Policy or Standard is almost always considered independently from all other Policies and Standards.  Most organizations have no architectural framework for policies and standards and never really measure the effects of a new or revised policy on the processes and systems of the organization.  Consequently, many policies and standards create as much or more process friction as they resolve.  Without a feedback loop, which many, if not most, Governance and Policy Management processes and systems lack, the leadership and management have little chance to ferret out the conflicts, anomalies, and non-value-added Policies and Standards, except when a policy or standard causes so much friction or so many issues that the problems become obvious.

Again, I would recommend that the OODA Loop decision-support pattern be used.  For policies and standards, there may be no standard duration (like 3 months for a Mission Alignment cycle).  Instead, an OODA Loop for the Governance and Policy Management process will occur on an as-required basis.  However, I would recommend that all policies and standards be reviewed at least once every 18 months to two years.

Observe - As with the observe activity of Mission Alignment, the Enterprise Architect uses the Observe activity to measure the effects of a change; only, in this case, the Enterprise Architect measures the change's effects on the processes the policy or standard affects.  This is the feedback loop for the Governance and Policy Management process.

Orient - Once the Enterprise Architect has made the measurements, he or she has to orient these measurements within the Enterprise Architecture, as embodied in the AEAR.  This may require significant effort since, as discussed above, Policy Management must enact, enforce, and mediate/adjudicate policies and standards, and the Enterprise Architect must evaluate the policy or standard within each of these sub-processes.  Additionally, the Enterprise Architect must analyze and evaluate the "business rules" associated with each policy or standard to understand their impact on the processes and systems they affect.

As with the Orient activity in Mission Alignment, the Enterprise Architect can model the "as is" architecture using the measurements.  Then, as required by the leadership, the Enterprise Architect can change the metrics of the business rules, the business rules themselves, or the linkage of the rules (and policies/standards) with the processes or strategies, to determine if there are options for further process friction reduction.  If the Enterprise Architect finds an issue or opportunity, he or she can create a recommendation for a change and present it to the Governance Board (i.e., the leader, leadership team, or other designee).
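Such a what-if pass over a business rule's metrics can be sketched as below; the measurements, the friction proxy, and the candidate thresholds are all hypothetical illustrations:

```python
# Hypothetical measurements of a process governed by a response-time standard.
observed_latencies_ms = [120, 480, 530, 610, 700]

def friction(limit_ms: float) -> int:
    """A crude friction proxy: violations needing mediation under a candidate limit."""
    return sum(1 for v in observed_latencies_ms if v > limit_ms)

# Orient: compare the current limit against candidate revisions of the rule.
for limit in (500.0, 600.0, 750.0):
    print(limit, friction(limit))
```

Varying only the rule's metric in the model, before enacting anything, is what lets the architect bring a recommendation, rather than a guess, to the Governance Board.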

Decide - The Governance Board will then decide whether or not to move forward with the recommendation.  If the Governance Board decides to move on the recommendation, they will forward it to the appropriate Policy Manager.

Act - The Policy Manager will work with the Enterprise Architect to craft the appropriate new or revised policy or standard.  That will include adding, revising, or deleting business rules through the use of the three sub-processes of enact, enforce, and mediate/adjudicate.

The IDEF0 architectural pattern, in part, describes the functions, Mission Alignment and Governance and Policy Management, that an organization's leadership and management use to control the organization.  The OODA Loop is the process pattern for the decision-making within both Mission Alignment and Governance and Policy Management.

Postscript:  There is one other major activity that the Enterprise Architect should be involved with, that is, supporting the CTO and CIO in Technology Change Management (TCM), but more on that in another post. 

Tuesday, May 10, 2011

SOA, The (Business) Process Layer, and The System Architecture

As I discussed in my posts The Paradigm Shift of Service Oriented Architecture and SOA in a Rapid Implementation Environment, there is a significant cultural/business/technical shift in thinking when an organization starts to migrate from a fragmented, monolithic IT architecture (or from Object Oriented/Web Service code development using COTS or custom-developed applications) to Composite Applications within an SOA.  The key shift is the formal linking of the IT application with the business process, instead of with the functions that make up the process.  I discuss several of the advantages in Assembling Services: A Paradigm Shift in Creating and Maintaining Applications.

Process and The Assembly Line
While it may not seem like much of a change to link the application to the process instead of the function, consider the concept in the context of an assembly line.  An assembly line is the tooling enabling and supporting the process of assembly.  In turn, a process is "a set of activities ordered to achieve a goal"; the goal, or operational mission, of the assembly line is to assemble the product with no defects.  I suspect that all manufacturing engineers would agree with that goal, but understanding that Murphy's law works on all assembly lines, they attempt to minimize the number of defects.  On an assembly line, as much as is feasible, all of the activities and their enabling and supporting tooling work in concert with one another, orchestrated by the manufacturing engineering team.  This team spends significant time and effort measuring the flows through the assembly line to ensure a) that the line produces products with as few defects as possible, while b) producing those products as cost efficiently as possible (e.g., minimizing the work in process [WIP]), and c) meeting constantly changing demand with as little inventory as possible (which constitutes operational agility for the assembly line).  Additionally, the manufacturing engineers want the tooling to be agile, so that as the components of the product change with changes in customers' requirements (and wants), the cost of retooling is minimized.

Process and IT
Contrast that with the way organizations engineer IT.  From its inception supporting single functions, like accounting and payroll, with card-based semi-mechanical "electronic computers" in the late 1950s, IT has focused on supporting individual functions.  By the early 1980s the result was apparent to anyone who developed or used applications: they were information silos; that is, the tooling supporting each function had associated data, but there were no interfaces between functions to enable, for example, activity 2 in a process to share its data with activity 3.  During this time, I worked at a major university.  In one Dean's office that I supported, the Dean's administrative assistant had five terminals on her desk.  Why?  For two reasons: she needed data from five applications to perform her job of supporting the Dean, and she needed output from one application to serve as input to another--she was, in fact, the interface between the functions.

Since this situation was intolerable and expensive for operating the functional applications, the developers/coders created interfaces as the customers identified the need (and had funding to pay for it).  Consequently, the application layer's functional design went from a series of virtually unconnected silos of data and code to something approaching a picture of a bowl of spaghetti.  In one study I did measuring the cost of maintaining each link in this "fragmented" architecture, I found that for one small but business-critical process of one business unit, the cost was approximately $100 per link per month.  Even if half that figure is representative (i.e., $50 per link per month), a mid-sized organization would be spending a significant portion of its IT budget on link maintenance.  This cost created great pressure from finance engineering on IT management to become more cost efficient.
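To see why link maintenance can dominate the budget, note that n point-to-point-interfaced functions can require up to n(n-1)/2 links.  A quick sketch of the arithmetic, using the illustrative $50 per link per month figure (the 40-function organization is a hypothetical example):

```python
def monthly_link_cost(n_functions: int, cost_per_link: float = 50.0) -> float:
    """Worst case: every pair of functions has a maintained point-to-point link."""
    links = n_functions * (n_functions - 1) // 2
    return links * cost_per_link

# A hypothetical mid-sized shop with 40 interfaced functional applications:
print(monthly_link_cost(40))  # 780 links -> $39,000 per month in the worst case
```

The quadratic growth in links, not any single link's cost, is what turns the "bowl of spaghetti" into a budget problem, and it is exactly the pressure that motivated the MRP-style integration described next.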

Again, manufacturing engineering provided the concept: Material Requirements Planning (MRP).  The MRP systems interlinked all of the data systems and IT functions supporting the manufacturing floor by integrating them into a single application, thereby eliminating all of the linkages among functions.  This concept was expanded into MRPII, and then into the Enterprise Resource Planning (ERP) systems, like the SAP and Oracle tools.  The architecture is called Monolithic because all of the functions are tightly coupled in a single application.

Within a couple of years system users found several issues with ERP systems.
  • The ERP systems are difficult to tailor to support the organization's processes.  Typically, the suppliers of ERP systems build the functions to a group of standards used by most industries, or by most organizations within an industry.  If, for whatever reason, the enterprise implementing the ERP does not have processes that follow those standards, then either a) the enterprise must change its processes--unlikely--b) it must attempt to tailor the ERP system to match its processes, or c) some combination of the two.  Generally, none of the alternatives are particularly effective or popular in large organizations with a diverse product mix.  The result is that either the initiative fails completely or the personnel of all of the business lines work much harder and longer (and possibly create "shadow systems") to overcome the limitations imposed by the ERP system.
  • The systems are difficult to update.  Frequently, implementing the software supplier's next version of a package is the equivalent of installing, configuring, and tailoring the original system all over again.  The reason is that the supplier has updated the technology, and perhaps the functional architecture (design), sufficiently that any bolt-ons, add-ins, or other tailoring will not work with the new version.
  • The systems are operationally inflexible.  That is, they are not agile: able to respond successfully to unexpected challenges and opportunities.  Because the tailoring required for an ERP system can be major, the costs and time involved become a significant item in the organization's budget.  Consequently, the time to change becomes long and the cost of change becomes high.  This leads to the problem that IT systems have become an impediment to change rather than an enabler, as studies by Gartner and Forrester have shown.
The advent of Object Oriented Programming and its successor, Web Services, enabled organizations to partially address the issues above.  As discussed in my post SOA in a Rapid Implementation Environment, OO design enables a developer to treat an application as a set of objects.  If these objects use Web Services standards (either WS or JSR) for their interfaces, then they are Web Services.  In effect, this transformed an application into a set of components (classes, or Web Service components) that could be lashed together to create the application.  The advantage of the OO software architecture is that each of the objects is a small code module that can be updated or replaced with new technology; as long as the object's external interfaces and the resulting functions remain the same, the application will operate in the same manner.  This change addressed two of the three issues noted above: updating the application's technology is possible without a complete re-installation and configuration of the application, and initial tailoring of the application is much more feasible.

While this tailoring of an application made up of objects or Web Service components does address the agility issue, it does so only in a very minor sense.  Since an Object Oriented architecture only enables an implied workflow, rather than making the workflow explicit and independent of the Service Components, it requires reprogramming to change in response to unexpected challenges and opportunities.

Process, SOA, and the System Architect
As discussed in my post SOA in a Rapid Implementation Environment, SOA formally separates the process flow from the functions.  This formal separation means that the organization's transformation teams can restructure and reorder the functions without reprogramming the functions themselves.  Instead, the functions can simply be reassembled (see my post Assembling Services: A Paradigm Shift in Creating and Maintaining Applications).
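The separation can be sketched as a process flow held as data, distinct from the Service Components it orchestrates; reordering the flow touches no component code.  All the function names and the toy order record below are illustrative, not a real orchestration engine's API:

```python
# Service Components: independent functions with stable interfaces.
def validate_order(order): return {**order, "valid": True}
def check_credit(order):   return {**order, "credit_ok": True}
def schedule_ship(order):  return {**order, "shipped": True}

# The process flow is data, not code -- this is what SOA separates out.
flow = [validate_order, check_credit, schedule_ship]

def run(flow, order):
    """A minimal orchestration engine: walk the flow, passing the order along."""
    for step in flow:
        order = step(order)
    return order

print(run(flow, {"id": 17}))

# Reordering the process requires no reprogramming of the components:
reordered = [check_credit, validate_order, schedule_ship]
print(run(reordered, {"id": 18}))
```

In a real Enterprise SOA the flow would live in an orchestration language such as BPEL rather than a Python list, but the agility argument is the same: the transformation team edits the flow, not the functions.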

Formally and explicitly assembling Composite Applications using orchestration and choreography enables the System Architect to model the organization's (business) process for both effectiveness (in achieving the organization's mission) and cost efficiency, and then to model the Composite Application linked with the process to determine its ability to optimally enable and support that process in both an effective and cost-efficient manner.  Consequently, when the organization's processes require change in response to changes in the organization's mission, strategies, environment, or technology, the System Architect, together with the rest of the transformation team, may only need to change the process flow Component, add or update certain Service Components supporting functions, or both.  This makes support for the processes much more agile, and it links the SOA-based Composite Application directly to the organization's process.

There is an additional benefit to separating the process flows from the Service Components.  When the organization's automated business rules change in response to changes in policies and standards (which in turn should be in response to changes in Governance), the System Architect can very quickly evaluate the change by modeling the process and Composite Application to understand the consequences of the change.  In the best situation, the System Architect would model all affected applications before any proposed change in policies, standards, or business rules is put into place because this would enable both the Governance and Policy Management teams to understand the consequences and results of the proposed change before those changes become unmanageable negative externalities.

Obviously, this is a significant shift in the organizational culture.

Wednesday, May 4, 2011

SOA, the User Interface Service Components, Apps, and Validation before Registering

System Architecture and the User Interface
In an article I wrote for The Northrop Grumman Technical Review Journal, entitled Service-oriented Architecture and User Interface Services: the Challenge of Building User Interfaces in Services, I described five functional categories of user interfaces that a system architect might use to enable a user to employ an SOA-based Composite Application.  These included: Thin Client, Portal, Rich Client, Smart Client, and Rich/Smart Client.  I described the characteristic sub-functions of each, such that a system architect would be able to determine which would serve the Composite Application's user best.

In the article I linked the user interface categories to three categories of users (not my categories, but I could not track down the source): informational, transactional, and authoring.  An informational user is one who uses data and information, either ad hoc (through search and discovery methods) or regularly, to retrieve information from one or more sources.  The data flow is primarily toward the user.  Many commercial cable companies set up their networks for this type of user, with data rates to the consumer of the data twice as fast as those back to the supplier.  A transactional user is one who repetitiously inserts and/or retrieves the same type of data from an application.  For example, the office administrator in a doctor's office makes appointments; that is, he or she looks up open dates and times on a calendar and then inserts an appointment.  For organizations, including businesses, the vast majority of the uses of data and user interfaces are of this type.  Finally, there is the authoring use.  This customer function is the most creative and would include programming; document, video, and audio file creation and editing; engineering; verification and validation; and innumerable other content creation activities.  Creating a user interface for the authoring category is the most complex because this service links to the most complex functions of an application.

There were two reasons for writing this article.  First, for SOA-based Composite Applications, to demonstrate that the user interface could be built in Service Components, as there was a good deal of discussion between my colleagues and me on this point.  Second, to demonstrate the patterns that a System Architect can use in the activity of decomposing the customer's system requirements and deriving the functional requirements, which is a key step in the system architecture process.

An Example
While a functional design for the user interface Service Component may seem like a semi-useful abstraction, Apple seems to have intuitively understood the concepts very well.  For those versed in SOA, an Apple "App" is simply a Service Component.  It performs a single function, or a small group of closely coupled functions (in a system architecture sense), for the user.  Apple, and subsequently many other suppliers of wireless user interface devices, are now selling these User Interface Service Components.  Most of them are for the informational user, if you can call streaming a movie informational--the functional concept is the same.  Likewise, stock quotes, news feeds, the real-time weather radar image, the restaurants closest to the user (with customer critiques), and earth images of the user's current location (i.e., navigational maps) are all informational; that is, they are likely used on an as-needed basis, or semi-regularly, to gain data and information about what is going on in the environment that affects the user.

Even Apple's iTunes and App Store are informational in that they link to discovery and download functions (services) on the server side of the system.  There are Apps, like music players, games, and currently, the calendar, that are standalone applications (functions), but even the camera function (service) is loosely coupled with the e-mail user interface Service Component.  Consequently, the system architecture of the new mobile devices is, in fact, SOA-based; they just don't want to call it that.

As further proof, Apple is treating its Apps in the manner a Service Component supplier should be treated.  Apple has a significant verification process to ensure that Apple's policies and standards are met before the App is offered to customers.  That is, Apple is verifying the Apps the way all Service Component suppliers should verify Service Components, especially user interface Service Components.  These verifications include meeting security and dependability standards, as well as having a Service Component Description.  The App Store serves as the Service Component registry and discovery function in the SOA-based architecture.

Consequently, this is an excellent example of an SOA-based architecture (though it really doesn't use Web Services).

For more on SOA see SOA in a Rapid Implementation Environment, Enterprise SOA vs Ecosystem SOA, Private Cloud Computing vs Public Cloud Computing, Choreography vs Orchestration: Much Violent Agreement, and SOA Orchestration and Choreography Comparison.  For more on System Architecture see SOA in a Rapid Implementation Environment, Enterprise Architecture and System Architecture, and Lean or Agile Enterprises and Architecture.