Saturday, November 19, 2011

Changing the Congressional Budget Office to the Congressional Enterprise Architecture Office

The Current Mission of the CBO
Currently, the mission of the Congressional Budget Office (CBO):
"is to provide Congress with objective, timely, nonpartisan analyses needed for economic and budget decisions and the information and estimates required for the Congressional budget process" [from CBO Testimony: Statement of Robert D. Reischauer, Director, Congressional Budget Office, before the Joint Committee on the Organization of Congress, Congress of the United States]

The director broke that into three operating strategies:
1. Helping the Congress formulate a budget plan;
2. Helping the Congress stay within that plan; and,
3. Helping the Congress consider policy issues related to the budget and the economy.

The Problem with the Current Mission
This mission and these strategies are part of the economic, political, and social problems currently facing the United States and, potentially, a source for the solution of those problems. The reason that the CBO is part of the problem is that finance engineering has influenced Congress to emphasize the "financial" part of an overall Enterprise Architecture. That is, Congress proposes functional and component changes to the US Federal Government, and the CBO responds with an analysis, which Congress can choose to spin-doctor to its political purposes. Consequently, Congress can choose to support any industry; examples include agriculture (subsidies), housing (mortgage deductions, etc.), gambling (gambling deductions), and so on.

A Solution: The Congressional Enterprise Architecture Office
As I demonstrate in my book, Organizational Economics: The Formation of Wealth, the body performing the controlling (see IDEF0 post) and governing functions of any organization has three Missions: Security, Standards, and Infrastructure. This is particularly true of any organization that has a spatial domain. These missions appear in the Preamble of the US Constitution and throughout that document.

Given these three high-level missions, and my discussion of the role and responsibilities of the Enterprise Architect (as a sub-discipline of Systems Engineering), what the US Federal Government needs is a real implementation of the FEA Framework and a formal Enterprise Architecture process. This process aligns the departments, agencies, and other organizations of the Federal Government with the three high-level missions of government.

Additionally, Enterprise Architecture proposes where to develop, transform, reform, end, or otherwise change the organizations' missions, strategies, processes, and tooling. For the US Federal Government (or any other organization of this scope and size), the EA process must be recursive, but traceable and integrable. The CBO is already in a position with some of the responsibilities for doing this.

Why not have Congress empower it as the Congressional Enterprise Architecture Office?

Wednesday, November 16, 2011

SOA, Cloud Computing, and Event and Model Driven Architecture

SOA and Cloud Computing, the Predicate to Model and Event Driven Architecture
In a recent post (see Functions Required in the Cloud PaaS Layer to Support SOA), I discussed two SOA patterns and two Cloud Computing patterns and showed how they are, in fact, the same patterns applied to the SOA and Cloud Computing domains. For SOA, the concepts are Enterprise SOA (ESOA) and Ecosystem or Internet SOA (ISOA), while for Cloud Computing, the concepts are Private and Public Clouds. The correlation of these concepts is shown in Table 1.

                              Ownership Domain   Integration Type   App. Layer Comm.
ESOA / Private Cloud (SaaS)   1 Dom              Orchestration      Within the enterprise
ISOA / Public Cloud (SaaS)    2+ Dom             Choreography       Across the Internet

Table 1
There are three attributes for both SOA and Cloud Computing shown in Table 1: where the services operate (the Ownership Domain variable); whether the service uses orchestration or choreography (the Integration Type [see SOA Orchestration and Choreography Comparison]); and where the service components interact (the Application Layer Communications). As the table demonstrates, on the dimensions shown, an ESOA is a model of the Software as a Service (SaaS) layer of the Private Cloud, while the ISOA is a model of the SaaS layer of the Public Cloud. Consequently, SOA and Cloud Computing can be the same at the SaaS layer. Since the SaaS layer drives most of the requirements for the Platform as a Service (PaaS [see Functions Required in the Cloud PaaS Layer to Support SOA]) and the Infrastructure as a Service (IaaS) layers, the result should be a nearly identical Technical Enterprise Architecture (or functional design) [Sidebar: but, perhaps, with different labels to satisfy the purists, perhaps "True Believers", in each camp].

Model and Event Driven Architecture
In Part 3 of a four-part paper, The Future of Information Technology: Enterprise Architecture 2006 to 2026, Part 3: Model and Event Driven Architecture (2008 to 2017), The Journal of Enterprise Architecture (February 2007), pp. 28-39, I posit Model and Event Driven Architecture as an outgrowth of SOA. This is an integration of Model Driven Architecture, as espoused by the Object Management Group, and a widely discussed, but somewhat nebulous, Event Driven Architecture.
Model Driven Architecture and SOA
The current concept of Model Driven Architecture (MDA) starts from IT functional requirements, as modeled in UML diagrams and ends up with a complete application to roll into production.  With respect to SOA, using MDA would mean that the software developers create the Service Components using MDA and assemble these components into a Composite Application using orchestration based on MDA.  In fact, modeling the assembly of the composite application is a quick way to spot both coding and system functional design defects.

However, in a broader interpretation of MDA, implementers of new systems, application, and/or services could start with Business Process Modeling to identify business process requirements and functions and use these to derive the IT functions.  This enables the Systems Engineer and System Architect to capture more complete requirements and trace the IT functions to the business or organizational functions they support.  This modeling concept better fulfills the process envisioned for SOA, than merely using MDA to create the composite applications.  Now the services are traceable to the business or organizational processes and functions; which means that when the process changes, the effects on the composite application can be modeled and the composite application is more easily updated to enable and support the new process or function.

Event Driven Architecture and SOA
Event Driven Architecture (EDA) focuses on creating agile software, that is, software that can successfully react to external stimuli in a timely manner.  To perform this mission an EDA must be able to dynamically change both the functions it performs and the ordering of these functions in response to the stimulus.  This function falls along the branching logic dimension, which at one extreme is the simple "IF" statement and at the other is Artificial Intelligence (AI).  Obviously, the concepts from Knowledge Management (KM) can come into play with the functions along this dimension.  The key to EDA is determining what really constitutes an event.  This requires Complex Event Processing (CEP).  In terms of the OODA Loop, CEP constitutes the OO portion ("Observe" and "Orient").
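As a minimal sketch of that idea (all names here are illustrative, not from any specific EDA product), an event-driven component can hold both its functions and their ordering in data, so that a stimulus can change either at run time:

```python
# Minimal event-driven dispatch sketch: the functions performed and their
# ordering live in data, so both can change in response to a stimulus.
# The pressure threshold below is invented for illustration.

def observe(state):
    # "Observe": capture the raw measurement as seen
    state["observed"] = True
    return state

def orient(state):
    # "Orient": classify what the measurement means
    state["event"] = "storm" if state.get("pressure", 1013) < 980 else "calm"
    return state

# The pipeline is data, not code: it can be re-ordered or extended
# when an unexpected stimulus arrives (the agility EDA aims for).
pipeline = [observe, orient]

def handle(stimulus):
    state = dict(stimulus)
    for step in pipeline:
        state = step(state)
    return state

result = handle({"pressure": 975})
```

The point of the sketch is only the branching-logic dimension: replacing `pipeline` with richer rule evaluation moves the design from the simple "IF" end toward the AI end.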

Actually, EDA can support three of the four functions of the OODA loop, based on the concept of types of "event" rules.  The three types are Knowledge Rules, Event Rules, and Value Rules.  A Knowledge Rule supports the Observe function of the OODA loop; it identifies the state attributes, and changes of state attributes, that the system observes.

An event rule supports the Orient function of the OODA loop. Several measurements may predict an event.  An Event Rule is a codification or formalization of this grouping and ordering of the measurements within a confidence interval.  For example, in the 1920s the prediction of a hurricane forming was decidedly hit or miss (a very wide confidence interval).  Today, hurricane and tornado predictions are much more accurate (a much smaller confidence interval); in fact, there are possibilities for predicting earthquakes and tsunamis.
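A tiny, hypothetical sketch of such an event rule: several measurements, taken together, predict an event, and the fraction of agreeing measurements stands in for the width of the confidence interval. The thresholds are invented for illustration, not real meteorological criteria:

```python
# Sketch of an Event Rule: a grouping of measurements that, combined,
# predicts an event with some confidence. Thresholds are illustrative only.

def hurricane_event_rule(sea_surface_temp_c, pressure_mb, wind_kt):
    # Each measurement contributes evidence; the more measurements that
    # agree, the narrower the confidence interval of the prediction.
    evidence = [
        sea_surface_temp_c >= 26.5,   # warm water
        pressure_mb <= 990.0,         # deep low pressure
        wind_kt >= 64.0,              # hurricane-force wind
    ]
    confidence = sum(evidence) / len(evidence)
    return {"event": "hurricane", "confidence": confidence}

prediction = hurricane_event_rule(28.0, 950.0, 80.0)
```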

In many cases, it is not a single event that causes a major or catastrophic problem or issue, but an event chain.  This has been found in the vast majority of recent aircraft accidents.  Understanding these event chains is CEP.  Another example is that now the weather service not only predicts that a tropical storm or hurricane will occur, but also identifies whether it will make landfall and, if so, where.  These two events, "there is a hurricane" and "it's coming our way", are part of the event chain that determines whether "we'll sit this one out" or "it's time to leave".  To decide requires the decision-maker to understand the risks and rewards of the choice.
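The event-chain idea can be sketched as a small CEP check (the chain definition is hypothetical): no single event triggers the decision point; it is the ordered combination that does.

```python
# CEP sketch: a decision point is reached only when the required events
# occur, in order, in the event log. The chain below is illustrative.

REQUIRED_CHAIN = ["hurricane_formed", "landfall_predicted_here"]

def chain_complete(event_log):
    """Return True when the required events appear in order in the log."""
    position = 0
    for event in event_log:
        if event == REQUIRED_CHAIN[position]:
            position += 1
            if position == len(REQUIRED_CHAIN):
                return True
    return False

# Unrelated events may be interleaved; only the ordered chain matters.
log = ["hurricane_formed", "wind_shift", "landfall_predicted_here"]
decision_point = chain_complete(log)
```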

A value rule supports the decision-maker in the Decide function of the OODA loop.  A value rule determines the value to the organization of the alternatives when an event or event chain has occurred.  In the case of the hurricane, the decision-maker makes the decision based on the events and the value of staying versus leaving.  One well-known example of value rules comes from Isaac Asimov.  The three value rules for robotics are:
  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
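A minimal, hypothetical sketch of a value rule supporting the Decide function: once the event chain has occurred, each alternative is scored by its value to the organization, and the best one is chosen. The payoffs and probabilities are invented for illustration:

```python
# Value Rule sketch: score alternatives by expected value to the
# organization and pick the best. All numbers are illustrative.

def value_rule(alternatives):
    # expected value = payoff weighted by the probability of realizing it
    return max(alternatives, key=lambda a: a["payoff"] * a["probability"])

# The hurricane decision: stay has a higher payoff if all goes well,
# but a much lower probability of that payoff being realized.
alternatives = [
    {"action": "stay",  "payoff": 100, "probability": 0.2},
    {"action": "leave", "payoff": 60,  "probability": 0.95},
]
decision = value_rule(alternatives)
```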
While the knowledge rules are state attributes and changes of state attributes, the events create decision points in the process.  With SOA, the orchestration or choreography engine uses the process flows from the assembly process, plus the rules based on the governance, to enable the process flow among the functions (service components) of the composite application.  If the SOA is an Ecosystem SOA (public cloud), then the composite application uses choreography to dynamically interlink the service components [Sidebar: In a web services type of application, these service components are the web services.]  Choreography enables the application to choose which function or service component to use next, anywhere on the Internet.  By coupling the choreography repository and engine with an organizational rules repository and a CEP rules repository and engine, the composite application can dynamically link to any appropriate, available service component that, according to the combined rules, will produce the best outcome.  The downside is that, first, as the number of alternatives increases, the composite application will slow down, and second, the composite application may become much more stochastic than would normally be expected or useful.
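A sketch of that selection step (service names, scores, and rule weights are all hypothetical; real choreography engines are far richer): at each decision point the engine scores every available service component against the combined governance and CEP rules and links to the best one.

```python
# Choreography-style selection sketch: score every candidate service
# component against the combined rules and choose the best. Because the
# whole candidate set is scanned, response time grows with the number
# of alternatives, which is the slowdown noted above.

available_services = [
    {"name": "svc_a", "cost": 5, "fitness": 0.90},
    {"name": "svc_b", "cost": 2, "fitness": 0.70},
    {"name": "svc_c", "cost": 4, "fitness": 0.95},
]

def choose_next(services, rules):
    # Each rule is a scoring function drawn from a rules repository.
    def score(svc):
        return sum(rule(svc) for rule in rules)
    return max(services, key=score)

rules = [
    lambda s: s["fitness"],        # CEP/outcome rule: prefer best outcome
    lambda s: -0.05 * s["cost"],   # governance rule: penalize cost
]
chosen = choose_next(available_services, rules)
```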

Model and Event Driven Architecture and Enterprise Architecture
Model and Event Driven Architecture is an amalgamation of Model Driven Architecture and Event Driven Architecture, and such an alloy increases the speed of the composite application while reducing the risk that the application may become stochastic.  At the same time, the composite application is much more agile than one built using MDA alone.

The cost of this amalgamation is in the complexity of the combined orchestration/choreography/CEP engine (functionality) and the leadership and management discipline needed to ensure that the governance and "event" rules enable and support the organization's vision and mission, and that they are as internally consistent as possible--that is, have a minimum number of "catch-22s".  This will require a person with the role and responsibilities of an Enterprise Architect, as well as those of Systems Engineer and System Architect.

Friday, November 11, 2011

Housing, Finance, and Government: Three "industries" that produce Minimal Value

The thesis of this post is that it is pretty silly to base an economy, like that of the United States, on housing, finance, and government, which is what Wall St. and Pennsylvania Ave. seem to want to do.

Types of Industries

All organizations are constructed from three types of sub-organizations within their domain.  A domain would normally be considered a political unit, for example, a city, county, state, or country.  However, even in private organizations, these types of organizations exist, within the organization’s functions and departments.  These organizational categories[i] are:
·         Primary Industry – Organizations in an industry that creates a product or service that is exported beyond the boundaries of the domain within which it is produced.
·         Secondary Industry – Organizations in an industry that enables and supports one or more of the processes of the primary industry within the domain in which it operates.
·         Tertiary Industry – Organizations in an industry that enables and supports both the primary and secondary industries by providing services that support the environment in the domain within which the primary and secondary industries operate.
As I demonstrate in my book, Organizational Economics: The Formation of Wealth, the primary industry (or industries) is the economic engine that forms the value of the organization for other organizations. Hamel and Prahalad called the turbine of this engine the organization’s core competence.[ii] It produces the value for the organization.  All other “industries” enable and support this engine.  For example, the economic engine and primary industry of Detroit, Michigan, has been and continues to be the automotive industry; in “Silicon Valley” it’s information technology; in the State of Iowa, agriculture; and so on.
Secondary industries are sub-contractors and suppliers of hardware, software, and services to the primary industries.  These industries would include auto parts suppliers, tool manufacturers, transportation within the organizational domain, and other organizations directly supporting the primary industry or industries.
Tertiary industries are organizations that enable and support the personnel, or the domain’s infrastructure: schools, colleges, and universities; banks and other financial services; municipal services (e.g., electric, communications, roads and bridges, sewer, water, and so on); food stores and other stores; hospitals and other medical services; restaurants, fast food outlets; and so on.  In other words, the majority of economic activities within an organizational domain.  Additionally, tertiary industries include all types of construction.  They also include defense (see Security a Mission of Government).  These industries are where most of the economic activity of an organization occurs.
Some organizational theoreticians include quaternary industries as a category.  These activities include standards and policies (see Standards a Mission of Government) and infrastructure (see Infrastructure a Mission of Government and Organizational Control).

Types of Value

In the first chapter of Organizational Economics, I describe three types of values: knowledge value, capacity value, and political value.
Knowledge value (see Knowledge Value) is value created by an increasing knowledge-base and includes research and development (invention and innovation), and knowledge transfer (education). Products based on new scientific discoveries and transferred into production are the most highly valued.  Unique user interface designs like the iPhone, or innovative medicines, are examples of knowledge value.
Capacity value (see Capacity Value) is “more of the same” value.  Once a product has been perfected and competitors have brought out versions, then what Adam Smith called “the invisible hand” starts to force reductions in the cost of the product.  Many economists refer to this as the commoditization of a product, but its value is in capacity production—which produces capacity value.
Political value (see Political Value) is of two types, mediating and exploitive.
·         Mediating (or mediated) political value is created by reducing the organization’s internal process friction.  Examples of mediating political value include contracts, laws, customs, codes, standards, policies, and so on.  In the military, mediating political value (reduction in process friction) comes from “the rules of engagement” (e.g., don’t shoot your fellow military).  The reduction in process friction is very often the difference between a process adding value and a process absorbing value.  The regulation of markets (and the processes of markets, themselves) is such an example.
·         Exploitive political value is indirect or “siphoned” value.  It is caused by someone in a position of responsibility or authority using the position to reap value for their own benefit; “The Lord of the Manor” is the archetypal example, though these include dictators, lobbyists, bankers, day traders, and many judges and legislators.  Further, as I describe in my book, in many cases it includes various religious authorities.

Housing, Finance, and Government as Value creators

My thesis is that housing, finance, and government either do not create value or create very little value.  I base this on an understanding of how these fit within the dimensions described in the previous sections.


Housing

A house is worth a house.  While that seems to be a tautology (and it is), too many people forgot it during “the housing bubble”.  What that saying means is that the value of a house is only what value it imparts to the consumer of the house’s value.  The house is never worth more than when it was built, unless it is maintained and upgraded.  And even when it is upgraded, the value of the house begins to decrease as it is used (what’s being used, at the most abstract, is its value).  The problem, recently, has been that governments tend to inflate their money supply—money being a reserve of value.  With the inflation of money (that is, the decrease in the value of money), the price of a house increases—though its value remains the same; it’s worth one house.  Likewise, when the housing market “goes down”, the price of the house goes down, but the value remains the same: one house.
House construction and remodeling is a tertiary economic activity.  It produces some capacity value (more of the same value) for the builder and construction workers, but once completed and purchased, the house starts losing value.  In giving the people of the organization a place to live, a house supports the secondary and primary industries of the organization.
Obviously, this is not an activity that enables and supports the formation of wealth for an organization.  Consequently, basing an economy, or at least a significant portion of one, on housing is foolish and silly.  Yet, in the period from 1995 to 2007, that is what many Americans built their perceived wealth on, and what the United States did.


Finance

Finance includes two subtypes: banks and markets.  The Wall Streeters (e.g., bankers, hedge fund managers, stockbrokers, pension fund managers, and so on) have forgotten that a bank is a value battery and “a market” is the transfer point for the value.
Banks dilute the stored value of money through investments that increase risk and potentially increase the amount of value through the implementation of discoveries and inventions as new products, systems, or services.  In and of itself, investing cannot increase the amount of value; it can only reduce it.  Only when the money is invested in innovative ideas or in production capability (see ROI Vs VOI) does the value increase; so, for example, loaning money for a house does not increase the value of the house or create value of any sort.  However, if a bank loans money to a farmer to buy seed or farming implements, the bank has made an investment that does create capacity value—food.  Consequently, banks are tertiary activities that do not produce an increase in value, but they loan their repository of potential value (money) to primary and secondary activities that do.
In the process of each transaction, the bankers siphon off some of the value as a “transaction” fee.  This siphoning is converting potential value into exploitive political value; and exploitive political value is value that is quickly destroyed.
Markets have two missions.  The first is to measure the value of a material, product, or organization. The second is to transform value from real to potential and back; that is, to trade materials or stocks for money (potential value), or money for materials and stocks.  “Making a market” does both of these; and in this Internet age, anyone can do this.  That is, a person can buy commodities, hold them, and sell them.  In the process, the price of the commodity (be it materials, products, or stocks) converges.
Again, markets are tertiary activities that can convert knowledge and capacity value into potential value and the reverse.  And, again, the “market makers” and “stockbrokers” who siphon a percentage off because they are “providing a service” (which to some degree they are) are converting some of the value and potential value into exploitive political value.  Unfortunately, a good many Wall Streeters have turned the markets into legal mega-slot machines, gaming them through “day trading” and even “micro-second trading” to siphon off as much value as possible as quickly as possible, converting it into exploitive political value.


Government

According to my book, Organizational Economics: The Formation of Wealth, and as noted above in this post, a government has three missions—security, standards, and infrastructure (see Security a Mission of Government).  Internal and external security, standards, and infrastructure are mediating political value, and all three are tertiary activities; that is, necessary but not sufficient conditions for the growth of value within the domain of the organization.  Further, the second and third activities can be quaternary; that is, activities, like the enactment of laws and the determination of regulations, policies, and standards, that enable the standards and infrastructure activities.  These activities are very susceptible to manipulation for personal gain.  The personnel who enact or fund the activities can enjoy an extreme amount of exploitive political value, as I describe in my book.  In the past, it has been the lord of the manor, dictator, duke, king, emir, priest, shaman, rabbi, imam, or other religious leader.  Today, lobbyists must be included, as they encourage the lawmakers to create uneven economic playing fields that favor one activity or one industry over another; this includes unions and other “not for profit” organizations as well as economic organizations. Consequently, mediated political value is much more easily converted into exploitive political value than either knowledge or capacity value, and is the catalyst for the conversion of those.
In this age, “Entitlements” are the single biggest source of exploitive political value.  These safety nets drain value from the infrastructure portion of government.  They are popular because the exploitive value goes into the pockets of the many rather than the few, and popular with politicians because Entitlements buy votes.  But entitlements are unsustainable for any organization, as Greece and Italy have proven, and as the United States is likely to prove, now that the population is addicted to Entitlements.  For example, the Occupy Wall St. movement feels that all college graduates are “entitled” to jobs (so what value is art history or black studies to an economic organization?).

The Net Result

Too much “unearned income” in too few wallets; too much “Entitlement income” in too many wallets.  I think what I’ve shown is that having an economy based on housing, finance, and government, like that toward which the United States is heading, is a sure recipe for going out of business.
We still have time, but do we have the leadership?

[i] These categories of industries were generally accepted from the 1920s onward, as primary: mining and agriculture; secondary: manufacturing; and tertiary: services—these definitions are outdated and don’t get at the underlying concepts.  Therefore, I’ve redefined them for a more general meaning of the concepts.
[ii]G. Hamel and C. Prahalad, Competing for the Future: Breakthrough Strategies for Seizing Control of Your Industry and Creating the Markets of Tomorrow, (Boston: Harvard Business School Press, 1994).

Tuesday, November 1, 2011

Systems Engineering and System Architecture in an Agile and Short-cycle Transformation Process (Second Try)

I haven't updated the blog in a while; but I've been working. Here is a copy of a paper that I will present at an INCOSE Meeting in Detroit early in November.  I tried posting a link to a PDF version of the paper and it doesn't seem to work, so here is a copy.



For a Systems Engineering and Architecture process to tame uncertainty in a continuously changing organizational and technology environment requires that the transformation process be agile.  Agility is “the ability to successfully respond to unexpected challenges and opportunities.”  Fundamental to agile systems engineering and architecture is the assumption that “not all requirements are known upfront”.  This means that the process must allow the customer to add requirements throughout the transformation process.
In this paper, first, I will present a short-cycle CMMI Level 3 software development process, based on Extreme Programming, and anecdotal results from over 100 efforts using the process.  Second, I will generalize this process to include systems engineering and system architecture procedures and functions for many types of agile and short-cycle development and transformation efforts.  Third, I will discuss the implications of this process on the roles of the program manager, systems engineer, and system architect.

What and Why Agile/Short-cycle Development?

This paper will describe, somewhat anecdotally and somewhat analytically, my experience with agile and short-cycle development and transformational processes. Then it will discuss what I’ve found relative to the change in emphasis in the roles and responsibilities of a project’s team members.
Since the terms “agile” and “short-cycle” are among the many hyped and overused terms, first let me define what I mean by them.
·         Short-cycle is “one implementation cycle every 1 to 3 months”, depending on the type of development or transformation, as I will describe later.
·         Agility is “The ability to successfully respond to unexpected challenges and opportunities”.[1]
Additionally, the reason I include both “development” and “transformation” is that they are somewhat different from a process perspective.[2]  In development, the team is creating an entirely new product, while in transformation, the team starts with a current product and revises or transforms it.
So why is it important that projects migrate to agile/short-cycle development and transformation processes?  To meet the challenge of “better”, “faster”, “cheaper” that all customers want.[3]
·         “Better” means producing a higher quality product, whether the “product” is a product, service, or process.  Quality is defined as “conformance to [customer] requirements” by Phil Crosby, in the seminal work on the topic, entitled Quality is Free.[4]  Any project that focuses on requirements will produce a highly effective, and therefore high quality, product.  In any project, “Better” is the responsibility of the systems engineer and the system architect for that project.
·         “Faster” means creating or implementing a usable product in a short period.  Most customers prefer having a usable part of a complex system in operation early to waiting for the complete system.  For example, Henry Ford did not wait until electric start was invented to start manufacturing the Model T; yet he sold a fair number of them.  Though his company and others manufactured a number of models before the Model T, many automotive historians would argue that the Model T Ford was the Initial Operating Capability (IOC) for the automotive industry.[5]
·         “Cheaper” means more effective and more cost efficient, not the lowest cost.  There is a big difference, one that “finance engineering” tends to forget.  A stone from the backyard, a hammer, and a nail gun can all drive a nail, so why do carpenters building a house choose the nail gun, by far the most expensive option?  The reason is that a nail gun can drive nails much more uniformly (i.e., more effectively) and that one carpenter can drive more nails much faster than with a hammer (i.e., more cost efficiently).

An Example: The Web Application Development Methodology (WADM)[6]

In late 1999 and early 2000, after the group supporting the organization I worked for announced a software CMM Level 3 process, it almost had a rebellion on its hands among the software developers.  The reason was that, for small to medium projects (40 to 700 hours), it could take longer to do the “necessary” paperwork than to do the job.  In the case of minor projects, it nearly always took longer.
As is true of all large organizations, the internal information systems organization I worked for formed a team to “reduce” the SW CMM intermediate artifacts (the paperwork that is used only during the process, but has no value beyond the end of the project—things like status reports).  Looking at the requirements (in the following section), I realized that meeting them required a radical rethinking.  I had been researching the Software Engineering Institute’s “evolutionary spiral process” and other Rapid Application Development (RAD) processes and concluded that the simplest approach would be to add functions and documentation to the simplest short-cycle process of them all, eXtreme Programming (XP).  XP started from the premise that developers should develop and not write documentation, which apparently made it the opposite of SW CMM applied to a waterfall process.  Using this approach, simplifying the process documentation of XP—which for me proved almost indecipherable—and adding functions and documentation as needed to meet SW CMM, and later CMMI, Level 3, I created a highly usable process, one that was low cost with high customer satisfaction, as I will show.

The requirements

This new process and system had four high-level requirements (or capabilities): it should be customer focused and produce a high-quality product, in a short time span, at low cost.

Customer Focus

What the team meant by the requirement for customer focus was:
·         Customer has ownership of, and a high degree of visibility into, the project.  In order for a customer to feel ownership of the project and its deliverable, the product, the customer needs insight into what is going on in the project and a feeling that the product is being influenced by the customer’s inputs (in the form of requirements).  This requires a:
·         High level of customer commitment and involvement in the development team.  The customer must feel like a team member and be treated as such by the rest of the team.  The customer’s role on the team is as the source of the requirements.  This means that the:
·         Customer can add, delete, modify and/or reprioritize functions as the application is being developed. Giving the customer the ability to add, delete, and modify or reprioritize the requirements during the process enables the customer to identify “new” real requirements that were forgotten in the original RFP or contract, but are of higher utility to the customer than the ones in the original source.  Consequently, this:
·         Meets customer’s real demand due to the customer’s degree of influence and control over the development effort.

Produces High Quality Product

What the team meant by the requirement for producing a high-quality product is based on Phil Crosby’s definition of quality cited earlier:
·         Product has a minimum number of defects.  There are two types of defects: first, the product does not meet the customer’s requirements (it fails validation, i.e., customer acceptance tests); or second, there is a basic flaw in a function or component of the product.  These defects are minimized by clear identification of requirements (as required above) and by clear, consistent, high-quality verification and validation procedures.  This means that:
·         Testing and defect prevention are integral to the process.  In fact, it means that regression testing is to be used, that is, retesting with each build.

Short Time Span

What the team meant by short time span is that:
·         The development cycle is divided into successive, short-duration builds, that is, one build every month to six weeks.  Further,
·         The first build and each successive build are usable.  We did not consider it a build if it was not usable.

Low Cost

·         No unnecessary functions are allowed.  Functions that are not required are not developed.  Many times, especially in software, the designers, developers, and implementers add functions to the product that the customer’s requirements do not call for.[7]  Most of the time, creating functions that are not called for in the requirements means additional time removing them from the product and retesting.  Another requirement for low cost is to ensure that:
·         Project documentation and reviews are minimal.  This is key to a more effective process.  I did this by eliminating all intermediate artifacts, like status reports, and reviews like all of the XDRs (PDR, CDR, and so on) and all PMRs.  Finally, the last requirement of the team for low cost was that the process should:
·         Work with virtual teams.  This was important in the organization I worked for at the time because it was nation-wide.  It had well-trained but idle team members in West Texas while, at the same time, it had a work overload in Baltimore.  Both idle and overloaded team members increase the cost to the organization.  Enabling team members to work virtually greatly reduces both costs.

WADM Structure

Figure 1 shows the structure of the WADM process.  It consists of an initialization phase and three cycles: Development, Design, and Construction.

Figure 1 – The WADM Process

Development Cycle

The objective of the Development Cycle is to ensure that the customer’s requirements for development (in priority order, as set by the customer) are understood and met (that is, identify customer requirements and validate that the requirements are met).  Its duration is approximately one month to six weeks.  It is made up of:
·         A Planning Game[8] where the customer decides on the priority of all unfulfilled requirements and the team, including the customer, decides on which of the highest priority requirements will be implemented during the next cycle.  In many ways the Planning Game replaces the Program Management Review (PMR), except the Planning Game’s emphasis is on meeting the customer’s system requirements, and not its programmatic requirements of cost and schedule.
·          Three to four Design Cycles

Design Cycle

The objective of the Design Cycle is to design an application to meet a prioritized subset of the customer requirements.  Its duration is approximately one week.  It is made up of:
·         A Design Cycle Planning Meeting where the customer reviews the progress of the design in meeting the current requirements and recommends changes.  Sometimes the developers can incorporate the changes during the next series of construction cycles, and sometimes these are really new requirements.
·         Generally five Construction Cycles

Construction Cycle

The objective of the Construction Cycle is to code and unit test modules of the application to meet the current requirements.  Its duration is one day.  It is made up of:
·         A Standup Meeting that is supposed to be 15 minutes long, but that sometimes runs to a half hour.
·         Construction Activities where the developers actually code.
The WADM process meets all of the requirements for an Agile/Short-cycle development or transformation process.  First, it is a Short-Cycle process because it produces a new usable version of the software product every 4 to 6 weeks.  Second, it is Agile because it is based on the principle that “not all the requirements are known upfront”; therefore, it allows the customer to identify new requirements anytime in the process and to reprioritize the requirements every 4 to 6 weeks.
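The nesting of the three cycles can be sketched in code.  This is an illustrative model only (the class and field names are mine, not part of the WADM documentation): a Development Cycle opens with a Planning Game and contains three to four Design Cycles, each of which contains roughly five daily Construction Cycles.

```python
from dataclasses import dataclass, field

@dataclass
class ConstructionCycle:
    duration_days: int = 1   # one day: standup meeting plus construction activities

@dataclass
class DesignCycle:
    # roughly five daily construction cycles per one-week design cycle
    constructions: list = field(default_factory=lambda: [ConstructionCycle() for _ in range(5)])

    @property
    def duration_days(self):
        return sum(c.duration_days for c in self.constructions)

@dataclass
class DevelopmentCycle:
    # three to four design cycles per development cycle
    designs: list = field(default_factory=lambda: [DesignCycle() for _ in range(4)])

    @property
    def duration_days(self):
        # one day for the Planning Game plus the nested design cycles
        return 1 + sum(d.duration_days for d in self.designs)

cycle = DevelopmentCycle()
print(cycle.duration_days)   # 1 + 4 * 5 = 21 working days, roughly one month
```

With four one-week design cycles the total comes to about a month of working days, consistent with the one-month-to-six-weeks duration stated above.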

Systems Engineering Functions in the WADM

When compared with his or her role and responsibilities in a waterfall-based development process, the Systems Engineer is much more important in the WADM.  The emphasis in the WADM is on the customer’s system requirements.  In 2000, I realized there were two types of requirements, the “must perform” and the “must meet”.[9]  Consequently, the WADM requirements management procedures use a combination of Use Cases[10] (“one way one type of actor will use the product”), which are the “must perform” requirements, and Design Constraints, which are the “must meet” requirements (e.g., standards, regulations, codes, policies, etc.).
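As a sketch of how the two requirement types might be tracked together, assume a hypothetical `Requirement` record (none of these names or example requirements come from the WADM itself):

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    text: str
    kind: str        # "use_case" (must perform) or "design_constraint" (must meet)
    priority: int    # set, and re-set, by the customer at each Planning Game
    fulfilled: bool = False

backlog = [
    Requirement("Operator enters monthly network charges", "use_case", priority=1),
    Requirement("All screens conform to corporate security policy", "design_constraint", priority=1),
    Requirement("Manager views cost-trend report", "use_case", priority=2),
]

# At a Planning Game the customer may add requirements or reprioritize:
backlog.append(Requirement("Export report to spreadsheet", "use_case", priority=3))
backlog[2].priority = 1   # customer promotes the report use case

# The highest-priority unfulfilled items go into the next development cycle.
next_cycle = sorted((r for r in backlog if not r.fulfilled), key=lambda r: r.priority)
print([r.text for r in next_cycle[:2]])
```

The point of the sketch is that use cases and design constraints live in one prioritized backlog, so the customer reprioritizes both kinds with the same mechanism.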
Additionally, the Systems Engineer must perform Risk Management procedures.  As defined in the WADM process this is management of the “unknowns in the design”.  These risks frequently arise at the daily standup meetings and can, in most cases, be disposed of at the same meeting or during the next construction cycle.
Finally, the systems engineer must perform the Verification and Validation procedures.  The procedures themselves must be based on scenarios of the use cases (a scenario is a data-instantiated version of the use case) or on the verification procedures documented in the design constraints.  Of particular importance for defect minimization is a regression verification and validation procedure, that is, going through all of the verification and validation procedures from the previous development cycles.
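The regression idea can be illustrated with a minimal runner.  The procedure names and checks below are hypothetical stand-ins; in practice each check would execute a data-instantiated scenario of a use case or a design-constraint verification procedure.

```python
def run_regression(procedures_by_cycle):
    """Re-run every V&V procedure from every cycle so far; return the failures."""
    failures = []
    for cycle, procedures in enumerate(procedures_by_cycle, start=1):
        for name, check in procedures:
            if not check():
                failures.append((cycle, name))
    return failures

# Each development cycle contributes its own (name, check) pairs,
# and every later cycle re-runs all of them.
procedures = [
    [("scenario: enter charge record", lambda: True)],              # cycle 1
    [("scenario: monthly cost report", lambda: True),
     ("constraint: login required", lambda: True)],                 # cycle 2
]

print(run_regression(procedures))   # [] means all prior cycles still pass
```

An empty result means nothing built in an earlier cycle was broken by the current one, which is exactly what the regression requirement is meant to guarantee.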

Experience with WADM

My experience with the WADM process falls into three parts: the pilots, what happened in production, and why there were failures.

WADM Pilots

Starting in 2001, before the process was rolled into production, we had two pilots.  The first we called the Health Visibility Management System (HVMS), a management dashboard for programs and projects graded against program metrics.  Several large programs had their own management dashboard systems, which made it cumbersome for executive management.  Further, the metric categories were not the same, were not in the same order, and were not based on uniform data or reporting methods.  Consequently, our customer representative, a former COO of the corporation, led a customer team to resolve these issues.  Initially, this project was to provide a framework within which executive management could gain access to a more uniform set of data from the other dashboard systems.  According to my best estimates of the initial set of requirements, this should have taken about 3 cycles (3 months).  However, as we developed the first three releases, it became obvious to the customer that a) the system was usable, and b) they had many more requirements.  The net result was that the initial 3-month project continued at least 7 years, the last time I checked.  Further, our customer representative stated, “This is the only way I’ll ever build software again.”
The second pilot was the development of a Financial and Asset Management System for Data Networks.  At the time, the corporation had merged with nine major organizations and close to 50 small organizations over a 10-year time frame.  At the same time, the data network had grown to 8 times its initial size, yet the number of personnel managing “the phone bills” for the corporation had stayed the same.  The result was that they were working 80 hours a week or more, they were getting further behind, and they were catching fewer billing errors.
After I gathered their first set of use cases (their “must perform” requirements), I estimated that it would take at least 6 development cycles (with the personnel available) to meet them.  Unfortunately, the networks department had only enough money for 3 cycles.  So we pared back the requirements to the highest-priority set that would make a usable product.  We developed the initial datastore, security, and input displays during the first cycle.  While we were developing some initial reports in the second cycle, they were inserting data into the datastore.  In this insertion process, the network personnel found enough billing errors to more than pay for the additional 3 cycles.  By the end of the first six months, also the end of the financial year, the personnel were back to working 40 hours a week, they were catching most of the charging errors, and they could provide senior and executive management timely and correct reports on data network costs.  Consequently, management had more requirements.  The last I checked, the project had continued 5 years.  The customer said, “This is the first time I got what I wanted without multiple defects or changes and added cost.”

Production (2003 to 2005)

The WADM process was rolled into production in late 2002.  By 2005, well over 500 small and medium-sized efforts had used the process successfully.  The reason it was so successful in its adoption throughout the internal organization was that it met its requirements: better, faster, cheaper.
However, I was called in on two projects that used it and did not meet those requirements.  In both cases, the reason was that they did not follow the process.  Instead, the program managers attempted to follow the waterfall process: identify all of the requirements upfront, set up a project schedule based on those requirements, and stick to the schedule.  The result was that each cost more than four times as much as estimated and did not roll out even the initial iteration of the product.


WADM Shortcomings

The WADM process has some major shortcomings.  Chief among these is that it is designed and implemented only for software development efforts.  It is not adaptable to hardware development or to hardware/software integration efforts.
Further, it cannot handle the complexity of large systems because it lacks systems architecture procedures (functional design processes), component requirements allocation procedures, and tradeoff study procedures.  However, these two major limitations can be addressed.

The Generalized Agile Development and Transformation Process

The Generalized Agile Development and Transformation process is envisioned to eliminate the shortcomings of the WADM process.  Its requirements include the WADM process requirements of being customer focused, producing a high-quality product, in a short time span, and at low cost, all of which were discussed in the previous section.
Additionally, it must support the development or transformation of both hardware and software products.  This would include:
·         System Architecture procedures to develop functional requirements (or specifications), the ordering of these to create a system architecture (or functional design), and the allocation of the functions to components.
·         More integrated risk management functions for traceability.
·         A tradeoff study procedure that is traceable to the requirements.
Figure 2 shows the process.

Figure 2 – Generalized Agile Development and Transformation

The Process

This process is made up of the three cycles of the WADM process, plus a fourth, the System Implementation Cycle.  As shown in Figure 2, this process has a Construction Cycle, a Design Cycle, and a Development Cycle for all of its software.  Additionally, it has the System Implementation Cycle, which has many functions familiar to systems engineers.
The objective of the System Implementation Cycle is to ensure that the customer’s requirements for implementation of the system are understood and met (that is, identify customer requirements and validate that the requirements are met for hardware and the integration of hardware and software).  The duration of this cycle is 3 to 6 months, depending on the feasibility of usable hardware and its integration with software and other hardware.  It is made up of Systems Engineering, Detailed Design, Implementation, and Procurement Activities.
As was noted in an earlier section, hardware of all types has frequently been developed using this type of short-cycle process.  The first model might only be a prototype or an X-model to try out a new concept.  That is followed by a number of test articles, culminating in a fully functional product.  Even then the process continues with block point upgrades and so on.  Sometimes the System Implementation Cycles are treated by program management as a single cycle, but a close look at the process frequently reveals a hidden short-cycle process.[11]  By taking advantage of this “short-cycle” attribute of a large process, the development or transformation process can be made much more agile.

Changing Roles and Responsibilities in Short-cycle Processes

To take advantage of this attribute within all programs, the team must revise its thinking about the various roles and responsibilities of the team members.  As mentioned earlier, the key change in turning a waterfall-based process into an agile/short-cycle process is dropping its assumption that “all requirements are known upfront”.  In and of itself this is a heroic assumption that, for all but the simplest effort, is false.  Further, it minimizes the agility of the development or transformation effort by adding all of the intermediate artifacts and reviews required for a formal waterfall process.
However, a truly agile process that takes advantage of short cycles assumes that “not all requirements are known upfront”.  Any short-cycle process that does not make this assumption will have the greatly increased management cost and schedule problems inherent in the waterfall process.

Program Management

With the assumption that “not all requirements are known upfront”, the role and responsibilities of the program manager are radically altered.  For example:
·         Project Planning is minimized.  How can a project be fully scheduled and resources identified for unknown requirements?  Instead, the project plan should:
o   Emphasize Project Goals, Missions, Strategies, etc.
o   Emphasize the Systems Engineering Management Plan (SEMP)
·         All intermediate artifacts and reviews are minimized.
o   There should be no need for status reports or Program Management Reviews (PMRs), since the customer is involved in the effort on at least a weekly basis.
o   There should be no need for Engineering Change Orders (ECOs), since requirements change is built into the process and procedures.
·         There is no need for scheduling in the traditional waterfall sense.
o   Only a high-level schedule is possible over the System Implementation Cycle.
o   Detailed scheduling is irrelevant (and a waste of resources) because schedules can change on a weekly or monthly basis.
·         Earned Value Analysis is irrelevant in its present form.  Traditional EVA will not work, since there is no way of knowing what percentage of the work is complete when the requirements are continuously changing.  Instead, an alternative to EVA will need to be based on requirements completion.
·         Agile/Short-cycle Contract Management cannot use a contract based on meeting the initial set of contracted requirements.  Agile/short-cycle development and transformation processes require “Level of Effort” (LOE) contracts, since there is no way to guarantee that all of the contracted requirements will be met; more importantly, additional requirements may be found in the process, and some initial requirements may not be met during the contract.  Still, the customer will be more satisfied if the product the customer pays for actually meets his or her highest-priority needs.
·         Procurement is very important in an agile/short-cycle process because the timing of the procurement of hardware and software is critical.  If the procurement is made too early, the equipment or software will sit around for a while, at least, and as the requirements change the team may find that the equipment is not needed or that a different version is needed.  On the other hand, if the equipment is procured too late, the project may come to a halt for lack of that hardware or software (though this is less true of short-cycle efforts than of waterfall-based efforts, because the functions needing the equipment can be reprioritized and implemented later in the effort).
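The requirements-completion alternative to EVA mentioned above might be computed as a simple ratio over the customer's current backlog.  This is a sketch under my own assumptions, not a prescribed WADM procedure:

```python
def requirements_completion(requirements):
    """requirements: list of (name, done) pairs for the *current* backlog.
    Progress is the fraction of currently known requirements that are complete,
    so the denominator moves as the customer adds or deletes requirements."""
    if not requirements:
        return 1.0
    done = sum(1 for _, complete in requirements if complete)
    return done / len(requirements)

backlog = [("enter charges", True), ("cost report", True), ("audit trail", False)]
print(round(requirements_completion(backlog), 2))   # 0.67

# When the customer adds a requirement mid-effort, reported progress drops,
# which is honest: the scope genuinely grew.
backlog.append(("spreadsheet export", False))
print(round(requirements_completion(backlog), 2))   # 0.5
```

Unlike traditional EVA, the metric never pretends to know the total scope upfront; it simply reports completion against whatever the customer currently demands.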

Systems Engineering

With the assumption that “not all requirements are known upfront”, the role and responsibilities of the systems engineer change as well.
·         The Requirements Management procedures become seminal to a successful agile/short-cycle development and transformation process.  The Requirements Management procedures must enable the customer to add requirements at any time in any cycle of the process.  They must also enable traceability from the product, through the design, to the customer’s system requirements.
·         Risk Management must manage risks; risks are unknowns in the design and are very important to identify early.  With agile/short-cycle development and transformation processes this is particularly true because of the need to estimate the cost of each requirement (in the case of the WADM, and the Generalized version in this paper, the use cases and design constraints).
·         Configuration Management is a must, since a new version will be rolled out every one to three months.  Since the project team wants to ensure that the version and components rolled out are the version and components verified and validated, disciplined configuration management is an imperative.  Configuration Management must include managing the Verification and Validation information and documentation to ensure proper regression testing.
·         Verification and Validation procedures must trace directly to the requirements at all levels: customer (A Spec), functional (B Spec), and component (C Spec).  And both verification and validation procedures must be regressive; that is, they must include all prior verification or validation procedures.
From this you can see that for an agile/short-cycle development and transformation effort there is much less emphasis on Program Management procedures and much more on Systems Engineering.
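The traceability requirement above can be illustrated with a small check that walks each component (C Spec level) through its function (B Spec) to a customer requirement (A Spec).  All identifiers and the data layout here are hypothetical:

```python
# Each B Spec entry records the A Spec it satisfies; each C Spec entry
# records the B Spec it implements.
a_specs = {"A1": "Manage network charges"}
b_specs = {"B1": ("A1", "Record and total monthly charges")}
c_specs = {"C1": ("B1", "charges datastore"), "C2": ("B1", "input screen")}

def untraceable_components(a_specs, b_specs, c_specs):
    """Return components whose trace does not end at a customer requirement."""
    orphans = []
    for comp, (func, _desc) in c_specs.items():
        parent = b_specs.get(func)
        if parent is None or parent[0] not in a_specs:
            orphans.append(comp)
    return orphans

print(untraceable_components(a_specs, b_specs, c_specs))   # [] means full trace
```

A component that turns up in the orphan list is either an unnecessary function (violating the low-cost requirement) or evidence of a requirement the customer has not yet stated.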


Conclusion

Agile/short-cycle development and transformation can increase quality, decrease cost, and reduce the schedule concurrently, but only if the procedures, functions, roles, and responsibilities are changed and executed in a highly disciplined manner.  To quote a 25+ year veteran software developer who had worked 5 years in a SW CMM Level 3 environment: “In the first iteration of this process (the WADM), I thought that documenting all that requirements cr_p was more busy work.  I was wrong; this documentation is really helpful.”

[1] This definition comes from Agility Forum’s Virtual Extended Enterprise committee. The Agility Forum is based at Lehigh University.
[2] Unfortunately, I will have to leave the description of the differences for another presentation.

[3] An aircraft engineer had a saying posted on the wall of his office, “Cost, quality, schedule; choose two”; all customers prefer all three.
[4] Phil Crosby, Quality is Free.  Penguin Group, New York, 1980.
[5] For a discussion of the growth of the automotive industry, see my paper, Industrial Location Behavior and Spatial Evolution, Journal of Industrial Economics, Vol. 5 (1977) pp. 295-312
[6] In 2006 the Northrop Grumman Corporation applied for a patent on this process.  The process has been presented at several open forums since.  It is a CMMI Level 3 conformant process.
[7] This also occurs in hardware.  One famous example was the WWII M-3 tank, which showed up with a police siren.
[8] For more detail on the Planning Game, look for papers on XP.
[9] While I will not claim to be the only one that came up with this concept for requirements, I did come up with it independently of others over a period of some 25 years of tortured and frustrating attempts to find a good method to identify requirements.  I had tried use cases without design constraints and found the shortcoming in that and thus backed into the concept of design constraints.
[10] This came directly from XP.
[11] Kelly Johnson took advantage of the inherent short-cycle process within large hardware development efforts when he designed the U-2 and the SR-71 aircraft.