"The farther backward you can look, the farther forward you can see. "
SOA is particularly well suited to Rapid Implementation and Rapid Transformation, both outgrowths of RAD processes.
I've divided this post into two sections. First, a brief history of the trends in software coding and development I've experienced; then, a discussion of how these trends will be amplified by Rapid Application Implementation processes coupled with SOA.
Background
In my post The Generalized Agile Development and Implementation Process, I indicate that in my experience (over 40 years of it--boy, time goes fast), RAD software implementation processes produce more effective and cost-efficient software than any other method. There have been many changes in that time: from binary to machine language, from machine languages to assemblers, from assemblers to procedural languages, from procedural languages to structured code, from structured code to object-oriented languages, and from object-oriented languages to web services, which have been extended to service orientation.
Up to Object Oriented, all of these changes were developed to enhance the cost efficiency of creating and maintaining code. For example, going from coding in an assembler to coding in FORTRAN I enabled coders to construct an application much faster by using a single statement to insert a standard set of assembler code into the object deck of the application. No longer did the coder have to rewrite all of the assembler instructions for saving data to a given location on a disk--the statement, together with disk utilities, took care of ensuring there was adequate space, writing the data, and the associated "housekeeping" activities. The downside to this change was that the object code was much larger, requiring more CPU cycles and memory. Consequently, there were (and are) applications that still require coding in an assembler (e.g., real-time flight controls).
The second evolutionary transformation of coding was from assemblers to procedural languages like FORTRAN and COBOL. Due to the limitations in CPU cycles and memory of early computers, the programming culture of assembler reused code by branching into one section and then another to minimize the number of separate instructions. With increasing computing power and the adoption of procedural languages, the branching of code continued--to the point that, depending on the programmer, maintenance of the code became infeasible in many cases. One way that many programmers, including me, avoided the issue was to use a modular IT system architecture--that is, callable subroutines as part of the code. In some cases the majority of the code was in subroutines, with only the application work flow in the main routine. Sound familiar?
Anyway, this concept was formalized in the third evolutionary transformation, from procedural to structured code (initially known as Goto-less programming). This was a direct assault on programmers who used unconditional branching statements with great abandon. The idea was to reduce the ball-of-tangled-yarn logic found in many programs and thereby increase code readability and, therefore, maintainability. That's when I first heard the saying, "Intel giveth, structured programming taketh away", meaning that structured code uses a great deal more computing power. At this point, compiler suppliers started to supply standardized subroutine libraries as utilities (a current example of this type of library is Microsoft's Dynamic Link Libraries (DLLs)). Additionally, at this time Computer Aided Software Engineering (CASE) tools began to come on the market. These tools aided the code developer and attempted to manage the software configuration of the application.
The fourth transformation was from structured programming to object-oriented development. Object Oriented Development (OOD) is not a natural evolution from structured programming; it's a paradigm shift in the programming concept--notice the shift in terminology from "programming" to "development". OOD owes as much to early simulation languages as to coding. These languages--Simscript, GPSS, GASP--all created models of "real-world" things (i.e., things with names, or objects). Each of these modeled things was programmed (or parametrized) to mimic the salient characteristics of a "real-world" thing. OOD applied this concept of objects with behaviors interfacing with each other to "real-world" software. So, for example, a bank might have a class of objects called customers, with each object in the class representing one customer. The parametrized attributes of each customer object would include the customer's name, account number, address, and so on. Each customer object also has "behaviors" (methods) that mimic, enable, and support what the customer wants and is allowed to do--for example, deposit, withdrawal, and inquiry methods. The interaction of the input and output methods, together with the objects and attributes, creates the application.
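As a minimal, hypothetical sketch of this idea in Java (the class name, attributes, and method signatures below are illustrative only, not taken from any particular banking system):

```java
// Hypothetical sketch of the bank-customer object described above.
public class Customer {
    // Parametrized attributes mimicking the "real-world" customer
    private final String name;
    private final String accountNumber;
    private String address;
    private double balance;

    public Customer(String name, String accountNumber, String address) {
        this.name = name;
        this.accountNumber = accountNumber;
        this.address = address;
        this.balance = 0.0;
    }

    // Behaviors (methods) mimicking what the customer is allowed to do
    public void deposit(double amount) {
        if (amount <= 0) throw new IllegalArgumentException("Deposit must be positive");
        balance += amount;
    }

    public void withdraw(double amount) {
        if (amount <= 0 || amount > balance)
            throw new IllegalArgumentException("Invalid withdrawal amount");
        balance -= amount;
    }

    public double inquire() {
        return balance; // the inquiry method simply reports state
    }
}
```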
A key problem in creating object-oriented applications is the interfaces between and among the methods supporting the interaction among objects. OOD had no standards for these interfaces, so sharing objects among applications was hit or miss. Much time and effort was spent in the 1990s on integration among object-oriented application objects.
To resolve this interface problem, application developers applied the concepts of HTML, used so successfully by web developers. HTML is based on the Standard Generalized Markup Language (SGML), developed in part by the aerospace industry and the Computer-aided Acquisition and Logistic Support (CALS) initiative to support interoperable publishing of technical documentation. And this industry had a lot of it: at the time (the late 1980s), a Grumman Corporation technical document specialist reported that when the documentation for all aircraft on an aircraft carrier was initially loaded aboard, the carrier sank 6 inches. The SGML technical committee came up with standard tagging and standardized templates for document interchange and use. Web development teams latched on to the SGML concepts to enable transfer of content from the web server to the web browser.
Likewise, those seeking to easily interlink objects were looking to standardize the interface and its description. The result of this research was XML. Applying this standard to objects creates Web Service style Service Components. These components have a minimum of three characteristics that enable them to work as Web Service style Service Components (see the sketch following this list):
- They have persistent parametrized data mimicking the characteristics of some "real-world" object,
- They have methods that create the object's behavior, and
- They have interfaces adhering to the XML standard.
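To make the three characteristics concrete, here is a minimal sketch using the standard Java JAX-WS annotations (bundled with Java SE 8; a separate dependency in later JDKs). The service name and method are hypothetical, and a real component would persist its data in a database rather than in memory:

```java
import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.xml.ws.Endpoint;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical Web Service style Service Component: it holds parametrized
// data (account balances), exposes behavior (deposit), and its interface is
// published as XML (SOAP/WSDL) by the JAX-WS runtime.
@WebService
public class AccountService {
    private final Map<String, Double> balances = new ConcurrentHashMap<>();

    @WebMethod
    public double deposit(String accountNumber, double amount) {
        return balances.merge(accountNumber, amount, Double::sum);
    }

    public static void main(String[] args) {
        // Publishing the endpoint generates the XML (WSDL) interface automatically.
        Endpoint.publish("http://localhost:8080/accounts", new AccountService());
    }
}
```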
Having a great many Web Services is goodness, but it could turn the Internet into the electronic equivalent of a large library with no card catalog. The W3C and OASIS international standards organizations recognized this issue early and created two standards to help address the problem: the Web Services Description Language (WSDL) and Universal Description, Discovery and Integration (UDDI). These standards, with many additional "minor" standards, are converging on a solution to the "Electronic Card Catalog for Web Services" and/or the "App Store for Web Services".
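As an illustration of what the "card catalog" buys a consumer, JAX-WS can build a client proxy directly from a service's published WSDL. The URL, qualified name, and interface below are placeholders; in a full SOA deployment they would be looked up in a UDDI (or similar) registry:

```java
import javax.jws.WebService;
import javax.xml.namespace.QName;
import javax.xml.ws.Service;
import java.net.URL;

// Hypothetical service endpoint interface matching the published WSDL.
@WebService
interface AccountPort {
    double deposit(String accountNumber, double amount);
}

public class AccountClient {
    public static void main(String[] args) throws Exception {
        // Placeholder WSDL location and service name for the sketch above.
        URL wsdl = new URL("http://localhost:8080/accounts?wsdl");
        QName name = new QName("http://example.com/", "AccountServiceService");
        Service service = Service.create(wsdl, name);

        // Obtain a typed proxy and invoke the remote behavior.
        AccountPort port = service.getPort(AccountPort.class);
        System.out.println("New balance: " + port.deposit("12345", 100.0));
    }
}
```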
In summary, initially programming was about creating code in as little space as possible. As hardware increased in power (according to Moore's Law), programming and its supporting tools focused on creating more cost-efficient code, that is, code that is easy to develop and maintain. This led to the paradigm shift of Object Oriented Development, which began to create code that enabled and supported particular process functions and activities. In total, this code was much more complex, requiring more CPU cycles, more memory, and more storage. Since, by the 1990s, computing power had doubled and redoubled many times, this was not a problem, but linking the objects through non-standard interfaces was. The standards of Web Services provided the answer to this interfacing issue and created the requirements for another paradigm shift.
Now that Service Components (Web Services and objects meeting other interface standards) can enable and support the functions or activities of a process, organizations have a requirement for these functions to be optimally ordered and structured: to support the most effective process possible, in the most cost-efficient manner (as technology changes), and to be agile enough to let the organization successfully respond to unexpected challenges and opportunities.
Rapid, Agile, Assembly
Even a superb research library with a wonderful automated card catalog is of no value to anyone who has no idea what they are looking for. To use another analogy: my great uncle was an "inventor". But since he didn't know what type of mechanical invention he might want to work on, he would go to auctions of mechanical parts--which meant that he had a shed-sized room filled with boxes of mechanical parts. He didn't know if he would ever use any of them, but he had them just in case.
Many organizations' initial purchases of Service Components (Web Services) have been much like my uncle's acquisition of mechanical parts. Often, purchasing Service Components and supporting infrastructure components from multiple suppliers creates as many issues as it resolves, because later versions and releases of "the standards" are incompatible with earlier versions of the same standard, and many software suppliers interpret the standards to better match their existing products. The result is "the Wild West of Web Services", with supporters of one version of a standard or one product suite "gunning for" anyone who doesn't agree with them. Meanwhile, the organization's IT is cluttered with the equivalent of a Service Component boneyard (which is like an auto junkyard--a place where you can go to potentially find otherwise difficult-to-locate car parts).
To convert these Service Component parts into Composite Applications supporting the organization's processes requires a blueprint in the form of a functional design (System Architecture) and an identification of the design constraints on the process and tooling (including all applicable policies and standards). The key is the functional design and its implementation as a Composite Application using assembly methods; see my posts Assembling Services: A Paradigm Shift in Creating and Maintaining Applications and SOA Orchestration and Choreography Comparison.
In addition to the paradigm shift from software development (coding) to Service Component assembly, there are major changes in process when moving from creating Web Services to creating SOA-based Composite Applications. If the assembly processes are not well thought out, have poor enabling and supporting training and tooling, or are poorly executed, then assembling Service Components will be difficult and time- and resource-consuming. In that case, creating Composite Applications would be Slow Application Development (SAD???) instead of RAD. This happens frequently when first operating the assembly process, and seems to follow the axiom that "the first instance of a superior principle is always inferior to a mature example of an inferior principle". Many books on the development of SOA-based applications describe processes that are either inherently heavyweight, slow, and resource-consuming, or that will produce applications with a great many defects--and rectifying a defect in a product that is in use costs much more than rectifying it in design. Therefore, spending the time and effort to fully develop the process will continue to provide a genuine ROI with each use of the process--though the ROI is difficult to measure unless you already have a poor process to measure first, or you can measure the ROI in a simulation of alternatives.
Here is an outline of the RAD-like implementation process I would recommend. It fits very nicely within The Generalized Agile Development and Implementation Process I discussed. Implementing and operating/maintaining a SOA-based Composite Application is much more than simply coding Service Components into a work flow and then executing it in production. Several activities are required to implement or transform a Composite Application in an effective and cost-efficient manner using a RAD-like implementation process.
This implementation process is predicated on at least five items in the organization's supporting infrastructure:
- That the implementation team has established, or is establishing, an Asset and Enterprise Architecture Repository (AEAR). This should include a Requirements Identification and Management system and repository (where all of the verification and validation artifacts (tests) are stored).
- That the implementation team is using a CMMI Level 3 conformant process (with all associated documentation).
- That business process analysis (including requirements identification and modeling) and a RAD-like Systems Engineering and System Architecture process precede the implementation process.
- That the implementation team has access to a reasonable set of Composite Application modeling tools that link to data in the AEAR.
- That the organization's policies and standards are mapped into automated business rules that the process flow can use through a rules engine linked to both the Orchestration engine and the Choreography engine.
First, create and model the process or work flow of the Composite Application before assembling the Service Components. The key technical differentiator of an SOA-based application from a web service or other type of application is that the SOA application has three distinct elements: the work flow, the Service Components (which support the activities and functions of the process), and the business rules derived from the organization's policies and standards. This first activity creates the process flow and ensures that all applicable business rules are linked to the flow in the correct manner (the sketch below illustrates the separation).
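Here is a minimal sketch of that three-way separation; all of the interface, class, and rule names are hypothetical. The work flow component only sequences calls, the Service Components do the work, and a rules engine gates the flow:

```java
import java.util.List;

// Hypothetical contract for any Service Component in the flow.
interface ServiceComponent {
    void execute(OrderContext ctx);
}

// Shared context passed along the work flow.
class OrderContext {
    String customerId;
    double amount;
}

// Business rules derived from policies/standards, kept outside the flow.
interface RulesEngine {
    boolean permits(String ruleName, OrderContext ctx);
}

class SimpleRulesEngine implements RulesEngine {
    public boolean permits(String ruleName, OrderContext ctx) {
        // e.g., a credit-limit policy mapped into an automated rule
        if (ruleName.equals("credit-limit")) return ctx.amount <= 10_000;
        return true;
    }
}

// The work flow component: pure sequencing, no business logic of its own.
class OrderWorkflow {
    private final List<ServiceComponent> steps;
    private final RulesEngine rules;

    OrderWorkflow(List<ServiceComponent> steps, RulesEngine rules) {
        this.steps = steps;
        this.rules = rules;
    }

    void run(OrderContext ctx) {
        if (!rules.permits("credit-limit", ctx))
            throw new IllegalStateException("Blocked by business rule");
        for (ServiceComponent step : steps) step.execute(ctx);
    }
}
```

Because the rules live behind the RulesEngine interface rather than inside the steps, a policy change alters the flow's behavior without touching any Service Component.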
For those Composite Applications using Orchestration, the entire Composite Application should be statically and dynamically modeled using Service Component models to verify that the functions are in the proper order, that their interfaces are compatible, that they meet all design constraints, that the Composite Application meets all of the organization's business rules (which automate the organization's internal and external policies and standards), and that the Composite Application's performance and dependability meet their requirements.
I found that with RAD, performing this type of verification activity can take from one hour for a simple application to two days for a complex application with complete regression testing. Additionally, I found that both software coders/programmers and program management wanted to skip all of the verification activities I recommended, though for somewhat different reasons.
The developers' reason for not wanting to do it was, according to them, that it is "boring". Actually, skipping it is just more "programmer friendly": it ignores verifying that the requirements have been met and that there are no induced defects (defects introduced while updating the Composite Application's functionality), which a RAD process requires to ensure reliability. Additionally, it's more fun to program than to ensure that the version verified is the version that goes into production.
The reason program managers don't pay attention to the customer's requirements and requirements verification is that earned value procedures--the supposed measurement of a program's progress--are based not on measuring how many of the customer's system requirements have been met through verification and validation (V&V), but on resource usage (cost and schedule) versus the implementer's best guess at how much of each task in the "waterfall" schedule has been completed. Since earned value is the basis for payment by the customer and for the PM's and management's bonuses, that is what they focus on--naturally. Therefore, neither the implementers nor management have any inducement to pay a great deal of attention to ensuring that the customer's requirements are met.
However, with a good RAD process like the Generalized Agile Development and Implementation Process, the customer's system requirements are paramount. Additionally, my experience has been that focusing on the programmatic requirements (cost and schedule) pushes the schedule to the right (it takes longer), while focusing on ensuring the customer's requirements (functions and design constraints) are met pushes the schedule to the left.
Assemble the Composite Application and Verify
Second, once the work flow has been verified, it is time to link the Service Components to it in the development environment. The verification in this case is to ensure that the Service Components behave as expected (they meet the component requirements) and that they have the expected interfaces; a sketch of this kind of component-level check follows.
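As an illustration (using JUnit 4, against the hypothetical AccountService sketched earlier), component-level verification can be expressed as a repeatable test that confirms the component's behavior matches its requirement before it is wired into the work flow:

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical component-level verification: the requirement is that a
// deposit increases the account balance by exactly the deposited amount.
public class AccountServiceTest {
    @Test
    public void depositIncreasesBalanceByAmount() {
        AccountService service = new AccountService();
        assertEquals(100.0, service.deposit("12345", 100.0), 0.001);
        assertEquals(250.0, service.deposit("12345", 150.0), 0.001);
    }
}
```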
Verify the Sub-systems
Third, once components have been assembled into a portion of the system, or a sub-system, these, too, must be verified to ensure that they enable and support the functions allocated to them from the System Architecture (functional design) and meet all applicable design constraints. Again, this can be done in the development environment, and it must be under configuration control.
System Validation
Fourth, System Validation means that the current version is evaluated against the customer's system requirements: that it performs as the customer expects, with the dependability the customer expects, and that it meets all of the policies and standards of the organization (as embodied in the design constraints). Typically, this step is performed in a staging environment. There are several reasons for doing this, as most configuration management training manuals discuss. Among these are:
- Separating the candidate application from the developers. Developers have a tendency to "play" with the code, wanting to tweak it for many different reasons, and these "tweaks" can introduce defects into code that has already been through the verification process. Migrating the code from the development environment to the staging environment, and not allowing developers access to the staging environment, effectively locks out any tweaking.
- Migrating the application to the staging environment itself helps ensure that the migration documentation is accurate and complete. For example, the order in which code patches are applied to a COTS product can significantly change the behavior of the product.
- The staging environment is normally configured identically to the production environment, using the same hardware and software configurations (though for large installations, the systems are generally smaller than production). Code or product implementers have a tendency to "diddle with" the development environment; very often I've been part of efforts where the code runs in the development/assembly environment but not in production, because the configurations of the two environments are very different, with different releases of operating system and database COTS software. Having the staging environment configuration-controlled to match the production environment obviates this problem.
Roll-out
Fifth, roll-out consists of releasing the first, or Initial Operating Capability (IOC), version or the next version of the Composite Application. In a RAD or Rapid Application Implementation process, this may occur every one to three months. If all of the preceding activities have been successfully executed, post-roll-out stabilization will be very boring.
Advantages of SOA in a RAD/I Process
There are several advantages to using SOA in a Rapid Application Development/Implementation process including:
- The implementer can use a single Service Component, or assemble a small group of Service Components, to perform the function of a single Use Case. This simplifies the implementation process and lends itself to short-cycle roll-outs.
- Since the Service Components are relatively small chunks of code (and generally perform a single function or a group of closely coupled functions), the implementers can take at least the following actions:
- Replace one Service Component with another, technologically updated, component without touching the rest of the Composite Application, as long as the updated component has the same interfaces as the original. For example, if a new authentication policy is set, the authentication Service Component can be swapped for one implementing the new policy (see the sketch following this list).
- Refactoring and combining the Service Components is simplified, since only small portions of the code are touched.
- Integration of COTS components with components created for/by the organization (for competitive advantage) is simplified. While my expectation is that 90 percent or more of the time, assembling COTS components and configuring their parameters will create process-enabling and -supporting Composite Applications, there are times when the organization's competitive advantage rests on a unique process, procedure, function, or method. In these cases unique Service Components must be developed--these are high-value, knowledge-creating components. It's nice to be able to assemble them with COTS components in a simple manner.
- The structure of an SOA-based Composite Application has three categories of components: Service Components, Process or Workflow Components, and Business Rules. Change any of these and the Composite Application changes. Two examples:
- Reordering the functions is simplified because it requires a change in the process/workflow component rather than a re-coding of the entire application.
- If properly created in the AEAR, the business rules will quickly reflect any change in the organization's policies or standards. Since a well-architected SOA-based system will have a rules engine tied to the rules repository in the AEAR and to the orchestration and choreography engines executing the process or work flows, any change in the rules will be immediately reflected in any Composite Application using the rule.
These two examples demonstrate that a Composite Application can be changed by changing the ordering of, or the rules imposed on, the application, without ever writing or revising any code. This is the reason SOA is so agile, and another reason it works so well with short-cycle Rapid Application Development and Implementation processes.
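A small sketch of the replacement point made in the list above (the interface and both implementations are hypothetical): as long as the new component honors the same interface, the rest of the Composite Application is untouched.

```java
// Hypothetical shared interface: the Composite Application depends only on
// this contract, never on a concrete component.
interface AuthenticationService {
    boolean authenticate(String user, String credential);
}

// Original component.
class PasswordAuthentication implements AuthenticationService {
    public boolean authenticate(String user, String credential) {
        return credential != null && !credential.isEmpty(); // placeholder check
    }
}

// Technologically updated replacement (e.g., after a new authentication
// policy is set); same interface, so nothing else changes.
class TokenAuthentication implements AuthenticationService {
    public boolean authenticate(String user, String credential) {
        return credential != null && credential.startsWith("token:");
    }
}

// The work flow receives whichever component is configured; swapping
// PasswordAuthentication for TokenAuthentication requires no other edits.
class LoginWorkflow {
    private final AuthenticationService auth;
    LoginWorkflow(AuthenticationService auth) { this.auth = auth; }

    boolean login(String user, String credential) {
        return auth.authenticate(user, credential);
    }
}
```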
Tooling
In addition to good training on a well-thought-out Rapid Application Development and Implementation process, creating SOA-based Composite Applications requires good tools and a well-architected AEAR. While implementing the process, acquiring the tooling, and instantiating the AEAR may seem like a major cost risk, it really doesn't need to be (see my post, Initially Implementing the Asset and Enterprise Architecture Process and an AEAR). Currently, IBM, Microsoft, and others have tooling that can be used "out of the box", or with some tailoring, to enable and support the development and implementation environment envisioned in this post. Additionally, I have used DOORS, SLATE, CORE, RequisitePro, and other requirements management tools on various efforts and can see that they can be tailored to support this Rapid Application Development and Implementation process.
However, as indicated by the Superiority Principle, it will take time, effort, and resources to create the training and tooling, but the payback should be enormous. I base this prediction on my experience with the RAD process I developed in 2000, which by 2005 had been used on well over 200 software development efforts, significantly increasing both the effectiveness of the IT tools supporting the process and the cost efficiency of the implementations.
(PS--the Superiority Principle is "The first instance of a superior system is always inferior to a mature example of an inferior system").