Tuesday, December 19, 2017

The Digital Future: Services Oriented Architecture and Mass Customization, Part 3A

From Parts 1 and 2

Part 1 discussed the four ages of mankind.  The first was the Age of Speech; for the first time humans could “learn by listening” rather than “learn by doing”; that is, data could be accumulated, communicated, and stored by verbal communication.  Over the course of 300,000 years, it also transformed hunting and gathering into an economic architecture of small, somewhat settled communities.  Settlement produced the first significant increase in economic activity, in wealth per capita, and in academic activity, in the form of the shaman within the tribal organization.
The second, the Age of Writing, produced a quantum leap in the data and information that could be accumulated, communicated, and stored.  This occurred over a period of at least 6,500 years.  During this time, human activity evolved from everyone working to survive to a diversity of jobs and trades and the economic stratification of political organizations.   Again, the total wealth of humanity took a leap of orders of magnitude as the economic architectures of city states, then countries, and then empires evolved.  The academics evolved from the shaman to priests, clerics, researchers, mathematicians, universities (e.g., the Museum at Alexandria, ~300 BC, and the University of Bologna, 1088), and libraries.

The third, the Age of Print, started with Gutenberg’s press in 1455, but blossomed with Luther’s radical admonition, around 1517, that everyone should “read” the Bible.  Suddenly, the quantity of information and knowledge took a leap of several orders of magnitude as all types of ideas were accumulated, communicated, and stored.
 
Part 2 dealt with the history of Services Oriented Architecture (SOA) as it developed hand in glove with computing architecture—a natural fit.

This part, Part 3A, deals with how SOA works with mass customization of products, systems, and services.  Part 3B will show where the three economic architectures (infrastructure, mass production, and mass customization) will be employed in the Digital Age.

Mass Customization Using Services Oriented Architecture

As I will attempt to demonstrate, Services Oriented Architecture (SOA), while beginning its life as an architecture for computer applications, is really a new economic system that will supplant Capitalism as the economic engine.  By contrast, the prescriptive economic architectures of socialism and communism do not have the ability to create value; they can only redistribute it so that everyone ends up in the same economic class…destitute.

Mass Production Architecture

I’m positing that in the Digital Age mass customization will replace mass production for products, systems, and services.  This does not mean that mass production will go the way of the dodo.  In fact, there will be three architectures: infrastructure, mass production, and mass customization.

Many times, many people may want, and are willing to pay for, the same or nearly identical items.  As Adam Smith discussed, the reason mass production produced so much wealth is that it is cost efficient, or, as many economists point out, there are economies of scale.  Economies of scale result from turning the process of manufacturing a product into discrete steps or activities.
    
Adam Smith discussed this concept in the first chapter of An Inquiry into the Nature and Causes of the Wealth of Nations.  He showed how the same number of individuals could make an order of magnitude more pins per day when each one performed only one step in the process and repeated that step for each pin.
 
According to Adam Smith, once the process for creating a product is divided into discrete steps, many individuals will “figure out” how to create tooling to improve their step of the process.  While Adam Smith did not document this step, he implied that the business owner, the one selling the product and employing the personnel, would then purchase the tooling.  Again, the productivity of each worker using the tooling increases. 

Since Adam Smith’s time, economists have been fascinated with these two concepts.  The first Adam Smith called the “Division of Labor”.  The second has no name so I call this effect “process multiplication”.  The reason is that tooling increases the effectiveness of labor the way a gun increases the effectiveness of a soldier—the military calls this effect “force multiplication”.

The more tooling that is used in the process, the more the tooling costs.  The more it costs, the more product is produced (hopefully) and the more cost efficiently it is produced.  This cost efficiency means the product costs less to produce.

Because the production process has become tooling intensive, it has also become capital intensive—tooling costs money, and a lot of expensive tooling costs a lot of money (capital—hence Capitalism).  Actually, there is no such thing as Capitalism; it’s really mass production architecture.  Since it is capital intensive, anyone who does not have the money to purchase the tools can’t produce the product as cost efficiently.  This leads to both the concept of Economies of Scale (i.e., the greater the quantity of product you make, the better the division of labor and the process multiplication of more tooling) and the concept of a Barrier to Entry, caused by the need to have the money to buy the tooling to produce the product cost competitively.

The reason for this brief recitation of a significant portion of the mass production architecture is that for the near and mid-terms there will be certain sectors of the economy where mass production will continue to make sense—it will continue to be the most cost efficient architecture for producing a certain class of products.

The Day before Mass Customization

In the early 1990s, I worked with the Strategic Supply Chain Management (SSCM) Project.  The goal of this team was to find ways to improve the cost efficiency of mass production systems, including the development and implementation of products.  The reason for the formation of this project was to counter the gains in market share by Japanese and other foreign competitors.

The team came to several conclusions.  First, Just In Time (JIT) manufacturing was an imperative.  JIT means that the subcomponents of a product are manufactured as they are needed for the assembly of a product in response to a customer order—all of this in near real time.  JIT means that warehousing costs are nearly eliminated.

I had already worked on two projects of this type.  The first, in 1984 and 1985, created a paperless product line for a major office furniture company (the design of this line won the Society of Manufacturing Engineers Lead Award in 1987).  In 1986 and 1987, I worked on a program for the US Navy called Rapid Access to Manufactured Parts (RAMP).  It, too, used JIT concepts.

The SSCM team came to two other conclusions.  First, that standardized contractual clauses should be agreed to by contractor and subcontractors before any contracts are bid on.  In other words, there should be a team, built by and around the contractor, to “go after” proposed projects.

Second, that a better customer requirements identification and management process is needed to effectively and cost efficiently manage a supply chain.  These two items are required for the Digital Age and mass customization.  While the first has gained a very small amount of traction, businesses in the US, at least, have paid no attention to the second.  Sometime in the near future it will become self-evident that this is a problem.

While the static and dynamic architecture of mass production, as described by Adam Smith and incorporated into the US Constitution, has served the United States and the rest of the world well, it will be supplanted in the Digital Age by mass customization using Services Oriented Architecture.

Mass customization is creating products, systems, and services tailor-made to the customer’s requirements.  Actually, this was tried by the US automotive industry after WWII with limited success.  In the 1950s and 60s the “option list” for automobiles was quite long.  For example, you could order a car with power steering but not power brakes, or with an AM or an AM/FM radio.  Every option was individually priced.  So each customer could get a vehicle that met their exact requirements.

However, the Japanese took advantage of the inherent costs of this approach, in terms of time to fulfill orders, defects caused by not meeting the customer requirements, etc., by reducing the costs of their vehicles and decreasing delivery time through offering bundled packages of options, among other things.

Now, customers could only order a sun roof if they accepted a power seat on the driver’s side, even if they didn’t want it, because the seat was part of the bundle or package that included the sun roof.  In the future, this will not be the case.

Mass Customization

In the 1990s, I became a member of several international standards committees.  The first was the Agile Manufacturing Enterprise Forum, at Lehigh University.  Its definition of agile manufacturing was “an organization that has created the processes, tools, and training to enable it to respond quickly to customer needs and market changes while still controlling costs and quality.”  The team determined that this is accomplished by assembling a consortium of small organizations (businesses, consultants, and possibly academics).  This consortium would work as a team to create products, systems, and services.

The next team that I joined, The Next Generation Manufacturing (NGM) Project, elaborated on the consortium concept.  This project focused on how to create an agile manufacturing enterprise.  It concluded that there are two ways to create design/development/implementation/manufacturing consortiums.  These happen to be identical to the ways that programming languages were implemented in the 1960s and to the ways services are assembled in SOA.

The first method for assembling services into a program is called “Orchestration”.  Orchestration is gathering all of the needed software functional components together, then ordering and structuring them into the program that performs the task required by the customer.  In the older programming technology this would be called compiling a program; that is, converting all of the instructions into machine code before executing the program.
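To make the orchestration idea concrete, here is a minimal sketch in Python.  The service functions and the Orchestrator class are illustrative placeholders, not part of any particular SOA toolkit; the point is only that the coordinator knows the whole workflow up front, the way a compiler knows the whole program.

```python
# Minimal sketch of orchestration: a central coordinator gathers the needed
# service functions up front and invokes them in a fixed, predefined order.
# All names are illustrative, not from any real SOA framework.

def take_order(req):
    return {"order": req, "status": "accepted"}

def schedule_production(order):
    return {**order, "status": "scheduled"}

def assemble(order):
    return {**order, "status": "assembled"}

def ship(order):
    return {**order, "status": "shipped"}

class Orchestrator:
    """Knows the entire workflow in advance, like a compiled program."""
    def __init__(self, steps):
        self.steps = steps              # ordered list of service functions

    def run(self, req):
        result = req
        for step in self.steps:         # invoke each service in its fixed order
            result = step(result)
        return result

if __name__ == "__main__":
    workflow = Orchestrator([take_order, schedule_production, assemble, ship])
    print(workflow.run({"item": "custom widget"}))
```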

For mass customization, orchestration is assembling a team or consortium of small and entrepreneurial organizations, then creating the product, system, or service.  Because it’s mass customization and not mass production, the team creates only one item.

Currently, one of the better examples of organizations that use the orchestration form of business architecture is the custom car shop.  In fact, there is one television channel where half its shows are about shops that create custom cars.  In these shows, a customer starts with something ranging from well used, to badly abused, to complete junk.  The customer tells the custom car shop owner his or her requirements and the owner tells the customer the approximate cost.

When the vehicle, in whatever condition, comes into the shop, the shop’s team disassembles it entirely.  They send all of the metal body components to a member of the consortium that functions as a “sand” blaster and epoxy coater, send the engine and other mechanical components to a shop that functions as the engine and mechanical parts rebuild center, send seats and other interior components to a shop that functions as the interior restoration and customization center, and toss the parts that can’t be salvaged.

When the metal body comes back to the shop, there is generally rust and damage that didn’t show prior to the blasting and will need to be repaired.  Additionally, if the job is to “customize” as opposed to “restore” the vehicle, a body shop function will need to change the body’s shape to enable the customization.  Frequently, this is a function of the shop itself.

If, for example, a fender is in too poor a condition to be repaired, then the shop may go to a junkyard to find the part or may go to an organization that manufactures metal components that are no longer available.  This is yet another function of the consortium.

When the body work is completed, the shop will send the vehicle’s body, and sometimes its engine and mechanical components, out to a paint shop; and if there are chrome parts, they may be sent to a shop specializing in chroming parts—two more functions of the consortium.  Frequently, for customized vehicles a new exhaust system needs to be constructed—yet one more function; and finally the vehicle is assembled in its restored or custom form.

Since these custom car shops (especially those with good to great reputations) have a fairly constant stream of customers, they can set up agreements (read contracts, with standard contractual clauses) as to who does what part of the work, the timeframes required, and the costs, prior to starting a job.  In effect, the consortium functions as a single unit, the way services assembled for execution function as a program.  So we could call this organizational architecture the Orchestration Mass Customization Architecture.

The second method for assembling software services is called “Choreography”.   Choreography differs from orchestration in that the core organization—the one accountable to the customer for the product, system, or service—organizes the team on an as required basis.  At the start of the effort it does not have a consortium in place.  Instead, the core organization adds functional services as it deems necessary.
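As a hedged sketch, the software version of choreography can be illustrated with a simple event bus in Python.  There is no pre-built workflow; each service subscribes to the events it cares about and reacts as they occur.  The EventBus class and the event names are invented for illustration only.

```python
# Minimal sketch of choreography: no central workflow. Services subscribe to
# events and react as needed; the "team" grows only when an event calls for a
# new function. Names are illustrative only.

from collections import defaultdict

class EventBus:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self.subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        for handler in self.subscribers[event_type]:
            handler(payload)

bus = EventBus()

# The core organization only raises the initial event; each service added
# later decides for itself what to do and what event to publish next.
def on_script_accepted(payload):
    print("Producer assembles a crew for:", payload["title"])
    bus.publish("crew_assembled", payload)

def on_crew_assembled(payload):
    print("Filming starts for:", payload["title"])

bus.subscribe("script_accepted", on_script_accepted)
bus.subscribe("crew_assembled", on_crew_assembled)

bus.publish("script_accepted", {"title": "An Example Story"})
```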

Most times, core organizations engaged in research and development or creative content activities use the choreographic organizational architecture.  This would include research institutes, the creation of exotic materials, the initial development of an entirely new field of engineering, like ocean engineering or space engineering, and, most familiar to most people, the creation of entertainment content.

The motion picture and now the video content industry have long used the choreographic organizational architecture.  To start, a “screenwriter” authors a story, or adapts one from a book, to the “must meet” requirements for a video or movie; that is, that it fits within a time-frame, that much of the background of the story is told in dialog, and so on.

Once the screenwriter has a script, he or she will send it out to producers.  If a producer likes the script, that is, thinks it will, in general, “make money”, he or she will assemble a team to produce the film or video.  This team is assembled as needed, not like the old studio system where a team has been preassembled.

Actually, these days and going forward, more videos will be produced by “amateurs” using current and near-future technology.  This is very likely going to undermine the entire “entertainment industry”.  It is the next step beyond the studio system, toward customer-centric entertainment.

The Three Economic Architectures of the Digital Age

In Part 3 B, I will describe how the three architectures that I have defined will work together in the Digital Age to provide unprecedented value to the largest number of people possible.

Thursday, December 7, 2017

The Digital Future: Services Oriented Architecture and Mass Customization, Part 2


From Part 1

There have been four ages of mankind.  The first was the Age of Speech; for the first time humans could “learn by listening” rather than “learn by doing”; that is, data could be accumulated, communicated, and stored by verbal communication.  Over the course of 300,000 years, it also transformed hunting and gathering into an economic architecture of small, somewhat settled communities.  Settlement produced the first significant increase in economic activity, in wealth per capita, and in academic activity, in the form of the shaman within the tribal organization.

The second, the Age of Writing, produced a quantum leap in the data and information that could be accumulated, communicated, and stored.  This occurred over a period of at least 6,500 years.  During this time, human activity evolved from everyone working to survive to a diversity of jobs and trades and the economic stratification of political organizations.   Again, the total wealth of humanity took a leap of orders of magnitude as the economic architectures of city states, then countries, and then empires evolved.  The academics evolved from the shaman to priests, clerics, researchers, mathematicians, universities (e.g., the Museum at Alexandria, ~300 BC, and the University of Bologna, 1088), and libraries.
 
The third, the Age of Print, started with Gutenberg’s press in 1455, but blossomed with Luther’s radical admonition, around 1517, that everyone should “read” the Bible.  Suddenly, the quantity of information and knowledge took a leap of several orders of magnitude as all types of ideas were accumulated, communicated, and stored.

This created many changes in sociopolitical organizations and much cultural upheaval.  After major wars (e.g., the Thirty Years’ War, the Hundred Years’ War, WWI, and WWII, with continuous warring in between) and “the age of exploration and exploitation (colonization)”, which created the footings of globalization, in 1776 economic architecture was formalized into the mass production industrial architecture called Capitalism.  Capitalism, with its risk/reward and mass production, has created far more wealth for humanity, though several new religions have destroyed a significant percentage of this wealth; religions like Fascism (a throwback to feudalism), Communism, Socialism, and Liberalism (all of which replace “personal responsibility” with “social responsibility” as their major article of faith).

Now humanity is on the cusp of the Digital Age.  It, too, will create orders of magnitude more data, information, and knowledge.  And it promises another giant leap in wealth, but with commensurate risks of barbarism and wars, from all sides, unless there can be integration of cultures, not “cultural diversity”.  History graphically demonstrates that cultures always clash and that “diversity” cultures implode from the clash.  I will discuss these later, but first Part 2 will discuss the inception and gestation of the Digital Age.

Part 2: The Digital Age: A Personal Perspective of How Services Oriented Architecture Evolved


“Intel giveth, Microsoft taketh away.”
A saying by computer software developers circa 1975

From its start to today, the Digital Age could be called the Age of Assembly.  I know because I was there.

Starting in earnest sometime in the late 1960s, humanity has entered a new age, the Digital Age!  And, as on the past three occasions, humanity has not realized the potential of this change in information technology.

Computing Power

In 1965, Gordon Moore, who later co-founded Intel, observed that “the number of transistors in a dense integrated circuit doubles approximately every two years.”  Effectively, what this means is that raw computing power doubles approximately every two years.
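As a back-of-the-envelope illustration (my arithmetic, not Moore’s), doubling every two years compounds dramatically; a short Python calculation shows the scale:

```python
# If capacity doubles roughly every two years, the growth factor over
# n years is 2 ** (n / 2).  Illustrative arithmetic only.

def moores_law_factor(years, doubling_period_years=2):
    return 2 ** (years / doubling_period_years)

# From 1965 to 2017 (about 52 years) the claim implies roughly a
# 2**26, or ~67-million-fold, increase in transistor counts.
print(round(moores_law_factor(52)))   # 67108864
```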

Coding Cost Efficiency

But the Digital Age requires three technologies.  In addition to faster and more powerful hardware, it requires the same abilities as the other ages: the ability to accumulate and analyze raw data, and to communicate and store both the data and the results of the analysis.

The ability of a digital system to accumulate and analyze data is based on its programming.  In 1956, I played with my first computer, or what was referred to at that time as a computer.  This computer was programmed with a wire board—a board with a matrix of connectors; the programming consisted of wires connecting the “proper” connectors together.  Data was inserted using Hollerith cards (referred to as punch cards).

By 1960, programming had graduated to computer coding using the punch cards.  I coded my first program in Symbolic Programming System (SPS), a form of Assembler—the first step up from coding in machine language (1s and 0s)—as a member of my high school’s math club.  An Assembler is little more than a useful translator into machine code, making it simpler for the coder to create code and to identify bugs—both greatly increasing the coder’s and the code’s effectiveness and also the cost efficiency of creating code.

By 1964, I started taking the first of three computer courses offered by the Math Department of the university I attended.  This class included programming in machine code (literal ons and offs), SPS, and Fortran 1.  The latter was the first use of the concept of Services Oriented Architecture (SOA). 
Fortran 1 (Formula Translation 1) was among the earliest scientific programming languages.  These languages were made up of a set of computer commands (functions): read (get input), print (provide output), do mathematical calculations (add, subtract, etc.), and perform logical steps (loop, branch, and so on).  Actually it’s much more complex than this, but I’m not quite ready to take a swan dive into the minutiae of computer software and hardware design and architecture.

By 1965, a coder could create hundreds of instructions per hour, rather than a couple of dozen as in 1956.  Since the computing power of the hardware was (and is) dramatically increasing, making the coder more cost efficient made sense.  Additionally, it meant that computers could handle much larger and more complex tasks.

Data Storage and Data Communications

Between 1964 and 1980, two other technological developments occurred that led to the start of the Digital Age: data storage devices and data communications.

In 1964, I first saw, wrote code for, and used a storage device called a disk drive.  Prior to this, data was stored either on cards or on tape drives.  Like the CPU, data storage hardware has continued to follow Moore’s Law.  Today, the average smart phone has 100 to 1000 times the storage that the “mainframe” computer had when I was working on my Ph.D.; and that 300MB of storage cost tens of thousands of times more and took up more than a basketball-court-sized room.

So the abilities to store and analyze data and information have become much more effective and cost efficient.   So has the technology’s ability to communicate data, information, and knowledge.
In 1836, Samuel Morse demonstrated telegraphic communications, the first machine-language method for communicating information.  It took until the mid-1970s for telegraphic communications to evolve into a wide variety of data communications hardware, software, and communications protocols.  Then it took the next 20 years to coalesce into the hardware, software, and communications protocols we know as the “Internet” and the “web”.

During the same 20 years the final element for the digital age evolved, Services Oriented Architecture. 

Services Oriented Architecture

At the dawn of digital age programming, all programs were simply an ordered set of machine instructions, with no loops and no logic.  When computer languages like FORTRAN 1 first evolved, they were created with a good deal of branch logic to allow code to be reused.  Why?  Because the memory and data storage on the machine were so small.  So, at the time it made sense to reuse sections of code that were already in the program, rather than rewrite the same code.

Inevitably this led to what “computer scientists”, people who taught programming rather than writing programs for a living, officially called “unstructured” programming, or in the slang of the day, “spaghetti code”.  In production at the time, unstructured programs were much faster in execution on the hardware available.  However, they were also much more difficult to understand, especially if they weren’t properly documented.  This meant that they were hard to debug and hard to update or upgrade.

According to the computer scientists, the chief culprit of unstructured programming was the unconditional branch, called the “goto” statement.  This statement indicated the location of the next coded statement that the computer should execute, and this, in general, was back to some location earlier in the program.  This meant that in following the program, it jumped around, rather than going from top to bottom, which would have made it easier for the inexperienced to follow.  So, like all liberal bureaucrats, they outlawed unconditional branching.

Then they replaced the goto statement with an unconditional branch to another program, initially called a subroutine.  This fragmented the program and intuitively created what could have become the services of SOA, but didn’t.  Instead, these became the Dynamic Link Libraries (DLLs) of computer functions; utilities of the operating system that can be used to support all programs on a computing system.

Instead of structured programs, I opted for modular programming: programs built from modules that each performed a given function for the program.  For the application that I wrote as part of my Ph.D., I used this architecture; and I submit that it’s the basis for SOA-based applications.

Be that as it may, I’ve described how I see the concepts of SOA evolving from the machine language of the late 1950s, to assembler, to programming languages with DLLs, and then to modular programming.

Each time, the number of machine instructions increased by orders of magnitude, meaning that a single instruction to call a “subroutine” or function could generate 50 to 500 or more instructions, and each of these instructions could generate 1,000 to 10,000 instructions.  Actually, one time I did some research and found that one 20-line program generated 63 megabytes of code.

There was still a problem.  It really helped the cost efficiency of the coder to be able to create massive amounts of code fast, but many times the customer for whom the program was being created wanted to interlink that program with other programs.  Most often these programs were created in different computer languages (e.g., FORTRAN, COBOL, PL1), and on various brands of computers (e.g., IBM, DEC, HP, Sun, Silicon Graphics).

Obviously, the problem was that all of these various software and hardware suppliers were competing for business and therefore making it hard to interlink their products with competitors’ products.  The reason was simple: to force their customers to buy only their products.  Today you can see the identical strategy, with Apple, Microsoft, and others building their own “ecosystems” to ensure their customers stay their customers.

In the early 1980s, customers began to recognize this, especially with the advent of data networks.  This set off a series of international standards committees for all components, from data and how to store it, to data communications.

For example, I was on a number of Open Systems Interconnection (OSI) data communications committees starting in 1982.  One of the members of the team I led had been a coauthor of the Standard Generalized Markup Language (SGML).  Both HTML and XML, the languages of the web, were derived from SGML.  I was peripherally involved with STEP (PDES) for engineering data, X.400 for e-mail, and X.500 Directory Services, from which LDAP was derived.  This was all between 1982 and 2009.

Another step in creating SOA for information and knowledge-based systems (the so-called “Big Data” systems) is a standardized interface for the Services (i.e., components or functions).  Various organizations, including the W3C, worked on this issue and came up with Web Services.  Web Services uses XML in a defined and standardized manner to enable software functions to communicate without having to write an interfacing routine between and among functions.
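To give a feel for the idea (a simplified, SOAP-like sketch, not a real WSDL/SOAP message), the example below uses Python’s standard XML library to build and parse a service request in an agreed format.  The operation and field names are invented; the point is that any two services honoring the same format can exchange calls without a hand-written interface routine between them.

```python
# Rough illustration of the Web Services idea: every function call is
# expressed as XML in an agreed format, so services can interoperate without
# custom interfacing code.  This is a simplified, SOAP-like sketch only.

import xml.etree.ElementTree as ET

def build_request(operation, params):
    envelope = ET.Element("Envelope")
    body = ET.SubElement(envelope, "Body")
    op = ET.SubElement(body, operation)
    for name, value in params.items():
        ET.SubElement(op, name).text = str(value)
    return ET.tostring(envelope, encoding="unicode")

def parse_request(xml_text):
    envelope = ET.fromstring(xml_text)
    op = envelope.find("Body")[0]        # the first child of Body is the operation
    return op.tag, {child.tag: child.text for child in op}

request = build_request("GetPartPrice", {"partNumber": "A-100", "quantity": 25})
print(request)
print(parse_request(request))
```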

The final step in creating SOA was the design and development of a reference architecture or model denoting the components of its supporting infrastructure, assembly process, and functions needed to create a robust application using SOA.  I was a team member on the OASIS team that created this model.

So, from personal experience, I’ve seen the international standards for data formats and data communications develop.  All of the above are the precursors of Services Oriented Architecture as used for a new economic architecture.

Mass Customization Using Services Oriented Architecture--Part 3, Coming Soon


Part 3 will discuss an economic version of Services Oriented Architecture and how it will reformulate business organizations in the Digital Age.

Wednesday, December 6, 2017

The Digital Future: Services Oriented Architecture and Mass Customization, Part 1

Part 1: The Digital Future

I was challenged to forecast changes in economic systems based on both my knowledge of spatial economic systems and on my experiences with computers, data networking, and automation.  What I have come up with is a four part article on the digital future.

Since I normally tend to build to a thesis, like any good engineer designing a product, instead of stating my thesis and then defending it, as lawyers and journalists normally do, I will take a shot at the thesis of this paper first.
 
We have entered the Digital Age, in which Capitalism, which describes the economic system of the Age of Print, will be succeeded by Economic Services Oriented Architecture producing Mass Customization.  It will be an age where consortiums are formed from small and entrepreneurial organizations to produce the products, systems, and services the customer wants.  This architecture will replace the current organizational architecture of a single large organization producing a large quantity of products that “satisfice”; that is, they come somewhat close to satisfying the customer’s requirements—they suffice.

The article is constructed in five parts.  This part, Part 1, discusses the economic history of humankind based on how humans have communicated and stored data and information.  I feel it’s important to provide the context for my forecast.

The Second Part is a discussion of the coming of the Digital Age based on my experience of seeing it over the past 50+ years.  I’ve found a pattern in the seeming chaos of change in data and information storage.  This pattern leads me to the architectural changes that underlie my forecast.
The Third Part is a more detailed discussion of this new architectural pattern called Services Oriented Architecture (SOA).  I will give a couple of examples to demonstrate how SOA will work economically.

The Fourth Part will consider how SOA and the Digital Age will change an individual’s life by giving three examples.

The Fifth Part will show how converting to SOA will create changes as drastic as the changes from the feudal economic architecture to the Industrial architecture.   In this part, I will forecast the change to a number of industries.  Most of these changes are already starting to occur, though in a very minor way.


An Exceedingly Brief History of European and American Civilization

There have been four ages for humankind. 


The Age of Speech

The age of speech (verbal communication), from circa 300,000 BC to circa 6,000 BC, was when, for the first time, data, information, and knowledge could be transferred and stored within and between generations.  This was the first time that clans and tribes formed.  And, according to archeologists, there was a glacially slow revolution from hunting and gathering with stone tools to agriculture and metals.  This was the economic architecture of the time.  During this period, the shaman, or priest, was the holder of the tribal information base.


The Age of Writing

The age of writing (written communication), from circa 6,000 BC to 1455 AD, was when data, information, and knowledge could be more accurately transferred over longer distances and stored for much longer time periods (in fact, there are documents and records from this entire period).  Political institutions grew from tribes migrating all over the landscape to settled (or at least apparently settled) city states, and then to regional and national states.  This was the second form of economic architecture.

During this age the first known libraries and colleges formed; for example, the library and museum (college/research center) at Alexandria.  And again, more than 600 years later, after the various barbarian tribal invasions had sent Europe back to the talking age (during the dark ages), Carolus Magnus (Charlemagne) managed, around 800 AD, to very slightly reintroduce writing; later, colleges were formed in what is now Italy.


The Age of Printing

During the age of printing (printed communications), from 1455 AD to between circa 1942 and 1992, data, information, and knowledge became much more readily available to humankind.  Thanks, in large part, to Martin Luther’s insistence that everyone should be able to read the Bible, Northern Europe learned to read, and read ideas and concepts that were not part of Catholic Church doctrine.
By 1776, Adam Smith had described how wealth was created.  Together with the growth of engineering knowledge and the ability of individuals to take risks and fail or succeed, this ushered humans into the era of Mass Production and Liberty.  This is the basic economic architecture of the Age of Printing.

Included in mass production was the mass production of education, based on a school for all, teaching reading, writing, and arithmetic.  This has led to the mass production educational systems of today.


Knowledge and Wealth

You should note that with speech, humanity grew significantly wealthier than other animal species.  The reason is that humans could accumulate more and better data, information, and knowledge through speech.

With writing, humanity accumulated much more wealth.  This wealth was exceedingly badly distributed.  Nonetheless, looking at places like Pompeii, even some of the slaves could accumulate small amounts of wealth (while “the bread and circuses” form of socialism led to the eventual destruction of the Roman Empire).

With printing, a much larger chunk of humanity created orders of magnitude more wealth.  The accumulation of knowledge of how the Universe works has led to mass production, which has meant mass wealth.  For example, if there is a disaster now, people expect the restoration of power, water, fuel, and communications immediately; this was never true even for the wealthiest in the age of speech or writing, or for most of the age of print.  This demonstrates how exceedingly rich even the poor are today, when compared with the rest of human history (this is something the liberal entitlement generation has forgotten).


The Digital Age: The next Age


The next Age has begun.  It began, in a real sense, with WWII.  It gestated from the 1950s to the mid-1960s.  I will discuss this period and beyond in the next part of this article.

Friday, October 13, 2017

The Healthcare Information Infrastructure Business Case

Benefits of the Healthcare Information Infrastructure

Increased Utility of Information to the Customer (Patient)

The customer regulates access to his or her medical record
Healthcare information is immediately available to healthcare professionals where and when the customer needs assistance
Better tracking of customer medical history
The healthcare information is 99.999999% secure from cyber attacks

Reduce Regulations

Little or no need for HIPAA regulations and procedures with respect to the customer
Must reduce Obamacare, Medicare, and Medicaid processes and regulations to fully implement this infrastructure

Reduce Cost (10 to 30%)

Reduced paperwork by doctors and hospitals—a major cost driver
Reduced need for additional forms by customers
Automated information insertion into customer records means no need for duplicate testing
Better ability to audit all stakeholders in the infrastructure

Side Benefits

Increased access to summary data by researchers

Downside

Time to implement

Prototype Phase – about 2 Years
Pilot/Customer Acceptance Phase – about 1 Year
Build out – Initial Operating Capability (IOC) about 5 Years

Political/Culture/Economic

Politicians, Doctors, Hospitals and Clinics, Pharmacies, Insurance Companies, and current Bureaucrats will not like it because of the way it affects their processes, procedures, and ways of making money
Remember: “The first instance of a superior principle is always inferior to a mature instance of an inferior principle”

An Architecture for a Customer-Centric Healthcare Information Infrastructure

1.    Healthcare IS: The Obsolete and Broken Infrastructure

The healthcare information infrastructure is both an obsolete and a broken system.  It has at least four major problems.  The total system is technologically fractured between paper-based and computer-based systems, and the computer-based portion is shattered among competing technologies (for example, PC versus smart phone).  The number of healthcare information regulations and standards militates against a coherent information infrastructure.

The Fractured System

Any system in a technology change process will be fractured between the old and new technologies.  However, the healthcare system is particularly fractured because of overburdening federal regulations for insurance, for care, and for HIPAA.
Then there are the interlocking state regulations and insurance policies and procedures.  All of these make creating a healthcare information infrastructure difficult and complex.
Additionally, a significant portion of the medical professionals are the opposite of tech-savvy.  So they stick to paper—lots and lots of paper.  Compared with electronic storage, the simple storage of paper is expensive.  Then there is the time and expense of retrieving a patient’s medical records.  All of these costs are absorbed in the cost of healthcare.
But there is yet another, potentially life-threatening expense: the loss of records, and the inaccessibility of records when patients travel.  When people move for a new job or other reasons, the medical data contained in their records is more often than not lost.
When they travel and have an accident or medical problem, their medical records contained in this fractured infrastructure are normally inaccessible.  The inability of onsite medical personnel to have access to a patient’s medical record can be an unnecessary death sentence for the patient.

Number of Regulations and Standards

There are other problems caused by this fractured healthcare information infrastructure.  As noted, the regulations supporting the “Affordable Care Act” have caused a large increase in the office staff needed to manage paper records.
This is in addition to the HIPAA and state regulations, and insurance companies’ policies and standards.  Is it any wonder that doing paperwork is the single largest expense in a doctor’s office or medical center?

Non-linkage to Research

One of the most significant challenges in medical research on various diseases is tracking a disease through generations to determine whether the disease is caused by environmental factors or by a person’s genetic heritage.  Given a set of standards and regulations (not over-regulation), an integrated healthcare information infrastructure could provide the necessary information to enable much better insight in a much shorter time, at much less expense.

Non-linkage to Technology

There is another very expensive problem caused by the highly fractured and fragmented information infrastructure.  Currently, and increasingly, there are many digital sensors on the medical device market.  These sensors include X-ray, MRI, and CAT scan machines.  There are also digital thermometers, blood pressure sensors, and heart rate monitors (including Fitbit, Garmin, and Apple watches).
Additionally, they include a host of equipment for evaluating eyes, ears, blood, and nervous systems, and hundreds of other evaluation sensors.  Yet, in the current fractured infrastructure, little of the data these sensors produce flows automatically into the customer’s record.

2.    Customer (Patient) Centric healthcare Information Infrastructure Architecture

A goal of a customer-centric healthcare information infrastructure architecture is to enable the customer to have complete access control over his or her health data and information in an ultra secure data network (USDN) while providing a high-level access to non-personal (i.e., aggregated) data for research and development.
I recently posted a high-level architecture for such a USDN; having said that, I will now use the healthcare information infrastructure as an example.

Customer ID and HIPAA

The first component of this USDN is the user (customer) interface.  The key concept embedded in the HIPAA rules and regulations is to limit access to the customer’s data to only those with a need to know.  Currently, the person who gives written permission is the customer.  [Sidebar: I don’t like the term patient because of its implications and connotations.]
With the USDN, the user/customer would give permission to medical personnel by inserting a smart card.  This card would contain biometric data.  The user/customer would then use the sensor on the medical terminal to confirm that they are who they say they are.  For example, put their finger on a fingerprint reader. 
If the biometric information on the card matches the information given by the user/customer, then the medical personnel can get to the portion of the customer’s medical record and history they need to do their job.
If a customer comes into an emergency room unconscious, the medical personnel can get authorization in much the same manner.  This could save the customer’s life.
In a hospital, the access to the customer’s record and history could be systematically extended to all appropriate personnel through the use of authorization or entitlement meta-data.
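A minimal sketch of that access decision, in Python, might look like the following.  The SmartCard fields, the matching function, and the entitlement names are all hypothetical placeholders; a real system would use a proper biometric matching algorithm and the meta-data rules set by the governance board.

```python
# Sketch of the smart-card/biometric authorization described above.
# All names, and the simple string comparison of templates, are placeholders.

from dataclasses import dataclass

@dataclass
class SmartCard:
    customer_id: str
    biometric_template: str     # in reality a fingerprint/iris template, not a string
    entitlements: set           # portions of the record the holder may expose

def biometric_match(template: str, live_sample: str) -> bool:
    # Placeholder: a real system would run a fingerprint-matching algorithm.
    return template == live_sample

def authorize(card: SmartCard, live_sample: str, requested_section: str) -> bool:
    """Grant access only if the live biometric matches the card and the
    requested part of the record is within the card's entitlements."""
    return (biometric_match(card.biometric_template, live_sample)
            and requested_section in card.entitlements)

card = SmartCard("C-001", "sample-print", {"allergies", "medications"})
print(authorize(card, "sample-print", "medications"))   # True
print(authorize(card, "sample-print", "billing"))       # False
```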

The Bridge/Portcullis

A second unique feature of the Ultra-Secure Data Network is the network bridge with portcullis.  The bridge converts the standard Internet communications protocols (that are well known to hackers and which malware uses to attack computers) into a set of network protocols much less well known. 
The portcullis allows only a limited set of formally defined data in formally defined data formats across the bridge.  The portcullis has other functions that are somewhat arcane, but which are important for security. [Sidebar: A high-level description of these is found in my post on the Ultra-Secure Data Network]
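As a rough sketch of the portcullis idea (the message types and formats below are invented examples, not part of any published USDN specification), the filter passes only messages whose type is on the allow-list and whose fields exactly match the registered format:

```python
# Sketch of the portcullis: only formally defined data in formally defined
# formats crosses the bridge; everything else is dropped.

ALLOWED_FORMATS = {
    # message type -> required field names
    "lab_result":  {"customer_id", "test_code", "value", "units"},
    "vital_signs": {"customer_id", "timestamp", "pulse", "blood_pressure"},
}

def portcullis(message: dict):
    required = ALLOWED_FORMATS.get(message.get("type"))
    if required is None:
        return None                                  # unknown type: drop it
    if set(message.get("payload", {})) != required:
        return None                                  # malformed payload: drop it
    return message                                   # well formed: let it cross

ok = portcullis({"type": "lab_result",
                 "payload": {"customer_id": "C-001", "test_code": "A1C",
                             "value": 5.4, "units": "%"}})
bad = portcullis({"type": "email", "payload": {"body": "free offer!"}})
print(ok is not None, bad is None)   # True True
```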

Centrally Organized Geographically Distributed Datastore

The data storage software will be what is currently called “in the cloud”.  That means that the data is stored in data centers that are protected by the USDN, including the bridge/portcullis.  Additionally, the operating system and application software will be customized to accept only data that is in the authorized formats.

Datastore for Research

The Medical Information Infrastructure is a cornucopia of data for medical research.  Currently, this data is, in the main, inaccessible to medical researchers due to all of the rules and regulations for the current fractured system and to the nature of gathering data from a fractured system.
With the functions and features of the USDN, data from the Medical Information Infrastructure could be made available to researchers without destroying the privacy of the customers of the system.  In turn, this would reduce the cost of medical research significantly.

3.    Building the USDN

Because of the size, scope, and complexity of the Medical Information Infrastructure, it will take at least 10 years for the initial build out.  I will describe the major tasks of the build out as I currently envision them.

Setting up the Governance of the Infrastructure

The key to the USDN is setting up the processes for defining data, meta-data, governance, and infrastructure management.  Without these, the infrastructure will either be hacked immediately or be completely ineffective.

·         Governance

As I see it, the governance of the infrastructure will make or break both the security and utility of the infrastructure.  This governance sets the data, meta-data, and infrastructure management policies, standards, and rules for the infrastructure. 
That is why it needs to be the first task; though it will run concurrently with other tasks.
As I envision it, the board of governors (guiding group, whatever the name) should be made up of medical professionals, medical researchers, insurance personnel, and IT professionals from business and academia.  Getting that eclectic group moving along the same path will be difficult.  Therefore, I think an Enterprise Architect with a Systems Engineering group will be needed.

·         Data

There will need to be a process set up for defining the data that should be in the USDN, the formats for communicating that data, and the procedures for creating, updating, and deleting data formats and structures.  Once set up, this will be one of the key governance processes: changing data types and the actual data formats as technology changes.
One key to the USDN is that no unformatted data, like e-mails and e-mails with attachments, will be allowed.  This cuts off one of the prime access points for hackers.

·         Meta-Data

For security purposes, the definition and delimitation of the infrastructure’s meta-data is of seminal importance.  It is what enables authentication, authorization, and access to the data of the Medical Information Infrastructure.  The governance body together with the Enterprise Architect and the Systems Engineering group will need to make decisions as to what is and what is not allowed, always with an eye to securing the data.
For example, the terminals connected to the Internet in turn connect to the USDN through the Bridge/Portcullis.  Each of these terminals will have a specific and static Media Access Control (MAC) address.  I would recommend that these terminals also have a static Internet Protocol (IP) address and other static information tied to that particular terminal.  This would make it difficult to spoof the bridge/portcullis into allowing a hacker through the gateway.
Additionally, there are many other ways this static (or infrastructure management only) meta-data can be used. But they will be guided by the rules and policies set by the governance board.
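A sketch of the static-terminal check suggested above might look like this; the addresses are invented, and the real rules would come from the governance board’s meta-data policies:

```python
# Sketch: the bridge/portcullis accepts traffic only from terminals whose MAC
# and static IP addresses were registered in advance by infrastructure
# management.  Addresses below are invented examples.

REGISTERED_TERMINALS = {
    # MAC address -> expected static IP address
    "00:1A:2B:3C:4D:5E": "10.20.30.40",
    "00:1A:2B:3C:4D:5F": "10.20.30.41",
}

def terminal_allowed(mac: str, ip: str) -> bool:
    """Reject any terminal whose MAC is unknown or whose IP does not match
    the statically registered address for that MAC."""
    return REGISTERED_TERMINALS.get(mac) == ip

print(terminal_allowed("00:1A:2B:3C:4D:5E", "10.20.30.40"))  # True
print(terminal_allowed("00:1A:2B:3C:4D:5E", "10.99.0.1"))    # False (spoofed IP)
```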

·         Infrastructure Management

The infrastructure management team will manage the hardware, software, and meta-data of the Medical Information Infrastructure as the prototype is rolled out and the operational version is built out.  It will be under the guidance of the governance board.

Developing the Bridge/Portcullis

The design requirements for the bridge/portcullis will come from the standards and policies set by the governance board, as architected by the enterprise architect; the functional and component designs will come from the Systems Engineering team.
There are several technical innovations required for an operational USDN.  I would recommend that a series of prototypes be developed using a cyclic process, like the one I developed and that was patented by the Northrop Grumman Corporation.  That is: design, create the prototype, test, and repeat until all of the requirements are met.  I would expect that this development effort will take a minimum of one year and can be started about three to six months after the governance board begins its work, which will identify the requirements for the equipment.
The first is the Bridge/Portcullis.

·         Designing

Designing the bridge/portcullis can be divided into two functional designs.  The first is the design of the network bridge.  This should not be as much about creating a design as resurrecting a design of a network bridge from the late 1990s.
The second is the design of the portcullis.  This is a wholly new functional and component design, though based on many standards.
When these functional and component designs are complete, then the entire bridge/portcullis will need additional functional integration design work.

·         Prototyping

A series of prototypes will need to be constructed, following the Technology Readiness Level (TRL) scale.  I expect that the bridge can be prototyped at level 7 or 8 right from the start.  However, the network portcullis starts at TRL level 1 or 2.  It will need to go through a number of design, prototype, and test cycles before it will be ready for integration with the bridge.

·         Testing

All prototypes will be evaluated against the metrics as defined by the requirements.  This means that prior to testing the systems engineering team will need to establish metrics for each requirement.

·         Rollout

When all of the cycles of design and functional testing and requirement evaluation of the bridge/portcullis are completed, the unit will be replicated two or more times for use in the pilot testing effort.

Developing the User Terminal(s)

The user terminals for the Medical Information Infrastructure are an interesting combination of advanced technology being used for a single purpose terminal (a throwback to the 1960s).  Some of the functions of this terminal type are discussed in my post on the USDN.  One key function will be enabling automated medical sensors (X-ray, MRI, etc. machines) to report results of tests through the terminal to the Medical Information Infrastructure.

·         Design

Again, the design of this terminal should start as soon as its requirements are identified (3 to 6 months).  In this case, the design is really more a matter of system integration of functions than the development of new technology.

·         Prototyping

Constructing prototypes of the terminal should not be nearly as difficult as the development of the bridge/portcullis.  But again, it is likely that several prototypes will be needed to get the functions and form factor correct for the terminal.

·         Testing

Likewise, testing should be relatively straightforward.  The most extensive set of tests will be those associated with terminal and network security.

·         Rollout

When all of the cycles of design, functional testing, and requirement evaluation of the terminal are completed, the unit will be replicated several times for use in the pilot testing effort.

Datastore

The datastore, the hardware and software for storing the customer’s data, is by far the simplest of the functions/components that will need to be developed.  The reason is that commercial off the shelf hardware and software is available.

·         Design

In this case, the challenge is to write custom code and define the parameters of standard database software and the operating system such that the data is secure from hacks by insiders (the technical staff) while providing access for the various functions of the infrastructure.  For example, giving researchers access to summarized or anonymized data would be one such function.
This development exercise would use a cyclic process, like the one I developed and Northrop Grumman patented.

·         Prototyping and Testing

Even while a software service is being coded it will be evaluated on a daily basis and formally tested at least once a month.  Frequently, the testing will not only reveal bugs and inconsistencies, but also additional governing board requirements and technical risks.

·         Rollout

As opposed to the bridge/portcullis and terminal, I expect that the datastore appliance (the combination of hardware and software) will be rolled out as an Initial Operating Capability (IOC) with a minimum number of data structures and software functions.
Additionally, I expect that on a one to three month cycle, new versions of the software will be integrated into the appliance.

Pilot Testing and Updating

Once the governance board has decided that enough of the bugs, defects, and other “gotchas” have been worked out of the system (likely after about one and a half to two years), the infrastructure will need to be acceptance tested.  For this type of system, pilot testing is the only procedure guaranteed to clearly demonstrate all of the defects in the total system from the various users’/stakeholders’ perspectives.
Ideally, this would be a progressive, slow rollout: first to a single site, then to three sites, then to 10 to 12 sites with at least 3 data centers.  This piloting phase will likely take one to two years.
At the same time there should be a significant increase in the types of data the infrastructure can store and the number of functions the infrastructure can perform for the various stakeholders.

Build out of the Infrastructure

Within six months of the start of pilot testing, the build out of the infrastructure can begin with user/stakeholder education about the infrastructure.  These educational activities will prepare the medical and medical IT professionals to deal with the coming changes in a more orderly manner.
As the pilot testing is coming to an end, hardware and software companies should be invited to start building products to the infrastructure’s specifications.  These would be sent to a test environment for certification as meeting all of the specifications.
These competing products could then be sold to the medical community as the infrastructure is built out across the nation.

4.    Issues

I would forecast that creating the Medical Information Infrastructure will reduce the total medical bill paid by customers and insurance companies (including governments) by 10 to 30 percent while producing much better outcomes.  This is a significant reduction in cost (hundreds of billions of dollars per year).
However, there are powerful groups that will oppose the infrastructure’s creation.  Some of their arguments will be economic and some emotional (in terms of politics or organizational culture).

Implementation Costs

Many in the current medical establishment point out that the implementation of this system will be very expensive.  It will be; there is no doubt of that.  It may run into $100 billion over ten years, but it will likely save more than $10 trillion over that same period.  That seems like a good investment for the country, with the added bonus that it will save lives.
From a hospital or insurance company CFO’s perspective, this will turn into a gigantic expense for which there will be no immediate ROI.  [Sidebar: This drives “short-term cost conscious” CFO-type financial engineers nuts—I know.]  Again, they are correct.  But there will be a significant ROI in the long term.

Politics/Cultural Issues

There are also many political/cultural issues.

·         The Affordable Care Act, Medicare/Medicaid, HIPAA

[Sidebar: The US Constitution specifically states that governmental healthcare, like other social entitlements, is within the purview of the states, not the federal government; an attitude that my ultra-left leaning “socially responsible” friends find distressing.  However, since we’ve already entered this financial black hole that leads to dystopia, I think we need to find a way out of it, or at least to ameliorate many of its worst effects.]
These federal laws mandate a great many enforcement rules and regulations.  These rules and regulations will be unnecessary with the Medical Information Infrastructure.  However, many federal and state bureaucrats will have to find other employment if they are removed.  So, politically they will lobby for most of them to remain.  This will cause a great deal of process friction and greatly reduce the benefits of the infrastructure.

·         Pilot Testing

[Sidebar: “The first instance of a superior principle is always inferior to a mature instance of an inferior principle”.]
Critics of the Medical Information Infrastructure will have a field day with its pilot testing.  Like many complex military systems (three examples being the B-52, the M-1A2, and the CVN 78), and like many versions of software in beta testing, the system will have many hiccups and gotchas when first brought up in a pilot.  Even if the program described above is followed, there will be technical defects.  And like the critics of other complex systems, there will be many who will say the Medical Information Infrastructure will never live up to its hype or its promise.  In the short term they will be correct, but wrong in the long term.  [Sidebar: The Gartner Group calls this the hype cycle; I call it the path by which the future becomes the now.]

·         Medical Culture

There are three cultural issues associated with implementing the Medical Information Infrastructure.  The first is the current medical culture.
The Medical Information Infrastructure is being proposed as part of the conversion of the medical information system from the Age of Print to the Age of the Computer (as I discussed in a previous post). 
As such, people brought up in the age of print have much greater difficulty in dealing with this type of change.  That is true for medical personnel as well as everyone else.  Consequently, they will resist any automated system.
Even in medical facilities that have already automated records management, this change will be difficult.  It's like learning how to use a tablet after only ever using a PC.

·         Insurance Culture

Insurance is the transference of the consequences of a risk (an unknown) from the customer to the insuring organization.  This comes at a price: the cost of insurance.
The cost of insurance is based on the probability that something bad will happen.  That probability is based on statistics of how often the bad thing happens within a population.  That, in turn, is based on data: data that would be stored in the Medical Information Infrastructure instead of, as now, in a huge collection of digital and paper files spread across the country.
This means that there would be much less cost in gathering and maintaining this information, but it also means that the culture of insurance selling and adjusting would change.  And, as with the medical professionals, cultures resist change.
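A minimal sketch of that pricing logic, with invented numbers: the premium is roughly the expected loss (probability times cost) plus a loading for the insurer's overhead of gathering data and adjusting claims.  A shared Medical Information Infrastructure would mainly shrink the overhead term.

    # Toy premium calculation; probabilities, costs, and overhead rates are
    # illustrative only, not actuarial figures.
    def premium(probability, cost_if_event, overhead_rate):
        """Expected loss plus a loading for data gathering and claims adjusting."""
        expected_loss = probability * cost_if_event
        return expected_loss * (1.0 + overhead_rate)

    # Records scattered across paper and digital files (high overhead) versus
    # a shared Medical Information Infrastructure (lower overhead).
    print(premium(0.02, 50_000, overhead_rate=0.35))  # 1350.0
    print(premium(0.02, 50_000, overhead_rate=0.10))  # 1100.0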

·         Customer Acceptance

One of the major challenges will be acceptance by the customer base.  In the pilot phase, customers who are not fully into the computing culture will be very anxious about the processes, about where their records are stored, and about why the change is necessary.
Additionally, customers will be anxious about the changes in law needed to enable this system to function effectively and cost efficiently.  This stress will be hyped in the media and transmitted to Congress.  There is no way to stop it, but education and marketing will be able to ameliorate it to some extent.

Meeting the Issues


Meeting these cultural issues will require significant education, training, and marketing to create even an initial, grudging acceptance.

Monday, July 24, 2017

Knowledge Creation and the Four Great Cultural Transformations of Humanity

[Sidebar:  While starting with a sidebar is unusual, to say the least, I felt it necessary because this post has gotten completely out of hand in terms of length.  But then, when I thought about it, a post attempting to synthesize history and future history should be long.  Also, forgive me for injecting myself into many of the sidebars.]

It's odd, when you think about it, that historians, archeologists, and other social scientists name the ages of human culture after materials and tools: the Early Stone Age, the Late Stone Age, the Copper Age, the Bronze Age, the Iron Age, and so on.

I think there is a better way to understand the ages of humanity. It is through humanity’s creation and dissemination of information and knowledge.  All the current ages of humanity are really based on these four transformations of data, information, and knowledge.

Knowledge Generation before Speech

Prior to the development of speech there were two ways that life created "information and knowledge".  The first started with the start of life itself on earth: the combination of changes in DNA chains and natural selection by the environment.  [Sidebar: This is still the foundation on which all other forms of knowledge creation are built; human DNA and tree DNA share much of the same machinery, differing in a comparatively small portion of their genes.]  One form of this knowledge creation and communication is instinctual behavior.  Additionally, this forms the basis for the concept of Environmental Determinism.

The second way "information and knowledge" was created was through the evolution of "monkey-see-monkey-do"; that is, through "open instincts".  An open instinct is one that allows a life-form, generally an animal, to observe its surroundings, orient the observations (food, a place to hide, a threat), decide on an action, and act. [Sidebar: Oh shoot, there goes Boyd's OODA loop again.]  No longer does DNA alone decide.  To make the decision the life-form must learn to observe, choosing which input is data and which is noise, and must create a mental model in order to orient the observed data.  Both of these require the ability and the time to learn.  This learn-by-doing forms the basis for the concept of "Possibilism".

The Age of Speech (~350,000 to 80,000 BC)

As noted, the learn-by-doing (monkey-see-monkey-do) process requires both the ability and the time to learn.  According to studies of DNA and archeology, the average man lived about 20 years. [Sidebar: Note that even today a boy becomes a man at age 13 in the Jewish religion.  This means that thousands of years ago, a man would have had about 7 years to procreate before dying.  Today, that 13-year-old kid is not even in high school.]  So there is a period when the young need adult protection in order to learn.  This may be a few months, as in the case of deer, or a couple of years.

The problem with learn-by-doing is that it requires both the ability to learn and the time to learn.  Because DNA evolution continues with each biological experiment, each child, there will be significant variations in the ability to learn.  So, sometimes knowledge would be lost through a child's inability to learn by doing.  Other times, the parent or coach could die unexpectedly early, so that there was insufficient time for knowledge transfer.  Either way, information and knowledge were lost.

At some point between approximately 350,000 BC and 80,000 BC, possibly in several steps, a new hopeful dragon (to use Carl Sagan's term) was born.  This hopeful dragon had some ability to articulate and an open instinct, and most probably created a noun (the name of a thing) and/or a verb (the name of an action).  This gave birth to language.  And language allowed for learning-by-listening, which turned out to be a competitive advantage for the groups and tribes that had it when compared with those that didn't.

Learning-by-listening resolves the problem of losing knowledge gained by previous generations.  As language evolved it enabled humans to communicate increasingly abstract concepts to others.  Initially (for 100,000 years or so) much of this knowledge was communicated as statements of observation and commands; some of it evolved (likely at a much later date) into stories, odes, epic tales, sagas, and myths.  These tales encapsulated the knowledge of prior generations: the tribal or cultural memory.

Toward the end of the period (~80,000 BC) in which speech and language were born, Homo sapiens started migrating from Africa.  Some researchers believe this was due to the competitive advantage of speech and language, that is, better methods of knowledge accretion and communication compared with other animals.

The Age of Speech allowed for the accumulation of data, information, and knowledge.  Much of this was passed along in the form of tales, odes, myths, and so on.  At the same time, practical skills like hunting and gathering were learned more effectively when verbal instructions, and especially critiques, could be given.  Students learned much faster and at a much higher level.  The result was a differential in knowledge among the many, many small family groups and tribes.

After many millennia of inter-tribal wars, and with some inter-tribal trading, enough data, information, and knowledge was created to begin the long trek to civilization.  [Sidebar: During the hunter-gatherer stage of human "civilization" there were no "Noble Savages", just savages.  According to DNA evidence and studies of tribes in New Guinea, the average male was killed at approximately 20 years old.]  During the time from the Paleolithic through the Neolithic ages, knowledge accumulated very slowly.  Archeologists have found that innovations diffused through the human population over hundreds of years.  Many archeologists want to attribute this to trading, but evidence suggests that much of the time violence was involved.

The Age of Writing (~3000 BC)

Speech and language, enabling and supporting learn-by-doing and learn-by-listening, provided the basis for humans' knowledge development for the next 70,000+ years.  It was not until human organizations grew beyond a few hundred individuals, with a geographic territory beyond what a person could walk in a day, that humans needed data, information, and knowledge transfer and communications that went beyond speech.

At about the time the first large kingdoms were formed, the traders of the era apparently found a need to track their trading.  And traders and trading were the main vehicle for communicating data, information, and knowledge during this entire period.  [Sidebar: At least this is what the archeologists have found so far.]  Additionally, the tribal shamans (priests) started to create documents so that their religious beliefs, traditions, knowledge, and tenets would not be lost by their successors.  [Sidebar: These were the scientists of their age.]  Consequently, religious documents, together with trade documents, are among the earliest writing found.

Understand, writing came into existence at about the same time as many large construction projects, like pyramids and ziggurats.  And this was when city-states, the forerunners of the modern state, formed.

For the next 4,400+ years writing continued to be the main medium for documenting and communicating data, information, and knowledge.  During this time many kingdoms and empires rose and fell, including the Roman Empire, and a vast quantity of data, information, and knowledge was created, documented, and lost.  [Sidebar: The worst loss was the destruction of the Library and Museum (University) at Alexandria.]

Finally, with the beginnings of the European Renaissance in the 1100s AD, schools in Italy and Spain, initially created to teach monks to read and write, began to collect and copy works from earlier times (including Greek and Roman).  The copies were exchanged, and libraries began to appear within these schools, which came to be called, and were, universities.  [Sidebar: This age is called the "Renaissance" because it was the time when data, information, and knowledge were first recovered and new knowledge was documented.]

During this same period, and in part using the recovered knowledge base, came the slow innovation of new instruments, including the mechanical clock and new navigational instruments, and new methods for ship construction, all leading to an economic sea change in the European kingdoms.  Further, during this time, apprentice schools (schools of learn-by-doing) appeared in greater numbers and with more formality to their coursework.  These schools taught "manual trades", the start of formal engineering and technology programs.

The Age of Printing (1455 AD)

All during this time, more and more clerics (clerks) were copying more documents.  And though the costs were high, there was a major demand for more copies of books, like the Christian Bible.
In about 1440, a German, Johannes Gutenberg, developed a movable-type printing system that could make hundreds of copies.  In 1455, he printed what is known as the Gutenberg Bible and created the technology infrastructure for a paradigm shift.  He also printed a goodly number of these Bibles.

Another German, Martin Luther, subsequently kick-started this shift by nailing his 95 theses to the church door at Wittenberg in 1517.  Prior to Luther most Europeans could not read.  The Roman Catholic clergy, up to and including the Pope, took advantage of this to create highly imaginative church doctrine that would provide them with a large money stream.  Since they had been infected with the edifice complex, they used this money stream to indulge their favorite activity, building, at Rome and elsewhere.

In his theses, Luther expressed intense unhappiness with this church doctrine.  Instead of the Pope being the final authority on Christianity, he preached that the Christian Bible was the final authority and that all Christians had the right to read it.  So, by the late 1500s, there were many printed books in an increasing number of libraries, with an increasing number of Europeans (and shortly, American colonists) who could read.  [Sidebar: Remember that Harvard College, now Harvard University, was founded in 1636.]  And this was only step one of the Age of Printing.

Step two in the Age of Printing was Rev. John Wesley's creation of "Sunday School".  Many or most of the members of Wesley's sect, "The Methodists", had been tenant farmers, laborers, or cottage industry owners who had lost their jobs or their businesses in the early stages of the industrial revolution (the late 1600s and through the 1700s).

At this time, machines began to be used on farms and in factories, putting these people out of work.  Wesley and the Methodists, by teaching them to read and write on Sunday, their day off from work, enabled them to move into and participate in the profits of the industrial revolution.  Together with other movements toward "schooling", the Age of Printing and economic progress together created the "middle class." [Sidebar: In Colonial New England, early on, in the 1640s, primary schooling became a requirement.  For more information, see my book.]  Glossing over the many upgrades and refinements, knowledge creation and communications were based on printing technology until the 1980s, more or less.

During the Age of Writing, but particularly during the Age of Printing, the methods for communicating data, information, and knowledge began to diverge from trade.  In fact, in the US Constitution the founding fathers treated "the US mail" as a direct government function because they felt that communications for everyone was so important.  On the other hand, they indicated that the government should "regulate" commerce among the states; and there is a great difference between a function of government and regulation by government.

The Age of Computing (~1940 AD)

There are two roots of the Age of Computing.  Both had to do with improving print-based data storage and the communication of data and printed materials.  The first root was data and information communications.  While there were many early attempts at high-speed communication over long distances in Europe over the ages, the first commercially successful telegraph was developed by Samuel Morse in 1837 [Sidebar: together with a standard code, coincidentally called the Morse Code].  By the 1850s this telegraphic system had spread to several continents.

In 1874, Émile Baudot invented a printing telegraph that allowed any typist to type a message on a typewriter-style keyboard, which the machine would then translate into a transmission code (the five-bit Baudot code rather than Morse).  A second machine would then print the message out at the other end.  This meant that typists, rather than trained telegraphers, could send and receive messages.  Additionally, the messages could be encoded and sent much faster.  Three other inventions/innovations, the facsimile machine, the telephone [Sidebar: a throwback to the Age of Speech], and the modem, complete the initial entry into the Age of Computing.

The second root was the evolution of the computer itself.  Early in the industrial revolution, Adam Smith discussed the assembly line process and the fact that tooling can be made to improve the quality and quantity of output in every activity in the process.  Using this process, more or less, the hand tooling of the late 1700s gave way to increasingly complex powered mechanical tooling for manufacturing products in the 1800s and 1900s.

While that helped the manufacturing component of the business, it did not help the "business" component of the business.  While the need for improving the information handling component of a business (reducing its time and cost) was recognized in the 1500s, it wasn't until 1851 that a commercially viable adding machine became available to help with the bookkeeping/accounting of a business.  These machines produced a paper tape (printing) on which the inputs and outputs were reported.

From 1851 to at least 1955 these mechanical wonders were improved, to the point that by the early 1950s some were called "analog computers".  And for a short time there was discussion about whether analog or this new thing called digital computers were better. [Sidebar: Into the 1990s tidal predictions were made by NOAA using analog equipment, since it kept proving to be more accurate.]

The bases for the electronic, digital computer came from several sources, mostly in the United States and in Britain, during the late 1930s and early to mid-1940s.  However, it wasn't until the invention of the transistor in 1947, coupled with the concept of the Turing Machine (Alan Turing, 1936), that the first prototype commercial "electronic computers" were developed.

In 1956 I “played” with my first computer. It consisted of a Hollerith card reader for data input, electronics, a breadboard (a board with a bunch of holes arranged in a matrix) on which a program could be “coded” by connecting the holes with wires (soft wiring), and a 160 character wide printer for the output.  The part I played with was the card sorter.  Rather than sorting the data in the “computer”, it was done by arranging and ordering the Hollerith cards before inserting them into the card reader.  The card sorter enabled the computer’s operator to sort them very much faster than attempting to sort them by hand.

By 1964, computers had internal memory (about 40K bits) and storage: tape drives (from the recording industry) and disks (giant multi-platter removable disk packs) holding up to 2MB of data.  [Sidebar: I learned to code on two of these, IBM's 1401 and 1620.  I coded in machine language, the Symbolic Programming System, and Fortran I and II.]  These computers had rudimentary operating systems (OS), with input and output handled by a card reader and a punch card writer.  And they had teletype machines attached as control keyboards.

Fast forward to 1975; by this time, technology had advanced to the point where teletypewriters were attached as input/output terminals.  These ran at 80 to 120 baud (roughly 8 to 12 characters per second, fast for a human typing but very slow for a computer).  Some old-style, television-like (cathode ray tube, or CRT) terminals were becoming commercially available.  Mostly, these were simply glass versions of teletype printers, allowing the user to type into or read from a green screen 80 characters wide by 24 lines long, at about the same speed as a 120 baud teletype.  But Moore's Law was in high gear with respect to hardware, so that every two years or so computers doubled in speed and capacity.

In about 1980 networking started to develop commercially, though there had been several services over telephone networks earlier. [Sidebar: The earliest global data network that I know of was NASA's network for data communications with the Mercury spacecraft in 1961.]  Initially, this development was in terms of Local Area Networks (LANs), linked through the use of telephone-style copper cabling. [Sidebar: During this time, I set up some LANs at Penn State University and at Haworth, a furniture manufacturing company.]

By 1985 the Internet protocols had evolved.  [Sidebar: Between approximately 1985 and 1993, a significant group of engineers created a set of protocols as international standards; they were called the Open Systems Interconnection, or OSI, protocols.  They were a set of protocols based on a seven-layer model.  This group formed one camp; the other grew out of the amorphous, organically evolving TCP/IP suite of protocols.  That group included academics, hackers, and software and hardware suppliers.  They preferred TCP/IP because it was a free, open technology with few if any real standards (one HP Vice President said TCP/IP was so wonderful because there were so many "standards" to choose from) and because OSI required significantly more computing power, owing to the architectural complexity of its security and other functionality.  Consequently, TCP/IP won, but we are now facing all of the security and functionality issues that would have been resolved by OSI.]  [Sidebar: In 1987, I predicted that the internet would serve as the nervous system of all organizations and was again looked at like I had two heads.]  And technology had evolved to the point that PCs on LANs were replacing CRTs as terminals to mainframe computers.  Additionally, e-mail, word processing, and spreadsheet software were coming into their own, replacing typewriters and mail-carried memos and documents.

In the early 1990s fiber optic cable from Corning Glass Works revolutionized data and information transfer, speeding it up from minutes to microseconds at approximately the same cost. [Sidebar: Since I worked with data networks from 1980 on, and since I led an advanced networking lab for a major defense contractor, I could go into the hoary details for many additional posts, but I will leave it at that.]  As fiber optics replaced copper wires, the speed of transmission went up and the cost went down.  There were two consequences.  First, the number of people connected to the internet drastically increased.  Second, more people became computer literate, at least to the point of using automated devices, especially the children.

By 1995, the Internet was linking home and work PCs with the start of the web (~1993), and by the 1996/1997 timeframe the combination of home computers, e-mail, word processing, and the Internet/web was beginning to disrupt retail commerce and the print information system.  At this point the computer started to affect all data, information, and knowledge systems, and it is disrupting culture worldwide.

User Interfaces and Networking

As I discussed in a previous post and in SOA and User Interface Services: The Challenge of Building a User Interface in Services, The Northrop Grumman Technology Research Journal, Vol. 15, #1, August 2007, pp. 43-60, there are three sets of characteristics of every user interface.  The first is the type of user interface, the second is how rich the interface is, and the third is how smart the interface is.
There are three types of user interfaces: informational, transactional, and authoring.  The first is typical of the "Apps" on your smart phone: getting information.  The second is transaction oriented, meaning interacting with a computer in a repeated manner, as when an operator is adding new records to a database.  The third is authoring.  This doesn't mean writing only; it means creating anything from a document, a presentation, a movie, or a song to an engineering drawing or a new "App"lication.  This differentiation of the user interface only really developed in the late 1990s and early 2000s, as each of these types requires a different form factor for the interface and increasingly complex software supporting it.

A rich user interface is an interface that performs many functions internally, i.e., does a lot for you.  As computer chips have become smaller, less power hungry, and much faster, the interface has become much richer.  This started with the first character-graphics terminals (with 24 by 80 addressable locations) in the early 1970s.  Shortly thereafter, true graphics terminals appeared, costing upwards of $100K.  These graphics terminals required considerable computing power from the computers to which they were directly connected.

In an effort to relieve the host computer of having to support the entire set of user interface functions, Intel and others developed chips for performing those functions.  When some computer geeks looked at the functionality of these chips (the Intel 8008 among them), they decided they could construct small computers from them: the genesis of the PC. [Sidebar: I was one of these.  With two friends, a home-grown electrical engineer and an accountant, I tried to convince a bank to loan us $5000 to start a "home computer" company and failed; most likely because of my lack of marketing acumen.]

A smart user interface is one that takes the information of a rich interface and intercommunicates with mainframe applications ("the cloud", as marketers like to pretend is a new concept) and their databases to bi-directionally update (share) their data.  Rich interfaces have evolved rapidly as network technology has grown from copper wire in the 1950s to fiber optics, Wi-Fi, and satellite communications as competing interconnection technologies at the physical through network layers of the OSI model.  These enabled first the BlackBerry devices and phones, then, in 2007, the iPhone and competing products.  An "App", short for application, is a rich and generally "smart" user interface.  [Sidebar: I put "smart" in quotes because many of these "rich/smart apps" require constant updating, burning data minutes like they are free.  When you allow them to use only Wi-Fi, they complain bitterly.]

The Library

Initially, in the late 1970s, information technology started to disrupt the printed information center, that is, the library.  The library is the repository of the printed documents (encompassing data, information, and knowledge) of the Age of Print.  It uses a card catalog together with an indexing system, like the Dewey Decimal or Library of Congress systems, creating metadata that organizes the documents so that a library's users can find the documents containing the data or information pertaining to their search requirements.

It started with the use of rudimentary databases' (records management systems') ability to control inventory, in the case of a library the inventory of books.  Initially, automation managed the metadata about the library's microfilm and/or microfiche collections.  [Sidebar: Libraries used microfilm and microfiche technologies to reduce the volume and floor space of their collections as well as to enable easier searches of those collections.  Microfilm and microfiche greatly reduced the size of the material.  For example, an 18 by 24 inch newspaper page could be reduced to less than a two inch square (or rectangle).  However, with so many articles in each daily paper, library patrons had difficulty finding articles on particular topics; enter automation.]

Initially, the librarians used the one or two terminals connected to the computer either to enter the metadata about what was on the microfilm or fiche or to pull that data for a library patron.  They would enter the data using a Key Word In Context (KWIC) indexing system, along the lines of the sketch below.
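For readers who have never seen one, here is a minimal sketch of a KWIC index.  The record IDs, titles, and stop-word list are invented for illustration and do not come from any particular library system.

    # Minimal Key Word In Context (KWIC) index: every significant word in a
    # title becomes a lookup key, paired with the title rotated so that the
    # keyword leads its context line.
    STOP_WORDS = {"a", "an", "and", "of", "on", "the", "in"}

    def kwic_index(records):
        """Build {keyword: [(record_id, context), ...]} from record titles."""
        index = {}
        for record_id, title in records.items():
            words = title.split()
            for i, word in enumerate(words):
                key = word.lower().strip(".,")
                if key in STOP_WORDS:
                    continue
                context = " ".join(words[i:] + ["/"] + words[:i])
                index.setdefault(key, []).append((record_id, context))
        return index

    microfiche = {
        "MF-1903-07": "Flood on the Des Moines River",
        "MF-1904-02": "River Commerce and the Railroads",
    }
    for keyword, entries in sorted(kwic_index(microfiche).items()):
        print(keyword, entries)

A patron looking up "river" would be pointed to both records, each shown in the context of its own title.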

Gradually, as computing systems evolved, the quantity and quality of metadata about what was in the libraries increased, and access within the library's computing system increased, generally via a terminal or two sitting next to the card catalog.  However, none of the metadata was available outside the library.

With the advent of the World Wide Web standards and software (both servers and browsers), all of that changed.  [Sidebar: Interestingly, at least to me, the two basic markup languages of the web, HTML and XML, were derivatives of SGML, the Standard Generalized Markup Language.  SGML is a standard developed for the printing and publishing industry to allow electronic texts to be transmitted to any location and printed there.  It's ironic that derivatives of that standard are putting the printing industry out of business.  One of the creators of SGML worked for/with me for a while.]

With the advent of the Internet, browser and server software, and HTML (and somewhat later XML), the next step in the disruption of libraries as repositories of data, information, and knowledge started with search engines.  One of the first commercially successful search engines was Yahoo.  Search engines use web crawler technology to discover metadata about websites and then organize it in a large database.  The most successful search engine to date is Google, key reasons being that it was faster than Yahoo, covered more websites, and ranked results by relevance.  These search engines replaced the card catalogs of libraries before the libraries really understood what they were dealing with.  This has been especially true as a great deal of data and information has migrated to the web in various forms and formats.
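As a sketch of what "crawl, extract metadata, and organize it in a database" means, here is a toy version that crawls an invented three-page in-memory "web" instead of making real HTTP requests.  The URLs and text are made up, and a real engine adds relevance ranking, politeness rules, and vastly more scale.

    # Toy crawler plus inverted index: follow links from a seed page, record
    # which pages contain which words, then answer a keyword lookup.
    TOY_WEB = {  # url: (title, body text, outbound links) -- all invented
        "http://example.org/a": ("Card catalogs", "library card catalog metadata",
                                 ["http://example.org/b"]),
        "http://example.org/b": ("Search engines", "crawler metadata index query",
                                 ["http://example.org/a", "http://example.org/c"]),
        "http://example.org/c": ("Libraries", "library index dewey decimal", []),
    }

    def crawl_and_index(seed):
        """Return {word: set of urls containing that word}."""
        index, to_visit, seen = {}, [seed], set()
        while to_visit:
            url = to_visit.pop()
            if url in seen or url not in TOY_WEB:
                continue
            seen.add(url)
            title, text, links = TOY_WEB[url]
            for word in (title + " " + text).lower().split():
                index.setdefault(word, set()).add(url)
            to_visit.extend(links)
        return index

    index = crawl_and_index("http://example.org/a")
    print(sorted(index.get("metadata", set())))  # pages mentioning "metadata"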

One of the things many library users went to the library for, before the advent of the web, was to use encyclopedias, dictionaries, and other such materials.  Now, Wikipedia and other sites of this type are the encyclopedias, dictionaries, thesauruses, and so on, of the Age of Computing.  Additionally, many people used to read newspapers and magazines at the library.  These, too, are now available on any rich, smart user interface.  [Sidebar: For the definitions see my paper on Services at the User Interface Layer for SOA.  There is a link on this blog.]  The net result is that libraries, as physical facilities, are nearly obsolete.  Now "Big Data" (actually a marketing term for the knowledge management of the 1990s) libraries and pattern analysis algorithms are taking the library's role in data, information, and knowledge development to the next level, as I will discuss shortly.

Imaging: Photos, Videos, Television, Movies, and Pictures

One of the greatest transformations, so far, from the Age of Print to the Age of Computing is in the realm of imaging.  Images, pictures if you will, have been found on cave walls inhabited in the early "stone age", and some written languages are still based on ideographs.  So imaging is one of the oldest forms of communication.

Late in the Age of Writing, in the Italian Renaissance, images became much more realistic with the "discovery" of perspective.  Up to that point images (paintings) had been very two dimensional; now they were three.  Early in the Age of Print, actually starting with Gutenberg, woodcut images were included in printed materials.  From 1800 onward, a series of inventors created photography, capturing images on photo-reactive film.  Lithography allowed these images to be converted into printed images.  Next came moving images, the movies, as well as color photography.

From the mid-1950s, the U.S. Defense Department started looking for methods and techniques to gather near real-time intelligence by flying over areas of interest, in this case areas in the USSR; and the USSR objected.  The first attempt was through the use of aerial photography, which started with a long-winged version of the B-57, then the U-2, and finally the SR-71.  All of these used the then state-of-the-art film-based photography.  But all had pilots, and only the SR-71 was fast enough to evade anti-aircraft missiles.

So a second approach was used: sending up satellites and then parachuting the film back to earth.  There were two major problems with this approach.  First was getting the satellite up in a timely manner; rockets at the time took days to prepare and launch, so getting timely, useful data was difficult.  Second, having the film canister land at the proper location for retrieval was difficult.

Therefore, the US government looked for another solution.  They, and their contractors, came up with digital imaging.  This technology crept into civilian use over the next 20 years.  Meanwhile, the photographic industry, in the main, ignored it, in part because of the relatively poor quality of the early images.  But this improved, in both resolution and the number of colors.  Among other consequences, this led to the collapse of the photographic film business, including the eventual bankruptcy of Kodak.

Another part of the reason the photo film industry ignored digital imaging was the quantity of storage, and the physical size of the storage units, required to store digital images.  But as Moore's Law indicated, the amount of storage went up while the cost dropped drastically, and the size of the hardware needed decreased even more.  With the advent of SD and Micro-SD cards there was no need for film.  And with the advent of image standards like .tif, .gif, and .jpg, digital images could be shared nearly instantly.

Retail Selling

From before the dawn of history until the late 1800s, trade (buying and selling) was largely a face-to-face business.  In 1893, Sears, Roebuck and Company started selling watches, and then additional products, by catalog, using the railroad to deliver the goods.  Coupled with the Wells Fargo delivery system running across the railroads, the catalog allowed people in small towns to purchase nearly any "ready-made" goods, from dresses to farm implements.  This helped mass production industries and helped to create cities of significant size.  Sears then followed (or led) the way by building retail outlets (stores) in nearly every town of any size.

This model of retailing is still the predominant model, but it is now being challenged by the Sears, Roebuck catalog model reborn in an electronic, internet-based form of retailing.  Examples include Amazon, eBay, and Google.  Amazon rebooted the no-bricks-and-mortar catalog model with an internet version and is successfully disrupting the retail industry.  Likewise, eBay recast the earliest market model, trading in the local market, in a global version.  Early in the existence of the internet various groups developed search engines; currently, Google is the primary search engine.  But it also supports a concierge-style service, which the Agility Forum, a future-of-manufacturing consortium, said would be a requirement for the next step in manufacturing and retailing, that is, mass customization.

Additive Manufacturing

Early in my studies in economics, the professors tied the economic progress of the industrial age to mass production and to economies of scale.  However, in the Age of Computing mass production is giving way to mass customization.

Initially, in the 1970s, robotic arms were implemented on mass production lines to reduce the costs of labor. [Sidebar: Especially in the automotive industry.  At the time US automakers found it infeasible to fire inept or unreliable employees due to union contracts.  Additionally, the labor costs due to those contracts priced US automobiles out of competition with foreign automakers.  To reduce their labor costs the automakers tried to replace labor with robots and numerically controlled machines.  They had mixed success, due to both technical and political issues.  This is not unlike the conversion of the railroads from steam to diesel and the "featherbedding" that forced many railroads into contraction or bankruptcy.]  By the 1990s automation, and in particular agile automation (automation that leads to mass customization), was becoming the business-cultural norm in manufacturing and fabrication industries.  Automation is replacing employees in increasingly complex activities.  It will continue to do so and will continue to enable increasing mass customization of products.

For thousands of years, components for everything from flint arrowheads to automobile engine blocks to sculptures were created by subtracting material from the raw material.  This subtracted material is waste.  A person created a flint arrowhead by removing shards from a flint rock.
Automobile engine blocks are created by metal casting, then milling the casting to smooth the surfaces for the moving engine components.

Stone and wood sculptures use the same material removal procedures as creating an arrowhead.  These too create waste.  Some cast sculptures may not be milled or polished, but these are the exceptions and the mold for the casting is still waste material.

More recently, a process similar to casting, called injection molding, does create products with relatively little waste.  But most component manufacturing processes create considerable waste.

However, with the rise of inkjet printing technology, people began to experiment with laying down successive layers of material and found they could create objects.  This technology is called 3D printing or additive manufacturing.  It will have a much greater impact on manufacturing and mass customization.

A simple example is car parts for older model vehicles.  A car enthusiast orders a replacement part for the carburetor in his 1960s vintage muscle car.  The after-market parts company can create the part using additive technology rather than warehousing hundreds of thousands of parts just in case.  The enthusiast gets a part that is as good as, or perhaps better than, the original; the after-market parts company doesn't need to spend money on warehousing; and the manufacturing process doesn't produce waste (or at least only a nominal amount).

Researchers are now looking at using this technology to create bones to replace bones shattered in accidents, war, and so on, and at nano-scale versions of it to create a wide variety of products.  [Sidebar: Actually, one of the first "demonstrations" of the concept was on the TV show Star Trek, where the crew went to a device that would synthesize any food or drink they wanted.]

In the future this technology will disrupt all manufacturing processes while creating whole new industries, because it can create products that meet the customer's individual requirements better, cost less, and are produced in less time.  For example, imagine a future where this technology can create a new heart identical to the heart that needs replacement, except fully functional; researchers are looking into technology that could, one day, do that.

Automotive

The automotive industry is already starting to feel the effects of the Age of Computing.  The automotive industry has been based on cost efficiency since Henry Ford introduced the assembly line.  The industry was among the first to embrace robots on the assembly line.  But, there is much more.

The cell phone is becoming the driver's interactive road map.  This road map tells the driver which of several routes is the quickest, based on current traffic and backups as well as speed limits and distance; a sketch of the underlying idea follows.
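Under the hood this is a shortest-path computation over a road graph whose edge weights are current travel times.  Here is a minimal sketch using Dijkstra's algorithm; the intersections and minute values are invented for illustration and stand in for live traffic data.

    # Pick the quickest route by running Dijkstra's algorithm over a road
    # graph weighted with current travel times (minutes).  All values invented.
    import heapq

    ROADS = {  # node: [(neighbor, minutes with current traffic), ...]
        "home":     [("main_st", 4), ("bypass", 7)],
        "main_st":  [("downtown", 12)],   # congested
        "bypass":   [("downtown", 5)],
        "downtown": [("office", 3)],
        "office":   [],
    }

    def quickest_time(start, goal):
        """Return the minimum driving time from start to goal, or None."""
        queue, seen = [(0, start)], set()
        while queue:
            minutes, node = heapq.heappop(queue)
            if node == goal:
                return minutes
            if node in seen:
                continue
            seen.add(node)
            for neighbor, cost in ROADS.get(node, []):
                if neighbor not in seen:
                    heapq.heappush(queue, (minutes + cost, neighbor))
        return None

    print(quickest_time("home", "office"))  # 15 minutes, via the bypass

When the traffic data changes, the edge weights change, and the recommended route changes with them.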

Since the 1970s automobiles have had engine sensors and “a computer” to help with fuel efficiency and identifying engine malfunctions.  These have become increasingly sophisticated.
Right now the automotive industry is driving toward self-driving cars.  There are some on the roads, and many cars have sensors (and "alerts") that "assist" drivers in one or more ways.

In the Near Future

And there are many industries like the automotive industry which are feeling the effects of The Age of the Computer.  That is, there are many more systems which the technology and processes of the Age of Computing are disrupting.

While processes are in transformation today, it’s nothing compared with what will happen in the immediate and not very distant future.

Education

Shortly, in the Age of Computing, information technology will disrupt schools.  People learn in two ways: by doing (showing, or "hacking") and by listening.  And everyone learns using a different combination of these two methods.

Technology can and will be used to “teach” in all of these combinations.  Therefore, “the classroom” is doomed.

Some students learn by doing, a method that "academics" pooh-pooh: only "stupid" children take shop, apprenticeships don't count, and you must have a "degree" to get ahead.

However, children do learn by doing, and enjoy it.  Why do you think that so many boys, in particular, choose to play video games? 

Why is it that pilots of the United States Navy have to go through 100 hours or more of computer simulation before trying a carrier landing?  Because they learn by doing.

In the near future most jobs will require learning by doing.  Learning by doing includes simulations, videos, problem solving, and labs.  Automation has affected, and increasingly will affect, all of these, giving learn-by-doers opportunities the current mass production education system doesn't.
The other method of learning is learning by listening.  Learning by listening includes reading and audio (both live lectures and recordings of lectures).  Over the past two hundred years, these have been the preferred methods of "teaching" in mass production public schools.

In the main, it has worked well enough for a significant percentage of the students, but numbers of students have fallen out of the system.  Part of the problem is that some teachers can hold the interest of some students better than others, other teachers may hold the interest of an entirely different group of students, and some may just drone on.

Now, using the technology of the Age of Computing, students will be able to listen to lectures from the teachers they are best suited to learn from.  This means that the best teachers are able to teach hundreds of thousands of students across the globe, not just the 30 to 50 reachable with the tools of the Age of Print.

It also means that students can learn in ways that align more closely with their interests. [Sidebar: I saw a personal example of this when I was working on my Ph.D. at the University of Iowa.  The Chair of the Geography Department, Dr. Clyde Kohn, was also a wine connoisseur.  He decided to offer a course, called "The World of Wines", to a group of 10 to 15 students.  He would teach them about the climates and geomorphology (soils, etc.) that create the various varieties of wine.  He would also teach them about wine making and distribution worldwide, so there was physical and economic geography involved.  In the first 5 minutes of enrollment the class was filled and students were clamoring to get into it.  He opened it up.  By the time all students had enrolled there were 450 students in the geography class, and they probably learned more geographic information than they ever had before.  It also gave the state legislature apoplexy.]  As the technology becomes more refined, students will be able to learn whatever they need to learn without ever going near a classroom.  I suspect that home (computer) schooling will become the norm.  Even "class discussion" can be carried on using tools like Skype or GoToMeeting.  Sports will be team-based rather than school-based.

I will define a prescriptive architecture for education in another post.  It turns the educational system on its head.  [Sidebar: Therefore, it will be ignored by the academic elite.]

Medicine

Medicine, too, is starting to undergo, and will continue to undergo, a complete disruption of the way it is performed (not practiced).

Currently, most medical performance is at the rational "Ouija-boarding" stage and uses mass production methods, not mass customization.  But all people are biological experiments and are, consequently, individuals.  Yet every malfunction is treated the same way.

To get the best result for the individual, each type of drug and dosage of that drug should be customized for the individual from the start—not by trial and error.

In the near future, people will be diagnosed using their complete history, DNA analysis, body scans, and other diagnostic measurements (both current and yet to be discovered).  Then, using additive nano-technology, an exact prescription will be created.  The medicine may be introduced into the individual as a single pill, mixed with a liquid, through a shot, or by some other method.

Much of this analysis will be done by a computer.  Already, in the 1970s, a program simulated a patient so that medical students could attempt to diagnose the "patient's" problem.  In order for this program to serve its intended function, the MDs and Computer Assisted Instruction mavens were continually refining the data used by the program.  If this continued, and I suspect it did, the database from this single program could have been used by an analysis program to produce diagnoses comparable with those of expert diagnosticians.
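To make the idea concrete, here is a deliberately naive sketch of how such an analysis program might score candidate diagnoses against a patient's observed findings.  The conditions, findings, and weights are invented for illustration only; they are neither medical advice nor any real system's knowledge base.

    # Toy diagnostic scorer: sum evidence weights for each candidate condition
    # over the findings actually observed, then rank.  All numbers invented.
    KNOWLEDGE_BASE = {
        "influenza":    {"fever": 0.8, "cough": 0.7, "fatigue": 0.6},
        "strep_throat": {"fever": 0.6, "sore_throat": 0.9},
        "common_cold":  {"cough": 0.6, "sore_throat": 0.4, "fatigue": 0.3},
    }

    def rank_diagnoses(findings):
        """Return candidate conditions ranked by a simple evidence score."""
        scores = {
            condition: sum(weights.get(f, 0.0) for f in findings)
            for condition, weights in KNOWLEDGE_BASE.items()
        }
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    print(rank_diagnoses({"fever", "cough"}))  # influenza scores highest

A real system would learn those weights from the accumulated patient database rather than hand-coding them.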

This type of program could be, and likely will be, used by every hospital in the country, saving time and a great deal of money in identifying problems.  The key reason that it is not used today is that it has poor "bedside" manners; but then, so do many of the best diagnosticians.

Also, in many situations, this will take “The Doctor” out of the loop.

For example, instead, the patient walks into "the office", which may be in front of the home computer.  The analysis "App" asks the patient questions and gets the patient's permission to access his or her medical record.  If the patient is at home and the "Analysis App" needs more information, the app may ask the patient to go to the nearest analysis point of service (APOS) for further tests.
At the APOS the patient would lie on a diagnostic table, not unlike those mocked up in Star Trek.  This table would have all the sensors needed to take the necessary measurements; in fact, there will be a mobile version of this table in the back of a portable APOS vehicle.

Once the analysis is complete, the APOS will use additive manufacturing to incorporate all of the medicines needed in a form usable by the patient.

For physical trauma, or where there is irreparable damage to a bone or organ, additive manufacturing will create the necessary bone or organ, and a robotic system will then transplant it into the patient's body.

The heart of this revolution in medical technology is the Integrated Medical Information System, based on the architecture I presented in the post entitled "An Architecture for Creating an Ultra-secure Network and Datastore."  Without such an ultra-secure system for each individual's medical records, the externalities are too grave to consider.

However, even with an Integrated Medical Information System there will be substantial side effects for all stakeholders: doctors, nurses, technicians, and patients.  There would no longer need to be medical professions, except within medical research organizations.

Because the recurring costs of an APOS are low when compared with the current doctor’s office/hospital facility, all people should be able to pay for their own medical costs.  So there will be little or no need for insurance.

Additionally, because medicines are manufactured on a custom basis as needed by the patient, there will be no need for pharmacies or systems for the production and distribution of medicines.
With no medical professionals, no insurance, and no need for the production and distribution of medicine, this whole concept will be fought, in savage conflict, by those groups, as well as by Wall Street and federal, state, and local welfare agencies, all of whom stand to lose their jobs.  However, it will be inevitable, though perhaps greatly slowed by governmental regulation.

Again, I will say a good deal more on this topic in a separate post.

Further into the Future

There are three alternative future cultures possible in the Age of Computing: the Singularity, Multiple Singularities, or the Symbiosis of Humans and Machines.  These may all sound like science fiction or fantasy, but they are based on my 50+ years of watching the Age of Computing and its technology advance.

The Singularity

In a story that someone told me in the 1960s, a man created a complex computer with consciousness.  He created it to answer one question: "Is there a god?"  The computer answered, "Now there is."  One definition of "The Singularity" is that all of the computers and computer-controlled devices, like smart phones, become "cells" in a global artificial consciousness.

Many science fiction writers and futurists have speculated on just such an occurrence and its implications.  John von Neumann first used the term "singularity" in the 1950s as applied to the acceleration of technological change and its end result.

In 1970, futurist Alvin Toffler wrote Future Shock. In this book, Toffler defines the term "future shock" as a psychological state of individuals and entire societies where there is "too much change in too short a period of time".

The Singularity Is Near: When Humans Transcend Biology is a 2005 non-fiction book about artificial intelligence and the future of humanity by Ray Kurzweil.

Many science fiction writers and many movies have speculated about what happens when the Singularity arrives.  For the most part these stories take the form of man/machine wars or conflicts.  In the first Star Trek movie, the crew of the Enterprise had to battle a world-consuming machine consciousness.  In the Terminator series of movies it's man versus machine, and then man and machine versus a machine.  And in The Matrix, it's about man attempting to liberate himself from being a slave of the machine consciousness. [Sidebar: In the mid-1970s I had a very interesting discussion with Dr. John Crossett about the concept that formed the plot for The Matrix.]

There are literally hundreds of other books and short stories about dealings and conflicts with the singularity.  While this is all science fiction, science fiction has often pointed the way to science and technology fact.

Multiple Singularities

A second scenario is that, because of the advances in artificial intelligence, there are multiple singularities.  Again, science fiction has dealt with this scenario.  Isaac Asimov was one author who dealt with multiple singularities and their results, in his I, Robot series of stories, in which more than one robot achieved consciousness.  In these scenarios, humanity plays a subordinate role to the "artificial intelligences", and the singularities interact with each other in both very human and very un-human ways.

Symbiosis of Humans and Machines

The best set of scenarios, from the perspective of humanity, is the symbiotic scenarios.  All multi-cellular life above a very rudimentary level is composed of a symbiosis of cells and bacteria.  So it is reasonable that there could be a symbiosis of humans and machines.

For example, nano-bots could be inserted that would deliver toxins to cancerous cells to kill those cells directly, to inhibit their transmission of the cancer-causing agent to other cells, or to link with the brain with orders to repair any damaged cells.  These nano-bots would be excreted when their work is complete.

Taking this a step further, these nano-bots could allow the human brain direct access to the information on the Internet or "in the cloud" (as marketers like to say). [Sidebar: "Cloud Computing" has been with us ever since the first computer terminals used a proprietary network to link themselves to a mainframe computer.  Yes, the technology has been updated, but it's still remote computing and storage.]  This would mean that all you would have to do is think in order to watch a movie or gain some knowledge about the world around you.  The very dark downside of this is that terrorists, politicians, news commentators, or other gangsters and thugs could control your thinking, i.e., direct mind control.  And the artificial consciousness itself could take over and use humans to its benefit.  [Sidebar: Remember, a thief is nothing more than a retail politician, retail socialist, or retail communist.  Real politicians, socialists, and communists steal at the wholesale level.]  This mind control is the ultimate greedy way to steal: anyone whose mind is controlled is, by definition, a slave of the mind controller.

“Space the Final Frontier”

I see only one way out of the mind-control conundrum: traveling into and through space.  Once humans leave the benign environment of the earth, the symbiosis of humans and machines (computers and other automation) becomes imperative for both humans and their automated brethren.  Allies are not made in peace, only when there are risks or threats.


Even the best astrophysicists readily admit that we don't understand our universe and that, as humans, we may never be able to understand it.  There is simply too much to fathom.  However, in symbiosis with artificial consciousness, we may be able to take a stab at it.