Monday, July 24, 2017

Knowledge Creation and the Four Great Cultural Transformations of Humanity

[Sidebar: While starting with a sidebar is unusual, to say the least, I felt it necessary because this post has gotten completely out of hand in terms of length. But then, when I thought about it, attempting to synthesize history and future history into a single post should be long. Also, forgive me for injecting myself into many of the sidebars.]

It's weird when you think about it: historians, archeologists, and other social scientists name the ages of human culture the early stone age, the late stone age, the copper age, the bronze age, the iron age, and so on.

I think there is a better way to understand the ages of humanity: through humanity's creation and dissemination of information and knowledge. All the current ages of humanity are really based on four transformations of data, information, and knowledge.

Knowledge Generation before Speech

Prior to the development of speech there were two ways that life created "information and knowledge". The first started with the start of life itself on earth. It was the combination of changes in the DNA chains and natural selection by the environment. [Sidebar: This is still the foundation on which all other forms of knowledge creation are built; that is, except for a relatively small number of differences, human DNA and tree DNA are the same.] One form of this knowledge creation and communication is instinctual behavior. Additionally, this forms the basis for the concept of Environmental Determinism.

The second way "information and knowledge" was created was through the evolution of "monkey-see-monkey-do"; that is, through "open instincts". An open instinct is one that allows the life-form, generally an animal, to observe its surroundings, orient the observations (food, a place to hide, a threat), decide on an action, and act. [Sidebar: Oh shoot, there goes Boyd's OODA loop again.] No longer does DNA alone decide. To make the decision the life-form must learn to observe, choosing which input is data and which is noise, and must create a mental model in order to orient the observed data. Both of these require the ability and the time to learn. This learn-by-doing forms the basis for the concept of "Possibilism".

The Age of Speech (~350,000 to 80,000 BC)

As noted, the learn-by-doing (monkey-see-monkey-do) process requires both the ability and the time to learn. According to studies of DNA and archeology, the average man would live about 20 years. [Sidebar: Note that even today a boy becomes a man at age 13 in the Jewish religion. This means that thousands of years ago, a man would have 7 years to procreate before dying. Today, that 13-year-old kid is not even in high school.] So there is a period when the young need adult protection in order to learn. This may be a few months, as in the case of deer, or a couple of years.

The problem with learn-by-doing is that it requires both the ability to learn and the time to learn. Because DNA evolution continues with each biological experiment—each child—there will be significant variations in the ability to learn. So, sometimes knowledge would be lost through the inability of a child to learn-by-doing. Other times, the parent/coach could die unexpectedly early, so that there was insufficient time for knowledge transfer. Either way, information and knowledge were lost.

At some point between approximately 350,000 BC and 80,000 BC, possibly in several steps, a new hopeful dragon (to use Carl Sagan's term) was born. This hopeful dragon had some ability to articulate and an open instinct that most probably led to creating a noun (the name of a thing) and/or a verb (the name of an action). This gave birth to language. And language allowed for learning-by-listening, which turned out to be a competitive advantage for the groups and tribes that had it when compared with those that didn't.

Learning-by-listening resolves the problem of losing knowledge gained by previous generations. As language evolved it enabled humans to communicate increasingly abstract concepts to others. Initially (for 100,000 years or so) much of this knowledge was communicated by statements of observations and commands; some of it evolved (likely at a much later date) into stories, odes, epic tales, sagas, and myths. These tales encapsulated the knowledge of prior generations: the tribal or cultural memory.

Toward the end of the period (~80,000 BC) when speech and language were born, Homo sapiens started migrating from Africa. Some researchers believe this was due to the competitive advantage of speech and language, that is, better methods of knowledge accretion and communication when compared with other animals.

The Age of Speech allowed for the accumulation of data, information, and knowledge. Much of this was passed along in the form of tales, odes, myths, and so on. At the same time, practical skills like hunting and gathering were learned more effectively when verbal instructions, and especially critiques, could be given. Students learned much faster and at a much higher level. The result was a differential in knowledge among the many, many small family groups and tribes.

After many millennia of inter-tribal wars, and with some inter-tribal trading, enough data, information, and knowledge was created to begin the long trek to civilization. [Sidebar: During the "hunter-gatherer" stage of human "civilization" there were no "Noble Savages", just savages. According to DNA evidence and studies of tribes in New Guinea, the average male was killed when approximately 20 years old.] During the time from the Paleolithic through the Neolithic ages, knowledge accumulated very slowly. Archeologists have found that innovations diffused through the human population over hundreds of years. Many archeologists want to attribute this to trading, but evidence suggests that much of the time violence was involved.

The Age of Writing (~3000 BC)

Speech and language, enabling and supporting learning-by-doing and learning-by-listening, provided the basis for humans' knowledge development for the next 70,000+ years. It was not until human organizations grew beyond a few hundred individuals, with a geographic territory beyond what a person could walk in a day, that humans had a need for data, information, and knowledge transfer/communications that went beyond speech.

At about the time the first large kingdoms were formed, the traders of the era apparently found a need to track their trading. And traders and trading were the main vehicle for communicating data, information, and knowledge during this entire period. [Sidebar: At least this is what the archeologists have found so far.] Additionally, the tribal shamans (priests) started to create documents so that their religious beliefs, traditions, knowledge, and tenets would not be lost by their successors. [Sidebar: These were the scientists of their age.] Consequently, religious documents, together with trade documents, are among the earliest writing found.

Understand, writing came into existence at about the same time as many large construction projects, like pyramids and ziggurats. And this was when city-states, the forerunners of the modern state, formed.

For the next 4400+ years writing continued to be the main medium for data, information, and knowledge documentation and communications. During this time many kingdoms and empires rose and fell, including the Roman Empire, and a vast quantity of data, information, and knowledge was created, documented, and lost. [Sidebar: The worst was the destruction of the Library and Museum (University) at Alexandria.]

Finally, with the beginnings of the European Renaissance in the 1100s AD, schools in Italy and Spain, initially created to teach monks to read and write, began to collect and copy works from earlier times (including Greek and Roman). The copies were exchanged, and libraries began to appear within these schools, which were called, and were, universities. [Sidebar: This age is called the "Renaissance" because it was the time when, initially, data, information, and knowledge were recovered and new knowledge was documented.]

During this same period, and in part using the recovered knowledge-base, came the slow innovation of new instruments, including the mechanical clock and new navigational instruments, and new methods for ship construction, all leading to an economic sea change in the European kingdoms. Further, during this time, apprentice schools (schools of learn-by-doing) appeared in greater numbers and with more formality to their coursework. These schools taught "manual trades", the start of formal engineering and technology programs.

The Age of Printing (1455 AD)

All during this time, more and more clerics (clerks) were copying more documents. And though the costs were high, there was a major demand for more copies of books, like the Christian Bible.
In about 1440, a German, Johannes Gutenberg, developed a printing system that could make hundreds of copies. In 1455, he printed what is known as the Gutenberg Bible and created the technology infrastructure for a paradigm shift. He also printed a goodly number of these bibles.

Another German, Martin Luther, subsequently kick-started this shift by nailing his 95 theses to the church door in 1517. Prior to Luther, most Europeans could not read. The Roman Catholic clergy, up to and including the Pope, took advantage of this to create highly imaginative church doctrine that would provide them with a large money stream. Since they had been infected with the edifice complex, they used this money stream to indulge their favorite activity at Rome and elsewhere.

In his theses, Luther was intensely unhappy with this church doctrine. Instead of the Pope being the final authority on Christianity, he preached that the Christian Bible was the final authority and that all Christians had the right to read it. So, by the late 1500s, there were many printed books in an increasing number of libraries, with an increasing number of Europeans (and shortly, American colonists) who could read. [Sidebar: Remember that Harvard College, now Harvard University, was founded in 1636.] And this was only step one of the Age of Printing.

Step two in the Age of Printing was Rev. John Wesley's creation of "Sunday School". Many or most of the members of Wesley's sect, "The Methodists", had been tenant farmers, laborers, or cottage industry owners who had lost their jobs or their businesses in the early stages of the industrial revolution (the late 1600s and through the 1700s).

At this time, machines began to be used on farms and in factories, putting these people out of work. Wesley and the Methodists, by teaching them to read and write on Sunday, their day off from work, enabled them to move into and participate in the profits of the industrial revolution. Together with other movements toward "schooling", the Age of Printing and economic progress happened, creating the "middle class." [Sidebar: In Colonial New England, early on—in the 1640s—primary schooling became a requirement. For more information, see my book.] Glossing over the many upgrades and refinements, knowledge creation and communications were based on printing technology until the 1980s, more or less.

During the Age of Writing, but particularly during the Age of Printing, the methods for the communication of data, information, and knowledge began to diverge from trade. In fact, in the US Constitution the founding fathers treated "the US mail" as a direct government function because they felt that communications for everyone was so important. On the other hand, they indicated that the government should "regulate" commerce among the states; and there is a great difference between a function of government and regulation by government.

The Age of Computing (~1940 AD)

There are two roots of the Age of Computing. These had to do with improving print-based data storage and the communication of data and printed materials. The first root was data and information communications. While there were many early attempts at high-speed communications over long distances in Europe over the ages, the first commercially successful telegraph was developed by Samuel Morse in 1837 [Sidebar: together with a standard code, coincidentally called the Morse Code.] By the 1850s this telegraphic system had spread to several continents.

By 1874, Émile Baudot had invented the teletype machine, which allowed any typist to type a message on a typewriter keyboard that the machine would then translate into telegraph code. A second teletype machine would then print the message out at the other end. This meant that typists, rather than trained telegraphers, could send and receive messages. Additionally, the messages could be coded and sent much faster. Three other inventions/innovations, the facsimile machine, the telephone [Sidebar: a throwback to the Age of Speech], and the modem, complete the initial introduction to the Age of Computing.

The second root was the evolution of the computer itself. Early in the industrial revolution, Adam Smith discussed the division of labor in the production process and the fact that tooling can be made to improve the quality and quantity of output of every activity in the process. Using this process, more or less, the hand tooling of the late 1700s gave way to increasingly complex powered mechanical tooling for manufacturing products in the 1800s and 1900s.

While that helped the manufacturing component of the business, it did not help the "business" component of the business. While the need for improving the information handling component (reducing its time and cost) of a business was recognized in the 1500s, it wasn't until 1851 that a commercially viable adding machine became available to help with the "bookkeeping/accounting" of a business. These machines produced a paper tape (printing) on which the inputs and outputs were reported.

From 1851 to at least 1955 these mechanical wonders were improved, to the point that in the early 1950s they were called "analog computers". And for a short time there was discussion about whether analog computers or this new thing called digital computers were better. [Sidebar: Into the 1990s tidal predictions were made by NOAA using analog equipment, since it kept proving to be more accurate.]

The basis for the electronic digital computer came from several sources, mostly in the United States and in Britain, during the late 1930s and early to mid-1940s. However, it wasn't until the invention of the transistor in 1948, coupled with the concept of the Turing Machine (which Alan Turing had described in 1936 and developed further through 1950), that the first prototype commercial "electronic computers" were developed.

In 1956 I "played" with my first computer. It consisted of a Hollerith card reader for data input, electronics, a breadboard (a board with a bunch of holes arranged in a matrix) on which a program could be "coded" by connecting the holes with wires (soft wiring), and a 160-character-wide printer for the output. The part I played with was the card sorter. Rather than sorting the data in the "computer", the sorting was done by arranging and ordering the Hollerith cards before inserting them into the card reader. The card sorter enabled the computer's operator to sort them very much faster than attempting to sort them by hand.

By 1964, computers had internal memory, about 40K bits, and storage: tape drives (from the recording industry) and disks (giant multi-platter removable disks) holding up to 2MB of data. [Sidebar: I learned to code on two of these, IBM's 1401 and 1620. I coded in machine language, the Symbolic Programming System, and Fortran I and II.] These computers had rudimentary operating systems (OS), with input and output being a card reader and a punch card writer. And they had teletype machines attached as control keyboards.

Fast forward to 1975; by this time, technology had advanced to the point where teletypewriters were attached as input/output terminals. These ran at 80 to 120 baud (characters per minute, fast for a human typing, but very slow for a computer). Some old-style television-like (cathode ray tube, or CRT) terminals were becoming commercially available. Mostly, these were simply glass versions of teletype printers, allowing the user to type into or read from an 80-character-wide by 24-line-long green screen, at about the same speed as a 120 baud teletype. But Moore's Law was in high gear with respect to hardware, so that every two years computers doubled in speed and capacity.

In about 1980 networking started to develop commercially, though there were several services over telephone networks earlier. [Sidebar: The earliest global data network that I know of was NASA’s network for data communications with the Mercury spacecraft in 1961.]  Initially, this development was in terms of a Local Area Network (LAN), linked through the use of telephone cables. [Sidebar: During this time, I set up some LANs at Penn State University and at Haworth, a furniture manufacturing company.]

By 1985 the Internet protocols had evolved. [Sidebar: Between approximately 1985 and 1993, a significant group of engineers created a set of protocols as international standards; they were called the Open Systems Interconnection, or OSI, protocols and were based on a seven-layer model. This group formed one camp; the other formed around an amorphous, organically evolving group of TCP/IP protocols. This second group included academics, hackers, and software and hardware suppliers. They preferred TCP/IP because it was a free, open technology with few if any real standards (one HP Vice President said of TCP/IP that it was so wonderful because there were so many "standards" to choose from) and because OSI required significantly more computing power due to the architectural complexity of its security and other functionality. Consequently, TCP/IP won, but we are now facing all of the security and functionality issues that would have been resolved by OSI.] [Sidebar: In 1987, I predicted that the internet would serve as the nervous system of all organizations and was again looked at like I had two heads.] And technology had evolved to the point that PCs on LANs were replacing CRTs as terminals to mainframe computers. Additionally, e-mail, word processing, and spreadsheet software were coming into their own, replacing typewriters and mail-carried memos and documents.

In the early 1990s fiber optic cables from the Corning Glass Works revolutionized data and information transfer in that it sped up from minutes to micro-seconds at approximately the same cost. [Sidebar: Since I worked with data networks from 1980 on, and since I led an advanced networking lab for a major defense contractor, I could go into the hoary details for many additional posts, but I will leave it at that.] As fiber optics replaced copper wires, the speed of transmission went up and the cost went down. There were two consequences. First, the number of people connected to the internet drastically increased. Second, more people became computer literate, at least to the point of using automated devices, especially the children.

By 1995, the Internet was linking home and work PCs with the start of the web (~1993), and by the 1996/1997 timeframe the combination of home computers, e-mail, word processing, and the Internet/web was beginning to disrupt retail commerce and the print information system. At this point the computer started to affect all data, information, and knowledge systems, and it is disrupting culture worldwide.

User Interfaces and Networking

As I discussed in a previous post and in SOA and User Interface Services: The Challenge of Building a User Interface in Services, The Northrop Grumman Technology Research Journal (Vol. 15, #1, August 2007), pp. 43-60, there are three sets of characteristics of every user interface. The first is the type of user interface, the second is how rich the interface is, and the third is how smart the interface is.
There are three types of user interfaces: informational, transaction-oriented, and authoring. The first is typical of the "Apps" on your smart phone: getting information. The second is transaction-oriented, which means interacting with a computer in a repeated manner, like when an operator is adding new records to a database. The third is authoring. This doesn't mean writing only; it means creating anything from a document, a presentation, a movie, a song, or an engineering drawing, to a new "App"lication. This differentiation of the user interface only really developed in the late 1990s and early 2000s, as each of these types requires a different form factor for the interface and increasingly complex software supporting it.

A rich user interface is an interface that performs many functions internally, i.e., does a lot for you. As computer chips have become smaller, using less power, and much faster, the interface has become much richer. This started with the first graphics terminals (in which there were 24 by 80 address locations) in the early 1970s. Shortly thereafter, real graphics terminals appeared, costing upwards of $100K. These graphics terminals required considerable computing power from the computers to which they were directly connected.

In an effort to relieve the host computer of having to support the entire set of user interface functions, Intel and others developed chips for performing those functions. When some computer geeks looked at the functionality of these chips (the Intel 8008 chip among them), they decided they could construct small computers from them; the genesis of the PC. [Sidebar: I was one of these. With two friends, a home-grown electrical engineer and an accountant, I tried to convince a bank to loan us $5000 to start a "home computer" company and failed; most likely because of my lack of marketing acumen.]

A smart user interface is one that takes the information of a rich interface and intercommunicates with mainframe applications ("the cloud", as marketers like to pretend is a new concept) and their databases to bi-directionally update (share) their data. Rich interfaces have rapidly evolved as network technology has grown from copper wire in the 1950s to fiber optics, Wi-Fi, and satellite communications as competing interconnection technologies at the physical through network layers of the OSI model. These enabled first the Blackberry devices and phones, then in 2007, the iPhone and competing products. The term "App", from application, denotes a rich and generally "smart" user interface. [Sidebar: I put "smart" in quotes because many of these "rich/smart apps" require constant updating, burning data minutes like they are free. When you allow them to use only Wi-Fi, they complain bitterly.]

The Library

Initially, in the late 1970s, information technology started to disrupt the printed information center, that is, the library. The library is the repository of printed documents (encompassing data, information, and knowledge) of the Age of Print. It uses a card catalog together with an indexing system, like the Dewey Decimal or Library of Congress systems, creating metadata to organize the documents and enable a library's user to find documents containing data or information pertaining to the user's search requirements.

It started from the use of rudimentary databases' (records management systems') ability to control inventory; in the case of a library, the inventory of books. Initially, automation managed the metadata about the library's microfilm and/or microfiche collections. [Sidebar: The libraries used microfilm and microfiche technologies to reduce the volume and floor space of their collections as well as to enable easier searches of those collections. Microfilm and microfiche technologies greatly reduced the size of the material. For example, an 18 by 24 inch newspaper could be reduced to less than a two inch square (or rectangle). However, with so many articles in each daily paper, library patrons had difficulty finding articles on particular topics; enter automation.]

Initially, the librarians used the one or two terminals connected to the computer either to enter the metadata about what was on the microfilm or fiche or to pull that data for a library's customer. They would enter the data using a Key Word In Context (KWIC) indexing system.
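To illustrate the idea, here is a minimal sketch of a KWIC index in Python. The article titles and stop-word list are hypothetical, and a real library system would index far richer metadata, but the core trick is the same: every significant word becomes an alphabetized entry shown in its context.

```python
# A minimal Key Word In Context (KWIC) index: for every significant word in a
# title, record the word together with its surrounding context so a patron can
# scan an alphabetized keyword list and see each keyword in context.
STOP_WORDS = {"a", "an", "and", "in", "of", "on", "the", "for"}

def kwic_index(titles):
    entries = []
    for title in titles:
        words = title.split()
        for i, word in enumerate(words):
            if word.lower() in STOP_WORDS:
                continue
            # Rotate the title so the keyword comes first, keeping the context.
            rotated = " ".join(words[i:] + words[:i])
            entries.append((word.lower(), rotated, title))
    return sorted(entries)  # alphabetized by keyword

# Hypothetical microfilm article titles.
for keyword, context, source in kwic_index(
    ["Flooding on the Iowa River", "River Navigation in the 1850s"]
):
    print(f"{keyword:12} | {context}  [{source}]")
```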

Gradually, as computing systems evolved, the quantity and quality of metadata about what was in the libraries increased, and access within the library's computing system increased, generally with a terminal or two sitting next to the card catalog. However, none of the metadata was available outside the library.

With the advent of the World Wide Web standards and software (both servers and browsers) all of that changed. [Sidebar: Interestingly, at least to me, the two basic markup languages of the web, HTML and XML, were derivatives of SGML, the Standard Generalized Markup Language. SGML is a standard developed by the printing industry to allow it to transmit electronic texts to any location and allow printers at that location to print the document. It's ironic that derivatives of that standard are putting the printing industry out of business. One of the creators of SGML worked for/with me for a while.]

With the advent of the Internet, browser and server software, and HTML (and somewhat later XML), the next step in the disruption of libraries as repositories of data, information, and knowledge started with search engines. The first commercially successful search engine was Yahoo. It used (as do all search engines) web crawler technology to discover metadata about websites and then organize it in a large database. The most successful search engine to date is Google, the key reason being that it was faster than Yahoo and contained metadata about more websites. These search engines replaced the card catalogs of libraries before the libraries really understood what they were dealing with. This has been especially true as a great deal of data and information has migrated to the web in various forms and formats.
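As a rough illustration of what "web crawler technology to discover metadata" means, here is a toy sketch in Python using only the standard library. The seed URL is a placeholder; a real crawler adds politeness rules, deduplication, ranking, and enormous scale.

```python
# A toy crawler: fetch a page, pull out its <title> and links, and store the
# metadata in an in-memory "database" keyed by URL.
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class PageMeta(HTMLParser):
    def __init__(self):
        super().__init__()
        self.title, self.links, self._in_title = "", [], False
    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self._in_title = True
        elif tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)
    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False
    def handle_data(self, data):
        if self._in_title:
            self.title += data

def crawl(seed, limit=10):
    index, queue, seen = {}, [seed], set()
    while queue and len(index) < limit:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "ignore")
        except OSError:
            continue                      # unreachable page: skip it
        meta = PageMeta()
        meta.feed(html)
        index[url] = meta.title           # the "metadata database"
        queue.extend(urljoin(url, h) for h in meta.links)
    return index

# Hypothetical seed URL.
print(crawl("https://example.com"))
```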

One of the things many library users went to the library for, before the advent of the web, was to use encyclopedias, dictionaries, and other such materials. Now, Wikipedia and other sites of this type are the encyclopedias, dictionaries, thesauruses, and so on, of the Age of Computing. Additionally, many people read newspapers and magazines at the library. These too are now available on any rich, smart user interface. [Sidebar: For the definitions see my paper on Services at the User Interface Layer for SOA. There is a link on this blog.] The net result is that libraries, as physical facilities, are nearly obsolete. Now "Big Data" (actually the marketing term for the knowledge management of the 1990s) libraries and pattern analysis algorithms are taking the data, information, and knowledge development of the library to the next level, as I will discuss shortly.

Imaging: Photos, Videos, Television, Movies, and Pictures

One of the greatest transformations, so far, from the Age of Print to the Age of Computing is in the realm of imaging. Images, pictures if you will, have been found on cave walls inhabited in the early "stone age", and some written languages are still based on ideographs. So imaging is one of the oldest forms of communications.

Late in the Age of Writing, in the Italian Renaissance, images became much more realistic with the "discovery" of perspective. Up to that point images (paintings) had been very "two dimensional"; now they were three. Early in the Age of Print, actually starting with Gutenberg, woodcut images were included in printed materials. From 1800 onward, a series of inventors created photography, capturing images on a photo-reactive film. Lithography allowed these images to be converted into printed images. Next, moving images (the movies) came into being, as well as color photography.

From the 1960s, the U.S. Defense Department started looking for methods and techniques to gather near real-time intelligence by flying over an area—in this case areas in the USSR; and the USSR objected. The first attempt was through the use of aerial photography, which started with a long-winged version of the B-57, then the U2, and finally the SR-71. All of these used the then state-of-the-art film-based photography. But all had pilots, and only the SR-71 was fast enough to evade anti-aircraft missiles.

So a second approach was used: sending up satellites and then parachuting the film back to earth. There were two major problems with this approach. First was getting the satellite up in a timely manner; rockets at the time took days to launch, so getting timely, useful data was difficult. Second, having the film canister land at the proper location for retrieval was difficult.

Therefore, the US government looked for another solution. They, and their contractors, came up with digital imaging. This technology crept into civilian use over the next 20 years. Meanwhile, the photographic industry, in the main, ignored it, in part because of the relatively poor quality of the images early on. But this improved, in both resolution and the number of colors. Among other things, this led to the demise of the film businesses of Kodak and Fuji.

Another part of the reason the photo film industry ignored digital imaging was the quantity of storage and the physical size of the storage units required to store digital images. But as Moore's Law indicated, the amount of storage went up while the cost dropped drastically, and the size of the hardware needed decreased even more. With the advent of SD and Micro-SD cards there was no need for film. And with the advent of image standards like .tif, .gif, and .jpg, digital images could be shared nearly instantly.

Retail Selling

From before the dawn of history until 1893, trade (buying and selling) was a face-to-face business. In 1893, Sears, Roebuck, and Company started selling watches, and then additional products, by catalog, using the railroad to deliver the goods. Coupled with the Wells Fargo delivery system across the railroad network, this allowed people in small towns to purchase nearly any "ready-made" goods, from dresses to farm implements. This helped mass production industries and helped to create cities of significant size. Sears then followed (or led) the way by building retail outlets (stores) in every town of even modest size.

This model of retailing is still the predominant model, but it is being challenged by the Sears, Roebuck catalog model in an electronic, internet-based form of retailing. Examples include Amazon, eBay, and Google. Amazon rebooted the no-bricks-and-mortar retail catalog with an internet version. It is successfully disrupting the retail industry. Likewise, eBay recast the earliest market model, trading in the local market, in a global version. Early in the existence of the internet, various groups developed search engines. Currently, Google is the primary search engine, but it is supporting a concierge service which the Agility Forum, The Future Manufacturing Consortium, said would be a requirement for the next step in manufacturing and retailing, that is, mass customization.

Additive Manufacturing

Early in my studies in economics, the professors tied the economic progress of the industrial age to mass production and to economies of scale. However, in the Age of Computing, mass production is giving way to mass customization.

Initially, in the 1970s, robotic arms were implemented on mass production lines to reduce the costs of labor. [Sidebar: Especially in the automotive industry. At the time US automakers found it infeasible to fire inept or unreliable employees due to union contracts. Additionally, the labor costs due to those contracts priced US automobiles out of competition with foreign automakers. To reduce their labor costs the automakers tried to replace labor with robots and numerically controlled machines. They had mixed success, due to both technical and political issues raised. This is not unlike the conversion of the railroads from steam to diesel and the "featherbedding" that forced many railroads into contraction or bankruptcy.] By the 1990s automation, and in particular agile automation (automation that leads to mass customization), was becoming the business-cultural norm in manufacturing and fabrication industries. Automation is replacing employees in increasingly complex activities. It will continue to do so and will continue to enable increasing mass customization of products.

For thousands of years components for everything from flint arrowheads to automobile engine blocks to sculptures were created by subtracting material from the raw material.  This subtracted material is waste.  A person created a flint arrowhead by removing shards from a flint rock. 
Automobile engine blocks are created by metal casting, then milling the casting to smooth the surfaces for the moving engine components.

Stone and wood sculptures use the same material removal procedures as creating an arrowhead.  These too create waste.  Some cast sculptures may not be milled or polished, but these are the exceptions and the mold for the casting is still waste material.

Recently, a process similar to casting called injection molding does create products with relatively little waste.  But most component manufacturing processes create considerable waste.

However, with the rise of ink jet printing technology, people began to experiment with overlaying layers of material and found they could create objects. This technology is called 3D printing or additive manufacturing. It will have a much greater impact on manufacturing and mass customization.

A simple example is car parts for older model vehicles. A car enthusiast orders a replacement part for the carburetor in his 1960s vintage muscle car. The after-market parts company can create the part using additive technology rather than warehousing hundreds of thousands of parts, just in case. The enthusiast gets a part that is as good as, or perhaps better than, the original, the after-market parts company doesn't need to spend money on warehousing, and the manufacturing process doesn't produce waste (or at least only a nominal amount).

Research and development using this technology is now looking at creating bones to replace bones shattered in accidents, war, and so on, and at nano-versions to create a wide variety of products. [Sidebar: Actually, one of the first "demonstrations" of the concept was on the TV show Star Trek, where the crew went to a device that would synthesize any food or drink they wanted.]

In the future this technology will disrupt all manufacturing processes while creating whole new industries because it can create products that meet the customer’s individual requirement better, while costing less, and being produced in less time.  For example, imagine a future where this technology can create a new heart identical to the heart that needs replacement, except fully functional—researchers are looking into the technology that could, one day, do that.

Automotive

The automotive industry is already starting to feel the effects of the Age of Computing.  The automotive industry has been based on cost efficiency since Henry Ford introduced the assembly line.  The industry was among the first to embrace robots on the assembly line.  But, there is much more.

The cell phone is becoming the driver's interactive road map. This road map tells the driver which of several routes is the shortest with respect to driving duration, based on the current traffic and backups as well as speed and distance.
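A minimal sketch of the underlying idea follows, assuming a hypothetical road graph whose edge weights are current travel times in minutes; production navigation systems use far richer data and more elaborate algorithms, but the core is a shortest-path search over travel time rather than distance.

```python
import heapq

# Dijkstra's algorithm over a road graph whose edge weights are estimated
# travel times (minutes) under current traffic, not distances.
def fastest_route(roads, start, goal):
    best = {start: (0, None)}                 # node -> (minutes, previous node)
    frontier = [(0, start)]
    while frontier:
        minutes, node = heapq.heappop(frontier)
        if node == goal:
            break
        for neighbor, travel_time in roads.get(node, []):
            new_time = minutes + travel_time
            if neighbor not in best or new_time < best[neighbor][0]:
                best[neighbor] = (new_time, node)
                heapq.heappush(frontier, (new_time, neighbor))
    if goal not in best:
        return float("inf"), []               # no route found
    # Rebuild the route by walking the "previous node" links backwards.
    route, node = [], goal
    while node is not None:
        route.append(node)
        node = best[node][1]
    return best[goal][0], list(reversed(route))

# Hypothetical intersections; Main St. is congested, the bypass is not.
roads = {
    "home":    [("main_st", 12), ("bypass", 7)],
    "main_st": [("office", 20)],
    "bypass":  [("office", 11)],
    "office":  [],
}
print(fastest_route(roads, "home", "office"))   # (18, ['home', 'bypass', 'office'])
```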

Since the 1970s automobiles have had engine sensors and “a computer” to help with fuel efficiency and identifying engine malfunctions.  These have become increasingly sophisticated.
Right now the automotive industry is driving toward self-driving cars. There are some on the roads, and many more that have sensors (and "alerts") that "assist" drivers in one or more ways.

In the Near Future

And there are many industries, like the automotive industry, that are feeling the effects of the Age of Computing. That is, there are many more systems that the technology and processes of the Age of Computing are disrupting.

While processes are in transformation today, it’s nothing compared with what will happen in the immediate and not very distant future.

Education

Shortly, in the Age of Computing, information technology will disrupt schools.  People learn in two ways, by doing (showing, or “hacking”) and by listening.  And everyone learns using differing combinations of these two methods.

Technology can and will be used to “teach” in all of these combinations.  Therefore, “the classroom” is doomed.

Some students learn by doing, a method that "academics" pooh-pooh; only "stupid" children take shop, apprenticeships don't count, and you must have a "degree" to get ahead.

However, children do learn by doing, and enjoy it.  Why do you think that so many boys, in particular, choose to play video games? 

Why is it that pilots of the United States Navy have to go through 100 hours or more of computer simulations before trying a carrier landing? Why, because they learn by doing.

In the near future most jobs will require learning by doing. Learn-by-doing includes simulations, videos, solving problems, and labs. Automation has impacted, and increasingly will impact, all of these, giving the learn-by-doers the opportunity the current mass production education system doesn't.
The other method for learning is "learn by listening". Learn-by-listening includes reading and audio (audio includes both lectures and recordings of lectures). Over the past two hundred years, these have been the preferred methods of "teaching" in mass production public schools.

In the main, it has worked "good enough" for a significant percentage of the students, but numbers of students have fallen out of the system. Part of the problem is that some teachers can hold the interest of some students better than others, other teachers may hold the interest of an entirely different group of students, and some may just drone on.

Now, using the technology of the Age of Computing, students will be able to listen to lectures from the teachers they are best suited to learn from. This means that the best teachers are able to teach hundreds of thousands of students across the globe, not just the 30 to 50 possible using the tools of the Age of Print.

It also means that students can learn in ways that better align with their interests. [Sidebar: I saw a personal example of this when I was working on my Ph.D. at the University of Iowa. The Chair of the Geography Department, Dr. Clyde Kohn, was also a wine connoisseur. He decided to offer a course called "The World of Wines" to a group of 10 to 15 students. He would teach them about the climates and geomorphology (soils, etc.) that create the various varieties of wine. He would also teach them about wine making and distribution worldwide, so there was physical and economic geography involved. In the first 5 minutes of enrollment the class was filled and students were clamoring to get into it. He opened it up. By the time all students had enrolled there were 450 students in the geography class, and they probably learned more geographic information than they ever had before. It also gave the state legislature apoplexy.] As the technology becomes more refined, students will be able to learn whatever they need to learn without ever going near a classroom. I suspect that home (computer) schooling will become the norm. Even "class discussion" can be carried on using tools like Skype, GoToMeeting, and the like. Sports will be team-based rather than school-based.

I will define a prescriptive architecture for education in another post.  It turns the educational system on its head.  [Sidebar: Therefore, it will be ignored by the academic elite.]

Medicine

Medicine, too, is starting, and will continue, to undergo a complete disruption of the way it is performed (not practiced).

Currently, most medical performance is in the rational "Ouija-boarding" stage and uses mass production methods, not mass customization. But all people are biological experiments and are, consequently, individuals. Yet every malfunction is treated the same way.

To get the best result for the individual, each type of drug and dosage of that drug should be customized for the individual from the start—not by trial and error.

In the near future, people will be diagnosed using their complete history, analysis of their DNA, body scanning, and other diagnostic measurements (both current and yet to be discovered). Then, using additive nano-technology, an exact prescription will be created. The medicine may be introduced into the individual as a single pill, mixed with a liquid, through a shot, or by some other method.

Much of this analysis will be done by a computer. Already, in the 1970s, a program simulated a patient so that medical students could attempt to diagnose the "patient's" problem. In order for this program to serve its intended function, the MDs and Computer Assisted Instruction mavens were continually refining the data used by the program. If this continued, and I suspect it did, the database from this single program could have been used by an analysis program to produce a diagnosis comparable with that of expert diagnosticians.

This type of program could be, and likely will be, used by every hospital in the country, saving time and a great deal of money in identifying problems. The key reason that it is not used today is that it has poor "bedside" manners—but so too do many of the best diagnosticians.

Also, in many situations, this will take “The Doctor” out of the loop.

For example, instead, the patient walks into "the office", which may be in front of the home computer. The analysis "App" asks the patient questions and gets the patient's permission to access his or her medical record. If the patient is at home and the "Analysis App" needs more information, the app may ask the patient to go to the nearest analysis point of service (APOS) for further tests.
At the APOS the patient would lie on a diagnostic table, not unlike those mocked up in Star Trek. This table would have all the sensors needed to take the necessary measurements; in fact, there will be a mobile version of this table in the back of a portable APOS vehicle.

Once the analysis is complete, the APOS will use additive manufacturing to incorporate all of the medicines needed in a form usable by the patient.

For physical trauma, or where there is irreparable damage to a bone or organ, additive manufacturing will create the necessary bone or organ and a robotic system will then transplant it into the patient's body.

The heart of this revolution in medical technology is an Integrated Medical Information System based on the architecture I've presented in the post entitled "An Architecture for Creating an Ultra-secure Network and Datastore." Without such an ultra-secure system for the medical records of each individual, the externalities are too grave to consider.

However, even with an Integrated Medical Information System there will be substantial side effects for all stakeholders: doctors, nurses, technicians, and patients. There need no longer be any medical professions, except within medical research organizations.

Because the recurring costs of an APOS are low when compared with the current doctor’s office/hospital facility, all people should be able to pay for their own medical costs.  So there will be little or no need for insurance.

Additionally, because medicines are manufactured on a custom basis as needed by the patient, there will be no need for pharmacies or systems for the production and distribution of medicines.
With no medical professionals, no insurance, and no need for the production and distribution of medicine, this whole concept will be fought, in savage conflict, by those groups, as well as by Wall Street and federal, state, and local welfare agencies, all of whom will lose their jobs. However, it will be inevitable, though perhaps greatly slowed by governmental regulation.

Again, I will say a good deal more on this topic in a separate post.

Further into the Future

There are three alternative future cultures possible in the Age of Computers, the Singularity, Multiple Singularities, or the Symbiosis of Humans and Machines.  These may all sound like science fiction or fantasy, but they are based on my 50+ years of watching the Age of Computers and technology advance.

The Singularity

In a story that someone told me in the 1960s, a man created a complex computer with consciousness. He created it to answer one question, "Is there a god?" The computer answered, "Now there is." A definition of "The Singularity" is that all of the computers and computer-controlled devices, like smart phones, become "cells" in a global artificial consciousness.

Many science fiction writers and futurists have speculated on just such an occurrence and its implications. John von Neumann first used the term "singularity" in the early 1950s as applied to the acceleration of technological change and its end result.

In 1970, futurist Alvin Toffler wrote Future Shock. In this book, Toffler defines the term "future shock" as a psychological state of individuals and entire societies where there is "too much change in too short a period of time".

The Singularity Is Near: When Humans Transcend Biology is a 2005 non-fiction book about artificial intelligence and the future of humanity by Ray Kurzweil.

Many science fiction writers and many movies have speculated about what happens when the Singularity arrives. For the most part these stories take the form of man/machine wars or conflicts. In the first Star Trek movie, the crew of the Enterprise had to battle a world-consuming machine consciousness. In the Terminator series of movies it's man versus machine, and man and machine versus a machine. And in The Matrix, it's about man attempting to liberate himself from being a slave of the machine consciousness. [Sidebar: In the mid-1970s I had a very interesting discussion with Dr. John Crossett about the concept that formed the plot for The Matrix.]

There are literally hundreds of other books and short stories about dealings and conflicts with the singularity.  While this is all science fiction, science fiction has often pointed the way to science and technology fact.

Multiple Singularities

A second scenario is that, because of the advances in artificial intelligence, there are multiple singularities. Again, science fiction has dealt with this scenario. Isaac Asimov was one who dealt with multiple singularities and their results, in his I, Robot series of stories. In these scenarios, more than one robot achieves consciousness, and humanity plays a subordinate role to the "artificial intelligence". These singularities interact with each other in both very human and very un-human ways.

Symbiosis of Humans and Machines

The best set of scenarios, from the perspective of humanity, is the symbiotic scenarios. All multi-cell life above a very rudimentary level is composed of a symbiosis of cells and bacteria. So it is reasonable that there could be a symbiosis of humans and machines.

For example, nano-bots could be inserted that would deliver toxins to cancerous cells to directly kill those cells, to inhibit their transmission of the cancer-causing agent to other cells, or to link with the brain with orders to repair any damaged cells. These nano-bots would be excreted when their work is complete.

Taking this a step further, these nano-bots could allow the human brain direct access to the information on the Internet or "in the cloud" (as marketers like to say). [Sidebar: "Cloud Computing" has been with us ever since the first computer terminals used a proprietary network to link themselves to a mainframe computer. Yes, the technology has been updated, but it's still remote computing and storage.] This would mean that all you would have to do is think in order to watch a movie or gain some knowledge about the world around you. The very dark downside of this is that terrorists, politicians, news commentators, or other gangsters and thugs could control your thinking, i.e., direct mind control. And the artificial consciousness could actually take over and use humans to its benefit. [Sidebar: Remember, a thief is nothing more than a retail politician, retail socialist, or retail communist. Real politicians, socialists, and communists steal at the wholesale level.] This mind control is the ultimate greedy way to steal—anyone whose mind is controlled is by definition a slave of the mind controller.

“Space the Final Frontier”

I see only one way out of the mind-control conundrum: traveling into and through space. Once humans leave the benign environment of the earth, the symbiosis of humans and machines (computers and other automation) becomes imperative for both humans and their automated brethren. Allies are not made in peace, only when there are risks or threats.


Even the best astronomical physicists readily admit that we don't understand our universe and that, as humans, we may never be able to understand it. There is simply too much to fathom. However, with a symbiosis with artificial consciousness, we may be able to take a stab at it.

Thursday, July 13, 2017

An Architecture for Creating an Ultra-secure Network and Datastore

The Problem
According to United States records, from 2006 to 2016 cyber attacks (crimes, intelligence gathering, and warfare) went up 1300 percent. Other reports identified in Forbes Magazine indicate that between 2015 and 2016 there was a 200 to 450 percent increase in attacks. I suspect, though, that the numbers vastly underestimate the total number of attacks. I know that in the late 1980s, one company was averaging 10,000 attacks per day on its website and access points to the internet, of which 4000 originated in Russia (then the USSR), China, North Korea, and the like.
There are two goals for attacks: to disrupt the entire IT infrastructure, or to gather or change protected data for various nefarious purposes. There is a multiplicity of reasons for these attacks, monetary gain, political change, and so on; the "so on" is too long to enumerate.

The cost for preventing and mitigating the effects of these attacks has spawned a new multi-billion dollar industry.  Consequently, the need is for an entirely new system (network and datastore) that completely defeats all attack vectors.  That is what I’m proposing here.

The Solution, a Disruptive Architecture: The Once and Future System

The Goal

The goal of the architecture presented here is to define a highly secure system for the transmission and storage of data.

The architecture is for a fundamentally different "new" network and datastore. I put "new" in quotes because I based the architecture on a number of concepts and standards from the late 1970s to the mid-1990s. For reasons of economics and business politics these concepts and standards were abandoned. When I submitted the architecture for a patent, even though it uses old concepts and standards in a new way, I was told that since it was based on well-known concepts and standards the architecture is unpatentable.

Consequently, I'm presenting it in this post in the hope that someone will take a serious look at it and communicate with me so that I can present the details and we can build a secure network and datastore.

The Architecture

My fundamental idea is to create a separate "data only" network and datastore. While initially having a worldwide network for the storage and transmission of data separate from the Internet "of everything" may seem a ludicrous idea to those looking at the "short-term" costs for an organization, what would the cost of having data stolen, corrupted, or destroyed be for that organization? And remember that there are initial and recurring costs for data security on a cloud or across the internet.

This new architecture has five components.  One of them has evolved over the past twenty years.  One of them was declared obsolete thirty years ago.  One of them is based on petrified standards of the 1980s.  And one uses a new twist on current hardware and software.  The fifth is a particular form of governance.

New User Interface Security

The base technology of the new user interface has been evolving over the past twenty years at least.  It is a combination of three functional technologies.  The first is biometric recognition.  Any secure system requires some form of authentication; that you are who you say you are.  Various forms of biometric authentication, facial recognition, fingerprint identification, retinal pattern recognition, and so on, are currently the least likely forms of identification to be broken by cyber attacks.

The second security technology is a version of the smartcard. These are credit-card-like cards with a data storage computer chip embedded. Under this new function the card reader would communicate the location, time of day, and date, whereupon the card would generate a pass code based on those parameters.
 
At the same time the reader would generate a pass code also based on those parameters. The system would accept the identification if and only if they matched. Since any secure system requires at least two-factor authentication, a user would need both the smart card (which additionally could store the biometric data) and their own body.
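A minimal sketch of the matching pass-code idea follows. It assumes the card and the reader share a secret key provisioned when the card is issued (my assumption; the mechanism isn't specified above) and that both derive the code from the same location, date, and time parameters; all identifiers are hypothetical.

```python
import hashlib
import hmac

# Both the smartcard and the reader derive a short pass code from the same
# parameters (reader location, date, time window) using a shared secret key
# provisioned when the card is issued.  Access is granted only if the two
# independently computed codes match.
def pass_code(secret_key: bytes, location_id: str, date: str, time_slot: str) -> str:
    message = f"{location_id}|{date}|{time_slot}".encode()
    digest = hmac.new(secret_key, message, hashlib.sha256).digest()
    return f"{int.from_bytes(digest[:4], 'big') % 10**8:08d}"   # 8-digit code

# Hypothetical values for illustration.
key = b"provisioned-at-issuance"
card_code   = pass_code(key, "clinic-42", "2017-07-24", "10:15")
reader_code = pass_code(key, "clinic-42", "2017-07-24", "10:15")
assert hmac.compare_digest(card_code, reader_code)   # identification accepted
```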

Finally, authorization and access control are both static for a given user interface to the system. This means that the user of a given device (be it a terminal, PC, smart phone, etc.) can only gain access to the set of data, records, or summaries to which they're entitled.

So a contract specialist has no access to engineering data for the contract, or only a limited set. If the contract specialist attempted to sign on to another device, to which he was not preapproved, he could not get to the data to which he is entitled. The reason is that an individual must be preapproved for every terminal the individual wants to use.

Or a doctor may not see a patient's complete medical history without the patient's permission. This would be a two-step process. The doctor would have to sign in on his or her device using the two-factor authentication described above. Then the patient would have to sign on to the same device using the same two-factor authentication to give the doctor permission to access his or her medical record.
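A minimal sketch of this static authorization idea, with hypothetical users, devices, and data-set names: it combines the device-preapproval rule above with the patient-consent step, both checked against meta-data that can change only through the governance process.

```python
# Static access control: a user may reach a data set only from a preapproved
# device, and a doctor may reach a patient's full record only after the
# patient has signed on to the same device and granted consent.
PREAPPROVED = {("dr_jones", "exam_room_3_terminal"),
               ("pat_smith", "exam_room_3_terminal")}
ENTITLEMENTS = {"dr_jones": {"medical_summary"}, "pat_smith": {"own_record"}}

def may_access(user, device, data_set, consents=frozenset()):
    if (user, device) not in PREAPPROVED:          # wrong terminal: no access
        return False
    if data_set in ENTITLEMENTS.get(user, set()):
        return True
    # The full record requires the patient's consent, given on the same device.
    return data_set == "full_record" and ("pat_smith", device) in consents

print(may_access("dr_jones", "exam_room_3_terminal", "medical_summary"))   # True
print(may_access("dr_jones", "lobby_kiosk", "medical_summary"))            # False
print(may_access("dr_jones", "exam_room_3_terminal", "full_record"))       # False
print(may_access("dr_jones", "exam_room_3_terminal", "full_record",
                 consents={("pat_smith", "exam_room_3_terminal")}))         # True
```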

The security meta-data and parameters are stored on the ultra-secure data network (USDN).  Any updates or changes must be made and approved through the system’s security governance function.  No dynamic changes can be made until the changes are approved.  In a political/cultural context, this governance process will be the most difficult to secure since users expect changes to be made “NOW” and the process doesn’t allow “NOW” to happen.

The Bridge

The second architectural component is the bridge from the Internet to the USDN.  This is really the key component securing the USDN from attacks.  And this is the component that was declared obsolete thirty years ago.  In the early 1980s there were many proprietary data networks.  To communicate data from one network to another required a network bridge.

The following diagram is from the patent that I applied for.  It shows an example of how changing the protocol layers or stacks creates a portcullis in the bridge that provides the ultra-security.  On the left side of the bridge are the standard Internet protocols.  Other than the top layer (called the Application Layer in the OSI model) and the bottom layer (the Physical Layer in the OSI model), all layers link and guide the communications between the sender and receiver.

Notice that the functional protocols on each side of the bridge, with the exception of the physical layer, are different. On the left side all protocols are current Internet standards. However, on the right side the bridge uses protocols from the Open Systems Interconnection (OSI) suite. These protocols were abandoned in the 1990s in favor of the earlier TCP/IP suite, which at the time was less expensive and much less capable. [Sidebar: "The first example of a superior principle is always less capable than a mature example of an inferior principle".]

What this means is that the entire USDN will use these OSI protocols.  Any cyber attack software developed for Internet protocols would have to be redesigned for the OSI protocols.

Even if hackers of whatever stripe did develop software capable of exploiting vulnerabilities in the OSI protocol stack, they would still need to get it onto the network.  But the design of the bridge includes a portcullis in the middle of the bridge.

This portcullis is designed to allow only data and records in well-defined formats to pass.  This means that no documents can move across the bridge.  In this case “documents” includes e-mail, word-processing documents, unformatted text, files, and other unformatted data.

This stringent requirement eliminates nearly every attack vector used by hackers.  For example, there is no way that a Trojan horse attachment can get into the system, because no e-mail, let alone e-mail with attachments, is allowed across the bridge.

As shown in the diagram, only data in specific and static XML formats is allowed to move through the portcullis.  The XML data structures are installed in the portcullis only after approval using one of the governance processes.

So, for example, medical data would use an XML version of the international medical standards, engineering data would use an XML version of STEP, and so on.  Only data exactly following those standards, and to which the user is entitled, would get through the portcullis.  This would initially impose a very large overhead of meta-security and access-control data about all individuals.
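
A minimal sketch of the portcullis check, assuming the lxml library and a hypothetical pre-approved schema file named approved_medical_record.xsd (both the library choice and the file name are my illustration, not part of the design):

from lxml import etree

# Schemas are installed in the portcullis only after governance approval.
APPROVED_SCHEMAS = {
    "medical_record": etree.XMLSchema(etree.parse("approved_medical_record.xsd")),
}

def portcullis_pass(payload: bytes, record_type: str) -> bool:
    """Allow a payload across the bridge only if it parses as XML and validates
    exactly against the pre-approved schema for its record type."""
    schema = APPROVED_SCHEMAS.get(record_type)
    if schema is None:
        return False          # no approved schema, nothing gets through
    try:
        doc = etree.fromstring(payload)
    except etree.XMLSyntaxError:
        return False          # free-form text, e-mail, attachments, etc. are rejected
    return schema.validate(doc)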

The Network

The third architectural component is the network.  The network is based on petrified standards of the 1980s.  Inside the portcullis-bridge, data would be free to move among the various nodes of the network using the same OSI protocol stack that is used on the right side of the portcullis-bridge shown in the diagram.

Additionally, it would use improved versions of the Directory Service (X.500) standard.  This would include using static routing meta-data (which many network analysts would say is not an improvement).  However, static routing meta-data means that if an unauthorized node magically appeared on the USDN (because some hacker tapped one of the USDN lines), the node would be recognized as a threat immediately.  Consequently, any attempt to breach the security imposed by the portcullis-bridge by directly attacking the network would fail, as long as good governance is in place.
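
A toy illustration of the static-routing idea; the directory contents, addresses, and function name are invented for the sketch:

# Static directory of authorized nodes, maintained only through the governance process.
AUTHORIZED_NODES = {
    "node-001": "49.0001.aaaa.bbbb.cccc.00",   # addresses are made-up examples
    "node-002": "49.0001.aaaa.bbbb.dddd.00",
}

def sender_is_authorized(node_id: str, address: str) -> bool:
    """Traffic from any node/address pair not in the static directory is treated as a threat."""
    return AUTHORIZED_NODES.get(node_id) == address

if not sender_is_authorized("node-999", "49.9999.0000.0000.0001.00"):
    print("Unknown node on the USDN: raise an alert and drop the traffic")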

Datastores

The last technical function is data storage.  This datastore function uses a new twist on current hardware and software design for the storage of data and information.  The twist is that only specific data and records are stored, not files from outside the network.

An organization using a USDN-like system would have its data file structures created by authorized personnel inside the USDN.  These file structures would follow the various authorized XML data structures.  No freeform data like e-mail or documents would be allowed.  [Sidebar: remember, it’s much, much simpler to create documents from data than to glean data from documents.]

The only applications that are authorized to run on the USDN and its datastore computers are those that create, read, update, or delete records or data elements.  Reading data would include reading for transfer and for summarization.
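
A skeletal sketch of the only kind of application interface the datastores would expose (the class and method names are mine, not from the design):

class USDNDatastore:
    """Only create, read, update, and delete of typed records; no file uploads."""
    def __init__(self):
        self.records = {}   # record_id -> dict of approved data elements

    def create(self, record_id, record):
        self.records[record_id] = record

    def read(self, record_id, elements=None):
        record = self.records[record_id]
        # Reading can be for transfer (whole record) or summarization (selected elements).
        return record if elements is None else {k: record[k] for k in elements}

    def update(self, record_id, element, value):
        self.records[record_id][element] = value

    def delete(self, record_id):
        del self.records[record_id]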

For example, suppose the medical profession of a state or of the United States adopts the USDN to protect patients’ medical records.  A medical researcher may be granted access to summaries of certain data elements from the records of patients who have a particular medical problem.  This access would be granted through an approval process (part of governance) prior to obtaining the summaries.

The advantage is that the medical researcher has access to a complete set of data for the population of an area.  The downside for the researcher is that they need a well-formulated and defensible hypothesis in order to obtain the data, and that the governance processes take time.
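
In practice, the approved access might amount to a summarization query that never returns individual records; this sketch, with invented field names and values, shows the idea:

from statistics import mean

# Toy patient records; only approved, de-identified elements are ever summarized.
PATIENTS = [
    {"condition": "diabetes", "age": 54, "a1c": 7.2},
    {"condition": "diabetes", "age": 61, "a1c": 8.1},
    {"condition": "asthma",   "age": 33, "a1c": 5.4},
]

def approved_summary(condition: str, element: str) -> float:
    """Return only an aggregate over patients with the given condition."""
    return mean(p[element] for p in PATIENTS if p["condition"] == condition)

print(approved_summary("diabetes", "a1c"))  # 7.65; no individual record leaves the USDN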

Governance

The governance processes function of the system’s architecture is the most critically important of the five functions because it is the only one where humans are involved, big time.  As discussed above, there are many security functions that are static and require administrative action to change their parameters and meta-data.  While I expect that actually changing the meta-data and parameters will be automated, the various decision-making processes will not be.

One obvious example is in banking.  Some financial data must be kept secure within a financial institution and shared only with a client.  Other data, in the form of transactions, must be shared between and among banks and other financial institutions.

The USDN security meta-data would determine which data could be sent to another financial organization, in what form, and other characteristics of the transaction.  The transfer would occur within the USDN and not across any portion of the Internet, and the data could then be retrieved by the destination organization.
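
A sketch of such a metadata-driven transfer rule, with the rule table and organization names invented for illustration:

# Transfer rules live in the USDN security meta-data and change only via governance.
TRANSFER_RULES = {
    ("bank_a", "bank_b"): {"wire_transaction"},   # only transaction records may flow
}

def may_transfer(source_org: str, dest_org: str, record_type: str) -> bool:
    """A record may leave its organization only if the static rules allow it."""
    return record_type in TRANSFER_RULES.get((source_org, dest_org), set())

print(may_transfer("bank_a", "bank_b", "wire_transaction"))  # True
print(may_transfer("bank_a", "bank_b", "client_portfolio"))  # False: stays inside bank_a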

For example, if all defense contractors were on the USDN, then when teams formed to respond to a DoD Request For Proposal (RFP), the various teams of contractors and subcontractors could share requirements and other data within their team.  When the DoD chose the winning team, program/project, risk, and design data could be shared across the team and with the customer without fear that a cyber attack on one of the subcontractors would lead to the capture or corruption of program or mission critical data.  [Sidebar: frequently a third- or fourth-tier subcontractor has more vulnerabilities than the prime contractor.]

Issues

Again, “The first instance of a superior principle is always inferior to a mature example of an inferior principle.”

There are three issues with the creation of such a system. 

The first is cost: creating an entire nationwide or worldwide network is very expensive in the startup phase.  Creating (or really resurrecting, in many cases) software to support the functions of the USDN will be very expensive.  There is the cost of implementing software services to interface with existing organizational applications.  Acquiring the physical cabling for the system will be expensive.

Modifying routers to use the new protocols will be expensive.  Designing, constructing, and testing the new portcullis-bridge will be very expensive.  Most of this investment will need to be made before a single data element is protected.

The cost is more than a straight financial issue of building the system.  It will threaten much of the multi-billion dollar cyber security industry’s income stream.  This industry will market and lobby against building out the system.

The second issue may be used by that industry as an argument against the USDN.  The issue is that the system only protects data, and not other types of information like e-mail and documents.  This is true.  However, the core of any organization is its data.  Documents can easily be constructed from data, but not the other way around.

The third issue, at least initially, is the response time of the system.  Currently, applications and users have come to expect near-instantaneous response times to dynamic requests.  Initially, at least, I predict that the response time to requests will be measured in seconds, maybe many.  I saw the same pattern with Microsoft DOS (it was bad until version 3.1), with other products from Microsoft, Apple, and Oracle [Sidebar: I worked with Oracle 4.1], and with many other hardware and software products.  So it will be a rocky start, but ultimately it will cost much less than the recover, rebuild, patch, upgrade, and get-hacked-again systems of today.

Summary

While the USDN does not protect an organization from all cyber attacks, it does make the organization’s mission-critical data nearly invulnerable.  An organization will be able to recover from an attack, and it will be nearly impossible for terrorists, cyber criminals, and others to capture the personal or mission-critical data the system protects.


For anyone who is interested, please comment on this post.  I have much more knowledge of the processes, technology, and construction involved than I can put in a post, but would be happy to discuss it.