Prologue

History and Background

“Problems cannot be solved at the same level of awareness that created them.” — Albert Einstein

We are still amid a technological epoch. Not since the harnessing of steam power have we seen such a change. During the Industrial Revolution, innovation moved from small-scale artisan endeavor to widespread industrialization, which started the upward spiral of globalization and eventually spat out an ever-expanding network of effective electronic communications. In short, rapid technological advances began to catapult human potential beyond its natural limits as the 20th century dawned.

As the world’s communication networks expanded, and our brightest minds connected, the take-up of applied know-how became transformative. So much so that by the end of the 20th century, technology in general had irrefutably changed the course of history. As a key indicator, and even taking into account deaths due to war and conflict — rounding off at around 123 million[1] [1] — our planet’s population grew three times faster than at any other time in history, from 1.5 to 6.1 billion souls in just 100 years [2].

But amid all that progress, one episode stands out.

As the demands of World War II pushed the world’s innovators into overdrive, a torrent of advance ensued. Where once crude equipment had proved sufficient to support mechanization, sophisticated electronic circuits would soon take over as the challenges of war work became clear — challenges so great that they would force the arrival of the digital age.

Whereas previous conflicts had majored in the manual interception and processing of military intelligence, by the outbreak of war in 1939, electronic communications had become dominant. Not only were valuable enemy insights being sent over the wire, but, with the earlier arrival of radio, long-distance human-to-human communications were literally in the air. World War II was, therefore, the first conflict to be truly fought within the newly formed battlegrounds of global electronic communication.

Warring factions quickly developed the wherewithal to capture and interpret electronic intelligence at scale. That advantage was coveted throughout the war and long after. That and the ability to harness science’s newest and most profound discoveries. Both provided the impetus for unparalleled advance, as security paranoia gripped the world’s political elite. On the upside came the rise of digital electronics and the whirlwind of information technology that would follow, but in parallel, we developed weapons of mass destruction like the atomic and hydrogen bombs.

At the heart of it all were information and kinship, in our voracious appetite for knowledge and the unassailable desire to find, share, and protect that which we hold true. Nowhere in human history will you find better evidence that we are a social species; all our surrounding computers and networks do today is underline that fact. Grasping that will be key to understanding the text to follow. For instance, as a race, we have often advanced by connecting to maximize our strengths and protect our weaknesses, whether that be how we hunt mammoth, build pyramids, or create our latest AI models. This is the tribe-intelligence that has bolstered our success and carried us along in the onward march of technology. It speaks to the fact that evolution cannot be turned back and, likewise, neither can the advance of any technology that catalyzes or assists its progress. Information technology will always move forward apace, while the sprawling threads of the world’s networks can only ever extend to increase their reach.

These things we know, even though we might poorly distill and communicate the essence of the connected insight they bring. What is certainly less well known, though, is what ongoing impact such expansion and advance will have on professional practice, especially since future technological advances may soon surpass the upper limits of God-given talents.

As technologists, we wear many hats. As inventors, we regularly push the envelope. But as architects, engineers, and inquisitors, we are expected to deliver on the promise of our ideas: to make real the things that we imagine and realize tangible benefit. In doing so, we demand rigor and aspire to professional excellence, which is only right and proper. But in that aspiration lies a challenge that increasingly holds us back: generally, good practice comes out of the tried and tested, and, indeed, the more tried and tested the better.

But tried and tested implies playing it safe and only doing the things that we know will work. Yet how can such practice succeed in the face of rapid advance and expansion? How can we know with certainty that old methods will work when pushing out into the truly unknown, and at increasing speed?

There can only ever be one answer: forward-facing practice must be squarely based on established first principles — the underlying tenets of all technological advances and the very philosophical cornerstones of advancement itself, regardless of any rights or wrongs in current best practice.

So, do any such cornerstones exist? Emphatically yes, and surprisingly they are relatively simple and few.

Scale and Complexity

As we become more proficient with a tool or technology, we largely learn how to build bigger and better things with it. Be they bridges, skyscrapers, or Information Technology (IT) solutions, their outcomes become business as usual once realized. What really matters, though, is that the tools and techniques used for problem-solving evolve in kind as demand moves onward and upward. For that reason, when a change in demand comes along, it is normally accompanied by an equivalent advance in practice, followed by some change of name in recognition. For instance, IT architects talk of “components”, whereas Enterprise Architects talk of “systems”. Both are comparable in terms of architectural practice but differ in the scale and abstraction levels they address. In that way, IT architecture focuses on delivering IT at the systems level, whereas Enterprise Architecture is all about systems of systems.

Interestingly, the increases in scale and complexity that brought us Enterprise Architecture were themselves a consequence of advances in communications technology as new network protocols catalyzed progress and expanded the potential for connectivity — so that a disparate IT system could comfortably talk to another disparate IT system. Nevertheless, boil the essence of this progress down and only two characteristics remain: the scale at which we choose to solve problems, and the levels of complexity necessary to successfully deliver appropriate solutions.

That is it. In a nutshell, if we can work from a base of managing scale and complexity, then the selection of tools and techniques we use becomes less important.

Considering the lesser of these two evils first, in recent times, we have become increasingly adept at tackling complexity head-on. For instance, we now understand that the antidote to complexity is the ability to abstract. As more and more complexity is introduced into the solutions we build, as professionals we simply step back further in order to squeeze in the overall perspectives we need. We therefore work using units of a “headful”, as the architect Maurice Perks [3] once said — any solution containing more than a headful of complexity [4] needs multiple professionals in attendance. As we step back, detail is obviously lost as the headful squeezing happens, even though we admirably try to apply various coping techniques, like dissecting out individual concerns and structuring them into hierarchies, ontologies, or whatever. But today, that is mostly fine as we have learned to employ computers to slurp up the fallout. This is the essence of Computer Aided Design (CAD) and the reason why tools like Integrated Development Environments (IDEs) have proved so successful. Complexity, therefore, is not a significant challenge. We mostly have it licked. Scale, on the other hand, is much more of a challenge.

Ontologies

An ontology is a formal way to describe and connect knowledge in a specific domain. It is like a schematic or map that helps us navigate and understand the relationships between concepts within that domain.

Ontologies are used in many fields, including Artificial Intelligence (AI), knowledge management, and information science. They provide a way to structure information and make it more accessible and understandable for both humans and machines. By defining a set of concepts and their relationships, an ontology can help to standardize terminology and facilitate communication across different disciplines and domains. It also enables reasoning over the relationships between the concepts involved.
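To make the idea more concrete, here is a minimal sketch in Python. The concepts, the relation names, and the is_a/part_of vocabulary are all invented for illustration only; the point is simply that an ontology can be held as a set of subject/relation/object triples, over which even a trivial reasoner can infer relationships that were never stated directly.

    # A minimal sketch of an ontology as (subject, relation, object) triples,
    # with a tiny reasoner that walks "is_a" links transitively.
    # All concept and relation names here are illustrative, not a standard vocabulary.
    from collections import defaultdict

    TRIPLES = [
        ("Router", "is_a", "NetworkDevice"),
        ("Firewall", "is_a", "NetworkDevice"),
        ("NetworkDevice", "is_a", "ITAsset"),
        ("ITAsset", "part_of", "Enterprise"),
        ("Firewall", "protects", "Enterprise"),
    ]

    def build_index(triples):
        """Index the triples by (subject, relation) for quick lookup."""
        index = defaultdict(set)
        for s, r, o in triples:
            index[(s, r)].add(o)
        return index

    def broader_concepts(concept, index, relation="is_a"):
        """Infer every broader concept reachable via the given relation."""
        seen, stack = set(), [concept]
        while stack:
            for parent in index.get((stack.pop(), relation), ()):
                if parent not in seen:
                    seen.add(parent)
                    stack.append(parent)
        return seen

    index = build_index(TRIPLES)
    # "Router is_a ITAsset" is never stated, but the reasoner can infer it.
    print(broader_concepts("Router", index))  # {'NetworkDevice', 'ITAsset'} (order may vary)

Real ontology languages, such as RDF and OWL, offer far richer semantics than this, but the principle is the same: shared concepts, named relationships, and the ability for machines to reason over them.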

In simple terms, the difficulty with asking the question “How big?” is that there is theoretically no upper limit. This happens to be a mind-bending challenge, especially given that the disciplines of architecture and engineering are built on the very idea of limits. So, before we can build anything, at least in a very practical sense, we must know where and when to stop. In other words, we must be able to contain the problems we want to solve — to put them in a mental box of some kind and be able to close the lid. That is the way it works, right?

Well, actually… no, not necessarily. If we were to stick to the confines of common-or-garden IT architecture and/or engineering then perhaps, but let us not forget that both are founded on the principles of science and, more deeply, the disciplines of mathematics and philosophy in some very real sense. So, if we dared to dive deep and go back to first principles, it is surely relevant to ask whether any branch of science or mathematics has managed to contain the idea of the uncontainable. Is either mathematics or philosophy comfortable with the idea of infinity or, more precisely, the idea of non-closable problem spaces — intellectual boxes with no sides, ceilings, or floors?

Not surprisingly, the answer is “yes”, and yes to the point of almost embarrassing crossover.

Philosophy, Physics, and Technology at the Birth of the Digital Age

For appropriate context, it is important that we take a historical perspective. This is important stuff, as hopefully will become clear when ideas on new architectural approaches are introduced later, so please bear with the narrative for now. This diversion ultimately comes down to not being able to understand the future without having a strong perspective on the past.

As Alan Turing, the father of modern-day computer science, passed through the gates of Bletchley Park for the last time at the end of World War II, he was destined to eventually go to Manchester, to take up a position at the university there. Contrary to popular belief, he had not developed the world’s first digital computer at Bletchley, but the team around him had got close, and Turing was keen to keep up the good work. Also contrary to popular belief, Turing had not spent the war entirely at Bletchley, or undertaken his earlier ground-breaking work on logic entirely in the UK. Instead, he had found himself in the US, first as a doctoral student before the war, then as a military advisor towards its end. His task was to share all he knew about message decryption with US Intelligence, once America had joined the Allied forces in Europe.

On both his visits, he mixed with rarefied company. As a student at Princeton University, for instance, he would no doubt have seen Albert Einstein walking the campus’s various pathways, and his studies would have demanded the attention of the elite gathered there. So rarefied was that company, in fact, that many of Turing’s contemporaries were drafted in to help with the atomic bomb’s Manhattan Project as war work ramped up in the US. Most notably on that list was the mathematician and all-round polymath John von Neumann, who had not only previously formulated a more flexible version of logic than the Boolean formulation [5] central to Turing’s ground-breaking work, but had also captured the mathematical essence of quantum mechanics more purposefully than anyone else in his generation. That did not mark him out solely as a mathematician or a physicist though. No, he was more than that. By the 1940s, von Neumann had been swayed by the insight of Turing and others, and had become convinced of the potential of electronic computing devices. As a result, he took Turing’s ideas and transformed them into a set of valuable engineering blueprints. This was ground-breaking, fundamental stuff: his stored-program architecture, for instance, is still the predominant pattern used for digital processor design today.

As for his ongoing relationship with Turing, fate would entangle their destinies, and in the years following World War II, von Neumann’s and Turing’s careers would overlap significantly. Both worked hard to incubate the first truly programmable electronic computers, and both became caught up in the brushfire of interest in nuclear energy. Von Neumann’s clear brilliance, for instance, unavoidably sucked him into the upper workings of the US government, where he advised on multiple committees, several of which were nuclear-related, while Turing’s team nursed its prototype computer, the “Baby”,[2] on funds reserved for the establishment of a British nuclear program. Both, therefore, survived on a diet of computing and nuclear research in tandem, but what resulted was not, strictly speaking, pure and perfect.

For sure, Turing and von Neumann understood the base principles of their founding work better than anyone else, but both were acutely aware of the engineering limits of the day and the political challenges associated with their funding. So, to harden their ideas as quickly and efficiently as they could, both knew they had to compromise. As the fragile post-war economy licked its wounds and the frigid air of the Cold War swept in, both understood they had to be pragmatic to push through their ideas; it was not the time for idealism or precision. Technical progress, and fast technical progress at that, was the order of the day, especially in the face of the growing threats of a changing geopolitical world.

For Turing, that was easier than for von Neumann. His model of computing was based on the Boolean extremes of absolute and complete logical truth or falsehood: any proposition under test must be either completely right or wrong. In engineering terms, that mapped nicely onto the idea of bi-pole electrical switching, as an electronic switch was either on or off, with no middle ground. Building up from there was relatively easy, especially given the increasingly cheap and available supply of electronic bi-pole switches and relays in the form of valves. So, many other engineering problems aside, the route to success for Turing’s followers was relatively clear.

The same was not true for those who subscribed to von Neumann’s vision, however. In his mind, von Neumann had understood that switches can have many settings, not just those of the simplest true/false, on/off model. Instead, he saw switching to be like the volume knob on a perfect guitar amplifier: a knob that could control an infinite range of noise. This was a continuous version of switching, a version only bounded by the lower limit of impossibility and the upper limit of certainty. The important point, though, was that infinite levels of precision could be allowed in between both bounds. In von Neumann’s logic, a proposition can therefore be asserted as being partly true or, likewise, partly false. In essence then, von Neumann viewed logic as a continuous and infinite spectrum of gray, whereas Turing’s preference was for polarized truth, as in black or white.
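To make the contrast tangible, the sketch below compares two-valued, Boolean-style switching with a continuous-valued logic in which truth can sit anywhere between 0 (impossibility) and 1 (certainty). It is an illustration only: the min/max/complement rules used are the standard fuzzy-logic operators, chosen for simplicity, and are not intended to reproduce von Neumann’s actual formalism.

    # Illustrative contrast between two-valued (Boolean) logic and a
    # continuous-valued logic where truth lies anywhere in [0.0, 1.0].
    # The min/max/complement rules are standard fuzzy-logic operators,
    # used here purely to make the "spectrum of gray" idea concrete.

    def bool_and(a: bool, b: bool) -> bool:
        return a and b                # a proposition is entirely true or false

    def fuzzy_and(a: float, b: float) -> float:
        return min(a, b)              # "a AND b" is limited by the weaker claim

    def fuzzy_or(a: float, b: float) -> float:
        return max(a, b)              # "a OR b" is lifted by the stronger claim

    def fuzzy_not(a: float) -> float:
        return 1.0 - a                # partial falsehood mirrors partial truth

    # Turing-style switching: on or off, with no middle ground.
    print(bool_and(True, False))        # False

    # Continuous switching: propositions can be partly true.
    likely, doubtful = 0.9, 0.3
    print(fuzzy_and(likely, doubtful))  # 0.3
    print(fuzzy_or(likely, doubtful))   # 0.9
    print(fuzzy_not(doubtful))          # 0.7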

In the round, Turing’s model turned out to be much more practical to build, whereas von Neumann’s model was more accommodating of the abstract theoretical models underlying computer science. In that way, Turing showed the way to build broadly applicable, working digital computers, whereas von Neumann captured the very essence of computational logic at its root. He, rather than Turing, had struck computing’s base substrate and understood that the essential act of computing does not necessarily need the discrete extremes at the core of Boolean logic.

Computing, von Neumann had realized, could manifest itself in many ways. But more than that, by establishing a continuous spectrum of logic, his thinking mapped perfectly onto another domain. In what was either a gargantuan victory for serendipity or, more likely, a beguiling display of genius, von Neumann had established that the world of quantum mechanics and the abstract notion of computing could be described using the same mathematical frameworks. He had, therefore, heralded what the physicist Richard Feynman would proclaim several decades later:

“Nature isn’t classical, dammit, and if you want to make a simulation of nature, you’d better make it quantum mechanical…” [6]

Relevance to IT Architecture and Architectural Thinking

This, perhaps perplexing, outburst would turn out to be the clarion that heralded the rise of quantum computing, and although it may have taken some time to realize, we now know that the quantum paradigm does indeed map onto the most foundational of all possible computational models. It is literally where the buck stops for computing and is the most accommodating variant available. All other models, including those aligned with Turing’s thinking, are merely derivatives. There is no further truth beyond that, full stop, end of sentence, game over.

But why this historic preamble, and specifically why the segue into the world of quantum?

If you look broadly across the IT industry today, you will see that it is significantly biased toward previous successes. And for good reason — success builds upon success. For instance, we still use von Neumann’s interpretation of Turing’s ideas to design the central processing units in most of the world’s digital computers, even though we have known for decades that von Neumann’s more advanced vision of computing is far broader and more inclusive. Granted, state-of-the-art quantum computing is still not quite general purpose yet, or ready for the mainstream, and the physical constraints dominant at the atomic level mean that the sweet spot for quantum will always be more focused than widespread. But regardless, that should not limit our thinking and practice. Only the properties carried forward from computation’s absolute ground truths should do that, and not any advantage that has been accrued above them through the necessities of application.

And that is a significant problem with IT architecture today. Like most applied disciplines, it is built from a base of incremental application success, rather than a clear understanding of what is possible and what is not. In other words, the mainstream design of IT solutions at both systems and enterprise levels has been distracted by decades of successful and safe practice. To say that another way, we rely heavily on hands-on tradition.

Engineers and architects of all kinds may well be applauding at this point. “If in doubt, make it stout and use the things you know about” is their mantra, and that is indeed laudable up to a point. Nevertheless, as we seek to push out to design and build IT systems above enterprise scale, out into the era of hyper-Enterprise Architecture as it were, the limits of tried-and-tested are being pushed. In such a world, we may plausibly need to model billions of actors and events, all changing over time and each with a myriad of characteristics. And that takes us out into an absolute headache of headfuls’ worth of complexity.

Where, in the past, the scale and complexity of the IT systems we aspired to design and build could be comfortably supported by the same hands-on pragmatic style favored by Turing, as he steered toward the nascent computers of a post-war world, we are no longer afforded such luxury. The demands of today’s IT systems now lie above the levels of such pragmatism and out of the reach of any one individual or easily managed team. No, the game has changed. Now, we are being forced to move beyond the security afforded by reductionist[3] approaches [7] and the hope that they might yield single-headful victories. This is therefore a time to open up and become more accommodating of the full spectrum of theory available to us. It is the time to appreciate the kick-start given by Turing and move on to embrace the teachings of von Neumann. This is the age where accommodation will win out over hands-on pragmatism. It is a time to think brave thoughts. Professional practice has reached a turning point, whether we like it or not.

Even our most respected experts agree [8]. Take Grady Booch [9] for instance, one of the inventors of the Unified Modeling Language™ (UML®) and one of the first to acknowledge the value of objects [10] [11] in software engineering. He openly talks in terms of three golden ages of architectural thinking.[4] The first, he suggests, was focused on algorithmic decomposition, invested in the translation of functional specifics into hand-written program code. In other words, the manual translation of rule systems into machine-readable forms dominated. Then, as Paul Homan [12] suggests, came the realization that ideas could be encapsulated and modeled independently of any underlying rule systems or data structures. Fed by the broadening out of use cases [13], that realization provided the essence of the second age, which saw the rise of object and class decomposition and in which, it might be argued, abstraction and representation broke free — so that architectural thinking need not focus exclusively on code generation. It also forked work on methodologies into two schools. The first embraced the natural formality of algorithms and therefore sought precision of specification over mass appeal. Today, we recognize this branch as Formal Methods [14], and its roots still run deep into mathematical tradition. For the design of safety-critical IT systems, like life support in healthcare or air traffic control, formal methods still play a vital role.

Alongside, however, came something much freer, more intuitive, and much more mainstream.

Feeding on the innate human preference for visual communication, and with a nod to the then-fashionable use of flowcharts in structured thinking, a style of methodological tooling emerged that linked various drawn shapes together using simple lines. This gave birth to the world of Semi-Formal Methods, which still dominates IT architecture and software engineering practice today, and within which can be found familiar tooling favorites, like the UML modeling language and the TOGAF® Enterprise Architecture framework [15] — both of which have served IT professionals well for decades.
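Returning briefly to the decomposition shift that separated the first two ages, the sketch below makes it concrete by solving the same toy billing problem twice: once as an algorithmic decomposition into standalone functions, and once as an object decomposition in which the idea of an account is modeled in its own right. The example and its names are invented purely for illustration; they are not drawn from Booch’s or Homan’s work, nor from UML or TOGAF.

    # First age: algorithmic decomposition -- the solution is a set of
    # functions that transform raw data step by step.
    def total_owed(invoices):
        return sum(amount for _, amount in invoices)

    def apply_discount(total, rate):
        return total * (1.0 - rate)

    # Second age: object/class decomposition -- the domain concept "Account"
    # is modeled independently, with behavior encapsulated alongside its data.
    class Account:
        def __init__(self, name, discount_rate=0.0):
            self.name = name
            self.discount_rate = discount_rate
            self.invoices = []

        def add_invoice(self, reference, amount):
            self.invoices.append((reference, amount))

        def balance(self):
            gross = sum(amount for _, amount in self.invoices)
            return gross * (1.0 - self.discount_rate)

    data = [("INV-1", 120.0), ("INV-2", 80.0)]

    # Algorithmic style: data and functions live apart.
    print(apply_discount(total_owed(data), 0.1))   # 180.0

    # Object style: the Account abstraction carries its own rules.
    acme = Account("ACME", discount_rate=0.1)
    for ref, amount in data:
        acme.add_invoice(ref, amount)
    print(acme.balance())                          # 180.0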

The third age, however, as Grady advocates, is somewhat different from what has gone before, in that we no longer always explicitly program our machines, but rather teach them [8]. This step-change comes from the advent of new technologies, like Generative Artificial Intelligence (GenAI) [16], Large Language Models (LLMs) [17], and significantly sized neural networks [18], and sees human experts partially replaced by machine-based counterparts. In saying that, however, it is important to remember that the “A” in AI does not necessarily always stand for “artificial”. It can also stand for “augmented”, in that synthetic assistants do not so much replace human function as enhance it. The third age is therefore about both the replacement and enhancement of professional (human) practice, especially in the IT space. AI thus allows us to abstract above the level of everyday work to focus on the augmentation of practice itself. In that way, we can now feed superhuman-like capabilities directly into human-constrained workflows. The hope is that these new superpowers might, in some cases at least, outstrip the limits of natural human skill to help make the once intractable tractable and the once unreachable reachable. So, in summary, this next-gen momentum is as much about providing antidotes to complexity and scale in IT systems as it is about advancing the fields of IT architecture and software engineering. In the age-old way, all technologies and professional practices become optimized as they mature, to the point where complexity and unpredictability are effectively negated.

All that said, there is no need to feel a sense of impending doom, as much progress has already been made. For instance, well-established areas of practice now exist, not too distant at all from the traditional grounds of IT architecture, which have already successfully ventured out far beyond the limits of the headful. These, and their various accompanying ideas, will all be introduced as the threads of this book come together. As a note to those set on reading this text, it should hopefully be fascinating to understand that these ideas are both old and established. They are the stuff of the conversations that held IT’s early pioneers together, only to be resurfaced today.

Even though the rats’ nest of connected technology around us today might not feel very quantum or von Neumann-like at surface inspection, stand back and quite the opposite should become clear. En masse, the overpowering complexity and scale of it all buzzes and hums above us, not at all unlike the apparent randomness inherent to every single atom involved. To capture that essence and bottle it is the key to future progress. That, and an acute appreciation of just how different the buzzing is in comparison to the familiar melodies of systems and Enterprise Architecture.

Widening the Lens — The Road to Ecosystems Architecture

As the digital age dawned, associated benefits were small and localized at first. In the world of commerce, for instance, it was soon realized that business accounts could be processed far faster by replacing human computers with electronic counterparts. Likewise, customers and suppliers could be herded with a simple phone call. It was all a matter of optimizing what was familiar and within the confines of well-established commercial practice. Beyond that, it was perhaps, just perhaps, about pushing norms to experiment with the new electronic trickery. Out of that came new business, and eventually whole new industries were spawned; the reach of the planet’s networks kept expanding, slowly finding its way into every nook and cranny of big business.

What came next was a foregone conclusion. Soon, it became clear that individuals, families, and communities could connect from far-flung places. That is what we do as a social species. It is in our nature. And so, too, this skill ascended to the business level. Economics and the sheer intoxication of international opportunity took over, and the race to adopt and adapt began.

By the 1980s, desperate to keep up, many business leaders began to breach the boundaries of their traditional businesses and wrestle with the ultra-high-scale, complexly connected communications networks emerging around them. Some might have seen this as innovation in the wild, but, to others, it was nothing more than an emergence born out of the Internet’s arrival. Either way, the resulting mass extension of commercial reach shifted the balance of business as the new century arrived. The slithering beast of the Internet Age was out of its shell, writhing and hissing as the walls of enterprise fell around it. Business emphasis had shifted. No longer was there a need for enterprise-centricity. It was about hyper-enterprise now. Where once systems of business dominated, now it was about ecosystems.

From an IT architecture perspective, this prompted talk of systems of systems theories and even sociotechnical networks, and all without so much as a nod from the professional standards community. Regardless, grass-roots interest flourished and the term Ecosystems Architecture first appeared in positioning papers [19] [20] [21] somewhere between 2014 and 2019, although the concepts involved had likely been in circulation long before that.

Looking back, it is possible to remember the rise in interest just before objects and Object-Oriented Design (OOD) were formalized in the 1980s. At that time, many professionals were thinking along similar lines and one or two were applying their ideas under other names, but, in the final analysis, the breakthrough came down to a single act of clarity and courage, when the key concepts were summarized and labeled acceptably. The same was true with Enterprise Architecture. By the time the discipline had its name, its core ideas were already in circulation, but practice was not truly established until the name itself had been used in anger. Thus, in many ways, naming was the key breakthrough and not any major change in ideas or substance. And so it is today. As we see the ideas of Enterprise Architecture blend into the new, the name Ecosystems Architecture will become increasingly important going forward, as will the idea of hyper-enterprise systems and third-age IT architecture.

This should come as no surprise. Enterprise Architects have been familiar with the idea of ecosystems for some time and the pressing need to describe dynamic networks of extended systems working toward shared goals. Such matters are not up for debate. What is still up for discussion, though, is how we establish credibility in this new era of hyper-enterprise connectivity.


1. 37 million military deaths, 27 million collateral civilian deaths, 41 million victims of "democide" (genocide and other mass murder), and 18 million victims of non-democidal famine.
2. Also called the Small-Scale Experimental Machine (SSEM).
3. Reductionism relates to any of several related philosophical ideas regarding the associations between phenomena that can be described in terms of other simpler or more fundamental phenomena. It is also described as an intellectual and philosophical position that interprets a complex system as the sum of its parts.
4. Grady’s actual words were “software engineering”, but he later agreed with the position outlined here.