6. Summary and Final Comments

“If you think of [the Web] in terms of the Gold Rush, then you’d be pretty depressed right now because the last nugget of gold would be gone. But the good thing is, with innovation, there isn’t a last nugget.” — Jeff Bezos

“If in doubt, make it stout and use the things you know about.” — anon

As IT architects, we like structure. Structure not only in what we do and say, but in the how, why, and when of our practice. We are also naturally wary of risk and are taught early to neutralize it in the face of opportunity, innovation, or change. When we therefore come across a problem we have not encountered before, we recoil and ask for the opinions of our peers. If and when that search hits ground truth, we hopefully find tenets laid down through best practice and community agreement. These are the foundational standards upon which the entire IT architecture profession is based. From there, lest some disaster befall us, we elaborate to instill the value of sober governance and consensus agreement. In essence, we do things by the book. That is what professionals do.

But what to do when such standards and governance processes cannot provide blanket protection? What to do when a world view simply refuses to conform with all that has gone before? These and other questions face us in the brave new world of hyper-enterprise systems.

During the preparation of this book, a group of experts met regularly within The Open Group Architecture Forum to discuss progress and any issues that arose. The result was not only some extremely useful professional debate, but also something of an adjustment process. Some of the participants were rightly nervous. While they welcomed the new ideas being presented, they were keen to ground them in what they knew and trusted. What followed was often a case of going back to first principles and then rebuilding in a fresh light.

For instance, we often referred to the fact that IT architects sometimes start the process of architectural thinking by referencing written descriptions of a desired solution, offered up by business counterparts deep in an enterprise’s bowels. From there, they pick over that text and pull out just the nouns and verbs that capture the solution’s essence. These become names or descriptors of the various elements in the first of a series of architectural schematics, all of which incrementally define the design and delivery process of the target IT system at hand.

In that, things are very systematic. Nouns become actors or events, and verbs become functions or actions. In that way, the prescription of any chosen architectural method can be doled out and progress towards some agreed end game can begin.
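To make that concrete, the harvesting step can be sketched in a few lines of code. The snippet below is a minimal illustration, assuming spaCy and its small English model (en_core_web_sm) are installed; the requirement sentence is invented for the example. Nouns surface as candidate actors or events, and verbs as candidate functions or actions.

```python
import spacy

# Assumes: pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

# A hypothetical requirement offered up by a business counterpart.
requirement = (
    "The broker validates each order and notifies the trading desk "
    "when settlement completes."
)

doc = nlp(requirement)

# Nouns become candidate actors or events; verbs become candidate actions.
actors = sorted({t.lemma_ for t in doc if t.pos_ in ("NOUN", "PROPN")})
actions = sorted({t.lemma_ for t in doc if t.pos_ == "VERB"})

print("candidate actors/events:", actors)
print("candidate functions/actions:", actions)
```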

But what if you do not have the nouns or verbs to describe what is going on or required? Worse still, what if you have the required wording, but the implications of its use are just too much for you to handle? What if those words simply cause all the tools within your reach to short out or overflow under the stress? Likewise, what if the wording you have is correct today, but perhaps not tomorrow? These are the things that worried the team looking to supervise the birth of Ecosystems Architecture.

In the end, however, we can report that the child was delivered safely.

During that process, a few things became clear, not least of which was the absolute need not to jettison existing practice. Ecosystems Architecture, therefore, builds upon Enterprise Architecture, as does Enterprise Architecture atop systems architecture. All that changes is an added openness to the way typing is applied, and therefore how vocabularies of architectural terms and symbols might be laid down and adopted, and a few additional ways of working on top.

In a nutshell, that boils down to the dynamic extension and scaling of established architectural practice.

What normally happens is that when architects pick over the needs of their clients, they are happy to accept certain verbs, nouns, and numbers, but not others. If these relate to a change of state or a need to communicate at reasonable size and speed, for instance, then all is fine, as they translate into familiar tractable concepts etched into established architectural thinking. They are safe and are practically deliverable, in other words. But if they talk of deliciousness or succulence perhaps — to quote our examples from Chapter 3 — then noses might well be turned up.

Nevertheless, in an ecosystems world, these and other non-standard concepts might well need representing and may well change over time. This is simply because accompanying narratives could be too vague and/or too large and complex to record using conventional means. So, the starting point must be to first capture as much value as possible, then work from there. That also requires means that are qualifiable, quantifiable, and amenable to storage, representation, and translation at very high scales and complexities.

This might seem like a contradiction, but it is not. In fact, we have tackled all of these challenges and more besides, with outstanding success in the past. Qualifiable and quantifiable simply mean that any information open to architectural thinking must be uniquely identifiable, and in ways that we can classify, count, measure, query, and perhaps reason over. This should come as no surprise, given that we already do this all the time in IT systems involving large and complex data. We call that indexing or mapping, in that we merely replace any characteristics that make a thing or idea stand out with a unique string — normally called a key or hash — which acts as a proxy shortcut for that thing or idea. What is more, given that indexes are normally much smaller than the characteristics they reference, this allows for catalogs of descriptive content at much larger scales and complexities than we humans can naturally handle.
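As a minimal sketch of that indexing idea, consider the following, where a full description is replaced by a short hash that stands in for it; the example description is invented, and the truncation to 12 characters is purely illustrative.

```python
import hashlib

def key_for(description: str) -> str:
    """Derive a compact proxy key (a truncated SHA-256 hash) from a description."""
    return hashlib.sha256(description.encode("utf-8")).hexdigest()[:12]

# The full characteristics of a thing or idea...
description = "An order-matching service exposing a gateway to brokers"

# ...are replaced in the catalog by a unique, much smaller key.
catalog = {key_for(description): description}
print(catalog)
```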

This is where vector-based mathematics, borrowed from the world of information retrieval, comes in, as it can ground any denotational semantics associated with the things we need to model and implement, and in ways that allow measurable comparison relative to any broader problem space. It can also encode the various attributes that hang off these assets, as they are assigned unique keys. In that way, both keys and attributes can be compressed using the same mathematical approach and arranged to map out the details of any relevant ecosystem.
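A minimal sketch of that measurable comparison follows, assuming some embedding process has already assigned vectors to two of the non-standard concepts from Chapter 3; the three-dimensional vectors are invented for illustration, and cosine similarity stands in for the comparison itself.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Measure how closely two concept vectors point in the same direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings for two non-standard concepts.
deliciousness = np.array([0.9, 0.1, 0.3])
succulence = np.array([0.8, 0.2, 0.4])

print(f"similarity: {cosine_similarity(deliciousness, succulence):.3f}")
```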

That nicely brings us to a simple summary of the change advocated in this book:

Ecosystems Architecture advances practice by focusing on higher-order systems archetypes and abstractions. It thereby augments asset and attribute representation using mathematical (vector-based) indexing and compression as a starting point.

This essentially recasts a remarkably familiar problem. As the Web’s early pioneers worked to bring its nascent search engines to life, they too had to ask how to embrace an ecosystem of sorts. Back then, interest focused on the expanding vastness of the Web as it blanketed its underlying Internet, and, looking back, that hurdle was jumped with ease. This time, however, the challenge is broader and more difficult.

In many ways, the World Wide Web was, and still is, a uniform phenomenon, with its various languages, protocols, and formats being governed by a surprisingly small group of standards. As such, its placement within the digital ether is fixed relative to the Internet, and its presentation is therefore relatively easy to predict and understand. With the planet’s web of interlinked IT systems, however, we are not as lucky. As new IT paradigms, specifications, and standards spring up, we are left with a wake of change in which generations of technology must fight it out for survival. That can leave IT professionals in a perpetual realm of firefighting. So, how can we rotate out from this inward-facing uphill struggle to practically grasp the advantages offered by the wider digital context?

The ultimate answer must lie in a combination of updated tooling and augmented practice. Today, for instance, chatbots like ChatGPT [158] make it relatively easy to extract nouns and verbs from human-readable text as justifiable entities, actions, attributes, and so on. Likewise, it is now trivial to create that text directly from human speech, and perhaps even via some process involving chatbot interviews. This is the burgeoning world of openly available AI that we appear willing to accept, so it is by no means a stretch of the imagination to think of a world where requirements and supporting narratives are harvested automatically, at scale and with ease. That is what the “A” for artificial in AI gives us, but there is also another “A” involved. Once any corpus of detail has been amassed and perhaps automatically structured into a graph format, it will still need inspection by suitably qualified IT architects to ensure that its contents can be molded into downstream deployable assets. This may require direct interaction with that corpus or its index (by way of its vector-based graph), or may itself be assisted by the use of artificial intelligence so that implementation details are hidden. Regardless, the bringing together of human skill with advanced tooling gives us augmented intelligence, the other “A” in AI. What is more, if this work extends to more than just a handful of practitioners, then we soon arrive at the lower limit of what might be described as a social machine — as explained in Chapter 5.
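The graph-structuring step mentioned above might be sketched as follows, using networkx; the nodes and edge label are hypothetical examples of entities and actions harvested from requirements text.

```python
import networkx as nx

# Harvested entities become typed nodes; harvested verbs become typed edges.
corpus = nx.DiGraph()
corpus.add_node("broker", kind="actor")
corpus.add_node("order", kind="entity")
corpus.add_edge("broker", "order", action="validates")

for src, dst, data in corpus.edges(data=True):
    print(f"{src} --{data['action']}--> {dst}")
```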

Tried and tested architectural practice can then kick in through filtering, inferencing, or other techniques applied to the corpus, so that manageable viewpoints can be extracted and worked on at the headful level. If, for example, returned results match or approximate something like a collection of components, then the chances are they can be modeled in a standard way, perhaps by segmenting into headfuls and applying a component model. In that way, progress can be made. The same goes for a sequence diagram or any other established form of architectural model, documentation, or method. Similarly, even where we lack standard nomenclature to express the things that need to be said, so long as we can justify the representation required, there should be nothing to stop architects from creating new ad hoc types and/or associated symbology. In that way, it is perfectly acceptable, if somewhat extreme, to think of architectural diagrams containing shapes not unlike either chickens or waffles, just so long as their parent types can be justified and applied with repeated formality.
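By way of illustration, the kind of corpus filtering just described might look like the following sketch, again using networkx: a larger corpus graph is reduced to a headful-sized viewpoint by keeping only nodes of one type. The node names and types are invented.

```python
import networkx as nx

corpus = nx.DiGraph()
corpus.add_node("order-service", kind="component")
corpus.add_node("audit-log", kind="component")
corpus.add_node("succulence", kind="quality")  # a non-standard, ad hoc type
corpus.add_edge("order-service", "audit-log", action="writes to")

# Filter the corpus down to a component viewpoint for standard modeling.
components = [n for n, d in corpus.nodes(data=True) if d["kind"] == "component"]
viewpoint = corpus.subgraph(components)

print(list(viewpoint.edges()))  # [('order-service', 'audit-log')]
```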

Furthermore, basic graph theory provides a safety net, in that if new nomenclature cannot be found or agreed, the default of circles for nodes and straight lines for connections between nodes can be used, with associated types overlaid via annotation. Graphs can therefore model most things, as they have an elegant and flexible syntax. As a result, we can think of graphs encoded using vector-based mathematics as a kind of back-end context-free grammar,[1] allowing universal jotter pads of ideas to be formalized by way of ontological corpora and without restrictive limits on scale, complexity, symbology, or semantics. From there, we can springboard in and out of whatever codification we like to get across the architectural points needing communication, construction, testing, and so on. Once architects are happy with the filter and typing overlays they have applied, they can use them to codify relevant semantics using more specialized language models — including Semantic Web standards like RDF and OWL. From there, it is a relatively short hop to the automatic generation of code, documentation, testing, and eventual deployment of useful technical assets, as advocated by ideas of MDA [159] [160] [161] [162] [163] [164] and especially ontology-driven software engineering [165].
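For instance, an agreed typing overlay might be codified as RDF along the following lines, assuming rdflib is installed; the namespace and terms are hypothetical, continuing the component example above.

```python
from rdflib import Graph, Namespace, RDF

ECO = Namespace("http://example.org/ecosystem#")

g = Graph()
g.bind("eco", ECO)

# Codify a filtered, typed viewpoint as RDF triples.
g.add((ECO["order-service"], RDF.type, ECO.Component))
g.add((ECO["audit-log"], RDF.type, ECO.Component))
g.add((ECO["order-service"], ECO.writesTo, ECO["audit-log"]))

print(g.serialize(format="turtle"))
```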


1. A Context-Free Grammar (CFG) is a set of rules used to generate strings in a formal language. It is a formalism in the field of theoretical computer science and linguistics used to describe the syntax or structure of programming languages, natural languages, and other formal languages.