
Complex Systems Design

Contents

Overview

Setting the Context

Complexity theory overview

Complex systems design overview

Key concepts

Service Systems

Abstraction & Emergence

Synergies

Design Principles

Networks

Adaptation

Self-Organization

Evolution

Systems Architecture

Service-oriented architecture

Platform technologies

Modularity

Event-driven architecture

Methods

Design thinking

Overview

Some technologies are simple, like a cup or hammer; some are complicated, like a circuit board or car; but some are truly complex, such as large information systems, supply chain networks, sustainable urban environments, healthcare systems or advanced financial services. These complex engineered systems are defined as consisting of many diverse parts that are highly interconnected and autonomous. This course is a comprehensive introduction to the application of complexity theory to the design and engineering of systems within the context of the 21st century. From the bigger picture of why we should care to key architectural considerations, it brings together many new ideas in systems design to present an integrated paradigm and set of principles for the design of complex systems.

In the first section of the course we will explore some of the major themes that are shaping the design and engineering of systems in the 21st century, such as the rise of sustainability, information technology, the revolution in services and economic globalization; these will provide a backdrop and a recurring set of themes that will be woven into our discussion. This section will also give you an overview of complexity theory and the basic concepts that we will be using throughout the course, such as the model of a system, a framework for understanding complexity and a definition of complex systems. The last section of this module will give an overview of complex systems design, providing you with a clear and concise description of what a complex engineered system is and how this new paradigm in design differs from our traditional approach.

Next we introduce you to the key concepts within this domain. We will talk about services and product-service systems, and about designing synergistic relations in order to integrate diverse components. In this section we will also explore one of the key takeaways from this entire course: the idea of abstraction as a powerful tool for tackling complexity.

In the third module of the course we discuss the primary principles for designing complex systems. Firstly, networks: in these highly interconnected systems, networks are their true geometry, and understanding them, and being able to see the systems we are designing as networks, is one of the key principles we will talk about. Secondly, we will look at adaptive systems and how I.T. is enabling the next generation of technologies that are responsive, adaptive and dynamic, allowing for self-organization and a new form of bottom-up, emergent design. Lastly, in this section we will also cover the key mechanisms of evolution and how they affect the life cycle of the systems we are designing.

With systems architecture we change gears to talk about the more practical mechanics of how to design complex systems, based around a new systems architecture paradigm that has arisen within I.T. over the past few decades, called Service-Oriented Architecture. In this section we will discuss platform technologies and their internal workings, modular systems design, and Event-Driven Architecture, which is particularly well suited to the dynamic nature of the systems we are developing.

Design Methods

Lastly, we present a series of lectures on the design method and process best suited to complex systems design. In this section you will be introduced to design thinking, which represents a repeatable set of stages in the design process for solving complex problems.

A Changing Context

In this lesson, we are going to start the course off by taking a look at the bigger picture, that is, the environment or context within which we design and develop systems in the 21st century. Many factors point to the conclusion that we live in a time of transition and unprecedented change, a change that is at once fundamental, rapid and multidimensional. The overarching paradigm that is often used to understand this change is that of a transition from an industrial age to a post-industrial information age. We inherit a world of technologies and systems of organization that was born out of the industrial revolution, where the newfound knowledge of modern science was applied to engineering and developing the technologies required to support a new form of mass society.

Mass society, unlike its predecessors, is focused upon the mass of people and thus required the development of engineered systems on an unprecedented scale. Key to achieving this were standardization, systematization and economies of scale through centralization, and the world we inherit is dominated by these large centralized systems of organization, such as governments, factories and corporations. Inherent to this centralized model is the dichotomy between producers and consumers of products and services. Systems are designed and developed by a minority of professionals who create finished products that are pushed out to end users. The industrial model is focused upon the provision of tangible objects; these goods are designed as finished products that operate in relative isolation from each other and follow a linear life cycle from production to consumption and disposal.

Lastly, these products have been largely designed and produced for the less than 20 percent of the world's population that forms part of the global middle class. We can say that the industrial model has been largely successful in what it was designed to achieve: the provision of a relatively high material standard of living for the mass of people within advanced industrial economies. We can also say this industrial model is well developed and, due to its success, is being exported and duplicated on a global basis. We can now move financial capital and the expertise to produce skyscrapers, motorways, and airports almost anywhere on the planet, from Shenzhen to Lagos.

But as we move further into the 21st century, a number of factors are working to reveal the inherent limitations of this model. We will now discuss the key drivers that are taking us into the more complex environment within which we have to design systems in the 21st century. Primary among these are the rise of the paradigm of sustainability, the rapid and pervasive growth in information technology, the huge growth in the services economy, and the expansion of economic globalization. Firstly, sustainability. The growing awareness of the need for sustainability can be derived from a very simple equation that is a default position within the industrial model: exponential growth within a linear system that is dependent upon finite resources is unsustainable.

For this system to continue into the future, some part of this equation has to change. Exponential growth in the consumption of resources will almost certainly continue as the majority of the world's population continues to come into the global economy, and the availability of resources is unlikely to change in a positive direction. Achieving sustainability will involve many things, but shifting from a linear to a nonlinear model will be at the heart of it.

Developing the next generation of sustainable technologies is not about making more things that are faster, bigger and better. It requires us to design systems, that is, to create synergistic connections between things, that overcome the dead-end effect of the linear model.

Environmental change is another aspect of sustainability, one that is taking us into a more volatile and uncertain environment, in response to which our traditional approach of developing static systems for stable and predictable environments is inadequate; we need to develop systems that are more agile, flexible and capable of adaptation.

Secondly, information technology. The information revolution is having a deep and radical effect on almost all areas of society and technology, enabling new forms of networked organization as systems become unbundled, moving from being monolithic, structured and static to becoming increasingly distributed, dynamic and heterogeneous.

I.T. places platforms that connect people to people, such as the World Wide Web, or technology to technology, such as the Internet of Things, at the centre of the design challenges going forward. Some of these platforms are true complex systems, consisting of millions or even billions of nodes that are densely interconnected, interacting, and capable of adaptation as they evolve over time. Information technology is also having a strong democratizing effect on design. As the tools for design and production are increasingly placed in the hands of many, the once formal world of professional design, closed off behind patents, is giving way to the emergence of ecosystems for co-creation, where the end user is becoming a new source of innovation.

Thirdly, the services economy. The industrial model is primed for the production and distribution of tangible goods, but the past few decades have seen a huge growth in the services sector of the economy; today services make up approximately two thirds of global economic output and dominate the advanced economies. Services are based upon a very different logic to products. Designing services is not about designing more things; it is about the function or service these things provide, and about interconnecting these services into processes that are centered around the needs of the end user. We are moving into a world less about products and more about real-time dynamic networks of services. Services are all about people, and most of the really hard issues going forward are not so much technical as social in nature. A well-designed, well-engineered, and well-managed service system must be primarily centered and optimized around people, whether we are talking about a patient in a healthcare system, a customer of a business, or a citizen interacting with their government.

This leads us to one of the key themes in the design of complex systems, what we call socio-technical systems. For reasons we will discuss later, the industrial world was not designed for people; it was designed for procedures, standards, and systems. Machines were the icons and idols of the industrial age: rational, stable and predictable, and human beings were expected to just fit into this model. The net result is that we inherit a hugely alienating world that excludes the full engagement and resources of the social and cultural domain. We are only just starting to re-explore the huge potential they offer through such innovations as social networks. The world we will be designing for in the 21st century is more social, more personal and more human. This social layer that is being placed over everything, and how it interacts and interconnects with the technical world of technology, is another important theme we will be revisiting throughout the course.

Lastly, economic globalization. To date, the majority of the products within the industrial capitalist system have been designed for a small minority of the world's population living in advanced Western economies; in these countries, the needs that capitalism has worked so hard to meet have been met. Meanwhile, in other parts of the world, basic human needs such as clean water and sanitation are not met, and in an interconnected world like ours this is an increasingly unstable situation.

The formal model for design and development that is part of industrial capitalism has created a two-tier system in many parts of the developing world: either you are part of the middle class and have access to market products, public utilities, legal rights and so on, or you are not and you are simply left to improvise, as is the case for over one billion people who live in slums around the world.

One of the challenges for design going forward is expanding formal design to all economic levels. This is not about charity; it is about innovation in product design and business models, so as to be able to reach down to the very lowest economic levels and still achieve a viable return on investment, and there is a growing number of businesses proving this possible.

The post-industrial world we live in is like waking up the morning after a party with a hangover: we inherit a world where we live inside massive, inert industrial systems that are surrounded by challenges. The making of more products that are faster, stronger and bigger is becoming increasingly commoditized, whilst a new world of value is opening up in the design of complex systems that connect pre-existing resources to provide users with solutions to these real-world problems.

In summary, then, we can say that the industrial model of systems design has been highly successful in developing a set of industrial technologies, from microwaves to airplanes, that allow us to have a much higher level of material well-being than was previously possible. But today we are presented with new challenges that require us to go beyond its logic. The need to design sustainable cities, healthcare services that enable people to live a better quality of life, and large information systems all present us with the challenge of developing complex systems and require a new paradigm in design.

Complexity Theory Overview

Some of the systems we have to design and develop today are highly complex, such as smart power grids, enterprise information systems and urban transportation networks. They may consist of millions or even billions of components, many different stakeholders with diverse objectives, and dense networks of interconnections and interdependencies that may be unknown and still evolving over time. Truly grasping the complexity of these systems is intimidating to say the least.

The only way to overcome this complexity of the real world that surrounds us is through abstraction, that is to say, conceptual models that capture the underlying features while hiding away the details, and in the world of complex systems design it is complexity theory that offers us these basic abstract conceptual models to work with. Complexity theory has emerged out of a number of different areas over the past few decades, in particular physics, mathematics, computer science and ecology. All of these very different areas have found themselves trying to model, design and manage what we now call complex systems: systems composed of many different parts that are highly interconnected and capable of adaptation.

So let's unpack that a bit. Firstly, complex systems are a type of system. A system is just a set of things that perform some collective function. The human body is a system in that it consists of many individual organs that work together as a functioning entirety. A business is another example: many different individuals and departments functioning as an entirety to collectively produce some set of products or services. And of course there are many other examples of systems, such as ecosystems, hydraulic systems, social systems and so on. Not everything is a system though. If we take a random collection of things, say a hard drive, a light, and a watch, and put them together, this is not a system; it is simply a set of elements, because they are not interconnected and interdependent in performing some collective function.

It is because the elements within a system perform some collective function that systems are said to be greater than the sum of their parts. That is to say, the system as a whole has properties and functionality that none of its constituent elements possess. A plant cell is an example of this: it is composed of many inanimate molecules, but when we put these together we get a cell that has the properties of a living system. It is not any individual element that has the properties of life; it is the particular way these molecules are arranged that gives rise to this emergent property of the living system as an entirety.

A key thing to understand about systems thinking is that it represents an alternative to our modern scientific way of thinking, which is primarily focused on breaking things down into their constituent parts in order to analyze them, and then tries to understand the whole system as simply the sum of these individual elements. This approach works well when we are dealing with sets of things that do not have emergent properties. But because some systems, in fact many systems, have these emergent properties as an entirety, this method, which is also called reductionism, does not always work best, in which case we need to use systems thinking, which places a greater emphasis on understanding systems in their entirety and within the environment that gives them context.

So that is a very quick overview of systems thinking, but to get to complex systems we need to add complexity to this. Systems have a number of properties that make them complex. Firstly, the number of elements within our system: this is quite straightforward, as the more parts there are, the more complex the system will be. Secondly, complexity is a product of the degree of connectivity between these elements: the more interconnected and interdependent they are, the more complex our system will be. Within simple systems there are few connections between elements, and it is relatively easy to understand the direct relations of cause and effect; that is to say, we can draw a direct line between a single cause and a single effect. Thus we call these simple organizations linear systems.

But when we turn up the connectivity within the system, and especially when there is a high number of elements, these cause-and-effect relations become more complex, as there may be multiple causes for any given effect or vice versa. As opposed to our simple linear systems, we call these more complex organizations nonlinear systems, and nonlinearity is a key property of complex systems.
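To get a feel for how quickly interconnectivity compounds, note that the number of possible pairwise connections among n elements grows roughly with the square of n. Here is a minimal sketch of that arithmetic (illustrative only; real complexity also depends on the diversity and autonomy of the elements, as discussed below):

# Number of distinct pairwise connections possible among n elements.
def possible_interactions(n: int) -> int:
    return n * (n - 1) // 2

for n in [10, 100, 1_000, 1_000_000]:
    print(f"{n:>9} elements -> {possible_interactions(n):>15,} possible interactions")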

Next, complexity is also a product of the degree of diversity between elements. When all the elements within our system are very similar or homogeneous, it is much simpler to model, design or manage than a heterogeneous organization composed of many diverse parts, each with its own unique set of properties. Lastly, complexity is a product of the degree of autonomy and adaptation of the elements within the system. When the elements have a very low level of autonomy, the system can be designed, managed and controlled centrally in a top-down fashion. But as we increase the autonomy of the elements, this is no longer possible, as control and organization become distributed and it is increasingly interactions on the local level that come to define how the system develops.

This gives rise to another important feature of complex systems, which is self-organization. When elements have the autonomy to adapt locally, they can self-organize to form global patterns; the process through which this takes place is called emergence. Thus, as opposed to simple linear systems, where order typically comes from some form of top-down, centralized coordination, patterns of order within complex systems emerge from the bottom up. Self-organization will be another recurring theme in our exploration of complex systems design.
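As a toy illustration of this (a sketch of my own, not an example from the course), consider agents arranged on a ring that repeatedly adopt the majority state of their immediate neighbourhood. No one coordinates them from the top down, yet coherent global blocks of order emerge from purely local interactions:

import random

random.seed(1)
N = 60
states = [random.choice([0, 1]) for _ in range(N)]  # random initial disorder

for step in range(30):
    new = []
    for i in range(N):
        left, right = states[i - 1], states[(i + 1) % N]
        total = left + states[i] + right
        new.append(1 if total >= 2 else 0)  # adopt the local majority state
    states = new

# Contiguous runs of 0s and 1s: a global pattern that emerged bottom-up.
print("".join(str(s) for s in states))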

So now we know a bit about complexity theory and have a working definition for what a complex system is. When we hear someone talk about a complex system, we know what they mean: a system composed of multiple, diverse parts that are highly interconnected and capable of adaptation. We could even have a few examples in mind, such as a financial market with lots of different, highly interconnected traders adapting to each other's behavior as they interact through buying and selling; an ecosystem with multiple different species that are all interdependent and adapting to each other and their environment; or a supply chain network with many different producers and distributors interacting and adapting to each other in order to deliver a product. There are of course many more examples of complex systems, but we will wrap up here and move on to talk about the application of complexity theory to design and engineering in the next section.

Complex Systems Design

Complex systems design represents an alternative paradigm to our traditional design engineering approach. The paradigm of complex systems design is focused on the
development of open systems that integrate diverse components through dynamic
networks, with global functionality emerging from the bottom-up as elements interact,
adapt, and evolve over time. This is in contrast to a more traditional approach, which is
focused on the development of discrete, well-defined objects by breaking them down into
individual components, and then coordinating these components within one top-down
global design. Classical examples of these complex engineered systems are the Internet
and cities, but also health care systems, electrical power grids, financial portfolios,
logistics networks, and transportation networks.

Design and engineering are very broad and fundamental human activities, and there are a lot of different definitions for them. At the heart of many of these definitions, however, is design as a process: a process where we conceive an original or improved solution to achieve a desired, optimal end state, then identify the set of factors and constraints within the given environment, and lastly develop a model for the arrangement of a set of elements to achieve this desired end state, that is, the design. Thus, whether we are engineering a bicycle or a new production process for our factory, or designing some healthcare service, we can say design is about the arrangement of elements within a system in order to achieve an optimal global functionality. Within engineering, this optimal functionality is typically talked about and quantified in terms of the system's efficiency. A design paradigm, then, is an overarching approach that consists of a set of basic assumptions and theories about how the world we want to engineer works, coupled with a complementary set of principles and methods with which to approach this design process.

Like many other areas, our modern engineering paradigm inherits its theoretical
foundation from modern science and, in particular, classical physics. A key method
employed by both is that of reductionism. Reductionism holds that a complex system is
nothing but the sum of its parts and that an account of it can be reduced to accounts of
its individual constituents. The reductionist approach results in a vision of the world that is
made up of isolated components that interact in a predetermined linear fashion,
sometimes called the Clockwork Universe: when we put our reductionist goggles on, everything starts to look like little deterministic cogs in a vast machine. Thus, the
reductionist approach applied to engineering results in the decomposing or breaking
down of whole systems into discrete components that can be isolated and modeled using
linear equations. The overall functionality of the system is then achieved by defining an
overarching top-down plan as to how all these components fit back together. In order to
achieve this overall functionality of the system, it is important that the elements can be
constrained, that is to say, they are relatively static and their behavior can be
predetermined and thus controlled. The reductionist approach has worked well in the
engineering of bridges, airplanes, and skyscrapers. These systems are designed to be,
and we want them to be, stable, predictable, and reliable. Reductionism works well when
we are dealing with systems with a low level of interconnectivity and interdependencies,
where the components are static, controllable, and the environment relatively unchanging.

But what happens when this is not the case, when we have to design information
systems where the components are highly interconnected and interdependent, when we
have to build sustainable cities with multiple stakeholders that all have their own agendas
or infrastructure systems that will have to operate in a changing, uncertain future
environment created by climate change? In this case, our basic assumptions or design
paradigm has to shift to one that is more focused on the connections that integrate
diverse components into systems as opposed to our traditional component-based
paradigm, and this is where complex systems design comes in.

Firstly, complex systems are open systems. In traditional design and engineering, we are
dealing with things like chairs, bridges, and buildings. They have well-defined boundaries.
We can fully control all the elements within these boundaries and fully design the system.
This makes them orderly and predictable. With the design of complex systems, what we
are dealing with instead are open systems. Think of electrical power grids, cities or the
internet itself, a massively modular, distributed system. It has no defined boundaries.
People and devices couple and decouple from the system. It is not random, but this world of complex systems is not so orderly; it is, to use the catchy phrase, at the “edge of chaos.” No one is in control, and no one fully understands or can fully design these open systems.

Whereas our traditional approach is very much focused on components, that is to say,
designing things, complex systems design is about connecting these things together, and
networks are the platforms through which we connect things into systems that deliver
functionality. Instead of focusing on the properties of things, that is, how to make them
bigger, faster and better, the primary focus here is on how to design the protocols and
interactions so that diverse components can work together. Think about smart power
grids. What we are designing here is a network through which multiple diverse
components – meters, power generators, and different electrical devices – can
communicate and interoperate through a standardized set of protocols.
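As a hypothetical sketch of this idea (the class and method names below are illustrative inventions, not a real smart-grid API), a standardized protocol can be expressed as a shared interface that diverse components implement, so the network can coordinate them without knowing their internal details:

from abc import ABC, abstractmethod

class GridDevice(ABC):
    """The standardized protocol every component must speak."""
    @abstractmethod
    def power_draw_kw(self) -> float: ...

class SmartMeter(GridDevice):
    def power_draw_kw(self) -> float:
        return 0.01  # a small consumer

class WindTurbine(GridDevice):
    def power_draw_kw(self) -> float:
        return -1500.0  # negative draw: a producer

class EVCharger(GridDevice):
    def power_draw_kw(self) -> float:
        return 7.2

def net_load(devices: list[GridDevice]) -> float:
    # The grid never needs to know what each device *is*,
    # only that it speaks the common protocol.
    return sum(d.power_draw_kw() for d in devices)

print(net_load([SmartMeter(), WindTurbine(), EVCharger()]))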

What we are used to designing are monolithic technologies. They are coordinated by one
master plan that is imposed on all elements in a top-down fashion. One monolithic design
constrains all the components within the system. This is how we make buildings, cars,
and airplanes, and it works well until the components of the system are autonomous.
When we try to apply this approach to designing something like whole cities, the results can be disastrous: massive waste of resources and hugely alienating, disengaging environments. In these complex systems, the real capacity to act, to deploy capability,
lies on the local level. Think of social networks. A social network without its users is
essentially nothing. It is, at the end of the day, the users who really create the value of the
system. Trying to control the network will likely end in overburdening it with system-level
constraints. Users will become disengaged and simply opt out. In complex engineered
systems, global functionality emerges from the self-organization of elements on the local
level. Therefore, we do not seek to design the system in all its details, but focus instead
on configuring the context and the local interactions that may lead to effective global
coordination.

In classical engineering, the components of the system are specifically held static so as to coordinate the system as a whole. This requires prediction of the environment in which the system will operate, the conditions it will face, and the tasks it will be required to perform. In complex engineered systems, the components have a high degree of autonomy, whether we are talking about websites on the internet, or where people choose to build their houses or invest their money. The elements are adapting to their local environment, and thus the state of the system is a product of the evolutionary process that results from this. This capacity for adaptation and evolution allows these systems to operate in more complex and volatile environments, where complete knowledge of the system and its environment is impossible.

Service Systems

A service system is a coherent combination of people, processes, and technology that delivers some value to an end user. Service systems are a type of complex sociotechnical system that is designed to deliver some functionality within a particular context to a particular end user by aggregating different technologies and people through procedures. Examples of service systems are all forms of education, entertainment, transportation systems, and telecommunications services, among many others.

The industrial model of design and engineering was, or is, very much focused on the production of things. Thus, we live in a world of isolated things, which we call products: cars, watches, tables and washing machines. They are conceived of, designed, developed and operated in relative isolation from each other. But over the past few decades there has been a quiet but fundamental revolution in services, as they have come to dominate post-industrial economies.

Services are not just another sector of the economy. They represent a whole new paradigm in how we think about the systems we design, one that shifts the focus from isolated technologies to integrated systems. Within the services paradigm, the post-industrial world is saturated with products, and people no longer want more things. They just want the functionality of these things, that is, their service. So I don’t want four credit cards, three debit cards, two bank accounts and a little pile of bank statements sent to me every month. I want a financial service that is there when I need to pay for something and not when I don’t. I don’t want a piece of software that I have to download, install, update, and maintain. I want a software service that is there when I need it and not when I don’t. This is the world of services, and it is focused on pure functionality.

Services are essentially the product of connecting many products together, that is, integrating products into systems of services, what we call product-service systems, or more simply service systems. Service systems can be characterized by the value that results from the interaction between their components. A car-sharing service is a good example of this: by connecting people, technology, and information, we are able to deliver to the end user close to nothing but the pure functionality, or service, of personal mobility. Another good example of a real-world product-service system is Rolls-Royce, which produces jet engines but does not sell them to their end user. They provide them as a service through what they call their “Power by the Hour” program. The airline gets the functionality of the engine as a service, but ownership and maintenance remain in the hands of the producer. The highly successful website Airbnb is another example of a product-service system. They provide a common interface and platform for integrating many different providers of accommodation to deliver a unified service to the end user.

This concept of a service is very important in the design of complex systems in that it helps to shift the focus to what is of real importance: the relations between components, the whole system and, most importantly, the functionality of the system. Because at the end of the day, we don’t really want things, components or even systems. What we really want is functionality, pure functionality, and that is what we call a service. By focusing on this end service, we can work backward to ask what we need, and what we need to connect, in order to deliver this functionality. Most of these things are already out there; we just need to design new configurations, new frameworks for integrating them. Our example of Airbnb is a good one. The components of their system, that is, the people who actually provide the accommodation, were already there. Airbnb just created a new platform and interface for connecting these things to deliver an integrated service.

This new paradigm of service systems brings with it a new logic that is very different from our traditional product-centric one. So let’s take a look at some of the key characteristics of service systems. Firstly, services are intangible. They cannot be touched, gripped, looked at, or smelled. Tangibility is an important factor of industrial goods, upon which much of our economics is predicated: they can be easily quantified, priced, bought, sold, and owned. Many services only have value-in-use, meaning the value of the service is often only released when the product is used. Thus, the emphasis shifts from ownership to access. A consequence of this is that defining and measuring the value delivered becomes more complex, with the design of new, more sophisticated business models moving to the forefront. This immaterial nature of services also means the shift towards services represents a powerful way of doing more with less, dematerializing our economies, and it is often presented as an important method of achieving sustainability.

Services are focused on the end user. The industrial paradigm is centered around objects, products and the properties of these things, and thus the end user is shifted to the periphery of this model, mainly having significance as an owner of these things. When we begin to focus on the system’s functionality and services, we turn this the other way around: the solution to the end user’s problem now becomes the center of this cosmos. When we are selling solutions instead of products, we need a deep understanding of the end user’s particular needs and context in order to deliver these solutions properly.

Whereas products exist in particular locations within space and are often sold as one-offs, services are more about time. Integrated solutions involve long-term partnerships between customers and producers. For example, Nike sells running shoes. Buying a pair of sports shoes is a one-off purchase of a product, but by creating Nike Plus, a digital coaching service that connects up to a web platform where users can share their running performance, they have managed to turn the product into an instance of a prolonged service-delivery relationship with their customers, who will likely come back to buy shoes in the future simply to continue the service.

In service environments, the customer provides inputs to the service process, and often
the customer is present during the service and plays an active role. Hence, the value is
co-created by the customer and producer. Coupled to this is the fact that end-users
desire integrated services, and as services become more technologically sophisticated
with firms focusing more on their core capabilities, networks of firms have to co-operate
over prolonged periods of time to ensure the design and delivery of these services.

As information technology networks our world, the forefront of design challenges becomes the designing of these complex service systems that are able to integrate many diverse components into real-time dynamic networks that wrap around the end user to simplify the complex and deliver a seamless service. The services paradigm changes our conception of what we are designing when we talk about the design of complex systems: a change from developing a thing to developing a function or service that provides a solution to a particular end user. We can then ask what resources or components one needs to integrate into a system in order to deliver this functionality.

Design Abstraction

Abstraction in design is the process of removing successive levels of detail from a representation in order to capture only the essential features of a system. Through the process of abstraction, a designer can hide the irrelevant information about a system in order to reduce complexity and focus only on what is of essential interest. Abstraction within a design helps to define what is of general relevance to the whole system and what is specific to particular applications of it, thus making it possible to design multi-level systems where smaller, more specific subsystems are nested within larger, more generic frameworks.
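In software terms, this nesting of the specific within the generic is commonly expressed by programming against an interface. Below is a minimal sketch, with invented names, of a generic framework whose essential features are fixed at the system level while the detail is hidden inside a specific subsystem:

from abc import ABC, abstractmethod

class Storage(ABC):
    """The abstraction: only the essential features, no detail."""
    @abstractmethod
    def save(self, key: str, value: str) -> None: ...
    @abstractmethod
    def load(self, key: str) -> str: ...

class InMemoryStorage(Storage):
    """A specific subsystem nested within the generic framework."""
    def __init__(self) -> None:
        self._data: dict[str, str] = {}
    def save(self, key: str, value: str) -> None:
        self._data[key] = value
    def load(self, key: str) -> str:
        return self._data[key]

def archive_report(store: Storage) -> None:
    # Written against the abstraction: all storage detail is hidden.
    store.save("report", "essential features only")
    print(store.load("report"))

archive_report(InMemoryStorage())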

In complex systems, there are always two fundamentally different levels of the system: the micro and the macro, or what can also be called the local and the global. This is in contrast to simple linear systems, where it is possible to reduce the whole system to one level. It is due, firstly, to the fact that complex systems are composed of many parts. We may be talking millions, as in the number of inhabitants of a city, or billions, as in the number of devices connected to the Internet. A number like a million is a highly abstract thing; it is hard to relate a number like a billion to our everyday physical experience, where we really deal with numbers like 5, 10 or possibly 100. Thus, there is a vast gap in scale between any individual node on the local level and the system as an entirety.

Secondly, because the components have some degree of autonomy, they are adapting to their local environment. These components are often very simple and do not respond to information on the global level. Thus, one pattern of order can and often does develop on the micro level, and a second pattern emerges, or is imposed, on the macro scale as a result of trying to design the system to have some form of global coordination. An example of this might be the official use of two different languages in many parts of the world, where a local language has emerged organically and English has been placed on top of this so as to make the system more interoperable on the global level. Lastly, because of the high degree of connectivity within the system, we have many interactions. These interactions inevitably lead to elements synchronizing, which gives rise to macro-scale patterns. This is called emergence. Traffic jams are a good example of emergence, as are bank runs. The net result of all this is that we have two qualitatively different levels within complex systems, meaning they cannot be reduced to a single level. This makes designing and managing these systems much more difficult.

We need, firstly, to be aware of this multidimensional nature of complex systems, and aware that if we try to reduce them to simple mono-dimensional systems, there will be unintended and unfortunate consequences. Thus, we need to learn to design for this multi-dimensional nature of complex systems. In order to do this, we have to be able to structure and model the system we are designing according to its different levels of abstraction. But what is abstraction? Abstraction is a powerful tool used in all areas of math, science, and engineering. Maybe the easiest way to understand it is as a process of removing successive layers of detail from our representation of the system in order to capture its essential features, what we might call its global features; they are common to all the components and are thus on the systems level.

This is like zooming in on a satellite map of a city. Each level will have a certain degree of detail, creating a certain type of structure that will feed into defining the pattern on the level below. Thus, we see how complex engineered systems are what we call systems of systems. But unlike mechanistic systems, where each level is just a scaled-up version of the components below it, what mathematicians call scale invariance, the levels of complex systems can be understood as broadly scale invariant while also having their own internal variation and dynamics that cannot be fully abstracted away. This is characteristic of fractal structures, geometric structures that have this scale-invariant property, examples being the arteries in the human body, the structure of snowflakes, the formation of rugged mountains, and sea coastlines. If we take one of these, such as a coastline, and zoom in on it, the overall structure on each level will be the same, but it will not be exactly the same; there will be variation and unique differences on each level.

Imagine one is designing a structural adjustment program for the Mongolian economy. No amount of data crunching and analysis from our IMF headquarters will tell us how things will really play out on the ground. Yes, our abstract economic models will tell us how the system works on the global, generic level, but there is another level to the Mongolian economy that represents a particular social, cultural and geographical mix, one that is unique to this particular instance of the global economy. If we want to design this program properly, we need to model and understand the different levels of abstraction from the local to the global, how the interaction between the generic and the specific plays out on each level, and how each level feeds into defining the levels below and above it.

One thing to take away from this, with respect to our design methodology, is the importance of ethnographic studies and real end-user experience. Abstract models are one thing, but in these large complex systems they will go through phase transitions as they are implemented on the ground. If we design a military intervention in Iraq without understanding the local-level context within which it takes place, then this phase transition can go in any direction. A real experience of the local level is crucial if we hope to have a strong influence over this process. With the huge scale of some of these complex systems, things can go very wrong and get very messy. The so-called “Big Dig”, a mega-project to reroute a highway through the center of Boston originally estimated at 2.8 billion dollars, may be cited as an example of this: it was completed at a final cost of 14.6 billion dollars after being plagued by numerous implementation problems.

Design Synergies

A synergy can be defined as the interaction or cooperation of two or more organizations, substances, or other agents to produce a combined effect greater than the sum of their separate effects. It is the creation of a whole that is greater than the simple sum of its parts. A synergistic design approach is one that is focused on the interaction between the parts within the organization in order to identify and develop synergies.

Complex engineered systems are composed of many diverse components; we may be talking about components that were never designed to inter-operate. Take the Internet as an example: it is called the “network of networks”, and many of those original local area networks that were built for hospitals, businesses or factories were designed with their own internal logic. When they were built, no one thought about how one day we might be connecting them all together. Thus, today we have the huge challenge of opening all these information systems up and exposing their data and functionality through common interfaces. Designing these heterogeneous composite systems is a bit like being a DJ, taking a song by the Beatles and mixing it with Nirvana and Fatboy Slim. We have to somehow make them work together seamlessly for the end user, overcome all this diversity, difference and general messiness, and do this by designing the relations between the components. Relations can be of two fundamentally different types: synergistic, that is, constructive; or destructive, what we might call relations of interference. So let’s take an example of both. Destructive relations represent the interference of two or more components within the system, such as at a crossroads intersection. For traffic on each road, the other road is essentially an interference, slowing it down and stopping it on its way.

This is what we call a zero-sum game: when the traffic on one road gets what it wants, the traffic on the other loses, and vice versa. There are many examples of zero-sum games in the systems we engineer, from noise pollution to overpopulated cities. We are always trying to avoid the development of these zero-sum games and the relations of interference that lead to them. On the contrary, constructive relations are synergistic; that is, two or more components interact and the net result is beneficial to both parties. This results in what we might call a positive-sum game, meaning each gets more out of the interaction than they put in; for example, social networks: the more people that join, the more valuable the network is for any individual user. Many forms of economic trade are also positive-sum. What we are trying to do, then, in designing these networks is to make positive-sum games the attractor state, that is, the state towards which elements within the system will naturally gravitate. Of course, this is easier said than done.
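The distinction can be made concrete with toy payoff matrices (the numbers here are invented purely for illustration). Each entry gives the payoffs to components A and B; in the zero-sum case the joint payoff is always zero, while in the positive-sum case interaction creates value that neither party put in:

# The crossroads: one road's gain is the other's loss.
zero_sum = {
    ("go", "wait"): (1, -1),
    ("wait", "go"): (-1, 1),
}

# The social network: joining benefits both parties.
positive_sum = {
    ("join", "join"): (2, 2),
    ("join", "stay out"): (0, 0),
}

for name, game in [("zero-sum", zero_sum), ("positive-sum", positive_sum)]:
    joint = [a + b for (a, b) in game.values()]
    print(name, "joint payoffs:", joint)  # zero-sum always totals 0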

It requires a significant investment in the system’s infrastructure, that is, the relations
through which the elements interact. To illustrate this, let’s think back to our example of
the road crossing. How could we avoid a zero-sum game here? Well, engineers have
already figured this one out by creating a flyover with ramps connecting the two roads.
We now have a positive-sum game, but it took intelligent design and a significant
investment of resources. This was not the default position. Alternative technologies might
be another example. Whereas many of our traditional technologies create a zero-sum
game between human needs and ecological needs, well-designed alternative
technologies try to change this by harnessing synergistic relations. Another factor these
examples might illustrate is the importance of nonlinear or parallel systems in creating
synergistic relations. In simple linear systems, everything requires the same input and
produces the same output. The result is a linear process of inputting resources from the
environment and outputting waste back to the environment. Of course, we are all familiar
with this model as it represents the fundamentals of industrial economies.

In order to create synergies, there needs to be some diversity in the system, that is,
processes taking place on different, parallel levels. In this way, different components in
the system process different resources, and it may be possible to connect what is waste
for one component to what is an input resource for another, thus turning what is often a
zero-sum game of competition over one resource into a positive-sum game where the more one consumes, the more the other can also consume. Of course, ecosystems are classical
examples of this, and being able to model and develop these synergistic cycles both on
the micro level and on the macro level within our industrial systems is key to achieving
sustainability. In the design of these complex systems, the huge heterogeneity and
diversity of components is a key challenge. We may be dealing with widely disparate and
qualitatively different socio-technical components. Instead of working against it by trying
to dumb down the variation within the system, we can harness it to create multi-level
systems by designing synergistic relations. Although this is quite an abstract concept
and, as such, easier said than done, it should still be a general principle in our complex
systems design toolbox.

Networked Systems Design

Complex systems are by many definitions highly interconnected, examples being social
networks, financial networks, and transportation networks. In these highly interconnected
systems, it is increasingly the connections that define the system as opposed to the
properties of their constituent components. For example, think of an expensive sports
car. Out on the highway, it is king, doing 0-60 in under three seconds and up to 250
kilometers an hour. But put this car in urban traffic and it will be gridlocked like any other
car. No matter how great the properties of the car, it will only be going as fast as the
transportation network allows it. This should demonstrate that in complex engineered
systems it is the structure and dynamics of the network that really matter. It is not about
being bigger, faster or stronger. It is about access, and access is defined by where you lie
in the network and the structure of that network.

Think of the air transportation system. What matters is not so much the static properties of your location in space and how far away your destination is, but where you are located in the network. If you are beside a major hub, it can be quicker and easier to travel to another major hub on the other side of the planet than to travel between two poorly connected locations a fraction of the distance apart.

Irrespective of whether we explicitly call them networks or just systems, networks are the true geometry of complex systems, and thus it is very important to think about designing them from this perspective of access, connectivity, and network structure. In order to do this, we first need to understand a bit about the nature of networks, and network theory is the area of math and science that provides us with the models for analyzing them. So let’s take a look at some of the key features of networks and how they affect the system as a whole. Probably the most important feature of a network is its degree of connectivity, that is, how connected the whole system is. Designing for a densely populated urban environment like Hong Kong will be very different from designing for a dispersed city like Los Angeles. In highly interconnected systems, the dense interconnections can require much greater layering, the components can be much more specialized, and there may be a much higher level of dependency. As a result, failures can quickly propagate: a small security scare in one airport, for example, can result in delays across large areas of a nation’s air transportation system.
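Network theory gives us simple measures for this degree of connectivity. The density of a network, for instance, is the ratio of the connections that exist to the connections that could exist; here is a minimal sketch on an invented four-node graph:

# An invented toy network: 4 nodes, 5 of the 6 possible links present.
edges = {(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)}
n = 4
possible = n * (n - 1) // 2   # maximum number of links among n nodes
density = len(edges) / possible
print(f"density = {len(edges)}/{possible} = {density:.2f}")  # 0 = sparse, 1 = fully connected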

In these large, highly interconnected systems, we do not always know the dependencies.
No one has complete knowledge of all the inter-linkages that regulate complex systems
like large urban centers or our global supply chains. Thus, our aim should not be to
design these systems to be perfect, 100% fault tolerant, as this is not realistic. Instead,
they need to be engineered so as to be robust to failure. Again, the internet is a good
example, as it is called a “best effort network.” This means it tries its best, but if
something goes wrong, then it is no big problem. It just drops your packet and tries again.
It happens all the time, but the internet still works. The occurrence of failure should be
designed into these systems and not out of them in order to achieve robustness.

Another key consideration in the design of these networked systems is their degree of
centralization versus decentralization, as this is a defining factor in the structure and
makeup of networks. In centralized networks, we have a node or small set of nodes that
have a strong influence on the system, and the network will be largely defined by the
properties of these primary nodes. These centralized networks can leverage economies of
scale and it is possible to have a high degree of control over the system through one
point of access. Due to this, centralized networks can be very efficient in the short run, as
well as faster and easier to manage.

But they are also more vulnerable to strategic attacks, and often less robust and sustainable due to their dependency on a few central nodes. They can also result in a high degree of inequality and problems in load balancing, due to the occurrence of highly centralized peak demands for resources, with rush-hour traffic jams and exaggerated property prices in the center of cities being examples of this. The heavy use of economies of scale engendered in the industrial system of organization means many of the networks that make up advanced economies are highly centralized, including our global financial system, centralized around a few key nodes, many national transportation systems, and logistics networks, which are designed as centralized hub-and-spoke structures.

Decentralized networks, in contrast, are without centralized nodes. Responsibility, control, and resources lie on the local level and are dispersed amongst a large percentage of the nodes. Examples include peer-to-peer file sharing, sustainable agriculture systems, car-sharing services, and direct democracy. Decentralized networks typically require greater user engagement, as they cannot depend on centralized batch processing and economies of scale. The nodes in the network are often more self-sufficient and less specialized, and thus it is easier to interchange and replace any node with any other, making these networks less susceptible to attack and more robust to failure. They also have fewer dependencies and are typically more sustainable in the long run.
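This difference in robustness can be illustrated with a toy comparison (purely illustrative, not drawn from the course) between a centralized star topology and a decentralized ring. Removing the hub shatters the star, while removing any node from the ring leaves the rest fully connected:

def reachable(adj, start, removed):
    # Simple traversal that ignores the removed node.
    seen, stack = {start}, [start]
    while stack:
        for nxt in adj[stack.pop()]:
            if nxt != removed and nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

n = 6
star = {0: set(range(1, n)), **{i: {0} for i in range(1, n)}}  # centralized hub
ring = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}       # decentralized

for name, adj in [("star, hub removed", star), ("ring, one node removed", ring)]:
    removed = 0
    survivors = [i for i in range(n) if i != removed]
    group = reachable(adj, survivors[0], removed)
    print(f"{name}: {len(group)} of {len(survivors)} nodes still connected")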

Adaptive Technologies

Complex adaptive technologies refer to networks of technologies that can adapt and respond to each other. Classical examples of this are swarm robotics, the Internet routing system and urban transport networks. With the rise of the Internet of Things, networks of adaptive technologies are set to become much more prevalent.

Adaptation is the capacity of a system to alter its state in response to some event within its environment. This capacity for adaptation is something we associate more often with biological systems than with the technologies we design. The industrial world we have engineered is in many ways a relatively static one. We produce things like electrical power grids, buildings, and chairs, and then they sit there, specifically designed not to change. Every day that one walks by the same advertisement on the street, it presents the same information to thousands of people, but it will only be of any relevance to a very small percentage of them, and because it is static, they will only take note of it the first time before tuning it out as simply background noise. Now imagine if that advertisement changed every day, that is to say, it was dynamic. Instead of a poster, we put in a screen that could be updated. It would be more relevant, more functional. Now let’s go a step farther. Imagine if this screen could receive information about the profiles and preferences of the users in its vicinity and dynamically deliver content relevant to their interests. This is the world of complex adaptive engineered systems and, as information technology provides us with the tools for building smarter technologies, it is increasingly the world we have to design for.

To understand this transition from static to adaptive systems, let’s take the history of the Web as an example. Web 1.0 was a very static system where web developers hard-coded web pages. When you visited a site, the server just gave you the same page that had been written possibly two or three years earlier, with no changes. The Web 2.0 that we all know and love leveraged new server-side scripting technologies to get information into and out of databases, dynamically updating web pages and making them interactive and able to change over time. The emerging Web 3.0 uses semantic technologies and social networking to adapt content to your specific profile and interests, thus making it not just dynamic but also responsive to context. Outside of the Web, the massive cost reduction in integrated circuits is leading to sensors and actuators being placed in many devices and objects. As packages within supply chains, cars in traffic and electrical power grids become smarter, they can respond to events within their local environment through real-time mesh networks. But they can also feed data into large centralized systems for analysis, allowing for greater optimization through dynamic load balancing, as things like washing machines and street lights begin to have the capacity to adapt their power demands to the current load on the system.

There are essentially two levels of these complex adaptive engineered systems that we
need to consider: the micro and the macro. On the micro level, we need elements with
some form of control system. A control system is a mechanism for taking in information,
processing it according to some set of instructions, and generating a response that alters
the state of the component. Of course, all living creatures have this, from the simplest
single-celled organisms to the most complex, the human brain. However, we are
increasingly using what we call cyber-physical systems to enable all kinds of technology
to have this adaptive functionality, as they become part of networks of technologies that
can communicate and respond to the changes in state of other technologies in real time,
as is the case in automated production lines, airplanes, and mass transit systems.
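
To make this concrete, here is a minimal sketch in Python of the sense-process-respond
control loop just described. It uses a hypothetical street light as the adaptive component;
the class name and threshold are illustrative assumptions, not drawn from any particular
system.

# A toy cyber-physical component: take in information, process it
# according to a simple instruction set, and alter the component's state.
class StreetLight:
    def __init__(self, threshold_lux=50):
        self.threshold_lux = threshold_lux  # the rule: switch on when dark
        self.is_on = False

    def sense(self, ambient_lux):
        # Take in information from the environment.
        return ambient_lux < self.threshold_lux

    def respond(self, ambient_lux):
        # Process the reading and alter the component's state.
        self.is_on = self.sense(ambient_lux)
        return self.is_on

light = StreetLight()
for lux in [800, 400, 30, 10, 200]:  # a day's worth of readings
    print(lux, "->", "on" if light.respond(lux) else "off")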

On the macro scale, when we are designing these adaptive systems, we can no longer
rigidly control the system and determine its functionality in the way we can when
designing, say, a bridge, as the end result is going to be more organic, like an ecosystem
of products, devices, and people interacting and adapting within networks rather than
the rigid mechanical systems we are used to. Therefore, if we take away this key feature
of control that is central to our traditional conception of design, where we see
constraining the autonomy of the components as a prerequisite, how can we
then engineer these adaptive systems at all? The answer is to work with the innate
features of adaptive systems, not against them. Adaptive systems by definition adapt to
their environment. If we place a plant in a new pot or a child in a new school they will
adapt to that particular environment. This is part of the dynamics of what biologists and
ecologists call homeostasis. They do this through the process of synchronizing their state
with that of other elements that they are exposed to, and this is the key to designing
adaptive self-organizing systems. We don’t try to directly alter the state of the
components. We indirectly influence them by designing the connections within the
environment that the system operates in.

If we want people to be more environmentally conscious, we don't tell them to do this or
not do that. We connect them with the natural environment and expose them to the
consequences of their actions, both negative and beneficial. When a consumer picks up
an anonymous product in a city supermarket, a few thousand kilometers from where it
was produced, they are totally disconnected from and disassociated with it. But when we
tell them a story about the product's life-cycle and make them feel engaged and a part of
that process, they are more likely to take ownership and responsibility for their actions.
We as designers of the product or service have not tried to manipulate them. We have
simply connected them with the reality of their environment and left it up to them as to
how they adapt and respond. Thus, designing these adaptive systems is about creating
open platforms that connect components. If we want to design an urban environment that
offers a better quality of life to its citizens, we need to build open spaces where people
can interact and self-organize to develop the socio-cultural fabric.

Self-Organization

Self-organization in design refers to the process of co-creation in the development of a
product or service. Instead of a professional designer producing a finished product and
pushing it out to the end user, self-organizational design involves a two-way interplay
between the designer and the end user, where products are designed to be redesigned
by the user, thus enabling an evolutionary process of development. Implicit in our
traditional design engineering paradigm is the assumption that we are dealing with a
well-bounded system over which we have almost complete control and knowledge. This
is true when we are developing most physical systems such as chairs, bridges, cars and
so on.


But part of the definition for complex systems is that the components of the system have
some degree of autonomy, which means, as designers, we can only have a partial
influence over the system as a whole. The degree to which we can define the system will
depend on the degree of autonomy of the elements in, say, a transportation network. We
can have a relatively high level of control by constraining the actions of the cars. But in
designing, say, a social network, people value their autonomy highly. No one is in control
of the networks that are spawned out of Facebook or Twitter. They are created out of the
self-organization of the users on the local level. What we are describing then is essentially
a spectrum on the one side of which we can have top-down control, allowing us to design
a well-defined system that will thus be relatively orderly.

On the other end, we have what we call self-organizing systems that are less designed
but are instead created from the actions and interactions of their users, thus they will
likely be less orderly. Our traditional design engineering approach is of course on the left-
hand side of this spectrum. It is a linear model based on the assumption that the end-
user is a passive recipient or consumer. Within this industrial model, end-user variation
and engagement are dumbed down so as to fit in with pre-designed procedures and
systems of mass production. The obvious result of this is disengagement, alienation, and
a world where end-users are constrained by systems created by a few designers
and engineers in large centralized organizations. This is a model we should all be very
familiar with. Although user-generated systems have always been there on the fringes of
the mainstream, the rise of I.T. and the internet has put powerful tools for self-organization
and collaboration in the hands of many. Today many of the most innovative, dynamic and
fast growing businesses and services are harnessing this by creating platforms for
technologies and people to interact, adapt and self-organize.

Instead of the traditional divide between producer and consumer of a technology or
service, co-creation harnesses the relatively untapped and potentially vast resource of
end-user engagement. Although finished products are sometimes what people want, it is
also true that when we give people the capacity to be part of the design and production
process they feel more engaged, are more likely to value the end product, and can be a
valuable source of innovation amongst other things.

The question then becomes: how can we design these platforms of co-creation to be productive if
we can’t actually control them? What happens if our company crowdsources the designs
for the production of our next pair of sports shoes? How do we know they are going to be
what we want? Part of giving over control is accepting the fact that one person will use a
social network for saving the planet, the next for avoiding work. What we can do though
is create attractor states, that is, when we build the platform we set desired default
positions that users can change but will be attractors for most as they are the default. For
example, when building a video sharing site, if we want the site to be open and sharing,
we could set the default copyrights for an uploaded video to creative commons. They can
change it, but this is an example of an attractor state. Similarly, when we are designing an
urban transport system, if we build lots of greenways, cycle paths and pedestrian streets,
people still have a choice as to what mode of transport they use. But, walking and cycling
increasingly become attractors in the system towards which people will naturally gravitate
as they become the path of least resistance.
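
As a rough illustration of such attractor states, the Python sketch below shows a
hypothetical video-upload function whose default license is creative commons. Users
retain full autonomy to change it, but the default acts as the attractor; all names here
are invented for illustration.

from dataclasses import dataclass

@dataclass
class Video:
    title: str
    license: str = "creative-commons"  # the attractor: open by default

def upload(title, license=None):
    # The platform only sets the default position; the user stays free.
    return Video(title, license or "creative-commons")

print(upload("my clip"))  # most users gravitate to the default
print(upload("my clip", license="all-rights-reserved"))  # but can change it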

Thus, we can see how, for every choice we face, there is a default: the cheapest,
nearest or easiest option. Leveraging these default positions is a powerful method of
design. Creating subsidies for renewable energy or open workspaces for collaboration
makes these desired outcomes attractors. If, at one end of our spectrum, we have our
traditional fully-controlled systems, and in the middle co-creation platforms, then at the
other end we have peer networks that are truly self-organizing systems, examples being
Bitcoin, mesh networks, and swarm robotics. There is very little in the way of a centralized
platform here. Elements in the system are almost fully autonomous. Each node
contributes to providing the system’s infrastructure and maintaining the system. Thus,
they require more engagement and responsibility from each node in the network but can
result in exceptionally robust systems. In complex adaptive systems, there is always a
dynamic between agents and structure, that is, between the elements in the system and
the system itself. Understanding this dynamic and the trade-off between being able to
control the system vs. harnessing the uncontrollable resources of the users is a key
consideration.

When we impose a top-down formal design on the system, we will create barriers to entry
and thus exclude elements either intentionally or unintentionally. They will then inevitably
self-organize on the local level to create a two-tier parallel system, one formal the other
informal. Shanty towns and favelas are good examples of this. Because the requirements
to enter the formal urban system were too high for the migrants, they created an informal
self-organized system. This is an undesirable state that will create chronic systems
integration problems.

By understanding the relationship between the formal and informal, the centralized design
and the distributed self-organization, we can design a multi-layered, co-creation platform
to integrate the two, with a graduated transition from the informal level, with its low
constraints and requirements, to the more highly constrained formal level. Top-down formal systems can
be very austere, abstract and impersonal. They are designed to be universal, one size fits
all. Bureaucratic systems are paradigms of this. They are designed to be impersonal and
standardized for all. McDonald’s is often cited as an icon of this paradigm, with the exact
same procedures and processes for making a Big Mac whether in Bahrain or Santiago.
But there is a good reason why many large corporations who are icons of globalization
have regional offices and localized offerings.

If you want to really engage people, you have to engage them on their own terms, and co-
creation is one of the most effective ways of achieving this. When it came to translating
Facebook into French, all they had to do was open it up for the users to translate, and
passionate users had it completed within a few days. Thus, an impersonal system was
adapted to local needs and this is what co-creation is all about – creating synergies
between producers and consumers, between systems and their constituent agents.
Harnessing the vast resources of the end-user through co-creation is one of the great
sources of untapped potential that we are only just beginning to discover in post-
industrial economies. It requires us to be aware of and better understand the dynamics of
self-organization and, just as importantly, the interplay between the agents within the
system (that is, the users) and the structure of the system (the producer of the product or service).

Systems Lifecycle

System life-cycle, in systems engineering and design, is a view of a system or proposed
system that addresses all phases of its existence to include system conception, design
and development, production and/or construction, distribution, operation, maintenance
and support, retirement and phase-out. Complex systems are inherently dynamic
systems, meaning they are continuously changing over time. Biological systems are a
good example of this. The only time they are not engaged in some process of
development is when they cease to exist. This is in strong contrast to linear systems that
inherently gravitate towards some equilibrium state where they will remain static unless
perturbed by some external force. Designing for these dynamic complex systems requires
a holistic approach that looks across the system's life-cycle.

Linear systems theory is the backbone of the modern science and engineering that has
created our industrial economies. Within this paradigm, technologies are designed to
operate at some kind of normal static equilibrium within a well-known and predefined
environment. Their life-cycle is a linear one where the system is created, put into its
operating environment where it is designed to function within some normal set of
parameters, at a stable and static equilibrium. It is, most importantly, designed to resist
change and to maintain operations within these parameters for as long as possible,
before being disposed of. This model works well for simple linear systems like bridges
where, when we clamp our girders together, we want our bolts to stay there. We don't
want them to answer back, to change or grow in any way.

In complex systems, we are not dealing with bolts. We are dealing with components that
have some degree of autonomy and capacity for adaptation. People, businesses, stock
prices, web applications, smartphones – these things have their own internal logic
through which they adapt to changes within their local environment. Sometimes this logic
is very simple, such as some financial algorithm that will automatically sell or buy a stock
at a certain price. Or sometimes it is very complex, such as why a person buys a
particular item of fashion over another. The net result though is that the system can
change and is not determined to follow a linear life-cycle from cradle to grave. It can
learn, grow and adapt in response to internal and external conditions in order to renew
itself, that is, to become more or less sustainable and thus alter its life-cycle.

Most businesses don’t last very long, less than a few decades. There are many reasons
why this might be, but one model to capture the underlying dynamic of how a business
evolves is called the explore and exploit algorithm. In its early years, a business may
explore many different products or services, being able to pivot, remain flexible and
diversified. But as the business develops, a few products will prove most lucrative, and
the business will likely scale them up, becoming centered around them and developing
a more formal, structured management system as it leverages economies of scale. The
result of exploiting a few lucrative products will be that the company becomes more
profitable, but it will also have self-organized into a more critical state. Some small
change in the market that moves against the business's core product
might destroy the enterprise.

We can then take an example of a business that has designed an alternative course of
development for itself. Google has, since its inception, generated over 90% of its revenue
from its one core service of web search. So why does it develop and maintain a whole
suite of products that generate relatively little revenue? One theory is that Google knows
that in a rapidly evolving market it needs to be where the next great thing is going to
happen, whether it is in social media or video sharing. By creating this diversity, by
continuing to invest resources gained from exploiting its core service into exploring and
generating variety, it is able to better evolve and ensure its sustainability.
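
One standard way to formalize this explore-and-exploit dynamic is an epsilon-greedy
strategy, sketched below in Python. The product names and payoff figures are invented,
and the strategy is a generic stand-in rather than a description of how any real firm
allocates its resources.

import random

payoffs = {"search": 1.0, "video": 0.3, "social": 0.5}  # hypothetical returns
estimates = {name: 0.0 for name in payoffs}
counts = {name: 0 for name in payoffs}
epsilon = 0.1  # the share of effort reinvested in exploration

for _ in range(1000):
    if random.random() < epsilon:
        choice = random.choice(list(payoffs))       # explore: generate variety
    else:
        choice = max(estimates, key=estimates.get)  # exploit: scale the best
    reward = random.gauss(payoffs[choice], 0.1)
    counts[choice] += 1
    estimates[choice] += (reward - estimates[choice]) / counts[choice]

print(estimates)  # converges on the true payoffs while preserving variety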

Because markets change, climate changes – everything changes – to design sustainable
systems is to design systems that build change and evolution into their structure. To
better understand this process of evolution, let’s break it down to see how it works. There
are just a handful of key components. We need to be able to create some variety, allow
for adaptation and perform selection to see which of the elements are the most adaptable
to the particular environment.

Firstly, in the production of variation, the system has to create many diverse types. In
biological systems, this is done by the mixing and mutation of genes. In the formal
world of product development, this typically happens in R&D labs, but when we harness
the power of co-creation we have a new resource for mixing and remixing to create
endless diversity. Secondly, adaptation. These diverse types are put into operation to
interact and adapt to the environment. This requires that we give the elements the space
or autonomy required for them to be able to interact and adapt, under their own logic, to
their own local environment. Lastly, selection. A process of selection is performed on the
elements in order to select the so-called “fittest,” that is, the most suited to that particular
environment. In design, this means building feedback loops such as user rating systems
so as to determine which products or services are truly functioning best for the end-user.
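
The Python sketch below shows these three mechanisms working together under
deliberately toy assumptions: a design is reduced to a single number, and an arbitrary
fitness function stands in for real feedback loops such as user ratings.

import random

def fitness(design):
    # Stand-in for real selection feedback; this environment "prefers" 42.
    return -abs(design - 42)

population = [random.uniform(0, 100) for _ in range(20)]

for generation in range(50):
    # Variation: create diverse new types from the existing ones.
    offspring = [d + random.gauss(0, 5) for d in population]
    # Adaptation and selection: evaluate all variants, keep the fittest half.
    pool = sorted(population + offspring, key=fitness, reverse=True)
    population = pool[:20]

print(round(max(population, key=fitness), 1))  # drifts towards 42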

Thus, there are two different models for the design of a system's life-cycle. At one end of
the spectrum, we have our traditional linear systems that will likely be easier and quicker
to design, requiring less of an investment and often operating more efficiently in the short-
run. But they will be subject to a linear decaying life-cycle as they try to externalize
change. At the other end of the spectrum, we have evolutionary systems that internalize
change, harnessing the mechanism of evolution to maintain a sustainable, cyclical life-
cycle for an indefinite period of time. Complex systems operate in a constant state of
change, whether we are talking about social networks, airports or emerging economies.
To be able to change as fast as change itself means internalizing this through the
integration of evolutionary mechanisms into the platforms that we are building. Evolution
is not a mystical process that only happens in natural environments. It is a key feature of
dynamic systems, whether we are aware of it or not. But by being conscious of it, we
can harness it in our designs to create systems with a sustainable and cyclical life-cycle.

Services Oriented Architecture

Service Oriented Architecture (SOA) is an approach to distributed systems architecture
that employs loosely coupled services, standard interfaces, and protocols to deliver
seamless cross-platform integration. It is used to integrate widely divergent components
by providing them with a common interface and set of protocols through which they can
communicate within what is called a service bus. Over the past few decades, service-
oriented architecture has arisen as a new systems architecture paradigm within I.T. as a
response to having to build software systems adapted to the distributed and
heterogeneous environments that the Internet has made more prevalent.

There are many definitions for SOA, but essentially it is an architectural approach to
creating systems built from autonomous services that are aggregated through a network.
SOA supports the integration of various services through defined protocols and
procedures to enable the construction of composite functions that draw from many
different components to achieve their goals. It requires the unbundling of monolithic
systems and the conversion of the individual components into services that are then
made available to be reconfigured for different applications.

Imagine I wanted to build a new web application that allows people to pay their
parking tickets online. I could spend years developing a subsystem that functions as a
street map, then another subsystem for dealing with payments, and yet another for
log-in, user authentication and so on. Or I could simply avail of Google's map service, a
payment gateway service from PayPal, and a user log-in service from Facebook. My job
then would be to integrate these diverse services by creating some common processes
that guide the user through the use of these different services to deliver the desired
functionality. Thus, instead of building a system based on all my different internal
components within my well-bounded piece of software, my new application would
instead be built with an architecture that is oriented around services, a service
oriented architecture.
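
A rough sketch of this orchestration in Python might look as follows. The three service
classes are placeholders standing in for the external providers; they are not real APIs.

class MapService:       # placeholder for an external mapping provider
    def locate(self, ticket_id):
        return "51.5N, 0.1W"

class PaymentService:   # placeholder for an external payment gateway
    def charge(self, user, amount):
        return f"charged {user} ${amount}"

class LoginService:     # placeholder for an external identity provider
    def authenticate(self, token):
        return "alice"

def pay_parking_ticket(token, ticket_id, amount):
    # Our job is only the common process that integrates the services.
    user = LoginService().authenticate(token)
    location = MapService().locate(ticket_id)
    receipt = PaymentService().charge(user, amount)
    return f"{user} paid ticket at {location}: {receipt}"

print(pay_parking_ticket("token-123", "T-42", 60))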

Imagine I am a coffee shop owner. My interest is providing customers with food and
beverage in a pleasant environment. In order to do so, I need to bring many different
things together, from coffee beans and equipment to employees and so on. I need to
design a common platform for all these things to interoperate and deliver the final service.
But let’s think about this system within the more formal language of SOA. Firstly, each
component in the system is providing a service, whether it is the employee pouring the
coffee or the chairs on which people sit. We as designers of the system are not interested
in the internal functioning of these components because we don’t need that information.
We abstract it away by encapsulating it. Only the provider of the service needs to know
the internal logic of the component. To us, they are simply services.

When it comes to a customer paying with a credit card, they simply swipe their card and
enter their PIN. No one in the shop understands how the transaction is actually
completed. Only the financial service provider has that information. For the rest of us, it is
abstracted away through encapsulation. We may also note that the financial service
provider has almost complete control over the logic they encapsulate, at least during the
system’s run-time, as is the case for many other components in the system. This is called
service autonomy.

What we do need to know though, is what function the component serves and how to
interact with it. We call this an interface. When one of our new employees picks up a
packet of coffee, she knows what it is and how to use it because it has a label. We may
also note the big yellow sign above the bin in the corner encouraging customers to
dispose of their waste. This is called service discoverability. Services are supplemented
with communicative metadata by which they can be effectively discovered and
interpreted. We might say our bin is broadcasting the availability of its service. Although I
have employed a multicultural team of staff, there is an agreement that everyone will
speak English when interacting with other staff on the shop floor. This is called a
standardized service contract. Services adhere to a communications agreement as
defined collectively by one or more service description documents.

As part of my business, I have a network of suppliers and maintenance people. If I need


more personnel, I call the recruitment agency and I will have a new employee starting next
week. If the shop needs painting, I can call my decorator. If I need more tables, I can buy
them. All of these different services are what is called loosely coupled to the system. This
means these different modules can join or leave, couple or decouple from the system as
need be, thus maintaining their independence. This loose coupling allows for the dynamic
provisioning or de-provisioning of resources to maintain an effective systems load
balancing mechanism.

Lastly, once my little business is up and running, generating revenue, I might want to start
another one or maybe another ten. But this time in, say, the restaurant business or maybe
a confectionery. The great thing I notice is that I don’t have to start from scratch each
time. I can just extend my insurance contract, order more tables, use the same bank
account and so on. This is called service reusability or composability. Because the
services are independent of any particular process I compose them into, that is to say, any
of the stores I set up, they can be endlessly reused, composed and recomposed into new
configurations.

The real value from SOA arrives when we scale things up to the world of complex
systems, to the architecture of large enterprises, designing urban environments or even
whole economies. Because of its abstraction, the same architecture can underpin our
design on the micro level as well as the macro level. We have a set of heterogeneous
components. These components might be people of different cultures in a society. They
might be different modes of transportation within a city, or they might be different devices
connecting to an internet of things platform. 

We create a generic language that can be used to describe all the components in the
system and their services. We create interfaces for the components that translate their
local language or functionality into the global language. We give each component a
descriptor describing its functionality, availability, terms, conditions and various other
parameters of its coupling and service provision. We create a service bus that integrates
these functional components into an entire system, and we create an interface for the
end user to interact with the services they need.
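
A minimal Python sketch of this generic pattern, assuming a toy service bus that holds
descriptors and handlers, might look like this; all service names and descriptors are
illustrative.

class ServiceBus:
    def __init__(self):
        self._registry = {}

    def register(self, name, descriptor, handler):
        # The descriptor advertises the service's function and terms.
        self._registry[name] = {"descriptor": descriptor, "handler": handler}

    def discover(self, name):
        # Service discoverability: look up what a service offers.
        return self._registry[name]["descriptor"]

    def call(self, name, *args):
        # Invoke a service through the common interface.
        return self._registry[name]["handler"](*args)

bus = ServiceBus()
bus.register("coffee", "espresso, $3, 7am-6pm", lambda size: f"{size} espresso")
bus.register("payment", "card payments, 1.5% fee", lambda amt: f"paid ${amt}")

print(bus.discover("coffee"))       # what the service provides
print(bus.call("coffee", "large"))  # composing services via the bus
print(bus.call("payment", 3))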

Platform Technologies

Platform technologies are systems built upon a platform architecture that distributes the
system out into different levels of abstraction. This is done in order to differentiate
between core – platform – functions, and the application layer that sits on top of, and
draws upon, these underlying common services.

During the past few decades, with the rise of the Internet, platforms such as the App
Store or eBay have proven to be some of the most dynamic, innovative, and fastest
growing services. But of course, the platform model of systems architecture has always
been with us, from the invention of farms and factories to the making of Lego building
blocks. When many people see a new technology at work, they don't usually consider all
the pieces that went into its creation. They simply see the amazing capabilities and never
give it much thought. However, within advanced industrial economies, many products
and services are enabled by the power of abstraction. They are remixes, built out of
services from platforms that enable the endless bundling and re-bundling of different
components. Wikipedia defines a platform technology as a structure or technology from
which various products can emerge without the expense of a new process introduction. In
order to achieve this, our system needs to be architected to have two fundamentally
different levels, that is, it must have a platform providing the basic services that can be
combined into different configurations on the application level to deliver various instances
of the technology to the end-user.

An example of a non-platform technology is a hammer, for it is a homogeneous system.
There is no differentiation between the system's infrastructure and its application. They
are all just one thing. It is an instance of a hammer. It cannot generate new and different
configurations of itself. The same can be said of a car. It is an instance of a technology.
The end-user gets and uses the whole thing. To make the comparison clearer, we could
compare the instance of a car with an automobile platform that allows a motor company
to release several vehicles built upon a common chassis, which is the platform, with
different engines, interiors and form factors, for the same or different vehicles and brands
within the company.

Probably the clearest and best example of a platform technology is the personal computer.
The platform, in this case, is the computer’s operating system. But before we can get to
the platform that’s doing all the great work, we need a foundation for it to sit on, that is, a
set of enabling technologies. In this case, our foundation layer is our computer hardware
and all the low-level firmware that interfaces between it and the operating system. But
within a business, our foundation layer might be the economic system it is a part of, the
public services such as security, rule of law and maintenance of natural resources that
would enable our business to function. The same would be true of a city. It rests upon
and is enabled by a national infrastructure system.

The next layer up from the foundations or hardware is the platform itself, the computer’s
operating system in this case. It essentially manages the computer’s resources and
services that will be required by applications. The platform takes the resources available
to it from the infrastructure and creates the Lego blocks that we will use to build
things. These resources are presented to producers on the application level through
APIs, or application program interfaces. In an automotive factory, the platform would be
the physical technologies in the production line for creating the car’s parts. Our
employees can rearrange this production line to create different vehicles. In the example
of a city, this platform level might be the urban utilities that contractors will interface with
to build offices and residential spaces, and there will be a standard set of procedures for
them to do this. On top of the operating system lies the application layer. Developers
draw on the services provided by the operating system and bundle them in various
different combinations to deliver a finished application to the end-user. Apps in the App
Store, the cars coming off of our production line, the buildings in a city or the financial
products offered by a bank are examples of the application layer, endless configurations,
and reconfigurations in response to the perceived needs and feedback of the end-user.
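
The Python sketch below illustrates this layering under invented names: a platform class
exposes a few common services through its methods (a stand-in for an API), and two
different applications bundle those same services into different configurations for the
end-user.

class Platform:
    # Platform layer: manages common resources and exposes them as services.
    def auth(self, user):
        return f"authenticated({user})"

    def storage(self, data):
        return f"stored({data})"

    def notify(self, user, message):
        return f"sent '{message}' to {user}"

def photo_app(platform, user, photo):
    # Application layer: one bundle of the platform's services.
    platform.auth(user)
    return platform.storage(photo)

def chat_app(platform, user, message):
    # A different bundle of the very same underlying services.
    platform.auth(user)
    return platform.notify(user, message)

p = Platform()
print(photo_app(p, "alice", "holiday.jpg"))
print(chat_app(p, "alice", "hello"))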

Lastly, the user interface layer. When the end-user switches on their computer, they don’t
want to see 0’s and 1’s or lines of code. They want to see things they understand,
pictures of files, and nice drop down menus. The majority of people who interface with
the systems we are architecting will do so in order to get the maximum functionality out
with the minimum input of effort. To enable this, we need a layer that
translates the internal logic of the system into a language they understand. This interface
might be the dashboard on our car or the receptionist in our hospital telling people where
to go. Whatever it is, it is all about the end-user, the language they speak, what they
need, and how to translate the system's functionality into a solution that involves the
participation of the end-user.

Continuing with our analogy from the world of I.T., we might call this a solution stack:
the full set of subsystems and layers of abstraction needed to provide the platform's full
functionality without external dependencies. An important thing to note is that as we go up each
level of abstraction towards the end-user, we are simplifying the complexity and level of
engagement required. Those working on the platform level require a deep understanding
of the system and have to deal with its full complexity but are relatively unconstrained.
Those who engage with the system on the application and user level are constrained by
what the platform providers have designed, but being enabled by this technology they will
be able to do more with less input and engagement. The net result is that we should get
an amplification effect as we go up the solution stack due to the increased ease of
engagement. Thus, there will be many more application developers than there are
operating system developers, and there will, in turn, be many more end-users than there
are application developers, and this should be the case wherever we are using this
platform model to systems architecture.

Finally, we might ask – why should we care about platform technologies? There are a
number of reasons this architecture should be of benefit to us. Firstly, by distributing the
system across multiple layers, we can abstract away the complexity that users or
producers of the service have to deal with. Everything gets its own space. Secondly, we
can avoid redundancies by having the platform provide the common services required by
all components. We can reduce the need for each component on the application layer to
re-invent the wheel. Thirdly, platforms are the ideal architecture for creating user-
generated systems. Thus, we can leverage the amplification effect we discussed earlier to
do more with less, helping to maintain an agile core system. And lastly, the platform
architecture is ideal for building flexible, adaptive, and evolutionary systems. Given its
independence from fixed instances, the system can keep innovating on the application
level to continue regenerating itself.

Modular Design

Modular design is a design pattern that is built around the idea of autonomous modular
components that can be independently created, easily configured and reconfigured into
different systems.

When designing a system, one of our key architectural considerations will be to decide
whether we are building a homogeneous system or a modular system. For example, in
the design of the mechanical structure of a new 3D printer, one could take two different
approaches. Firstly, deciding to take a monolithic approach, in which case the whole
frame would be cast out of plastic. Or one could take a modular approach, using nuts and
bolts to clamp sub-components together and combine these subsystems to build the
whole mechanical system as a composite of modules. A homogeneous architecture
integrates and constrains the sub-components into one overarching system. For
example, building my house out of poured mass concrete instead of individual building
blocks. The unibody of Apple's MacBook is another example of a homogeneous
architecture.

Modular design, in contrast, is a systems architecture that distributes the whole system
into a set of distinct modules that can be developed independently and then plugged
together. Examples of this might be an electronic circuit board made out of autonomous
electric components or a house made out of prefabricated modules that plug together.
Modularity is a key feature of complex engineered systems (with the Internet again being
a paradigm of modular architecture) for a number of reasons.

Firstly, by definition, complex systems are highly interconnected. That is why we model
and talk about them as networks. In these highly interconnected systems, the cost of
interaction will be very low. This low collaboration cost makes it much easier to unbundle
homogeneous systems, distribute them into modules, and then reconnect them into a
composite whole through the network. Economic globalization might be an example of
this, as the barriers and cost of international trade have dropped. Modular supply chains
have emerged where production processes and supply chains become unbundled and
distributed due to the ease of interaction through global logistics networks and
telecommunications. Secondly, complex systems are composed of autonomous
elements, meaning we cannot fully constrain them within one integrated top-down model.
Componentization gives each element a degree of autonomy, thus allowing it to adapt.

So, what do we need to design a modular system? Well, we need at least three things: a
module, an interface, and a set of protocols for interconnecting those modules. Firstly, in
order to create modules, we need to unbundle our monolithic system. In computer
science, this is called separation of concerns. We are trying to capture what makes each
component a separate concern, that is to say, autonomous and different from everything
else. Looking at a map of the world, we will see it is distributed out into countries. Each of
these countries is different in some way and has some degree of autonomy. How we
disaggregate the system is very important. If we draw arbitrary lines in the sand without
proper respect to the cultural and economic systems that lie behind them, then problems
will arise further down the line. The concept of a module is to both define distinct
separate functions and also to encapsulate this function, thus abstracting away its
internal mechanics from the system at large. This is called black-boxing.

In science and engineering, a black box is a device, system or object that can be viewed
in terms of its input, output and transfer characteristics without any knowledge of its
internal workings. We define and display these inputs, outputs and functional
characteristics with an interface that will tell other modules in the system what this
element will do. It is a bit like a person’s profile on LinkedIn. Their profiles are the interface
telling you what they do. The interface should also tell you what to give the module and
what to expect in return, like when you buy a new car there will be a poster in the window
telling you to give it a certain quantity of gas and it will transport you a certain distance. A
car, like many things, is a black box, you don’t need to know what goes on inside of it to
drive it. Lastly, we need some way of joining these modules together, what we might call
coupling them. This coupling may be loose, meaning the modules have a high degree of
autonomy, or they may be tightly coupled, meaning they are more constrained by their
interaction with other modules and the system. This coupling also requires a set of
protocols to define the terms, agreements and common language through which modules
interoperate.
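
As a rough illustration, the Python sketch below uses an abstract base class as the
interface: it declares what to give a module and what to expect back, while each
module's internal mechanics stay black-boxed. The payment modules here are
hypothetical examples.

from abc import ABC, abstractmethod

class PaymentModule(ABC):
    # Interface: declares the module's inputs and outputs.
    @abstractmethod
    def pay(self, amount: float) -> str:
        ...

class CardPayment(PaymentModule):
    def pay(self, amount: float) -> str:
        # Internal mechanics are encapsulated; callers never see them.
        return f"card payment of ${amount:.2f} approved"

class CashPayment(PaymentModule):
    def pay(self, amount: float) -> str:
        return f"received ${amount:.2f} in cash"

def checkout(module: PaymentModule, amount: float) -> str:
    # Loose coupling: any module honouring the protocol can plug in.
    return module.pay(amount)

print(checkout(CardPayment(), 9.5))
print(checkout(CashPayment(), 9.5))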

The advantages of modular design include: Firstly, it can enable distributed collaboration
and problem-solving. We may be dealing with a very large complex system that would
require nothing less than a genius or a hero to fully grasp and manage. Through
modularizing the system, various functions can be more easily distributed across a large
team with no team member creating or even understanding the whole system. 

Secondly, modules can be endlessly re-used, like Lego bricks. We can combine and
recombine them, making it more likely to be a sustainable solution. It should also be
much easier to manage and maintain the life-cycle of the system. Individual components
showing fatigue can be replaced without having to replace the whole system, and we
should be able to do this with minimal downtime.

Thirdly, modular systems are much more versatile, adaptive and can be customized more
easily. The modular design employed in the automotive industry makes it easier for car
dealers to offer a wide variety of customizations, where they simply “snap in” upgrades,
such as a more powerful engine or seasonal tires. Lastly, modular design can be a very
important containment mechanism. Because a cup is a monolithic design, a crack will easily
propagate through the structure, rendering the entire system dysfunctional. By designing
for autonomy we reduce the dependencies within the system and the modules act as
natural buffers to disaster spreading and for maintaining security.

The disadvantages to modular architecture include: Firstly, because everything has been
disaggregated, everything will have to go through some network of interactions to take
place. Unless the cost of interaction is very low, it will place a very high burden on the
system. So if my bank charged 50 dollars for processing every financial transaction on my
credit card, the system would bankrupt me pretty quickly. Excessive modularization can
lead to a fractured system. Say I have 100 people to feed and I go into the store to buy
some bananas, but each one is wrapped in a module of plastic with a label. Well, I am
going to spend quite a bit of time unwrapping each one and create quite a lot of waste in
the process. Unless the protocols and interfaces to the modules are well designed, we
can waste a lot of time continuously negotiating contracts between modules. If you had
to do your shopping at a local market where you have to barter for everything in a
language you don’t understand, your response would likely be to look for a supermarket
where you could get everything in one go. When contracts between modules are difficult
and costly to negotiate, we will want to reduce the number of interactions, and thus make
the system more homogeneous.

Lastly, modular systems may be good for a lot of things but they are not optimized for
performance. Typically, a homogeneous design will offer the quickest, easiest solution
with the greatest performance, at least in the short run. If you go looking for the cheapest
and easiest option for a chair, shovel, wheelbarrow or most consumer goods, it will likely
be one homogeneous mass of injection molded plastic. Modular systems architecture
may have been around since the building of the Egyptian pyramids but the Internet is
showing us that by building a platform with effective protocols that reduce interaction
costs, we can use modularization and mass distribution of components as a highly
effective way of engineering complex systems.

Event Driven Architecture

Event-driven architecture (EDA) is a design pattern built around the production, detection,
and reaction to events that take place in time. It is a design paradigm normalized for
dynamic, asynchronous, process-oriented contexts; it is most widely applied within
software engineering.

Information technology is key to enabling a new world of event-driven architecture. When
we start putting chips in all kinds of devices and objects, instrumenting our technologies
and putting smartphones in the hands of many, the world around us stops being dumb
and static and starts being more dynamic, adaptive, and things start happening in real-
time. When the lights in a house or a garage door are instrumented with sensors and
actuators, they no longer need a person to turn them on. Instead, they wait in a resting
state, listening for some event to occur to which they can then instantly respond.

This is in contrast to many of our traditional systems where the components are
constrained by some centralized coordination mechanism, with information often having
to be routed from the local level to a centralized control mechanism, then batch
processed and returned to the component to respond after some delay. The components
within complex systems are able to adapt locally, which means they can often act and
react in real-time. Added to this is the fact that many of these complex engineered
systems are loosely coupled networks of unassociated components. They don’t really
have any structure. Sometimes they don’t even exist until some event actually happens.
When I make a query on a search engine, my computer might be coupling to a data
center in Texas. But the next time I make the same query, I might be interacting with a
server in South Korea. Depending on the system’s load balance at that instant in time, the
network’s structure is defined dynamically during the system’s run time.

An event-driven architecture consists primarily of event creators, event managers, and
event consumers. The event creator, which is the source of the event, only knows that the
event has occurred and broadcasts a signal to indicate so. An event manager, as the
name implies, functions as an intermediary managing events. When the manager receives
notification of an event from a creator, it may apply some rules to process the event. But
ultimately, events are passed downstream to event consumers, where a single event may
initiate numerous downstream activities. Consumers are entities that need to know the
event has occurred and typically subscribe to some type of event manager. Take, for
example, an online accommodation service: event creators (that is, property owners)
broadcast the availability of their accommodation to the event manager (the online
platform), which aggregates these listings, and event consumers (people looking for
accommodation) subscribe to the platform's mailing list, which sends them notifications
of any new relevant listings.
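
A minimal Python sketch of this creator-manager-consumer pattern, using the
accommodation example with invented names, might look like this:

class EventManager:
    # The intermediary: receives events and passes them downstream.
    def __init__(self):
        self._subscribers = []

    def subscribe(self, consumer):
        self._subscribers.append(consumer)

    def publish(self, event):
        # Rules could be applied here before forwarding the event.
        for consumer in self._subscribers:
            consumer(event)

platform = EventManager()
platform.subscribe(lambda e: print("email alert:", e))       # consumers
platform.subscribe(lambda e: print("app notification:", e))

# The event creator only knows the event occurred and broadcasts it.
platform.publish("new listing: 2-bed apartment available")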

There are several advantages to using an event-driven architecture. Firstly, EDA is particularly well
suited to the loosely coupled structure of complex engineered systems. We do not need
to define a well-bounded formal system of which components are either a part or not.
Instead, components can remain autonomous, being capable of coupling and decoupling
into different networks in response to different events. Thus, components can be used
and reused by many different networks.

Secondly, versatility. Event-driven architecture allows systems to be constructed in a
manner that facilitates greater responsiveness because event-driven systems are, by
design, normalized to unpredictable, nonlinear, and asynchronous environments. They
can be highly versatile and adaptable to different circumstances. Next, EDA is inherently
optimized for real-time analytics. Within this architectural paradigm, we have a much
greater capacity to find, analyze, and then respond to patterns in time before critical
events happen, whereas traditionally, we spend a lot of time looking in the rear-view
mirror analyzing data about things that happened yesterday or last year. Event-driven
architecture enables a more preemptive world. Instead of waiting for my car to break
down and then leaving it in the garage for a few days to be fixed, we can preempt the
whole thing by having the car send a stream of information to a data center that analyses
it and triggers events when patterns leading to dysfunctional activity are identified.

This is the world of real-time Big Data, advanced analytics, and complex event
processing. Complex event processing is a method of tracking and analyzing streams of
information that combine data from multiple sources to infer events or patterns that
suggest more complicated circumstances. The goal of complex event processing is to identify meaningful
events (such as opportunities or threats) and respond to them before they happen or as
quickly as possible after they happen. Complex event processing goes hand in hand with
an event-driven architecture.
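
As a toy illustration of complex event processing, the Python sketch below watches a
stream of sensor readings through a sliding window and raises an event when a pattern
suggesting imminent failure appears; the readings and thresholds are invented.

from collections import deque

window = deque(maxlen=5)  # sliding window over the sensor stream

def on_reading(temp_c):
    window.append(temp_c)
    readings = list(window)
    # Pattern: five consecutive readings trending upwards past 90 C.
    if (len(readings) == 5
            and all(a < b for a, b in zip(readings, readings[1:]))
            and readings[-1] > 90):
        print("event: overheating pattern detected, schedule maintenance")

for reading in [70, 75, 82, 88, 93, 95]:
    on_reading(reading)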

The disadvantages of EDA include security risks and increased complexity. Because EDA
systems are often extremely loosely coupled and highly distributed, we don't always
know exactly what components are part of the system or the dependencies between
them. The system can become opaque and some small event could trigger an unforeseen
chain of relations. A good example of this is algorithmic trading, which is a paradigm of
event-driven architecture. This involves computer algorithms that are engineered to
conduct financial transactions given a certain event, typically buying or selling a security
on some change in the market price. No one has full or even partial information about
what algorithms are out there, what they all do and how they might be interconnected.
This situation within a critical infrastructure is clearly a serious security problem. A
corollary to this is that things can get very complex due to the open and inherently
nonlinear nature of this type of design pattern.

Design Thinking

Design thinking is a design process that enables us to solve complex problems. It
combines deep end-user experience, systems thinking, iterative rapid prototyping and
multi-stakeholder feedback to guide us through the successive stages in our design.
Design thinking, like complex systems, is interdisciplinary. It cuts across traditional
domains by recognizing that everything in our world is designed. Thus, it takes design out
of its comfort zone of building chairs and fancy coffee cups to apply it to all areas, from
designing effective organizations to creating healthcare and financial services.

The design process is a bit like blowing up a balloon and then slowly letting all the air out
of it again. It requires an initial phase of divergent thinking where we ask expansive
questions to explore the full context and many different possible options before having to
narrow our vision down upon a single solution and refine it through convergent thinking.
But this process is not mechanical. It is more evolutionary, meaning we cannot fully
foresee the end product from inception. It emerges, and thus we need to think about the
future in an open way. That means having confidence in the possibility that an unknown
outcome is feasible, as the whole point of the design process is that we will create
something that does not yet exist and thus is unforeseen. But we don’t have to reinvent
the design process wheel every time. There are a few broad stages to it, which different
people will define in different ways, but we are going to talk about some of the most often
identified phases in the design thinking process. They include the stages of researching,
ideating, prototyping, and testing. These steps don’t necessarily follow a linear path. They
can occur simultaneously and be repeated.

Firstly, the researching phase. What we are doing here is not creating a thing. What we
are creating is a solution to a problem that a particular person or group of people
has. Thus, we need to understand the context within which our system will exist
and where it lies in relation to other things within that environment. It is only when we see
the given context within which a pre-existing version of the system operates that we get a
full insight into why it is the way it is, and from this can begin to conceive of an improved
solution. When we don’t understand the context then we will be likely to simply go around
in circles, reacting to the pre-existing solution. One generation of designers decides that
straight lines are the greatest thing, extolling all their virtues, making everything square
and rectangular with pointy corners, until the next generation of designers comes along,
now sick of straight lines. So they start a new revolution of curves and rounded
corners, until everyone gets tired of all the curves and rediscovers the straight line again
and so on.

By understanding the context of a design and the history of that context, we can see its
parameters, the advantages and disadvantages of both extremes, and try to find an
integrative solution. If we remember that there are always two qualitatively different levels
of a complex system, the local and the global, as designers of the system we will be
dealing with it primarily on the macro scale. But at the end of the day, everything really
plays out on the local level and we need to understand the local context where people
interact and live out their lives through these products and services. People can’t always
express what exactly the problem is or know exactly what it is they want, so we need
deep immersion to piece it together for ourselves, ethnographic studies, customer
journey maps, all forms of end-user experience, and, most importantly, empathy. This
research phase is a muddy, confusing part of the process that is often bypassed in
pursuit of arriving at an end result quickly. However, with these complex engineered
systems, it is critical for building a solid foundation on which to move forward.

If the initial phase of empathy and context understanding is all about the “why”, then
ideation is about the “what”. Within these complex systems, there will be multiple
stakeholders involved, and we need to consider the perspectives of each one. The
solution has to be viable for each of the multiple stakeholders involved, that is, from the
human perspective of the end-users that have to live with the finished product, from the
economic perspective of the businesses or organizations that are going to deliver the
solution, from the technological perspective of what is physically possible – and we might
add, from an environmental perspective of what is sustainable given the resources
available.

We need to be aware that everyone brings some kind of perspective to this ideation
process, and each one of these perspectives will have some kind of filter on it. It is only
by identifying and removing these filters that we can truly think outside the box where real
innovation happens. One way of achieving this is through collaboration. Design thinking
suggests that better answers happen when 5 people work on a problem for a day, rather
than having one person working on it for five days. From all these different perspectives,
we can come up with what is possible and what is feasible for all. Ideation requires
creativity but also rationality, creative thinking to continuously create possibilities, and
then analytical thinking to rationalize their viability within the given constraints.

Once we have an idea of what we are building, then we need to know how to build it, and
prototyping is the vehicle through which we experiment to discover this. Prototyping can
be an art and a craft. It requires practical skills to build and imagination to bridge the gap
between a mock-up and the finished product. The idea is to fail early. Many complex
systems are big lumpy things like transportation networks. You build them once and then
you are stuck with them for many decades. Prototyping offers us a safe environment to
keep failing until we succeed.

Lastly, deployment and feedback. Designing something is a lot about learning, and the
best learning is supported by experience: experience of the functioning of the system we
are designing, which can only really be gained by putting it out there into its operating
environment, because it is only in its finished context that we can gain a full
360-degree view of it. We do this by creating a minimum viable product, the most
basic version of our system that is fully functional and from which we can get real
feedback. We can then define key performance metrics and start our accounting of how
well it is delivering its functionality. We then iterate on this, gaining feedback each time
that goes into our accounting system to see if we are evolving in the right direction. But of
course, the design doesn’t stop there. To make the product or service sustainable, we
have to integrate this evolutionary mechanism into its full life cycle.

A Complexity Labs Publication


Curated by Joss Colchester
info@complexitylabs.io
