So how does complexity work?

I’ll use John Holland’s categories. A Complex Adaptive System (CAS) is made up of three mechanisms (Schemata, Signals, and Schema) and four properties (Aggregation, Non-Linearity, Flows, and Diversity). Agents interact on the local level, self-organize, and produce “emergent” complex behavior.

The Agents are the adaptive actors. They make IF/THEN responses. If they see specific stimuli, then they respond with an action. They learn new information from the outcome, and continue making IF/THEN responses.

This is a bit of a generalization, so I’m not striving for extreme detail.

John Holland describes CAS:

A Complex Adaptive System (CAS) is a dynamic network of many agents (which may represent cells, species, individuals, firms, nations) acting in parallel, constantly acting and reacting to what the other agents are doing. The control of a CAS tends to be highly dispersed and decentralized. If there is to be any coherent behavior in the system, it has to arise from competition and cooperation among the agents themselves. The overall behavior of the system is the result of a huge number of decisions made every moment by many individual agents.

Here’s a way to visualize a CAS, from Wikipedia:
[Figure: complex adaptive system diagram]

The Three Mechanisms: Schemata, Signals, and Schema

1) Schemata
Schemata are the building blocks: a small set of components that can be combined and reused to form complex structures.

Take a simple example: an agent has 10 components, with 10 variations for each component. The number of possible agents is 10^10, or 10 billion. That’s a very simple model, but one with extraordinary diversity.
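To make that concrete, here’s a quick back-of-the-envelope check in Python. The 10-component, 10-variation agent is just the toy model from the paragraph above, not anything from Holland.

```python
from itertools import product

# Toy model from above: an agent with 10 components,
# each of which comes in 10 possible variations.
components = 10
variations = 10

# Total number of distinct agents this schema can build.
total = variations ** components
print(total)  # 10000000000 -> ten billion possible agents

# A single agent is just one choice of variation per component.
# Here are the first few agents in lexicographic order:
for _, agent in zip(range(3), product(range(variations), repeat=components)):
    print(agent)
```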

The building blocks combine to create larger constructs. At the basic level, there are Quarks. Then there are Nucleons, Atoms, Molecules, Amino Acids, Organelles, Cells, Organs, Organisms, etc.

The nucleic acids (DNA and RNA) contain the building instructions for all biological species. The genome is the complete set of this genetic material; in humans the DNA is organized into 23 pairs of chromosomes. All variation in human construction and behavior emerges from genes.

DNA and RNA use a code of five bases: A = adenine, G = guanine, C = cytosine, T = thymine, U = uracil. RNA uses uracil where DNA uses thymine, so each DNA strand is written with only 4 letters (A, G, C, T). Changing the sequence of those letters changes the result.

The DNA and RNA codon table describes the 64 possible three-letter sequences. Most codons specify an amino acid – TCT produces serine, GAG produces glutamic acid, and so on – for a total of 20 different amino acids. Codons combine with one another into sequences like TCTGAGTAA, where TAA is one of the three “stop” codons marking the end of a sequence. The amino acids combine to form a more complex protein, and the stop codon marks where the protein ends. Once the stop order is given, the finished protein goes to work in the organism.
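Here’s a rough sketch of that translation step in Python. It only includes the handful of codons mentioned above (a real translator would carry the full 64-entry table), but it shows how a codon sequence becomes a chain of amino acids that ends at a stop codon.

```python
# Minimal codon-translation sketch, using only the codons mentioned above;
# the real codon table has all 64 entries.
CODON_TABLE = {
    "TCT": "Serine",
    "GAG": "Glutamic acid",
    "TAA": "STOP",  # one of the three stop codons
    "TAG": "STOP",
    "TGA": "STOP",
}

def translate(dna):
    """Read a DNA coding-strand sequence three letters at a time."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino_acid = CODON_TABLE.get(dna[i:i + 3], "unknown")
        if amino_acid == "STOP":
            break  # the stop codon marks the end of the protein
        protein.append(amino_acid)
    return protein

print(translate("TCTGAGTAA"))  # ['Serine', 'Glutamic acid']
```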

That’s a lot of variation. Combining bases into codons, and codons into genes across the 23 pairs of chromosomes, can encode all the proteins needed to construct a functional human body. The number of possible variations is astronomical.

We can simulate these biological schemata with computer schemata. It’s possible to create an even more complex computer organism.

2) Signals
Signals, or “tags”, provide the agent with information about the environment so it can adapt. They are the mechanism used to aggregate information. Signals include size, shape, logos, icons, genders, species, etc. Agents study sensory data (e.g. visual, audio) to detect external patterns. An agent needs to identify potential mates and it must avoid predator species; tags let it distinguish the two.

Signals identify crucial information. Sometimes species or agents disguise their tags to gain a competitive advantage. Some predators have camouflaged fur or feathers that hide them against the surrounding vegetation. Some prey species use mimicry to disguise themselves as animals that predators avoid.

In the IF/THEN process, signals provide the “IF” information that the internal models respond to.

3) Schema
The Schema is an internal model that governs the use of the building blocks to respond to the environment. This is the THEN part of the IF/THEN process.

The agent identifies signals from the noise in the external environment. The internal model decides how to use the building blocks to respond to stimuli.

Schemas anticipate and predict results based on stimuli, and a sequence of IF/THEN actions can lead to complex behavior. For instance, everything from bacteria to mammals uses an internal model to find food. The agent picks up signals that point toward food: a bacterium detects chemical changes near sugar; a wolf smells the scent of a deer. The internal model then initiates a behavioral response to search in that direction. When the agent sees the signal for the food itself – like the wolf seeing the deer – another IF/THEN response fires and the wolf attacks. If it kills the deer, then it eats it.
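Here’s a minimal sketch of that kind of agent as an ordered list of IF/THEN rules. The signals and actions (scent, deer, attack) are just illustrative labels, not a serious model of predator behavior.

```python
# An agent as a list of IF/THEN rules: each rule pairs a condition on the
# perceived signals with the action to take when that condition holds.
rules = [
    (lambda s: "deer scent" in s,    "follow the scent"),
    (lambda s: "deer in sight" in s, "attack the deer"),
    (lambda s: "dead deer" in s,     "eat the deer"),
]

def respond(signals):
    """Return the action of the first rule whose IF condition matches."""
    for condition, action in rules:
        if condition(signals):
            return action
    return "keep wandering"

# A chain of simple IF/THEN responses adds up to hunting behavior:
for signals in [{"deer scent"}, {"deer in sight"}, {"dead deer"}]:
    print(signals, "->", respond(signals))
```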

Internal models can update themselves based on which response produces the greatest benefit. Bad choices cause death, which removes bad models. One advanced type of model governing behavior is a heuristic, which actively updates itself as the agent encounters new environments and consequences. A person is not born knowing what fire is or that fire is hot. If he touches a fire, then he is burnt. The heuristic is learned: if there’s a fire, then don’t touch it.

Pavlov demonstrated this process when he conditioned dogs to associate the sound of a bell with food. Models can be fooled by unrelated correlations.
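A toy version of that learning process might look like the sketch below: the agent keeps a score for each response, updates the score from the outcome, and drifts toward whatever produced the greatest benefit. The actions, payoffs, and learning rate are invented for illustration, and a model like this can just as easily latch onto a coincidental correlation, which is the Pavlov point above.

```python
import random

# Minimal sketch of an internal model that updates itself from outcomes.
scores = {"touch the fire": 0.0, "avoid the fire": 0.0}
learning_rate = 0.5

def outcome(action):
    # Illustrative payoffs: touching fire hurts, avoiding it is neutral.
    return -10.0 if action == "touch the fire" else 0.0

for _ in range(20):
    # Explore occasionally; otherwise follow the learned heuristic.
    if random.random() < 0.2:
        action = random.choice(list(scores))
    else:
        action = max(scores, key=scores.get)
    reward = outcome(action)
    scores[action] += learning_rate * (reward - scores[action])

print(scores)  # "touch the fire" ends up negative, so the agent avoids it
```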

Internal models govern both the behavior and the construction of the agent. Changes to construction happen slowly, through evolution, as agents adapt to environmental changes. Behavioral changes are more akin to a learning process and allow much faster response times.

Computers simulate the biological process with genetic algorithms. A genetic algorithm is a blind search technique that examines possible combinations and how they interact with external stimuli. This is weak AI; a theoretical strong AI would be able to actively learn the way mammals do.
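Here’s a bare-bones sketch of a genetic algorithm. The fitness function (count the 1s in a bit string) is a deliberately trivial stand-in for “how well the agent fits its environment.”

```python
import random

# Bare-bones genetic algorithm: selection, crossover, mutation.
GENOME_LEN, POP_SIZE, GENERATIONS = 20, 30, 50

def fitness(genome):
    # Toy fitness: the more 1s, the better the genome "fits" its environment.
    return sum(genome)

def mutate(genome, rate=0.02):
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Selection: the fitter half of the population becomes the parent pool.
    parents = sorted(population, key=fitness, reverse=True)[:POP_SIZE // 2]
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(POP_SIZE)]

print(max(fitness(g) for g in population))  # should approach GENOME_LEN (20)
```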

The Four Properties: Aggregation, Flows, Diversity, Non-linearity

1) Aggregation
Similar things are categorized. Instead of counting everything as separate and unique things, we classify them – trees, dogs, humans, roads, etc. Instead of counting each tree separately, you aggregate them and count them as one forest.

A meta-agent is the aggregation of agent interactions and behavior. Meta-agents behave differently than individual agents. Instead of counting a thousand individual ants, you aggregate them and count them as a meta-agent – the ant hive. The ant hive displays Swarm behavior that is more complex and adaptive than any individual ant.

2) Flows
In a network there are nodes and connectors. Nodes represent agents, and connectors represent flows. A flow is an exchange of resources, services, or other interactions between agents. Agents sort out their needs and adapt to flows in the network based on signals (tags). Agents compete with one another for resources, so flows are constantly being redirected from one node to another. Load redistribution describes how flows get rerouted to different nodes as conditions change.

Without signals, flows would move randomly or very inefficiently, and the agents would fail to adapt.

Systems theory describes the process of flows and how agents use the resources. Within a system there are different effects and cycles: reinforcing and balancing cycles, multiplier effects, and recycling effects.
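A toy example of load redistribution: resources flow to a preferred node, and when that node hits its capacity the overflow gets rerouted to a neighbor. The node names and capacities here are made up.

```python
# Toy sketch of flows in a network: resources move between nodes, and when a
# node is full the remaining flow is redistributed to a fallback node.
capacity = {"A": 5, "B": 3, "C": 10}
load = {node: 0 for node in capacity}

def route(units, preferred, fallback):
    """Send units to the preferred node; redirect the overflow (load redistribution)."""
    accepted = min(units, capacity[preferred] - load[preferred])
    load[preferred] += accepted
    overflow = units - accepted
    if overflow:
        load[fallback] += min(overflow, capacity[fallback] - load[fallback])

route(4, preferred="B", fallback="C")  # B takes 3 units, the extra unit flows to C
route(6, preferred="A", fallback="C")  # A takes 5 units, the extra unit flows to C
print(load)  # {'A': 5, 'B': 3, 'C': 2}
```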

3) Diversity
Diversity describes the amount of variation in the system. The schemata permit a vast range of variation within each individual agent, and the system also needs a vast range of different kinds of agents.

Adaptation adds additional diversity as agents adapt to fill niches in the environment.

4) Non-Linearity
Small changes in inputs can cause major changes in the outputs. Inputs affect the signals in the environment and the flow of resources, and they trigger adaptation by all the agents in the system. This creates a “butterfly effect” in a CAS. Nonlinearity also makes aggregation more complicated than simple linear averaging.

Most of the mathematics we learn is linear. A linear relationship behaves like simple addition – 2 + 2 = 4: the total is the sum of the parts. That property is called superposition. Nonlinear functions do not obey superposition.
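A quick way to see the difference is to check whether f(a + b) equals f(a) + f(b); a couple of throwaway Python functions are enough.

```python
# Superposition check: a linear function satisfies f(a + b) == f(a) + f(b),
# a nonlinear one does not.
def linear(x):
    return 3 * x        # scaling is linear

def nonlinear(x):
    return x * x        # squaring is nonlinear

a, b = 2, 5
print(linear(a + b), linear(a) + linear(b))           # 21 21 -> superposition holds
print(nonlinear(a + b), nonlinear(a) + nonlinear(b))  # 49 29 -> superposition fails
```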

Mathematics works around the problems of nonlinearity by approximating things locally with linear methods. Calculus measures the instantaneous change at a point at any moment in time. This does not really solve nonlinearity per se, but it gives us a way of measuring it.

From that basis we get more complicated tools, such as differential equations and partial differential equations, which can describe nonlinear behavior. Some examples of nonlinear equations are the Lotka-Volterra equations for predator-prey relations and the Navier-Stokes equations for fluid dynamics.
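Here’s a rough sketch of the Lotka-Volterra system, stepped forward with a crude Euler integration. The parameter values are arbitrary; they are only there to show the characteristic boom-and-bust oscillation between prey and predators.

```python
# Lotka-Volterra predator-prey equations:
#   d(prey)/dt      = alpha*prey - beta*prey*predators
#   d(predators)/dt = delta*prey*predators - gamma*predators
# integrated with a crude Euler step; parameters are arbitrary.
alpha, beta, delta, gamma = 1.0, 0.1, 0.075, 1.5
prey, predators = 10.0, 5.0
dt = 0.01

for step in range(3000):
    d_prey = (alpha * prey - beta * prey * predators) * dt
    d_predators = (delta * prey * predators - gamma * predators) * dt
    prey += d_prey
    predators += d_predators
    if step % 500 == 0:
        # Prey and predator numbers rise and fall out of phase; the behavior
        # of the whole system is not the sum of its parts.
        print(f"t={step * dt:5.1f}  prey={prey:7.2f}  predators={predators:6.2f}")
```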

There is one final factor:
Reaction Time
This estimates how long an agent or meta-agent will take to respond to external stimuli. A human nervous system will respond to heat or pressure in less than a second. Businesses may take months to respond to international trade deals. Biological evolution may take thousands of years.

CAS Conclusions
CAS is a very important advance because it reduces complex phenomena into their component parts. It allows scientists to build computer models to test hypotheses and to model non-linear dynamics. Learning about this process helps us better understand entire systems in our world without inserting null variables. We know that all human interaction is the result of genes – we don’t need to add irrelevant variables like souls, spirits, magic, or cultural norms.

Understanding how IF/THEN processes accumulate to form complex behavior will probably lead to artificial intelligence. One type of CAS is Swarm Intelligence, which allows otherwise stupid ants to behave in extremely complex and somewhat intelligent ways. Scientists are studying swarm theory to program stupid robots to behave more intelligently as a group.

For human society, we can study the complex adaptive systems. First, the basic agents are the genes. These form into meta-agents like the internal organs and systems (such as the immune system and nervous system). These meta-agents form meta-meta-agents – humans. Human agents combine to form a family, a tribe, and a nation. Nations are just meta-meta-meta-meta-meta-meta-meta-agents of genes. Usually, we simplify this and create a hierarchy of reductionism.

The economy is a complex adaptive system with human agents, as are biological evolution and memetic evolution (or cultural diffusion). Activity in the stock market and biological arms races are both driven by adaptation.

Friedrich Hayek was one of the pioneers in complexity research. He discovered some of the adaptive processes in economics and found that similar processes also play a role in biology and cybernetics. And I believe Dawkins hit it out of the park with The Selfish Gene.

A lot of the fancy terms used to describe complexity, like self-organization, decentralization, self-organized criticality, punctuated equilibria, etc., are all part of CAS.

But that’s reductionist! you say
Ma’am, I have not yet begun to reduct.
