A good theory of complexity must apply equally well to the emergence of AIDS, the fall of Central American civilizations, and the evolution of the Internet. Computer-based thought experiments: computer-based models allow complex explorations not possible with real systems. These can guide theoretical thinking, but Holland is insistent that computer models which happen to match certain characteristics of real systems should not be mistaken for a deeper understanding of underlying principles, or for predictive theoretical constructs.
A correspondence principle: Holland insists that a successful theory of complex adaptive systems must encompass the standard models of earlier, established theories.
The only behavioural detail imposed by the modeller is that the machine is connected to an "eye" which responds to boredom (situations in which the internal states change only by small amounts) by shifting its attention to another vertex. An ordinary neural net would not have produced so much with so little modeller input, so Holland has to use neurons with variable thresholds and fatigue, with connection weights updated by Hebb's rule, a reinforcement-style learning rule for synaptic weights.
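As a rough illustration of the extra machinery, here is a minimal sketch of units with variable thresholds, fatigue, and Hebbian weight updates. This is not Holland's actual network; the class name, parameters, and update constants are all invented for illustration.

```python
import numpy as np

class HebbianNet:
    """Toy net with the three extra features the review mentions:
    variable thresholds, fatigue, and Hebbian weight updates."""

    def __init__(self, n, lr=0.1, fatigue_rate=0.2, recovery=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(0.0, 0.1, (n, n))  # synaptic weights
        np.fill_diagonal(self.w, 0.0)          # no self-connections
        self.threshold = np.full(n, 0.5)       # per-unit variable thresholds
        self.fatigue = np.zeros(n)             # accumulated fatigue
        self.lr, self.fatigue_rate, self.recovery = lr, fatigue_rate, recovery

    def step(self, state):
        # A unit fires when its weighted input exceeds threshold + fatigue.
        drive = self.w @ state
        fired = (drive > self.threshold + self.fatigue).astype(float)
        # Fatigue rises for units that just fired, and decays for all units.
        self.fatigue += self.fatigue_rate * fired
        self.fatigue *= 1.0 - self.recovery
        # Hebb's rule: strengthen connections between co-firing units.
        self.w += self.lr * np.outer(fired, fired)
        np.fill_diagonal(self.w, 0.0)
        return fired
```

Fatigue is what gives the "boredom" signal something to detect: a persistently firing pattern tires itself out, and activity levels change.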
We presume that this is close to the minimum set of requirements for any "triangle parser". What does this tell us about the ability to generalise? The machine described could be used to sort the world into triangular and non-triangular entities. In some sense, it does this by a process that is similar to Locke's description of abstraction. In Holland's simulation, a triangle in the world eventually produces a synchronous firing pattern in the net.
Every triangle produces a different firing pattern, depending on orientation and size. But the firing patterns themselves induce firing patterns that have persistent common characteristics. Those patterns can in turn induce their own patterns and the further removed the firing pattern from that of a specific triangle, the more general it becomes. Patterns at any level can interact with those at any other level. In Locke's account, an impression is stripped of detail and turned into an "abstract idea" that is a picture of something but nothing in particular.
In both cases, we are a long way from having an object that is capable of representing all triangles for the purposes of a Euclidean demonstration. This was Berkeley's point. Categorisation is not all that is involved in the meaningful use of a concept. As Holland says: "[The process of net patterns firing up other net patterns] is a precursor of that everyday, but astonishing, human ability [...]". The emphasis is mine; Holland is faultless at not claiming too much for his case, while also maintaining the sense of excitement surrounding the research program.
Not all authors of popular science are so careful. Emergence relies on generalisation. Models that demonstrate the phenomenon will do some generalising, and will be composed of elements that may themselves have been "generalised" by other models, as if in a cascade. The second basic component on which modelled emergence relies is illustrated by the concept of the Game. In chess, a small number of rules can generate a huge number of board configurations or "states". We usually think of games as invented by humans to generate a particular type of outcome.
This is a kind of "reverse science" in which laws are invented to generate phenomena. We can see the attraction of the "game design" model in attempts to display bare-bones emergence in the laboratory. It involves choosing components, inventing rules, letting phenomena emerge, and possibly decomposing (that is, reducing) these to close the loop and understand the process.
The danger of doing this directly led them to build a physical model in a tank of liquid, with snow represented as plastic pellets. They had theoretical reason to believe that their artificial world was a good replica of ours. They chose components (a liquid of a given density, and pellets to mimic different types of snow); the physics of this world determined the rules (that is what a physical model constrains one to, and it can be a very efficient calculator); and avalanches occurred.
As television will, this film left the subject too early. I presume that they are now designing controlled experiments, validating, and also trying to decompose the avalanches into the behaviour of elementary parts. The model of the game establishes a clear definition of possibility, and of the requirements on every component or agent to generate each possibility. Take chess, for example: for any state of the board, we could enumerate every legal next state, and for each of those, every possible next state, and so on. This process defines possibility in the world of chess.
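The chess board is too large to enumerate here, but the same notion of possibility can be made concrete for a smaller game. A sketch, using tic-tac-toe as a stand-in:

```python
def next_states(board, player):
    """Enumerate every legal successor of a tic-tac-toe position.

    board: tuple of 9 entries in {'X', 'O', ' '}; player: 'X' or 'O'.
    A move fills any empty cell, so each empty cell yields one successor.
    """
    return [
        board[:i] + (player,) + board[i + 1:]
        for i, cell in enumerate(board)
        if cell == ' '
    ]

empty = (' ',) * 9
print(len(next_states(empty, 'X')))  # 9 possible first moves
```

Iterating this successor function from the empty board generates exactly the possible worlds of tic-tac-toe, nothing more and nothing less: the rules define possibility.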
Moreover, for any board in our list of possible boards, we will be able to list the sets of inputs that would have been required from the machines or humans playing the game to achieve it. Holland describes in fascinating detail the way that Art Samuel built a good checkers (draughts) playing machine in the 1950s. Holland uses the example both to elaborate on the correspondence between games and models, and to demonstrate the emergence, through automatic learning, of good play. The exhaustive way of determining the possible states of the game is to lay out its entire tree.
Every possible move at the first round is listed and each of these determines a board state for round two. For each round two board state, every possible set of moves is listed to determine the possible board states at round three, and so on until the end of the game. Once we have listed all the ways of finishing the game, our enumeration is over. Just a few rules about the permitted transitions from one state to another have provided a rich and complicated universe of possibilities. Holland thinks that our social, biological and physical reality may stand in the same sort of relation to theory as the checkers' board tree does to the rules of checkers.
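Laying out an entire chess or checkers tree is infeasible, but the procedure itself is simple. The sketch below enumerates every complete game of a toy Nim variant (the rules are chosen here purely for illustration): players alternately remove one or two counters from a single pile, and the game ends when the pile is empty.

```python
def count_complete_games(pile):
    """Count the leaves of the full game tree for a toy Nim variant:
    players alternately remove 1 or 2 counters; the game ends at 0.
    Each leaf corresponds to one complete way of finishing the game."""
    if pile == 0:
        return 1                      # no moves left: one finished game
    total = 0
    for take in (1, 2):               # the only legal transitions
        if take <= pile:
            total += count_complete_games(pile - take)
    return total

print(count_complete_games(5))  # 8 distinct complete games from a pile of 5
```

Two transition rules already generate a tree whose leaf count grows exponentially with the pile size, which is the point of the "rich universe from few rules" observation above.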
And in the same way, many possible social, biological or physical worlds will not usually be encountered because of their instability. Samuel's program shows us that once we have a set of "relevant" categories to apply to boards, an automatic procedure can create, or lead to the emergence of, a good checkers' player. Generating good play is already impressive, despite the ex machina categories. The huge set of possible trees is effectively (although only approximately) pruned to the set of trees that involve good players.
The ex machina categories are themselves emergent phenomena of good play in checkers. Board configurations like "one piece advantage" or, more strangely, "board moment" and some others that do even better are phenomena of the checkers' world that would be tracked by a good player.
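In the spirit of Samuel's evaluator, a player can be modelled as tracking board categories through a weighted sum. The feature names and weights below are invented for illustration, not Samuel's actual terms ("board moment" in particular is left out, since the review does not define it):

```python
# Hypothetical feature detectors over a board summary (names are illustrative).
def piece_advantage(board):
    return board["my_pieces"] - board["their_pieces"]

def king_advantage(board):
    return board["my_kings"] - board["their_kings"]

FEATURES = [piece_advantage, king_advantage]
weights = [1.0, 1.5]  # in Samuel's program, such weights were tuned by self-play

def evaluate(board):
    """Score a board as a weighted sum of emergent features."""
    return sum(w * f(board) for w, f in zip(weights, FEATURES))

board = {"my_pieces": 7, "their_pieces": 6, "my_kings": 2, "their_kings": 1}
print(evaluate(board))  # 1.0 * 1 + 1.5 * 1 = 2.5
```

The categories do the real work here: the automatic part is only the adjustment of the weights attached to them, which is why the review calls the categories ex machina.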
Holland moves on to a discussion of the triangle perceiver mentioned above to show that categories could emerge automatically. So, filling in for Holland a little, I think we can imagine that the fully emergent checkers' world might be composed of a hierarchy of such machines. There is nothing trivial about building any of these machines.
But we can see that a single rule - "survival in a noisy world depends on playing good legal checkers" - could lead to identification of emergent phenomena in that game. Much of "Emergence" formalises the notions of the game, the agent and the "level". The first two are analysed as "constrained generating procedures" (cgp) and "variable structure cgps" (cgp-v). The formalisation is very clear, and makes fertile reading for modellers.
New ways of describing problems suggest new modeling strategies.
The cgp and the cgp-v are both finite state machines. Although they are not the only way of modelling games, finite state machines have been used to considerable effect as a framework in evolutionary game theory (Abreu and Rubinstein; Nowak et al.). They are ideally suited to reductionist modelling, and therefore to the production of emergence, because they force the modeller to be totally explicit about the interactions of the component agents.
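To give a flavour of how a finite state machine serves as an agent in this literature, the sketch below encodes the familiar Tit-for-Tat strategy for the iterated prisoner's dilemma as a two-state machine. The encoding is mine, for illustration, not Holland's:

```python
# Tit-for-Tat as a two-state Moore machine: each state carries an output
# ('C' cooperate or 'D' defect); the opponent's last move is the input.
TIT_FOR_TAT = {
    "output": {"s_coop": "C", "s_defect": "D"},
    "transition": {
        ("s_coop", "C"): "s_coop", ("s_coop", "D"): "s_defect",
        ("s_defect", "C"): "s_coop", ("s_defect", "D"): "s_defect",
    },
}

def play(machine, opponent_moves, start="s_coop"):
    """Run the machine against a fixed sequence of opponent moves."""
    state, moves = start, []
    for opp in opponent_moves:
        moves.append(machine["output"][state])     # emit this state's output
        state = machine["transition"][(state, opp)]  # then update on the input
    return "".join(moves)

print(play(TIT_FOR_TAT, "CDDC"))  # 'CCDD': echoes the opponent one move late
```

The strategy's entire behaviour is pinned down by two small tables, which is exactly the explicitness about agent interaction that the review praises.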
Much more elusive is the notion of a level. Emergent phenomena are composed of persistent patterns. They emerge from "messy" low-level interactions and their emergence is recognised by the fact that they can be operated on by a separate set of rules. This is the sense in which emergent phenomena define "levels". Holland's framework allows a nice formalisation of this notion. Take a system composed of interacting cgps or cgp-vs and aggregate the cgps in some way.
Now construct a cgp which behaves just like the aggregate. The intuition is that one can define a function that performs the relevant transformations at the aggregate level while omitting the unnecessary detail on interaction between the "internal-only" connections of cgps. As usual, Holland provides a nice example to hook into as soon as the going gets a bit tough.
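The aggregation step can be sketched with two invented toy machines wired to each other. The aggregate machine's state is just the pair of component states, and the internal connection between them disappears from its interface:

```python
# Two toy Moore machines, each feeding its output to the other's input.
# Transition tables map (state, input_bit) -> next state; *_OUT gives outputs.
A = {("a0", 0): "a0", ("a0", 1): "a1", ("a1", 0): "a1", ("a1", 1): "a0"}
A_OUT = {"a0": 0, "a1": 1}
B = {("b0", 0): "b1", ("b0", 1): "b0", ("b1", 0): "b0", ("b1", 1): "b1"}
B_OUT = {"b0": 0, "b1": 1}

def aggregate_step(state):
    """One step of the aggregate machine: its state is the pair (sa, sb),
    and the interaction along the internal wire is folded inside."""
    sa, sb = state
    return (A[(sa, B_OUT[sb])], B[(sb, A_OUT[sa])])

state = ("a0", "b0")
state = aggregate_step(state)  # internal wiring handled invisibly
```

The aggregate is itself a finite state machine over the product state space; anything connected to it need never know about the wire between A and B, which is the sense in which the detail is "internal-only".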