Piero Scaruffi (Copyright © 2013 Piero Scaruffi | Legal restrictions)
These are excerpts and elaborations from my book "The Nature of Consciousness"
In the 1970s the US computer scientist John Holland had the intuition that the best way to solve a problem is to mimic what biological organisms do to solve their own problem of survival: evolve (through natural selection) and reproduce (through genetic recombination). Genetic algorithms recursively apply a series of biologically-inspired operators to a population of potential solutions to a given problem. Each application of the operators generates a new population of solutions, which should approximate the best solution better and better. What evolves is not the single individual but the population as a whole.
Genetic algorithms are actually a further refinement of search methods within problem spaces: they improve the search by incorporating the criterion of "competition".
Recalling Newell and Simon's definition of problem solving as "searching in a problem space", David Goldberg defines genetic algorithms as "search algorithms based on the mechanics of natural selection and natural genetics". Most optimization methods work from a single point in the decision space and employ a transition method to determine the next point. Genetic algorithms, instead, work from an entire "population" of points simultaneously, trying many directions in parallel and employing a combination of several genetically-inspired methods to determine the next population of points.
One can employ simple operators such as "reproduction" (which copies chromosomes according to a fitness function), "crossover" (which swaps segments of two chromosomes) and "mutation" (which randomly alters symbols of a chromosome), as well as more complex ones such as "dominance" (a genotype-to-phenotype mapping), "diploidy" (pairs of chromosomes), "abeyance" (rules shielded against over-selection), "inversion" (the primary natural mechanism for recoding a problem, which reverses the segment of a chromosome between two points); and so forth.
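The three simple operators can be sketched on chromosomes encoded as bit strings. This is a minimal illustration; the function names and the bit-string encoding are my own assumptions, not Holland's or Goldberg's notation.

```python
import random

def reproduction(population, fitness, k):
    # Copy k chromosomes, each chosen with probability proportional
    # to its fitness ("roulette wheel" selection).
    weights = [fitness(c) for c in population]
    return random.choices(population, weights=weights, k=k)

def crossover(a, b):
    # Swap the segments of two chromosomes after a random cut point.
    point = random.randrange(1, len(a))
    return a[:point] + b[point:], b[:point] + a[point:]

def mutation(chromosome, rate=0.01):
    # Flip each bit independently with a small probability.
    return "".join(("1" if c == "0" else "0") if random.random() < rate else c
                   for c in chromosome)
```

Note that crossover never creates or destroys genetic material: the two offspring together contain exactly the bits of the two parents, only recombined.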
Holland's "Classifier" (which learns new rules to optimize its performance) was the first practical application of genetic algorithms. A classifier system is a machine-learning system that learns syntactically simple rules (or "classifiers") to guide its performance in the environment. A classifier system consists of three main components: a production system, a credit-assignment system (such as the "bucket brigade") and a genetic algorithm to generate new rules. Its emphasis on competition and cooperation, on feedback and reinforcement, rather than on pre-programmed rules, sets it apart from knowledge-based models of Artificial Intelligence.
A fitness function computes how "fit" an individual is. The selection process starts from a random population of individuals. For each individual of the population the fitness function provides a numeric value measuring how far that solution is from the ideal solution. The probability of selection for an individual is made proportional to its "fitness". On the basis of such fitness values a subset of the population is selected, and this subset is allowed to reproduce through the biologically-inspired operators of crossover, mutation and inversion.
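The selection step can be sketched as follows; the bit-string encoding and the ones-counting fitness function are illustrative assumptions (the ideal solution here is the all-ones string, so counting 1-bits measures closeness to it).

```python
import random

def fitness(chromosome):
    # Hypothetical fitness: number of 1-bits in the chromosome.
    return chromosome.count("1")

random.seed(42)

# Start from a random population of individuals.
population = ["".join(random.choice("01") for _ in range(8))
              for _ in range(20)]

# Selection probability proportional to fitness: fitter individuals
# are more likely to enter the subset allowed to reproduce.
weights = [fitness(c) for c in population]
selected = random.choices(population, weights=weights, k=10)
```

The selected subset is then handed to the genetic operators; over repeated generations the population drifts toward high-fitness regions of the solution space.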
Each individual (each point in the space of solutions) is represented as a string of symbols. Each genetic operator performs an operation on the sequence or content of the symbols.
When a message from the environment matches the antecedent of a rule, the message specified in the consequent of the rule is produced. Some of the messages produced by the rules cycle back into the classifier system, while others generate actions on the environment. A message is a string of characters from a specified alphabet. The rules are not written in the predicate logic of expert systems, but in a language that lacks descriptive power and is limited to simple conjunctive expressions.
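As a concrete illustration, classifier conditions are commonly written over the ternary alphabet {0, 1, #}, where "#" is a wildcard ("don't care"). The sketch below assumes that convention, plus the common convention that "#" in a consequent passes the matched bit through; the function names are mine, not Holland's.

```python
def matches(condition, message):
    # A condition matches a message of the same length when every
    # position is either a wildcard '#' or equal to the message bit.
    return len(condition) == len(message) and all(
        c == "#" or c == m for c, m in zip(condition, message))

def fire(rule, message):
    # If the message matches the rule's antecedent, produce the
    # message specified by the consequent ('#' copies the input bit).
    condition, action = rule
    if matches(condition, message):
        return "".join(m if a == "#" else a for a, m in zip(action, message))
    return None
```

A rule like ("1#0", "00#") fires on any three-bit message starting with 1 and ending with 0, whatever the middle bit.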
Credit assignment is the process whereby the system evaluates the effectiveness of its rules. The “bucket brigade“ algorithm assigns a strength (a measure of its past usefulness) to each rule. Each rule then makes a bid (proportional to its strength and to its relevance to the current situation) and only the highest-bidding rules are allowed to pass their messages on. The strengths of the rules are modified according to an economic analogy: every time a rule bids, its strength is reduced by the value of the bid while the strength of its "suppliers" (the rules that sent the messages matched by this bidder) are increased. The bidder’s strength will in turn increase if its consumers (the rules that receive its message) become bidders. This leads to a chain of suppliers/consumers whose success ultimately depends on the success of the rules that act directly on the environment.
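One step of this economy can be sketched as follows, assuming a fixed bid ratio and omitting the relevance factor of the full algorithm; the rule names and numbers are illustrative.

```python
# Strengths measure each rule's past usefulness (illustrative values).
strengths = {"rule_a": 100.0, "rule_b": 50.0}

def bucket_brigade_step(strengths, bidder, supplier, bid_ratio=0.1):
    # The bidder pays a bid proportional to its strength...
    bid = bid_ratio * strengths[bidder]
    strengths[bidder] -= bid      # ...reducing its own strength,
    strengths[supplier] += bid    # ...and rewarding its supplier.
    return strengths

# rule_b supplied the message that rule_a's condition matched:
bucket_brigade_step(strengths, bidder="rule_a", supplier="rule_b")
# strengths is now {"rule_a": 90.0, "rule_b": 60.0}
```

If rule_a's own message later attracts bids from its consumers, rule_a recoups its payment with a profit, which is how success propagates back along the supplier/consumer chain.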
Then the system replaces the least useful ("weak") rules with newly generated rules based on the system's accumulated experience, i.e. built by combining selected "building blocks" ("strong" rules) according to the genetic algorithms.
Holland went on to focus on "complex adaptive systems". Such systems are governed by principles of anticipation and feedback. Based on a model of the world, an adaptive system anticipates what is going to happen. Models are improved based on feedback from the environment.
Complex adaptive systems are ubiquitous in nature. They include brains, ecosystems and even economies. They share a number of features: each of these systems is a network of agents acting in parallel and interacting; behavior of the system arises from cooperation and competition among its agents; each of these systems has many levels of organization, with agents at each level serving as building blocks for agents at a higher level; such systems are capable of rearranging their structure based on their experience; they are capable of anticipating the future by means of innate models of the world; new opportunities for new types of agents are continuously being created within the system.
All complex adaptive systems share four properties (aggregation, non-linearity, flows, diversity) and three mechanisms (categorization by tagging, anticipation through internal models, decomposition into building blocks).
Each adaptive agent can be represented by a framework consisting of a performance system (to describe the system's skills), a credit-assignment algorithm (to reward the fittest rules) and a rule-discovery algorithm (to generate plausible hypotheses).