CoDi is a cellular automaton (CA) model for spiking neural networks (SNNs).[1] CoDi is an acronym for Collect and Distribute, referring to the signals and spikes in a neural network.

A CA-space of 64×64 cells during the signaling phase of the CoDi model, with axonal (red) signal trails, dendritic (green) signal trails and neuron bodies (white).

CoDi uses a von Neumann neighborhood modified for a three-dimensional space; each cell looks at the states of its six orthogonal neighbors and its own state. In a growth phase a neural network is grown in the CA-space based on an underlying chromosome. There are four types of cells: neuron body, axon, dendrite and blank. The growth phase is followed by a signaling (or processing) phase. Signals are distributed from the neuron bodies via their axon trees and collected via connecting dendrites.[1] These two basic interactions cover every case, and they can be expressed with a small number of simple rules.
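
For illustration, a minimal Python sketch of the four cell types and the 3D von Neumann neighborhood is given below; the names (CellType, NEIGHBOR_OFFSETS, neighbors) are illustrative and not part of the original implementation.

```python
from enum import Enum

class CellType(Enum):
    BLANK = 0
    NEURON = 1
    AXON = 2
    DENDRITE = 3

# The six orthogonal neighbor offsets of the 3D von Neumann neighborhood.
NEIGHBOR_OFFSETS = [
    (+1, 0, 0), (-1, 0, 0),
    (0, +1, 0), (0, -1, 0),
    (0, 0, +1), (0, 0, -1),
]

def neighbors(x, y, z, shape):
    """Yield the in-bounds von Neumann neighbors of cell (x, y, z)."""
    for dx, dy, dz in NEIGHBOR_OFFSETS:
        nx, ny, nz = x + dx, y + dy, z + dz
        if 0 <= nx < shape[0] and 0 <= ny < shape[1] and 0 <= nz < shape[2]:
            yield nx, ny, nz
```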

Cell interaction during signaling

CoDi signaling: the arrows inside the axonal (red) signal trails and dendritic (green) signal trails indicate the direction of information flow during the signaling phase.

The neuron body cells collect neural signals from the surrounding dendritic cells and apply an internally defined function to the collected data. In the CoDi model the neurons sum the incoming signal values and fire once a threshold is reached. This behavior of the neuron bodies can easily be modified to suit a given problem. The output of a neuron body is passed on to its surrounding axon cells. Axonal cells distribute data originating from the neuron body. Dendritic cells collect data and eventually pass it to the neuron body. These two types of cell-to-cell interaction cover all kinds of cell encounters.
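
The default neuron-body rule can be sketched as follows; the function name and the threshold value are assumptions, not taken from the original implementation.

```python
def neuron_body_step(accumulator, dendritic_inputs, threshold=4):
    """One signaling step of a neuron body: sum the signals collected from
    neighboring dendrite cells and fire once the threshold is reached.
    The threshold value 4 is an arbitrary placeholder."""
    accumulator += sum(dendritic_inputs)
    if accumulator >= threshold:
        return 0, True       # fire and reset
    return accumulator, False
```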

Every cell has a gate, which is interpreted differently depending on the type of the cell. A neuron cell uses this gate to store its orientation, i.e. the direction in which its axon points. In an axon cell, the gate points to the neighbor from which the neural signals are received. An axon cell accepts input only from this neighbor, but makes its own output available to all its neighbors. In this way axon cells distribute information. The source of information is always a neuron cell. Dendritic cells collect information by accepting input from any neighbor. They pass their output (e.g. a Boolean OR of the binary inputs) only to the neighbor specified by their own gate. In this way dendritic cells collect and sum neural signals until the accumulated signal reaches the neuron cell.
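
A sketch of these gate semantics for axon and dendrite cells is given below, assuming 1-bit signals (as in the CoDi-1Bit variant) and a hypothetical Cell record; the data layout and function names are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Cell:
    gate: int                                              # index 0..5 of one of the six neighbors
    inputs: list = field(default_factory=lambda: [0] * 6)  # 1-bit signals received last step

def axon_output(cell: Cell) -> int:
    """An axon cell accepts input only via its gate neighbor
    and offers that value to all six neighbors."""
    return cell.inputs[cell.gate]

def dendrite_output(cell: Cell) -> int:
    """A dendrite cell collects from any neighbor (Boolean OR of the 1-bit inputs)
    and passes the result only to the neighbor selected by its gate."""
    return int(any(cell.inputs))
```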

Each axonal and dendritic cell belongs to exactly one neuron cell. This configuration of the CA-space is guaranteed by the preceding growth phase.

Synapses

The CoDi model does not use explicit synapses, because dendrite cells that are in contact with an axonal trail (i.e. have an axon cell as neighbor) collect the neural signals directly from the axonal trail. This results from the behavior of axon cells, which distribute to every neighbor, and from the behavior of the dendrite cells, which collect from any neighbor.

The strength of a neuron-to-neuron connection is represented by the number of neighboring axon–dendrite cell pairs shared by the two neurons. The exact structure of the network and the positions of these axon–dendrite neighbor pairs determine the time delay and strength (weight) of the connection. This implies that a single neuron-to-neuron connection can consist of several synapses with different time delays and independent weights.
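
Under an assumed data layout (dictionaries mapping positions to cell types and owning neurons), the implicit weight of a connection could be estimated by counting such axon–dendrite contacts, as in the following sketch.

```python
NEIGHBOR_OFFSETS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def connection_weight(cell_type, owner, src_neuron, dst_neuron):
    """Count axon cells of src_neuron that touch dendrite cells of dst_neuron.
    cell_type: dict (x, y, z) -> 'axon' / 'dendrite' / 'neuron' / 'blank'
    owner:     dict (x, y, z) -> id of the neuron the cell belongs to"""
    weight = 0
    for (x, y, z), kind in cell_type.items():
        if kind != "axon" or owner.get((x, y, z)) != src_neuron:
            continue
        for dx, dy, dz in NEIGHBOR_OFFSETS:
            n = (x + dx, y + dy, z + dz)
            # each axon/dendrite contact acts as one implicit synapse
            if cell_type.get(n) == "dendrite" and owner.get(n) == dst_neuron:
                weight += 1
    return weight
```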

Genetic encoding and growth of the network

CA-space with the chromosome during the growth phase, with two randomly positioned neuron cells (white), each with two dendrites and two axons. (Right) The beginning of the growth phase, after three CA steps.

The chromosome is initially distributed throughout the CA-space, so that every cell in the CA-space holds one instruction of the chromosome, i.e. one growth instruction; the chromosome thus belongs to the network as a whole. This distributed-chromosome technique of the CoDi model makes maximum use of the available CA-space and enables any type of network connectivity to be grown. The local connection of the grown circuitry to its chromosome allows local learning to be combined with the evolution of grown neural networks.

 
A growing neuron in the CoDi-model with two dendrites and two axons. The arrows inside the axonal and dendritic signal trails indicate the direction of information flow during the growth phase.

Growth signals are passed to the direct neighbors of the neuron cell according to its chromosome information. Blank neighbors that receive a growth signal turn into either an axon cell or a dendrite cell; the growth signal carries the cell type of the cell that is to be grown. To decide in which directions axonal or dendritic trails should grow, the grown cells consult their chromosome information, which encodes the growth instructions. These growth instructions can use an absolute or a relative directional encoding. An absolute encoding masks the six neighbors (i.e. directions) of a 3D cell with six bits. After a cell is grown, it accepts growth signals only from the direction from which it received its first signal; this reception direction is stored in the gate of the cell's state.
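
A sketch of one growth step under the absolute encoding is given below; the data layout and names are assumptions, and details such as signal timing and growth-signal collisions are omitted.

```python
NEIGHBOR_OFFSETS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def grow_step(cell_type, growth_mask, gate, frontier):
    """One CA growth step under an absolute directional encoding.

    cell_type:   dict (x, y, z) -> 'blank' / 'neuron' / 'axon' / 'dendrite'
    growth_mask: dict (x, y, z) -> 6-bit int, the chromosome instruction stored there
    gate:        dict (x, y, z) -> direction index a cell received its first signal from
    frontier:    list of (position, grown_type) pairs emitted in the previous step
    """
    new_frontier = []
    for (x, y, z), grown_type in frontier:
        mask = growth_mask.get((x, y, z), 0)
        for i, (dx, dy, dz) in enumerate(NEIGHBOR_OFFSETS):
            if not (mask >> i) & 1:              # this direction is masked out
                continue
            n = (x + dx, y + dy, z + dz)
            if cell_type.get(n) == "blank":      # only blank cells can be recruited
                cell_type[n] = grown_type        # becomes 'axon' or 'dendrite'
                gate[n] = i ^ 1                  # offsets are paired (+/-), so i ^ 1 points back
                new_frontier.append((n, grown_type))
    return new_frontier
```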

Implementation as a partitioned CA

State representation in the CoDi model. During the growth phase, six of the bits store the chromosome's growth instructions; the same six bits later store the activity of a neuron cell during the signaling phase.

The state of a CoDi cell has two parts, which are treated in different ways. The first part contains the cell's type and activity level; the second part serves as an interface to the cell's neighborhood and holds the input signals from the neighbors. A characteristic of this CA is that only part of a cell's state, namely the signal, is passed to its neighbors, and only to those neighbors specified in the fixed part of the cell state. The CA is called partitioned because the state is partitioned into these two parts, the first fixed and the second variable for each cell.

The advantage of this partitioning technique is that the amount of information defining the new state of a CA cell is kept to a minimum, because redundant information exchange is avoided.
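
The bit layout below is an assumption used only to illustrate the partitioning into a fixed part and a variable neighborhood interface; the exact layout of the original implementation may differ.

```python
# Assumed field widths: 2 bits for the cell type, 3 bits for the gate,
# 6 bits reused for growth instructions / neuron activity, 6 bits of neighbor inputs.
TYPE_BITS, GATE_BITS, CHROM_BITS, INPUT_BITS = 2, 3, 6, 6

def pack_state(ctype, gate, chrom_or_activity, inputs):
    """Pack the fields into one integer; the fixed part sits in the low bits."""
    state = ctype                                            # 2 bits: blank/neuron/axon/dendrite
    state |= gate << TYPE_BITS                               # 3 bits: one of six directions
    state |= chrom_or_activity << (TYPE_BITS + GATE_BITS)    # 6 bits, reused per phase
    state |= inputs << (TYPE_BITS + GATE_BITS + CHROM_BITS)  # 6 bits: neighbor signals (variable part)
    return state

def unpack_inputs(state):
    """Extract only the variable part that is exchanged with the neighborhood."""
    return (state >> (TYPE_BITS + GATE_BITS + CHROM_BITS)) & ((1 << INPUT_BITS) - 1)
```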

Implementation in hardware

Since CAs are only locally connected, they are well suited to implementation on massively parallel hardware. The CoDi model was designed with the objective of implementing it directly in hardware (FPGAs). The CA was therefore kept as simple as possible: the state is specified by a small number of bits, the CA rules are few in number, and each cell has few neighbors.

The CoDi model was implemented in the FPGA-based CAM-Brain Machine (CBM) by Korkin.[2]

History

CoDi was introduced by Gers et al. in 1998.[1] A specialized parallel machine based on FPGA hardware, the CAM-Brain Machine (CBM), was developed by Korkin et al. to run the CoDi model on a large scale.[2] De Garis conducted a series of experiments on the CBM evaluating the CoDi model. The original model, in which learning is based on evolutionary algorithms, was augmented by Schwarzer with a local learning rule based on feedback from dendritic spikes.[3]

References

  1. Gers, Felix; de Garis, Hugo; Korkin, Michael (1998). "CoDi-1Bit: A simplified cellular automata based neuron model". Artificial Evolution. Lecture Notes in Computer Science. Vol. 1363. pp. 315–333. CiteSeerX 10.1.1.2.17. doi:10.1007/BFb0026610. ISBN 978-3-540-64169-8.
  2. de Garis, Hugo; Korkin, Michael; Fehr, Gary (2001). "The CAM-Brain Machine (CBM): An FPGA Based Tool for Evolving a 75 Million Neuron Artificial Brain to Control a Lifesized Kitten Robot". Autonomous Robots. 10 (3): 235–249. doi:10.1023/A:1011286308522. ISSN 0929-5593. S2CID 28589336.
  3. Schwarzer, Jens; Müller-Schloer, Christian (2004). Lernverfahren für evolutionär optimierte Künstliche Neuronale Netze auf der Basis Zellulärer Automaten. Logos Verlag Berlin. pp. 125–. ISBN 9783832506285. Retrieved 7 January 2013.