Bio-inspired computing, short for biologically inspired computing, is a field of study which seeks to solve computer science problems using models of biology. It relates to connectionism, social behavior, and emergence. Within computer science, bio-inspired computing relates to artificial intelligence and machine learning. Bio-inspired computing is a major subset of natural computation.

History

Early Ideas

The ideas behind biological computing trace back to 1936 and the first description of an abstract computer, now known as the Turing machine. Turing described this abstract construct in terms of a biological specimen: he imagined a mathematician with three important attributes.[1] The mathematician always has a pencil with an eraser, an unlimited supply of paper, and a working set of eyes. The eyes allow the mathematician to see and perceive any symbols written on the paper, while the pencil allows him to write and erase any symbols he wants. Lastly, the unlimited paper allows him to store anything he wants in memory. Using these ideas Turing was able to describe an abstraction of the modern digital computer. However, Turing noted that anything able to perform these functions can be considered such a machine, and he even argued that electricity should not be required to describe digital computation and machine thinking in general.[2]
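
To make the abstraction concrete, the Python sketch below simulates a minimal Turing machine: a dictionary stands in for the unlimited paper, an integer head position for the eyes, and a finite rule table for the pencil-wielding mathematician. The particular rule table (appending a 1 to a unary number) is an illustrative assumption, not Turing's original example.

    # A minimal Turing machine sketch: an unlimited "paper" (tape), a read/write
    # "pencil" (head), and a finite table of rules. The unary-increment program
    # below is an illustrative choice, not Turing's original formulation.

    def run_turing_machine(tape, rules, state="start", head=0, max_steps=1000):
        tape = dict(enumerate(tape))           # sparse tape: blank cells default to "_"
        for _ in range(max_steps):
            if state == "halt":
                break
            symbol = tape.get(head, "_")       # the "eyes" read the current cell
            write, move, state = rules[(state, symbol)]
            tape[head] = write                 # the "pencil" writes (or erases)
            head += 1 if move == "R" else -1
        return "".join(tape[i] for i in sorted(tape)).strip("_")

    # Rules for appending one "1" to a unary number: skip over the 1s, then write a 1.
    rules = {
        ("start", "1"): ("1", "R", "start"),
        ("start", "_"): ("1", "R", "halt"),
    }

    print(run_turing_machine("111", rules))    # -> "1111"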

Neural Networks

First described in 1943 by Warren McCulloch and Walter Pitts, neural networks are a prevalent example of biological systems inspiring the creation of computer algorithms.[3] They showed mathematically that a system of simple neurons can implement basic logical operations such as conjunction, disjunction and negation, and, further, that a network of such neurons can carry out any computation that requires only finite memory. Around 1970 research on neural networks slowed down, and many consider the 1969 book by Marvin Minsky and Seymour Papert to be the main cause.[4][5] Their book showed that single-layer neural network models could only represent Boolean functions that are true only above a certain threshold value, known as threshold functions, and that a large class of functions cannot be represented this way, meaning that many systems cannot be modeled by such networks. A 1986 book by David Rumelhart and James McClelland brought neural networks back into the spotlight by demonstrating the back-propagation algorithm, which allowed the development of multi-layered neural networks that did not suffer from those limits.[6]
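
As an illustration of that result, the minimal Python sketch below builds McCulloch-Pitts-style threshold units for conjunction, disjunction and negation; the particular weights and thresholds are illustrative choices. XOR is included only to show the kind of non-threshold function Minsky and Papert highlighted, computed here by composing two layers of units.

    # McCulloch-Pitts-style threshold units: a unit fires (outputs 1) when the
    # weighted sum of its binary inputs reaches its threshold. The weights and
    # thresholds below are illustrative choices that realize AND, OR and NOT.

    def threshold_unit(inputs, weights, threshold):
        return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

    def AND(a, b):
        return threshold_unit([a, b], [1, 1], threshold=2)

    def OR(a, b):
        return threshold_unit([a, b], [1, 1], threshold=1)

    def NOT(a):
        return threshold_unit([a], [-1], threshold=0)

    # XOR is not a threshold function of its inputs (the kind of limitation
    # highlighted for single-layer models), but it can be built by composing
    # the units above in two layers.
    def XOR(a, b):
        return AND(OR(a, b), NOT(AND(a, b)))

    print([(a, b, XOR(a, b)) for a in (0, 1) for b in (0, 1)])
    # -> [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]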

Ant Colonies

Douglas Hofstadter in 1979 described the idea of a biological system capable of performing intelligent computations even though the individuals comprising it are not themselves intelligent.[7] More specifically, he gave the example of an ant colony that can carry out intelligent tasks collectively although no individual ant can, a phenomenon known as "emergent behavior." Azimi et al. in 2009 showed that what they described as the "ant colony" algorithm, a clustering algorithm able to determine the number of clusters on its own, produces final clusters highly competitive with those of traditional algorithms.[8] Lastly, Hölldobler and Wilson in 2009 concluded from historical data that ants have evolved to function as a single "superorganism" colony.[9] This is an important result because it suggests that group-selection evolutionary algorithms, coupled with algorithms similar to the "ant colony" algorithm, could potentially be used to develop more powerful algorithms.
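
The toy Python sketch below is not the clustering method of Azimi et al.; it is only a generic pheromone model, under assumed parameters, showing how many simple agents following a deposit-and-evaporate rule collectively converge on the shorter of two paths, i.e. emergent, colony-level behavior.

    import random

    # A toy pheromone model (not the Azimi et al. clustering algorithm): many
    # simple "ants" repeatedly choose between two paths of different length.
    # Shorter trips deposit pheromone more strongly, so the colony as a whole
    # converges on the short path even though no single ant compares the paths.

    random.seed(0)
    path_lengths = {"short": 1.0, "long": 2.0}           # assumed toy setup
    pheromone = {"short": 1.0, "long": 1.0}
    evaporation = 0.05

    for step in range(200):
        for _ in range(10):                               # 10 ants per step
            # Choose a path with probability proportional to its pheromone level.
            total = sum(pheromone.values())
            r = random.uniform(0, total)
            path = "short" if r < pheromone["short"] else "long"
            pheromone[path] += 1.0 / path_lengths[path]   # shorter path -> stronger deposit
        for p in pheromone:                               # pheromone evaporates over time
            pheromone[p] *= (1 - evaporation)

    print({p: round(v, 2) for p, v in pheromone.items()})
    # Most pheromone ends up on the short path: an emergent, colony-level choice.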

Areas of research

Some areas of study in biologically inspired computing, and their biological counterparts:

Bio-Inspired Computing Topic – Biological Inspiration
  • Genetic algorithms – Evolution
  • Biodegradability prediction – Biodegradation
  • Cellular automata – Life
  • Emergence – Ants, termites, bees, wasps
  • Artificial neural networks – Biological neural networks
  • Artificial life – Life
  • Artificial immune systems – Immune system
  • Rendering (computer graphics) – Patterning and rendering of animal skins, bird feathers, mollusk shells and bacterial colonies
  • Lindenmayer systems – Plant structures
  • Communication networks and communication protocols – Epidemiology
  • Membrane computers – Intra-membrane molecular processes in the living cell
  • Excitable media – Forest fires, "the wave", heart conditions, axons
  • Sensor networks – Sensory organs
  • Learning classifier systems – Cognition, evolution

Artificial intelligence

Bio-inspired computing can be distinguished from traditional artificial intelligence by its approach to computer learning. Bio-inspired computing uses an evolutionary approach, while traditional A.I. uses a 'creationist' approach. Bio-inspired computing begins with a set of simple rules and simple organisms which adhere to those rules. Over time, these organisms evolve within simple constraints. This method could be considered bottom-up or decentralized. In traditional artificial intelligence, intelligence is often programmed from above: the programmer is the creator, making something and imbuing it with intelligence.

Virtual Insect Example

Bio-inspired computing can be used to train a virtual insect. The insect is trained to navigate unknown terrain in search of food using six simple rules:

  • turn right for target-and-obstacle left;
  • turn left for target-and-obstacle right;
  • turn left for target-left-obstacle-right;
  • turn right for target-right-obstacle-left;
  • turn left for target-left without obstacle;
  • turn right for target-right without obstacle.

The virtual insect, controlled by the trained spiking neural network, can then find food in any unknown terrain.[10] After several generations of rule application it is usually the case that some forms of complex behaviour emerge. Complexity gets built upon complexity until the result is something markedly complex, and quite often completely counterintuitive to what the original rules would be expected to produce (see complex systems). For this reason, when modeling the neural network, it is necessary to accurately model an in vivo network, through the live collection of "noise" coefficients that can be used to refine statistical inference and extrapolation as system complexity increases.[11]
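
For concreteness, the six rules above can also be written directly as a reflex rule table, as in the Python sketch below. This is only a plain lookup-table controller with an assumed sensor encoding ("left"/"right"/None), not the trained spiking neural network of the cited work.

    # The six navigation rules, written as a plain reflex controller.
    # This is only a rule-table sketch, not the trained spiking neural network
    # from the cited work; the sensor encoding ("left"/"right"/None) is assumed.

    def insect_turn(target_side, obstacle_side):
        """Return 'left' or 'right' given where the target and obstacle are sensed."""
        rules = {
            ("left",  "left"):  "right",   # target and obstacle both left  -> turn right
            ("right", "right"): "left",    # target and obstacle both right -> turn left
            ("left",  "right"): "left",    # target left, obstacle right    -> turn left
            ("right", "left"):  "right",   # target right, obstacle left    -> turn right
            ("left",  None):    "left",    # target left, no obstacle       -> turn left
            ("right", None):    "right",   # target right, no obstacle      -> turn right
        }
        return rules[(target_side, obstacle_side)]

    print(insect_turn("left", "right"))   # -> "left"
    print(insect_turn("right", None))     # -> "right"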

Natural evolution is a good analogy to this method: the rules of evolution (selection, recombination/reproduction, mutation and, more recently, transposition) are in principle simple, yet over millions of years they have produced remarkably complex organisms. A similar technique is used in genetic algorithms.
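
A minimal genetic algorithm sketch along those lines is shown below, using only selection, recombination and mutation; the task (maximizing the number of 1-bits in a bit string) and all parameters are illustrative assumptions.

    import random

    # A minimal genetic algorithm: selection, recombination (crossover) and
    # mutation applied to bit-string "organisms". The fitness function and all
    # parameters are illustrative choices.

    random.seed(1)
    GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 30, 40, 0.02

    def fitness(genome):
        return sum(genome)                         # count of 1-bits

    def crossover(a, b):
        cut = random.randrange(1, GENOME_LEN)      # single-point recombination
        return a[:cut] + b[cut:]

    def mutate(genome):
        return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # Selection: the fitter half of the population becomes the parent pool.
        parents = sorted(population, key=fitness, reverse=True)[:POP_SIZE // 2]
        population = [mutate(crossover(*random.sample(parents, 2))) for _ in range(POP_SIZE)]

    print(max(fitness(g) for g in population))     # typically approaches the maximum of 20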

Brain-inspired computing

Brain-inspired computing refers to computational models and methods that are mainly based on the mechanisms of the brain, rather than completely imitating the brain. The goal is to enable machines to realize the various cognitive abilities and coordination mechanisms of human beings in a brain-inspired manner, and ultimately to reach or exceed the level of human intelligence.

Research

Artificial intelligence researchers are now aware of the benefits of learning from the brain's information processing mechanisms, and progress in brain science and neuroscience provides the necessary basis for doing so. Brain and neuroscience researchers, in turn, are trying to apply their understanding of brain information processing to a wider range of scientific fields. The development of the discipline benefits from the push of information technology and smart technology, and in turn brain science and neuroscience will inspire the next transformation of information technology.

The influence of brain science on Brain-inspired computing

Advances in brain science and neuroscience, especially with the help of new technologies and new equipment, allow researchers to obtain multi-scale, multi-type biological evidence about the brain through different experimental methods, and to begin revealing the structural and functional basis of biological intelligence from different aspects. From microscopic neurons, synaptic working mechanisms and their characteristics, to mesoscopic network connection models, to the connections between macroscopic brain regions and their synergistic characteristics, the multi-scale structural and functional mechanisms of the brain derived from these experimental and mechanistic studies will provide important inspiration for building future brain-inspired computing models.[12]

Brain-inspired chip

Broadly speaking, a brain-inspired chip is a chip designed with reference to the structure of human brain neurons and the cognitive mode of the human brain. The "neuromorphic chip" is a brain-inspired chip whose structure is designed with reference to the human brain neuron model and its tissue structure, and it represents a major direction of brain-inspired chip research. Along with the rise and development of "brain projects" in various countries, a large number of research results on neuromorphic chips have emerged, receiving extensive international attention and becoming well known to academia and industry. Examples include the EU-backed SpiNNaker and BrainScaleS, Stanford's Neurogrid, IBM's TrueNorth, and Qualcomm's Zeroth.

TrueNorth is a brain-inspired chip that IBM developed over nearly 10 years. The US DARPA program has funded IBM to develop pulsed (spiking) neural network chips for intelligent processing since 2008. In 2011, IBM first developed two cognitive silicon prototypes by simulating brain structures that could learn and process information like the brain. Each neuron of a brain-inspired chip is cross-connected with massive parallelism. In 2014, IBM released the second-generation brain-inspired chip, "TrueNorth." Compared with the first-generation chips, the performance of TrueNorth increased dramatically: the number of neurons increased from 256 to 1 million, and the number of programmable synapses increased from 262,144 to 256 million, with a total power consumption of 70 mW and a power density of 20 mW per square centimeter. At the same time, each TrueNorth core occupies only about 1/15 of the area of a first-generation core. IBM has since developed a prototype neuron computer that uses 16 TrueNorth chips and has real-time video processing capabilities.[13] The very high specifications and performance of the TrueNorth chip caused a great stir in the academic world when it was released.

In 2012, the Institute of Computing Technology of the Chinese Academy of Sciences (CAS) and the French research institute Inria collaborated to develop the "Cambrian" chip, the world's first processor architecture dedicated to deep neural networks.[14] The work won best-paper awards at ASPLOS and MICRO, top international conferences in the field of computer architecture, and its design method and performance have been recognized internationally. The chip can be regarded as an outstanding representative of brain-inspired chip research.

Challenges in Brain-Inspired Computing

Unclear understanding of brain mechanisms

The human brain is a product of evolution. Although its structure and information processing mechanisms are constantly being optimized, compromises in the evolutionary process are inevitable. The cranial nervous system is a multi-scale structure, and several important problems remain open regarding the mechanism of information processing at each scale, such as the fine connection structure at the neuron scale and the feedback mechanisms at the whole-brain scale. Therefore, even a simulation that comprehensively models neurons and synapses at only 1/1000 the scale of the human brain is still very difficult at the current level of scientific research.[15] Recent advances in brain simulation have linked individual variability in human cognitive processing speed and fluid intelligence to the balance of excitation and inhibition in structural brain networks, functional connectivity, winner-take-all decision-making and attractor working memory.[16]

Unclear Brain-inspired computational models and algorithms

In future research on cognitive brain computing models, it is necessary to model the brain's information processing system based on the results of multi-scale brain neural system data analysis, to construct brain-inspired multi-scale neural network computing models, and to simulate, at multiple scales, the brain's multi-modal intelligent behavioral abilities such as perception, self-learning and memory, and decision-making. Current machine learning algorithms are not flexible: they require large amounts of high-quality, manually labeled sample data, and training the models incurs a great deal of computational overhead. Brain-inspired artificial intelligence still lacks advanced cognitive ability and inferential learning ability.

Constrained Computational architecture and capabilities

Most existing brain-inspired chips are still based on the von Neumann architecture, and most chip manufacturing materials are still traditional semiconductors. The neural chip only borrows the most basic unit of brain information processing; mechanisms such as the fusion of storage and computation, the pulse (spike) discharge mechanism, the connection mechanism between neurons, and the interplay between information processing units at different scales have not yet been integrated into the study of brain-inspired computing architectures. An important international trend is to develop neural computing components such as memristors, memcapacitors, and sensory sensors based on new materials such as nanomaterials, in order to support the construction of more complex brain-inspired computing architectures. The development of brain-inspired computers and large-scale brain computing systems based on brain-inspired chips also requires a corresponding software environment to support their wide application.

See also

References

  1. ^ Turing, Alan (1936). On computable numbers : with an application to the Entscheidungsproblem. Mathematical Society. OCLC 18386775.
  2. ^ Turing, Alan (2004-09-09), "Computing Machinery and Intelligence (1950)", The Essential Turing, Oxford University Press, pp. 433–464, doi:10.1093/oso/9780198250791.003.0017, ISBN 978-0-19-825079-1, retrieved 2022-05-05
  3. ^ McCulloch, Warren; Pitts, Walter (2021-02-02), "A Logical Calculus of the Ideas Immanent in Nervous Activity (1943)", Ideas That Created the Future, The MIT Press, pp. 79–88, doi:10.7551/mitpress/12274.003.0011, ISBN 9780262363174, S2CID 262231397, retrieved 2022-05-05
  4. ^ Minsky, Marvin (1988). Perceptrons : an introduction to computational geometry. The MIT Press. ISBN 978-0-262-34392-3. OCLC 1047885158.
  5. ^ "History: The Past". userweb.ucs.louisiana.edu. Retrieved 2022-05-05.
  6. ^ McClelland, James L.; Rumelhart, David E. (1999). Parallel distributed processing : explorations in the microstructure of cognition. MIT Press. ISBN 0-262-18120-7. OCLC 916899323.
  7. ^ Hofstadter, Douglas R. (1979). Gödel, Escher, Bach : an eternal golden braid. Basic Books. ISBN 0-465-02656-7. OCLC 750541259.
  8. ^ Azimi, Javad; Cull, Paul; Fern, Xiaoli (2009), "Clustering Ensembles Using Ants Algorithm", Methods and Models in Artificial and Natural Computation. A Homage to Professor Mira’s Scientific Legacy, Lecture Notes in Computer Science, vol. 5601, Berlin, Heidelberg: Springer Berlin Heidelberg, pp. 295–304, doi:10.1007/978-3-642-02264-7_31, ISBN 978-3-642-02263-0, retrieved 2022-05-05
  9. ^ Wilson, David Sloan; Sober, Elliott (1989). "Reviving the superorganism". Journal of Theoretical Biology. 136 (3): 337–356. Bibcode:1989JThBi.136..337W. doi:10.1016/s0022-5193(89)80169-9. ISSN 0022-5193. PMID 2811397.
  10. ^ Xu Z; Ziye X; Craig H; Silvia F (Dec 2013). "Spike-based indirect training of a spiking neural network-controlled virtual insect". 52nd IEEE Conference on Decision and Control. pp. 6798–6805. CiteSeerX 10.1.1.671.6351. doi:10.1109/CDC.2013.6760966. ISBN 978-1-4673-5717-3. S2CID 13992150. {{cite book}}: |journal= ignored (help)
  11. ^ Joshua E. Mendoza. ""Smart Vaccines" – The Shape of Things to Come". Research Interests. Archived from the original on November 14, 2012.
  12. ^ Xu Bo; Liu Chenglin; Zeng Yi (2016). "Research status and development considerations of brain-inspired intelligence" [类脑智能研究现状与发展思考]. Bulletin of the Chinese Academy of Sciences. 31 (7): 793–802.
  13. ^ "The development history of brain-inspired chips in the United States" [美国类脑芯片发展历程]. Electronic Engineering & Product World.
  14. ^ Chen, Tianshi; Du, Zidong; Sun, Ninghui; Wang, Jia; Wu, Chengyong; Chen, Yunji; Temam, Olivier (2014). "DianNao". ACM SIGARCH Computer Architecture News. 42: 269–284. doi:10.1145/2654822.2541967.
  15. ^ Markram, Henry; Muller, Eilif; Ramaswamy, Srikanth; et al. (2015). "Reconstruction and simulation of neocortical microcircuitry". Cell. 163 (2): 456–492.
  16. ^ Schirner, Michael; Deco, Gustavo; Ritter, Petra (2023). "Learning how network structure shapes decision-making for bio-inspired computing". Nature Communications. 14 (2963): 2963. Bibcode:2023NatCo..14.2963S. doi:10.1038/s41467-023-38626-y. PMC 10206104. PMID 37221168.

Further reading

(the following are presented in ascending order of complexity and depth, with those new to the field suggested to start from the top)
