
Article from an old book – “How to Create an Optical Brain”

by admin

Good evening everyone!
Recently I got hold of a book with an article that intrigued me so much that I rushed to share it with the community. The book is old – 1986, if I'm not mistaken. Of course, the article is typical scientific speculation – the publication of an unproven hypothesis in order to attract attention – but it intrigued me all the same.
I'm publishing it straight from the scan, word for word. That's why there are LOTS of letters.
"HOW TO CREATE AN OPTICAL BRAIN" by V. M. ZAKHARCHENKO, G. V. SKROTSKY
Advances in neurophysiology in recent years have largely clarified the principles by which the brain – the most complex and mysterious natural phenomenon known to us – operates. According to the famous American scientist D. Hubel: "In the last decade neurobiology has become one of the most active branches of science. The consequence of this has been a veritable explosion of discoveries and insights."
On the other hand, the 1970s were characterized by the rapid development of microelectronics, optoelectronics, and optical information processing techniques. It was therefore only natural that attempts were made to use the achievements of modern techniques and technology to model the functioning of the brain and to create, on this basis, fundamentally new information processing systems. Thus, the combination of the capabilities of optoelectronics and certain methods of optical information processing made it possible to propose and substantiate a new idea: the idea of creating an optical brain.
As we know, the brain consists of nerve cells – neurons – which are connected with one another by neuronal processes and inter-neuron contacts – synapses. According to the latest data, there are at least 5·10^10 neurons in the brain. Despite their enormous number, neuronal bodies occupy only a few percent of the total brain volume. The rest of the space is occupied by inter-neuronal connections – nerve fibers of micron and submicron thickness. Each neuron of the cerebral cortex has up to several tens of thousands of connections that carry signals from other neurons. If the cumulative effect of these signals exceeds the neuron's trigger threshold, it is excited and generates an output signal. A neuron has only one output, but it branches into a great number of connections going to other neurons. The signal transmission coefficients of the connections are unequal (both in value and in sign), so other neurons receive completely different signals. Neurons can be compared to control centers that receive and distribute signals coming through inter-neuronal connections. There are at least 10^14 such connections in the brain. The understanding that synapses are among the main structural components of the brain, primarily determining its functional characteristics, is one of the most significant conclusions made by neurophysiologists. In confirmation of this, we can quote the famous neurophysiologist E. Kandel: "It is the conviction of many neurobiologists that eventually it will be proved that the unique properties of each individual – the ability to feel, think, learn and remember – are contained in strictly organized networks of synaptic interconnections between the neurons of the brain."
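To make the threshold mechanism concrete, here is a minimal sketch of such a neuron in Python (my own addition, not from the article); the weights stand in for the unequal, signed transmission coefficients of the incoming connections, and the numbers are invented.

```python
def neuron_output(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of input signals exceeds the trigger threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > threshold else 0

# Three incoming connections with unequal coefficients, one of them inhibitory (negative).
print(neuron_output([1, 1, 1], [0.6, 0.5, -0.3], threshold=0.7))  # -> 1
```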
Most of the brain, about 1,000 cm^3 of its 1,400 cm^3, is occupied by the cerebral cortex. It is gathered in folds and is about 3 mm thick. The entire cortex area is divided into functional information processing zones: visual, auditory, motor, etc. In their turn, the functional zones are divided into modules with an area of a fraction of a square millimeter and a height equal to the thickness of the cortex. Each module is responsible for processing a certain type of signal coming from certain receptors, such as a section of the retina.
The enormous variety of information coming from the senses to the brain about the properties of the environment is mapped onto a multitude of cortical neurons. Depending on the parameters of the incoming signal and its position in space, certain areas of the cortex are excited. The cortex is organized vertically in layers, and each neuron of one layer is predominantly connected with neurons of another layer. An ensemble of excited neurons of one layer sends signals to another layer; the second layer in turn produces an ensemble of excited neurons, and so on. Each cortical module is a local neural network that transforms information as it passes from input to output.
In such a simplified model of the brain, the problem of developing its artificial analogue can technically be divided into two parts: creation of artificial neurons and realization of a spatial structure of tens and hundreds of trillions of inter-neuron connections.
Various electronic models of neurons have been developed. With the help of modern integrated technology it is quite possible to manufacture them in sufficient quantity. Recreating the spatial structure of neuronal connections is an incomparably more difficult task. In the rich arsenal of microelectronic circuit technology, there are no methods that allow one to create systems in which each element has thousands or tens of thousands of connections with other elements. And not just any connections, but ones each with its own individual conductivity. To realize such a complex spatial structure of a huge number of intertwined connections, fundamentally new solutions are required.
The real practical way to solve this problem lies in optical modeling of neural structures. Light rays do not interact with each other, so the restrictions on how densely space can be filled with optical communication channels, and on the geometry of their placement, are completely removed. The techniques created during the development of holographic memory can be used for such modeling. Almost all of the numerous variants of holographic memory that exist today can be used, with greater or lesser modification. For example, the first experimental model of a neural network was based on the most common scheme of a holographic memory device, with a gas-discharge laser, a beam deflector and a rectangular matrix of holograms. The most promising approach for creating optical models of neural systems uses the capabilities of integrated micro- and optoelectronic technology. Therefore, let us consider as an example an optical neural network with holographic memory based on matrices of semiconductor lasers.
Information in such a memory is recorded on a light-sensitive medium in holograms (up to 1 mm in diameter) assembled into matrices. A matrix of semiconductor lasers is located in front of the matrix of holograms. A laser beam passing through a hologram splits into a set of light beams whose locations and intensities depend on the information recorded in the hologram. A matrix of photocells, which register the light signals, is located at some distance behind the matrix of holograms.
Now let us imagine that each laser is the output of a particular neuron. Its output signal – a beam – is split by the hologram into a set of light connections – beams going to the inputs of the neurons of the next layer, the photocells. The light connections differ in their weight – the intensity of the beam. All light signals going to a certain neuron are summed up by the photocell, whose output signal is proportional to the summed signal at its input. So the input of a neuron is a photocell, and the output is a laser plus a hologram on which the fan of connections from this neuron to all the neurons of the next layer is recorded. It remains to connect the input with the output, putting a threshold element between them, and we have a model of a neuron.
Let's place one more holographic memory matrix after the photocell matrix, so that the signals from the photocells of the first memory control the radiation of the semiconductor laser matrix of the second one. Then we put a third memory after it, and so on. As a result, we obtain a periodic structure equivalent to a sequence of neuron layers of the brain. Here, as in the brain, incoming information is passed from layer to layer, going through higher and higher stages of processing, whose program is determined only by the structure of connections recorded on the holograms. The density of these connections is equal to the density of information recorded on the holograms and is about 10^4 connections per 1 mm^2.
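Mathematically, each block of holograms in this cascade acts like a matrix of connection weights between one layer and the next. Below is a rough illustrative sketch (my own, with invented toy weights, not the authors' design): the signals of one layer's lasers are weighted by the holograms, summed on the photocells, and passed through threshold elements that drive the lasers of the next layer.

```python
def propagate(layer_signals, hologram_blocks, threshold=1.0):
    """Pass a vector of laser signals through a sequence of hologram weight matrices."""
    signals = layer_signals
    for weights in hologram_blocks:                            # one weight matrix per hologram block
        summed = [sum(w * s for w, s in zip(row, signals))     # each photocell sums its light inputs
                  for row in weights]
        signals = [1 if total >= threshold else 0 for total in summed]  # threshold elements fire lasers
    return signals

# Toy example: two hologram blocks linking layers of 3 -> 2 -> 2 neurons (weights invented).
block_1 = [[1.0, 0.5, 0.0],
           [0.0, 0.6, 0.6]]
block_2 = [[1.0, 0.0],
           [0.5, 0.5]]
print(propagate([1, 1, 0], [block_1, block_2]))  # -> [1, 0]
```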
To change the system of connections, it is enough to replace the block of holograms with another one. Even the brain created by nature does not have such a technical advantage. True, it has another advantage: all inter-neuron connections of the brain are flexible and can be changed in the process of human learning and the accumulation of life experience. The optical brain is trained beforehand; all its knowledge is contained in changeable blocks of holograms, in the structures of synaptic interconnections recorded on them. If one sets the task of creating a full analogue of the human brain, such differences are of course a disadvantage. But if we have in mind technical applications of artificial neural systems, for example in robotics, where serial production and fast changes of the robot's behavior program are required, these differences become an advantage.
The described system has one more advantage – modularity of construction, where a module is a holographic memory block. Let's consider the possible parameters of such a module. The density of connections recorded on holograms can reach 10,000 per square millimeter. This means that 1,000 holograms can be written on a plate with an area of 1 cm^2, each carrying 1,000 connections, which link 1,000 outputs of the neurons of one layer with 1,000 inputs of the neurons of another layer. Modern technology makes it possible to fabricate a matrix of 1,000 lasers on an area of 1 cm^2, and a matrix of a thousand photocells on the same area is already a solved problem for modern integrated technology. The task is made easier by the fact that neither the laser matrices nor the photocell matrices have external electrical connections, except, of course, for the power supply.
Consequently, the module considered – let's call it an optoneuron module – is equivalent to a layer of a thousand neurons with a million inter-neuron connections, contains a matrix of a thousand semiconductor lasers and a matrix of a thousand photocells, and has the form of a cube with a side of 1 cm. The response time of the module's elements does not exceed 10^-6 s, and the number of its elements roughly corresponds to the number of neurons in one layer of a neural module of the cerebral cortex. The modules can be used like building blocks to assemble complex neural structures.
Let's try to estimate the size of an optoneuron model of a brain containing 5·10^10 neurons and 5·10^13 inter-neuron connections. To build such an opto-brain we would need 5·10^7 thousand-neuron modules with a total volume of 50 m^3. The volume of a modern computer with its full set of equipment is approximately the same. Of course, in comparison with the human brain, which has a volume of about 1.5 liters, the optical brain loses by a factor of roughly 30,000. But we should not forget that in the speed of its elements, and consequently in computing power, it surpasses the human brain by a factor of 10^4 – 10^5.
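For the curious, here is the article's size estimate written out as explicit arithmetic (my own restatement; the constants are the ones quoted in the text).

```python
NEURONS_PER_MODULE = 1_000
CONNECTIONS_PER_MODULE = 1_000 * 1_000   # 10^6 connections recorded in one module's holograms
MODULE_VOLUME_CM3 = 1.0                  # a cube with a 1 cm side
BRAIN_NEURONS = 5e10                     # neurons assumed for the opto-brain model
BRAIN_VOLUME_M3 = 1.5e-3                 # ~1.5 liters

modules = BRAIN_NEURONS / NEURONS_PER_MODULE       # 5 * 10^7 modules
volume_m3 = modules * MODULE_VOLUME_CM3 / 1e6      # 10^6 cm^3 per m^3 -> ~50 m^3
print(modules, volume_m3, round(volume_m3 / BRAIN_VOLUME_M3))  # ~5e7, ~50, ~33000 times larger
print((1 / 1e-6) / 100)  # element speed: 10^-6 s response time vs ~100 Hz in the brain -> ~10^4
```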
Consider now another problem. It is not enough to make an optical brain; to breathe life into it, we need to fill it with information content – to determine the structure of its light connections. Then the optical brain will come to life and do the work determined by the character of its connections: translate from Russian into English, or operate a spaceship, or analyze visual images, and so on. But determining the structure of the brain's connections is much more difficult than creating the artificial brain itself. There is a direct analogy with computer technology: the cost of computer software is several times higher than the cost of the machine itself.
There are two main ways to create algorithms for information processing in artificial neural systems. The first requires studying the principles and schemes of information processing in the brain by the methods of neurophysiology. This kind of work is actively being carried out. An example is the research on the principles of visual information processing in the brain carried out by the American scientists D. Hubel and T. Wiesel, who were awarded the Nobel Prize in Physiology or Medicine in 1981. The second way is to devise such algorithms directly, without copying the brain; the simplest of these include, for example, the keyword search algorithms used in most current information retrieval systems.
Let us consider a variant of such an algorithm, developed and practically implemented in an optoneuron system for recognizing the search images of documents. Despite its simplicity, this algorithm resembles in many ways the intelligent operations performed by a human searching for information.
Imagine that you are searching a library's catalog for literature on a particular query, e.g. "Designing transistor radios". Let us see how you go about your work. First, of course, you read the text of the query, and in the process of reading you turn sequences of letters into words denoting concepts. This is the first stage of information processing. Then you recall words close in meaning. As a result, your consciousness fixes not only the query words, but many other words and concepts associated with them. For example, if, looking through the catalog, you come across a card for the book "Development of portable radio equipment", you put it aside even though this card contains different words from those in the query, because the word "development" is associated with "design", the word "portable" most likely means that the equipment is "transistorized", and so on. So query enrichment is the second stage of information processing, and it uses your knowledge of the topic. Finally, the third stage of processing is assessing the semantic proximity of the content of the catalog cards to the query.
We have analyzed the human information search process and distinguished three main stages in it. Now let us try to develop the structure of a neural information retrieval system based on this analysis. Let us write down the scheme of information processing: letters – words – associative set of words – cards with book addresses. Four forms of information, and three stages of processing in the transition from one form to another. In neural systems, information is transformed as it passes from one layer to another. So our optoneuron system must contain four neuron layers and three matrices of holograms with inter-neuron connections, filling the three interlayer gaps.
The first layer is letters. Each neuron of the first layer corresponds to a certain letter of the alphabet (taking into account its place in the word). The second layer is words. Each neuron of the second layer corresponds to a certain word from the used dictionary. The third layer is also words. Finally, the fourth layer is search objects – catalog cards. Each neuron of the fourth layer corresponds to a particular catalog card.
Now let's look at the inter-neuronal connections, starting with those between the first and second layers. A neuron of the second layer is connected with a neuron of the first layer if the corresponding letter occurs in the corresponding word. The optical connections between the second and third layers are a reflection of the associative connections between words in the human brain: if there is an associative connection between two words, the corresponding neuron of the second layer is connected by a light connection of a certain intensity with the corresponding neuron of the third layer. The third set of connections, between words and search objects, reflects the sets of keywords contained on the cards. If a card contains a certain keyword, the neuron of this word in the third layer is connected by a light beam with the neuron of this card in the fourth layer.
By writing the holograms with inter-neuron connections, we have thereby entered the necessary information into the system's memory. Now let's look at how it works. As you type in the letters that make up the query words, the corresponding neurons of the first layer are excited. The lasers at the outputs of these neurons are turned on. The holograms split the lasers' emission into sets of beams, which go to the inputs of the neurons of the second layer according to the scheme of inter-neuron connections. Those neurons whose inputs receive a total signal exceeding the actuation threshold are excited. The ensemble of excited neurons of the second layer corresponds to the set of query words. The light connections of the neurons of this layer reach the neurons of the third layer and excite some of them as well. The ensemble of excited neurons of the third layer corresponds to the associated set of words; their light signals in turn excite part of the fourth layer, and the ensemble of excited neurons of the fourth layer corresponds to the catalog cards that meet the request. The lasers switched on at the outputs of the neurons of this layer indicate the cards that have been found.
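Purely as an illustration, here is a toy sketch of the four-layer retrieval scheme in Python (my own drastic simplification, not the authors' implementation; the dictionary, associations and catalog cards are invented, and plain sets of words stand in for ensembles of excited neurons and their light connections).

```python
QUERY = "design transistor radios"

# Layer 2: the dictionary of words the system knows (invented for this toy example).
DICTIONARY = ["design", "transistor", "radios", "development", "portable"]

# Layers 2 -> 3: associative connections between words ("knowledge of the topic").
ASSOCIATIONS = {
    "design": {"design", "development"},
    "transistor": {"transistor", "portable"},
    "radios": {"radios"},
}

# Layers 3 -> 4: the keywords recorded on each catalog card.
CARDS = {
    "Designing transistor radios": {"design", "transistor", "radios"},
    "Development of portable radio equipment": {"development", "portable", "radios"},
    "Vacuum-tube amplifiers": {"amplifiers"},
}

def excited_words(query):
    """Layers 1 -> 2: excite word neurons whose letters (with positions) occur in the query."""
    letters = {(i, ch) for word in query.split() for i, ch in enumerate(word)}
    return {w for w in DICTIONARY if all((i, ch) in letters for i, ch in enumerate(w))}

def retrieve(query, threshold=2):
    """Propagate the query through the four layers and return the cards that fire."""
    words = excited_words(query)                                          # layer 2
    associated = set().union(*(ASSOCIATIONS.get(w, {w}) for w in words))  # layer 3
    # Layer 4: a card neuron fires if enough of its keywords received a signal.
    return [card for card, keys in CARDS.items() if len(keys & associated) >= threshold]

print(retrieve(QUERY))  # -> both radio cards are found, the amplifier card is not
```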
Let us compare the capabilities of modern computing technology, the human brain, and the optical brain. The comparison is based on two key parameters: the speed of information processing and the memory capacity. For a computer using a digital mechanism of information processing, these parameters are determined by the number of arithmetic operations per second and the memory capacity in bits. For the brain, which works on different principles, these parameters are not directly defined. We will therefore consider the computational power of the brain to be equal to the power of the computer that would be needed to simulate its operation, and its memory capacity to be equal to the binary memory of the computer in which the information stored in the brain's neural connections could be recorded. This is all the more natural because the main tool for modeling neural systems today is the computer: the start and end addresses of each connection between neurons, its weight, the neuron excitation thresholds, and so on are written into the machine's memory.
The signal at the output of a communication channel is equal to the product of the input signal and the channel transmission coefficient. Therefore, one analog multiplication operation is performed each time a signal is transmitted over an inter-neuronal connection. The signal is then summed with the others at the input of the neuron. So for each act of a signal passing through an inter-neuronal connection there is one analog multiplication operation and one addition operation. The number of addition and multiplication operations performed simultaneously by the whole brain is equal to the number of its inter-neuronal connections, and the total computing power of the brain is equal to the number of inter-neuronal connections multiplied by the signal repetition frequency. When simulating brain activity on a computer, all these operations are performed digitally, and the required power of the computer is no less than the computing power of the brain. If we assume that the number of inter-neuron connections in the brain is 10^14 and the signal repetition rate is 100 s^-1, then the equivalent computing power of the brain is 10^16 operations per second.
The amount of memory is determined by the bit depth of the binary numbers encoding the connection addresses and the total number of connections. For 10^14 connections, the start and end addresses of each connection require on the order of 50 binary bits, and the total memory capacity comes to about 10^16 bits. The computational power of an artificial brain built from optoelectronic modules with a signal frequency of at least 10^6 s^-1 is approximately 10^20 operations per second. The usual computing power of a computer is about 10^9 operations per second. Between 10^9 and 10^20 operations per second there is not only a quantitative difference but a huge qualitative leap in information processing techniques and technology. To implement the parallel information processing algorithms created by nature, we need fundamentally different technical means, hundreds of millions of times more powerful than the existing ones.
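For convenience, the order-of-magnitude estimates from the last two paragraphs, written out as explicit arithmetic (my own restatement, using the figures quoted in the text):

```python
connections   = 1e14   # inter-neuronal connections in the brain
brain_rate    = 1e2    # signal repetition rate in the brain, s^-1
optical_rate  = 1e6    # signal rate of the optoelectronic modules, s^-1
address_bits  = 50     # ~50 binary bits for the start and end addresses of one connection

brain_power   = connections * brain_rate      # ~10^16 operations per second
optical_power = connections * optical_rate    # ~10^20 operations per second
memory_bits   = connections * address_bits    # ~5*10^15, i.e. on the order of 10^16 bits
print(brain_power, optical_power, memory_bits)
```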
The idea of an optical brain meets precisely these requirements. Why an optical brain? Because holography and optoelectronics are currently the only realistic means for modeling the complex spatial structures of inter-neuronal connections in the brain. And also because the creation of the optical brain is within the capabilities of today's technology, not tomorrow's.
