Leon Chua -- father of Amy Chua -- conceived the memristor in a paper back in 1971. The memristor is a resistor with a memory of an earlier state: it behaves differently depending upon its history. Because inter-neuronal synapses also typically behave differently depending upon their histories, the memristor is often seen as a building block for creating more brain-like computers.
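The history dependence is the whole trick, and it is easy to make concrete in code. The toy model below is only a sketch -- the resistance range and the charge-to-state rule are arbitrary choices, loosely in the spirit of a linear-drift model rather than any real device:

```python
# Toy memristor: a resistor whose value depends on the charge that has
# already passed through it. Illustrative only -- not device physics.

class ToyMemristor:
    def __init__(self, r_on=100.0, r_off=16000.0, k=1e6):
        self.r_on = r_on    # resistance when the device is fully "on" (ohms)
        self.r_off = r_off  # resistance when fully "off" (ohms)
        self.k = k          # arbitrary sensitivity of the state to charge
        self.x = 0.0        # internal state in [0, 1]; starts fully "off"

    def resistance(self):
        # Resistance is a linear mix of the two limiting values.
        return self.x * self.r_on + (1.0 - self.x) * self.r_off

    def apply_voltage(self, volts, dt):
        # Current at the present resistance...
        amps = volts / self.resistance()
        # ...shifts the internal state, so the device "remembers"
        # the charge that has flowed through it.
        self.x = min(1.0, max(0.0, self.x + self.k * amps * dt))
        return amps


m = ToyMemristor()
print(m.resistance())            # 16000.0 -- high resistance at first
for _ in range(100):
    m.apply_voltage(1.0, 1e-4)   # drive it for a while
print(m.resistance())            # far lower now: the history is stored
```

Researchers are already simulating what a memristor-based computing system might look like: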
Memristors are resistors that "remember" the state they were in, which changes according to the current passing through them. They are expected to revolutionise the design and capabilities of electronic circuits and may even make possible brain-like architectures in silicon, since neurons behave like memristors.
Today, we see one of the first revolutionary circuits thanks to Yuriy Pershin at the University of South Carolina and Massimiliano Di Ventra at the University of California, San Diego, two pioneers in this field. Their design is a memristor processor that solves mazes and it is remarkably simple.
...Pershin and Di Ventra begin by creating a kind of a universal maze in the form of a grid of memristors, in other words an array in which each node is connected to another by a memristor and a switch. This can be made to represent any regular maze by switching off certain connections within the array.
Solving this maze is then simple. Simply connect a voltage across the start and finish of the maze and wait. "The current flows only along those memristors that connect the entrance and exit points," say Pershin and Di Ventra. This changes the state of those memristors allowing them to be easily identified. The chain of these memristors is then the solution.
That's potentially much quicker than other maze solving strategies which effectively work in series. "The maze is solved in a massively parallel way, since all memristors in the network participate simultaneously in the calculation," they say. _Technology Review_ via _Next Big Future_
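Their result is a simulation rather than hardware, and the procedure is simple enough to sketch in a few dozen lines. What follows is my own rough Python illustration of the idea as quoted above, not the authors' code: the maze is a tiny grid of identical memristors, 1 V is held between entrance and exit, Kirchhoff's equations give the branch currents, and the devices whose state the current has nudged are read out as the path. The grid size, resistance value, charge-to-state rule, and detection threshold are all arbitrary choices.

```python
import numpy as np

# Nodes of a tiny 3x3 grid maze. Each passage that is not walled off
# appears in 'open_edges' and holds one memristor (switch "on"); walls
# are simply absent edges (switch "off").
nodes = [(r, c) for r in range(3) for c in range(3)]
idx = {node: i for i, node in enumerate(nodes)}

open_edges = [
    ((0, 0), (0, 1)), ((0, 1), (0, 2)),   # the only entrance-to-exit path...
    ((0, 2), (1, 2)), ((1, 2), (2, 2)),
    ((0, 1), (1, 1)), ((1, 1), (1, 0)),   # ...plus two dead-end branches
    ((1, 1), (2, 1)), ((2, 1), (2, 0)),
]

g = 1.0 / 16e3                          # initial conductance of every memristor
state = {e: 0.0 for e in open_edges}    # memristive state; 0.0 = unchanged

# Conductance (Laplacian) matrix of the resistor network.
n = len(nodes)
L = np.zeros((n, n))
for a, b in open_edges:
    i, j = idx[a], idx[b]
    L[i, i] += g
    L[j, j] += g
    L[i, j] -= g
    L[j, i] -= g

# Hold the entrance at 1 V and the exit at 0 V, then solve Kirchhoff's
# equations for the voltages of the remaining (floating) nodes.
fixed = {idx[(0, 0)]: 1.0, idx[(2, 2)]: 0.0}
free = [i for i in range(n) if i not in fixed]
A = L[np.ix_(free, free)]
rhs = -L[np.ix_(free, list(fixed))] @ np.array(list(fixed.values()))

v = np.zeros(n)
for i, volt in fixed.items():
    v[i] = volt
v[free] = np.linalg.solve(A, rhs)

# Current flows only through the memristors linking entrance to exit.
# Let the charge it carries nudge each device's state, then read out
# the devices that changed: they trace the solution.
dt = 1.0                                # one "settling" interval, arbitrary
for a, b in open_edges:
    current = g * (v[idx[a]] - v[idx[b]])
    state[(a, b)] += abs(current) * dt  # charge passed shifts the state

solution = [edge for edge, s in state.items() if s > 1e-9]
print(solution)                         # the four passages from (0,0) to (2,2)
```

In this toy network the entrance-to-exit path is unique, so the only memristors that change state are the four along that path; the dead-end branches carry essentially no steady-state current, which is what makes the parallel read-out work.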
One of the problems with asking a physicist, engineer, or computer scientist to devise a brain-like computer is that persons trained strictly within these disciplines are not likely to know which elements of brain function should be "simplified" or "abstracted", and which elements should be closely copied.
Over the past 60+ years, the pursuit of artificial intelligence has been rife with failed promises and predictions. If we are not to go at least another 60 years without meaningful success, we will need researchers who are cross-trained in the multiple disciplines relevant to the problem.
The research described in the Technology Review article above was based upon the simulation of an array of memristors -- not on an actual memristor circuit. But even with real memristors, the circuit is simplistic in the extreme. The idea that one could assemble large numbers of simplified "synapses" into something that might behave like a biological brain -- in any meaningful way -- appears silly to anyone with even a basic understanding of how the brain works. And yet such silliness represents one of many parallel hopes for a so-far failed endeavour: artificial intelligence.
The synapse is not the basic unit of human intelligence or consciousness. The basic unit of human consciousness is something far less substantial and more ephemeral. It exists at logical levels well above the synaptic level, and it depends upon the simultaneous function of trillions of synapses of many distinct types, involving efferent, afferent, and re-entrant activity across those levels.
What the researchers describe in the Technology Review article is the simulation of a toy. Not the toy itself -- a simulation of the toy. The human brain is not a toy. Unless, of course, you are a god.