Scientists at the University of Cambridge placed physical constraints on an artificial intelligence system, similar to the physical and biological constraints under which human and other animal brains have to develop and operate. The system then developed some features of the brains of complex organisms in order to solve its tasks.
In a study published today in the journal Nature Machine Intelligence, Jascha Achterberg and Danyal Akarca from the Medical Research Council Cognition and Brain Sciences Unit (MRC CBSU) at the University of Cambridge worked with their colleagues to develop a simplified model of the brain and applied physical constraints to it before giving the system tasks. This approach could potentially be used to develop more efficient AI systems and even to better understand the human brain itself.
Instead of using real neurons or brain cells, they used computational nodes, because neurons and nodes perform similar functions: both take an input, transform it and produce an output. Also, a single node or neuron may connect to multiple others, with information flowing in and out of each of them.
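To make that analogy concrete, the sketch below shows what a generic computational node might look like in code. It is only an illustration of the input-transform-output idea, not the actual model used in the study; the weights, bias and tanh nonlinearity are assumptions chosen for simplicity.

```python
import numpy as np

def node_output(inputs, weights, bias):
    """A generic computational node: weight the incoming signals,
    sum them, and pass the result through a nonlinearity."""
    weighted_sum = np.dot(weights, inputs) + bias
    return np.tanh(weighted_sum)  # transformed output passed on to other nodes

# Example: one node receiving signals from three other nodes
incoming = np.array([0.2, -0.5, 0.9])
weights = np.array([0.4, 0.1, -0.3])
print(node_output(incoming, weights, bias=0.05))
```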
The physical constraint they placed on their system of computational nodes was similar to a constraint experienced by neurons in the brain: each node was given a specific location in a virtual space, and the further apart two nodes were, the more difficult it was for them to communicate.
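One common way to implement such a constraint, broadly in line with the description above, is to fix each node at coordinates in a virtual space and add a penalty to the network's training objective that grows with the distance a connection has to span. The sketch below assumes a simple Euclidean-distance-weighted penalty; the exact formulation used in the study may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes = 10

# Each node gets a fixed position in a 3D virtual space.
positions = rng.uniform(0, 1, size=(n_nodes, 3))

# Pairwise Euclidean distances between nodes.
distances = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)

# Connection weights between nodes (these would normally be learned).
weights = rng.normal(0, 0.1, size=(n_nodes, n_nodes))

# Distance-weighted penalty: long-range connections cost more, so training
# pressure favours short, local wiring.
wiring_cost = np.sum(np.abs(weights) * distances)
print(f"wiring cost: {wiring_cost:.3f}")
```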
After placing this constraint, they gave the system a task to complete: a simplified version of a maze navigation task of the kind typically given to animals such as rats and monkeys when studying their brains. The system had to combine multiple pieces of information to decide on the shortest route to the end of the maze.
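As a purely hypothetical illustration of what such a decision might look like, the toy example below encodes a current position and a goal position on a small grid and picks the single move that most shortens the remaining distance. The actual task in the study was more involved than this.

```python
def best_move(position, goal):
    """Toy maze step: choose the move (up/down/left/right) that most
    reduces the Manhattan distance to the goal."""
    moves = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

    def dist(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    return min(moves, key=lambda m: dist((position[0] + moves[m][0],
                                          position[1] + moves[m][1])))

print(best_move(position=(0, 0), goal=(2, 3)))  # prints "up" (ties broken by dict order)
```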
Source: Indian Express