Dr Will Browne
Artificial Intelligence from rat brains
Dr Will Browne says modern artificial intelligence is still very easily fooled.
“For artificial intelligence, a cat is just numbers dispersed throughout a network, so if you change a few of those input pixels, it’ll easily think it’s an ostrich.
“Currently, AI doesn’t know that an ostrich has got a narrower face and larger eyes. Maybe in four or five years, there’ll be some more discernment – a cat’s got whiskers, and a small nose...”
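The cat-to-ostrich confusion Will describes comes from what researchers call adversarial examples. As a rough sketch of how such a perturbation can be crafted – the fast gradient sign method is one standard technique, not necessarily the one Will has in mind, and `model` here is an assumed stand-in for a classifier – consider:

```python
# Sketch: the fast gradient sign method (FGSM) for adversarial images.
# Assumes `model` is a differentiable PyTorch classifier that returns
# logits for a batched image tensor; the names are illustrative.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    """Nudge every pixel by +/- epsilon in whichever direction
    increases the classifier's loss on the true label."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# To a person the result still looks like a cat; to the network, the
# "numbers dispersed throughout" it have shifted enough to say ostrich.
```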
Will leads an SfTI research team that’s trying to advance AI’s ability to learn, using rat brains as inspiration. The work combines three of the five main approaches to AI: connectionism, symbolism and evolutionary computation.
“AI really struggles to look at things at different levels of abstraction at the same time. Vertebrate brains have a functional architecture that allows them to take knowledge from simple and small-scale problems and then reuse it to solve complex problems.
“For example, in visual perception humans deploy the left and right sides of the brain together. The left hemisphere processes visual information at a local level – it effectively sees the wheel, the detail – while the right hemisphere processes it at a more global level and sees the abstraction: it’s a car,” Will says.
“It’s not limited to sensory processing. In language, the left hemisphere may activate the single, literal meanings of words or sentences, while the right hemisphere keeps alternative, metaphorical or figurative meanings in play. Once we’ve figured out that a person is making a statement, we’ll inhibit the option that it could have been a question.
“In humans, there’s more inhibition going on than excitatory signalling. We are constantly excluding what things are not, narrowing down what they could be and, finally, what they are.
“But you can think of AI as having nowhere near enough inhibitory signals – everything is exciting all the time. You could say AI is permanently drunk.”
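One toy way to picture what inhibition buys – purely an illustration, not anything from the team’s models – is a winner-take-all loop, where competing interpretations suppress one another until only the best-supported one stays active:

```python
# Toy winner-take-all dynamics: each candidate interpretation is
# inhibited in proportion to its rivals' activity.
import numpy as np

def winner_take_all(evidence, inhibition=0.2, steps=20):
    a = np.array(evidence, dtype=float)
    for _ in range(steps):
        rivals = a.sum() - a                          # competitors' activity
        a = np.maximum(a - inhibition * rivals, 0.0)  # inhibit, floor at zero
    return a

print(winner_take_all([0.9, 0.8, 0.7]))  # ~[0.28, 0, 0]: one survivor
# Remove the inhibition term and all three interpretations stay
# active at once – "exciting all the time".
```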
Will’s team reasons that if AI were built more like a human brain, with sides that talked to each other from different viewpoints, each taking control when most appropriate, then AI would get more stuff right than it does now.
“There are very interesting studies from the 60s and 70s, when surgeons were trialling treatments for epilepsy that involved cutting the corpus callosum, the part of the brain through which the right and left sides talk to each other. Patients would come out of that perfectly normal except for some specific thinking changes. They’d see something in their left visual field and know how to use it, but not what it was called. Or see it in the right visual field and be able to name it, but not know what it’s for.”
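A crude engineering analogue of that two-sided arrangement – an assumption about the flavour of the idea, not the team’s actual architecture – is a gated pair of modules, one tuned to detail and one to the overall picture, with whichever signal is stronger taking control:

```python
# Hypothetical sketch: two "hemisphere" modules view the same input
# differently, and a simple gate picks which one drives the decision.
import numpy as np

def detail_view(x):
    # "left side": responds to fine local structure (the wheel)
    return float(np.max(np.abs(np.diff(x))))

def gestalt_view(x):
    # "right side": responds to the input's overall level (the car)
    return float(abs(x.mean()))

def decide(x):
    # the side with the stronger signal takes control for this input
    scores = {"detail": detail_view(x), "gestalt": gestalt_view(x)}
    return max(scores, key=scores.get), scores

x = np.array([0.1, 0.9, 0.1, 0.9, 0.1])  # strong local variation
print(decide(x))                          # the detail view wins here
```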
The team is using data from previous international testing of rat brains as a more accessible model than a human brain.
“Those tests studied signals from the brains of wired-up rats that were fully awake as they raced through a series of mazes to find a treat – American rats get Cheerios; the UK ones get Sugar Puffs. The imaging outputs showed how different clumps of neurons fired up simultaneously as the rats made decisions, like ‘south-west corner, left-hand turn, second exit’, building a picture of where to find the treat.
“The rats don’t build up one clump of neurons per position. It’s not match-for-match thinking, where individual things are stored uniquely; they operate in patterns. A grid of neurons used as the basis of a map for getting round a maze is then reused as building blocks for working out relationships between other things.
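A back-of-the-envelope way to see why pattern coding scales better than one-clump-per-thing coding (the numbers are illustrative, not from the rat data):

```python
# Five "neurons" coding one position each cover five positions;
# the same five firing in combinations cover far more.
from itertools import combinations

neurons = ["A", "B", "C", "D", "E"]
pair_patterns = list(combinations(neurons, 2))
print(len(neurons), "neurons ->", len(pair_patterns), "pair patterns")  # 5 -> 10
# The same vocabulary of patterns can later be reused as building
# blocks for a different relational problem.
```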
“The big question in AI is: what makes it intelligent? Our research is pointing to the idea that we shouldn’t be trying to model every individual neuron of a brain to get an intelligent system, but rather looking to produce the building blocks that can form reusable patterns.”
In the lab, the team are working not with rats or humans, but with simulations.
“We’re starting with toy problems, where we know the answers, and seeing if the machine can work out what the actions should be based on that particular pattern of numbers. We are hiding patterns within patterns for the computer to find and recognise. The results are going well,” Will says.
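A classic “pattern hidden within a pattern” toy problem from evolutionary machine learning – one of the fields the team draws on, though whether it is their exact test is an assumption – is the Boolean multiplexer, where the rule for reading the answer is itself buried in the input:

```python
# The 6-bit multiplexer: the first two bits secretly address one of
# the remaining four data bits, and the answer is that bit's value.
def multiplexer6(bits):
    address = bits[0] * 2 + bits[1]
    return bits[2 + address]

print(multiplexer6([1, 0, 0, 0, 1, 0]))  # address '10' = 2 -> that data bit holds 1
```

Chaining small multiplexers so their outputs feed a larger one nests one pattern inside another, and the learner has to discover both levels.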
“The team that cracks an AI that understands the patterns underlying its decisions will be very popular.”
Will and his team will be publishing their findings soon.
Keep up to date by following us on Twitter @sftichallenge.