The struggle to create a microchip that can mimic the human brain and open a portal to another world
Tadi grew up in Hyderabad, India, and never had much interest in working in healthcare. His mother was a doctor, his father was a doctor, and even in an environment that particularly exalted the medical profession, you could argue that there were more than enough doctors for one family.
What Tadi decided to study after finishing an undergraduate degree in electrical engineering was special effects. “It’s a creative urge,” he says. “I wanted to find ways to make a simulated explosion or a burst of flame look real on-screen.”
In 2004 Tadi joined the Swiss Federal Institute of Technology in Lausanne (EPFL) to begin a master’s degree in VR computer graphics. His first project was to develop more realistic walking motion in virtual avatars. That project required understanding how the human brain controls locomotion, resulting in regular trips to a nearby hospital to study people with motor impairments, such as Parkinson’s disease.
“Being exposed to the injured brain and how it functions really helps you to then better understand the healthy brain,” Tadi explains.
He recalls one 19-year-old boy who had lost both arms in an accident. Like more than 80 per cent of amputees, the boy suffered from phantom pain, a condition in which an amputee feels pain in the location where the missing limbs would be. While the mechanisms behind phantom pain are poorly understood, one theory is that it is the brain’s response to the conflict between the sight of the missing limb and continuing signals from the nerves in its remaining portion, which suggest the whole limb is still present.
Tadi had the idea of introducing a pair of virtual limbs to resolve that conflict. He set up the patient with an EEG (electroencephalography) cap, which detects waves of brain activity through the skull, and used this to give him rough control over a pair of arms displayed on a screen in front of him. It didn’t matter that these were little more than a cartoon sketch of human limbs: the synchronisation between their movement and the boy’s thoughts alone was enough to resolve his brain’s internal conflict, and so to lessen that chronic pain. “It was super impactful for me,” Tadi says. “Particularly because he was just extremely vocal about how amazing this was.”
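To make the mechanism concrete, here is a minimal, hypothetical sketch of the kind of EEG-to-avatar mapping involved. Imagining movement suppresses the mu rhythm (8–12 Hz) over the motor cortex, and that suppression can be turned into a crude movement command. The sample rate, threshold logic and function names below are assumptions for illustration, not a description of the system Tadi built.

```python
import numpy as np

FS = 256  # EEG sample rate in Hz (assumed)

def mu_band_power(eeg_window: np.ndarray) -> float:
    """Return power in the 8-12 Hz mu band of a single EEG channel."""
    spectrum = np.abs(np.fft.rfft(eeg_window)) ** 2
    freqs = np.fft.rfftfreq(len(eeg_window), d=1.0 / FS)
    band = (freqs >= 8.0) & (freqs <= 12.0)
    return float(spectrum[band].mean())

def arm_command(eeg_window: np.ndarray, rest_power: float) -> float:
    """Map mu-rhythm suppression (imagined movement lowers mu power
    relative to rest) to a 0..1 'raise the virtual arm' command."""
    suppression = 1.0 - mu_band_power(eeg_window) / rest_power
    return float(np.clip(suppression, 0.0, 1.0))
```

Even a noisy command like this is enough for the “rough control” described above: what matters for the illusion is that the on-screen arms respond in step with the patient’s intent, not that the decoding is precise.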
He had encountered a new form of illusion. One that depended not on graphical fidelity, but on something more basic, more bodily: a category of neuroscience tricks, called embodied illusions, that alter not only how a person sees the world around them, but how they feel their own body within it.
In 2005, Tadi began a PhD in the lab of EPFL neuroscientist Olaf Blanke to investigate the mechanisms behind such illusions and how else they might be manipulated to aid in rehabilitation.
Blanke’s lab specialises in the mechanisms of multisensory integration, investigating how a person’s nervous system binds together all the inputs it receives from touch, sound, sight, smell and other sensory receptors across the body into a single unified experience. How, for example, our brains take the sound of a meow, the tactile sensation of fur and the sight of something cat-shaped, and combine all of these sensory inputs into the awareness of a single, living, breathing cat.
At the time, Blanke had just begun investigating ghosts: a symptom of a range of psychiatric disorders, such as schizophrenia, that gives patients the sense of an additional, unseen presence in the room. Blanke had realised that what every one of the patients he was studying had in common was damage to the areas of the brain responsible for the integration of multisensory signals.
“Think about how our limbs are represented in the brain,” he tells me. “There’s more than a single representation corresponding to your arm, right? You have proprioception, the sense of your arm’s location in space [provided by sensory receptors spread throughout the muscles and joints]. You can see it. You can hear it when it bumps into something. You can move it, and your brain will predict how these other inputs will change when you do so. That’s four different parts of the brain involved in understanding just your left arm.”
It is the brain’s successful combining of these inputs that underpins the difference between seeing an arm as a part of your own body, or experiencing it as a foreign object.
When this fails – when the timing at which your brain processes the sight of the arm does not match the proprioceptive system’s sense of its location – you get the kind of symptoms experienced by Blanke’s patients: the feeling that the body they see belongs not to them but to someone else.
Altering the synchronisation between different sensory inputs can disrupt people’s identification with their own body. Tadi and Blanke began to wonder whether it could also be used to place them inside a different one.
This had been done before, in a sense. In 1998, a group of researchers at the University of Pittsburgh discovered an effect called the rubber-hand illusion. First, the experimenter hides the participant’s real hand under the table. Then they place a rubber hand on top, and simultaneously stroke both the hidden real hand and the visible fake one with paintbrushes. The synchronisation between this tactile and visual input alone is sufficient for the participant to start to feel as though the rubber hand is their own. The effect can be dramatically illustrated by their panicked reaction when the experimenter suddenly tries to smash it with a hammer.
How much more, Tadi wondered, could be done with today’s virtual reality technology? So in 2006, under Blanke’s supervision, he took a head-mounted display and tried to extend the rubber-hand effect to the whole body. Participants were stroked on the back while watching a video stream of a virtual avatar, appearing a couple of metres in front of them, having its back stroked in synchronisation.
Several participants reported feelings of discomfort, but all of them experienced a drift in their sense of location, away from their own physical body and into the avatar itself. For the first time it had been demonstrated that synchronised stimulation could be used to trick the brain into adopting not just a new body part, but a new body altogether.
Imagine, Tadi thought, what that could mean for neurorehabilitation. And imagine what it could mean for virtual reality. So, in 2012, Tadi founded MindMaze. The plan: to combine what he’d learned about tricking the brain’s perception of its own body with the principle of neuroplasticity to encourage damaged brains, such as Ibbott’s, to rewire themselves.
MindMaze’s first product, the MindMotion Pro, was launched in 2015. Unlike the Kinect camera used in the MindMotion Go system that Andy Ibbott worked with, the MindMotion Pro’s camera, which received FDA clearance for use in US hospitals in May 2017, can detect a patient’s arms even when they’re lying motionless against a hospital bed. It then renders a simple, cartoonish equivalent on a portable screen placed at the patient’s bedside. By moving their real arms, patients can control these virtual ones to perform a range of tasks, from simply placing a virtual disk in a virtual tray to more complex challenges as their rehabilitation progresses.
The principle is the same as traditional rehabilitative physiotherapy: repeatedly performing normal actions, such as raising an arm, to strengthen the damaged neural connections involved, just as repeated exercise strengthens the muscles. But unlike with traditional post-stroke therapy, which typically involves manipulating wooden blocks, the MindMotion Pro can adjust the movements of the virtual arms to make it look as though the patient has been more successful than they have been.
That means it can accelerate rehabilitation by exploiting the way the brain regions involved in controlling a particular movement are often activated, not only when a person performs that movement, but also when they watch the same movement being performed.
“In the case of post-stroke patients, who cannot move their arm to do any kind of training at all, we can just ask them to move the other hand and show the avatar flipped as if it is the paretic arm that’s moving,” explains MindMaze’s head of neuroscience, Andrea Serino. “Even just seeing this activates the damaged motor circuits for that arm.”
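Here is a toy sketch of the two tricks just described – mirroring the healthy arm onto the paretic side, and gently amplifying progress so a patient appears more successful than they were. The joint representation, the gain value and the function names are hypothetical; MindMotion’s actual pipeline is not public.

```python
from dataclasses import dataclass

@dataclass
class Joint:
    x: float  # lateral offset from the body midline, in metres
    y: float  # height
    z: float  # depth

def mirror_to_paretic_side(joint: Joint) -> Joint:
    """Reflect a healthy-arm joint across the midline (x = 0), so the
    avatar's opposite arm appears to make the same movement."""
    return Joint(-joint.x, joint.y, joint.z)

def amplify_progress(start: Joint, current: Joint, gain: float = 1.3) -> Joint:
    """Exaggerate displacement from the starting pose by `gain`, making a
    partial movement look closer to completion than it really was."""
    return Joint(start.x + gain * (current.x - start.x),
                 start.y + gain * (current.y - start.y),
                 start.z + gain * (current.z - start.z))
```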
Beyond tricking unconscious brain mechanisms, the ability to adapt to challenging tasks has a more obvious impact: it makes things much more enjoyable. “Exploiting neuroplasticity requires many, many repetitions,” explains Tadi. “The fact that these challenges develop with progress makes the patient more motivated to keep practising, even when the physiotherapist isn’t present.”
Tadi’s next aim, and the motivation behind developing the cheaper, Kinect-compatible MindMotion Go, is to support patients in continuing this therapy over the long term, without needing to make regular hospital trips. While early physiotherapy, during the window of heightened post-stroke plasticity, makes the greatest difference to rehabilitative success, research is increasingly showing that the fully developed adult brain is not quite so rigid after all. Beyond maintaining function, ongoing practice after leaving hospital can continue to improve a patient’s abilities even years after a stroke – just as it did for Ibbott.
“When you get discharged from hospital currently, the question is what do you do next?” Tadi says. “With the MindMotion Go, patients like Andy won’t need to make constant trips to a clinic, they can just continue to practise frequently at home, without needing expert supervision.”
However, Tadi had realised that no matter the fidelity of the display, tricking the eyes alone is insufficient to fully convince a person that what they are seeing is real.
No passive VR film could ever have convinced Ibbott that he was running through the Sahara rather than lying in a hospital bed. No mere visual stimulation could compensate for the absence of the dry 50°C heat on his skin, the softness of sand sinking beneath his feet.
The body is the linchpin of our experienced reality. When you don a VR headset, for all that your eyes look into another world, the rest of your body remains firmly stuck in this one. Add binaural audio to replicate the spatial soundscape of that virtual world, and it feels significantly more tangible. But drive a virtual racing car around a virtual bend, and the tiny lakes of inner-ear fluid that guide the vestibular system’s experience of movement will remain stubbornly placid. You might be able to rotate your head to look around, but unless you’re using a high-end full-room tracking system, try to crouch, jump or even just walk, and nothing will happen.
MindMaze’s neurorehabilitation work has been only the first step in Tadi’s plan for the company. For all the years spent putting disordered brains back in touch with their bodies, Tadi never lost the passion that drew him towards VR and special effects: the urge to take ordinary brains outside of their bodily reality.
Ultimately, the aim of MindMaze is to combine this knowledge of how to trick the brain’s sense of embodiment with experience in developing motion-capture systems for neurorehabilitation to begin to solve the problem of creating a truly interactive, multi-sensory, virtual reality. “This,” he says, “is going to make the difference between just having a television stuck to your face and actually having a portal to another world.”
In March 2015, Tadi revealed a glimpse of that future at the Game Developers Conference in San Francisco: the MindLeap headset. Its forward-facing motion-tracking camera monitored the position of a user’s hands, while an EEG cap picked up neural activity through the skull to assess the wearer’s mental state. Virtual flames lapping round their virtual fingers changed colour according to the detected mood – from calm to agitated.
The latter feature may sound like nothing more than an expensive mood ring. But it builds on research showing that translating physiological signals, such as the rate of a heartbeat, onto a virtual limb strengthens the viewer’s identification with it in just the same way as the synchronised stroking of the original rubber-hand illusion.
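As an illustration of that principle (not of any shipping MindMaze feature), the sketch below pulses a virtual limb’s glow in time with the wearer’s detected heartbeat. The function name and the sinusoidal pulse shape are assumptions.

```python
import math

def limb_glow(time_since_beat_s: float, heart_rate_bpm: float) -> float:
    """Return a 0..1 glow intensity for a virtual limb, pulsing once per
    cardiac cycle and peaking at each detected heartbeat."""
    period_s = 60.0 / heart_rate_bpm
    phase = (time_since_beat_s % period_s) / period_s
    return 0.5 * (1.0 + math.cos(2.0 * math.pi * phase))
```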
In June, to assist in more accurately tracking parts of the body outside the field of view of the motion-capture camera, MindMaze acquired a fellow EPFL spin-out, motion-analysis company Gait Up, creators of the world’s smallest motion sensor – a tiny two-by-three millimetre nodule called Droplet. “Gait Up has fine-tuned its algorithms on more than 6,000 subjects,” Tadi explains. “So these can detect very specific actions.”
To capture facial movements, MindMaze has developed Mask, a foam strip embedded with electrodes that sits around the rim of a VR headset where it rests against the face. It can detect the arrival of the electrical signals sent to activate muscles at eight sites around the face, such as the forehead and cheeks. These signals can then be used to recognise particular expressions tens of milliseconds before the face itself actually moves, eliminating latency in the avatar’s response.
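A hypothetical sketch of how such signals could be mapped to expressions: because muscle activation precedes visible movement by tens of milliseconds, even a crude pattern match on which sites have just fired can label the expression before the face moves. The site names, threshold and pattern table below are illustrative, not Mask’s actual design.

```python
SITES = ["l_forehead", "r_forehead", "l_eye", "r_eye",
         "l_cheek", "r_cheek", "l_jaw", "r_jaw"]

ONSET_THRESHOLD = 0.3  # normalised EMG amplitude (assumed)

# Simplified activation patterns for a handful of expressions.
PATTERNS = {
    frozenset({"l_cheek", "r_cheek"}): "smile",
    frozenset({"l_eye", "r_eye"}): "blink",
    frozenset({"l_eye"}): "wink",
    frozenset({"l_forehead", "r_forehead"}): "surprise",
}

def classify_expression(emg: dict) -> str:
    """Label the expression from which sites' muscles have just fired,
    before the face has visibly moved."""
    active = frozenset(s for s in SITES if emg.get(s, 0.0) > ONSET_THRESHOLD)
    return PATTERNS.get(active, "neutral")
```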
“There’s obviously a need to explore the social aspect of VR,” Tadi says. “But you can’t expect to be social without expressions or emotions, which the Mask enables in a very intuitive way.”
The Mask won’t be available direct to consumers, but MindMaze is currently investigating partnerships with the major VR headset producers. Currently, it can discriminate ten expressions, including blinks, winks, smiles and grimaces, with 96 per cent accuracy, Tadi says. MindMaze is working on increasing this to 30 more subtle variations of expression, such as a snarl or a wince.
With this being MindMaze’s first, and so far only, non-healthcare product, Tadi is keen to show it off. He takes me across the building to the office of the team responsible for its creation, who crowd around as I don their pair of Mask-enhanced VR goggles.
The avatar on the display in front of me blinks furiously, and its face quickly distorts into a smile, matching my own awkward expression at finding myself sat in the middle of a room of onlookers, with my eyes covered by goggles.
I try a sad face, and the avatar mirrors it, then a surprised one, which the Mask struggles to recognise. It still has some kinks, the team admits, such as working better with certain face shapes.
Still, no matter how accurately MindMaze is able to track the physical body’s position through a mix of motion-capture cameras, muscle-activation-detecting electrodes and Gait Up’s motion sensors, the key to the virtual embodiment illusions that Tadi discovered during his PhD is not just accuracy but timing: maintaining perfect synchronisation between the movements of the real body detected by this array of sensors and the appearance of those corresponding movements in the virtual body.
When the brain’s own ability to co-ordinate the synchronisation of information from these different sensory streams fails, you get the kind of out-of-body experiences and ghostly presences described by Blanke’s patients. When this co-ordination fails in a virtual world, that world ceases to feel real. Think of how even a few tens of milliseconds of desynchronisation between the visual and auditory tracks of a film makes an actor’s speech feel strangely artificial.
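A minimal sketch of the bookkeeping this implies – checking, per sensory stream, that the lag between a sensed movement and its rendered counterpart stays inside a tolerance. The 20 ms tolerance and the data layout are assumptions for illustration.

```python
MAX_LAG_S = 0.020  # illustrative tolerance, roughly where desync is felt

def worst_lag_per_stream(events):
    """events: (stream_name, sensed_time_s, displayed_time_s) tuples.
    Returns the largest observed sensor-to-display lag for each stream."""
    worst = {}
    for name, sensed, displayed in events:
        worst[name] = max(worst.get(name, 0.0), abs(displayed - sensed))
    return worst

def embodiment_at_risk(events) -> bool:
    """True if any stream showed a movement too long after the body made it."""
    return any(lag > MAX_LAG_S
               for lag in worst_lag_per_stream(events).values())
```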
Co-ordinating this synchronisation across such a wide range of different sensors is impractical with current computing technology – something that Tadi first understood when setting up the VR-induced full-body illusion of his PhD project, which required 20 separate computers just to co-ordinate all the different elements.
To keep up with the brain, traditional chips with a few separate processing cores running through a timed, linear sequence of discrete states won’t do the job. To keep up with the brain, Tadi has come to realise, you need brain-like hardware. A processing architecture capable of handling multiple analogue data streams in parallel.
MindMaze is rumoured to be preparing the announcement that will consolidate all of its work: the investment of $30 million in the creation of a chip, designed at the hardware level to mimic the way that the brain integrates data streams from multiple sensors. A chip designed to simultaneously process and co-ordinate all these continuous streams of data – from forward-facing cameras, from inertial motion sensors, from EEG readings of muscular or neural activity – and combine them into the generation of a single virtual body.
Named the CogniChip, it has always been the ultimate goal of MindMaze, Tadi claims. But he also acknowledges that, as a young researcher seeking hundreds of millions in startup funding, he couldn’t just tell people that. “If I’d said straight up we’re going to simulate reality with a chip that can think like a human – well that would have been ridiculous,” he says.
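To see the software problem such a chip would absorb into hardware, consider a toy version in which asynchronous, timestamped streams are merged into one time-ordered feed for a single body model. A conventional CPU must walk this sequence serially; the stream layout and merge strategy below are assumptions for illustration, not the CogniChip’s design.

```python
import heapq

def fuse(streams):
    """streams: {name: [(timestamp_s, sample), ...]}, each list sorted.
    Yields one time-ordered feed for a unified body model."""
    tagged = ([(t, name, sample) for t, sample in samples]
              for name, samples in streams.items())
    for t, name, sample in heapq.merge(*tagged, key=lambda e: e[0]):
        yield t, name, sample

# e.g. fuse({"camera": [(0.00, "frame0"), (0.03, "frame1")],
#            "imu":    [(0.01, "accel0")],
#            "eeg":    [(0.02, "window0")]})
```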
The years spent in pure research, then developing technology to the standards of the medical profession, matter to Tadi not just for what he’s learned, but for giving him the credibility to talk about such a goal – and for setting him apart amid a growing field of neuroentrepreneurs who lack a background in neuroscience.
Soon, Tadi says, he’ll be launching a consortium, called the Neuro Initiative, to advocate for a more realistic approach to integrating neuroscience, technology and entrepreneurship. “This is a passion project of mine,” he says. “It’s going to bring together the leading academic, scientific, technical brains in the world to really lay down a path of what is an ambitious, but plausible, pipeline for the future of neurotechnology.”
I suggest Elon Musk’s direct brain-machine interface startup, Neuralink, which has a job listing that baldly states, “No neuroscience experience required”, as an example of an endeavour that falls a little too much into the ambitious camp.
“Exactly,” says Tadi. “It’s time to get a little more aware of what is realistically possible versus what is still fantasy. What troubles me is that these whimsical ideas keep getting put out and polluting our collective understanding of the incredible things that can actually be done now.” He gives me a wry smile. “Of course, being an entrepreneur myself, I don’t want to criticise dreaming.”
Source: http://www.wired.co.uk – Wednesday 20 June 2018