Consciousness. Aside from the odd moment of solitary reflection, precious few of us truly take the time each day to appreciate just what a breathtaking novelty it is to be thinking, perceiving, self-conscious agents. Of course, perception and consciousness are not uniquely human qualities in the animal kingdom, but the cognitive abilities of Homo sapiens are unparalleled.
At the seat of all of this is our brain. Weighing about 1.3 kilograms and comprising billions of neurons connected by their axons and trillions of synapses, it is one of the most studied objects in the universe. Over the last 50-odd years, far-reaching research has given us a much greater understanding of how the brain operates at a cellular and structural level. And yet, despite tangible progress, researchers still don't quite know how the brain actually generates cognition. Or, to paraphrase cognitive scientist Daniel Dennett: how can parts 'with competence but without comprehension' give rise to us as beings 'with competence and with comprehension'?
To take a quantum leap in our understanding of the brain, the European Union launched the Human Brain Project (HBP) in October 2013. This ten-year project, funded to the tune of 1 billion euros, is the most comprehensive undertaking in the history of brain research. It involves more than 100 universities and about 400 researchers from various fields, and is divided into a number of sub-projects with a shared focus: gathering experimental data for theoretical models and novel kinds of computing, which can in turn be used to create simulations of the inner workings of the brain. One of these sub-projects, on Systems and Cognitive Neuroscience, is coordinated by Prof. Cyriel Pennartz of the University of Amsterdam's (UvA) Swammerdam Institute for Life Sciences. The institute also participates in the UvA's Amsterdam Brain and Cognition Center. In this edition of UvA in the Spotlight, we speak to Pennartz about the HBP, the future of brain research and what it means to be conscious.
Opinions differ. Most neuroscientists, however, agree that consciousness can largely be explained as externally or internally induced sensations, including imagery (i.e. sensory observation and mental processing). These conscious experiences are characterised by an ordering in time and space and by very different sensory qualities, such as colour vision, smell and melodies. Consciousness is what we lose when we fall asleep and regain when we wake up. In this sense, consciousness presents us with a representation or model of what's going on in the world around us and the world within us. Looked at from this perspective, the actual function of consciousness can be summarised in a relatively clear and simple way. The difficult thing is to retrace subjective, qualitatively rich consciousness back to its origins in nerve and brain cells, which basically operate on electrical impulses but do not by themselves produce something like visual perception. In my book The Brain's Representational Power, I argue that consciousness is best approached as a process constructed at different levels of complexity. Beginning all the way down at the micro-level (cells and electrical impulses) and moving through several higher levels (performing operations such as pattern recognition and other computations), consciousness arises at the highest level of multisensory networks, yet completely in sync with the lower levels.
Substantial qualitative differences exist in this respect. The pressing question is: how do circumstances conducive to consciousness come about? This is one of the underlying aims of the Human Brain Project, which hopes to gain an understanding of the brain through the use of computation and by locating the mechanisms that might explain why one form of computation in the brain leads to one specific form of sensation rather than another. Pinpointing and identifying these mechanisms has thus far proven notoriously difficult. Nonetheless, progress has recently been made, including in my own lab, by studying the interactions between different sensory systems: vision, hearing and touch. These systems appear to be much more interconnected than previously thought.
Episense analyses daily-life, autobiographical memory and investigates how it is fed with information from multiple senses. We know a fair bit about how memory works in the brain: most likely it operates through networks in which information is stored in the connections between brain cells. What eludes us, though, is insight into how the different senses deliver information to consciousness and memory, and how these inputs are combined into a single, complete representation. Take an apple, for example. Whenever we think of holding an apple in our hand, we remember more than just a visual image: we also vividly recall a myriad of sensory memories, such as the smoothness of the peel, its solid feel, its slightly sweet taste and the environment in which it was eaten.
How does the brain do this? This is the question Episense will try to answer. Together with fellow researchers, we will conduct a series of experiments to identify the precise neural mechanisms underlying memory and validate them using computational models, computer simulations and robots equipped with their own autobiographical memory. We will do this in collaboration with researchers from different disciplines within the HBP, ranging from neurophysiologists to robotics researchers. We hope to deliver our first results in two years' time.
The HBP is unique in the way it integrates brain research with the development of brain-inspired computing, modelling, data analytics and robotics. The project has a wide scope and brings together researchers from a broad spectrum of disciplines. Its aims are complementary: it should deliver not only deeper insight into the brain, but also transfer this knowledge to the development of new applications. A key example is the HBP's Neuromorphic Computing Platform, which consists of two systems designed to mimic neural microcircuits and apply brain-like principles in artificial intelligence, including, for instance, computer vision and image recognition. These systems run on neuromorphic chips, which are astonishingly fast: they can compute hundreds of thousands of times quicker than comparable numbers of biological neurons.
Theoretically, it’s possible to construct such a model. The question, however, is whether it works in a biological sense and delivers the intended cognitive results. Will we be able to speak to such a brain? Or to interact with it? Will it understand a situation it is put in, feel something and generate sensible behaviour? Personally, I think a decade is too ambitious. I am more optimistic about creating a model of the rat brain, which, despite its much smaller scale, shares many similarities with the human brain. In principle, we should be able in the next ten years to map the rat brain, from neurons all the way up to hemispheres, and model it completely. And then put this model into a robot that simulates rodent behaviour. To do this, we’ll need greatly increased computing power, but also more knowledge of how the brain integrates information across the different levels I mentioned earlier. This is something we lack at the moment.
On a side note, there are those like Raymond Kurzweil who believe in the coming singularity and the advent of artificial consciousness surpassing human intelligence by 2045. I don’t see that happening. We haven’t solved the fundamental problem of human consciousness, intelligence and creativity well enough to simulate it, let alone predict when that might happen. It’s cool to make spectacular predictions, but better to be realistic about what we can do at the moment.