Brain and Mind

We have many reasons to be optimistic about our increased understanding of the brain. We have certainly learned quite a bit already, and current scientific efforts are likely to reveal much more in the future.

The brain speaks the language of nature: biology, chemistry, and physics. Our tools for understanding the workings of nature are improving, and so we continue to make steady progress. In this journey, the brain is slowly revealing its secrets.

Some of the brain's outputs have the makings of nature: chemical reactions, electricity, mechanics. Other outputs, however, seem different: its thoughts, intentions, reasons, and fantasies, the way things feel, smell, taste, and sound to us - what we usually refer to as our mind - appear to be of a different kind than, say, the movement of my arm or the beating of my heart.

For many, the hope is that a deeper understanding of the brain will reveal the mind’s functioning. Those who believe this expect to map what we think, feel, and dream to the brain’s specific activities. This would provide a correlation between the mind’s subjective experience and the objective activities in the brain.

Others believe this pursuit will fail.

Some people believe this deep subjective experience we call our mind is nothing but an illusion. They think our brains have evolved to trick us into believing we have a rich internal life when, in reality, they are making it all up in real time - a bit like the movie The Matrix, except the simulation runs on the edge, in our brains.

Others, of more classical inclination, believe the mind is a real, subjective, likely immaterial thing.

The subjective experience certainly feels real to each of us. It seems very hard to suppose that each subjective experience corresponds perfectly to some material representation in the brain. What the mind is and how it relates to the material brain is a question with many answers. Still, the burden of proof is equally heavy on those who think the experience of mind is illusory, or perfectly representable in organic - brain - matter.
 
I

One of the most exciting things in brain science is our ability to probe a brain with microelectrodes, while the person is perfectly conscious and awake. In doing so, we can see people act or experience things as a direct result of these electrical stimulations. Indeed, there are many experiments where simple sensations – say a flash of the color blue, or the smell of roses – have been generated via direct stimulation of the correct brain region.

We have also seen successful experiments where certain volitions are translated into brain signals. We read the signals and know what the person wanted to see or do. People who have lost mobility can connect to a brain-computer interface that translates thoughts like "move arm and grab apple" into code that controls a mechanical arm. That is, the person can control a machine with their thoughts.

At first, the idea of sending tiny electrical currents into someone's brain while they are awake seems strange and cruel. Still, two conditions make it possible: first, neurosurgeons who need to remove an unhealthy area of the brain must make sure they don't cut the wrong piece of tissue, so they test which areas do what by stimulating them. Second, and more importantly, the brain does not feel pain. It can certainly process pain and send signals to different parts of the body, but evolution was not expecting anyone to tinker inside our brains directly, so it did not bother to place pain receptors there. Hence, once the skull has been lifted open (that does require anesthesia), we don't feel anything done to the brain.

Certain brain regions map directly to experiences such as vision, hearing, and language understanding, strongly suggesting that the mind and the brain are the same thing. More broadly, biology has shown that our brains are the product of the same material process that evolved everything else in the living world. If human beings are the outcome of a material process, we should expect human beings to be material beings.

Often, what we have explained in non-material terms is a phenomenon that needs to be explained at a different level of abstraction.

For example, the heart is an organ with parts (valves, muscles) that enable it to perform certain bodily functions (distributing blood to the body). The heart is made up of different types of cells for different parts of the organ, but all follow basic properties common to all cells. These cells, in turn, are made of atoms organized in various configurations.

It makes no practical sense to explain the role of the heart in the body, or how the heart works, or how the cells in the heart work, in terms of the physics of atoms; yet physics, chemistry, and biology are congruent, so it should in principle be possible to do so. Pressure, electricity, chemical reactions, and cell biology all underpin the functioning of the heart.

Something similar may apply to the brain: we describe ourselves in mentalistic terms like "I wonder if I should have breakfast now," but we are making steady progress toward an equivalent description in terms of the states of our neurons.

Ultimately, the pursuit of truth via scientific inquiry requires the presentation of verifiable and reproducible evidence. The mind, however, rests on subjective evidence - my experience of things.

Consider a hypothetical from Daniel Dennett, which sheds light on this challenge:

Imagine you wake up after neurosurgery to discover grass looks red and the sky seems yellow. What happened? Maybe some neurowires got crossed and affected your perception of color. Or perhaps the surgery tampered with something in your memory that makes you think grass used to be green and the sky blue.

The only way to know which of these two explanations is right is by asking the neurosurgeon. We can't rely on our experience and mental states to determine what has happened; we need a third person's evidence to know the truth. It is indeed hard to think of the mind as an entirely subjective thing, since then it would be impossible to truly know anything about it.

Or, imagine a future where we have finally achieved a full understanding of how the brain works. In this future lives a world-renowned neuroscientist by the name of Mary. Her field of specialization is color perception. She fully understands the physics of colors and the physiology of their perception. Yet, she has never seen color. She has lived in a black-and-white room all her life.
Now, let's assume she can finally leave the room and, once outside, sees the color red for the first time.

Frank Jackson, who first came up with Mary's scenario, asks: has Mary learned anything new about color by looking at the color red? Maybe she didn't learn anything new but rather now knows the same thing, not merely by description, but also by experience. Or perhaps she learned how to recognize and imagine the color red, without really learning a new fact.

Maybe the mind is whatever happens in the brain, and that is all there is to it.
 
II

If there is such a thing as a mind, it is likely not what it seems.

Think about the blind spots in our visual perception: how, for example, we don't really see our nose sticking into the middle of our visual field. Perception is designed not to show us the raw input our eyes capture, but to give us a representation of a stable world in front of us. What is incredible is that this representation of the world is not limited to what our external organs can capture, nor is it stored somewhere in the mind; it is actually generated on demand.

An interesting example is a syndrome called visual neglect, where those who suffer it can’t see one side of their visual field. What is fascinating is that they go on living their lives with a sense of a complete visual world; they just don’t know one half is missing.

We experience access to our conscious mind and sense of self, but in reality a lot may be hidden from us.

There is ample evidence that the actions we take are initiated before our brain is conscious of them. The brain keeps a model of reality and uses it to anticipate what it will experience. Before you even touch or see something, the brain has already created a model of it.

If that does not seem to map to your experience, it is because the brain pushes consciousness of the event back in time to account for the action. That is, the mind is a trick the brain deliberately plays on all of us.

For me, the most exciting experiments shedding light on the illusion of self and consciousness were conceived by Michael Gazzaniga, a cognitive neuroscientist who worked with patients who had the left and right hemispheres of their brain disconnected to prevent seizures.

The experiments consisted of feeding information to one side of the brain and asking the other, disconnected side to process it.

Imagine a setup where you could show information to only one side of the subject's visual field. The word "face" is flashed to the right visual field. Input from the right visual field is processed by the left hemisphere, which is dominant for language. Then, the subject is asked to write what they saw, and they successfully write "face".

Then, the word "face" is shown to the left visual field, but because the right hemisphere cannot share information with the language-dominant left, the subject cannot say what they saw. And yet, when asked to draw something, the subject draws a face.

As if this wasn't surprising enough, the fascinating part comes when the subject is asked why they drew a face. In this setup, subjects consistently made something up. They would say they drew it because they were feeling happy, or because they just wanted to draw. These replies were not performative; the subjects believed what they were saying.

Another interesting experiment, this time applied to memory:

Dr. Claparède visits a patient with amnesia. The patient never recalls meeting him, and he must re-introduce himself every time. One day, Dr. Claparède reaches out to shake her hand as usual, but this time, he conceals a pin in his hand. Naturally, the patient withdraws her hand in pain.

On the next visit, the patient as usual fails to recognize him, but this time, she refuses to shake his hand.

Daniel Wegner and Timothy Wilson call this form of thinking and acting the adaptive unconscious. It accounts for our ability to act and learn without us being aware of it.

The difference between this adaptive unconscious and our conscious experience is akin to System 1 and System 2 from Kahneman's Thinking, Fast and Slow: one system, or self, is fast and reflexive; the other is slow and rational.

We can see these two consciousnesses, or systems, in distinct brain pathways: there are neural pathways running from the sensory thalamus directly to the amygdala, which allow for fast, unconscious reactions. A different path underlies our common sense of mind and consciousness: it goes first through the cortex and only then to the amygdala.

We believe that actions follow thoughts, but in reality, there is often a hidden third explanatory variable that accounts for the action: the adaptive unconscious.

It can be pointless, then, to try to examine our mind by introspection because, at least in part, it is designed to hide from us. We end up looking at our actions and then constructing a narrative that makes us feel good about ourselves. It makes little sense to talk about a single self and a single mind; we are often of different minds. The experience of consciousness entails providing some consistency to our experiences.

Introspection, then, involves constructing a story; many of the facts for the biography must be inferred rather than directly observed. Introspection is akin to writing an autobiography.

It may be that we lie to ourselves because we need our adaptive unconscious to be fast.

We certainly don't have intuitive access or introspection into perception, memory, or language processing, likely because they developed before the brain features that enable consciousness were developed. They have stayed outside the purview of conscious experience, most likely for reasons of expediency.

There are endless stories we can tell ourselves. It might be useful for some people to think of themselves as honest, while others benefit more from thinking of themselves as friendly. Whichever is most valuable is a function of internal composition (it is said that 20-50% of personality traits are genetically determined) and social setting.

As a given model of self gets used repeatedly, it becomes chronically accessible, meaning it is the fastest model to load and the easiest to use.

What’s most puzzling to me is why we can’t detect our own confabulations.

Some behavioral scientists who focus on evolutionary explanations have pointed out that the best way to convince others of our dispositions (say, honest or friendly) is to convince ourselves first. To be great liars, we need first to be great at lying to ourselves.

Perhaps the idea that the mind is the executive-in-chief of our lives is an illusion. Maybe what we think of as the conscious mind is not animating the brain's activity but sitting at its periphery, creating lies as we go to make sense of it all.
 
III

I am now considering having breakfast. I am trying to figure out if I am hungry enough to eat or if I should wait a bit longer. This thought depends on other mental states: my belief that there is a certain level of hunger that justifies eating, which in turn relies on my belief that feeling some level of hunger is not bad, or perhaps even good, which in turn depends on a belief I hold regarding the role of rationality in my experience of bodily sensations, and so on, and so forth.

These relationships cannot be described with the mechanics of cause and effect that served us well in understanding things like vision or hearing - and the physics of the brain that enacts them - because all the variables that go into my decision of what to eat are something else: they are relational, symbolic, logical in nature.

I decided instead to drink some tea. It has a specific taste, odor, texture, and color. Yet the particles that make up the tea have none of these properties. In fact, there is mostly empty space between the atoms that compose the tea, yet I experience it as a liquid.

What happens is that these atoms are arranged so that my sensory organs perceive them as having all these qualities. Of course, my sensory organs and my brain are made of the same kind of matter. And yet, these qualities do exist in what we experience as our mind. It would seem, then, that the mind is real and nonetheless substantially different from the brain.

Thomas Nagel pointed out that the puzzling thing here is not that my tea feels one way or another, but simply that there is such a thing as "feeling like" something. It is interesting because that seems to be a relevant fact that needs to be accounted for, but it is hard to do so on material grounds.

Thinking on this topic, Nagel offers a fascinating thought experiment: try to imagine the experience of a bat, an animal that uses echolocation to find its way around. One has to speculate that its experience must be radically different from ours. We may investigate its brain and nervous system's inner workings, but clearly, that won't reveal the bat's perceptual experience; it won't tell us anything about what it is like to be a bat. So again, the puzzling thing is that there is such a thing as "being like" someone.

There also seems to be a subjective experience of meaning and knowledge. In exploring this, John Searle came up with the now classic Chinese Room thought experiment:

Imagine someone locked inside a room with a Chinese-to-English dictionary, a pen, and paper. The room has a small opening through which the person can receive and return pieces of paper.

Outside, someone who doesn't know how this room operates introduces a piece of paper with a phrase written in Chinese and, after a brief moment, gets a paper in return with the English translation. Assume the person inside has a great dictionary, and as a result, the person outside experiences translations indistinguishable from those of a native Chinese speaker.

The person inside the room has operated a bit like a computer, manipulating symbols according to specific physical properties (in this case, their shape) and, in doing so, arriving at the desired outcome. Yet it can't be said that the person inside the room understands Chinese, since she does not know the meaning of anything she is writing.

Maybe the person inside does not understand Chinese, but perhaps the whole system of room, dictionary, person, etc. can be said to understand it? On this, Searle points out that the system can only be said to understand relative to our interests and our interpretation. What makes the Chinese room a system that computes translations is the use we make of it.

Any organized matter - a tree, a wall, a mitochondrion, anything really - can be treated as a system of inputs and outputs and turned into a computational device. Sure, this could be technically unreasonable, but it is certainly possible. What makes a particular arrangement of atoms a wall and not a computer is the use we give to it.


Think of a knife: there are, indeed, material limitations to what a knife can be made of - it can be made out of steel or plastic, but not out of cream cheese. Yet it is not the material that makes the knife, but the use we give it. A knife is only a knife in relation to a subject and its use of the object.

The same thing happens with the mind. A single physical structure - a specific neuronal firing pattern - can potentially represent various subjective experiences we think of as radically different: for the brain, there might not be a difference between X, the representation of X, and the representation of the representation of X. If the mind were identical to matter, it would be hard to see how these different experiences would be possible.

Conversely, multiple physical representations can have a single meaning. Take a one-dollar bill, a one-dollar coin, or a one-dollar digital record in a bank account; these are all the same dollar, despite different material substrates. What makes them the same thing is a social consensus on value and meaning, attributes which are hard or perhaps impossible to reduce to physics.

In the brain, the arrangement of firing patterns in neurons generates information (information is nothing but the arrangement of physical things in a certain way). But information by itself is not meaningful. Meaning comes from context and prior knowledge. That is, it comes from a unified experience of self.

We also experience the mind as having causality over the brain. We can clearly state that a neural firing pattern is causing my fingers to move as I type, yet my decision to write where I am writing and what I am writing seems to be different from the causal pathways that get my fingers to move.

More importantly, it would seem my intention to write precedes the causal chain of neurons firing that gets my fingers moving. So perhaps there is a downward causality, where intentions and desires trigger the firing of neurons.

It is hard to imagine a state of consciousness that does not involve subjective intentionality. As I tap this keyboard and look at the words that appear on my laptop screen, there are multiple perceptual systems engaged in my brain, yet it is my intentionality, experienced somehow in a unified manner, that tells me this is a computer.

The mind seems to be symbolic and meaningful; it requires a material substrate but might not be determined by it.
 
IV

We know that what we perceive, externally or internally, is not necessarily what is happening. Countless visual illusions show how our perception can be tricked. There is also ample evidence that the brain's readouts say little about how the underlying computation was performed. In fact, we know that the brain might deliberately lie about how our thoughts were constructed.

One interesting lesson derived from these illusions - together with the study of split-brain patients, where one side of the brain loses communication with the other, and the many studies of brain injuries where a section of the brain is destroyed - is that the brain has a modular structure.

From the fact that the brain is modular, I gather three lessons relevant to the question of the mind and consciousness.

First, we remain conscious even when some modules are not working.

When a patient suffers a severe injury to a specific brain region, we see a direct impact on functions associated with that region, while others keep working well. Individuals can certainly be changed by these injuries, but they remain conscious.

For example, people with a severe right parietal lobe injury can suffer from spatial hemineglect, which means they feel the left side of the world does not exist. This includes the left side of their laptop keyboard, the left side of their body (including their face, so they might shave just one side), and the left side of everyday objects like a book, or the people standing to their left. There is a dramatic change in their life experience, yet they clearly still have a functional mind.

The second lesson: we are not conscious of brain circuits that are not working.

Think about the patients with seizures who wake up after surgery with the connection between the two brain hemispheres severed. The left and right sides of the brain are unaware of each other's visual fields. Yet when the patient is asked about his visual perception, his speaking left hemisphere will not complain of lost vision. The patient can't complain because the module responsible for noticing the problem is on the other side, and since the right side can't communicate with the left, it can't find out. What's even more mind-blowing, as reported by Gazzaniga, is that the memories of having the other half of the visual field are also gone.

And the third lesson: we are not conscious of the output of different modules simultaneously, but rather there seems to be a competition for what bubbles up to consciousness.

There is a very peculiar brain condition called the Feeling of a Presence. It is the feeling that someone is close to you when there is no one there.

We are aware of our location in space, but we are not aware of the coordination of inputs from which that sense of location is derived. Our brain orchestrates sound, touch, proprioception, balance, vision, etc., to give us a sense of where we are. When this coordination fails - when we fail to integrate one of these inputs, or another becomes extremely salient - we can get confused. My understanding is that the Feeling of a Presence is caused by such failures, and it serves as an example of how modules of the brain coordinate and compete as their outputs reach our mind or consciousness.

As an aside, there is an exciting experiment involving a robotic arm that managed to induce the Feeling of a Presence condition.

Imagine you are blindfolded, arms stretched out. One of your fingers has sensors attached that allow it to control a robotic arm that sits behind your back. If the robotic arm is entirely in sync with your finger movement, you feel as if your body has moved forward and you are touching your own back. But if it is clearly out of sync, the illusion is that someone else is touching your back.

Given that we remain conscious even when some modules don't work, that we are unaware when a module is not working, and that different brain modules seem to compete to become conscious, it appears to many that consciousness is intrinsic to each brain module, not the brain as a whole, and that if we lose a brain function, we lose the consciousness associated with it.

This means there are multiple viable paths and possible instantiations of consciousness, and if one path gets shut down, others gain prominence to deliver what we experience as our mind.

It is not impossible to shut down consciousness in a living person. If enough subcortical modules are damaged - those that control our sensorimotor experience - we go into a coma. Yet an experience limited to the sensorimotor realm can't quite be thought of as the experience of having a mind.

It seems, then, that the interplay of emotions and feelings on one end and cognition on the other - that is, the interplay between subcortical and cortical modules of the brain - makes up the experience of consciousness.
 
V

Plenty of our brain wiring is shared with the rest of the animal kingdom. The basic equipment is designed to sense and move around in the world. For us humans, much of this rests on the subcortical regions. Looking at brain-injury patients, we have seen evidence that a sense of self develops deep in these structures we share with other animals.

According to neuroscientist Jaak Panksepp, the subjective feeling of self arose in evolution when the more primitive emotion module connected with a module that mapped the limits of the body against the external world. Several zoologists have argued that such an egocentric sense is all that is needed for animals to have a subjective experience, a sense of self. In some almost incredible studies, Barron and Klein argue that emotion and the body-map sense were integrated in the common ancestors of vertebrates and invertebrates, half a billion years ago.

Gazzaniga (remember him from the split-brain studies mentioned before) thinks that if the brain evolved to control the body as it moves about in the world, then cognition, memory, and imagination are just added tools to increase performance in an ever-evolving, more complex environment. As the brain system has gotten more complex, the modules have been arranged in a layered architecture.

There are subcortical activities that bubble up into consciousness, like, say, the feeling of pain (the actual reaction to move away from pain happens in the periphery - an edge-of-the-body computation of sorts). For higher-level modules like cognition, we can see a process where subcortical regions provide inputs to the higher-level modules before these generate an output.

For Gazzaniga, the subcortical regions are closer to the OS of a smartphone, and what makes us distinctly human is closer to the apps. We are conscious without the apps, but the apps completely change and define the conscious experience. Human brains, rich with layers, experience a sort of stitching together of modules at different layers bubbling up to consciousness.

This layered architecture makes sense: it allows evolution to do its bidding on the higher layers while keeping the core modules stable. This goes all the way down in biology: we share many of the same organs and tissues with other animals, we share enzymes for cell division with plants, and we share metabolic processes with bacteria. Lower layers are kept stable so that newer ones can be tinkered with.

It also makes sense that the complexity is hidden from us: just as it is hard to build apps if you need to work with machine code, it would be hard to think if we had to be constantly conscious of what happens at every layer.

We experience the modularity and layered architecture of the brain in everyday life. Let’s say you are buying a puppy dog. You feel super happy and excited about it. This moment in time is committed to memory, in the higher layers of our brain architecture.

In memory, we keep tons of info about the event, including how we felt. But we don't actually keep the feeling. When we reminisce about the event, we invoke the memory of it; we may even remember how we felt at the time, but how we feel now comes from a deeper module of the architecture. Current feelings are mapped onto a past moment. The feeling might be the same, or it may be more positive and intense, or negative, full of regret.

We don't fully understand how memory works, so we don't quite know how that memory of buying a puppy gets stored. But stored it is. However it physically gets stored, the question from the beginning of this text remains: how is the subjective experience associated with buying that puppy possible in the context of the material brain?

We want to know how the physical experience of caressing the puppy's fur ends up feeling a certain way to us, how positive emotions evolve into complex feelings of love and attachment, and how an idea of what that puppy means to us comes up. How do we go from physical objects touching, to neurons firing in patterns, to the subjective feeling of a self experiencing what it is like to hold that puppy in our hands?
 
VI

The brain runs on physics and chemistry, our mind on a system of symbols. The brain is clearly there, yet there is no clear path that connects the symbolic representation of the mind to the brain’s physics. It seems then that we need to start thinking about how these two seemingly incompatible systems can coexist.

We should start by noting that dual or complementary explanations are maybe not so crazy. Within the world of physics, we see this type of duality in quantum vs. Newtonian physics. One of the initial insights in quantum physics came about when scientists discovered that light could behave as both a particle and a wave. This issue arises when a subject tries to measure and describe a system: once measured it can be described as a wave or a particle, but not both.

When we try to explain reality, the quantum realm is best described statistically. In contrast, the physics of big objects follows deterministic laws. It is not that they are different realities; rather, they must be explained in different modes.

There is good reason to question whether complementarity, a principle formulated to handle the wave-particle duality, has any bearing on the brain-mind duality. If one thinks that this duality is a property of the electrons, then the doubt is warranted. But if it is a function of the observer's nature, then it most certainly can be relevant.

Complementarity is a suspicious explanation because we don't see it at the macroscopic level, where the laws of motion reign supreme. I have found there is a persuasive and somewhat straightforward explanation for this: the macroscopic physical world is one where the subject-object duality can be suppressed without affecting the quality of our predictions. Planets and stars are entirely isolated from the observer, so there is no cost to suppressing the subject-object duality. For microscopic objects, where quantum physics is needed, the subject-object duality cannot be avoided. As we will see, this is also true of biological systems.

The idea that something can be described in two very different ways should not be a surprise, and it extends far beyond fundamental physics. Michael Polanyi pointed out, as an example, that machines can be described both in terms of the principles of their design and, at another level, in terms of their parts and physical processes. A machine, he said, can only be defined by its constraints (is it useful? simple?), not by the laws of motion.

It seems something similar (or the same thing at a different level) happens with the brain and the mind: one can describe them as matter acting in this or that way, or as a set of symbols within a system, but not in both ways at once.

The question then is, how should we think about symbols within matter? The brain is a physical thing, symbols seem like they are not, so how do they coexist? What is the nature of their complementarity?

Howard H. Pattee was fascinated by this difference, albeit on a scale much smaller than the brain. He wondered how DNA, a simple, linear molecular structure, could function as symbolic information that controls highly complex enzyme dynamics, all of which fully comply with the laws of physics.

In building a cell, the symbolic sequence in the genetic DNA produces strings of polypeptides that are precursors to proteins. Eventually, the symbols sort of get lost, and we find enzymes folding themselves into machines, and proteins turning into membranes, muscles, etc. This assembly process is not symbolic; it harnesses the laws of motion within the constraints and limitations established by the symbols. Of course, as we move further up, the proteins themselves serve as constraints for higher levels of biological organization.
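
To make the syntactic reading concrete, here is a minimal sketch (mine, not Pattee's) of a toy translator that reads a symbolic nucleotide sequence and emits a chain of amino acids. The codon table is a small, real subset of the genetic code; the function names and structure are illustrative only. Notice what the sketch leaves out: the folding dynamics that give the chain its function, which is precisely the non-symbolic, semantic side of the process.

```python
# Toy illustration of the "syntactic" reading of the genetic code:
# discrete symbols (codons) are mapped to parts (amino acids).
# The folding that gives the chain its function is, deliberately,
# nowhere in this program.

GENETIC_CODE = {
    "AUG": "Met",  # methionine, also the start signal
    "UUU": "Phe",  # phenylalanine
    "GGC": "Gly",  # glycine
    "UGG": "Trp",  # tryptophan
    "UAA": None,   # stop codon
    "UAG": None,   # stop codon
    "UGA": None,   # stop codon
}

def translate(mrna: str) -> list[str]:
    """Read three-letter symbols left to right until a stop codon."""
    chain = []
    for i in range(0, len(mrna) - 2, 3):
        residue = GENETIC_CODE.get(mrna[i:i + 3])
        if residue is None:  # stop codon (or a triplet outside our toy table)
            break
        chain.append(residue)
    return chain

print(translate("AUGUUUGGCUGGUAA"))  # ['Met', 'Phe', 'Gly', 'Trp']
```

The same string admits the two readings Pattee describes: copied character by character, it is pure syntax; run through the cell's machinery, it becomes a folded, working constraint.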

It is often assumed that physical laws must determine all events because they are inexorable. The fact is, that is not the case in any part of physics. To get a specific prediction out of physics, we need to set up the initial conditions; otherwise, the laws of motion produce no output. These initial conditions cannot be derived from the laws; they need to be subjectively determined.

We could say that what physicists call an objective model is a very restrictive type of subjective model, made common to us all by its invariance and by virtue of the symbols in the model.

As Max Born said, "thus it dawned upon me that fundamentally everything is subjective, everything without exception. That was a shock." At the extreme, one could say all our models of reality exist in the heads of individuals.

Once initial conditions are given, the laws of motion admit no alternatives: they give us a complete picture of the system's state. In contrast, biology needs constraints on decision making to be able to distinguish alternative behaviors of the system and rules to pick one from among them.

The laws of motion are said to perform a one-to-one mapping, whereas the hereditary process performs a one-to-many mapping that transmits traits from a larger set of alternatives. This entails a classification process. Symbolic concepts like genetic memory or code are not really expressible in terms of the elementary laws of physics.

Pattee was interested in the physics of symbols, as manifested in DNA. He argued (building on von Neumann's work on the logical requirements for a self-reproducing automaton) that the hereditary process works like a language: the genetic code provides the rules of syntax, while the protein foldings that result from those rules are its meaning, or semantics.

This symbolic description can be read in two ways: syntactically, to be transmitted, or semantically, to control the construction of enzymes and proteins.

For Pattee, matter can function as a symbol insofar as an agent uses it to constrain dynamics that follow the laws of physics. For him, a symbol is a local, discrete structure that triggers an action, usually in a system far more complex than the symbol itself.

A symbol has no meaning outside the system of symbols to which it belongs and the material organizations that work with these symbols, read them, and build from them.

It is easy to assume that since these symbols and rules need physical structure to take effect, they must obey physical laws. This assumption fails because rules and symbols are not expressed in the same language as physical laws.

Think of a computer: clearly, physical switching networks both follow physical laws and execute symbolic, logical rules, but that does not mean that the rules of logic are predictable from those laws. The physical laws of the switch and the symbolic rules of logic are complementary descriptions of the same system.
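
Here is a minimal sketch of that complementarity, using a made-up, idealized switch: the same device can be described as continuous physics (voltages against a threshold) or as a rule of Boolean logic, and the bridge between the two descriptions is a reading convention that we supply. All numbers and names are illustrative.

```python
# One device, two complementary descriptions.

V_HIGH, V_LOW, V_THRESHOLD = 5.0, 0.0, 2.5  # illustrative, not a real transistor

# 1) The "physics" description: output voltage as a function of input voltages.
def nand_physics(v_a: float, v_b: float) -> float:
    both_on = v_a > V_THRESHOLD and v_b > V_THRESHOLD
    return V_LOW if both_on else V_HIGH

# 2) The "logic" description: a symbolic rule, with no volts anywhere.
def nand_logic(a: bool, b: bool) -> bool:
    return not (a and b)

# The convention that reads voltages as symbols is ours, not the physics'.
def as_bit(v: float) -> bool:
    return v > V_THRESHOLD

for a in (False, True):
    for b in (False, True):
        v_out = nand_physics(V_HIGH if a else V_LOW, V_HIGH if b else V_LOW)
        assert as_bit(v_out) == nand_logic(a, b)
print("the two descriptions agree on every input")
```

The assertion passes only because we chose the threshold and the encoding; nothing in the voltage description mentions truth or falsity. That chosen mapping is exactly the cut between the two languages.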

Another important property of symbols - measurements, codes, rules, descriptions - is that by selecting one of many things, they ignore most aspects of a system. Von Neumann pointed out that a measurement or description that operates at the same level of detail as the system it is describing becomes indistinguishable from the system itself.

In complex systems like our cells and our brain, we see hierarchies delineated by the rules by which they interpret, or give meaning to, a lower layer. The better the rule, the more selective it is in measuring the element it constrains. Following an example by Pattee, a good stoplight does not measure all the details of traffic patterns, only the minimum needed to make traffic flow faster and safer. As Pattee points out, this simplification, this loss of detail in the rule, also makes it hard to understand the rule's origin.

We don't know yet how matter becomes symbolic. But we know two things. First, matter can embody symbols, which can, in turn, unlock dynamics that are perfectly lawful from the perspective of physics. Second, the symbolic representation of heredity - the separation of the genotype (the description in symbols) from the phenotype (the actual build-out of those instructions) - is a logical and physical requirement for life and evolution.
 
VII

Pattee argued that the symbol-matter problem, which requires a clean cut between object and subject, exists at all levels, from the origin of life all the way to the human mind: from genotype and phenotype, to the symbol-reference problem in cognitive science, to the mind-brain problem we are interested in here.

The argument here is that the key sign of biological activity is the existence of efficient and stable codes. Pattee's model sees the cell as an agent that uses, or interprets, information for its survival. His point is that this process suggests a cut between subjective and objective long before human consciousness and the experience of the mind.

For starters, brains have gradually evolved from cells. Material structures and functions have evolved together, and the brain itself has continued to evolve. Given that we can claim to see subjectivity at the cellular level, it is possible that concepts like intentions, meaning, and thoughts, which we associate exclusively with the mind, have primitive precursors in the cell.

From this perspective, the brain is the most complex set of coordinated symbols, rules, and resulting constraints to come out of billions of years of natural selection applied to countless individuals.

We can also see hierarchical levels of organization in all biological systems, from the cell to the brain. In all of them, we see discrete switching modes associated with the informational, programmable, linguistic functions (e.g., a linear sequence of nucleotides in DNA, or the pulses generated by a single neuron), and continuous dynamics - the folding or patterning of these sequences - that we associate with semantics or interpretation.

For the brain, the hierarchical structure and evolutionary process mean that it does not need to build explicit meanings for everything from scratch. Like the cell, the brain is already endowed by evolution with hierarchies of functions that only need to be constrained or harnessed to acquire meaning.

For Pattee, the self, whether a cell or a human, is an individual thanks to the symbols that serve as memory patterns in its genes and its brain. These subjective memories (subjective in that they are a selection by a subject from a pool of many options) define the self.

Another critical aspect of the biological self: it needs to be able to change. For evolution to work, the symbolic instructions need a material structure that contains all the processes for the construction of enzymes, but they also need instructions to copy the symbols themselves, all while following the laws of motion.

We don't know how these patterns or symbols come about. Francis Crick called the genetic code a frozen accident, and there is evidence the symbols are fairly arbitrary. The same is likely true of the brain, and its formidable plasticity seems to corroborate this.

Yet the brain is categorically different from any other collection of cells. For all the similarities, the brain differs from DNA and the hereditary process in many ways.

In the cell, we go from symbol to dynamics, following the laws of physics. The brain goes the other way around: a complex dynamic neural network produces discrete symbols and the subjective experience we are calling the mind or consciousness.

Another important difference is that, with the cell, variation in genetic information must be expressed before selection can begin. Natural selection can only operate through the phenotype. With the brain, we can acquire, evaluate, and select information before expression.

The hereditary process shows us that to make a self, we need parts that implement descriptions, translations of those descriptions, and their construction. We also need to describe, translate, and construct the parts that describe, translate, and construct. This infinite loop is, in effect, the definition of self.

Pattee gives us a framework that helps us see how symbols can be embodied in matter and how symbols and matter can cooperate in a complementary fashion within the same system. With this framework, we can say the mind's subjective experience does not violate the laws of physics. We can also say that at the beginning of life, at the core of what differentiates living from non-living matter, the break between the objective and the subjective takes place. Symbols are at the root of all that is living.

So the self at the cell level is a type of loop. Douglas Hofstadter believes that the "I" - the self or consciousness we experience subjectively - is also a type of self-referential loop. He called it a strange loop.

The "I," according to Hofstadter, is the brain's strangest and most complex symbol. He thinks of it as an infinite feedback loop. The same way a camera hooked up to a TV and pointed at it generates an infinite self-referential loop that somehow stabilizes into a new image, the multiple layers of our brain, creating rules and symbolic representations of lower layers, form a feedback loop that is self-referential and stable.
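
As a loose analogy (mine, not Hofstadter's), the camera-and-TV loop can be sketched as a frame repeatedly re-rendered through a fixed transformation until it stabilizes. The stable image is a fixed point of the loop: no single pass contains it, yet the loop reliably produces it. The particular transformation below is arbitrary; only the convergence matters.

```python
# A toy "camera pointed at its own screen": each pass re-renders the
# previous frame. Because the re-rendering is a contraction, the loop
# settles into a stable image -- a fixed point created by self-reference.

def rerender(frame: list[float], scene: list[float]) -> list[float]:
    # Blend half of the previous frame with a fixed input pattern.
    return [0.5 * f + s for f, s in zip(frame, scene)]

def run_loop(scene: list[float], steps: int = 60) -> list[float]:
    frame = [0.0] * len(scene)  # start from a blank screen
    for _ in range(steps):
        frame = rerender(frame, scene)
    return frame

scene = [0.1, 0.4, 0.25]
stable = run_loop(scene)
# The fixed point solves x = 0.5 * x + scene, i.e. x = 2 * scene.
print([round(v, 6) for v in stable])  # [0.2, 0.8, 0.5]
```

The analogy is loose: Hofstadter's loop runs over symbols across brain layers, not pixels, but the structural point is the same - a stable pattern emerging from self-reference.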

Or think of the feedback loop and echo in an auditorium. The “I” is no more physically identifiable than that auditory feedback loop, and yet, it feels very real.

Of course, the "I" does not come about in the brain due to video or audio loops. To Hofstadter, it comes about thanks to our ability to think. For Hofstadter, thinking is a way of possessing and being able to manipulate an extensive repertoire of symbols.

So, to Hofstadter, symbols in the brain are perceived by other symbols, until the process settles into a stable, self-referential symbol.

In a multi-layer system, where each module generates its own set of symbols, higher-level cortical modules create symbols to manipulate symbols, turning into a self-sustaining self-referential loop, which becomes our mind’s experience.

So, one way to think about the mind is as an infinite, self-sustaining loop. Layered modules bubble up symbols to be used by other modules one layer up the stack.

The duality between the brain and the mind is complementary since we can’t explain one with the other. Not because the mind exists outside of physics, but because symbols and the laws of motion speak different languages.

We can't tell how DNA came about, and we know even less about how the mind emerges from the animal brain, but at least we know it is perfectly reasonable, and likely necessary, for a break between object and subject - in our case, between brain and mind - to exist.

· science