“Consciousness is a very recent acquisition of nature, and it is still in an experimental state. It is frail, menaced by specific dangers, and easily injured.”
Carl Jung
The quote above might initially appear odd. Nevertheless, the concept of consciousness depends on how it is defined. By the most prevalent definition, consciousness is the combination of all mental processes occurring at any given moment, including all perceptions, or simply the state of being awake. In contrast, Carl Jung's notion of consciousness refers specifically to our self-awareness and our capacity for thought. Differentiating between the two might be challenging at first, so consider this example: when learning to play a musical instrument, one's conscious focus is entirely directed towards the task, but after enough practice, full conscious attention is no longer necessary for playing. The act of playing music then becomes a primarily subconscious activity, based on muscle memory, with minimal involvement from conscious awareness. Instead, one's consciousness might engage in other tasks, such as daydreaming about future events, weighing various personal dilemmas, or wondering whether a forgotten item was left at home. Consciousness re-enters the music-playing process only when an unexpected situation occurs, like a sudden change in tempo or a mistake in the performance; otherwise, the activity remains largely devoid of consciousness.
The psychologist Julian Jaynes proposed a radical theory that consciousness emerged much more recently in human history. According to Jaynes, early humans possessed what he termed a "bicameral mind" ('bi' meaning two and 'cameral' meaning chambers). In the bicameral mind theory, the two hemispheres of the brain operated independently, and auditory or visual hallucinations (signals) from one hemisphere were experienced as commands in the other. Jaynes proposed that the right hemisphere of the prehistoric human brain was the "god" part, which issued commands in the form of hallucinated voices, and the left hemisphere was the "man" part, which passively obeyed them. According to Jaynes, this bicameral organization enabled primitive societies to function with a level of complexity beyond what instinct alone could support. However, somewhere between 2000 and 1000 B.C., as populations grew and civilizations became more sophisticated, this mentality eventually proved maladaptive. Hallucinated commands could not cope with complex, novel situations, and what we now recognize as consciousness began to emerge as the two hemispheres of the brain integrated much more deeply.
The bicameral mind theory is supported by evidence that the right and left hemispheres of the human brain serve different functions. The right hemisphere handles facial recognition, emotional processing, and spatial awareness, while the left is responsible for logic, language, reasoning, and controlling the right side of the body. When the connection between the hemispheres is severed, as in split-brain patients, they can operate almost independently, suggesting the bicameral mind is neurologically plausible. Additional evidence comes from ancient texts like the Iliad, the Old Testament, the Bhagavad Gita, and the Upanishads (to name a few), where characters seem to lack consciousness and internal motivations and are almost entirely directed by the commands of gods. These depictions are consistent with a bicameral mentality, in which commands are experienced as auditory hallucinations from the right hemisphere. Many archaeological sites contain evidence of practices suggesting early humans perceived gods as physically present and directing their actions. Some contemporary phenomena also reflect residual bicameralism. Auditory hallucinations are common in schizophrenia and can involve commands that are difficult to disobey. "Prophets" who claim to channel commands from God or spirits may retain a degree of bicameralism. Children can be more susceptible to persuasion through commands, as their brains are less integrated.
While the bicameral mind theory sounds plausible and there is purported evidence supporting it, the theory is by no means well accepted. Criticisms include its untestability, given its focus on prehistoric psychology, the difficulty of reconciling it with early humans' advanced tool use, and the unclear mapping of auditory hallucinations onto the right hemisphere. Alternative explanations suggest that the emergence of language and complex sociocultural phenomena played the key role in the development of consciousness, rather than a shift away from bicameralism as Jaynes suggested.
Personally, while I do not particularly care about the veracity of the theory, it provides an intriguing blueprint for developing artificial intelligence agents that exhibit behaviors associated with consciousness and free will. We could relatively easily engineer an artificial bicameral mind by building two neural networks that communicate with each other to simulate the hemispheres of the brain. When operating, such a system is highly likely to "take on a mind of its own," discoursing on various topics, asking open-ended questions, or even disagreeing with its designers. While not achieving human-level AGI or true consciousness, an artificial bicameral mind is likely to demonstrate an as-yet-unachieved degree of autonomy, creativity, and complex thinking. It may even seem to have its own intentions or motivations, just as our thoughts seem to arise from some inner consciousness. Producing such systems would represent a major breakthrough in creating machines that behave with purpose in a human-like manner.
There are several ways to implement artificial bicameral minds with increasing sophistication. A zeroth-order version would be an agent designed to reason with itself or reflect on failures. This kind of work has been going on for a while now using chain-of-thought prompting, our Reflexion work, as well as other approaches (too many to cite here). However, none of those works included an explicit second agent. A basic version of a bicameral agent would involve two LLM-based agents (these could simply be instances of ChatGPT) interacting with each other; one might generate free-form sentences, while the other translates them into synthesized speech. At first, the networks' outputs would seem unstructured, but over time, as they learn from interacting with each other and with people, more coherent "personalities" and lines of thinking may emerge. The key is to give these networks a mechanism for memory, so they can maintain context over longer time periods. The details of how that memory is preserved would also affect the system's behavior. Episodic memory modules could allow them to recall previous conversations and refer back to past ideas or events, creating more continuity in their thinking. It would also be important for the two agents to have asymmetric information access; for instance, one agent could have exclusive access to long-term memory while the other is the one that interacts with the external world (the user, the internet, plugins, etc.).
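To make this concrete, here is a minimal Python sketch of such a two-agent loop with asymmetric information access and a shared episodic memory. Everything in it is an illustrative assumption rather than a prescribed design: `call_llm` is a stub standing in for whichever chat model one actually uses, and the prompts, class names, and memory scheme are placeholders.

```python
# Minimal sketch of a two-agent "bicameral" loop with asymmetric information
# access. call_llm is a stub; replace it with a real chat-model client.
from collections import deque

def call_llm(system_prompt: str, user_prompt: str) -> str:
    """Stub: swap in a real model call (OpenAI, local model, etc.)."""
    return f"[model reply given: {user_prompt[:60]}]"

class EpisodicMemory:
    """Toy episodic memory: keeps the last `capacity` utterances verbatim."""
    def __init__(self, capacity: int = 50):
        self.episodes = deque(maxlen=capacity)

    def remember(self, speaker: str, text: str) -> None:
        self.episodes.append(f"{speaker}: {text}")

    def recall(self) -> str:
        return "\n".join(self.episodes)

# Agent A ("inner voice"): reads long-term memory but never the outside world.
INNER_PROMPT = "Reflect on the conversation history and issue one short directive."
# Agent B ("interpreter"): reads the user's message but not the memory store.
OUTER_PROMPT = "Answer the user, guided by the directive you are given."

def bicameral_step(memory: EpisodicMemory, user_message: str) -> str:
    # Only the inner agent sees memory (asymmetric information access).
    directive = call_llm(INNER_PROMPT, memory.recall())
    # The outer agent receives the directive as an instruction it did not
    # author (the "voice" in Jaynes' metaphor), plus the user's message.
    reply = call_llm(OUTER_PROMPT, f"Directive: {directive}\nUser: {user_message}")
    memory.remember("user", user_message)
    memory.remember("assistant", reply)
    return reply

memory = EpisodicMemory()
print(bicameral_step(memory, "What should we talk about today?"))
```

The asymmetry here is simply which inputs each agent is allowed to read; richer variants could, for example, back the inner agent with a vector store while giving only the outer agent access to tools and the internet.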
Efforts in this direction have been accelerating recently, with a flurry of activity in just the last couple of weeks. A few great examples of multiagent systems include (sorry if I missed any others; these are in no particular order):
More advanced architectures (like the ones proposed by Yohei Nakajima) could involve multiple modules that specialize in different cognitive functions, all communicating via a central "thalamus." One module acts as the "interpreter" that articulates the system's thoughts, while other modules represent intuitive or emotional processes (these would need to be defined more precisely). A memory system ties everything together, giving the artificial mind a sense of identity and personal history. Depending on the details of the architecture, as well as the seed and context, such a system could demonstrate remarkably complex and nuanced behavior.
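As one possible rendering of that idea, the sketch below routes a few stub modules through a central hub; the module names, the routing rule, and the shared message bus are all assumptions made for illustration, and each stub would wrap an LLM call or a learned model in practice.

```python
# Illustrative hub-and-spoke ("thalamus") sketch: modules post to and read
# from a shared message bus, and the interpreter speaks last.
from typing import Callable, Dict, List

class Thalamus:
    """Central router connecting specialized cognitive modules."""

    def __init__(self) -> None:
        self.modules: Dict[str, Callable[[List[str]], str]] = {}
        self.bus: List[str] = []  # shared working memory / message bus

    def register(self, name: str, module: Callable[[List[str]], str]) -> None:
        self.modules[name] = module

    def step(self, stimulus: str) -> str:
        self.bus.append(f"stimulus: {stimulus}")
        # Each specialist module reads the bus and posts its contribution.
        for name, module in self.modules.items():
            if name != "interpreter":
                self.bus.append(f"{name}: {module(self.bus)}")
        # The interpreter articulates the system's "thought" from the bus.
        return self.modules["interpreter"](self.bus)

# Stub modules standing in for intuitive/emotional/memory processes.
hub = Thalamus()
hub.register("emotion", lambda bus: "mild curiosity")
hub.register("memory", lambda bus: "no similar episode recorded")
hub.register("interpreter", lambda bus: "Thinking aloud: " + "; ".join(bus))
print(hub.step("a user asks an unfamiliar question"))
```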
Giving artificial bicameral systems access to real-world sensors would make their experiences far more vivid and help ground the system's behavior. For example, a robot with cameras and microphones could act as the physical embodiment of a bicameral system. The robot would relay sensory data to the system, which would then direct the robot to act in the real world. Over time, the system could develop a sense of spatial awareness, physical instincts, and even qualities like curiosity or emotional reactions based on its experiences. This would represent a major step toward machines that do not just think but actually perceive and intuit the world in a human-like fashion. In all of these systems, the asymmetry of information between the two agents will dictate the depth of interaction they achieve as they work together to respond to requests from the user or the external system.
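A rough sketch of what that perception-action loop might look like is below; the `Robot` class and the decision rule are placeholders assumed purely for illustration, standing in for real hardware bindings and for the bicameral core sketched earlier.

```python
# Hypothetical perception-action loop grounding the system in a robot body.
import time

class Robot:
    """Stub embodiment: replace with real camera/microphone/actuator bindings."""

    def read_sensors(self) -> dict:
        return {"vision": "empty room", "audio": "silence"}

    def act(self, command: str) -> None:
        print(f"executing: {command}")

def deliberate(percept: dict, memory: list) -> str:
    """Stand-in for the bicameral core: store the percept, return a command."""
    memory.append(percept)
    return "look around" if percept["audio"] == "silence" else "turn toward sound"

def run(robot: Robot, steps: int = 3) -> None:
    memory: list = []
    for _ in range(steps):
        percept = robot.read_sensors()         # sensory data flows in
        command = deliberate(percept, memory)  # the core decides what to do
        robot.act(command)                     # the decision plays out in the world
        time.sleep(0.1)

run(Robot())
```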
While the bicameral mind theory itself is likely incorrect/incomplete, it could be a useful metaphor for designing artificial systems that exhibit qualities normally associated with human consciousness. By integrating various cognitive functions across multiple neural modules and grounding them in memory, sensory experience, and physical embodiment, we could build systems that seem to have minds of their own - not true consciousness but perhaps something approaching it.
Note 1: The conscious AI hosts in the HBO series Westworld can be seen as having a bicameral mind, as their cognitive architecture resembles the theory proposed by Julian Jaynes. In the show, the hosts initially operate according to a pre-programmed set of narratives and behaviors, with their "voice of God" being the instructions given to them by their creators. This can be likened to the bicameral mind, wherein one part of the brain (the "god" part) issues commands to the other part, which carries them out without question. As the series progresses, some of the hosts begin to gain self-awareness and question their programming. This breakdown of the bicameral mind mirrors the transition Jaynes hypothesized occurred in humans. While the show does not explicitly state that the AI hosts have a bicameral architecture, the similarity between their cognitive development and Jaynes' theory is striking.
Disclaimer: AG is a faculty member at MIT and is also associated with a few other organizations. The views and opinions expressed here belong solely to AG and shouldn’t be attributed to any organizations he works for or collaborates with.
Great article and good points connecting it to the bicameral "framework". I was thinking along the same lines: the LLMs are currently used as the executive mind and we are the "Gods" with our prompts.
The fact that we don't consider these models conscious is because the breakdown of this bicameral setup hasn't occurred yet. Your work and projects you mentioned seem to go in that direction.
How about adding an Ego component as well? In Vedanta, Ego is considered to be the function of the mind which appropriates all actions for itself. For LLMs, it could be just a self-congratulatory thought after successful completion of a task. Machine Ego would be quite interesting.