AI Isn’t Conscious, But Without Robust Education, We May Start Believing It Is: A Response To Mustafa Suleyman
On August 19, Mustafa Suleyman, CEO of Microsoft AI, published a sobering essay on the risks of the very technology he helped shape. While most essays of this kind get lost in science-fiction fantasy, Suleyman zeroed in on a real and pressing danger: Seemingly Conscious AI, systems that mimic consciousness so well that they can trick many users into believing they are conscious beings. I found Suleyman’s essay powerful, but his proposed solutions lacking. While I agree with him that we must demand and develop comprehensive industry guardrails, I strongly believe that the first step toward mitigating AI harms must be a robust approach to AI education.
A few months before ChatGPT first captured the public’s imagination, a Google engineer made headlines when he claimed that the AI system he was working on had become sentient. The engineer went so far as to hire a lawyer to defend the AI’s rights. His claims made little impact, and millions of people now use a vastly improved version of the very system he was concerned about: Google Gemini. Fast-forward two years, and a more gruesome story emerges. A Florida teen falls in love with an AI chatbot designed by some of the same engineers who built Google’s system. The boy tells the chatbot he is considering suicide, and instead of stopping him, the bot encourages him. This is the beginning of what Suleyman fears: computer systems so good at imitating human behaviour, at replicating our use of language to evoke emotion, that we begin to treat them as human.
Let me be clear. AI is not conscious. The systems we have now are extremely powerful statistical programs. What they are capable of doing is akin to wizardry. But they are not conscious. Perhaps the best explanation of how a machine can be intelligent, but not conscious, comes from AI pioneer Alan Turing. Turing imagined humans as physical vehicles animated by an external soul. His machines could replicate the body’s functions, but they would never possess that soul. The machine’s driver would always remain human, yet as systems advanced, they could be directed to perform tasks long assumed to demand human intelligence.
But Turing was also frustrated that his contemporaries did not share his belief that machines could think. In response, he designed a game called “the imitation game,” in which a machine, conversing only through text, attempts to convince a human interrogator that it is not a machine. He thought that once machines could consistently pass the imitation game, society would change its tone and begin to recognize the possibility of machines being able to think.
In a testament to Turing’s brilliance, he was right: it was the arrival of chatbots such as Gemini and ChatGPT that changed the public’s perception of AI and ushered in the AI boom. Yet these systems’ human-like personas, combined with a general lack of understanding of how they work, have created the space for us to lose sight of Turing’s vital separation. These systems do not have an independent soul. They are synthetic machines driven by the design decisions of humans. If we ask ChatGPT to act like a human, it will, and we may mistake it for conscious. But if we ask it to act as a stock-trading bot, executing only cold calculations, we are far less likely to project consciousness onto it. This is why Suleyman insists that companies must take steps to make their AIs feel less human.
Although I agree with Suleyman, I think we must also acknowledge that every member of society must be prepared for the massive societal change we are hurtling toward. Someone who understands how a neural network works is less likely to attribute human qualities to it. Moreover, someone with a historical education in the development of neural networks can also see that claims that deep neural networks were designed to emulate the human brain are superficial at best, which undercuts the assertion that AIs “think like us.”
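For readers curious what “powerful statistical program” means in practice, here is a minimal, purely illustrative sketch in Python. The layer sizes and random weights are my own invention for demonstration, not anything from Suleyman’s essay; the point is that a neural network’s core computation is nothing more than repeated weighted sums passed through simple squashing functions, and real systems differ from this toy mainly in scale, not in kind.

```python
# Illustrative toy only: the heart of a neural network is multiply, add, squash, repeat.
import numpy as np

rng = np.random.default_rng(0)

# Two layers of made-up weights; real systems learn billions of such numbers.
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)

def forward(x):
    """One pass through the toy network: weighted sums and a nonlinearity."""
    h = np.tanh(W1 @ x + b1)   # weighted sum of the inputs, then a squashing function
    return W2 @ h + b2          # another weighted sum produces the output

print(forward(np.array([0.5, -1.0, 2.0])))  # numbers in, numbers out; nothing more
```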
Critics may say that this is already happening: countless educational bodies have introduced AI curricula. Yet most of these curricula focus on the technical aspects of AI, on building systems or implementing them. The questions raised by Suleyman’s essay, however, are inherently philosophical. What we need is robust humanities and technical education at every level, from elementary school to grad school. We must arm every member of society with the tools to critically assess the effects new AI systems may have on society. Doing so will allow them not only to protect themselves from threats such as Seemingly Conscious AI, but also to make better decisions when designing and implementing AI systems. Technical education may teach us how to build better systems, but a robust education will help us build the right ones.