In late 2020 and early 2021, Google ousted two of its AI ethics researchers, Timnit Gebru and Margaret Mitchell, who had co-authored an AI research paper, ‘On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?’, which examined the possible risks associated with large language models and the available paths for mitigating those risks. Stochastic Parrots, eh? These models essentially “parrot” back statistical patterns that they have learned from large datasets; they are not capable of true reasoning or understanding. It would take ChatGPT and Midjourney invading society, however, for people to comprehend the potential dangers of AI.
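The “parroting” can be made concrete with a toy sketch (my own illustration, not a model from the paper): a bigram model that counts which word follows which in its training text, then generates by sampling from those counts. It reproduces the statistics of its corpus with no grasp of meaning.

```python
import random
from collections import defaultdict, Counter

# A toy "stochastic parrot": learn bigram statistics from a tiny
# corpus, then echo them back. Purely illustrative; the corpus and
# function names here are invented for this sketch.
corpus = "the parrot repeats the pattern and the parrot repeats the phrase".split()

# Count which word follows which.
bigrams = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    bigrams[w1][w2] += 1

def generate(start, length=6, seed=0):
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        followers = bigrams[words[-1]]
        if not followers:
            break  # dead end: the last word never had a successor
        # Sample the next word in proportion to observed frequency:
        # pure pattern-matching, no understanding.
        choices, weights = zip(*followers.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
```

Every word the model emits was seen in training, and every transition mirrors the corpus statistics; scale the corpus and the context window up by many orders of magnitude and you get the fluency, and the limits, that the paper's title alludes to.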

In an open letter titled ‘Pause Giant AI Experiments’ this March, thousands of professionals from the top echelons of AI research, business and society called for a six-month moratorium on the training of systems “more powerful” than GPT-4. In a New York Times opinion piece, author Yuval Noah Harari offered a gripping analysis of AI-driven scenarios akin to those in the movie Terminator.

Should the Stochastic Parrots, then, be caged? Not everyone supports that idea. In a recent article headlined ‘The age of A.I. has begun’, Bill Gates asserted that he had personally witnessed the arrival of two transformative technologies: the graphical user interface and, more recently, generative AI. Pedro Domingos, the author of the 2015 book The Master Algorithm, even referred to the AI moratorium letter as “an April Fools’ joke,” saying the level of urgency and fear about existential risk was wholly out of proportion to what present AI systems are capable of.

The world, however, was rocked when Geoffrey Hinton, frequently referred to as a “godfather of AI”, who with David Rumelhart and Ronald Williams popularised the use of backpropagation for training neural networks in 1986, left Google. He cited concerns about the spread of misinformation, the potential for AI to upend the job market, and the “existential risk” posed by the development of true digital intelligence. Hinton cautioned that some of the dangers posed by AI chatbots were “quite scary”, and that they might one day surpass human intelligence and be exploited by “bad actors”.

Pause letter

Should we listen to the “godfather”? Hinton isn’t the only one, though. Hinton, Yann LeCun and Yoshua Bengio — the three scientists who shared the 2018 Turing Award — are commonly referred to as the “Godfathers of AI.” Bengio signed the “pause letter”; Hinton opted not to. Bengio called for a more responsible approach to the development of AI technologies and acknowledged the very real hazards of abuse.

LeCun, however, believes AI has the power to spark a “renaissance.” In his 2019 Scientific American article ‘Don’t Fear the Terminator’ (again, Terminator!), LeCun argued that an AI apocalypse is unlikely. He even compared the “pause letter” to the Catholic Church’s 1440 call for a six-month moratorium on the use of the printing press and movable type!

Google Translate and Apple’s Siri have both used the long short-term memory (LSTM) neural networks that German scientist Jürgen Schmidhuber co-developed in the 1990s. In 2016, The New York Times wrote that when AI matures, it might refer to Schmidhuber as “Dad”. Well, the “Dad” predicts that AI will advance to the point where it surpasses human intelligence and pays no attention to people. The best counter to bad actors using AI, according to Schmidhuber, is to develop good tools with AI.

And could the Stochastic Parrots be caged at all? “You cannot stop it,” Schmidhuber declares. Could we, after all, have caged the internet and the web over the past few decades? Still, in an effort to ensure that rapidly evolving technology improves lives without endangering people’s rights and safety, the US President and Vice President met the CEOs of Google, Microsoft, OpenAI and Anthropic on May 4. Biden told the CEOs, “What you’re doing has enormous potential and enormous danger.”

Society’s confusion, however, persists as the who’s who of AI, and even its godfathers, express divergent viewpoints.

The writer is Professor of Statistics, Indian Statistical Institute, Kolkata
