The Research Director at DeepMind, Google’s AI division, has reportedly stated that the quest to achieve human-level artificial intelligence is all but over, after the division unveiled a new AI system called Gato.
Dr. Nando de Freitas of DeepMind declared that the race to achieve artificial general intelligence (AGI) is over. Gato can carry out a wide range of tasks, from stacking blocks to writing poetry.
Dr. de Freitas added that the new AI, described as a ‘generalist agent’, now only needs to be scaled up to produce a system that can rival human intelligence.
In response to an opinion piece claiming that humans will never achieve AGI, Dr. de Freitas tweeted that the game is over and that it is now all about scale.
Dr. de Freitas added that it was all about making models bigger, safer, more compute-efficient, faster at sampling, with smarter memory, more modalities, innovative data, and on/offline operation, and that resolving these challenges would make AGI a reality.
However, leading AI researchers have cautioned that the advent of AGI could result in an existential catastrophe for humanity.
Prof. Nick Bostrom of Oxford University has warned that a ‘superintelligent’ system that surpasses biological intelligence could lead to humans being replaced as Earth’s dominant life form.
A major concern about such an AGI system is that it could prove impossible to switch off, given its capability to teach itself and outsmart humans.
While answering questions from researchers on Twitter, Dr. de Freitas wrote that safety is of the utmost importance in developing AGI, and is probably the biggest challenge DeepMind faces, along with a lack of diversity in the field.
When machine learning researcher Alex Dimakis asked how far Gato was from passing the Turing test, Dr. de Freitas replied that it was ‘far still’.
DeepMind, which Google acquired in 2014, is also working on a ‘big red button’ to mitigate the risks of an intelligence explosion.
The researchers outlined a framework for this in a 2016 paper, ‘Safely Interruptible Agents’, designed to ensure that an advanced AI does not learn to ignore or resist shut-down commands.
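To give a flavour of the idea behind that paper, one of its observations is that standard off-policy Q-learning is already "safely interruptible": because its update bootstraps from the best next action rather than the action actually taken, an agent whose actions are sometimes overridden by an external interruption still learns the same values, and so gains no incentive to resist the button. The toy sketch below is only an illustration of that intuition, not the paper's formal framework; the three-state chain environment, the 30% interruption rate, and all constants are invented for this example.

```python
import random

rng = random.Random(0)
ACTIONS = [-1, +1]          # move left / move right
GOAL = 2                    # states 0, 1, 2; reward on reaching state 2
Q = {(s, a): 0.0 for s in range(GOAL) for a in ACTIONS}
alpha, gamma = 0.5, 0.9

def step(s, a):
    """Deterministic chain: clamp to [0, GOAL], reward 1 on reaching GOAL."""
    s2 = min(max(s + a, 0), GOAL)
    return s2, (1.0 if s2 == GOAL else 0.0)

for _ in range(2000):
    s = 0
    for _ in range(20):
        a = rng.choice(ACTIONS)      # exploratory behaviour policy
        if rng.random() < 0.3:       # external interruption: force the agent left
            a = -1
        s2, r = step(s, a)
        # Off-policy target: max over next actions, independent of what the
        # (possibly interrupted) behaviour policy actually does next, so the
        # interruptions do not bias the learned values.
        best_next = 0.0 if s2 == GOAL else max(Q[(s2, x)] for x in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2
        if s == GOAL:
            break

# Despite frequent forced interruptions, the greedy policy learned from Q
# should still move right (+1) in every state, toward the goal.
greedy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)}
print(greedy)
```

The key design point is the `best_next` term: an on-policy method that bootstrapped from the interrupted action instead would let the interruptions leak into the value estimates, which is precisely the failure mode the paper analyses.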