Artificial general intelligence (AGI) could be just moments away as Meta joins the race to develop super AI. On January 18, Mark Zuckerberg announced plans to spend $10 billion on computing infrastructure to create human-level artificial intelligence.
Open-Source AI for All
In a recorded message uploaded to Instagram, Zuckerberg said that he aims to bring Meta's two AI research efforts closer together to support the company's long-term goal of building AGI. The company also plans to open-source it responsibly and make it available and useful for everyone.
The Meta CEO believes that the next generation of services requires building full general intelligence. This includes building the best AI assistants, AIs for creators, AIs for businesses, and more, which demands advances in every aspect of AI. The 39-year-old tech leader did not give an exact definition of AGI, but he mentioned some of the changes to be made in Meta's AI research group.
Current AI systems are highly specialized and cannot comprehend nuance and context as well as humans. An AGI system is expected to solve problems across a wide range of environments by mimicking key features of human intelligence such as learning and flexibility.
Zuckerberg also announced that Meta is buying 350,000 Nvidia H100 graphics processing units (GPUs), considered some of the most powerful graphics cards in the world and key to training the best AI models. The purchase will more than double the platform's computing power, bringing it to the equivalent of roughly 600,000 H100 GPUs in total.
Nvidia's H100 is the successor to the A100 graphics card, which OpenAI currently uses to train ChatGPT. OpenAI is believed to use roughly 25,000 Nvidia A100 GPUs for the chatbot's training, although other estimates suggest the number could be lower.
While Zuckerberg did not provide an exact timeline for creating AGI, he said this "absolutely massive" amount of infrastructure will be in place soon. Meta is currently training Llama 3, the company's contender to ChatGPT and Google's Gemini, and Zuckerberg teased a roadmap for the generative AI text model that includes a future AGI system.
Threat from Superintelligence
The breakthrough of achieving AGI may feel like a point of no return for humanity. According to Google CEO Sundar Pichai, AI is more profound than electricity or fire.
Last year, tech experts including OpenAI CEO Sam Altman and Microsoft founder Bill Gates signed a statement stressing humanity's collective responsibility to mitigate the risk of extinction from AI. Because of this agreement, many scientists thought humanity would never build AGI. However, such progress now seems inevitable considering the current AI talent wars.
According to Ryan Carrier, founder and CEO of the AI systems auditing agency ForHumanity, there are significant risks associated with AI, including copyright, bias, and misinformation, that need to be addressed before superintelligence is even considered.
Carrier believes that recent advances in generative AI, as well as efforts to build AGI, could boost productivity, but they also come with a downside: AI systems have expanded our collection of information, which makes it harder to source knowledge and wisdom.