Meta's future Llama AI models are trained on a GPU cluster that is “bigger than anything else”

Kyiv  •  UNN

Mark Zuckerberg announced that Meta is training the next version of its Llama model on an unprecedentedly large cluster of GPUs. The company is increasing its investment in AI amid a 9% rise in operating expenses and a 22% rise in sales.

Meta CEO Mark Zuckerberg set a new milestone in generative AI training: according to him, the next release of the Llama model is being trained on a GPU cluster that is "bigger than anything else".

UNN reports, with reference to Wired.

Managing such a gigantic array of chips to develop Llama 4 is likely to present unique engineering challenges and require large amounts of energy. Zuckerberg made the announcement on Wednesday, describing the cluster as "the biggest ever".

This year, Meta's total operating expenses increased by about 9%, while total sales (mainly from advertising) grew by more than 22%, resulting in higher margins and higher profits.

Meanwhile, OpenAI, which is still a non-profit company and is considered the undisputed leader in advanced AI development, announced that it is developing GPT-5, the successor to the model that currently powers ChatGPT.

OpenAI said that GPT-5 will be larger than its predecessor, but disclosed nothing about the computing cluster used to train it. It also said that, in addition to scale, GPT-5 would include other innovations, among them a newly developed approach to reasoning.
