AMD unveils its new EPYC processors: the new “brains” for training artificial intelligence



The IT industry has firmly focused on artificial intelligence: a technology that many still fear or have doubts about, but one that has radically changed the way companies and their data centers process large quantities of information and deploy AI solutions to make work more efficient.


This growing user demand for response accuracy and shorter waiting times, among other issues, requires increasingly sophisticated equipment: from the data centers that host cloud systems and solutions from Google, Microsoft and Amazon Web Services (AWS) to the processors that run artificial intelligence locally.

AMD (Advanced Micro Devices) presented its new data center processor ecosystem in the same week it confirmed its alliance with Intel. In seven years, this business unit went from 0% to 50% of the chipmaker's revenue.


The new “brains” of data centers

AMD announcements on processors and data center hardware that train AI models.

The fifth-generation EPYC 9005 series processors, known internally as “Turin”, arrive as an option to improve cloud server performance in enterprise environments and even to train artificial intelligence applications.

It is a chip composed of 150 billion transistors, 17 chiplets, 192 cores and 384 processing threads, reaching speeds above 5 GHz.

Compatible with the SP5 “Genoa” platform, the EPYC series is designed to power a variety of services used daily by millions of people, including WhatsApp, Facebook, Netflix, Office 365, SAP, Zoom and Oracle, among others.

The new processors arrive with significant improvements in business performance and in managing AI GPU host nodes. For example, the 32-core 5th Gen AMD EPYC 9355 offers 1.4 times more performance per core than its similarly configured rival, the fifth-generation Intel Xeon 6548Y+.

Likewise, AMD highlights that just 131 servers equipped with the new EPYC processors are equivalent to 1,000 of the competition's previous-generation servers, which it says amounts to an 83.9% reduction in physical space; they also offer energy savings of more than 50%.
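As a quick sanity check on the consolidation figures cited above (an illustrative calculation, not AMD's methodology), the raw server-count reduction implied by replacing 1,000 legacy servers with 131 new ones can be computed directly. Note it comes out slightly higher than the 83.9% AMD quotes, which refers to physical space and also depends on rack density.

```python
# Illustrative arithmetic on the consolidation figures AMD cites:
# 131 new-generation servers replacing 1,000 previous-generation ones.
old_servers = 1000
new_servers = 131

# Fraction of servers eliminated by the consolidation
count_reduction = 1 - new_servers / old_servers
print(f"Server-count reduction: {count_reduction:.1%}")
# prints "Server-count reduction: 86.9%"
```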

According to AMD, this not only improves operational efficiency but also responds to growing demand from the enterprise sector, which is increasingly looking for new solutions to train AI models, machine learning, language models and database searches.

Data center accelerators

The launch event aimed at improving data center performance also focused on the AMD Instinct MI325X and MI350 accelerators (the latter arriving in the second half of 2025), designed to manage AI networks with up to a 35-fold increase in artificial intelligence performance.

AMD Instinct accelerators improve data center performance at any scale, from single-server solutions to the world's largest supercomputers.

These devices, based on the AMD CDNA architecture, are designed to optimize performance and efficiency in artificial intelligence (AI) and data center operations. They focus on training, fine-tuning and inference of AI models, offering advanced capabilities.

In terms of performance, the manufacturer claims the MI325X can deliver 1.3 times greater training capacity on models such as Mistral 7B and Meta's Llama 3.1, both ChatGPT rivals used for summarizing and classifying text or writing programming code.

The MI325X is expected to be available in the first quarter of 2025, with production beginning in late 2024, in systems from brands including Dell, HP, Lenovo and Supermicro.

Meanwhile, the AMD Instinct MI350, based on the CDNA 4 architecture, will offer 288 GB of HBM3E memory and the aforementioned 35-fold improvement in inference performance.

The Pensando Salina, AMD's artificial intelligence network card.

These accelerators, together with the open ROCm software stack, which adds new features such as FP8 support and kernel fusion, are used in advanced artificial intelligence tasks, optimizing next-generation AI networks through DPUs (data processing units): AMD positions the Pensando Salina for front-end management and the AMD Pensando Pollara 400 NIC for the back end.

The cards are optimized to run a software stack that delivers cloud services, cloud-scale computing, networking, storage and security with minimal latency, jitter and power requirements.

“Therefore, AMD Instinct accelerators are positioned as powerful solutions for… training and inference of generative AI models and large-scale data center optimization,” according to sources at the Californian company.

Artificial intelligence in the new notebooks

The Ryzen AI PRO 300 series notebook chip.

Although the data center business is very important to AMD, commercial mobile AI processors, that is, chips that integrate the technology needed to run artificial intelligence into the platform itself, remain one of its pillars.

The most recent, the Ryzen AI PRO 300 series, is specially designed to transform business productivity with Copilot+ features, including real-time captions, language translation during conference calls and advanced AI image generators.

Ryzen AI PRO 300 series processors feature the new AMD “Zen 5” architecture built on a 4-nanometer process, for more power and efficiency, especially when working with Copilot+, a Windows artificial intelligence (AI) assistant that improves productivity and creativity.

They add an NPU delivering over 50 trillion operations per second (50 TOPS), exceeding Microsoft's requirements for Copilot+ AI PCs.

The high-end option, the Ryzen AI 9 HX PRO 375, will offer up to 40% more performance and up to 14% faster productivity than its main rival, the Intel Core Ultra 7 165U, according to the manufacturer.

In short, the upcoming notebooks equipped with Ryzen AI PRO 300 series chips are designed to handle the most demanding enterprise-level workloads.

Source: Clarin

