AMD has announced a significant leap in performance for its Ryzen AI 300 series mobile processors in local large language model (LLM) applications. According to a recent company blog post, the Ryzen AI 9 HX 375 processor demonstrated up to a 27% advantage over the Intel Core Ultra 7 258V in token generation speed using LM Studio, a desktop application for downloading and hosting LLMs locally.
The tests covered a range of popular LLMs: Meta Llama 3.2 (1b and 3b), Microsoft Phi 3.1 4k Mini Instruct 3b, Google Gemma 2 9b, and Mistral Nemo 2407 12b. The Intel platform used 8533 MT/s memory, while the AMD platform ran at 7500 MT/s. Across all five models, the Ryzen AI 9 HX 375 consistently outperformed the Core Ultra 7 258V, leading in both token generation speed and time to first token.
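For readers who want to sanity-check such numbers on their own hardware: LM Studio exposes a local OpenAI-compatible server, so time to first token and rough token throughput can be measured from a short script. The sketch below is only an illustration, not AMD's methodology; it assumes the LM Studio server is running on its default port (1234) with a model already loaded, and the model identifier and prompt are placeholders. Streamed chunks are used as an approximation of tokens.

```python
import json
import time

import requests

# Assumptions: LM Studio's local server is running on its default port (1234)
# and a model is already loaded. The model identifier below is a placeholder;
# use whatever identifier LM Studio reports for your loaded model.
URL = "http://localhost:1234/v1/chat/completions"
PAYLOAD = {
    "model": "llama-3.2-3b-instruct",  # hypothetical identifier
    "messages": [{"role": "user", "content": "Explain Vulkan in one paragraph."}],
    "stream": True,  # stream so the first token can be timed separately
}

start = time.perf_counter()
first_token_at = None
chunks = 0

with requests.post(URL, json=PAYLOAD, stream=True, timeout=120) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines():
        # OpenAI-style streaming sends server-sent events: "data: {...}"
        if not line or not line.startswith(b"data: "):
            continue
        data = line[len(b"data: "):]
        if data == b"[DONE]":
            break
        delta = json.loads(data)["choices"][0]["delta"].get("content")
        if delta:
            if first_token_at is None:
                first_token_at = time.perf_counter()  # time to first token
            chunks += 1

if first_token_at is not None and chunks > 0:
    gen_time = time.perf_counter() - first_token_at
    print(f"time to first token: {first_token_at - start:.2f}s")
    print(f"~{chunks / gen_time:.1f} chunks/s (rough proxy for tokens/s)")
```

Because streamed chunks do not map one-to-one onto tokenizer tokens, the throughput figure is a proxy; LM Studio's own UI reports exact tokens-per-second for the loaded model.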
It is worth noting that the Core Ultra 7 258V is a mid-to-high-end processor, while the HX 375 is a flagship part, so the head-to-head comparison is not entirely even. Even so, it highlights the potential of AMD's Ryzen AI 300 series for local LLM applications.
Furthermore, LM Studio supports GPU acceleration through the Vulkan API. With GPU offload enabled, the Ryzen AI 9 HX 375 saw a further 20% performance increase over CPU-only processing.
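In LM Studio itself, GPU offload is a slider in the model settings. For a scriptable version of the same CPU-versus-GPU comparison, one option is llama-cpp-python, bindings for llama.cpp, the engine family LM Studio builds on. The sketch below assumes a build of llama-cpp-python with a GPU backend (such as Vulkan) and a locally downloaded GGUF file; the model path is hypothetical.

```python
import time

from llama_cpp import Llama  # assumes llama-cpp-python built with GPU (e.g. Vulkan) support

PROMPT = "Write a haiku about memory bandwidth."

def tokens_per_second(n_gpu_layers: int) -> float:
    # n_gpu_layers=0 keeps inference entirely on the CPU;
    # n_gpu_layers=-1 offloads all layers to the GPU backend.
    llm = Llama(
        model_path="models/gemma-2-9b-it-Q4_K_M.gguf",  # hypothetical local path
        n_gpu_layers=n_gpu_layers,
        verbose=False,
    )
    start = time.perf_counter()
    out = llm(PROMPT, max_tokens=128)
    elapsed = time.perf_counter() - start  # includes prompt processing; fine for a rough comparison
    return out["usage"]["completion_tokens"] / elapsed

print(f"CPU only   : {tokens_per_second(0):.1f} tok/s")
print(f"GPU offload: {tokens_per_second(-1):.1f} tok/s")
```

Running both configurations on the same model and prompt gives a rough sense of how much the iGPU contributes, analogous to the 20% uplift AMD reports for the HX 375.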
These findings suggest that AMD's Ryzen AI 300 series processors offer a compelling option for users who want to run LLMs locally. The performance advantage, particularly in token generation speed, could be a significant factor for developers and researchers working with LLMs.
However, it's worth remembering that the testing methodology and the specific models used in this evaluation may not be representative of all LLM workloads. Further testing and real-world benchmarks are needed to fully assess these processors across a wider range of LLM scenarios.
References:
- AMD Blog Post: [Link to AMD blog post]
- LM Studio: [Link to LM Studio website]