AI Chip Supply Eases as Some Companies Resell Nvidia H100 GPUs
According to ITHome, supply problems for Nvidia H100 GPUs used in artificial intelligence (AI) and high-performance computing (HPC) applications are easing, and some companies have begun reselling their surplus H100 80GB processors.
Previously, H100 GPUs had lead times of 8-11 months, prompting some companies to stockpile chips to cover their needs. With lead times now down to 3-4 months, those companies find themselves holding surplus chips.
Major cloud providers such as Amazon Web Services, Google Cloud, and Microsoft Azure now offer H100 GPU rental services, letting businesses and research institutions use the chips on demand without buying and maintaining their own hardware.
Given the convenience of these rental services, some companies that stockpiled H100 GPUs are looking to sell off their surplus chips, reportedly at below-market prices.
The H100 is one of Nvidia's most advanced GPUs, manufactured on TSMC's 4nm process and equipped with 80GB of HBM3 memory. It is designed for AI and HPC workloads and is widely used for training large language models, image generation, and scientific computing.
The easing of supply constraints and the spread of rental services will lower the barrier for businesses and research institutions to access H100 GPUs, promoting innovation and development in AI and HPC.
The English version is as follows:
**Headline:** Easing AI Chip Supply Sees Nvidia H100 GPU Resale Surge
**Keywords:** AI chip, Shrinking lead times, Hoarding and reselling
**Body:**
The supply crunch for Nvidia H100 GPUs used in artificial intelligence (AI) and high-performance computing (HPC) applications is easing, with some companies starting to resell their excess H100 80GB processors.
Previously, H100 GPUs had lead times of 8-11 months, leading some companies to hoard the chips to meet demand. However, with lead times now down to 3-4 months, these companies find themselves with excess inventory.
Major cloud computing providers such as Amazon Web Services, Google Cloud, and Microsoft Azure offer H100 GPU rental services, allowing businesses and research institutions to use the chips on demand without having to purchase and maintain their own hardware.
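As a rough illustration of this on-demand model (not drawn from the report), the following Python sketch requests a single H100-backed instance through AWS's boto3 SDK. The AMI ID, key pair, and region are placeholders, and it assumes p5.48xlarge (AWS's H100-based P5 instance type) is enabled on the account.

```python
# Illustrative sketch: renting H100 capacity on demand instead of buying hardware.
# Assumptions (not from the article): p5.48xlarge is the H100-backed instance type,
# and the AMI ID, key pair, and region below are placeholders for your own values.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder: a deep learning AMI in your account
    InstanceType="p5.48xlarge",       # AWS P5 instances are backed by H100 GPUs
    MinCount=1,
    MaxCount=1,
    KeyName="my-keypair",             # placeholder key pair name
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched on-demand H100 instance: {instance_id}")
```

The other providers expose the same pattern through their own APIs; the point is that capacity is requested when needed and released afterwards rather than purchased up front.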
Due to the convenience of rental services, some companies that had hoarded H100 GPUs are looking to offload their excess chips. These chips are reportedly being sold below market price.
The H100 GPU is one of Nvidia's most advanced GPUs, built on TSMC's 4nm process and featuring 80GB of HBM3 memory. It is designed to handle AI and HPC workloads, finding applications in areas such as training large language models, image generation, and scientific computing.
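To make that hardware description concrete, here is a minimal sketch (again illustrative, not from the report) that uses PyTorch to report which GPU a training job actually sees; on an H100 80GB part it shows roughly 80 GB of HBM3 and the Hopper compute capability of 9.0.

```python
# Minimal sketch: inspecting the GPU available to an AI workload with PyTorch.
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    total_gb = props.total_memory / 1024**3
    print(f"GPU: {props.name}")                # e.g. "NVIDIA H100 80GB HBM3"
    print(f"Total memory: {total_gb:.1f} GB")  # roughly 80 GB on an H100 80GB
    print(f"Compute capability: {props.major}.{props.minor}")  # 9.0 for Hopper
else:
    print("No CUDA-capable GPU visible to this process.")
```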
The easing of supply constraints and the proliferation of rental services will lower the barrier to entry for businesses and research institutions to access H100 GPUs, fostering innovation and advancements in AI and HPC.
【来源】https://www.ithome.com/0/752/328.htm