U.S. officials order Nvidia to halt sales of AI chips to China

2022-12-05 18:36:27

  • Nvidia said on Wednesday that the U.S. government has told it to stop selling certain AI chips to China and Russia.
  • The company said it was applying for a license to continue some Chinese exports but doesn’t know whether the U.S. government will grant an exemption.

Nvidia said the U.S. government told the company on Aug. 26 about a new license requirement for future exports to China, including Hong Kong, to reduce the risk that the products may be used by the Chinese military.

Nvidia said the restriction would affect the A100 and H100 products, which are graphics processing units sold to businesses.

“The license requirement also includes any future Nvidia integrated circuit achieving both peak performance and chip-to-chip I/O performance equal to or greater than thresholds that are roughly equivalent to the A100, as well as any system that includes those circuits,” the filing said.
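The filing describes a two-part test: a future chip is covered only when both its peak performance and its chip-to-chip I/O performance meet or exceed thresholds roughly equivalent to the A100. As a minimal sketch of that conjunctive rule, the following Python uses placeholder threshold values (the actual legal thresholds are not stated in the filing, so the numbers below are assumptions for illustration only):

```python
# Illustrative sketch of the rule described in the filing: a future chip is
# covered only if BOTH metrics are at or above A100-class thresholds.
# These threshold constants are placeholders for illustration, NOT the
# actual values used by the U.S. government.
A100_PEAK_TFLOPS = 312.0  # assumed reference peak performance (placeholder)
A100_IO_GBPS = 600.0      # assumed reference chip-to-chip bandwidth (placeholder)

def license_required(peak_tflops: float, io_gbps: float) -> bool:
    # The rule is conjunctive: both thresholds must be met or exceeded.
    return peak_tflops >= A100_PEAK_TFLOPS and io_gbps >= A100_IO_GBPS

print(license_required(312.0, 600.0))   # A100-class part: covered
print(license_required(1000.0, 64.0))   # fast compute, weak interconnect: not covered
```

Note that a chip strong on only one axis (say, high compute but modest interconnect) would fall outside the rule as described, which is why the requirement names both metrics.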

The company expects that it could lose $400 million in potential sales in China in the current quarter, after previously forecasting revenue of $5.9 billion. The new rule also applies to sales to Russia, but Nvidia said it doesn’t have paying customers there.

In recent years, the U.S. government has applied increasing export restrictions to chips made with U.S. technology because of fears that Chinese companies could use them for military purposes or steal trade secrets.

Nvidia said it was applying for a license to continue some Chinese exports but doesn’t know whether the U.S. government will grant an exemption.

“We are working with our customers in China to satisfy their planned or future purchases with alternative products and may seek licenses where replacements aren’t sufficient,” an Nvidia spokesperson told CNBC. “The only current products that the new licensing requirement applies to are A100, H100 and systems such as DGX that include them.”

NVIDIA A100

Accelerating the Most Important Work of Our Time

NVIDIA A100 Tensor Core GPU delivers unprecedented acceleration at every scale to power the world’s highest-performing elastic data centers for AI, data analytics, and HPC. Powered by the NVIDIA Ampere Architecture, A100 is the engine of the NVIDIA data center platform. A100 provides up to 20X higher performance over the prior generation and can be partitioned into seven GPU instances to dynamically adjust to shifting demands. Available in 40GB and 80GB memory versions, A100 80GB debuts the world’s fastest memory bandwidth at over 2 terabytes per second (TB/s) to run the largest models and datasets.

NVIDIA H100

Built on the Hopper architecture, the NVIDIA H100 is “the new engine of the world’s AI infrastructure.”

AI applications such as speech, conversation, customer service, and recommender systems are driving fundamental changes in data center design. “AI data centers process mountains of continuous data to train and refine AI models. Raw data comes in, is refined, and intelligence goes out: companies are manufacturing intelligence and operating giant AI factories.” These factories run intensively around the clock, and even small improvements in quality can dramatically increase customer engagement and company profits.

H100 will help these factories move faster. The “massive” 80-billion-transistor chip is manufactured on TSMC’s 4-nanometer process.

“Hopper H100 is the biggest generational leap in performance ever: up to 9X the at-scale training performance of the A100, and up to 30X the inference throughput on large language models.”

The H100 GPU sets a new standard for accelerating large-scale AI and HPC, delivering six breakthrough innovations:

  • Advanced chip: H100 is built with 80 billion transistors using a cutting-edge TSMC 4N process designed for NVIDIA’s accelerated computing needs, delivering major advances in AI, HPC, memory bandwidth, interconnect, and communication, including nearly 5 TB/s of external connectivity. H100 is the first GPU to support PCIe 5.0 and the first to use HBM3, enabling 3 TB/s of memory bandwidth. Twenty H100 GPUs can sustain the equivalent of the entire world’s internet traffic, making it possible for customers to deliver advanced recommender systems and large language models running inference on data in real time.
  • New Transformer Engine: The Transformer is now the standard model choice for natural language processing and one of the most important deep learning models ever invented. The H100 accelerator’s Transformer Engine is designed to speed these networks up to 6X over the previous generation without losing accuracy.
  • Second-generation secure Multi-Instance GPU: MIG technology allows a single GPU to be partitioned into seven smaller, fully isolated instances to handle different types of jobs. Compared with the previous generation, the Hopper architecture extends MIG capabilities by up to 7X in cloud environments by offering secure multi-tenant configurations for each GPU instance.
  • Confidential computing: H100 is the world’s first accelerator with confidential computing capabilities to protect AI models and customer data while they are being processed. Customers can also apply confidential computing to federated learning in privacy-sensitive industries such as healthcare and financial services, as well as on shared cloud infrastructure.
  • Fourth-generation NVIDIA NVLink: To accelerate the largest AI models, NVLink combines with a new external NVLink Switch to extend NVLink as a scale-up network beyond the server, connecting up to 256 H100 GPUs at 9X higher bandwidth than the previous generation using NVIDIA HDR Quantum InfiniBand.
  • DPX instructions: New DPX instructions accelerate dynamic programming, used in a broad range of algorithms including route optimization and genomics, by up to 40X compared with CPUs and up to 7X compared with previous-generation GPUs. This includes the Floyd-Warshall algorithm, which can find optimal routes for autonomous robot fleets in dynamic warehouse environments, and the Smith-Waterman algorithm, used in sequence alignment for DNA and protein classification and folding.
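The algorithms named above are classic dynamic-programming kernels. As a plain illustration of what one such kernel computes, here is Floyd-Warshall all-pairs shortest paths on a tiny graph. This sketch is CPU-only Python to show the DP recurrence that DPX-style hardware accelerates; it does not use DPX instructions or a GPU.

```python
# Floyd-Warshall all-pairs shortest paths: a classic dynamic-programming
# algorithm of the kind DPX instructions are designed to accelerate.
INF = float("inf")

def floyd_warshall(dist):
    """dist: n x n matrix of edge weights (INF where no edge, 0 on diagonal).
    Returns the matrix of shortest-path distances between all vertex pairs."""
    n = len(dist)
    d = [row[:] for row in dist]  # work on a copy
    for k in range(n):            # allow vertex k as an intermediate hop
        for i in range(n):
            for j in range(n):
                # DP step: the best i->j path either avoids k or routes through it
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

# Tiny warehouse-style graph: 4 waypoints with directed edge weights.
graph = [
    [0,   5,   INF, 10 ],
    [INF, 0,   3,   INF],
    [INF, INF, 0,   1  ],
    [INF, INF, INF, 0  ],
]
shortest = floyd_warshall(graph)
print(shortest[0][3])  # route 0 -> 1 -> 2 -> 3 costs 5 + 3 + 1 = 9, beating the direct edge of 10
```

The triple nested loop is the signature of this DP family: each cell update depends only on previously computed subproblems, which is what makes the pattern amenable to hardware acceleration.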

Combined, H100’s technology innovations extend NVIDIA’s leadership in AI inference and training to enable real-time, immersive applications using giant-scale AI models. H100 will let chatbots use Megatron 530B, an extremely powerful monolithic Transformer language model, with up to 30X higher throughput than the previous generation while meeting the sub-second latency required for real-time conversational AI. H100 also lets researchers and developers train massive models, such as mixture-of-experts models with 395 billion parameters, up to 9X faster, cutting training time from weeks to days.
