Here Comes Hot Chips!
It’s that time of year again—Hot Chips will soon be upon us. Taking place as a virtual event on August 21–23, the conference will once again present the very latest in microprocessor architectures and system innovations.
As EE Times’ AI reporter, I will of course be looking out for new and interesting AI chips. As in recent years, this year’s program has a clear focus on AI and accelerated computing, but there are also sessions on networking chips, integration technologies, and more. Chips presented will run the gamut from wafer-scale processors to multi-die high-performance-computing GPUs to mobile phone processors.
The first session on day 1 gathers the largest chip companies on the planet to present their biggest GPUs. Nvidia is up first with its flagship Hopper GPU, AMD will present the MI200, and Intel will present Ponte Vecchio. Presenting these one after another contrasts their form factors: Hopper is a monolithic die (plus HBM), the MI200 has two enormous compute chiplets, and Ponte Vecchio has dozens of tiles.
Alongside the big three, a surprise entry in the at-scale GPU category: Biren. The Chinese general-purpose graphics processing unit (GPGPU) maker, founded in 2019, recently lit up its first-gen 7-nm GPGPU, the BR100. All we know so far is that the company uses chiplets to build the GPGPU with “the largest computing power in China,” according to its website. Biren’s chip has been hailed as a breakthrough for the domestic IC industry, as it “directly benchmarks against the latest flagships recently released by international manufacturers.” Hopefully, the company’s Hot Chips presentation will reveal whether this really is the case.
The main machine learning processor session is on day 2. We will hear from Groq’s chief architect on the startup’s inference accelerator for the cloud. Cerebras will also present a deep dive on the hardware-software codesign for its second-gen wafer-scale engine.
There will also be two presentations from Tesla in this category—both on its forthcoming AI supercomputer, Dojo. Dojo has been presented as “the first exascale AI supercomputer” (1.1 EFLOPS at BF16/CFP8), built from the company’s specially designed D1 ASIC in modules the company calls Training Tiles.
Data center AI chip company Untether AI will present its brand-new second-gen inference architecture, called Boqueria. We don’t know the details yet, but we know the chip has at least 1,000 RISC-V cores (will it take Esperanto’s crown as the largest commercial RISC-V design?) and that it relies on an at-memory compute architecture similar to the first generation’s.
AI folks may also want to look out for the Aug. 21 tutorial session on compiling for heterogeneous systems with MLIR.
The other tutorial session covers Compute Express Link (CXL), the CPU/accelerator/memory interconnect standard. The CXL Consortium just announced the third version of its specification, which looks set to become the industry standard now that the backers of previously competing standards have thrown their weight behind CXL.
Elsewhere on the program, we’ll hear from Lightmatter about its Passage device, a wafer-scale programmable photonic communication substrate. Ranovus will present on its monolithic integration technology for photonic and electronic dies.
I’ll also be looking out for Nvidia’s presentation on its Grace CPU, a presentation on a processing fabric for brain-computer interfaces from Yale University, and keynotes from Intel’s Pat Gelsinger and Tesla’s Ganesh Venkataramanan.
The advance program for Hot Chips 34 is available on the conference website.