Competitive Edge on Speed Is the Key to HBM Products’ Success
HBM (High Bandwidth Memory) is a high-value-added DRAM product that provides high bandwidth and is used in computing systems that require high performance, such as supercomputers and AI accelerators. Advances in these computing technologies enabled the recent rise of machine learning, which is based on neural network models that have been studied since the 1980s. As the fastest DRAM, HBM plays a key role in overcoming the limitations of computing technology.
HBM’s high bandwidth requires a variety of elementary technologies and advanced design techniques. The development is further complicated by the fact that a logic die is stacked with 4 to 16 DRAM dies in a 3D structure. Due to this technological complexity, HBM is recognized as a flagship product that demonstrates a manufacturer’s technological prowess.
From the introduction of HBM1 in 2015 to the present, SK hynix has played a leading role in the HBM industry. The success of SK hynix’s HBM products lies in their superior characteristics, and design plays a significant role in ensuring this competitiveness. The HBM Design Team at SK hynix is responsible for implementing specifications as actual circuits and for developing architectures and design technologies that ensure accurate functionality, high performance, and low power consumption. Thanks to its comprehensive understanding of the products, the team also plays a crucial role in planning future products and defining their specifications. Additionally, the team responds to customer feedback and analyzes defects.
The characteristics of a product are typically classified into three categories: power, performance, and area (PPA). This article looks at how to improve performance, or speed competitiveness, through superior design techniques. As previously mentioned, HBM provides high bandwidth, which refers to the amount of data that can be transmitted per unit of time. Thanks to this high bandwidth, HBM is primarily used in applications that require high-performance computing.
Resolving Skew with Machine Learning
Over the last eight years, the bandwidth of HBM products has increased sevenfold, and the industry is now approaching the 1TB/s milestone. Given that bandwidth in other products has increased by two to three times in the same period, it is reasonable to attribute the rapid development of HBM products to the fierce level of competition among memory manufacturers.
Memory bandwidth indicates how much data can be transmitted per unit of time, and increasing the number of data transmission lines is the easiest way to increase it. In fact, HBM is built with as many as 1,024 pins per product, and the number of data transmission paths inside HBM has grown significantly with each generation, as shown in Figure 2.
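As a back-of-the-envelope illustration of how pin count and per-pin speed combine into bandwidth, consider the sketch below. The 1,024-pin figure comes from the article; the per-pin data rate is an illustrative assumption, not an official specification.

```python
# Illustrative HBM bandwidth arithmetic: peak bandwidth scales with both
# the number of data pins and the per-pin data rate.

def hbm_bandwidth_gbyte_s(data_pins: int, per_pin_rate_gbit_s: float) -> float:
    """Peak bandwidth in GB/s: pins * per-pin rate (Gbit/s) / 8 bits per byte."""
    return data_pins * per_pin_rate_gbit_s / 8

# 1,024 data pins (as in the article) at a hypothetical 6.4 Gbit/s per pin:
print(hbm_bandwidth_gbyte_s(1024, 6.4))  # 819.2 GB/s, approaching 1 TB/s
```

This is why widening the interface is the most direct lever: doubling the pin count doubles bandwidth even at an unchanged per-pin rate.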
However, there are constraints to increasing the number of transmission paths within a chip’s limited size. This is due not only to the increase in data transmission lines themselves, but also to the increase in transmission/reception circuits attached to each line. Furthermore, as the number of transmission lines grows, it becomes more difficult to match the length and configuration of each line, which hinders increases in operating speed.
The difference in timing between transmission lines is what we define as skew. To reduce this skew, the total length and electrical characteristics of each transmission line should be designed to match. However, because HBM contains thousands of internal transmission lines, matching them one-by-one by hand is nearly impossible. SK hynix turned to machine learning to solve this problem: reinforcement learning attaches surplus transmission paths to each transmission line to reduce the skew across all paths, accurately optimizing the skew without manual work from engineers.
Figure 3 shows this optimization process. Several lines bent at 90-degree angles have different characteristics, so red surplus lines must be added to reduce skew. Starting with a random solution (shown on the left of Figure 3), reinforcement learning is used to generate the optimal result (shown on the right). This reduces the skew by 30% from 100 ps to 70 ps.
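To make the idea concrete, the toy sketch below equalizes line delays by appending surplus delay segments, using a simple greedy pass as a stand-in for the production reinforcement-learning optimizer described above. The segment granularity and delay values are illustrative assumptions.

```python
# Toy model of skew equalization: each transmission line i has a base
# delay (ps), and we may append surplus segments, each adding SEG_PS of
# delay, to bring the faster lines up toward the slowest one. This greedy
# pass is a simplified stand-in for the reinforcement-learning approach
# in the article; all numbers are illustrative.

SEG_PS = 5.0  # assumed delay contributed by one surplus segment

def skew(delays):
    """Skew = spread between the slowest and fastest line."""
    return max(delays) - min(delays)

def equalize(base_delays, seg_ps=SEG_PS):
    """Pad every line toward the slowest line in whole seg_ps quanta."""
    target = max(base_delays)
    padded = []
    for d in base_delays:
        n_segs = round((target - d) / seg_ps)  # whole segments only
        padded.append(d + n_segs * seg_ps)
    return padded

base = [100.0, 130.0, 117.0, 162.0, 145.0]
opt = equalize(base)
print(f"skew before: {skew(base):.0f} ps, after: {skew(opt):.0f} ps")
```

The residual skew here comes from the discrete segment size; the appeal of a learned optimizer is that it can jointly trade off segment placement, routing constraints, and electrical effects that this toy model ignores.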
Improving Speed with Optimization of PVT-aware Timing
Even if the skew is optimized, the issue of matching the relative timing between the various signals remains. For example, since there is one clock signal* for every 32 data signals, the clock path necessarily uses a different circuit configuration than the data paths it controls. These differing circuit configurations also respond differently to PVT (Process, Voltage, Temperature) variation. In any case, the clock edge must fall within a specific timing window of the data, and as the operating speed increases, this window shrinks, further complicating the design.
*Clock signal: In synchronous digital circuits, a clock signal oscillates between a high and a low state and is used like a metronome to coordinate actions of digital circuits.
To address this issue, SK hynix applied PVT-aware timing optimization technology in HBM3 to detect PVT variations and find the optimal timing. This technology determines how many stages of a unit circuit span one cycle of the external clock input, whose period is accurately known, and based on this data automatically optimizes the circuit configuration in the main timing-margin circuit. As shown in Figure 4, PVT variation generally shifts the clock toward one side of the data window, but PVT-aware timing optimization keeps the clock centered in all cases, improving speed.
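The mechanism can be sketched as follows: count how many unit-delay stages fit into one period of the accurate external clock to infer the effective unit delay at the current PVT corner, then program the clock path so its edge lands at the center of the data-valid window. The replica-counting model and all numbers below are illustrative assumptions, not SK hynix’s actual circuit.

```python
# Minimal sketch of PVT-aware clock centering. The unit-circuit delay is
# inferred from how many stages fit in one period of the accurate external
# clock; the clock path is then set in whole stages so its edge lands at
# the middle of the data-valid window. All values are illustrative.

def measure_unit_delay(clock_period_ps: float, stages_per_period: int) -> float:
    """Effective delay of one unit circuit at the current PVT corner."""
    return clock_period_ps / stages_per_period

def center_clock(data_window_ps: float, unit_delay_ps: float) -> int:
    """Number of unit-delay stages placing the clock edge at the center
    of the data-valid window (rounded to whole stages)."""
    return round((data_window_ps / 2) / unit_delay_ps)

period = 625.0  # 1.6 GHz external clock, illustrative
# A fast corner fits more stages per period, i.e. a smaller unit delay.
fast = measure_unit_delay(period, stages_per_period=25)  # 25.0 ps/stage
slow = measure_unit_delay(period, stages_per_period=18)  # ~34.7 ps/stage

window = 312.0  # data-valid window, illustrative
print(center_clock(window, fast))  # 6 stages at the fast corner
print(center_clock(window, slow))  # 4 stages at the slow corner
```

The key point is that the same centering target is hit with a different stage count at each PVT corner, which is what keeps the clock centered across conditions instead of drifting to one side of the window.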
To improve bandwidth, the key performance indicator of HBM, SK hynix is developing a wide range of design technologies including data path optimization, machine learning-based signal line optimization, PVT-aware timing optimization, and new process technologies. The base die differs from a typical DRAM die in that it contains no memory cells, and by leveraging this characteristic, SK hynix is developing HBM-optimized process technology as well as advanced packaging technologies for 3D stacks.
SK hynix has achieved rapid development of HBM through these efforts. However, meeting high customer expectations will require new technological developments that break existing frameworks. Furthermore, collaboration with customers, foundries, IP companies, and other players in the HBM ecosystem is required to drive change at the system level, and a change in business model is also needed. As a leading HBM company, SK hynix will devote all its capabilities to the long-term development of HBM, advancing computing technology in the process.
The post Competitive Edge on Speed Is the Key to HBM Products’ Success appeared first on EE Times.