What is AMD's new High Bandwidth Memory? HBM achieves higher bandwidth than DDR4 and GDDR5 while drawing less power. To do this, it uses 3D packaging: several memory dies are stacked together and connected by many I/O through-silicon vias (TSVs) and microbumps. The first prototypes were built with four stacked memory dies on top of one base die that implements the logic and physical layer. HBM uses far more I/O pins per memory package, up to 1,024, where DDR3 and GDDR5 used 8–32 I/O pins per memory chip. A prototype HBM package consists of four memory dies, each implementing two independent 128-bit channels (193 I/O pins), for a total of eight channels per package.
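The stack arithmetic above can be checked with a small sketch; all figures (four dies, two channels per die, 128 bits per channel) come straight from the text:

```python
# HBM prototype stack geometry, as described in the text.
dies_per_stack = 4        # stacked DRAM dies per package
channels_per_die = 2      # independent channels on each die
bits_per_channel = 128    # data width of each channel

channels_per_package = dies_per_stack * channels_per_die
data_width = channels_per_package * bits_per_channel

print(channels_per_package)  # 8 channels per package
print(data_width)            # 1024 data I/O, matching the "up to 1,024" figure
```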
High Bandwidth Memory differs from GDDR5 in a few other ways as well. For example, the bus on an HBM stack is 1,024 bits wide, versus 32 bits on a GDDR5 chip. As a result, High Bandwidth Memory can, and likely needs to, be clocked much lower. Even at much lower clocks, though, that wider memory bus and vertical stacking result in much more bandwidth: more than 100 GB/s for HBM versus 28 GB/s for GDDR5. HBM also runs at a significantly lower voltage, which equates to lower power consumption. All told, HBM offers much more bandwidth than traditional GDDR5 at roughly 50% less power. The implementation of HBM coming on at least one future AMD GPU, however, will use a four-stack design and be limited to 4GB of memory.
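The bandwidth comparison follows from bus width times per-pin data rate. The per-pin rates below are illustrative assumptions, not figures from the text (GDDR5 commonly ran around 7 Gbit/s per pin; first-generation HBM around 1 Gbit/s), chosen to reproduce the quoted 28 GB/s GDDR5 figure:

```python
def peak_bandwidth_gbs(bus_width_bits, per_pin_rate_gbps):
    """Peak bandwidth in GB/s: bus width times per-pin rate, divided by 8 bits/byte."""
    return bus_width_bits * per_pin_rate_gbps / 8

# Assumed per-pin rates (not from the text): ~7 Gbit/s GDDR5, ~1 Gbit/s HBM1.
gddr5 = peak_bandwidth_gbs(32, 7.0)    # one 32-bit GDDR5 chip
hbm = peak_bandwidth_gbs(1024, 1.0)    # one 1,024-bit HBM stack

print(gddr5)  # 28.0 GB/s, matching the figure in the text
print(hbm)    # 128.0 GB/s, consistent with "more than 100GB/s"
```

Note how HBM reaches several times the bandwidth despite a per-pin rate an order of magnitude lower; the 32x wider bus more than compensates, which is why the lower clock (and lower voltage) still wins on both bandwidth and power.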