AI training models are growing in both size and sophistication at a breathtaking rate, requiring ever-greater bandwidth and capacity. With its unique 2.5D/3D architecture, HBM4 can deliver terabytes per second of bandwidth and unprecedented capacity in an extremely compact form factor. Join Kevin Yee from Samsung and Nidish Kamath from Rambus as they discuss the design considerations of HBM4 memory subsystems (PHY, memory controller, and packaging) in next-generation AI SoCs.
**Kevin Yee**
Sr. Director of IP and Ecosystem Marketing, Samsung Foundry

Kevin Yee is Sr. Director of IP and Ecosystem Marketing at Samsung Foundry, responsible for driving strategic partnerships for IP enablement and the SoC ecosystem. With over 30 years in the semiconductor industry, he has served in a variety of senior management roles spanning R&D engineering, product planning, sales, marketing, and business development at system, semiconductor, FPGA, IP/VIP, and EDA companies. Kevin's background includes system/ASIC design, FPGA architecture, and IP development, and he holds several patents in design architecture. He is actively involved in the HPC/AI, automotive, and chiplet spaces, as well as in industry standards organizations such as UCIe, OCP, CXL, JEDEC, PCI-SIG, USB-IF, and MIPI, driving the latest industry standards and technologies. He holds a Bachelor of Science in Electrical Engineering from the University of California.
**Nidish Kamath**
Director of Product Management for Memory Interface IP, Rambus

Nidish Kamath is the Director of Product Management for Memory Interface IP at Rambus. He previously held marketing and product management roles at AMD, Kioxia (formerly Toshiba Memory), Avalanche Technologies, Brocade, and Qualcomm, where he worked on computational storage, SmartNICs, and GPU cluster networking solutions. He has served in various standards bodies and industry associations, including SNIA, the Center for Research in Open Source Software (CROSS), the CXL Consortium, UEC, and JEDEC.