Infrastructure advancements are reshaping numerous arenas; in particular, the integration of artificial intelligence infrastructure and memory technology is transforming the data landscape.
The discussions at HPE Discover 2024 highlighted the industry’s relentless pursuit to overcome the memory wall, a challenge arising from the imbalance between CPU/GPU capabilities and memory performance. These developments are crucial in supporting the burgeoning demands of AI and large language models, paving the way for more efficient and powerful computing solutions, according to Alan Walker (pictured, right), senior director of sales at Samsung Semiconductor Inc.
“When we think about the traditional memory pyramid with the cache memory at the top, and then your system memory storage underneath, that pyramid is now growing several layers and becoming a much larger pyramid,” Walker said. “At the top, we have high bandwidth memory…under the memory itself, we’re now adding additional capabilities…and then even under that, you now have multiple different types of SSDs…all of that is to help us solve what we’re calling the memory wall.”
Walker and R.C. Hurbanis (left), senior manager of Device Solutions Americas business enablement at Samsung Semiconductor, spoke with theCUBE Research’s Dave Vellante and Rebecca Knight at HPE Discover, during an exclusive broadcast on theCUBE, SiliconANGLE Media’s livestreaming studio. They discussed AI infrastructure integration, advancements in memory technology and future developments in memory capacity and bandwidth. (* Disclosure below.)
How AI infrastructure advancements are changing memory technology
The future of memory technology is promising, driven by continuous innovations. These advancements aim to meet the growing demands of AI and large-scale data processing, Hurbanis noted.
“One of the challenges is HBM is difficult to make because it’s stacking a bunch of layers of silicon on top of one another and connecting them — that’s challenging to do,” he said. “That takes some very special technology to do … we’re not there. A big part of what the industry is doing is spending CapEx to increase … the capabilities to do that. That’s also why it’s going to take a little bit of time for the industry to catch up.”
AI’s growing demand for higher memory capacity and bandwidth is shaping industry priorities, Hurbanis explained. The focus remains on developing cost-effective and energy-efficient solutions.
“There’s new technologies that we’re working on, we’ve mentioned a few earlier…these are going to come into play to help ultimately also bring down that memory wall,” he said. “Once those guys come out with the next generation of their chips, then the industry starts again and we start attempting to improve the infrastructure and address whatever other bottleneck might come to be.”
Here’s the complete video interview, part of SiliconANGLE’s and theCUBE Research’s coverage of HPE Discover:
(* Disclosure: TheCUBE is a paid media partner for HPE Discover. Neither Hewlett Packard Enterprise Co. and Intel Corp., the primary sponsors of theCUBE’s event coverage, nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)
Photo: SiliconANGLE