
NetApp unveils new data storage infrastructure for enterprise AI workloads


Data storage and management company NetApp Inc. has introduced a new piece of hardware for companies running demanding workloads such as generative artificial intelligence, VMware virtualization and enterprise databases in their on-premises data centers.

The NetApp AFF A-Series systems announced today were launched alongside the new NetApp AIPod with Lenovo ThinkSystem servers for Nvidia Corp.’s OVX, which is designed specifically to support retrieval augmented generation for generative AI workloads.

The company said the all-flash NetApp AFF A-Series systems are designed to eliminate storage silos and complexity, helping to accelerate advanced workloads while optimizing storage costs. As a unified data storage solution, NetApp said, they’re suitable for any kind of data, application or cloud, delivering up to double the performance of the previous generation, with support for 40 million input/output operations per second, 1 terabit-per-second throughput and 99.9999% data availability.

With support for block, file and object storage protocols and native integration with Amazon Web Services, Google Cloud and Microsoft Azure, the new systems will enable enterprises to consolidate multiple workloads, lower the cost of data and operate without silos, the company said. They feature industry-leading raw-to-effective capacity, with always-on data reduction running in the background to optimize storage efficiency, and come with integrated, real-time ransomware detection to help guard against attacks.

Steve McDowell of NAND Research Inc. gave NetApp’s A-Series systems the thumbs up, telling SiliconANGLE that NetApp is bringing the right set of capabilities to an all-flash market that’s focused on high performance.

“As we’ve seen from other vendors, the move to PCI gen5 is really what’s behind the ‘double performance’ claims rather than anything magical in ONTAP,” he said. “But combining that new performance with everything else NetApp offers, such as ONTAP’s converged block/file/object capabilities, industry-leading flash capacity and real-time malware detection, means that the new A-series becomes the current product to beat for all-flash converged storage.”

Sandeep Singh, NetApp’s senior vice president and general manager of Enterprise Storage, said data is fast becoming enterprises’ most valuable asset, so the underlying infrastructure that supports it is of paramount concern. “NetApp’s extensive, unified data storage portfolio, from on-premises to the public clouds, makes it the go-to solution for enterprises looking to have the robustness for the most demanding workloads,” he said.

Converged infrastructure for RAG operations

Rather than consolidating multiple kinds of workloads, the NetApp AIPod with Lenovo ThinkSystem servers for Nvidia’s OVX is a converged infrastructure platform that’s laser-focused on one very specific task: retrieval augmented generation, or RAG, a technique that allows companies to enhance generative AI models with their own proprietary data.

Although the potential applications of large language models such as GPT-4 and Gemini Pro are enormous, they also suffer from a lack of knowledge. Most LLMs are trained on out-of-date public information and have no access to the more specific knowledge held within corporate servers. RAG is the process through which LLMs can access this knowledge, plugging into enterprises’ private data repositories to enhance their knowledge with key information about their products, services, business processes and operations, and more besides.
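To make the retrieval step concrete, the sketch below shows the basic RAG pattern in plain Python. It is purely illustrative and is not NetApp, Lenovo or Nvidia code: the bag-of-words scorer stands in for a real embedding model, the in-memory document list stands in for a vector store sitting on enterprise storage, and helper names such as retrieve and build_prompt are hypothetical.

```python
# Minimal, illustrative sketch of retrieval augmented generation (RAG).
# Hypothetical example: the bag-of-words scorer stands in for a real
# embedding model, and the in-memory list stands in for a vector store
# backed by enterprise storage.
import math
from collections import Counter

# Proprietary documents an LLM would never have seen during training.
CORPORATE_DOCS = [
    "The Q3 returns policy allows exchanges within 45 days for enterprise customers.",
    "Model X-200 firmware 2.1 fixes the thermal shutdown issue reported in July.",
    "Support tickets tagged 'priority-1' must be acknowledged within 15 minutes.",
]

def embed(text: str) -> Counter:
    """Toy 'embedding': lowercase bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Augment the user's question with the retrieved enterprise context."""
    ctx = "\n".join(f"- {c}" for c in context)
    return f"Answer using only the context below.\n\nContext:\n{ctx}\n\nQuestion: {query}"

if __name__ == "__main__":
    question = "What firmware fixes the thermal shutdown issue on the X-200?"
    context = retrieve(question, CORPORATE_DOCS)
    # In production this augmented prompt would be sent to an LLM running on
    # the GPU tier; here we simply print it.
    print(build_prompt(question, context))
```

The point of the pattern is that the model’s answer is grounded in data fetched at query time from the enterprise’s own repositories, which is the retrieval-heavy workload the AIPod’s storage and GPU tiers are intended to serve together.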

The NetApp AIPod is based on Lenovo Group Ltd.’s high-performance ThinkSystem SR675 V3 servers, which incorporate Nvidia’s L40S graphics processing units. It also integrates NetApp’s advanced data management capabilities and is designed to support the Nvidia OVX architecture and Nvidia Spectrum-X networking. As such, it provides a complete infrastructure platform for both RAG and AI inference operations, enabling applications such as AI chatbots, knowledge management and object detection.

McDowell said NetApp and Lenovo have enjoyed a long and fruitful collaboration that moves a step forward with the launch of the new AIPod. “It’s a tacit acknowledgement that AI training is a cloud-first endeavor and that the main focus for enterprises in the near term is on fine-tuning pre-trained AI models,” he pointed out. “The AIPod is built precisely for that, and it stands alone in providing enterprises with a turnkey solution for generative AI inferencing.”

According to the analyst, the main benefit of the AIPod is that it takes the guesswork out of building a high-performance, GPU-powered system for on-premises AI, which is not something many IT workers are comfortable with. “It’s similar to what Nvidia’s DGX does for training, and it is a much-needed solution that’s going to see a positive market response,” McDowell added.

Kamran Amini, Lenovo’s vice president and general manager of server, storage and software-defined solutions, said the new offering will enable every business to leverage the most advanced generative AI technologies. “As customers deploy AI, they demand business critical availability, ease of management, and infrastructure efficiency,” he explained. “The NetApp AIPod with Lenovo ThinkSystem servers for Nvidia OVX deliver optimized validated solutions to make generative AI more accessible for businesses of every size.”

Data management updates

In addition to the new hardware, NetApp announced a host of new data management capabilities within its ONTAP software, along with five new StorageGRID models designed to speed up access to large volumes of unstructured data. The new models enable companies to access unstructured information on flash-based systems at a lower price point, delivering a threefold performance increase, an 80% footprint reduction and power consumption savings of up to 70%, the company said.

A new capability known as SnapMirror Active Sync is designed to safeguard enterprises’ business operations by implementing a symmetrical, active-active business continuity plan that spans two separate data centers, so if one goes offline for any reason, the other can pick up the slack. Meanwhile, FlexCache with Writeback is a new tool for creating local copies of key data for distributed teams, reducing latency and ensuring uninterrupted access to that information. Teams can both read from and write to the local copies, allowing them to work more efficiently while maintaining consistency with the enterprise’s core data centers.

NetApp also debuted a new Cyber Vault Reference Architecture for “logically air-gapped storage,” based on the latest advances in secure data storage, autonomous real-time ransomware detection and rapid data restoration.

International Data Corp. analyst Ashish Nadkarni said enterprises are desperate to leverage their data in new ways to support cutting-edge AI capabilities, and this is placing more demands on the underlying infrastructure. “They need storage infrastructure that gives them the flexibility to combine their on-premises data storage with cloud environments,” he explained. “NetApp’s strategy of delivering powerful unified data storage that works with any data protocol, in any environment, to run any workload gives its customers the power and flexibility they need to face whatever challenges come their way.”

Image: SiliconANGLE/Microsoft Designer
