Thursday, November 21, 2024

Hitachi Vantara: Infrastructure immaturity impinges on AI intelligence


Technology infrastructure provisioning, optimisation controls and a wider capability for governance across entire IT stacks are factors impinging on the development of generative artificial intelligence (gen-AI). This is the claim made by Hitachi Vantara, the data storage, infrastructure and cloud management subsidiary of Hitachi, Ltd. In analysis notes stemming from a research report co-sponsored with Enterprise Strategy Group (ESG) and spanning Western Europe and the Americas, the company says an increasing number of firms have identified at least one potential use case for gen-AI, but that operational aspects such as governance and usage policy controls are hindering progress.

The infrastructure and data tools company’s survey “finds” that only one-third of firms believe their infrastructure and data ecosystem is well prepared for implementing gen-AI solutions. If this is a somewhat preloaded finding, the point is (arguably) still well made, i.e. there is a perhaps painful tradeoff in modern IT departments between the need for lower-cost, more capable infrastructure and the need to also cope with privacy and latency concerns.

The gen-AI bandwagon

“Enterprises are clearly jumping on the gen-AI bandwagon, which is not surprising, but it’s also clear that the foundation for successful gen-AI use is not yet built and its full potential cannot be realised,” said Ayman Abouelwafa, chief technology officer at Hitachi Vantara. “Unlocking the true power of gen-AI, however, requires a strong foundation with a robust and secure infrastructure that can handle the demands of this powerful technology.” 

Abouelwafa and the team say their report uncovered some expected and some less expected results. Something many of us might have realised, had we stopped to think about it amid all the generative hype, is that many businesses simply don’t have the process automation and optimisation scope to bring in new technologies like this so rapidly. It might sound mundane, but improving operational efficiency and bolstering fraud detection are the areas most heavily cited as key focus points.

RAG-time

Most organisations currently prefer non-proprietary models and the majority say they will use Retrieval-Augmented Generation (RAG). Over the long term, however, organisations expect the use of proprietary models to increase – six-fold, according to the survey – as businesses gain expertise and seek competitive differentiation.

“The need for improved accuracy shows organisations prioritising the most relevant and recent data gets incorporated into a Large Language Model (LLM), followed by the desire to keep pace with technology, regulations and shifting data patterns,” said Mike Leone, principal analyst at Enterprise Strategy Group. “Managing data with the right infrastructure will not only enable greater levels of accuracy, but also improve reliability as data and business conditions evolve.”
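
For readers less familiar with the term, the core of a RAG pipeline is simple: fetch the most relevant and recent documents for a query, then fold them into the prompt so the model answers from current enterprise data rather than stale training data. The sketch below is purely illustrative – the toy corpus, lexical scoring function and generate() stub are our own assumptions, not part of the Hitachi Vantara or ESG material.

```python
# Minimal RAG sketch (illustrative only): retrieve the documents most
# relevant to a query, fold them into the prompt, then call the model.
from collections import Counter

# Toy document store standing in for an enterprise knowledge base.
CORPUS = {
    "policy-2024": "Data residency rules require confidential records to stay in-region.",
    "ops-runbook": "Fraud detection alerts are triaged within 15 minutes of ingestion.",
    "storage-faq": "Hot-tier storage capacity is reviewed quarterly against inference demand.",
}

def relevance(query: str, document: str) -> int:
    """Crude lexical relevance: how many document words also appear in the query."""
    query_words = Counter(query.lower().split())
    return sum(1 for word in document.lower().split() if word in query_words)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the text of the k highest-scoring documents."""
    ranked = sorted(CORPUS.values(), key=lambda doc: relevance(query, doc), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Assemble the augmented prompt: retrieved context first, question last."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query))
    return f"Use only the context below to answer.\nContext:\n{context}\nQuestion: {query}"

def generate(prompt: str) -> str:
    """Placeholder for the actual model call (hosted API or self-managed LLM)."""
    return f"[model response to prompt of {len(prompt)} characters]"

if __name__ == "__main__":
    print(generate(build_prompt("How quickly are fraud alerts handled?")))
```

In production the word-overlap scoring would be replaced by vector embeddings and a proper index, but the shape of the loop – retrieve, augment, generate – is what keeps model answers tied to the freshest governed data, which is precisely the accuracy point Leone is making.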

When it comes to the top concerns and challenges being faced, more than four in five respondents agreed on concerns around ensuring data privacy and compliance when building and using applications that use gen-AI, while three-quarters agreed that data quality issues needed to be addressed before they could accept gen-AI outputs.

Specifically industry-specific

Hitachi Vantara says it is actively building AI solutions such as the recently introduced Hitachi iQ, an industry-optimised suite for AI workloads that goes beyond basic integration and testing by layering industry-specific capabilities on top of the AI solution stack, making it more relevant to an organisation’s business. Complementing Hitachi iQ, Hitachi’s Center for Excellence (COE) for generative AI supports customers on their journeys, while also promising to help control risks.

According to Hitachi Vantara chief product officer Octavian Tanase, some 60% to 80% of AI projects fail, or at least encounter significant setbacks or cost overruns.

“It’s no surprise that enterprise leaders (on the business and IT side) are cautious. The hesitancy to embrace AI stems from a fundamental lack of trust and an amplified perception of risks, spanning failures, escalating costs, and potential damage to reputation. There are a few pitfalls to avoid i.e. while public clouds provide excellent platforms for playgrounds and learning in the trenches, not all public clouds are created equally and generally they pose their own challenges regarding cost and lock-in, real-time performance (think autonomous robotics) and the intricacies of solution assembly. Compliance considerations, such as data residency for proprietary and confidential data, further complicate their adoption.”

On the other hand, says Tanase, traditional (on-premises) datacentres face their own readiness challenges. He thinks most are ill-equipped to meet the demands of high-performance computing, grappling with issues of latency, throughput, power consumption and cooling requirements, not to mention the burden of traditional high capital expenses.

Meticulously right-sized infrastructure 

“It’s time for a paradigm shift in enterprise AI. While the foundational elements of data, models and IT infrastructure remain essential, a re-engineered approach is necessary to meet the unique demands of enterprises. A more refined data landscape spanning edge, core and hybrid cloud environments is crucial. Models should go beyond LLMs, incorporating Small Language Models (SLMs) for specialised, real-time applications like grid optimisation, futures trading or carrier routing. The IT infrastructure must be meticulously right-sized, cost-effective and sustainable, addressing critical needs in storage, computing (for both training and inference) and networking.”
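
To make the “beyond LLMs” point concrete, one plausible pattern is a simple routing layer that sends specialised, latency-sensitive requests to a compact SLM running close to the data and reserves the larger general-purpose model for open-ended work. The sketch below is an assumption-laden illustration – the latency threshold, task labels and model names are invented for the example and are not part of Hitachi’s stack.

```python
# Illustrative model-routing sketch: pick the smallest model that meets
# the task and latency requirements, reserving the big LLM for the rest.
from dataclasses import dataclass

@dataclass
class Request:
    task: str              # e.g. "grid_optimisation", "open_qa"
    max_latency_ms: int    # latency budget the caller can tolerate

# Specialised, real-time workloads best served by a compact model near the data.
SLM_TASKS = {"grid_optimisation", "futures_trading", "carrier_routing"}

def choose_model(req: Request) -> str:
    """Route to an edge SLM for specialised or tight-latency work, else to the core LLM."""
    if req.task in SLM_TASKS or req.max_latency_ms < 100:
        return "edge-slm"   # right-sized, low-latency, runs close to the data
    return "core-llm"       # broader capability, higher cost and latency

if __name__ == "__main__":
    print(choose_model(Request(task="grid_optimisation", max_latency_ms=50)))  # edge-slm
    print(choose_model(Request(task="open_qa", max_latency_ms=2000)))          # core-llm
```

The design choice here mirrors Tanase’s right-sizing argument: routing decisions, not ever-larger models, determine whether the infrastructure underneath stays cost-effective and sustainable.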

Tanase’s extended comments on re-engineered, refactored, re-imagined infrastructure and data management make for (arguably) more compelling reading than his firm’s survey analysis; they pave the way for a more open discussion about achieving more nimble management of the infrastructure element of the technology stack. As MIT Technology Review reminds us, the training of a single AI model can emit as much as 300 tons of CO2, so we need manageable, ethical, accountable and environmentally conscious AI practices as well as smart ones. If we are going to build explainable AI (XAI) that is also responsible AI (RAI) and effective, in a way that embodies a core appreciation for sustainability, then we need a firm foundation to perform upon.
