Friday, January 31, 2025

How JPMorgan Chase’s infrastructure chief keeps the AI engine humming

When the largest U.S. bank needs compute, there’s little margin for error.

Executives at JPMorgan Chase don’t want IT teams waiting in line to access resources needed to spin up an AI use case, upgrade a customer service portal, enhance market forecast analytics or support critical business applications.

“We’re putting a lot of effort into improving the ability of our software engineers to be productive as they do development,” the company’s CFO, Jeremy Barnum, said during a January earnings call.

As technology drives business outcomes, infrastructure availability becomes a competitive advantage — and a priority.

“For our capacity, we like to use a just-in-time model, which usually means planning years before we need the capacity — as much as five to 10 years out,” Darrin Alves, CIO of infrastructure platforms at JPMorgan Chase, told CIO Dive. “If you wait until you’re only a year or two away, you won’t have the capacity available when you need it.”

Alves oversees the digital systems that connect more than 600 JPMorgan offices globally and over 5,000 individual branch locations to a complex hybrid ecosystem of private data centers, colocation facilities and hyperscaler deployments.

“We have everything from mainframe to quantum computers and blockchain to public cloud and generative AI,” Alves said.

Like cloud before it, generative AI is transforming banking processes, as coding assistants help unlock data in legacy COBOL applications, smarter chatbots improve customer experience and agentic solutions ease workflow processes.

JPMorgan Chase is an industry leader in AI adoption. The company widely rolled out its in-house generative AI assistant LLM Suite last September and ranked first for AI maturity in Evident Insights’ analysis of the sector’s top 50 financial institutions last year.

Modernization investments are integral to AI success going forward, Alexandra Mousavizadeh, co-founder and CEO of Evident, said.

“Whether banks are building their own AI or buying third-party solutions, the end result will only be as good as the underlying infrastructure,” said Mousavizadeh. “As the drive towards organizationwide AI deployment ratchets up, we’ll start to see which institutions have placed the right bets.”

Computing capacity

Major transformations call for rigorous planning and a modernized infrastructure foundation. There are multiple challenges to surmount, especially in banking.

“Anytime you go through an application, you first have to solve for legal and compliance,” Alves said. “It’s got to meet our security and risk and controls and then it’s customer experience.”

Infrastructure and spending are also key considerations, though they are further down the decision tree.

“We have to decide where it’s going to fit into our architecture and the last step is to optimize for cost,” said Alves.

Finding the right compute environment and the capacity to run generative AI use cases became an enterprise pain point last year as hyperscalers raced to deploy GPU servers and build out data centers.

In August, CBRE analysts estimated that nearly 80% of data center space under construction had already been leased. Gartner forecast that 40% of existing AI data centers will be operationally constrained by power shortages within two years.

“AI expansion is putting constraints on the entire supply chain for data centers,” said Alves.

JPMorgan Chase navigated around the hardware hurdles proactively.

“We’re a company that wants to make sure we can control our own destiny, for our own business reasons and for regulatory reasons,” Alves said. “So, we partner closely with the hardware manufacturers and the colocation and data center designers.”

The bank is also vigilant about refreshing its data center infrastructure regularly, as it expands capacity where needed and leverages public cloud.
