Article

What “AI at Scale” Really Looks Like in Banking

Learn how Teradata helps a leading bank use AI to analyze customer conversations and uncover hidden insights.

Martin Willcox
September 30, 2025 · 4 min read

One of the perks of my job as Vice President of Industry and Analytics at Teradata is that I get to work with lots of different, smart people across different industries and geographies. So that’s how I found myself in Edinburgh recently as part of a panel discussion organized by The Data Lab on the impact of AI in financial services.

One of the questions we were asked was, “What does ‘AI at scale’ really mean in the context of financial services?”

Now that’s a great question—and if you want to see me stammer my way through an answer with a few too many “umms” and “errs,” then, umm, you can, because, err, the event was livestreamed.

And if you’re in a hurry, here’s an answer that’s simultaneously more comprehensive and to the point!

Let me give you a very specific example of what “AI at scale” means to one company.

One of Teradata’s customers is a leading Asian bank. In fact, they are the largest and most profitable bank in their local market. They have built a mature enterprise data warehouse on Teradata Vantage® that is evolving into a data lakehouse that supports a variety of analytic use cases and business processes: Customer 360° analytics, risk management, credit scoring, regulatory reporting, etc.

The bank takes the view that NPS (net promoter score) is a leading indicator of customer engagement, and that even marginal improvements in NPS can significantly improve retention and cross-sales. Their challenge has been understanding what is actually driving NPS.

Our team at Teradata identified that they were capturing 50,000 inbound customer conversations per week from their website and online banking app. Every one of those conversations represents a real customer want or need. And yet none of those interactions were being systematically analyzed to understand those needs, or to assess whether the interactions left customers feeling better about the bank.

Actually, I think that’s pretty common across large B2C organizations. We’ve been saying for years that there is real value in understanding the interactions we have with customers that precede purchase decisions (or a decision not to purchase), but the traditional “frequentist” methods of text analytics that have been available to us are often pretty hopeless when confronted by a large corpus of messy, real-world text data. So most of that data gets left on the data center floor.

We were able to take those customer chats and vectorize them using a task-specific language model in-database. The resulting vector embeddings could then be clustered using k-means into roughly 20 main themes and 50 sub-themes. You can think of this a bit like retrieval-augmented generation (RAG). (If you’re a practitioner looking for practical guidance on how to combine unstructured and structured data, there’s a nice recipe here.) We can then apply a large language model (LLM) to create topic summaries that help the bank understand the challenges that customers face in navigating the bank’s processes, and how those challenges affect NPS. And by vectorizing new customer chats as they occur, we can now track and understand those drivers of NPS continuously.
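For practitioners who want to see the shape of that pipeline, here is a minimal sketch. It uses sentence-transformers and scikit-learn as open-source stand-ins (my assumptions, not the bank’s stack); the production solution runs the embedding and clustering in-database on Vantage, and the LLM call below is just a placeholder.

```python
# Minimal sketch of the embed -> cluster -> summarize pattern described above.
# sentence-transformers and scikit-learn are stand-ins for the in-database,
# task-specific model; the LLM call is a placeholder.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

chats = [
    "I can't find where to dispute a card transaction in the app.",
    "How do I raise a chargeback for a payment I don't recognise?",
    "The login keeps timing out on the online banking site.",
    # ...in practice, ~50,000 conversations per week
]

# 1. Vectorize each conversation with an embedding model.
model = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in model
embeddings = model.encode(chats, normalize_embeddings=True)

# 2. Cluster the embeddings into themes (the bank landed on roughly 20).
n_themes = min(20, len(chats))  # capped here only because the toy corpus is tiny
theme_ids = KMeans(n_clusters=n_themes, random_state=42).fit_predict(embeddings)

# 3. Hand a sample of each theme's conversations to an LLM for a topic summary.
def summarize_theme(sample_chats):
    """Placeholder: in practice, send this prompt to your LLM of choice."""
    return "Summarize the common customer issue in these chats:\n" + "\n".join(sample_chats)

for theme in range(n_themes):
    sample = [c for c, t in zip(chats, theme_ids) if t == theme][:50]
    print(f"Theme {theme}:", summarize_theme(sample))
```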

I love this example for a few different reasons.

  • This is a bona fide AI solution. I think there’s a lot of AI-washing going on in the industry right now. Vendors put “AI” in front of everything, because that’s what customers want to buy. And employees in large organizations put “AI” on every program submission, because “AI” is the magic word that secures funding. But this solution really is leveraging the advancements that we’ve made in the last several years in neural networks and language models. We couldn’t have done this five years ago with bag-of-words and TF-IDF (term frequency–inverse document frequency), even if we had wanted to.
  • This is truly AI at scale. Teradata is already processing 50,000 chats per week for the customer, and our hope is that the customer will use generative AI to create transcripts of customer call recordings, and then use the same process to analyze those conversations, too. At that point we’ll be analyzing over 20 million customer conversations annually, at marginal cost, on a platform that the customer has already purchased. Better yet, you can easily imagine multiple adjacent use cases (complaint analytics, dispute resolution, compliance and mis-selling analytics, etc.) that we can apply this same design pattern and infrastructure to. Essentially, Teradata is building an Enterprise Vector Store that will support multiple use cases, applications, and agents; there’s a minimal sketch of that reuse pattern after this list.
  • This is a real solution. So much of what we see right now is what my colleague Simon Axon likes to call “AI as theater”: cool demonstrations of what the technology can do that are mostly divorced from real-world business needs. Instead, this customer started with a clear articulation of the business problem that they wanted to solve and worked backwards from there. And as a result, this solution has already led to significant process improvements, especially around the way disputed transactions are handled.
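To make the reuse point concrete, here is a second minimal sketch: one shared store of conversation embeddings, queried by several downstream use cases. The NumPy array and random vectors are stand-ins for the embeddings produced by the pipeline above; in production the vectors live in-database.

```python
# Minimal sketch of the reuse pattern: one shared store of conversation
# embeddings, queried by multiple downstream use cases. Random vectors stand
# in for the chat embeddings produced by the earlier sketch.
import numpy as np

rng = np.random.default_rng(0)
store = rng.normal(size=(50_000, 384))                 # one row per conversation
store /= np.linalg.norm(store, axis=1, keepdims=True)  # unit-normalize for cosine similarity

def top_k_similar(query_vec, embeddings, k=5):
    """Indices of the k stored conversations most similar to the query."""
    scores = embeddings @ query_vec  # dot product == cosine on unit vectors
    return np.argsort(scores)[::-1][:k]

# Complaint analytics, dispute resolution, and mis-selling review can all hit
# the same store; only the query vector (and the downstream workflow) changes.
query = rng.normal(size=384)
query /= np.linalg.norm(query)
print(top_k_similar(query, store))
```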

So, what can we learn from this one project about what “AI at scale” means for financial services?

First, that success means starting with the end in mind, not with a “cool” technology demonstration with no real link to business strategy and outcomes.

Second, that one of the superpowers of the current crop of AI technologies is enabling large organizations to make sense of unstructured interaction data, including images, audio, and text.

The last big takeaway is that scaling over the longer term means designing for re-use. My bet is that five years from now, every major bank is going to need to vectorize every customer interaction—and potentially with several different embedding models and chunking strategies optimized for different requirements. If they do that on a use-case-by-use-case basis, I predict that we’ll see an explosion in data volumes that will make the “big data” era look like child’s play.
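Here is a rough back-of-the-envelope calculation to show why. Every number below is an illustrative assumption, not the bank’s actual parameters, and it counts the vectors alone, before indexes, versioning, or the source text.

```python
# Back-of-the-envelope estimate of annual embedding volume for one large bank.
# Every parameter below is an illustrative assumption.
interactions_per_year = 20_000_000  # chats plus call transcripts
chunks_per_interaction = 10         # depends on document length and chunking approach
chunking_strategies = 2             # e.g., sentence-level and passage-level
embedding_models = 3                # different models tuned for different use cases
dimensions = 1_536                  # a typical embedding width
bytes_per_dimension = 4             # float32

total_bytes = (interactions_per_year * chunks_per_interaction * chunking_strategies
               * embedding_models * dimensions * bytes_per_dimension)
print(f"{total_bytes / 1e12:.1f} TB of vectors per year")  # ~7.4 TB, per bank, per year
```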

But more on that in my next article, “Scaling AI: Architectures for What’s Next”!


About Martin Willcox

Martin has over 27 years of experience in the IT industry and has twice been listed in dataIQ’s “Data 100” as one of the most influential people in data-driven business. Before joining Teradata, Martin held data leadership roles at a major UK retailer and a large conglomerate. Since joining Teradata, Martin has worked globally with over 250 organisations to help them realise increased business value from their data. He has helped organisations develop data and analytic strategies aligned with business objectives; designed and delivered complex technology benchmarks; pioneered the deployment of “big data” technologies; and led the development of Teradata’s AI/ML strategy. Originally a physicist, Martin has a postgraduate certificate in computing and continues to study statistics.
