AWS re:Invent 2025 - The Next Frontier in Financial Systems: Transformer-based Foundation Models
Transforming Financial Systems with Transformer-based Foundation Models
Why Transform Payments?
The financial services industry has traditionally relied on structured, tabular data and algorithms to extract insights and make predictions.
However, this approach has limitations in capturing the dynamic, contextual nature of customer behavior and transactions.
Transformer-based models offer a new frontier in financial AI, as they can better capture temporal patterns, entity connections, and personalized insights from vast, diverse data sets.
Transformer-based foundation models can be fine-tuned on financial data to "learn the language" of the domain, unlocking more accurate predictions, fraud detection, personalization, and pattern mining.
Transformer-based Foundation Models for Financial Data
Transformer models are well-suited for processing sequential, time-series data like financial transactions, as they can capture both short-term and long-term patterns and context.
The key is to represent the tabular financial data in a way the transformer can understand, through custom tokenization and feature engineering.
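One common way to make tabular transactions legible to a transformer is to quantize each field into discrete tokens and concatenate them into a sequence. A minimal sketch of that idea follows; the field names, bucket boundaries, and token vocabulary are illustrative assumptions, not a scheme described in the talk:

```python
# Illustrative sketch: turn one card transaction into a token sequence a
# transformer can consume. Field names and bucket edges are made up for
# demonstration; a production tokenizer would be derived from the data.
import bisect

AMOUNT_EDGES = [1, 5, 10, 25, 50, 100, 250, 500, 1000]  # USD bucket boundaries

def tokenize_transaction(txn: dict) -> list[str]:
    """Map each tabular field to a discrete token, preserving field identity."""
    amount_bucket = bisect.bisect_right(AMOUNT_EDGES, txn["amount"])
    return [
        f"AMT_{amount_bucket}",           # quantized amount
        f"MCC_{txn['mcc']}",              # merchant category code as-is
        f"HOUR_{txn['timestamp_hour']}",  # hour-of-day captures temporal rhythm
        f"CHAN_{txn['channel']}",         # card-present vs. e-commerce, etc.
    ]

txn = {"amount": 42.50, "mcc": 5411, "timestamp_hour": 18, "channel": "ecom"}
print(tokenize_transaction(txn))
```

A customer's history then becomes one long sequence of such tokens, which the transformer reads much like sentences in a language.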
Two main approaches:
Pre-train a transformer model from scratch on the financial data
Fine-tune an existing open-source language model (e.g., BERT or a GPT-style model) on the financial data
Embeddings generated by the transformer models can then be used as rich, contextual features to enhance traditional machine learning models like XGBoost for improved performance.
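A common pattern for this step is to pool the transformer's per-token embeddings into one fixed-size vector and concatenate it with the raw tabular columns before training the tree model. The sketch below assumes that pattern; `toy_embeddings` stands in for the output of a trained transformer, and the feature layout is illustrative:

```python
# Sketch: use transformer embeddings as extra features for a tree model such
# as XGBoost. The embeddings here are a toy stand-in for what a trained
# transformer would emit, one vector per token in the transaction sequence.

def mean_pool(token_vectors: list[list[float]]) -> list[float]:
    """Collapse per-token embeddings into one fixed-size sequence embedding."""
    n = len(token_vectors)
    dim = len(token_vectors[0])
    return [sum(vec[i] for vec in token_vectors) / n for i in range(dim)]

def build_features(tabular: list[float], token_vectors: list[list[float]]) -> list[float]:
    """Concatenate raw tabular features with the pooled embedding; the
    combined row is what XGBoost (or any tabular learner) would train on."""
    return tabular + mean_pool(token_vectors)

# Toy transformer output for a 3-token sequence, embedding dimension 2:
toy_embeddings = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
row = build_features([42.5, 1.0], toy_embeddings)
print(row)  # tabular columns followed by the pooled embedding
```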
Building Transformer-based Solutions on AWS and NVIDIA
Data ingestion and preprocessing:
Leverage AWS services such as Amazon Kinesis, Amazon Athena, and AWS Lambda to ingest and structure raw financial data
Use NVIDIA RAPIDS for efficient GPU-accelerated data wrangling and feature engineering
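RAPIDS cuDF largely mirrors the pandas API, so typical feature-engineering code carries over with little change; the sketch below uses pandas so it runs without a GPU, and swapping `import pandas as pd` for `import cudf as pd` is often the main edit needed. Column names are illustrative, not from the talk:

```python
# Typical per-entity aggregates used as fraud-model features. With RAPIDS
# installed, the same groupby/agg code runs GPU-accelerated via cuDF.
import pandas as pd

txns = pd.DataFrame({
    "card_id": ["a", "a", "b", "a", "b"],
    "amount":  [10.0, 25.0, 5.0, 100.0, 7.5],
})

# Per-card transaction count, mean, and max amount.
feats = txns.groupby("card_id")["amount"].agg(["count", "mean", "max"])
print(feats)
```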
Model training and fine-tuning:
Utilize Amazon SageMaker with the NVIDIA NeMo framework for transformer model training and fine-tuning
Apply GPU-accelerated graph neural network (GNN) libraries such as PyTorch Geometric for modeling entity relationships
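To make the GNN idea concrete, here is a conceptual single round of message passing over a bipartite card-merchant transaction graph. This illustrates the aggregation step that libraries like PyTorch Geometric implement efficiently; it is not their API, and the graph and features are toy assumptions:

```python
# One round of GNN-style neighbor aggregation: each card "inherits" signal
# from the merchants it transacts with, so cards sharing merchants end up
# with similar vectors. Toy graph and features for illustration only.

# Edges: (card, merchant) pairs, one per observed transaction.
edges = [("card1", "m1"), ("card1", "m2"), ("card2", "m2")]
merchant_features = {"m1": [1.0], "m2": [3.0]}  # e.g., merchant risk scores

def aggregate_neighbors(card: str) -> list[float]:
    """Mean of neighboring merchants' feature vectors."""
    neigh = [merchant_features[m] for c, m in edges if c == card]
    dim = len(neigh[0])
    return [sum(v[i] for v in neigh) / len(neigh) for i in range(dim)]

print(aggregate_neighbors("card1"))  # averages m1 and m2
print(aggregate_neighbors("card2"))  # sees only m2
```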
Deployment and inference:
Deploy the transformer-based models using Amazon SageMaker for scalable, real-time inference
Leverage NVIDIA Triton Inference Server for high-throughput, low-latency inference on massive transaction volumes
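For context, Triton serves models described by a `config.pbtxt` per model. The fragment below is an illustrative configuration for serving a transaction-embedding model; the model name, backend, and tensor shapes are placeholders, not values from the talk:

```
name: "txn_embedder"
backend: "onnxruntime"   # could also be pytorch, tensorrt, ...
max_batch_size: 64
input [
  { name: "token_ids", data_type: TYPE_INT64, dims: [ 128 ] }
]
output [
  { name: "embedding", data_type: TYPE_FP32, dims: [ 256 ] }
]
# Batch concurrent requests together for GPU throughput:
dynamic_batching { max_queue_delay_microseconds: 100 }
```

Dynamic batching is what makes the high-throughput claim practical at transaction-network volumes: requests arriving within the queue delay are fused into one GPU batch.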
Key Takeaways and Business Impact
Transformer-based foundation models are transforming the financial services industry, enabling more accurate fraud detection, personalization, and pattern mining.
Leading financial institutions have seen significant improvements in model performance, with one fintech improving fraud detection accuracy from 58% to 97%.
The combination of AWS services and NVIDIA's accelerated computing capabilities provides a powerful platform to build, train, and deploy these advanced financial AI solutions.
The goal is to enable enterprises to harness the power of transformer-based models for their own financial data and use cases, rather than having to build these complex models from scratch.
Early adopters are seeing 10-12% improvements in key business metrics like customer engagement and dispute prediction accuracy.
Real-World Examples
A large LATAM neobank is building a "one-size-fits-all" transformer-based foundation model to power multiple use cases like fraud, personalization, and dispute prediction.
A major payment network published research on "transaction embeddings" using transformer models to capture rich, contextual representations of financial transactions.
A large bank presented at NVIDIA GTC on using transformer-based recommender systems to improve customer engagement.