Google Cloud and Swift pioneer advanced AI and federated learning tech to help combat payments fraud

Conventional fraud detection methods struggle to keep pace with increasingly sophisticated criminal tactics. Existing systems often rely on the limited data of individual institutions, which hinders the detection of intricate schemes that span multiple banks and jurisdictions.

To better combat fraud in cross-border payments, Swift, the global provider of secure financial messaging services, is working with Google Cloud to develop anti-fraud technologies that use advanced AI and federated learning.  

In the first half of 2025, Swift plans to roll out a sandbox with synthetic data to prototype learning from historic fraud, working with 12 global financial institutions, with Google Cloud as a strategic partner. This initiative builds on Swift’s existing Payment Controls Service (PCS), and follows a successful pilot with financial institutions across Europe, North America, Asia and the Middle East.


The partnership: Google Cloud and Swift

Google Cloud is collaborating with Swift — along with technology partners including Rhino Health and Capgemini — to develop a secure, privacy-preserving solution for financial institutions to combat fraud. This innovative approach uses federated learning techniques, combined with privacy-enhancing technologies (PETs), to enable collaborative intelligence without compromising proprietary data.

Rhino Health will develop and deliver the core federated learning platform, and Capgemini will manage the implementation and integration of the solution.

“Swift is in a unique position in the financial industry – a trusted and cooperative network that is integral to the functioning of the global economy. As such, we are ideally placed to lead collaborative, industry-wide efforts to fight fraud. This exploration will help the community validate whether federated learning technology can help financial institutions stay one step ahead of bad actors through sharing of fraud labels, and in turn enabling them to provide an enhanced cross-border payments experience to their customers,” said Rachel Levi, head of artificial intelligence, Swift. 

“At Google Cloud, we are committed to empowering financial institutions with cutting-edge technology to combat the evolving threat of fraud. Our collaboration with Swift exemplifies the transformative potential of federated learning and confidential computing. By enabling secure collaboration and knowledge sharing without compromising data privacy, we are fostering a safer and more resilient financial ecosystem for everyone,” said Andrea Gallego, Managing Director, global GTM incubation, Google Cloud.

The challenge: Traditional fraud detection is falling behind

The lack of visibility across the payment lifecycle creates vulnerabilities that can be exploited by criminals. A collaborative approach to fraud modeling offers significant advantages over traditional methods in combating financial crimes. To be effective, this approach requires data sharing across institutions, which is often restricted because of privacy concerns, regulatory requirements, and intellectual property considerations.

The solution: Federated learning

Federated learning offers a powerful solution for collaborative AI model training without compromising privacy or confidentiality. Instead of requiring financial institutions to pool their sensitive data, model training takes place within each institution, on data that remains decentralized.

Here’s how it works for Swift: 

  1. A copy of Swift’s anomaly detection model is sent to each participating bank. 

  2. Each financial institution trains this model locally on its own data.

  3. Only the learnings from this training — not the data itself — are transmitted back to a central server for aggregation, managed by Swift. 

  4. The central server aggregates these learnings to enhance Swift’s global model. 

This approach significantly minimizes data movement and ensures that sensitive information remains within each financial institution’s secure environment. 
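
To make steps 2 through 4 concrete, the aggregation can be done with federated averaging, where each institution's locally trained weights are combined into a weighted mean. The following is a minimal sketch, not Swift's actual implementation; the NumPy weight representation, the sample-count weighting, and the stand-in for local training are assumptions for illustration only.

```python
import numpy as np

def federated_average(updates):
    """Combine per-institution model updates into new global weights.

    `updates` is a list of (weights, num_samples) pairs, where `weights`
    is a list of NumPy arrays (one per model layer). Only these updates,
    never the underlying transaction data, reach the aggregator.
    """
    total = sum(n for _, n in updates)
    num_layers = len(updates[0][0])
    return [
        sum(w[layer] * (n / total) for w, n in updates)
        for layer in range(num_layers)
    ]

# Hypothetical round: three banks fine-tune the shared model locally and
# report only their updated weights plus a sample count.
rng = np.random.default_rng(0)
global_model = [rng.normal(size=(4, 4)), rng.normal(size=4)]

updates = []
for num_samples in (120_000, 45_000, 300_000):   # per-bank training set sizes
    local = [layer + rng.normal(scale=0.01, size=layer.shape)  # stand-in for local training
             for layer in global_model]
    updates.append((local, num_samples))

global_model = federated_average(updates)   # raw transaction data never moved
```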

Core benefits of the federated learning solution

By using federated learning solutions, financial institutions can achieve substantial benefits, including:  

  • Shared intelligence: Financial institutions work together by sharing information on fraudulent activities, patterns, and trends, which creates a much larger and richer decentralized data pool than any single institution could gather alone.

  • Enhanced detection: The collaborative global model can identify complex fraud schemes that might go unnoticed by individual institutions, leading to improved detection and prevention.

  • Reduced false positives: Sharing information helps refine fraud models, leading to more accurate identification of genuine threats and fewer false alarms that disrupt both legitimate activity and the customer experience.

  • Faster adaptation: The collaborative approach allows for faster adaptation to new fraud trends and criminal tactics. As new threats emerge, the shared knowledge pool helps all participants quickly adjust their models and their fraud prevention tools.

  • Network effects: The more institutions participate, the more comprehensive the data pool becomes, creating a powerful network effect that strengthens fraud prevention for everyone involved.

For widespread adoption, federated learning must seamlessly integrate with existing financial systems and infrastructure. This allows financial institutions to easily participate and benefit from the collective intelligence without disrupting their operations.

Architecting the global fraud AI solution

The initial scope is a sandbox built on synthetic data, centered on prototyping learning from historic payments fraud. The platform allows multiple financial institutions to train a robust fraud detection model while preserving the confidentiality of their sensitive transaction data. It uses federated learning and confidential computing techniques, such as Trusted Execution Environments (TEEs), to enable secure, multi-party machine learning without moving any training data.

There are several key components to this solution:

  • Federated server in a TEE: A secure, isolated execution environment where a federated learning (FL) server orchestrates the collaboration of multiple clients, first sending an initial model to the FL clients. The clients train on their local datasets, then send model updates back to the FL server for aggregation into a global model.

  • Federated client: Executes tasks, performing local computation and training on its local dataset (such as data from an individual financial institution), then submits results back to the FL server for secure aggregation.

  • Bank-specific encrypted data: Each bank holds its own private, encrypted transaction data that includes historic fraud labels. This data remains encrypted throughout the entire process, including computation, ensuring end-to-end data privacy.

  • Global fraud-based model: A pre-trained anomaly detection model from Swift that serves as the starting point for federated learning.

  • Secure aggregation: A secure aggregation protocol computes the weighted averages so that the server learns only the aggregated contributions derived from participating institutions' historic fraud labels, not which institution contributed what, preserving the privacy of each participant in the federated learning process (see the sketch after this list).

  • Global anomaly detection trained model and aggregated weights: The improved anomaly detection model, along with its learned weights, is securely exchanged back to the participating financial institutions. They can then deploy this enhanced model locally for fraud detection monitoring on their own transactions.
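
One common way to realize the secure aggregation component is pairwise additive masking: each pair of participants derives a shared random mask that one adds and the other subtracts, so the masks cancel only in the aggregate and any single masked update looks like noise to the server. The sketch below is a toy illustration under assumed simplifications (fixed participants, integer seeds standing in for cryptographic key agreement, no dropout handling), not the protocol used in the sandbox.

```python
import numpy as np

def masked_update(update, my_id, peer_ids, pairwise_seeds):
    """Mask one institution's model update so the server sees only noise,
    yet the masks cancel when every participant's contribution is summed."""
    masked = update.astype(np.float64).copy()
    for peer in peer_ids:
        if peer == my_id:
            continue
        # Both parties in a pair derive the same mask from a shared seed;
        # the lower-id party adds it and the higher-id party subtracts it.
        seed = pairwise_seeds[tuple(sorted((my_id, peer)))]
        mask = np.random.default_rng(seed).normal(size=update.shape)
        masked += mask if my_id < peer else -mask
    return masked

# Toy round with three institutions contributing model-update vectors.
ids = [0, 1, 2]
seeds = {(0, 1): 11, (0, 2): 22, (1, 2): 33}   # would come from key agreement in practice
updates = {0: np.array([1.0, 2.0]), 1: np.array([0.5, -1.0]), 2: np.array([2.0, 0.0])}

server_view = [masked_update(updates[i], i, ids, seeds) for i in ids]
aggregate = sum(server_view)                    # pairwise masks cancel in the sum
assert np.allclose(aggregate, sum(updates.values()))
```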

We’re seeing more enterprises adopt federated learning to combat global fraud, including global consulting firm Capgemini. 

“Payment fraud stands as one of the greatest threats that undermines the integrity and stability of the financial ecosystem, with its impact acutely felt upon some of the most vulnerable segments of our society,” said Sudhir Pai, chief technology and innovation officer, Financial Services, Capgemini. 

“This is a global epidemic that demands a collaborative effort to achieve meaningful change. Our application of federated learning is grounded with privacy-by-design principles, leveraging AI to pioneer secure aggregation and anonymization of data which is of primary concern to large financial institutions. The potential to apply our learnings within a singular global trained model across other industries will ensure we break down any siloes and combat fraud at scale,” he said. 

“We are proud to support Swift’s program in partnership with Google Cloud and Capgemini,” said Chris Laws, chief operating officer, Rhino. “Fighting financial crime is an excellent example of the value created from the complex multi-party data collaborations enabled by federated computing, as all parties can have confidence in the security and confidentiality of their data.”

Building a safer financial ecosystem, together

This effort to fight fraud collaboratively will help build a safer and more secure financial ecosystem. By harnessing the power of federated learning and adhering to strong principles of data privacy, security, platform interoperability, confidentiality, and scalability, this solution has the potential to redefine how we combat fraud in an age of fragmented, globalized finance. It also demonstrates a commitment to building a more resilient and trustworthy financial world.
