Apr 14, 2025

Artificial Intelligence is no longer a futuristic idea: it is already streamlining task workflows and helping financial institutions operate more efficiently than ever before. But alongside all the benefits, one question is front and center: how do we protect sensitive data in this new AI-powered world?
In highly regulated environments such as finance, this is both a technical and a strategic challenge. Responsible data protection is essential to earning trust, staying compliant, and maintaining long-term competitiveness.
AI in Finance: Beyond IT
AI isn’t simply a tool for your tech team anymore: it affects how entire companies operate, how they manage risk, and how they engage with clients. That’s why data protection in the context of AI belongs on the agenda of the entire leadership team, including the Board of Directors.
This is why it matters more than ever:
Stricter Regulations: AI systems must comply with complex legal frameworks like the EU’s GDPR, Switzerland’s updated Data Protection Act (DPA), and the EU AI Act. Compliance has become foundational.
Operational and Reputational Risks: With AI come new risks such as data leaks, cyberattacks, and misuse of personal data, which can cause significant financial and reputational harm.
Customer Expectations: Data privacy has become a trust signal. Institutions that take it seriously will win client loyalty. Those that don’t may never recover from a breach.
FINMA’s Position on AI and Data Handling
Swiss financial institutions are bound not only by general data protection laws but also by the requirements of the Swiss Financial Market Supervisory Authority (FINMA). These have direct implications for how AI can and should be deployed.
Key FINMA Circulars include:
FINMA Circular 18/3 “Outsourcing – Banks and Insurers”
When AI tools are provided by external vendors or cloud services, this circular defines how data security, confidentiality, and risk management must be ensured, especially when customer data leaves internal infrastructure.
FINMA Circular 08/21 “Operational Risks – Banks”
This circular mandates the identification, assessment, and mitigation of risks related to IT and data processing. Any AI solution must be evaluated through this lens, especially in terms of system integrity and availability.
FINMA Circular 11/2 “Internal Control Systems – Banks”
Ensures that internal governance structures and risk controls are in place to manage technologies like AI. This includes clear responsibilities, audit trails, and oversight.
In short, FINMA expects institutions to handle AI systems with the same care and scrutiny as any other critical infrastructure.
Responsible AI Starts with Strong Data Protection
So what does responsible AI look like in practice? It starts with airtight data handling, built on six principles every financial institution should follow:
1. Data Minimization and Purpose Limitation
AI models don’t need access to all your data, just the right data. Limit collection and use to what’s necessary for each task. Techniques like anonymization, pseudonymization, and secure embeddings help protect privacy while preserving value.
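To make this concrete, here is a minimal Python sketch of the two ideas in this principle: keeping only the fields a model actually needs, and pseudonymizing a direct identifier with a keyed hash (HMAC-SHA256). The field names and the record are illustrative, not a real schema; in production the key would live in a secrets manager.

```python
import hmac
import hashlib

def pseudonymize(value: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    The mapping is deterministic, so records can still be joined,
    but the original value cannot be recovered without the key.
    """
    return hmac.new(secret_key, value.encode("utf-8"), hashlib.sha256).hexdigest()

# Illustrative record: keep only the fields the task needs (data
# minimization) and pseudonymize the client ID before it reaches
# the AI pipeline. The email address is dropped entirely.
record = {"client_id": "CH-123456", "balance": 50_000, "email": "a@b.ch"}
key = b"store-this-key-in-a-secrets-manager"  # placeholder, not a real key

minimized = {
    "client_id": pseudonymize(record["client_id"], key),
    "balance": record["balance"],
}
```

Because the hash is keyed, rotating or destroying the key later effectively anonymizes the historical records.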
2. Privacy by Design and by Default
Build systems where privacy isn’t an add-on but is built in from the start. That means default settings that protect users, strong access controls, and clear audit trails.
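As a sketch of what "privacy by default" and "audit trails" can look like in code, the snippet below defines settings whose defaults already protect the user (sharing and training use require explicit opt-in) and records every data access in an audit log. All names, values, and the in-memory log are illustrative assumptions; a real system would use a persistent, access-controlled log store.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class PrivacySettings:
    # Protective defaults: users must explicitly opt in, never opt out.
    share_with_third_parties: bool = False
    use_for_model_training: bool = False
    retention_days: int = 90  # illustrative value, not a legal recommendation

audit_log: list[dict] = []

def record_access(user: str, resource: str, purpose: str) -> None:
    """Append an audit entry for every data access, including its purpose."""
    audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "resource": resource,
        "purpose": purpose,
    })

settings = PrivacySettings()  # the defaults already protect the user
record_access("analyst_7", "client_portfolio", "risk review")
```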
3. Secure Infrastructure and Encryption
Encryption for data at rest and in transit is non-negotiable. Combine that with robust identity management and zero-trust principles to control who can access what data.
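For the "in transit" half of this principle, here is a minimal Python sketch using the standard library's `ssl` module: a client-side TLS context that refuses legacy protocol versions and requires certificate verification. The settings shown mostly restate secure defaults explicitly; the minimum-version choice is an illustrative policy, not a regulatory requirement.

```python
import ssl

# Encryption in transit: a client-side TLS context with certificate
# verification enforced and legacy protocol versions refused.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # reject TLS 1.0/1.1
context.check_hostname = True                     # default, made explicit
context.verify_mode = ssl.CERT_REQUIRED           # default, made explicit
```

Any socket wrapped with this context will negotiate TLS 1.2 or newer and fail the connection if the server's certificate cannot be validated.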
4. Clear Consent and Transparency
Clients and employees need to understand how their data is used. That includes clear disclosures, options to opt out, and full transparency.
5. Cross-Border Data Transfers with Compliance in Mind
If you’re working with international cloud providers, ensure your setup is compliant with Swiss law. Either:
Work with providers certified under the Data Privacy Framework, or
Conduct direct due diligence and put legal safeguards (e.g. Standard Contractual Clauses) in place.
The responsibility for compliance always lies with you, not the vendor.
6. Continuous Monitoring and Auditing
AI systems must be tested regularly for vulnerabilities and compliance. As models evolve, your governance must keep up: build review mechanisms into your operations as an ongoing process, not a one-off check.
How Olymp Supports AI Security and Compliance
At Olymp, we support financial institutions in using AI safely, securely, and efficiently. Our modular systems are built with security and compliance at their core:
We only deploy localized models. Your data stays within your infrastructure.
All of our hosting and cloud deployments are in Switzerland.
We support full FINMA-compliant integration, with clear documentation and control mechanisms.
Our solutions follow the principles of privacy by design, and we help our partners implement the right governance and risk frameworks.
Final Thoughts
AI is transforming the financial industry, but only companies that take data protection seriously will unlock its full potential. With strong governance, secure architecture, and clear accountability, AI can become a driver of trust and not a threat to it.
At Olymp, we believe that secure, compliant AI is the only AI worth building.