Private AI
-
In which jurisdiction is our AI data stored and processed?
All AI data and models are stored and processed exclusively in Switzerland.
-
Which open formats, APIs, or interfaces do you support to minimise vendor lock-in?
Large Language Models are accessed via the industry-standard OpenAI API. Interfaces to enterprise data are based on the Model Context Protocol (MCP) standard. Together, these ensure a very high degree of interoperability between systems, making it straightforward to switch models or providers.
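As an illustration of what OpenAI-API compatibility means in practice, the sketch below builds a standard `/chat/completions` request using only the Python standard library. The base URL and model name are hypothetical placeholders, not actual Safe Swiss Cloud values; switching providers amounts to changing the base URL and API key.

```python
import json

# Hypothetical endpoint for illustration only; the real URL and model
# names depend on your Safe Swiss Cloud configuration.
BASE_URL = "https://private-ai.example/v1"

def build_chat_request(model: str, prompt: str) -> tuple[str, bytes]:
    """Build an OpenAI-compatible /chat/completions request.

    Because the request shape is the industry-standard OpenAI format,
    moving to another provider only means pointing BASE_URL (and the
    API key) elsewhere -- the payload stays identical.
    """
    url = f"{BASE_URL}/chat/completions"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return url, body

url, body = build_chat_request("llama-3.1-70b", "Summarise this contract.")
```

The same function works unchanged against any OpenAI-compatible backend, which is the interoperability property described above.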
-
How easy is it to move our models, data, and prompts from Safe Swiss Cloud to another provider in the future?
We use open-source models accessible via the industry-standard OpenAI API. This allows customers to switch models and providers as needed, without being locked into proprietary formats or interfaces.
-
What services does Safe Swiss Cloud offer for integrating enterprise data into AI?
We offer services to develop MCP (Model Context Protocol) servers for connecting AI systems to enterprise data sources and workflows.
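For context, MCP is a JSON-RPC 2.0 based protocol in which a client invokes a server-side tool via a `tools/call` message. The minimal sketch below encodes such a call; the tool name `query_crm` and its argument are hypothetical examples, not actual Safe Swiss Cloud tools.

```python
import json

def mcp_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Encode an MCP tools/call request as a JSON-RPC 2.0 message.

    An MCP server exposing enterprise data (CRM, ERP, document stores)
    receives messages of this shape and returns the tool's result.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Example: ask a hypothetical CRM-backed MCP server for a customer record.
msg = mcp_tool_call(1, "query_crm", {"customer_id": "C-1042"})
```

Because the framing is standardised, any MCP-aware AI client can call such a server without provider-specific integration code.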
-
What support options are available?
Safe Swiss Cloud offers paid support packages with 24/7 coverage. For details, please visit our Support Services page.
-
Which performance guarantees does Private AI provide for GPU availability, I/O, and storage throughput?
Like all AI service providers — including the major international public cloud providers — Safe Swiss Cloud does not guarantee specific performance levels for AI workloads. However, a large pool of high-performance GPUs and dynamic capacity management have been implemented to ensure strong performance under normal operating conditions.
-
What service levels and SLAs do you offer for GPU-intensive AI workloads?
Safe Swiss Cloud offers an SLA with 99.9% uptime. For paid support plans, the guaranteed response time is a maximum of one hour.
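For perspective, a 99.9% uptime guarantee bounds the permitted downtime as follows (a quick back-of-the-envelope calculation):

```python
# Maximum downtime permitted under a 99.9% uptime SLA.
UPTIME = 0.999

minutes_per_30_day_month = 30 * 24 * 60   # 43,200 minutes
minutes_per_year = 365 * 24 * 60          # 525,600 minutes

max_down_month = minutes_per_30_day_month * (1 - UPTIME)
max_down_year_hours = minutes_per_year * (1 - UPTIME) / 60

print(f"per 30-day month: {max_down_month:.1f} minutes")  # ~43.2 minutes
print(f"per year: {max_down_year_hours:.2f} hours")       # ~8.76 hours
```

In other words, 99.9% uptime allows at most about 43 minutes of downtime in a 30-day month.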
-
How do you protect customers against non-Swiss or extraterritorial government access to data?
Under Swiss law, Safe Swiss Cloud may hand over customer data to foreign governments only when served with a warrant issued by a Swiss court under Swiss law. This provides a robust legal barrier against extraterritorial data-access requests.
-
How does Safe Swiss Cloud’s Private AI comply with the revised Swiss Data Protection Act and other data-protection laws?
Safe Swiss Cloud’s Private AI complies fully with the revised Swiss Data Protection Act (CH DSG) and the EU GDPR.
-
What does “Private AI” mean at Safe Swiss Cloud?
Private AI at Safe Swiss Cloud means three things:
- Customer data — prompts, replies, and AI output — is never used to train models.
- Customer data is handled in accordance with the Swiss Data Protection Act (DSG) and the EU’s GDPR, ensuring full privacy compliance.
- The infrastructure is sovereign: it is operated in Switzerland under Swiss jurisdiction, so it is not subject to arbitrary service interruptions for non-technical reasons.
-
Which certifications, audits, or attestations does the Private AI platform have for regulated use cases?
Safe Swiss Cloud is ISO 27001, ISO 27017, and ISO 27018 certified and audited annually. This ensures that an Information Security Management System (ISMS) is in place, that the necessary standards for protecting Personally Identifiable Information (PII) are met, and that additional security and privacy measures for cloud environments are implemented.
Safe Swiss Cloud also complies with a range of industry-specific standards, including FINMA and BaFin (finance), HIPAA and FMH (healthcare), EU GDPR and Swiss DSG (data protection), as well as C5 and NIS2.
-
Is the Private AI environment suitable for processing regulated data, such as health or financial data?
Yes. The technical privacy features, together with the regulatory frameworks of the Swiss Data Protection Act and the EU GDPR, ensure that the Private AI environment is suitable for regulated data, including healthcare and financial data.
-
How are backups and snapshots handled, and are they encrypted?
Backups are always encrypted, and snapshots of an encrypted volume are likewise encrypted. Customers can additionally enable encryption of data at rest, which ensures that all snapshots remain encrypted and cannot be read without the corresponding keys.
-
Are AI prompts and responses stored in logs and therefore visible to Safe Swiss Cloud staff?
No. Prompts and responses are not stored in logs and therefore cannot be viewed or traced by Safe Swiss Cloud staff.
-
Do you offer dedicated GPU and storage clusters for a single customer?
Yes. Safe Swiss Cloud offers dedicated GPU and storage clusters for individual customers.
-
Can we run our AI environment on dedicated, “no other tenants” hardware if required?
Yes. This option is more expensive because it involves dedicated GPUs for a single customer. It is a viable solution for customers with sufficient workload volume and strict compliance requirements for a dedicated AI infrastructure.
-
Are AI workloads for one customer fully isolated from other customers?
Yes. Every customer receives their own dedicated front end and RAG (Retrieval-Augmented Generation) system. Prompts are fully isolated from one another and have no cross-customer side effects.
-
How is Safe Swiss Cloud’s Private AI different from public cloud AI services or consumer chatbots?
Safe Swiss Cloud uses open-source models and does not train — or contribute to training — these models. As a result, customer data (prompts, AI output, logs, etc.) is never used for any purpose other than what the customer explicitly requests. This prevents confidential information from inadvertently entering the public domain.
Many publicly available AI services silently use customer data for training. For example, an employee who uploads a file containing confidential information to a public AI service may see that data incorporated into a future large language model (LLM) iteration — potentially enabling a competitor to retrieve proprietary information in a future query.