PAI Agentic

Autonomous Workflows: From Prompt to Action

Product Overview


Agentic AI with PAI brings intelligent decision-making directly into your automated workflows. Systems call the LLM via API and let it determine the next action for each input, combining deterministic workflow execution with reasoning at every step. Your workflow engine handles tasks reliably, while the LLM adds the intelligence needed for complex, multi-step processes such as interpreting requests, selecting the right system actions and adapting to changing context – all inside your sovereign Private AI environment.

This turns business logic into secure, self-running workflows that connect internal systems like ERP, CRM, email and databases. Typical applications include procurement and RFP automation, document lifecycle management and customer service routing. Organisations benefit from consistent execution across teams, fewer manual steps, shorter response times and full transparency through activity logs and audit trails. It is a fully supported path to smart, compliant automation with Agentic AI.
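The decision step described above can be sketched as a small function: the workflow engine builds a prompt that lists the actions it allows, then treats the LLM's reply as a constrained choice, falling back to human escalation whenever the reply is unusable. The action names and the JSON reply schema below are illustrative assumptions, not part of the PAI API:

```python
import json

# Hypothetical set of actions a procurement workflow might permit.
ALLOWED_ACTIONS = {"create_ticket", "request_approval", "escalate"}

def build_prompt(event: dict) -> str:
    """Ask the LLM to choose exactly one allowed action for this event."""
    return (
        "You are a workflow decision step. Choose the next action for the event.\n"
        f"Allowed actions: {sorted(ALLOWED_ACTIONS)}\n"
        f"Event: {json.dumps(event)}\n"
        'Reply with JSON: {"action": "<one allowed action>", "reason": "<short>"}'
    )

def parse_decision(reply: str) -> str:
    """Validate the LLM reply; anything unexpected escalates to a human."""
    try:
        action = json.loads(reply)["action"]
    except (json.JSONDecodeError, KeyError, TypeError):
        return "escalate"
    return action if action in ALLOWED_ACTIONS else "escalate"
```

Keeping the allowed-action list and the fallback in deterministic code is what makes the workflow auditable: the LLM reasons, but only the engine acts.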

Features


Safe Swiss Cloud works with your teams to design and implement Agentic AI workflows that map business processes into reliable, multi-step automations. This includes modelling the workflow, defining where the LLM makes decisions, integrating approval logic and maintaining these flows over time as requirements, systems and policies evolve.

Agentic AI solutions are hosted and operated in Safe Swiss Cloud’s sovereign environment, including workflow engines and supporting components. Safe Swiss Cloud takes care of deployment, monitoring, scaling and updates, ensuring that Agentic workflows run reliably in production and remain aligned with your performance and availability needs.

To connect Agentic AI with existing landscapes, Safe Swiss Cloud provides integration and architecture guidance for ERP, CRM, ticketing, email, databases and other systems. The focus is on secure data flows, clear responsibilities between LLMs and workflow engines, and a target architecture that supports long-term governance, resilience and extensibility.

For organisations that build their own Agentic workflows, Safe Swiss Cloud offers expert support around prompt design, decision strategies, error handling and testing. This helps internal teams adopt best practices, avoid common pitfalls and accelerate the journey from proof-of-concept to robust, production-ready Agentic AI automations.

Ready for PAI Agentic?

In our free briefing, we show you how to use Agentic AI productively and securely in your company – without data risks or compliance concerns.

Service Description


PAI Agentic enables organisations to automate internal business processes using Private AI. Workflow programs ingest incoming events or data, retrieve the relevant enterprise context through connectors or MCP servers, and pass it to an LLM for analysis and decision-making — keeping data, models and orchestration logic firmly within Safe Swiss Cloud’s Swiss infrastructure.

The following service details describe the scope and commercial model:

  • Workflow automation: Custom-built agentic workflows automate decision-driven processes — receiving input data, gathering enterprise context via connectors or MCP, and delegating analysis and decisions to a Private AI LLM.
  • Built on Private AI infrastructure: All workflows run on Safe Swiss Cloud’s Private AI platform, so prompts, enterprise data and model responses remain within Swiss data centres throughout the process.
  • Agentic AI Managed Service: The managed service covers ongoing operations, monitoring and maintenance of each deployed workflow.
  • Pricing: Workflow development is billed on a time-and-materials basis. Ongoing operation of each workflow is covered by a monthly subscription fee.

LLM Models at Safe Swiss Cloud: Core Characteristics


Choose from a rich catalogue of sovereign LLMs – all with the same strict privacy and compliance guarantees. Safe Swiss Cloud’s Private AI (PAI) services combine a broad selection of open-source LLMs with a consistent security, privacy and compliance foundation. You keep full control over data, infrastructure and model choice, while we provide the sovereign hosting and operational excellence.

Which LLM Models are supported by PAI?

SSC provides access to the following open-source large language models. They are provided as is, without warranties:

Model | Type | Details
apertus-8b | Chat | Optimized for multilingual dialogue use cases.
apertus-70b | Chat | Optimized for multilingual dialogue use cases.
bge-m3 | Embedding | Optimized for embeddings and sparse retrieval, with support for multi-functionality, multilinguality and multi-granularity.
bge-reranker | Reranker | Optimized for reranking to obtain relevance scores.
deepseekr1-70b | Chat | Optimized for reasoning chat completions.
gemma-12b-it | Multimodal | Optimized for text and image input, generating text output.
gpt-oss-120b | Chat | Optimized for powerful reasoning, agentic tasks and versatile developer use cases.
granite-33-8b | Chat | Optimized for reasoning and instruction following.
granite-emb-278m | Embedding | Optimized for embeddings.
granite-vision-2b | Multimodal | Compact and efficient vision-language model.
llama33-70b | Chat only | Optimized for multilingual dialogue use cases.
llama4-maverick | Chat and multimodal | Optimized for text and multimodal experiences.
llama4-scout-17b | Chat and multimodal | Optimized for text and multimodal experiences.
mistral-v03-7b | Chat only | Optimized for multilingual dialogue use cases.
qwen3-8b | Reasoning | Optimized for thinking and reasoning.
qwq-32b | Reasoning | Optimized for thinking and reasoning.
qwen3-vl-235b | Multimodal | Optimized for text and multimodal experiences.
qwen25-vl-72b | Multimodal | Compact and efficient vision-language model.
Whisperx | Speech to text | For converting speech to text.

Other commercial or proprietary LLMs can also be integrated depending on licensing and infrastructure requirements.
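Because every catalogue model sits behind the same OpenAI-compatible endpoint, switching models is a one-line change. A minimal sketch of per-task model selection follows; the task-type keys and helper functions are illustrative, not part of the PAI API:

```python
# Task-type → model ID mapping drawn from the PAI catalogue above.
MODELS = {
    "chat": "llama33-70b",
    "reasoning": "qwen3-8b",
    "embedding": "bge-m3",
    "vision": "gemma-12b-it",
}

def pick_model(task: str) -> str:
    """Choose a catalogue model for a task type; default to general chat."""
    return MODELS.get(task, MODELS["chat"])

def chat_request(task: str, user_text: str) -> dict:
    """Request body for the OpenAI-compatible POST /v1/chat/completions endpoint."""
    return {
        "model": pick_model(task),
        "messages": [{"role": "user", "content": user_text}],
    }
```

The same request body works unchanged if a different catalogue model is selected later, which is what keeps model choice a configuration decision rather than a code change.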

Characteristics of all Private AI Services

With Private AI (PAI), SSC provides customers with sovereign Private AI solutions. The following service characteristics apply to all of Safe Swiss Cloud’s PAI services.

  • Multi-lingual LLMs give you access to knowledge in any language, with results delivered in your own language.
  • Choice of models: Select from a large catalogue of open large language models (LLMs) including DeepSeek, Llama4-Maverick, Apertus, Mistral and many more.
  • Sovereign AI by design: All LLMs are privately hosted in Switzerland by Swiss-owned entities. Your data always stays under your control.
  • Privacy: Your data is processed strictly in accordance with the Swiss Data Protection Act (DSG) and the EU GDPR.
  • No training, retraining or fine-tuning of the LLMs with your data.
  • ISO certifications: 27001, 27017, 27018.
  • Conforms to the C5 and NIS2 standards.
  • 100% hosted in Swiss data centres under Swiss control.

PAI Workflow Hosting Pricing


Safe Swiss Cloud provides sovereign, private hosting for AI workflow solutions such as n8n. This allows customers to create agents that automate tasks and workflows.

Service | Description | Price
Managed Workflow Hosting n8n | Includes server infrastructure, security, backups, monitoring and application management. | 226.-
n8n Enterprise License | This is needed for multi-user installations. | Not included*

*Safe Swiss Cloud will be happy to advise, arrange and organise licenses for n8n and other workflow tools based on the parameters of your organisation.

FAQ


Questions and answers about Private AI by Safe Swiss Cloud

What does “Private AI” mean at Safe Swiss Cloud?

Private AI at Safe Swiss Cloud means three things:

  1. Customer data — prompts, replies, and AI output — is never used to train models.
  2. Customer data is handled in accordance with the Swiss Data Protection Act (DSG) and the EU’s GDPR, ensuring full privacy compliance.
  3. The infrastructure is sovereign, meaning it is not subject to arbitrary service interruptions for non-technical reasons.

How does Safe Swiss Cloud’s Private AI comply with the revised Swiss Data Protection Act and other data-protection laws?

Safe Swiss Cloud’s Private AI complies fully with the revised Swiss Data Protection Act (CH DSG) and the EU GDPR.

Are AI prompts and responses stored in logs and therefore visible to Safe Swiss Cloud staff?

No. Prompts and responses are not stored in logs and therefore cannot be viewed or traced by Safe Swiss Cloud staff.

Are AI workloads for one customer fully isolated from other customers?

Yes. Every customer receives their own dedicated front end and RAG (Retrieval-Augmented Generation) system. Prompts are fully isolated from one another and have no cross-customer side effects.

Can we run our AI environment on dedicated, “no other tenants” hardware if required?

Yes. This option is more expensive because it involves dedicated GPUs for a single customer. It is a viable solution for customers with sufficient workload volume and strict compliance requirements for a dedicated AI infrastructure.

How do you protect customers against non-Swiss or extraterritorial government access to data?

Under Swiss law, Safe Swiss Cloud may hand over customer data to foreign governments only if a warrant issued by a Swiss court under Swiss law has been served. This provides a robust legal barrier against extraterritorial data-access requests.

Is the Private AI environment suitable for processing regulated data, such as health or financial data?

Yes. The technical privacy features, together with the regulatory frameworks of the Swiss Data Protection Act and the EU GDPR, ensure that the Private AI environment is suitable for regulated data, including healthcare and financial data.

Do you offer dedicated GPU and storage clusters for a single customer?

Yes. Safe Swiss Cloud offers dedicated GPU and storage clusters for individual customers.

How are backups and snapshots handled, and are they encrypted?

Backups are always encrypted. Snapshots of an encrypted volume are also encrypted. Customers can additionally choose to encrypt data at rest, which ensures that all snapshots remain encrypted and cannot be used in any way other than intended.

How easy is it to move our models, data, and prompts from Safe Swiss Cloud to another provider in the future?

We use open-source models accessible via the industry-standard OpenAI API. This allows customers to switch models and providers as needed, without being locked into proprietary formats or interfaces.
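Since access goes through the industry-standard OpenAI API, a provider move amounts to swapping a base URL and a model name, as this sketch illustrates (the endpoint URLs and model IDs here are placeholders, not real endpoints):

```python
from dataclasses import dataclass

@dataclass
class LLMEndpoint:
    """An OpenAI-compatible endpoint; moving providers means swapping one object."""
    base_url: str  # e.g. https://<provider>/v1
    model: str     # provider-specific model ID

# Hypothetical endpoints for illustration only.
PAI = LLMEndpoint("https://pai.safeswisscloud.example/v1", "llama33-70b")
OTHER = LLMEndpoint("https://other-provider.example/v1", "llama-3.3-70b-instruct")

def completions_url(ep: LLMEndpoint) -> str:
    """The only provider-specific pieces are the base URL and the model name."""
    return f"{ep.base_url}/chat/completions"
```

Everything else in the application (prompts, message format, tool definitions) stays the same, which is the practical meaning of "no proprietary formats or interfaces".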

In which jurisdiction is our AI data stored and processed?

All AI data and models are stored and processed exclusively in Switzerland.

How is Safe Swiss Cloud’s Private AI different from public cloud AI services or consumer chatbots?

Safe Swiss Cloud uses open-source models and does not train — or contribute to training — these models. As a result, customer data (prompts, AI output, logs, etc.) is never used for any purpose other than what the customer explicitly requests. This prevents confidential information from inadvertently entering the public domain.

Many publicly available AI services silently use customer data for training. For example, an employee who uploads a file containing confidential information to a public AI service may see that data incorporated into a future large language model (LLM) iteration — potentially enabling a competitor to retrieve proprietary information in a future query.

What service levels and SLAs do you offer for GPU-intensive AI workloads?

Safe Swiss Cloud offers an SLA with 99.9% uptime. For paid support plans, the guaranteed response time is a maximum of one hour.

What services does Safe Swiss Cloud offer for integrating enterprise data into AI?

We offer services to develop MCP (Model Context Protocol) servers for connecting AI systems to enterprise data sources and workflows.

What support options are available?

Safe Swiss Cloud offers paid support packages with 24/7 coverage. For details, please visit our Support Services page.

Which certifications, audits, or attestations does the Private AI platform have for regulated use cases?

Safe Swiss Cloud is ISO 27001, ISO 27017, and ISO 27018 certified and audited annually. This ensures that an Information Security Management System (ISMS) is in place, that the necessary standards for protecting Personally Identifiable Information (PII) are met, and that additional security and privacy measures for cloud environments are implemented.

Safe Swiss Cloud also complies with a range of industry-specific standards, including FINMA and BaFin (finance), HIPAA and FMH (healthcare), EU GDPR and Swiss DSG (data protection), as well as C5 and NIS2.

Which open formats, APIs, or interfaces do you support to minimise vendor lock-in?

Large Language Models are accessed via the industry-standard OpenAI API. Interfaces to enterprise data are based on the Model Context Protocol (MCP) standard. Together, these ensure a very high degree of interoperability between systems, making it straightforward to switch models or providers.

Which performance guarantees does Private AI provide for GPU availability, I/O, and storage throughput?

Like all AI service providers — including the major international public cloud providers — Safe Swiss Cloud does not guarantee specific performance levels for AI workloads. However, a large pool of high-performance GPUs and dynamic capacity management have been implemented to ensure strong performance under normal operating conditions.