In this post, I will show you how to make your application data persist by adding a persistent volume claim to an already deployed pod/container.
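As a preview of the approach, a persistent volume claim can be declared like this; the claim name, storage class defaults, and size below are illustrative assumptions, not values from the post:

```yaml
# Hypothetical PersistentVolumeClaim; name and size are illustrative.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce   # a single node may mount the volume read-write
  resources:
    requests:
      storage: 1Gi
```

On OpenShift, such a claim can then be mounted into an already-deployed application with, for example, `oc set volume deployment/myapp --add --claim-name=app-data --mount-path=/data` (here `myapp` and `/data` are placeholder names).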
In this article I demonstrate how to set up an autoscaler that scales pods up when CPU usage exceeds a certain threshold, and back down again when it falls.
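The core of such a setup is a HorizontalPodAutoscaler. A minimal sketch is shown below; the deployment name, replica bounds, and 70% CPU threshold are illustrative assumptions:

```yaml
# Hypothetical HorizontalPodAutoscaler; names and thresholds are illustrative.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
spec:
  scaleTargetRef:          # the workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale up above 70% average CPU
```

An equivalent one-liner is `kubectl autoscale deployment myapp --cpu-percent=70 --min=1 --max=5`; either way, the controller needs cluster metrics (e.g. the metrics server) to act on CPU utilization.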
A Kubernetes cloud is an ideal infrastructure for a microservices architecture. It provides an out-of-the-box solution to get you up and running quickly, whether on a managed public or a dedicated private Kubernetes/OpenShift cluster.
JupyterLab is one of the most widely used IDEs for data science and machine learning. Deploying it on OpenShift/Kubernetes adds flexibility in terms of convenience, resource allocation, and horizontal scaling across user groups.
Containers and microservices have revolutionized the deployment of enterprise software. Yet one industry remains sluggish in moving its applications to cloud containers: banks and financial service providers. There are, however, successful counterexamples.
At Safe Swiss Cloud, we hear from software developers time and again that dedicated OpenShift clusters let them fully exploit the benefits of deployment in the cloud. Red Hat's Platform-as-a-Service OpenShift enables faster development, deployment, monitoring and scaling of applications in Docker containers. The feedback is almost always similar:
Deploying OpenShift to the cloud, as opposed to bare metal, is an ideal way to get up and running quickly. It is particularly well suited to development and test environments, where instant resource availability and flexibility are key. A great way to smooth the path to a successful OpenShift deployment is automation.
More and more companies are using OpenShift to develop and deploy their cloud applications. It leverages the advantages of Docker containers, manages scaling, and increases efficiency. Now available in the Safe Swiss Cloud.