All you need to do is manage your applications, with no worries about security updates, patching or system upgrades. Our experienced K8s experts are responsible for maintaining all cluster components.
Hosted by ASERGO
Be in complete control of your cluster and enjoy the 100% flexibility it brings to your Kubernetes environment. This solution requires experience and knowledge to manage, run and operate. Should you need our assistance, just contact our experts.
Hosted by ASERGO
Confidently offload the day-to-day management of software infrastructure hosted by other vendors. This includes Google Kubernetes Engine (GKE), Amazon Elastic Kubernetes Service (EKS), Microsoft Azure Kubernetes Service (AKS) and on-premises Kubernetes clusters.
Hosted by other vendors
We have made it easy for you
Focus on your applications while we handle lifecycle management, software updates and hardware.
Dedicated Kubernetes is for those who are serious about data privacy.
Full Kubernetes API
We strive to avoid any vendor lock-in through proprietary APIs or custom versions of Kubernetes. The Kubernetes control plane provides an endpoint that works with the kubectl CLI and any other tool in the Kubernetes ecosystem. You can therefore develop your applications on any pure Kubernetes platform, even Minikube, and deploy them directly to your cluster.
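As a sketch of that portability, a plain Kubernetes manifest (all names here are hypothetical, nothing is ASERGO-specific) works unchanged on Minikube during development and on a dedicated cluster in production:

```yaml
# Hypothetical example application; deployable on any conformant cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
      - name: web
        image: nginx:1.25
        ports:
        - containerPort: 80
```

Apply it locally with `kubectl --context minikube apply -f hello.yaml`, then switch kubeconfig contexts and run the same command against the dedicated cluster.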
We provide a unique opportunity for your large databases or other applications that do not work well within a Kubernetes cluster. Our network topology allows you to interconnect standalone dedicated servers with your Kubernetes cluster at full wire speed. Combine dedicated servers running Linux, FreeBSD, OpenBSD or Windows applications with your Kubernetes pods.
Enjoy the freedom to enable the promise of the Hybrid Cloud. Connect your ASERGO cluster with any 100% Kubernetes compatible vendor or even your own on-premises cluster.
We give you the best conditions for running your own networks. With the ability to create isolated networks, flexible IP assignment and routing, you can run your own private, isolated networks on our infrastructure.
You do not share any hardware with others. Master and worker nodes are dedicated to you, giving you full control over cluster configuration and data privacy. Noisy neighbors and privacy issues are a thing of the past.
Do not pay for more than you use. Additional hardware, master and worker nodes, can be added to your cluster with ease. Even specialized worker nodes for AI or Ceph storage may be added. Tools for predicting scaling and performance metrics come pre-installed and pre-configured.
Carrier grade network purposely built for Kubernetes
High Performance Network
For both Internet and private networks, our high-performance data center network lets you run even the most demanding workloads.
Extremely Flexible WAN
By giving nodes a direct WAN interface, we can provide extremely flexible solutions for IP, BGP and high-bandwidth Internet applications.
Without NAT on WAN
We do not use NAT on WAN networks. You get direct access to the Internet.
BGP + L2 Ingress
We support BGP L3 and L2 (ARP) Load Balancing for ingress high availability.
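For illustration, with a MetalLB-style load balancer (an assumption for this sketch; the source does not name the implementation), exposing a highly available ingress endpoint is just a Service of type LoadBalancer:

```yaml
# Hypothetical Service; the BGP (L3) or ARP (L2) load balancer
# announces the assigned external IP to the upstream network.
apiVersion: v1
kind: Service
metadata:
  name: ingress-public
spec:
  type: LoadBalancer
  selector:
    app: ingress-nginx
  ports:
  - port: 443
    targetPort: 443
```

Whether the IP is announced over BGP or L2 ARP is a property of the load-balancer configuration, not of the Service itself.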
Unmetered Internal Traffic
Internal cluster traffic and traffic between clusters and private networks are completely unmetered.
Native IPv4 and IPv6 Dual Stack
We support both IPv4 and IPv6 natively in dual stack.
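Dual-stack Services use the standard upstream Kubernetes fields (stable since Kubernetes v1.23); a minimal sketch with hypothetical names:

```yaml
# Requests both an IPv4 and an IPv6 ClusterIP where the cluster supports it.
apiVersion: v1
kind: Service
metadata:
  name: dual-stack-svc
spec:
  ipFamilyPolicy: PreferDualStack
  ipFamilies:
  - IPv4
  - IPv6
  selector:
    app: my-app
  ports:
  - port: 80
```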
We give you different data storage options designed to facilitate your requirements.
Your data is too important to be stored in a shared storage environment where you have no control.
Fast Local Storage
For I/O-intensive applications, each cluster comes with the ability to provision local storage based on local SSDs for maximum performance. This storage is available on all of your Kubernetes nodes. You may also dedicate specific nodes to hold huge amounts of local storage in the terabyte or even petabyte range.
Each cluster comes with a pre-configured Ceph unified distributed storage system. Just start using this safe, secure and convenient storage solution. Ceph block storage is accessed the same way you would access an SSD: mount your storage and read and write data without learning an API. You may configure Ceph to provide block storage, object storage and a shared file system.
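In practice, a pod claims Ceph block storage through an ordinary PersistentVolumeClaim; a minimal sketch (the StorageClass name here is hypothetical, check your cluster for the actual class):

```yaml
# Hypothetical PVC backed by the cluster's Ceph block storage.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-ceph
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: ceph-block
  resources:
    requests:
      storage: 50Gi
```

A pod then mounts the claim like any other volume, so the application sees a normal block device and filesystem.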
Our inter-connect technology binding Kubernetes clusters and dedicated servers together gives you the opportunity to utilize remote storage. Store your data outside Kubernetes on dedicated hardware, without paying for the data I/O.
Full control over your environment
Integrate your preferred CI/CD (Continuous Integration/Continuous Delivery) system with your Kubernetes cluster. Whether that is GitLab, Jenkins or CircleCI, we impose no restrictions.
Design your environment
Clusters have no namespace limit.
You can freely create, delete and manage all namespaces.
Take advantage of namespace separation by running staging and production environments in the same cluster.
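For example, staging and production can live side by side as namespaces (names here are hypothetical):

```yaml
# Two isolated environments in one cluster.
apiVersion: v1
kind: Namespace
metadata:
  name: staging
---
apiVersion: v1
kind: Namespace
metadata:
  name: production
```

Apply with `kubectl apply -f namespaces.yaml`, then target an environment with `kubectl -n staging ...` or `kubectl -n production ...`.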
Rancher for user management
Take advantage of Rancher's optional, easy-to-use web interface across all of your clusters. Rancher's user management adds a layer on top of Kubernetes that gives you advanced user and access-rights control, such as namespace access and command access.
A Helping Hand
We help you through the entire process of implementing Kubernetes. If you have issues, our in-house Kubernetes specialists will help you. We provide support in English and Danish.
6 years with Kubernetes
We have been involved in Kubernetes for the past 6 years, deploying development and production clusters. Our own software infrastructure is running on the same type of clusters as provided to our customers. Right now this webpage you are looking at is being served by NGINX from one of our Kubernetes clusters.
No "one size fits all"
We understand our customers' individual requirements. The huge Kubernetes ecosystem is well known to our engineers, and they will guide you and help tailor your specific deployment.
Kubernetes (k8s) is a production-grade container orchestration system that manages the deployment, scaling and networking of containers across a fleet of worker servers, also called worker nodes. These nodes host all containers running in the cluster and are managed and overseen by the Kubernetes control plane.
The ASERGO Way
We use Kubernetes in our own stack and have developed and optimized a new infrastructure specialized for Kubernetes.
Our ASERGO cluster is a combination of load balancers, Kubernetes nodes, and advanced networking. All clusters are installed using the stacked-masters principle. To achieve high availability, the control plane uses a load balancer (API LB) in front of all active control-plane nodes. This API LB also serves as the management API access point for your cluster.
The network consists of internet (1/10Gbps), Private Kubernetes Network (10/20/100Gbps) and Private Network (1/10/20/100Gbps).
The Internet network is the public entry point to your applications, which are exposed through a BGP load balancer located in the cluster. The Private Kubernetes Network carries all internal cluster traffic; it is a closed private network dedicated to your cluster. The Private Network is your entrance to the Private Kubernetes Network.
We use Canal as our CNI for Kubernetes. Canal combines the best of Flannel and Calico: Flannel provides a simple, well-tested overlay network, while Calico adds network policies for ingress and egress traffic.
Each cluster comes with a ready-to-use Ceph cluster, plus the ability to provision local storage when low-latency storage is needed. Ceph supports block storage, object storage and a shared file system.
Expand the Cluster
You can provision new worker nodes from the ASERGO Dashboard. This is an automated action: as soon as the node is ready, it adds itself to your cluster.
We have tested multiple ways of monitoring a cluster and found the optimal combination for monitoring the cluster and its running applications.
Monitoring system for metrics and alerting.
Collect and show logs from all running pods.
Enable monitoring and troubleshooting transactions in distributed systems.
Control and manage your cluster with your browser.
To distribute traffic sent to the cluster, we have pre-installed an NGINX Ingress Controller. The controller can distribute HTTP, TCP, and UDP traffic to various services inside the cluster.
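Routing HTTP traffic through the pre-installed controller then only requires a standard Ingress resource; a minimal sketch with hypothetical host and service names:

```yaml
# Routes requests for app.example.com to a Service named "web".
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 80
```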
Logging and Tracing
We have pre-installed Operators for Prometheus, logging and tracing for easy and simple deployment if needed. An Operator acts as an application-specific controller that automates deploying and managing a Kubernetes application.
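With the Prometheus Operator in place, scraping an application's metrics becomes declarative; a hedged sketch using the Operator's ServiceMonitor CRD (all labels and names here are hypothetical):

```yaml
# Tells the Prometheus Operator to scrape Services labeled app=web
# on their "metrics" port every 30 seconds.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: web-metrics
spec:
  selector:
    matchLabels:
      app: web
  endpoints:
  - port: metrics
    interval: 30s
```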