Production-Ready Cluster
- HA control plane
- Bare-metal worker servers
- 100% kubectl compatible
- In-house support
- Load Balancers (public and/or private)
- Mix Kubernetes and legacy applications
- Optional Kubernetes services
Network purpose-built for Kubernetes
- 10 to 100 Gbit/s unmetered private network
- 30TB public outbound traffic included
- Unlimited public inbound traffic
ASERGO Kubernetes dashboard
- User administration
- Kubernetes version upgrade
- Native Kubernetes Web UI (dashboard)
- Hardware monitoring
Let's get in touch
Reliable and dependable clusters for production workloads
You don't have to worry about the internal workings of Kubernetes. We take care of configuration and maintenance for you, and the cluster comes pre-configured, ready to use with our K8s services.
To save you additional operational costs and complexity, we have moved several services that would normally be configured inside the cluster to external services you can use as add-ons. These include log and metric retention, and storage services such as block, object, and file storage, archiving, and backup. Many of these services are provided as single-tenant instances, making compliance with regulations a breeze. For your security, all external services are accessed over your private cluster network.
Whether you are new to Kubernetes or an expert, you can draw from our vast experience with Kubernetes and get assistance from our friendly support team. A Kubernetes Production Cluster gets you prioritized support, ensuring you can always get in touch with us when you need us.
We provide both Kubernetes production and development clusters. Our production clusters are optimized for high availability, performance, and expandability, while our development clusters are a cost-effective way to separate your different environments with a physical barrier.
The table below shows the key differences between our production and development clusters:
Our Kubernetes production clusters come with bare-metal worker nodes to ensure high performance and to reduce costs by letting you utilize hardware resources fully.
The table below shows the minimum resource allocation that comes with our Kubernetes production clusters. You can upgrade the cluster resources to match your requirements at any time.
The master nodes provide the control plane for a Kubernetes cluster. We provide three master nodes in a production cluster, each running in a separate availability zone for high availability. Because master nodes do not run your application workloads, we provision them as virtual machines. ASERGO fully manages the master nodes.
The worker nodes execute your workloads. All worker nodes are bare-metal servers for maximum performance, security, and compliance, built on server-grade hardware from top-tier manufacturers.
We have equipped each worker node with 400+ GB of ephemeral solid-state storage, with the drives configured in RAID 1 (mirror) for redundancy. RAM and CPU are specified per customer quote.
In our Kubernetes production clusters, each worker node connects to the network at 10 Gbit/s, with optional upgrades to 25 Gbit/s or 100 Gbit/s.
ASERGO fully manages all worker nodes.
General Network Topology
We use hardware-based switching and routing for optimal performance for both the Internet and private network layers. Relying on hardware packet forwarding enables wire-speed transfers and low latency interconnects across our data center network.
We use the following Kubernetes network stack:
- Technology: Canal
- Pod CIDR: 10.244.0.0/16
- Service CIDR: 10.96.0.0/12
Private Network IP ranges (example):
- Kubernetes Cluster Network: 10.X.X.0/24
- ASERGO service endpoints: 10.255.0.0/16
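As a quick sanity check, the fixed ranges above can be inspected with Python's standard ipaddress module. This is purely illustrative; the 10.X.X.0/24 cluster network is assigned per cluster, so it is omitted here:

```python
import ipaddress

# The fixed network ranges listed above.
pod_cidr = ipaddress.ip_network("10.244.0.0/16")          # Canal pod network
service_cidr = ipaddress.ip_network("10.96.0.0/12")       # Kubernetes services
asergo_endpoints = ipaddress.ip_network("10.255.0.0/16")  # ASERGO service endpoints

print(pod_cidr.num_addresses)          # 65536 pod addresses
print(service_cidr.num_addresses)      # 1048576 service addresses
print(pod_cidr.overlaps(service_cidr)) # False: the ranges are disjoint
print(asergo_endpoints.overlaps(service_cidr))  # False
```

Keeping these ranges disjoint is what allows pods, services, and add-on endpoints to coexist on the same private network without routing conflicts.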
All nodes connect directly to the Internet, and each node has a public IP and a firewall to block unwanted traffic.
With network processing happening in hardware at the network access layer, there is no provider-level software processing of your network traffic to impose bottlenecks or restrictions.
Unobstructed access to the Internet, combined with network functions implemented directly in the data center network, allows us to support many connection scenarios, from straightforward setups to advanced configurations such as native IPv6, Bring Your Own IP (BYOIP), BGP routing, and distributed load balancers.
Private Cluster Network
The cluster network is a private network providing secure, isolated transport between cluster nodes. The nodes use this network as the transport layer for pods and services, and your cluster nodes can also reach additional hosted add-on services through it.
Connect to Bare Metal Servers (Optional)
If you have infrastructure running both inside and outside Kubernetes, we let you combine Kubernetes and traditional bare-metal servers over private networks, giving you the best of both worlds without worrying about traffic bills.
The Kubernetes cluster comes with only the bare minimum of services required to deploy your application(s). We believe your cluster should be dedicated to running your application(s), allowing you to utilize its resources effectively while keeping it simple.
We provide several managed hosted services, such as pod logging, Prometheus metrics, and remote storage. Your cluster can connect to hosted services via the private cluster node network, allowing high-speed interconnect and free unmetered traffic.
You can add additional services to a cluster at any time.
Redundant Load Balancer
Your cluster comes pre-configured with single-tenant redundant load balancing. Single tenancy protects you against noisy neighbors and rules out connection leakage between tenants.
MetalLB is our load balancer of choice, and it comes pre-configured for you with BGP connectivity to announce your IP(s) from worker nodes in different availability zones.
One public load balancer IP address is included by default. If needed, we can allocate additional IP addresses upon your request.
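To illustrate, a MetalLB BGP setup of this kind is typically expressed with resources along the following lines. All addresses, pool names, and AS numbers below are placeholders for illustration, not ASERGO's actual values, and the exact manifest shapes depend on the MetalLB version in use:

```yaml
# Illustrative MetalLB BGP configuration (placeholder values only).
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: public-pool
  namespace: metallb-system
spec:
  addresses:
    - 203.0.113.10/32      # example public load balancer IP
---
apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: upstream-router
  namespace: metallb-system
spec:
  myASN: 64512             # example private ASN for the cluster
  peerASN: 64513           # example ASN of the data center router
  peerAddress: 10.0.0.1    # example peer address on the private network
---
apiVersion: metallb.io/v1beta1
kind: BGPAdvertisement
metadata:
  name: public-adv
  namespace: metallb-system
spec:
  ipAddressPools:
    - public-pool
```

With a configuration like this, MetalLB speakers on the worker nodes announce the service IP to the upstream routers over BGP, so traffic can reach the cluster through any availability zone.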
We use OpenID Connect (OIDC) user authentication in Kubernetes.
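On the client side, OIDC authentication is typically wired into your kubeconfig. A minimal sketch using the kubelogin (kubectl oidc-login) plugin, with placeholder issuer and client values that are not ASERGO's actual identity provider settings:

```yaml
# Illustrative kubeconfig user entry for OIDC (placeholder values only).
users:
  - name: oidc-user
    user:
      exec:
        apiVersion: client.authentication.k8s.io/v1beta1
        command: kubectl
        args:
          - oidc-login
          - get-token
          - --oidc-issuer-url=https://idp.example.com  # placeholder issuer URL
          - --oidc-client-id=kubernetes                # placeholder client ID
```

When kubectl needs credentials, the plugin opens a browser login against the identity provider and passes the resulting ID token to the API server for verification.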
Role-Based Access Control (RBAC) allows the assignment of individual access levels per user. Roles apply at both the namespace and cluster level. We provide four levels of access to a cluster:
- Super Admin: Allows super-user access to perform any action on any resource, granting full control over all resources in the cluster across all namespaces.
- Admin: Grants admin access to selected customer namespaces, with read/write access to most resources in a namespace, including creating Roles and RoleBindings within the namespace.
- User: Grants read/write access to most objects within a namespace. It does not allow viewing or modifying Roles or RoleBindings.
- Read-only: Allows read-only access to most objects in a namespace. It does not allow viewing Roles or RoleBindings, and it does not allow viewing Secrets, since access to Secrets enables privilege escalation.
Both the Admin and User roles also have the following additional cluster-level rules:
- List Namespaces
- List, get Nodes
- List, get Metrics (Nodes)
- List, get StorageClass
- List, get, create, delete, edit PersistentVolumes
In addition to the above, the Admin role also has:
- List, create, delete ClusterRoles
- List, create, delete ClusterRoleBindings
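As a sketch, a namespace-scoped role along the lines of the User level above might look like the following. The rule set and names are illustrative, not ASERGO's exact definitions:

```yaml
# Illustrative Role for the "User" access level (placeholder values only).
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: user                # illustrative role name
  namespace: my-namespace   # placeholder customer namespace
rules:
  # Read/write access to most workload objects in the namespace.
  - apiGroups: ["", "apps", "batch"]
    resources: ["pods", "services", "configmaps", "deployments", "jobs"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  # Deliberately no rules for roles or rolebindings: the User level
  # may not view or modify Roles or RoleBindings.
```

A RoleBinding in the same namespace would then bind this Role to the OIDC identity of each user who should receive User-level access.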
Resources in the cluster managed by ASERGO are all labeled with asergo.com/managed=true.
If required, you can obtain Super Admin rights to the cluster. Please contact us for more information.