Cost-effective Clusters for Development

A development cluster is a separate physical cluster that is 100% compatible with our production clusters but designed to be more cost-effective. Whether you are just getting started with Kubernetes or are already running Kubernetes in production but need a physical separation between your development, production, and staging environments, our development cluster is for you.

Using namespaces to isolate your production environment from staging and development often doesn't work very well. It can also make it difficult to meet compliance and regulatory requirements.

Our recommended solution is a cost-effective way of running a physically isolated, scaled-down version of our production clusters. Being compatible with our production clusters allows you to operate both types seamlessly. A development environment must be able to withstand rapid and radical changes, and having physically isolated clusters for different environments prevents unforeseen interruptions in your production environment.

Comparison

While a development cluster is both compatible with and very similar to our production clusters, there are some key differences:

                            | Development K8s Cluster | Production K8s Cluster
Support Level               | Standard                | Priority
Control Plane HA            | No                      | Yes
Master Nodes                | 1                       | 3
Bare-metal Worker Nodes     | Yes                     | Yes
WAN Speed                   | 1 Gbit/s                | 10 Gbit/s
Private LAN Speed           | 1 Gbit/s                | 10 Gbit/s
Network Bandwidth Upgrade   | No                      | 100 Gbit/s
Availability Zones          | 1                       | 3+
Multiple Data Centers       | No                      | Yes
ASERGO K8s Services Support | Full                    | Full

Specifications

An ASERGO Kubernetes Development Cluster comes with Bare-metal Worker Nodes to ensure high performance and ample resources, so you don't have to spend precious time waiting for your applications to deploy.

The table below shows the minimum resource allocation that comes with our Development Clusters. You can upgrade the cluster resources to match your requirements at any time.

Worker Nodes      | From 3
Kubernetes CPUs   | From 24
RAM               | From 96 GB
Load Balancer IPs | 1 included
[Diagram: ASERGO Managed Kubernetes Cluster. Master nodes 1, 2 and 3 run etcd, the kube-API server, kube-scheduler, and kube-controller; bare-metal worker nodes run your pods and containers; BGP load balancers sit on the private Kubernetes network and the private bare-metal server network (10/25/100 Gbps); redundant routers connect the public network (1/10/25 Gbps) to the Internet; the operator reaches kubectl and the API through the API load balancer.]

Master node

Master nodes provide the control plane for a Kubernetes cluster. In a development cluster, we provide a single master node, running as a VM. ASERGO fully manages the master node.

Worker nodes

The worker nodes are what execute your workload. Even in our development clusters, all worker nodes are bare-metal servers for maximum performance, security, and compliance. The bare-metal nodes are all server-grade hardware from top-tier manufacturers.

We have equipped each worker node with 400+ GB of ephemeral solid-state storage, configured in RAID 1 (mirror) for redundancy. RAM and CPU are sized per customer quote.

In our Kubernetes development clusters, each worker node is connected to the network at 1 Gbit/s.

ASERGO fully manages all worker nodes.
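
The following kubectl queries are a minimal sketch of how you can verify the resources available on your worker nodes; the node name is illustrative and will differ in your cluster.

    # Show the worker nodes and basic details
    kubectl get nodes -o wide

    # Inspect a single node's capacity and allocatable CPU, memory,
    # and ephemeral storage ("worker-1" is an illustrative node name)
    kubectl describe node worker-1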

General Network topology

We use hardware-based switching and routing for optimal performance on both the Internet and private network layers. Relying on hardware packet forwarding enables wire-speed transfers and low-latency interconnects across our data center network.

We use the following Kubernetes network stack:

  • Technology: Canal
  • Pod CIDR: 10.244.0.0/16
  • Service CIDR: 10.96.0.0/12

Private Network IP ranges (example):

  • Kubernetes Cluster Network: 10.X.X.0/24
  • ASERGO service endpoints: 10.255.0.0/16
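
As a quick sanity check of the CIDRs above, you can inspect the addresses Kubernetes actually assigns: pod IPs are allocated from the Pod CIDR and service ClusterIPs from the Service CIDR.

    # Pod IPs come from the Pod CIDR (10.244.0.0/16)
    kubectl get pods --all-namespaces -o wide

    # Service ClusterIPs come from the Service CIDR (10.96.0.0/12)
    kubectl get services --all-namespaces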

Internet

All nodes connect directly to the Internet, and each node has a public IP and a firewall to block unwanted traffic.

With network processing handled in hardware at the network access layer, there is no provider-level software processing of your network traffic to impose bottlenecks or restrictions.

Unobstructed access to the Internet and network functions implemented directly in the data center network allow us to support many different connection scenarios, from straightforward setups to advanced configurations such as native IPv6, "Bring Your Own IPs" (BYOIP), BGP routing, and distributed load balancers.

Private Cluster Network

The cluster network is a private network allowing a secure and isolated transport between cluster nodes. The nodes use this network as a transport layer for pods and services. Your cluster nodes can also reach additional hosted add-on services through this network.

Connect to Bare Metal Servers (Optional)

If you have infrastructure that runs both inside and outside Kubernetes, we let you combine Kubernetes and traditional bare-metal servers over private networks, allowing you to get the best of both worlds without worrying about traffic bills.

Services

The Kubernetes cluster comes clean, with only the bare minimum of services required to deploy your application(s). We believe your cluster should be dedicated to running your application(s), allowing you to utilize its resources effectively while keeping things simple.

We provide several managed hosted services, such as pod logging, Prometheus metrics, and remote storage. Your cluster can connect to hosted services via the private cluster node network, allowing high-speed interconnect and free unmetered traffic.
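
As an illustrative sketch only, a selector-less Service paired with a manually defined Endpoints object is one common Kubernetes pattern for giving pods a stable in-cluster name for a service reached over the private network; the name, IP, and port below are placeholders, not actual ASERGO endpoints.

    # Save as remote-metrics.yaml and apply with: kubectl apply -f remote-metrics.yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: remote-metrics            # placeholder name for a hosted service
    spec:
      ports:
        - port: 9090                  # placeholder port
    ---
    apiVersion: v1
    kind: Endpoints
    metadata:
      name: remote-metrics            # must match the Service name
    subsets:
      - addresses:
          - ip: 10.255.0.10           # placeholder IP in the service endpoint range
        ports:
          - port: 9090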

You can add additional services to a cluster at any time.

Redundant Load Balancer

Your cluster comes pre-configured with single-tenant redundant load balancing. Single tenancy protects you against noisy neighbors and rules out the possibility of connections leaking between tenants.

MetalLB is our load balancer of choice, and it comes pre-configured for you with BGP connectivity to announce your IP(s) from worker nodes in different availability zones.

One public load balancer IP address is included by default. If needed, we can allocate additional IP addresses upon your request.
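
As a minimal sketch of how you would consume the load balancer, exposing a Deployment as a Service of type LoadBalancer lets MetalLB assign and announce a public IP; "my-app" below is a hypothetical Deployment name.

    # Expose an existing Deployment through the BGP load balancer
    kubectl expose deployment my-app --port=80 --target-port=8080 --type=LoadBalancer

    # The EXTERNAL-IP column shows the assigned load balancer IP
    kubectl get service my-app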

User authentication

We use OpenID Connect (OIDC) user authentication in Kubernetes.

Role-Based Access Control (RBAC) allows the assignment of individual access levels per user. Roles apply at both the namespace and cluster level. We provide four levels of access to a cluster:

Super Admin: Allows super-user access to perform any action on any resource. Grants full control over all resources in the cluster across all namespaces.

Admin: Grants admin access to select customer namespaces. Allows read/write access to most resources in a namespace, including creating Roles and RoleBindings within the namespace.

User: Grants read/write access to most objects within a namespace. It does not allow viewing or modifying Roles or RoleBindings.

Read-only: Allows read-only access to see most objects in a namespace. It does not allow viewing Roles or RoleBindings, and it does not allow viewing Secrets, since access to Secrets would permit privilege escalation.

Both the Admin and User roles also have the following additional cluster-level rules:

  • List Namespaces
  • List, get Nodes
  • List, get Metrics (Nodes)
  • List, get StorageClass
  • List, get, create, delete, edit PersistentVolumes

In addition to the above, the Admin role also has:

  • List, create, delete ClusterRoles
  • List, create, delete ClusterRoleBindings
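
Once you have authenticated, kubectl's built-in authorization check is a quick way to confirm what your assigned access level permits; the namespace name below is illustrative.

    # Check individual permissions with your current credentials
    kubectl auth can-i create deployments --namespace my-namespace
    kubectl auth can-i list roles --namespace my-namespace
    kubectl auth can-i list nodes

    # Summarize everything your role permits in a namespace
    kubectl auth can-i --list --namespace my-namespace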

If required, it is possible to obtain Super Admin rights to the cluster. Please contact us for more information.

All resources in the cluster that are managed by ASERGO are labeled with asergo.com/managed=true.
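
For example, you can filter on this label to see which resources ASERGO manages in your cluster:

    # List labeled resources across all namespaces
    kubectl get all --all-namespaces -l asergo.com/managed=true

    # The same label selector works for cluster-scoped resources
    kubectl get clusterroles,clusterrolebindings -l asergo.com/managed=true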