Kubernetes (k8s) is a container orchestration system that manages the deployment, scaling, and networking of containers across a fleet of worker servers, also called worker nodes. These nodes host all containers running in the cluster. All nodes are managed and overseen by the Kubernetes control plane.
The ASERGO way
We use Kubernetes in our own stack and have developed and optimized a new infrastructure specialized for Kubernetes.
Our ASERGO cluster is a combination of load balancers, Kubernetes nodes, and advanced networking. All clusters are installed following the stacked-masters principle. To achieve high availability (HA), the control plane sits behind a load balancer (API LB) that tracks all active control plane nodes. The API LB also provides management API access to your cluster.
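Because the API LB fronts the control plane, client tooling should point at the load balancer rather than at any single control plane node. A minimal kubeconfig sketch, where the server address and file paths are placeholders rather than real endpoints:

```yaml
# Sketch: the cluster server points at the API LB, which forwards requests
# to whichever control plane nodes are currently active.
apiVersion: v1
kind: Config
clusters:
  - name: asergo-cluster
    cluster:
      # Placeholder address; use the API LB endpoint for your cluster.
      server: https://<api-lb-address>:6443
      certificate-authority: /path/to/ca.crt
contexts:
  - name: asergo-admin
    context:
      cluster: asergo-cluster
      user: admin
current-context: asergo-admin
users:
  - name: admin
    user:
      client-certificate: /path/to/admin.crt
      client-key: /path/to/admin.key
```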
The network consists of the internet (1/10 Gbps), the Private Kubernetes Network (10/25/100 Gbps), and the Private Network (1/10/25/100 Gbps).
The internet is the public access to your applications. Applications are exposed by using a BGP load balancer located in the cluster. The Private Kubernetes Network is where all internal cluster traffic happens; this is the closed private network where only your cluster is located, and the API LB sits on this network as well. The Private Network is your entrance to the Private Kubernetes Network.
We use Canal as the CNI for Kubernetes. Canal combines the best of Flannel and Calico: Flannel provides a simple, well-tested overlay network, while Calico adds the security of network policies for ingress and egress traffic.
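Since Calico enforces standard Kubernetes NetworkPolicy objects, both traffic directions can be restricted per pod. A minimal sketch, where the names, labels, and ports are illustrative rather than taken from a real cluster:

```yaml
# Sketch: allow a web pod to receive traffic only from pods labelled
# role=frontend, and to send egress only to a database on TCP 5432.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-policy          # illustrative name
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: web              # illustrative label
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend
      ports:
        - protocol: TCP
          port: 8080
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: db
      ports:
        - protocol: TCP
          port: 5432
```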
Each cluster comes with a ready-to-use, pre-deployed Ceph cluster, plus the ability to provision local storage if low-latency storage is needed. Ceph supports block storage, object storage, and a shared file system.
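Ceph-backed volumes are typically requested through a PersistentVolumeClaim. A sketch of a block-storage claim; the storageClassName below is an assumption, so use the class name your cluster actually exposes (check with `kubectl get storageclass`):

```yaml
# Sketch: request a 10Gi block volume from the Ceph cluster.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-volume          # illustrative name
spec:
  accessModes:
    - ReadWriteOnce          # block storage is typically mounted by one node
  resources:
    requests:
      storage: 10Gi
  storageClassName: ceph-block   # assumed class name, not confirmed here
```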
Expand the cluster
You can provision new worker nodes from the ASERGO Dashboard. This is an automated action: as soon as the node is ready, it will add itself to your cluster.
We have tested multiple ways of monitoring a cluster and arrived at a combination that covers both the cluster and its running applications:
- A monitoring system for metrics and alerting.
- Log collection and display for all running pods.
- Monitoring and troubleshooting of transactions in distributed systems (tracing).
- Control and management of your cluster from your browser.
To distribute traffic sent to the cluster, we have pre-installed an NGINX Ingress Controller. The controller can distribute HTTP, TCP, and UDP traffic to the various services inside the cluster.
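For HTTP traffic, routing is declared through Ingress resources that the controller picks up. A sketch, where the hostname, Service name, and port are placeholders:

```yaml
# Sketch: route HTTP traffic for one hostname to a backing Service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    kubernetes.io/ingress.class: nginx   # handled by the NGINX controller
spec:
  rules:
    - host: app.example.com              # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web                # placeholder Service name
                port:
                  number: 80
```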
We have pre-installed Operators for Prometheus, logging, and tracing for simple deployment when needed. An Operator works as a templating system, used for deploying and managing a Kubernetes application.
Prometheus Operator manages the following elements:
- Prometheus data nodes
- Grafana dashboard node

Prometheus scrape annotations:
- prometheus.io/scrape: "true"
- prometheus.io/port: "PORT"
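Applied to a workload, the scrape annotations could look like the following sketch; the Deployment name, labels, image, and the port value standing in for "PORT" are all illustrative:

```yaml
# Sketch: a pod template annotated so Prometheus scrapes its metrics.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-app              # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: metrics-app
  template:
    metadata:
      labels:
        app: metrics-app
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9102"   # the port your app serves metrics on
    spec:
      containers:
        - name: app
          image: example/metrics-app:latest   # placeholder image
          ports:
            - containerPort: 9102
```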
Logging Operator manages the following elements:
- Elasticsearch-master nodes
- Elasticsearch-ingest nodes
- Elasticsearch-data nodes
- Fluentd nodes
- Kibana node
Tracing Operator manages the following elements:
Ceph Operator manages the following elements: