Network Attached Storage

Network Attached Storage excels as a storage provisioner for Virtual Machines and cluster solutions such as Kubernetes.

Using a standard Ceph cluster, ASERGO manages and maintains all Monitor nodes, enabling customers to focus on their business. ASERGO Network Attached Storage uses Ceph BlueStore technology and comes in two configurations: Shared Storage and Private Storage.

Shared Storage

  • Shared Storage uses the pool assignment in Ceph. All data is stored on a shared OSD cluster, while each customer has private access to their own storage pool.
  • Shared Storage comes by default with a replication set of 3 OSDs (your data is replicated across three different disks). These OSDs are selected randomly from a large cluster of OSDs.
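Assuming the pool is managed with the standard Ceph CLI, the replication settings of a shared pool could be inspected like this (the pool name is a placeholder for illustration):

```shell
# Show the replication factor of a pool; with a replication set of 3,
# Ceph keeps three copies of every object on three different OSDs.
ceph osd pool get customer-pool size

# min_size controls how many replicas must be available for I/O to continue.
ceph osd pool get customer-pool min_size
```

These commands require access to a running Ceph cluster; on Shared Storage the replication factor is fixed at 3 and managed for you.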

Private Storage

  • Private Storage reserves whole OSDs for each customer. You gain full control over your OSDs: create your own storage pools and create and manage the users that can access them.
  • Private Storage is more flexible in its configuration than Shared Storage. You decide how many OSDs the cluster holds (minimum 3) and how many times data is replicated.
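As a minimal sketch of this control, assuming the standard Ceph CLI and placeholder names (`mypool`, `client.alice`), creating a pool with a chosen replication factor and a user scoped to it might look like:

```shell
# Create a replicated pool with 64 placement groups (name and PG count are examples).
ceph osd pool create mypool 64 64 replicated

# Choose your own replication factor; on Private Storage this is up to you.
ceph osd pool set mypool size 3

# Create a user whose read/write access is restricted to that pool.
ceph auth get-or-create client.alice mon 'allow r' osd 'allow rw pool=mypool'
```

The capability string on the last command limits the user to a single pool, so each application or customer account can be confined to its own storage pool.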

All cluster nodes have a 20 Gigabit uplink.

All OSDs use BlueStore instead of a filesystem on each OSD, which reduces latency and improves disk performance. The journal is placed on an SSD to further improve performance, while all data is written directly onto the raw block device (OSD).
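For illustration, an OSD with this layout is typically provisioned with `ceph-volume`; the device paths below are placeholder assumptions, not the actual ASERGO devices:

```shell
# Provision a BlueStore OSD: object data goes directly to the raw block device,
# while the RocksDB metadata and write-ahead log (the "journal") are placed
# on a faster SSD/NVMe partition via --block.db.
ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1
```

Separating the metadata/WAL device from the data device is what lets writes hit the raw block device directly while small, latency-sensitive journal writes land on flash.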