Encryption at Rest

Encryption at Rest prevents an attacker from accessing data by ensuring it is encrypted whenever it is stored on a persistent device (see Encryption at rest with Ceph for more information). The encryption keys can be held within Ceph itself (by a Monitor) or managed by a separate key manager.

The deployment steps for the encryption feature (with the Vault key manager) will be shown next.

Deployment

The configuration will be contained within a YAML file. Let it be called ceph-encrypted.yaml:

ceph-osd:
  customize-failure-domain: true
  osd-devices: /dev/sdb /dev/sdc
  osd-encrypt: true
  osd-encrypt-keymanager: vault
  source: cloud:bionic-ussuri

ceph-mon:
  customize-failure-domain: true
  monitor-count: 3
  expected-osd-count: 3
  source: cloud:bionic-ussuri

The required configuration options to enable encryption are:

osd-encrypt
This option enables full disk encryption for newly added storage devices via Ceph’s dm-crypt support. It protects data-at-rest from unauthorised usage.

osd-encrypt-keymanager
This option specifies the encryption key management software to use. The default is ‘ceph’ and the alternative is ‘vault’. For the latter, the OSD encryption keys will be stored within Vault.
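
Once the applications are deployed, the values in effect can be read back with juju config. A quick check, using the application name from this guide:

juju config ceph-osd osd-encrypt
juju config ceph-osd osd-encrypt-keymanager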

Example deploy commands would look like this:

juju deploy -n 3 --config ./ceph-encrypted.yaml ceph-osd
juju deploy -n 3 --to lxd:0,lxd:1,lxd:2 --config ./ceph-encrypted.yaml ceph-mon
juju add-relation ceph-osd:mon ceph-mon:osd

As before, a containerised Monitor is located on each storage node, and it is assumed that the machines spawned by the first command are assigned the IDs 0, 1, and 2.
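
If the IDs in your environment differ, adjust the --to placement directives accordingly; the actual assignments can be listed with:

juju machines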

Additional commands will be required for the encryption aspect of the deployment. We will need to deploy Vault for managing the encryption keys and PostgreSQL to act as its database backend. Both applications will be containerised on a new host (machine 3 below) and neither requires configuration options.

juju deploy --to lxd postgresql
juju deploy --to lxd:3 vault

Two relations are needed: one between Vault and PostgreSQL, and one between Vault and the OSDs:

juju add-relation vault:db postgresql:db
juju add-relation vault:secrets ceph-osd:secrets-storage

Once the deployment settles down Vault will be in a sealed state. The last task before the cluster is ready for use is to initialise and unseal Vault; see the vault charm documentation for the authoritative instructions.
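
For reference, a typical initialise-and-unseal sequence is sketched below. It assumes the vault client is available on your workstation and a five-share/three-threshold initialisation; the unit address is taken from the status output further down, and the key and token values are placeholders:

export VAULT_ADDR="http://10.0.0.137:8200"
vault operator init -key-shares=5 -key-threshold=3
# run three times, each with a different unseal key from the init output
vault operator unseal <unseal-key>
# create a short-lived token and pass it to the charm's authorize-charm action
export VAULT_TOKEN=<root-token>
vault token create -ttl=10m
juju run-action --wait vault/0 authorize-charm token=<token>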

Eventually, the juju status output for this deployment will look very similar to this:

Model           Controller     Cloud/Region     Version  SLA          Timestamp
ceph-encrypted  my-controller  my-maas/default  2.8.1    unsupported  00:41:14Z

App         Version  Status  Scale  Charm       Store       Rev  OS      Notes
ceph-mon    15.2.3   active      3  ceph-mon    jujucharms   48  ubuntu  
ceph-osd    15.2.3   active      3  ceph-osd    jujucharms  303  ubuntu  
postgresql  12.2     active      1  postgresql  jujucharms  208  ubuntu  
vault       1.1.1    active      1  vault       jujucharms   39  ubuntu  

Unit           Workload  Agent  Machine  Public address  Ports     Message
ceph-mon/0     active    idle   0/lxd/0  10.0.0.140                Unit is ready and clustered
ceph-mon/1*    active    idle   1/lxd/0  10.0.0.139                Unit is ready and clustered
ceph-mon/2     active    idle   2/lxd/0  10.0.0.141                Unit is ready and clustered
ceph-osd/0     active    idle   0        10.0.0.133                Unit is ready (1 OSD)
ceph-osd/1*    active    idle   1        10.0.0.134                Unit is ready (1 OSD)
ceph-osd/2     active    idle   2        10.0.0.135                Unit is ready (1 OSD)
postgresql/0*  active    idle   3/lxd/0  10.0.0.138      5432/tcp  Live master (12.2)
vault/0*       active    idle   3/lxd/1  10.0.0.137      8200/tcp  Unit is ready (active: true, mlock: disabled)

Machine  State    DNS         Inst id              Series  AZ       Message
0        started  10.0.0.133  node1                focal   default  Deployed
0/lxd/0  started  10.0.0.140  juju-66369d-0-lxd-0  focal   default  Container started
1        started  10.0.0.134  node2                focal   default  Deployed
1/lxd/0  started  10.0.0.139  juju-66369d-1-lxd-0  focal   default  Container started
2        started  10.0.0.135  node3                focal   default  Deployed
2/lxd/0  started  10.0.0.141  juju-66369d-2-lxd-0  focal   default  Container started
3        started  10.0.0.136  node4                focal   default  Deployed
3/lxd/0  started  10.0.0.138  juju-66369d-3-lxd-0  focal   default  Container started
3/lxd/1  started  10.0.0.137  juju-66369d-3-lxd-1  focal   default  Container started
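
To verify that encryption is in effect, the block devices on a storage node can be inspected. A quick check, using unit ceph-osd/0 from the output above: dm-crypt devices appear with TYPE crypt in lsblk, and ceph-volume reports an encrypted flag for each OSD:

juju ssh ceph-osd/0 lsblk
juju ssh ceph-osd/0 sudo ceph-volume lvm list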
