At this point, it is almost cliché: Containers should be stateless, and at the same time containers need to have persistent storage available. There are quite a few ways to address this, and storage vendors all seem to have their own solutions. This means third party storage, which in turn leads to additional management overhead, complexity, and cost.
What if you could just mount object storage to your containers and treat this as local disk? The developers at LucidLink recently made an alpha build of Docker Volume Plugin support available, allowing you to connect your containers to external S3-compatible storage with no additional work required.
The Docker plugin for LucidLink makes it really easy to mount LucidLink Filespaces in Docker containers. All a container has to do is request the volume by name, and no matter which host the container moves to, it will still be able to access the same data, since that data is stored in your object storage bucket.
Deploying LucidLink Docker Volume Plugin
- I'll be using Ubuntu 16.04 LTS, and will install Docker from its official repository by adding an apt repository source.
sudo apt update
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt update && sudo apt install docker-ce
You may find that you also need to run apt upgrade, depending on when you last updated your system. Docker Engine volume plugin support requires Docker 18.03.1 or higher.
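If you want to confirm that the installed engine meets that minimum before going further, a quick check with `sort -V` works. The `docker version` format string is standard Docker CLI; treat the rest as a sketch:

```shell
# Compare the running Docker Engine version against the 18.03 minimum.
# Assumes the docker daemon is installed and reachable.
required="18.03"
installed=$(docker version --format '{{.Server.Version}}')
# sort -V orders version strings; if the required version sorts first,
# the installed version is at least as new.
oldest=$(printf '%s\n%s\n' "$required" "$installed" | sort -V | head -n1)
if [ "$oldest" = "$required" ]; then
    echo "Docker $installed satisfies the $required minimum"
else
    echo "Docker $installed is too old; try apt upgrade" >&2
fi
```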
- Next we will deploy the alpha version of the LucidLink Filespaces Docker Volume Plugin. This will require host networking (network [host]), a mount of [/var/lib/docker/plugins], access to the [/dev/fuse] device, and the CAP_SYS_ADMIN capability. The total download size should be ~52 MB.
docker plugin install lucidlinkcorp/docker-volume-lucid:alpha
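Once the install finishes, it's worth confirming the plugin is registered and enabled before creating any volumes. `docker plugin ls`, `docker plugin inspect`, and `docker plugin enable` are standard Docker CLI commands; the scripted check below is a sketch assuming the plugin name used above:

```shell
# Confirm the LucidLink plugin is installed and enabled.
docker plugin ls

# Scripted check: inspect reports "true" when the plugin is enabled.
enabled=$(docker plugin inspect lucidlinkcorp/docker-volume-lucid:alpha --format '{{.Enabled}}')
if [ "$enabled" = "true" ]; then
    echo "plugin is enabled"
else
    # If the plugin was installed but left disabled, enable it explicitly.
    docker plugin enable lucidlinkcorp/docker-volume-lucid:alpha
fi
```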
- Now it's time to create a LucidLink Filespace. If this is your first time using LucidLink, you have to sign up and choose a domain for your account. After this you can provision your first Filespace. Select the 'Create a new filespace' option, at which point you can choose between 'LucidLink Storage' or 'Your Storage Provider'. I'm choosing my own storage provider, selecting AWS and a region, and then clicking 'Create'. After this the portal will spend some time spinning up your Filespace instance.
Don't forget that this Filespace needs to be initialized. To do this, we will download the LucidLink client and use it as follows.
# install LucidLink client
wget https://s3.amazonaws.com/lucidlink-builds/latest/lin64/lucid_1.12.1666_amd64.deb
dpkg -i lucid_1.12.1666_amd64.deb
apt install --fix-broken
# start LucidLink daemon
lucid daemon &
# initialize LucidLink Filespace
lucid init-s3 --fs filespace.huttenga --password 123 --https --accesskey <accesskey> --secretkey <secretkey> --region <region> --provider AWS
# exit LucidLink daemon
lucid exit
Be sure to use your own Filespace name and to change the password, access key, and secret key. Initialization only has to happen once per LucidLink Filespace and ensures that you are the only one who holds the password. If done correctly, the initialization should show something like this.
Now we can create and use Docker volumes using the filespace we initialized. This same volume can be accessed from any container, running on any Docker host, on any platform, anywhere.
docker volume create -d lucidlinkcorp/docker-volume-lucid:alpha filespace.huttenga -o password=123
docker run -it --mount type=volume,src=filespace.huttenga,target=/mnt/filespace.huttenga nginx:latest /bin/bash
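When you're done experimenting, the volume can be examined or removed with the standard `docker volume` commands. The volume name below matches my Filespace, so substitute your own; and note the assumption in the comment about what removal does with this alpha plugin:

```shell
# Show which driver owns the volume and where Docker mounts it.
docker volume inspect filespace.huttenga --format '{{.Driver}} -> {{.Mountpoint}}'

# List all volumes served by the LucidLink plugin.
docker volume ls --filter driver=lucidlinkcorp/docker-volume-lucid:alpha

# Remove the Docker-side volume object when finished. Removing the volume
# should not delete the data in the Filespace itself, but verify this
# yourself with the alpha plugin before relying on it.
docker volume rm filespace.huttenga
```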
This was a short overview of how to use the LucidLink Docker Volume Plugin, which, remember, is still in alpha and not for production use. That said, I can imagine LucidLink being pretty useful in development workflows. In fact, that was the initial reason LucidLink was written: to make it easier to share code bases across clouds, whether with developers or continuous integration tools.