Scenario
Adding Cinder block storage extends the compute-focused deployments we built in Chapter 6, “Private Compute Cloud,” and Chapter 7, “Public Compute Cloud,” so that you can now offer real persistent storage to your users. Since we’re adding an additional component, a bit of setup needs to be done with Puppet again to configure this storage.
Controller Node Setup
If you have your Controller and Compute nodes available from Chapter 5, you will only need to run a single command to add support for Cinder block storage. In this scenario, all changes are made on the controller node; no modifications need to be made to the compute node.
The command is another Puppet apply command, which will process our foundational block storage role in Puppet:
$ sudo puppet apply /etc/puppet/modules/deployments/manifests/role/foundations_block_storage.pp
This will take some time as it downloads everything required for Cinder and sets up configurations. If anything goes wrong and this command fails, remember that Puppet can also be run with the --debug flag in order to show more detail.
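For example, a debug run of the same manifest looks like this:

$ sudo puppet apply --debug /etc/puppet/modules/deployments/manifests/role/foundations_block_storage.pp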
While this is running, we can take a look at what this file contains:
class deployments::role::foundations_block_storage {
  include deployments::role::foundations
  include deployments::profile::cinder
}

include deployments::role::foundations_block_storage
This is calling out to our foundations role, which means that if you didn’t yet set up the foundations role for your controller, it will be applied now. This is mostly a safety measure; we still recommend running it independently in case you need to do any troubleshooting.
It then calls our Cinder block storage profile, which you can view on the controller file system at /etc/puppet/modules/deployments/manifests/profile/cinder.pp, and it contains the following:
class deployments::profile::cinder {
  include ::cinder
  include ::cinder::api
  include ::cinder::ceilometer
  include ::cinder::config
  include ::cinder::db::mysql
  include ::cinder::keystone::auth
  include ::cinder::scheduler
  include ::cinder::volume
  include ::cinder::setup_test_volume

  file { '/etc/init/cinder-loopback.conf':
    owner   => 'root',
    group   => 'root',
    mode    => '0644',
    content => template('deployments/cinder-loopback.conf.erb'),
  }
}
The profile pulls in the various components of Cinder that we will need. Just like other services in OpenStack, Cinder requires an API, a database, and Keystone authentication. In case you wish to track usage with Ceilometer’s telemetry service, we also include that. The config class is pulled in to help manage arbitrary Cinder configurations you may wish to have. A scheduler in block storage is used in much the same way other OpenStack services use schedulers: it evaluates the requirements the user is requesting for the volume and then picks a storage back end that meets those criteria and that the volume can be created on. As you may expect, pulling in cinder::volume is for the Cinder volume manager. As explained earlier in the chapter, this is what interacts with the drivers actually controlling the storage back end, whether it’s a simple loopback device with LVM (Logical Volume Manager), as we will be using, or a proprietary NAS device.
The final lines of this file use the Puppet module’s capability to configure a test volume. For simplicity’s sake we use this setup_test_volume, which creates a simple 10GB file attached to a loopback device (by default, /dev/loop2) and added to LVM as a single volume group. An init file is also created by our cinder.pp profile to make sure the file is attached and the volume group is activated if your controller reboots.
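Once the run finishes, you can verify this plumbing on the controller. A minimal sketch, assuming the conventional volume group name cinder-volumes and the standard Ubuntu service names (adjust these if your deployment differs):

$ losetup -a                          # the 10GB backing file should be attached to /dev/loop2
$ sudo vgs cinder-volumes             # the LVM volume group Cinder allocates volumes from
$ sudo service cinder-volume status   # the volume service should be running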
Once your puppet apply command completes, you’re ready to start creating volumes and attaching them to instances!
Creating and Attaching a Volume: Dashboard
We will begin with the process for creating and attaching a volume using the OpenStack dashboard (Horizon). With the block storage (Cinder) component now installed, when you log into the dashboard with your test user you will see a section for Volumes on the left under Project, in Compute, as shown in Figure 8.3.

Figure 8.3 Empty, default Volumes page in the dashboard
Creating a Volume
On this page you’ll want to click on the Create Volume button, which will bring up a dialog like the one in Figure 8.4 where you will put in information about the volume you wish to create. Some fields will be automatically filled out, but the rest will be up to you.

Figure 8.4 Create a volume in the dashboard
The volume name is what you will be using to refer to the volume. A description is optional and can be used for whatever you want, maybe as a reminder to yourself about what the volume is intended for. The volume source enables you to pre-populate the volume with data. By default it queries the Image service (Glance), enabling you, as one of the options, to place an image on your newly created volume. You may also want to use a volume source that already contains a partition table and basic filesystem so that they don’t need to be created later, after you attach the volume to an instance. For this scenario, we will just use No source, empty volume, and will explain how to partition and format it after it is attached to an instance.
The type of volume will inform the scheduler as to which type of storage back end you need to use. From the customer’s point of view, volume types present tiered and varied storage with different properties, such as how fast the storage device is, Quality of Service (QoS) guarantees, or whether a tier offers replication. Prices may vary for the customer based on which options they select. From your perspective, this means one of these tiers may be using Ceph and another a proprietary NAS device that has the desired qualities for the tier being offered. We have not set a volume type, so it will remain as “No volume type” for this example. Our back end only has 10GB, so we’ll start out in this test by creating a 1GB volume to attach to our instance. The availability zone is identical to the one in compute (nova) and currently must match the zone where the instance you wish to attach it to resides. In our deployment scenario we only have a single availability zone, so the default of nova should remain selected.
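Although we have not defined any volume types in this scenario, an operator would typically create them from the command line with an admin account. A minimal sketch, where the type name standard and the volume_backend_name property value are illustrative rather than from our deployment:

$ openstack volume type create --property volume_backend_name=lvm standard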
When you have finished, you can click on Create Volume in order to begin volume creation. You will be returned to the Volumes page of the dashboard, which will show your new volume as you can see in Figure 8.5.

Figure 8.5 A volume called “walrus” has been created
Attaching a Volume
A volume on its own is not of much value, so we’ll now want to attach it to a compute instance. If you do not have an instance running, you can create a basic one with a CirrOS image now on the Instances page of the dashboard. Refer back to Chapter 6 if you need a refresher on the steps to launch an instance.
Attaching a volume in the dashboard is done by going to the drop-down menu on the right side of where your volume is listed. From that menu, select Manage Attachments to bring up the screen where you can attach the volume to an instance (Figure 8.6).

Figure 8.6 Managing volume attachments
In this example we have an instance running called “giraffe”; its UUID is also shown, since names can be reused in compute (Nova). There is also an optional Device Name field where you can define what you want the device to be named when it’s attached to the instance. This can safely be left blank, and a name will be assigned automatically. When you have selected the instance to attach to, click Attach Volume.
When the volume finishes attaching, you will see it in the dashboard as “Attached to” the instance, along with the device name it has appeared as (see Figure 8.7).

Figure 8.7 A volume has been attached
You’ll next want to log into the instance to see that the device has been attached successfully, but this process is the same whether you’re completing this process with the dashboard or through the command line. You can continue to learn the process for attaching a volume using the OpenStack Client on the command line, or skip to the “Using the Volume” section later in this chapter to see what you can do to use your new volume.
Creating and Attaching a Volume: OpenStack Client
As we’ve discussed previously, the dashboard can be a convenient way to interact with OpenStack to complete most of the simple operations you may need to do. You will find, however, that most operators prefer using the command line clients or SDKs to interface with the tooling. As such, we’ll now walk through the same process we did with the dashboard but instead using the OpenStack Client (OSC).
The OSC is small and can easily be run from any system that has access to the API endpoints for the services. In our deployment scenarios, this means it must be on the same network as your controller node. You must also have access to the /etc/openrc.test file that was created on your controller and compute nodes, so for these commands we will assume you’re running everything on your controller.
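If you need the client on another machine, it is generally installable with pip:

$ pip install python-openstackclient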
Creating a Volume
We will be using the test user in order to create this volume, since it will also be attaching to a compute instance owned by the test user. To begin, we’ll bring the environment variables for the test user in from the openrc file. Once they are loaded, we can issue the command to create the volume. Aside from the name, we will use the same specifications as with the OpenStack dashboard (Horizon): a 1GB volume that is empty (no partition table, filesystem, or data) and in our default availability zone, nova.
$ source /etc/openrc.test
$ openstack volume create --size 1 --availability-zone nova seaotter
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| consistencygroup_id | None                                 |
| created_at          | 2016-04-15T04:19:46.086611           |
| description         | None                                 |
| encrypted           | False                                |
| id                  | 53372cc5-087a-4342-a67b-397477e1a4f2 |
| multiattach         | False                                |
| name                | seaotter                             |
| properties          |                                      |
| replication_status  | disabled                             |
| size                | 1                                    |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | creating                             |
| type                | None                                 |
| updated_at          | None                                 |
| user_id             | aa347b98f1734f66b1331784241fa15a     |
+---------------------+--------------------------------------+
To confirm this volume has been created, you can run the following (Listing 8.1).
Listing 8.1
$ openstack volume list
+--------------------------------------+--------------+-----------+------+----------------------------------+
| ID                                   | Display Name | Status    | Size | Attached to                      |
+--------------------------------------+--------------+-----------+------+----------------------------------+
| 53372cc5-087a-4342-a67b-397477e1a4f2 | seaotter     | available | 1    |                                  |
| 54447e7a-d39d-4186-a5b4-3a5fc1e773aa | walrus       | in-use    | 1    | Attached to giraffe on /dev/vdb  |
+--------------------------------------+--------------+-----------+------+----------------------------------+
As you can see, both the walrus and the seaotter volumes are listed here since they were both created in this chapter. The walrus volume is showing that it is attached to the giraffe instance.
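Had you instead wanted to pre-populate a volume with an image, the same create command accepts an --image option. A sketch, assuming the CirrOS image from earlier chapters is registered in Glance under the name cirros (the volume name pelican is purely illustrative):

$ openstack volume create --size 1 --availability-zone nova --image cirros pelican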
If you need to make changes to a volume, use the openstack volume set command. Running that command alone will give you help output to assist you with making changes to all the parameters before the volume is attached.
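For example, giving the seaotter volume a description would look like this (the description text is purely illustrative):

$ openstack volume set --description "Photo storage for the test user" seaotter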
Attaching a Volume
As mentioned earlier, you can’t do much with a volume if it’s not attached to an instance. You’ll now want to add your new volume to an instance. First you’ll want to see what instances are available:
$ openstack server list
+--------------------------------------+---------+--------+------------------+
| ID                                   | Name    | Status | Networks         |
+--------------------------------------+---------+--------+------------------+
| 823f2d7a-f186-4453-874d-4021ff2b22e4 | giraffe | ACTIVE | private=10.0.0.3 |
+--------------------------------------+---------+--------+------------------+
With confirmation that you have an instance running, you can now run the command to attach the seaotter volume to the giraffe instance:
$ openstack server add volume giraffe seaotter
This command will have no output, but the next time you run volume list you will see that the volume has been attached (Listing 8.2).
Listing 8.2
$ openstack volume list
+--------------------------------------+--------------+--------+------+----------------------------------+
| ID                                   | Display Name | Status | Size | Attached to                      |
+--------------------------------------+--------------+--------+------+----------------------------------+
| 53372cc5-087a-4342-a67b-397477e1a4f2 | seaotter     | in-use | 1    | Attached to giraffe on /dev/vdc  |
| 54447e7a-d39d-4186-a5b4-3a5fc1e773aa | walrus       | in-use | 1    | Attached to giraffe on /dev/vdb  |
+--------------------------------------+--------------+--------+------+----------------------------------+
Since the giraffe instance already had the walrus volume attached as /dev/vdb, you will notice that it has attached the seaotter volume as /dev/vdc.
Congratulations, you have successfully added a Cinder block storage volume to an instance on the command line!
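One note on device names: openstack server add volume also accepts a --device option (for example, --device /dev/vdc) if you want to request a specific name, though, as with the dashboard field, the hypervisor may not honor it. And if you ever need to reverse the operation, first unmount any filesystems on the volume inside the instance, then detach it with the companion command:

$ openstack server remove volume giraffe seaotter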
Using the Volume
Whether you used the OpenStack dashboard or the command line to create and attach your volume, we now want to confirm the volume was attached and then actually use it with our instance. It may be easiest to use the dashboard console to run the following commands, but if you followed the instructions in an earlier chapter to set up your CirrOS instance for SSH (Secure Shell), feel free to use SSH instead.
Assuming you’re using the dashboard, navigate to the Instances screen in the OpenStack dashboard and in the drop down menu to the right of the instance you attached it to, select Console to bring you to a console for your instance. Once you’re on the console page, if you’re unable to type in the console, click Click here to show only console and you will be brought to a page that only has the console.
Follow the instructions to log into the instance, and run the following command:
$ dmesg
There will likely be a lot of output, but the last line should be something like the following:
[ 648.143431] vdb: unknown partition table
This vdb device is your new block storage (Cinder) volume! At this point it has no partition table or file system, so these will need to be set up. Assuming the device is vdb, as in this example, partitioning can be done with fdisk:
$ sudo fdisk /dev/vdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0xcf80b0a5.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-2097151, default 2048): 2048
Last sector, +sectors or +size{K,M,G} (2048-2097151, default 2097151): 2097151

Command (m for help): p

Disk /dev/vdb: 1073 MB, 1073741824 bytes
16 heads, 63 sectors/track, 2080 cylinders, total 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xcf80b0a5

   Device Boot      Start         End      Blocks   Id  System
/dev/vdb1            2048     2097151     1047552   83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
Now you’ll want to create a basic file system on the new disk. It’s only a 1GB volume, and this is a demonstration, so we’ll use the ext2 file system:
$ sudo mkfs.ext2 /dev/vdb1
mke2fs 1.42.2 (27-Mar-2012)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
65536 inodes, 261888 blocks
13094 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=268435456
8 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376

Allocating group tables: done
Writing inode tables: done
Writing superblocks and filesystem accounting information: done
The last step is creating a mount point and mounting your new volume. Let’s say you want to use this volume for photos, so you’ll create a directory for that. Then we’ll check to confirm it’s the size we expect:
$ mkdir photos
$ sudo mount /dev/vdb1 photos/
$ df -h /dev/vdb1
Filesystem                Size      Used Available Use% Mounted on
/dev/vdb1              1006.9M      1.3M    954.5M   0% /home/cirros/photos
Congratulations! A 1GB volume from the block storage service Cinder is now mounted on your system. Note that this was mounted using the root user, so you will need to either change the ownership to your user or use root to place files on it.
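Also note that the mount will not persist across an instance reboot. CirrOS won’t retain such changes, but on a full Linux image you could make the mount permanent with an /etc/fstab entry along these lines (a sketch, assuming the same device and mount point):

/dev/vdb1  /home/cirros/photos  ext2  defaults,nofail  0  2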
Automation
As we explained in our chapters about private and public clouds, you aren’t limited to interacting with OpenStack through the OpenStack dashboard or OpenStack Client. You may instead interact with the APIs through various SDKs, which you can learn about at http://developer.openstack.org/.