This article is just a collection of my personal notes while I work through using OpenStack in my new role.
OpenStack components can be divided into control, network, and compute.
- Control runs API services, web interfaces, databases, and message bus.
- Network runs service agents for networking.
- Compute is the virtualization hypervisor.
All components use a database and/or a message bus.
- Databases can be MySQL, MariaDB, or PostgreSQL.
- Message buses can be RabbitMQ, Qpid, or ActiveMQ.
Everything in OpenStack must exist in a tenant/project.
Tenants/projects are simply groupings of objects; users, instances, and networks are all examples of objects.
In order to launch an instance, four components are essential: Keystone, Glance, Neutron, and Nova.
Keystone manages tenants/projects, users, roles, and catalogs services and endpoints for all components running in a cluster.
- Users must be granted a role in a tenant/project.
- A user cannot log in to a cluster unless they are a member of a tenant/project.
- Even the users the OpenStack components use to communicate with each other have to be members of a tenant to be able to authenticate.
Clients authenticate to Keystone with a username & password to request a token, and then present that token on future requests.
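A minimal sketch of that setup with the `openstack` CLI. The project, user, and role names here are placeholders, and the commands assume admin credentials are already loaded into the environment (the exact role name, e.g. `member` vs `_member_`, varies by release):

```shell
# Create a tenant/project, a user, and grant the user a role in the project.
openstack project create demo
openstack user create --password secret alice
openstack role add --project demo --user alice member

# Authenticate with the current credentials and receive a token.
openstack token issue
```

Once `alice` holds a role in `demo`, she can log in and every object she creates lives inside that project.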
Glance is the image management component.
Before an instance is started on a compute node, the Glance image is copied to that node and cached; the copy backs the new instance's ephemeral disk.
Glance images are “sealed-disk” images that have had things like SSH host keys and MAC addresses removed.
- This is done so that they can be used repeatedly without interfering with each other.
- Missing host-specific information is generated at boot time via a post-boot configuration facility named cloud-init.
Cloud-init is a script that runs after boot and connects back to a metadata service.
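The metadata service lives at a well-known link-local address; from inside a running instance, cloud-init (or you, by hand) can query it like this:

```shell
# OpenStack-format metadata for this instance (name, keys, etc.):
curl http://169.254.169.254/openstack/latest/meta_data.json

# EC2-compatible path, also served by the metadata service:
curl http://169.254.169.254/latest/meta-data/
```

These URLs only resolve from inside an instance on a Neutron network; they are how the sealed image recovers its host-specific information at first boot.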
Glance images can be found online by searching for a distribution's name combined with “cloud image”.
To save time, custom images can be created with software “baked” in. For example, if a LAMP or MEAN stack is required, that software can be pre-installed in the cloud image to reduce setup time after the instances are deployed.
Custom images can be made by configuring a virtual machine manually. Host specific information is removed, cloud-init is included, and an image is created.
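Uploading the finished image to Glance is one command; the file and image names below are placeholders, and qcow2 is the most common cloud-image disk format:

```shell
# Register a sealed-disk image with Glance so Nova can boot from it.
openstack image create \
  --disk-format qcow2 --container-format bare \
  --file my-sealed-disk.qcow2 \
  my-custom-image
```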
Alternative tools also exist for creating custom cloud images.
Neutron is the network management component.
Neutron provides an API frontend that manages the SDN (Software Defined Networking) infrastructure.
Using Neutron allows each tenant/project to have its own virtual isolated network.
Each isolated network can be connected to a virtual router to create routes between them.
A virtual router can have an external gateway connected to it; each instance with a floating IP can then reach a larger network or the internet.
Traffic sent to a floating IP address is routed through Neutron to an active/launched instance.
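The network, router, and floating-IP flow above can be sketched with the CLI. All names are placeholders, and `public` must match whatever the deployment calls its external network:

```shell
# A tenant network with its own subnet.
openstack network create net1
openstack subnet create --network net1 --subnet-range 192.168.10.0/24 subnet1

# A virtual router connecting the tenant subnet to the external network.
openstack router create router1
openstack router add subnet router1 subnet1
openstack router set --external-gateway public router1

# Allocate a floating IP and associate it with an instance.
openstack floating ip create public
openstack server add floating ip vm1 203.0.113.10
```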
The capabilities of Neutron can be referred to as NaaS (Network as a Service).
A hard definition for NaaS would be: “Providing networks and network resources on demand via software.”
Open vSwitch (a software-based virtual switch) is a commonly used virtualized networking infrastructure.
Physically managed switches can also be used in place of virtual ones to handle the virtual networks managed by Neutron.
Nova is the instance management component.
An SSH key pair and a security group must be configured before launching a Nova instance.
A security group is a firewall at the cloud infrastructure layer. By default it blocks incoming connections from hosts outside the group; rules must be added to allow traffic in (for example, SSH on port 22).
Once Keystone has tenants & roles, Glance has a cloud image, a Neutron network is created, an SSH key pair has been made, and a security group is established, a new Nova instance can be started.
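Those prerequisites and the launch itself look roughly like this; every name here (`mykey`, `ssh-sg`, `m1.small`, `cirros`, `net1`, `vm1`) is a placeholder:

```shell
# Register an SSH key pair from an existing public key.
openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey

# A security group allowing inbound SSH.
openstack security group create ssh-sg
openstack security group rule create --protocol tcp --dst-port 22 ssh-sg

# Launch the instance with all the pieces assembled.
openstack server create --flavor m1.small --image cirros \
  --network net1 --key-name mykey --security-group ssh-sg vm1
```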
The following are the steps taken when a new Nova instance is being launched:
- Identifiers for the requested resources (image, flavor, network, key pair, security group) are provided to Nova.
- Nova reviews what resources are being used on each hypervisor.
- Nova schedules a new instance to spawn on a compute node.
- The compute node gets a Glance image, creates all necessary virtual network devices, and boots the instance.
- During boot, cloud-init is run and connects to the metadata service.
- The metadata service provides the SSH public key needed for SSH logins to the instance.
- Any post-boot configurations are run.
Cinder is the block storage management component.
Volumes can be created and attached to instances.
Attached block devices can be partitioned, formatted with a filesystem, and mounted, just like a physical disk.
Cinder handles snapshots as well.
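A typical volume lifecycle, with placeholder names and sizes; the in-guest device path (`/dev/vdb`) depends on the hypervisor:

```shell
# Create a 10 GB volume, attach it to an instance, and snapshot it.
openstack volume create --size 10 vol1
openstack server add volume vm1 vol1
openstack volume snapshot create --volume vol1 vol1-snap

# Inside the guest, the new device is then formatted and mounted:
#   mkfs.ext4 /dev/vdb && mount /dev/vdb /mnt
```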
Several storage backends can be used with Cinder to store volumes and snapshots:
- LVM (Logical Volume Manager)
Swift is the object storage management component.
Object storage is a simple content-only storage system — files are stored without the metadata that a block filesystem uses.
Two layers are used as part of Swift’s deployment:
- Proxy
- Storage Engine
The proxy is an API layer — it communicates with the storage engine on the user’s behalf.
The default storage engine is the Swift storage engine, but GlusterFS and Ceph can also be used.
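Basic object operations go through the proxy via the CLI; container and file names below are placeholders:

```shell
# Create a container, upload a file into it as an object,
# list the container, and download the object back.
openstack container create notes
openstack object create notes todo.txt
openstack object list notes
openstack object save notes todo.txt
```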
Ceilometer is the telemetry component.
It collects resource measurements and is able to monitor the cluster.
It was originally designed for billing users by metering the system, but it evolved into a general purpose telemetry system.
An individual measurement taken from a meter is called a “sample”.
Samples are recorded on a regular basis, and a collection of samples is called a “statistic”.
Statistics can give insights into how resources are being used on an OpenStack deployment.
Alarms can be raised when sample values cross defined thresholds.
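With the legacy `python-ceilometerclient` (newer releases moved alarming into a separate service, Aodh), samples and statistics can be inspected like this; `cpu_util` is one commonly available meter:

```shell
# Meters available in the deployment.
ceilometer meter-list

# Individual samples for one meter.
ceilometer sample-list --meter cpu_util

# Statistics aggregated over hourly periods.
ceilometer statistics --meter cpu_util --period 3600
```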
Heat is the orchestration component.
Orchestration is the process of launching multiple instances that are supposed to work together.
Templates are used to define what will be launched during orchestration.
Heat is compatible with AWS CloudFormation templates and implements additional features beyond their template language.
When a template is launched it creates a collection of virtual resources (instances, networks, storage devices, etc.) — referred to as a “stack”.
It is important to note that Heat is not CM (configuration management). It is used for orchestration.
Heat can execute simple post-boot configuration, such as invoking an actual CM tool to do more complex configuration.
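A minimal HOT template and its launch, sketched as a shell session; the flavor, image, and network values are placeholders that must exist in the cluster:

```shell
# Write a minimal Heat Orchestration Template defining one server.
cat > stack.yaml <<'EOF'
heat_template_version: 2015-10-15
resources:
  web_server:
    type: OS::Nova::Server
    properties:
      flavor: m1.small
      image: cirros
      networks:
        - network: net1
EOF

# Launch the stack from the template.
openstack stack create -t stack.yaml mystack
```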
RDO using TripleO
One of the easiest ways to get started using OpenStack is to use RDO (http://rdoproject.org/).
TripleO stands for “OpenStack on OpenStack”; it is intended to deploy an all-in-one OpenStack installation, which is then used as a provisioning tool for a multi-node OpenStack deployment.
The two OpenStacks are called the undercloud and the overcloud.
The undercloud is the all-in-one OpenStack installation used for bare-metal management.
The overcloud is the target deployment of OpenStack that is meant to be provided to end users.
The undercloud can take a cluster of nodes provided to it and deploy the overcloud on them.
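The TripleO workflow, at its broadest strokes, is driven from the undercloud node; exact commands and flags vary by release, and `instackenv.json` is the conventional (here assumed) inventory file describing the bare-metal nodes:

```shell
# Build the all-in-one undercloud on the current machine.
openstack undercloud install

# Register the bare-metal nodes the overcloud will be deployed onto.
openstack overcloud node import instackenv.json

# Provision the overcloud across those nodes.
openstack overcloud deploy --templates
```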