Getting Started with Multi-Node OpenStack RDO Havana + Gluster Backend + Neutron VLAN

OpenStack is probably the largest and fastest-growing open-source project out there.

Unfortunately, that means fast changes, new features and shifting install methods. Red Hat's RDO simplifies this.

I'm not going to repeat the RDO quickstart guide because there's no point; if you want an all-in-one install, head over to

My setup was slightly different: I managed to bring together OpenStack with Gluster-backed Cinder and Nova, though I still haven't been able to get migration working.

Both of my hosts were running CentOS 6.4.


Node One (hv01):

eth0 (management/gluster):
eth1 (VM Data): n/a
eth2 (Public Network):

Node Two (hv02):

eth0 (management/gluster):
eth1 (VM Data): n/a

yum install -y  
yum -y install wget screen nano

cd /etc/yum.repos.d/  
yum install -y openstack-packstack python-netaddr

# Add entries for hv01 and hv02 on both nodes
nano /etc/hosts
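The entries themselves might look like this (the 172.16.0.x management addresses are illustrative assumptions; substitute your own):

    172.16.0.1   hv01
    172.16.0.2   hv02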

yum install -y glusterfs glusterfs-fuse glusterfs-server  
mkfs.xfs -f -i size=512 /dev/mapper/vg_gluster-lv_gluster1  
echo "/dev/mapper/vg_gluster-lv_gluster1 /data1  xfs     defaults        1 2" >> /etc/fstab  
mkdir -p /data1/{nova,cinder,glance}  
mount -a

yum install -y gcc kernel-headers kernel-devel  
yum install -y keepalived

cat /dev/null > /etc/keepalived/keepalived.conf  
nano /etc/keepalived/keepalived.conf  

Node One Config:

vrrp_instance VI_1 {  
interface eth0  
state MASTER  
virtual_router_id 10  
priority 100   # master 100  
virtual_ipaddress {  
}  
}

Node Two Config:

vrrp_instance VI_1 {  
interface eth0  
state BACKUP  
virtual_router_id 10  
priority 99 # master 100  
virtual_ipaddress {  
}  
}

service keepalived start  
chkconfig keepalived on  
chkconfig glusterd on  
service glusterd start
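For reference, a complete node-one keepalived.conf might look like the following once the elided pieces are filled in. The virtual IP (172.16.0.5, on the 172.16.0.x management network) is an assumption; substitute your own:

    vrrp_instance VI_1 {
        interface eth0
        state MASTER                 # BACKUP on node two
        virtual_router_id 10
        priority 100                 # 99 on node two
        virtual_ipaddress {
            172.16.0.5/24            # assumed management/Gluster VIP
        }
    }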

nano /etc/sysconfig/iptables

Insert the following under the SSH rule (-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT):

-A INPUT -p tcp -m multiport --dports 24007:24047 -j ACCEPT
-A INPUT -p tcp --dport 111 -j ACCEPT
-A INPUT -p udp --dport 111 -j ACCEPT
-A INPUT -p tcp -m multiport --dports 38465:38485 -j ACCEPT

Comment out these two rules:  
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
-A INPUT -j REJECT --reject-with icmp-host-prohibited

service iptables restart

gluster peer probe hv02   # run from hv01  
gluster peer status

gluster volume create NOVA replica 2  
gluster volume create CINDER replica 2  
gluster volume create GLANCE replica 2  
gluster volume start NOVA  
gluster volume start CINDER  
gluster volume start GLANCE  
gluster volume set NOVA auth.allow 172.16.0.*  
gluster volume set CINDER auth.allow 172.16.0.*  
gluster volume set GLANCE auth.allow 172.16.0.*
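The volume create commands above are missing their brick arguments. Assuming the peers are hv01 and hv02 (as set up in /etc/hosts) and the bricks live under the /data1 directories created earlier, the full commands would be:

    gluster volume create NOVA replica 2 hv01:/data1/nova hv02:/data1/nova
    gluster volume create CINDER replica 2 hv01:/data1/cinder hv02:/data1/cinder
    gluster volume create GLANCE replica 2 hv01:/data1/glance hv02:/data1/glance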

# We run yum update now to get the new OpenStack kernel, which has the namespaces patch. (It allows you to run multiple internal networks in the same subnet.)
yum -y update

# Unfortunately there appear to be some undocumented issues with SELinux. It's easier to simply disable it than to sit wondering why things aren't working.

# Set SELINUX=disabled
nano /etc/selinux/config  

# Make Sure ONBOOT=yes and BOOTPROTO=none
nano /etc/sysconfig/network-scripts/ifcfg-eth1  


Now it's time to install OpenStack. Grab my packstack answer file from

The answer file, as of 24th November 2013, should work as-is, assuming you update the IP addresses.

Note: set CONFIG_NOVA_VNCPROXY_HOST to the public IP attached to eth2 on the controller node: CONFIG_NOVA_VNCPROXY_HOST=

Run packstack with the answer file and hope for the best:

packstack --answer-file=multi-node.packstack  

If it fails because of NTP, just re-run it. I never found out why that was happening, but it may be that the NTP servers I was using were playing up.

Now For the Gluster Part

# Mount Points
mkdir -p /mnt/gluster/{glance,nova} # On Controller  
mkdir -p /mnt/gluster/nova # On Compute

mount -t glusterfs /mnt/gluster/nova/  
mount -t glusterfs /mnt/gluster/glance/
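The volume source got stripped from the mount commands above. Assuming you mount through the keepalived VIP on the management network (172.16.0.5 here, purely an assumption; any Gluster peer would also work):

    mount -t glusterfs 172.16.0.5:/NOVA /mnt/gluster/nova/
    mount -t glusterfs 172.16.0.5:/GLANCE /mnt/gluster/glance/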

nano /etc/glance/glance-api.conf  
    filesystem_store_datadir = /mnt/gluster/glance/images

mkdir -p /mnt/gluster/glance/images  
chown -R glance:glance /mnt/gluster/glance/  
service openstack-glance-api restart

# On all compute nodes (including the controller, if you run compute on it too), add:
nano /etc/nova/nova.conf  
    instances_path = /mnt/gluster/nova/instance

mkdir /mnt/gluster/nova/instance/  
chown -R nova:nova /mnt/gluster/nova/instance/  
service openstack-nova-compute restart

# On Controller (edit)
# I do this so that the VMs will get a static route added to their lease for access to the metadata service.
nano /etc/neutron/dhcp_agent.ini  
enable_isolated_metadata = True

# Add to /etc/fstab so the mounts persist across reboots (the Glance entry only on the controller)
nano /etc/fstab  

    <gluster-host>:/GLANCE /mnt/gluster/glance glusterfs defaults,_netdev 0 0
    <gluster-host>:/NOVA   /mnt/gluster/nova   glusterfs defaults,_netdev 0 0

I've always had trust issues with third-party images, ever since one I used a few years ago turned out to contain a little trojan spy. So I create my own.

# Create Image
cd ~  
yum install -y libvirt python-virtinst qemu-kvm  
wget -O centos-6-x86_64.ks  
virt-install \  
    --name centos-6-x86_64 \
    --ram 1024 \
    --cpu host \
    --vcpus 1 \
    --nographics \
    --os-type=linux \
    --os-variant=rhel6 \
    --location= \
    --initrd-inject=centos-6-x86_64.ks \
    --extra-args="ks=file:/centos-6-x86_64.ks text console=tty0 utf8 console=ttyS0,115200" \
    --disk path=/var/lib/libvirt/images/centos-6-x86_64.img,size=2,bus=virtio \
    --force

yum install -y libguestfs-tools  
virt-sysprep --no-selinux-relabel -a /var/lib/libvirt/images/centos-6-x86_64.img  
virt-sparsify --convert qcow2 --compress /var/lib/libvirt/images/centos-6-x86_64.img centos-6-x86_64.qcow2

source ~/keystonerc_admin

# Upload it to Glance
glance image-create --name "CentOS 6 x86_64" \  
    --disk-format qcow2 --container-format bare \
    --is-public true --file centos-6-x86_64.qcow2
# Create br-ex interface for the public network

ovs-vsctl add-port br-ex eth2

nano /etc/sysconfig/network-scripts/ifcfg-br-ex  

nano /etc/sysconfig/network-scripts/ifcfg-eth2  
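The contents of these two files weren't shown. A typical Havana-era layout moves eth2's public address onto the bridge; the address below (203.0.113.10) is a placeholder assumption:

    # /etc/sysconfig/network-scripts/ifcfg-br-ex
    DEVICE=br-ex
    DEVICETYPE=ovs
    TYPE=OVSBridge
    ONBOOT=yes
    BOOTPROTO=static
    IPADDR=203.0.113.10      # assumed public IP; use your own
    NETMASK=255.255.255.0

    # /etc/sysconfig/network-scripts/ifcfg-eth2
    DEVICE=eth2
    DEVICETYPE=ovs
    TYPE=OVSPort
    OVS_BRIDGE=br-ex
    ONBOOT=yes
    BOOTPROTO=none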

Load Balancer as a Service

LBaaS is a really amazing concept, but it's new and may not work.

# Enable LBaaS
Edit /etc/neutron/neutron.conf and add the following in the [DEFAULT] section:

service_plugins = neutron.services.loadbalancer.plugin.LoadBalancerPlugin

Then edit the /etc/openstack-dashboard/local_settings file, search for enable_lb, and set it to True:

    'enable_lb': True

yum install -y haproxy  

nano /etc/neutron/lbaas_agent.ini  

    interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver

Comment out the line in the service_providers section:

service neutron-lbaas-agent start  
chkconfig neutron-lbaas-agent on  
service neutron-server restart  
service httpd restart  
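With the agent running, you can smoke-test LBaaS from the CLI. These are the Havana-era neutron lb-* commands; the pool name, member address, and subnet ID below are hypothetical:

    neutron lb-pool-create --name web-pool --protocol HTTP \
        --lb-method ROUND_ROBIN --subnet-id <internal-subnet-id>
    neutron lb-member-create --address 10.0.0.11 --protocol-port 80 web-pool
    neutron lb-vip-create --name web-vip --protocol HTTP --protocol-port 80 \
        --subnet-id <internal-subnet-id> web-pool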

Now you're all set! Log in to Horizon at your public URL, using the login details from the file saved at ~/keystonerc_admin.

To get going straight away, here's a quick step-by-step:

  • Create a Network in the Admin Tab
    • Create it as a Public Shared Network
    • Give it the subnet
    • Allocate it your DHCP Range
  • Head over to the Project Tab
  • Click on Network Topology
  • Create a Router
    • Click on the list of Routers and select "Set Gateway". Use the public network you just created.
  • Create an Internal Network
  • Add a port to your new router to connect to the internal network.
  • Check your new Network Topology's beautiful diagram and smile :)
  • Create a new image called CirrOS. This is a great tiny image for testing. Here's the link you can just chuck in
  • Head over to Access & Security and add two rules in the default ruleset (it's essentially iptables):
    • Custom ICMP, -1, -1
    • Custom TCP, Port 22
  • While you're there also add your public key (~/.ssh/
  • Launch your new CirrOS image and attach it to the network.
  • Check the console, and ensure your network and DHCP is working.
  • Launch a CentOS image, attach a floating IP address to it and party!!!
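If you prefer the CLI, the steps above can be sketched with the Havana neutron client. The names and CIDRs here are assumptions; adjust to your own addressing:

    source ~/keystonerc_admin
    neutron net-create public --shared --router:external=True
    neutron subnet-create public 203.0.113.0/24 --name public-subnet \
        --disable-dhcp --allocation-pool start=203.0.113.100,end=203.0.113.150
    neutron router-create router1
    neutron router-gateway-set router1 public
    neutron net-create internal
    neutron subnet-create internal 10.0.0.0/24 --name internal-subnet
    neutron router-interface-add router1 internal-subnet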

Kudos again to Red Hat for the great work putting this together; it's a far easier setup than it was last year.

Good Luck!

Basic Troubleshooting

I found the best approach was to boot multiple CirrOS instances and run inter-VM networking tests.

  • First check your VMs can connect on the same network.
  • Check that your hypervisor can ping the floating IP once it is assigned. Generally, if this fails, it's because the VM didn't get its internal IP address.
  • If you're having problems pinging your VM's floating IP address externally, a good tip is to run tcpdump -n -i eth0 icmp on the VM, the hypervisor, and any other firewall in between, and check where the packets are getting dropped.

If you've got a Mikrotik switch, or are looking at purchasing one, check out my mini post on my Mikrotik CRS-125-24G-1S-RM with OpenStack Neutron.
