Getting Started with Multi-Node OpenStack RDO Havana + Gluster Backend + Neutron VLAN

OpenStack is probably the largest and fastest-growing open source project out there.

Unfortunately, that means fast changes, new features, and ever-shifting install methods. Red Hat's RDO simplifies this.

I'm not going to repeat the RDO quickstart guide because there's no point; if you want an all-in-one install, head over to http://openstack.redhat.com/Quickstart

My setup was slightly different: I managed to bring together OpenStack with Gluster-backed Cinder and Nova, though I still haven't been able to get migration working.

Both of my hosts were running CentOS 6.4.

Controller

eth0 (management/gluster): 172.16.0.11
eth1 (VM Data): n/a
eth2 (Public Network): 192.168.0.11

Compute

eth0 (management/gluster): 172.16.0.12
eth1 (VM Data): n/a
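
For reference, the management interface config on each node looks something like this. This file isn't part of the walkthrough below, so treat it as an assumed baseline and adjust the IP per host (e.g. 172.16.0.12 on the compute node):

nano /etc/sysconfig/network-scripts/ifcfg-eth0  
DEVICE=eth0  
TYPE=Ethernet  
ONBOOT=yes  
NM_CONTROLLED="no"  
BOOTPROTO=none  
IPADDR=172.16.0.11  
PREFIX=24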

yum install -y http://rdo.fedorapeople.org/openstack-havana/rdo-release-havana.rpm  
yum -y install wget screen nano

cd /etc/yum.repos.d/  
wget http://download.gluster.org/pub/gluster/glusterfs/LATEST/RHEL/glusterfs-epel.repo  
yum install -y openstack-packstack python-netaddr

nano /etc/hosts  
172.16.0.11 hv01.lab.example.net hv01  
172.16.0.12 hv02.lab.example.net hv02

yum install -y glusterfs glusterfs-fuse glusterfs-server  
mkfs.xfs -f -i size=512 /dev/mapper/vg_gluster-lv_gluster1  
echo "/dev/mapper/vg_gluster-lv_gluster1 /data1  xfs     defaults        1 2" >> /etc/fstab  
mkdir -p /data1/{nova,cinder,glance}  
mount -a
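
Worth confirming the brick filesystem mounted cleanly before layering Gluster on top; standard tools, nothing RDO-specific:

df -h /data1  
xfs_info /data1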


yum install -y gcc kernel-headers kernel-devel  
yum install -y keepalived

cat /dev/null > /etc/keepalived/keepalived.conf  
nano /etc/keepalived/keepalived.conf  

Node One Config:

vrrp_instance VI_1 {  
    interface eth0  
    state MASTER  
    virtual_router_id 10  
    priority 100   # master 100  
    virtual_ipaddress {  
        172.16.0.5  
    }  
}

Node Two Config:

vrrp_instance VI_1 {  
    interface eth0  
    state BACKUP  
    virtual_router_id 10  
    priority 99    # master 100  
    virtual_ipaddress {  
        172.16.0.5  
    }  
}

service keepalived start  
chkconfig keepalived on  
chkconfig glusterd on  
service glusterd start
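
Once keepalived is running on both nodes, the VIP should sit on the master's eth0. A quick sanity check (the VIP only shows with ip addr, not ifconfig):

ip addr show eth0 | grep 172.16.0.5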

nano /etc/sysconfig/iptables

Insert the following under the SSH rule (-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT):

-A INPUT -p tcp -m multiport --dports 24007:24047 -j ACCEPT
-A INPUT -p tcp --dport 111 -j ACCEPT
-A INPUT -p udp --dport 111 -j ACCEPT
-A INPUT -p tcp -m multiport --dports 38465:38485 -j ACCEPT

Comment out the following:  
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
-A INPUT -j REJECT --reject-with icmp-host-prohibited

service iptables restart

# Run from one node, probing the other's hostname (here, from hv02)  
gluster peer probe hv01.lab.example.net  
gluster peer status

gluster volume create NOVA replica 2 hv01.lab.example.net:/data1/nova hv02.lab.example.net:/data1/nova  
gluster volume create CINDER replica 2 hv01.lab.example.net:/data1/cinder hv02.lab.example.net:/data1/cinder  
gluster volume create GLANCE replica 2 hv01.lab.example.net:/data1/glance hv02.lab.example.net:/data1/glance  
gluster volume start NOVA  
gluster volume start CINDER  
gluster volume start GLANCE  
gluster volume set NOVA auth.allow 172.16.0.*  
gluster volume set CINDER auth.allow 172.16.0.*  
gluster volume set GLANCE auth.allow 172.16.0.*
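
Before moving on, check the volumes are started and both bricks are online:

gluster volume info  
gluster volume status NOVA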

# We run yum update now to pull in the newer RDO kernel with the network namespaces patch (it lets you run multiple internal networks in the same subnet)
yum -y update

# Unfortunately there appear to be some undocumented issues with SELinux. It's easier to simply set it to permissive rather than wondering why things aren't working.

nano /etc/selinux/config  
    SELINUX=permissive


# Make Sure ONBOOT=yes and BOOTPROTO=none
nano /etc/sysconfig/network-scripts/ifcfg-eth1  
DEVICE=eth1  
TYPE=Ethernet  
ONBOOT=yes  
NM_CONTROLLED="no"  
BOOTPROTO=none

reboot  

Now it's time to install OpenStack. Grab my packstack answer file from https://gist.github.com/andrewklau/7622535/raw/7dac55bbecc200cfb4bf040b6189f36897fc4efb/multi-node.packstack

The answer file as of 24th November 2013 should work as-is, assuming you update the IP addresses.

Note: CONFIG_NOVA_VNCPROXY_HOST=192.168.0.11 (this is the public IP on eth2 of the controller node)

Run packstack with the answer file and hope for the best:

screen  
packstack --answer-file=multi-node.packstack  

If it fails because of NTP, just re-run it. I never worked out why that was happening, but it may have been the NTP servers I was using playing up.
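
Once packstack finishes, a quick way to see whether everything came up is openstack-status, assuming the openstack-utils package is installed (packstack normally pulls it in):

openstack-status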

Now For the Gluster Part

# Mount Points
mkdir -p /mnt/gluster/{glance,nova} # On Controller  
mkdir -p /mnt/gluster/nova # On Compute

mount -t glusterfs 172.16.0.5:/NOVA /mnt/gluster/nova/  
mount -t glusterfs 172.16.0.5:/GLANCE /mnt/gluster/glance/

nano /etc/glance/glance-api.conf  
    filesystem_store_datadir = /mnt/gluster/glance/images

mkdir -p /mnt/gluster/glance/images  
chown -R glance:glance /mnt/gluster/glance/  
service openstack-glance-api restart

# On all compute nodes (including the controller if you run compute on it too), add:
nano /etc/nova/nova.conf  
    instances_path = /mnt/gluster/nova/instance

mkdir /mnt/gluster/nova/instance/  
chown -R nova:nova /mnt/gluster/nova/instance/  
service openstack-nova-compute restart
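
After the restart, make sure the compute service is checking in; a :-) in the State column means it's alive:

nova-manage service list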

# On Controller (edit)
# I do this so that the VMs will get a static route added to their lease for access to the metadata service.
nano /etc/neutron/dhcp_agent.ini  
enable_isolated_metadata = True
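
Restart the DHCP agent so the change takes effect:

service neutron-dhcp-agent restart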

# Persist the mounts in /etc/fstab (the GLANCE line only on the controller; NOVA on every node running compute)  
nano /etc/fstab  
172.16.0.5:/GLANCE /mnt/gluster/glance glusterfs defaults,_netdev 0 0  
172.16.0.5:/NOVA /mnt/gluster/nova glusterfs defaults,_netdev 0 0  

I've always had trust issues with third-party images ever since one I used a few years ago turned out to have a little trojan spy inside, so I create my own.

# Create Image
cd ~  
yum install -y libvirt python-virtinst qemu-kvm  
wget -O centos-6-x86_64.ks https://gist.github.com/andrewklau/7622535/raw/7b4939987d8c12b9a8024df868e7f8f47b766ee3/centos-6-4-x86_64.ks  
virt-install \  
    --name centos-6-x86_64 \
    --ram 1024 \
    --cpu host \
    --vcpus 1 \
    --nographics \
    --os-type=linux \
    --os-variant=rhel6 \
    --location=http://mirror.centos.org/centos/6/os/x86_64 \
    --initrd-inject=centos-6-x86_64.ks \
    --extra-args="ks=file:/centos-6-x86_64.ks text console=tty0 utf8 console=ttyS0,115200" \
    --disk path=/var/lib/libvirt/images/centos-6-x86_64.img,size=2,bus=virtio \
    --force \
    --noreboot

yum install -y libguestfs-tools  
virt-sysprep --no-selinux-relabel -a /var/lib/libvirt/images/centos-6-x86_64.img  
virt-sparsify --convert qcow2 --compress /var/lib/libvirt/images/centos-6-x86_64.img centos-6-x86_64.qcow2
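
A quick check that the sparsified image came out as expected (qemu-img ships with qemu-kvm):

qemu-img info centos-6-x86_64.qcow2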

source ~/keystonerc_admin

# Upload it to Glance
glance image-create --name "CentOS 6 x86_64" \  
    --disk-format qcow2 --container-format bare \
    --is-public true --file centos-6-x86_64.qcow2
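
Confirm the upload registered:

glance image-list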
# Create br-ex interface for the public network

ovs-vsctl add-port br-ex eth2

nano /etc/sysconfig/network-scripts/ifcfg-br-ex  
DEVICE=br-ex  
DEVICETYPE=ovs  
TYPE=OVSBridge  
BOOTPROTO=static  
IPADDR=192.168.0.11  
PREFIX=24  
GATEWAY=192.168.0.1  
DNS1=8.8.8.8  
ONBOOT=yes

nano /etc/sysconfig/network-scripts/ifcfg-eth2  
DEVICE=eth2  
NM_CONTROLLED="no"  
TYPE=OVSPort  
DEVICETYPE=ovs  
OVS_BRIDGE=br-ex  
ONBOOT=yes  
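
Bounce the network service so the bridge config takes effect, then confirm eth2 is attached to br-ex (do this from the management interface, since eth2 will drop briefly):

service network restart  
ovs-vsctl show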

Load Balancer as a Service

LBaaS is a really amazing concept, but it's new and may not work.

# Enable LBaaS
Edit /etc/neutron/neutron.conf and add the following in the [DEFAULT] section:

[DEFAULT]
service_plugins = neutron.services.loadbalancer.plugin.LoadBalancerPlugin

Then edit the /etc/openstack-dashboard/local_settings file, search for enable_lb, and set it to True:

OPENSTACK_QUANTUM_NETWORK = {  
    'enable_lb': True
}

yum install haproxy 

nano /etc/neutron/lbaas_agent.ini 

device_driver=neutron.services.loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver  
interface_driver=neutron.agent.linux.interface.OVSInterfaceDriver  
user_group=haproxy

Still in lbaas_agent.ini, comment out this line in the [service_providers] section:  
service_provider=LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default

service neutron-lbaas-agent start  
chkconfig neutron-lbaas-agent on  
service neutron-server restart  
service httpd restart  
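
Check the LBaaS agent registered with Neutron (run with admin credentials sourced):

source ~/keystonerc_admin  
neutron agent-list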

Now you're all set! Log in to Horizon at your public URL https://192.168.0.11/ using the login details from the file saved at ~/keystonerc_admin.

To get going straight away, here's a quick step-by-step process (a CLI sketch of the same steps follows the list):

  • Create a Network in the Admin Tab
    • Create it as a Public Shared Network
    • Give it the subnet 192.168.0.0/24
    • Allocate it your DHCP Range
  • Head over to the Project Tab
  • Click on Network Topology
  • Create a Router
    • Click on the list of Routers and select "Set Gateway". Use the public network you just created.
  • Create an Internal Network
  • Add a port to your new router to connect to the internal network.
  • Check your new Network Topology's beautiful diagram and smile :)
  • Create a new image called CirrOS. This is a great tiny image for testing. Here's the link you can just chuck in: http://download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-disk.img
  • Head over to Access & Security and add two rules in the default ruleset (it's essentially iptables):
    • Custom ICMP, -1, -1
    • Custom TCP, Port 22
  • While you're there also add your public key (~/.ssh/id_rsa.pub)
  • Launch your new CirrOS image and attach it to the network.
  • Check the console, and ensure your network and DHCP is working.
  • Launch a CentOS image, attach a floating IP address to it and party!!!
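
If you'd rather drive those steps from the CLI, here's a rough neutron equivalent of the Horizon steps above (the network names, internal subnet, and allocation pool below are my own picks, so adjust to taste):

source ~/keystonerc_admin  
neutron net-create public --shared --router:external=True  
neutron subnet-create public 192.168.0.0/24 --name public-subnet \
    --gateway 192.168.0.1 --disable-dhcp \
    --allocation-pool start=192.168.0.100,end=192.168.0.200  
neutron router-create router1  
neutron router-gateway-set router1 public  
neutron net-create internal  
neutron subnet-create internal 10.0.0.0/24 --name internal-subnet  
neutron router-interface-add router1 internal-subnet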

Kudos again to Red Hat for their great work putting this together; it's a much easier setup than it was last year.

Good Luck!

Basic Troubleshooting

I found the best way was to boot up multiple CirrOS instances and do inter VM networking tests.

  • First check your VMs can connect on the same network.
  • Check your Hypervisor can ping the floating IP when it is assigned. Generally if this fails it's because the VM doesn't have the internal IP address.
  • If you're having problems pinging your VM's floating IP address externally a good tip is to run tcpdump -n -i eth0 icmp on the VM, Hypervisor and any other firewall in between. Check to see where the packets are getting dropped.

If you've got a Mikrotik switch, or you're looking at purchasing one, check out my mini post on the Mikrotik CRS-125-24G-1S-RM with OpenStack Neutron.
