
The New Tech Corner launches with a Red Hat Enterprise Linux OpenStack v6.0 All-in-One Install Guide

6/26/2015


Our New Tech Corner

Welcome to the first of many posts in our Tech Corner at Zefflin.  Moving forward, multiple team members will share tips, tricks, and how-tos on a wide variety of topics related to the data center automation and cloud management tools we work with on a daily basis.

Installing Red Hat Enterprise Linux OpenStack v6.0 on RHEL 7

Our Zefflin lab is constantly evolving, staging a wide variety of scenarios and toolsets for our customers.  Recently we were tasked with standing up a Red Hat Enterprise Linux OpenStack Platform (RHEL OSP) environment for integration testing.  Since this was a non-production environment, we opted for an all-in-one installation of RHEL OSP, which would provide the OpenStack resources we needed using only one physical server.  Because this may be useful for others trying to set up a RHEL OSP all-in-one installation for testing or a proof of concept, we wanted to share the steps we took to complete the installation and configuration of the environment.

1.     First, we need a single physical server that supports hardware virtualization, has a RHEL 7.1 minimal installation, has one functional network interface (eno1 in this demonstration), and has subscriptions to RHEL Server and RHEL OpenStack.  If you do not own the required licenses, you will need to purchase subscriptions before beginning or request an evaluation copy from Red Hat.
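If you want to confirm the prerequisites before going any further, the commands below are one way to do it.  The username, password, and pool ID are placeholders; substitute the Red Hat account credentials and the subscription pool that covers RHEL Server and RHEL OpenStack for your organization:

# verify the CPU supports hardware virtualization (vmx = Intel, svm = AMD)
grep -E 'vmx|svm' /proc/cpuinfo

# register the server and attach the appropriate subscription
subscription-manager register --username=my_rh_user --password=my_rh_password
subscription-manager list --available
subscription-manager attach --pool=my_pool_id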

2.     Once the prerequisites in step one are satisfied, we install a text editor and net-tools (network utilities excluded from the minimal install), and enable the repos we need:

yum -y install net-tools nano

subscription-manager repos --disable=*
subscription-manager repos --enable=rhel-7-server-rpms
subscription-manager repos --enable=rhel-7-server-rh-common-rpms
subscription-manager repos --enable=rhel-7-server-openstack-6.0-installer-rpms
subscription-manager repos --enable=rhel-server-rhscl-7-rpms
subscription-manager repos --enable=rhel-7-server-openstack-6.0-rpms
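To double-check that only the intended repos are enabled before updating, you can list what subscription-manager and yum currently see:

subscription-manager repos --list-enabled
yum repolist enabled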

3.    We then need to ensure all packages are up to date, disable NetworkManager (which conflicts with OSP's networking), and reboot to apply the changes:

yum update -y
systemctl disable NetworkManager
reboot
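After the reboot, you can confirm that NetworkManager is out of the picture and that the interface is still up before continuing:

systemctl is-enabled NetworkManager
systemctl status network.service
ip addr show eno1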

4.    Once the server comes back online, we need to install the utility that we'll use for the OSP all-in-one installation.  In this case, we'll use the packstack installation utility:

yum -y install openstack-packstack
packstack --allinone

At this point, you'll want to take a break, as the packstack installer takes a while to install the OpenStack dependencies and services.




Note: When you run packstack --allinone, an answers configuration file is automatically generated with system defaults.  If you want to customize these defaults before running packstack, first generate the answer file, then edit it, and then run packstack against it as follows:


packstack --gen-answer-file=packstack-answers.txt
nano packstack-answers.txt
packstack --answer-file=packstack-answers.txt
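As a hypothetical example of the kind of tweak you might make, the answer file includes a CONFIG_NTP_SERVERS key that is empty by default; setting it lets packstack configure time synchronization on the host.  The NTP server below is just a placeholder:

# point packstack at an NTP server before running it against the answer file
sed -i 's/^CONFIG_NTP_SERVERS=.*/CONFIG_NTP_SERVERS=0.pool.ntp.org/' packstack-answers.txt
grep CONFIG_NTP_SERVERS packstack-answers.txt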

5.    Once complete, we'll need to do a little work to set up public and private networking for your instances.  This guide assumes that you have one network interface (eno1) and that a statically routed CIDR subnet has been assigned to you for public addressing.

First, we are going to connect to Neutron to clear out the default network config that was loaded:

source keystonerc_admin
neutron router-gateway-clear router1
neutron subnet-delete public_subnet
neutron net-delete public
neutron router-delete router1
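If you want to confirm that the default networking objects are gone before moving on, you can list what Neutron still knows about:

neutron net-list
neutron subnet-list
neutron router-list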

6.    Now we need to make some changes to your network interface configuration.  Essentially, we are going to move its configuration to a new bridge interface (br-ex) that your instances will use to communicate outside the physical host, and map the bridge back to the same eno1 physical interface.

First, open the eno1 interface configuration.  Before making any changes, it's helpful to copy and paste the contents of the file into a local text editor so you can refer to the original values later:


nano /etc/sysconfig/network-scripts/ifcfg-eno1
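Alternatively, you can keep a copy of the original file and a record of the current addressing on the server itself (the backup path is just a suggestion):

cp /etc/sysconfig/network-scripts/ifcfg-eno1 /root/ifcfg-eno1.orig
ip addr show eno1
ip route show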

You want to remove the existing configuration and update this file to look like the following, assuming your interface is also eno1 (adjust the name accordingly):


DEVICE=eno1
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-ex
ONBOOT=yes

Now, save your changes and create the bridge interface configuration:


nano /etc/sysconfig/network-scripts/ifcfg-br-ex

Add the following configuration, replacing my_host_ip, my_netmask, and my_host_gateway with the values you saved from your original ifcfg-eno1 in the last step.


DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
IPADDR=my_host_ip
NETMASK=my_netmask
GATEWAY=my_host_gateway
DNS1=8.8.8.8
ONBOOT=yes

Save your configuration and reboot.
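After the reboot, you can verify that the bridge came up and that eno1 is attached to it before moving on (ovs-vsctl is provided by the Open vSwitch packages that packstack installs):

ovs-vsctl show
ip addr show br-ex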


7.     Once the server comes back online, we're ready to configure some sample networks that your instances will use.  You may wish to set up a different network topology, and you can reconfigure it later if you desire.  In this case, we're going to create a public network using a pre-assigned, statically routed public subnet.  These addresses will be used as floating IPs in Neutron.  We'll also create a private network and assign it a subnet that Neutron will manage.

Now, to set up this network topology:

source keystonerc_admin
neutron net-create public --router:external=True
neutron net-create private

neutron subnet-create --name public_subnet --enable_dhcp=False --allocation-pool start=start_usable_ip,end=end_usable_ip --gateway=gateway_ip public network_ip/CIDR

In the above command, you'll need to substitute the values given to you by your network administrator for your statically routed secondary subnet:

start_usable_ip should be the first usable IP in your range
end_usable_ip should be the last usable IP in your range
gateway_ip should be your default gateway
network_ip/CIDR should be the network address, followed by your CIDR subnet size
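As a purely hypothetical example, if your administrator assigned you 203.0.113.0/24 (a documentation range) with the gateway at 203.0.113.1, the command would look like this:

neutron subnet-create --name public_subnet --enable_dhcp=False --allocation-pool start=203.0.113.10,end=203.0.113.250 --gateway=203.0.113.1 public 203.0.113.0/24

Next, create the private subnet that Neutron will manage: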

neutron subnet-create --name private_subnet private 10.10.0.0/24

Lastly, create a router in Neutron and configure it to complete the network topology:


neutron router-create router2
neutron router-gateway-set router2 public
neutron router-interface-add router2 private_subnet
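At this point you can sanity-check the topology; the router should show the public network as its gateway and an interface on the private subnet:

neutron router-show router2
neutron router-port-list router2
neutron net-list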


8.    You're now ready to start creating instances!  Navigate to your Horizon dashboard in your web browser.  By default it listens on port 80 at the IP address of your br-ex interface.
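One way to find that address and the admin credentials is shown below; assuming you ran packstack as root, it stores the generated admin password in the keystonerc_admin file in root's home directory:

# the dashboard listens on the br-ex address
ip addr show br-ex | grep 'inet '
# packstack stores the generated admin password in OS_PASSWORD
grep OS_PASSWORD /root/keystonerc_admin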






