Monday, July 11, 2016

OpenStack

Scaling on the Cloud

Summary











The OpenStack platform as internal cloud

As with Amazon Web Services, a number of interesting services have been implemented in OpenStack. Unfortunately, the storage-related features are not usable in our case, since our main server has less than 1.5 terabytes to share. We therefore concentrate on the compute features, which make dynamic deployment possible. Because a full OpenStack installation is a huge amount of work, the solution was to use a simplified installer named DevStack.


Prepare the servers

The servers must be prepared to share their GPUs and to virtualize their CPUs. The first thing is to enable the virtualization options in your server BIOS. This is the most important step if you want to use KVM (Kernel-based Virtual Machine).
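Before continuing, a quick check from the shell can confirm that the virtualization extensions are visible to the kernel; this is only a sanity check, not part of the DevStack procedure (the flag is vmx on Intel and svm on AMD):

# Count the CPU virtualization flags; 0 means the BIOS option is still disabled
egrep -c '(vmx|svm)' /proc/cpuinfo
# Check that the KVM kernel modules are loaded
lsmod | grep kvm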


The second thing, if you want to share your GPUs, is to enable the Intel IOMMU on the kernel command line.
Normally you just need to append "intel_iommu=on" to the GRUB command line in the file "/etc/default/grub":


GRUB_CMDLINE_LINUX_DEFAULT="resume=UUID=6771936b-06b6-493c-b655-6f60122f5228 intel_iommu=on"


Don't forget to regenerate your GRUB configuration:


sudo grub-mkconfig -o /boot/grub/grub.cfg
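After the reboot, you can confirm that the IOMMU is really active; a minimal check, assuming the Intel platform used here, is to search the kernel log:

# Look for DMAR/IOMMU initialization messages
dmesg | grep -e DMAR -e IOMMU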


Once each of your servers respects this configuration, you are ready to virtualize them.


Global description

Now we need to decide which machine will be the principal server managing our image-scheduling solution.


In OpenStack, several services manage the different aspects of the cloud:
Glance manages everything related to images.
Keystone manages identity and the service discovery of the platform.
Nova manages computation and the scheduling of instances.
Neutron manages the network and overlays it on a sub-network.
Cinder manages block devices such as "/dev/sda".
Swift, which we will not activate, manages object storage (files and folders).








This first list of components is known as the core services. These components are generally stable.


Another set of components exists; their stability and features are not yet at a production level.


We use Horizon, the official dashboard of OpenStack.
Ceilometer is the component for usage metrics, Heat orchestrates and schedules OpenStack resources, Trove manages cloud databases, Sahara manages the Hadoop stack, Magnum handles Docker management, and so on.


The component list is long and covers a wide range of capabilities.


We focus here on the compute part, which serves our original goal: automatic creation of virtual machines.


DevStack was chosen because it is a simplification tool that lets us deploy all the components we want from a single, simple configuration. It mixes integrated commands with component-specific configuration. For example, by default the virtualized processor is a generic model that is not compatible with the advanced instructions used by OpenCL for CPU. Since this is not handled by the DevStack defaults, we need to set the corresponding parameter during the configuration step.
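Concretely, this parameter is the libvirt cpu_mode option pushed through the local.conf post-config mechanism; the relevant fragment, reproduced from the full configuration given later, is:

[[post-config|$NOVA_CONF]]
[libvirt]
# Expose the host CPU model and its instruction sets to the guests
cpu_mode=host-passthrough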


There are also many purely technical internal components that cover all the usual aspects of a classical application.


There is a database, which by default is MariaDB. There is a message queue manager, RabbitMQ. Nova compute runs on QEMU in KVM mode. The network is managed by Open vSwitch.





Devstack Installation

DevStack is an opinionated script to quickly create an OpenStack development environment. It can also be used to demonstrate starting/running OpenStack services and provide examples of using them from a command line.


To install DevStack, you need to get the latest development branch of the official git repository.
You need to clone this repository on each of the servers you want to be part of your OpenStack deployment.
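For example, with the repository URL in use at the time of writing, the clone step on each server looks like this:

# clone DevStack and enter the directory (to be repeated on every node)
git clone https://git.openstack.org/openstack-dev/devstack
cd devstack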


Like a menu, we can select what to install on each machine. For this presentation, I propose to install one master server containing all the core components needed by OpenStack, and to install only the network and compute components on the other nodes.
















For this implementation, the selection is based on the latest stable release, named "mitaka".


The configuration of a DevStack installation is based on a file named local.conf, which lets you describe exactly the OpenStack installation you want.


I will dedicate a section to the network part of the deployment.


Now you need to create your stack user with the command provided in the repository:
sudo tools/create-stack-user.sh


Don't forget to make your local copy of the code owned by the stack user:
sudo chown -R stack:stack *


Switch to the stack user and continue the process:
su - stack





For instance, the following configuration, which you need to put in local.conf, shows the master server setup:


[[local|localrc]]
RECLONE=yes # Force a refresh of the git repositories on each run
REQUIREMENTS_BRANCH=stable/mitaka
CINDER_BRANCH=$REQUIREMENTS_BRANCH
GLANCE_BRANCH=$CINDER_BRANCH
HEAT_BRANCH=$CINDER_BRANCH
HORIZON_BRANCH=$CINDER_BRANCH
KEYSTONE_BRANCH=$CINDER_BRANCH
CEILOMETERMIDDLEWARE_BRANCH=$CINDER_BRANCH
NEUTRON_BRANCH=$CINDER_BRANCH
NOVA_BRANCH=$CINDER_BRANCH
#Ip of the host
HOST_IP=10.21.22.109
#Login and password of the administrator
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
#Network configuration
FLAT_INTERFACE=em1
FLOATING_RANGE=10.21.22.200/23 # External IP to manage
FIXED_RANGE=190.168.2.0/24 # Internal IP
FIXED_NETWORK_SIZE=256
NETWORK_GATEWAY="190.168.2.1" # for internal IP
Q_FLOATING_ALLOCATION_POOL=start=10.21.22.200,end=10.21.22.254
PUBLIC_NETWORK_GATEWAY="10.21.23.254"
#KVM usage
LIBVIRT_TYPE=kvm
# Path for store
INSTANCES_PATH=/data/openstack/stack/instances
GLANCE_IMAGE_DIR=/data/openstack/stack/images
DATA_DIR=/data/openstack/stack/nova
LOGDIR=/data/openstack/stack/logs
#Activate Multi Host support
MULTI_HOST=True
#VERBOSE=True
#Swift Configuration
SWIFT_HASH=66a3d6b56c1f479c8b4e70ab5c2000f5
SWIFT_REPLICAS=1
SWIFT_DATA_DIR=/data/openstack/stack/swift
SWIFT_LOOPBACK_DISK_SIZE=100G
disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
disable_service tempest
#Enable heat services
enable_service h-eng h-api h-api-cfn h-api-cw
#enable swift
enable_service s-proxy s-object s-container s-account
# Configure the notifier to talk to the message queue
# and turn on usage audit events
EXTRA_OPTS=(notification_driver=nova.openstack.common.notifier.rabbit_notifier,ceilometer.compute.nova_notifier)
# Enable the ceilometer services
enable_service ceilometer-acompute,ceilometer-acentral,ceilometer-collector,ceilometer-api
## Neutron options
Q_USE_SECGROUP=True
PUBLIC_INTERFACE=em1
#
# Open vSwitch provider networking configuration
Q_USE_PROVIDERNET_FOR_PUBLIC=True
OVS_PHYSICAL_BRIDGE=br-ex
PUBLIC_BRIDGE=br-ex
OVS_BRIDGE_MAPPINGS=public:br-ex
#
#Specific configuration for NOVA Compute
[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_alias={\\"name\\": \\"K20m\\",\\"product_id\\": \\"1028\\",\\"vendor_id\\": \\"10de\\"}
scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,DiskFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,SameHostFilter,DifferentHostFilter,PciPassthroughFilter
[libvirt]
cpu_mode=host-passthrough
#[spice]
#enabled = True
#
[[post-config|$GLANCE_REGISTRY_CONF]]
[glance_store]
filesystem_store_datadir=/data/openstack/stack/images


[[post-config|$GLANCE_CACHE_CONF]]
[DEFAULT]
image_cache_dir=/data/openstack/stack/cache


Launch the script in batch mode, detached from the terminal; otherwise the bridge setup will disconnect your session during the installation. To be safe, launch:
nohup ./stack.sh &



At the end of the installation, you will see these lines inviting you to connect to Horizon.


+./stack.sh:main:1379                      set +o xtrace


=========================
DevStack Component Timing
=========================
Total runtime         916


run_process            67
test_with_retry         4
pip_install           106
restart_apache_server   8
wait_for_service       17
yum_install           115
git_timed              27
=========================


This is your host IP address: 10.21.22.109
This is your host IPv6 address: ::1
Horizon is now available at http://10.21.22.109/dashboard
Keystone is serving at http://10.21.22.109:5000/
The default users are: admin and demo
The password: secret
2016-06-21 12:35:20.956 | WARNING:
2016-06-21 12:35:20.956 | Using lib/neutron-legacy is deprecated, and it will be removed in the future
2016-06-21 12:35:20.956 | stack.sh completed in 916 seconds.


Don't forget to open all inbound ports:


sudo iptables -I INPUT -j ACCEPT

Log in to the dashboard to have a first look:


Log in with “admin” and “secret”.



















You arrive directly on the list of default projects generated by DevStack.
As you can see, you can manage users, groups, and roles.









Another menu covers the Project area, with compute, network, and Heat; another covers Admin, and so on.


Now we can install the other nodes of our cloud.
Be careful with HOST_IP: it needs to match the local IP of each node.


[[local|localrc]]
RECLONE=yes
CINDER_BRANCH=stable/mitaka
GLANCE_BRANCH=$CINDER_BRANCH
HEAT_BRANCH=$CINDER_BRANCH
HORIZON_BRANCH=$CINDER_BRANCH
KEYSTONE_BRANCH=$CINDER_BRANCH
NEUTRON_BRANCH=$CINDER_BRANCH
NOVA_BRANCH=$CINDER_BRANCH
REQUIREMENTS_BRANCH=$CINDER_BRANCH
TEMPEST_BRANCH=$CINDER_BRANCH
HOST_IP=10.21.22.105 # change this per compute node
NETWORK_GATEWAY="190.168.2.1"
FIXED_RANGE=190.168.2.0/24
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
DATABASE_TYPE=mysql
SERVICE_HOST=10.21.22.109 #Master node
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
Q_HOST=$SERVICE_HOST
VNCSERVER_PROXYCLIENT_ADDRESS=$HOST_IP
VNCSERVER_LISTEN=0.0.0.0
FLAT_INTERFACE=em1
ENABLED_SERVICES=n-cpu,rabbit,q-agt
NOVA_VNC_ENABLED=True
NOVNCPROXY_URL="http://$SERVICE_HOST:6080/vnc_auto.html"
PUBLIC_INTERFACE=em1
LIBVIRT_TYPE=kvm
INSTANCES_PATH=/data/openstack
DATA_DIR=/data/openstack
#VERBOSE=True
[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist={\\"vendor_id\\":\\"10de\\",\\"product_id\\":\\"1028\\"}
scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,DiskFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,SameHostFilter,DifferentHostFilter,PciPassthroughFilter
[libvirt]
cpu_mode=host-passthrough


To install the node, launch:
nohup ./stack.sh &


At the end of the installation, you will see:


=========================
DevStack Component Timing
=========================
Total runtime   167


run_process       7
pip_install      29
yum_install      39
git_timed         7
=========================


This is your host IP address: 10.21.22.105
This is your host IPv6 address: ::1
2016-06-21 14:16:20.682 | WARNING:
2016-06-21 14:16:20.682 | Using lib/neutron-legacy is deprecated, and it will be removed in the future
2016-06-21 14:16:20.682 | stack.sh completed in 167 seconds.


Don't forget to open all inbound ports:


sudo iptables -I INPUT -j ACCEPT


Once this installation is done on all the nodes, you can connect to the OpenStack dashboard and look at the Admin → Hypervisors menu to see your machines with their capabilities.
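If you prefer the command line, the same information is available from the master node; assuming the admin credentials from openrc, a quick check is:

# list the compute nodes registered with Nova
source openrc admin admin
openstack hypervisor list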





OpenStack Installation

User and Project creation for Fabric



The creation of users, roles, and projects is an important step to grant rights and apply quotas on an OpenStack deployment.
In this short chapter, we will create a simple user, role, and project for the fabric environment, and see how to manage a project and a user from the command line.


Go to your master server in the stack environment.


Launch these commands to create your project with quotas:


# connection
. openrc admin


#create the fabric tenant
openstack project create fabric


#set 200 cores to the tenant
openstack quota set fabric --cores 200


#set number of instance quota
openstack quota set fabric --instances 50


#manage floating ip quota
openstack quota set fabric --floating-ips  50


# make disk quota
openstack quota set fabric --gigabytes 4096


# make ram quota
openstack quota set fabric --ram 1048576
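To verify that the quotas were applied, you can display them; this is a simple check, not part of the original procedure:

# show the quotas of the fabric project
openstack quota show fabric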















In the OpenStack dashboard, you can now access the project.


You can continue by creating the fabric user, with a role and an attachment to the project.


#Create the fabric user by launching this command:
openstack user create --password Fabric_123 fabric


#create the fabric role
openstack role create fabric


#Attach the user to the project
openstack role add --project fabric --user fabric fabric
openstack role add --project fabric --user admin fabric
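You can double-check the new user and roles from the command line before opening the dashboard:

# confirm that the user and the role exist
openstack user show fabric
openstack role list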


In the OpenStack dashboard, you can take a look at your project.


Manage Security Group



Security groups are sets of IP filter rules that are applied to all project instances, and which define networking access to the instance. Group rules are project specific; project members can edit the default rules for their group and add new rule sets.
All projects have a "default" security group which is applied to any instance that has no other defined security group. Unless we change the default, this security group denies all incoming traffic and allows only outgoing traffic to your instance.
We change it by opening all ports for TCP, UDP, and ICMP.


#Open port to the default security group
export OS_PROJECT_NAME=fabric
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule default udp 1 65535 0.0.0.0/0
nova secgroup-add-rule default tcp 1 65535 0.0.0.0/0
nova secgroup-list-rules default
source openrc admin admin


Flavors

We currently have three kinds of machine:


- Two servers: 40 CPUs, 2 K20m GPUs, 256 GB RAM
- One server: 20 CPUs, 2 K20m GPUs, 256 GB RAM
- One server: 96 CPUs, no GPU, 256 GB RAM


Inspired by Amazon, the list of our flavors is:


Name           Memory (GiB)   CPUs   GPUs
m1.16xlarge    256            96     0
m1.8xlarge     128            48     0
m1.4xlarge     64             24     0
m1.2xlarge     32             12     0
m1.xlarge      16             6      0
m1.large       8              3      0
m1.medium      4              2      0
m1.small       2              1      0
g1.2xlarge     32             12     1
g1.4xlarge     64             24     2








We need to delete all the default flavors before installing the new ones.
Launch these commands to create the desired flavor configuration:


openstack flavor delete cirros256
openstack flavor delete ds1G
openstack flavor delete ds2G
openstack flavor delete ds4G
openstack flavor delete ds512M
openstack flavor delete m1.large
openstack flavor delete m1.medium
openstack flavor delete m1.small
openstack flavor delete m1.tiny
openstack flavor delete m1.xlarge
nova flavor-create m1.16xlarge auto 262144 160 96 --is-public True
nova flavor-create m1.8xlarge  auto 131072 80 48 --is-public True
nova flavor-create m1.4xlarge  auto 65536 40 24 --is-public True
nova flavor-create m1.2xlarge  auto 32768 40 12 --is-public True
nova flavor-create m1.xlarge   auto 16384 30 6 --is-public True
nova flavor-create m1.large    auto 8192 20 3 --is-public True
nova flavor-create m1.medium   auto 4096 20 2 --is-public True
nova flavor-create m1.small    auto 2048 20 1 --is-public True


nova flavor-create g1.2xlarge auto 32768 40 12 --is-public True
nova flavor-key g1.2xlarge set  "pci_passthrough:alias"="K20m:1"  


nova flavor-create g1.4xlarge auto 65536 40 24 --is-public True
nova flavor-key g1.4xlarge set "pci_passthrough:alias"="K20m:2"
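Once these commands have run, you can list the resulting flavors to confirm the configuration:

# display the final flavor list
openstack flavor list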



























Network management

Introduction

Neutron is the OpenStack project that provides "networking as a service" between interface devices managed by other OpenStack services.
Neutron lets us manage network isolation and overlays. These technologies are implemented as ML2 type drivers, used in conjunction with the Open vSwitch mechanism driver.


Another thing to understand is that external IPs are managed independently from the internal IP of each machine. Called floating IPs, their attachment to a VM is simply a route from the external network to the internal network interface of the VM.
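To make this concrete, a typical sequence with the nova client is shown below; the instance name "myvm" and the address are only illustrative placeholders:

# allocate a floating IP from the public pool
nova floating-ip-create public
# associate it with an instance named myvm, using the address returned above
nova floating-ip-associate myvm 10.21.22.201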


In this image we can see the IPs in use. In blue is the external network, named public; in orange is the internal network, managed as an overlay behind Open vSwitch.
The instance named external connects to the public IP through the router. The floating IP is managed by the router, which routes the external addresses (10.21.22.0/64) to the internal addresses (10.0.0.0/28).












Another view of these features:


























On the real network, our machine (10.21.22.211) appears as a WLAN connection.

























Installation



Now we can create the network for our fabric project. Everything starts with the creation of a network, followed by the creation of a router and the attachment of the public and private networks to that router.


#go to the project fabric
export OS_PROJECT_NAME=fabric
#create the private network fabric
neutron net-create fabric
#create a subnet
neutron subnet-create fabric 192.168.2.0/23 --name subnetfabric --dns-nameserver 10.21.200.2
#create a router
neutron router-create fabricrouter
#set the gateway of the router
neutron router-gateway-set fabricrouter public
#route the fabric network to the router
neutron router-interface-add fabricrouter subnetfabric
source openrc admin admin
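Before opening the dashboard, you can inspect what was created with the neutron client; a quick sanity check is:

# list the networks and routers
neutron net-list
neutron router-list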


The result should be:









Image installation

DevStack installs the CirrOS distribution by default, which is not enough for our needs.
OpenStack publishes a list of officially compatible distribution images for QEMU.
Download the ones you need and install them with glance. Example import commands:


# connect on admin
source openrc admin admin
#install fedora 21 for gpu
glance image-create --name fedora.21.gpu --visibility public --file Fedora-Cloud-Base-20141203-21.x86_64.qcow2 --disk-format qcow2 --container-format bare --property hw_video_model=qxl --property configure_x=true
#install centos 7
glance image-create --name centos7.gpu --visibility public --file CentOS-7-x86_64-GenericCloud.qcow2 --disk-format qcow2 --container-format bare --property hw_video_model=qxl --property configure_x=true
glance image-create --name centos7 --visibility public --file CentOS-7-x86_64-GenericCloud.qcow2 --disk-format qcow2 --container-format bare
#install ubuntu 14 and 16
glance image-create --name ubuntu14.04 --visibility public --file trusty-server-cloudimg-amd64-disk1.img --disk-format qcow2 --container-format bare
glance image-create --name ubuntu16.04 --visibility public --file xenial-server-cloudimg-amd64-disk1.img --disk-format qcow2 --container-format bare
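You can also confirm the uploads from the command line before looking at the dashboard:

# list the registered images
glance image-list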
The image list should look like this screenshot:























Play with Heat

Heat is the main project of the OpenStack orchestration program. It allows users to describe deployments of complex cloud applications in text files called templates. These templates are then parsed and executed by the Heat engine.


Go to the Project → Orchestration menu to create your stack.


Simple example

This example launches a simple CentOS 7 instance.
Create a stack with this template:


heat_template_version: 2015-04-30


description: Simple template to deploy a single compute instance


resources:
 my_instance:
   type: OS::Nova::Server
   properties:
     image: centos7
     flavor: m1.small
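If you prefer the command line to the dashboard, the same template can be launched with the heat client; the file name simple.yaml is only an illustrative choice:

# save the template as simple.yaml, then create the stack in the fabric project
export OS_PROJECT_NAME=fabric
heat stack-create -f simple.yaml simple-stack
# follow the creation progress
heat stack-list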





























Advanced stack



You can add parameters to make a more robust solution, and associate networks, volumes, and the other resources provided by the Heat services.
Try the following template:
heat_template_version: 2013-05-23


description: >
 A template showing how to create a Nova instance, a Cinder volume and attach
 the volume to the instance. The template uses only Heat OpenStack native
 resource types.


parameters:


 instance_type:
   type: string
   description: Type of the instance to be created.
   default: m1.small
   constraints:
     - allowed_values: [m1.small, m1.medium, m1.large]
       description:
         Value must be one of 'm1.small', 'm1.medium' or 'm1.large'.
 image_id:
   type: string
   description: ID of the image to use for the instance to be created.
   default: centos7
   constraints:
     - allowed_values: [ centos7, fedora.21.gpu, ubuntu14.04 ]
       description:
         Image ID must be either centos7, fedora.21.gpu, ubuntu14.04.
 availability_zone:
   type: string
   description: The Availability Zone to launch the instance.
   default: nova
 volume_size:
   type: number
   description: Size of the volume to be created.
   default: 1
   constraints:
     - range: { min: 1, max: 1024 }
       description: must be between 1 and 1024 Gb.
resources:
 nova_instance:
   type: OS::Nova::Server
   properties:
     availability_zone: { get_param: availability_zone }
     image: { get_param: image_id }
     flavor: { get_param: instance_type }
 cinder_volume:
   type: OS::Cinder::Volume
   properties:
     size: { get_param: volume_size }
     availability_zone: { get_param: availability_zone }
 volume_attachment:
   type: OS::Cinder::VolumeAttachment
   properties:
     volume_id: { get_resource: cinder_volume }
     instance_uuid: { get_resource: nova_instance }
     mountpoint: /dev/vdc
outputs:
 instance_ip:
   description: Public IP address of the newly created Nova instance.
   value: { get_attr: [nova_instance, first_address] }
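The same stack can also be created from the command line by passing the parameters explicitly; the file name advanced.yaml and the values are only examples:

# create the stack with explicit parameter values
heat stack-create -f advanced.yaml -P "instance_type=m1.medium;image_id=centos7;volume_size=10" advanced-stack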


The launch form will show the attributes defined in the parameters section of the template:


If you launch your stack, you will see it being created.

