05 October 2017

Red Hat Certified Architect Level IX

I have now achieved Red Hat Certified Architect Level IX. I've just received the results for EX342, the Red Hat Certificate of Expertise in Red Hat Enterprise Linux Diagnostics and Troubleshooting.

07 May 2017

How to configure OpenStack TripleO Undercloud in a virtual environment with the help of VBMC

This blog post describes how to prepare a small virtual lab for learning the TripleO (Director) OpenStack deployment tool. It is a quick guide; for deeper understanding and command exploration, use the official documentation.

I used my home laptop with 16 GB of memory running CentOS 7.3 as the lab hardware, and RHEL 7.3 with RHOSP 10 for the virtual machines, but these instructions should be applicable to the RDO OpenStack distribution with minor changes.

I created three virtual machines (see the virt-install sketch after the list):
  • Undercloud VM with 8 GB of memory for the Director;
  • Compute VM with 4 GB of memory for the hypervisor;
  • Control VM with 6 GB of memory for all OpenStack services.
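
For reference, here is a sketch of how one of these VMs could be created with virt-install; the disk size, network, and OS variant are my assumptions, so adjust them to your environment:

# virt-install --name OOO_Compute --ram 4096 --vcpus 2 \
      --disk size=100 --network network=default,model=virtio \
      --os-variant rhel7.3 --pxe --noautoconsole
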
The Director should work with the managed VMs as if they were bare metal servers, and Virtual BMC (VBMC) is useful here. VirtualBMC is a small piece of software that lets users create a virtual BMC for managing a virtual machine over the IPMI protocol, similar to how real bare metal machines are managed.



You need to install VBMC on the virtualization host where all the VMs will run. The easiest way is to use the RPM provided by the RDO repository:

# yum install -y https://www.rdoproject.org/repos/rdo-release.rpm
# yum install -y python2-virtualbmc
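
VirtualBMC is also published on PyPI, so installing it with pip should work as well; I used the RPM, so treat this as an untested alternative:

# pip install virtualbmc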

Then you need to create BMCs for the already existing VMs:

# virsh list --all
 Id    Name                           State
----------------------------------------------------
 6     OOO_Under                      running
 -     OOO_Compute                    shut off
 -     OOO_Control                    shut off

# vbmc add OOO_Compute --port 7001 --username admin --password openstack
# vbmc add OOO_Control --port 7002 --username admin --password openstack
# vbmc start OOO_Control
2017-05-07 13:10:11,080.080 5501 INFO VirtualBMC [-] Virtual BMC for domain OOO_Control started
# vbmc start OOO_Compute
2017-05-07 13:10:18,304.304 5509 INFO VirtualBMC [-] Virtual BMC for domain OOO_Compute started

In this example you created and started BMCs that will be available on ports 7001 and 7002 of the virtualization host. For both of them, the password is openstack and the username is admin.

You can see the list of running BMCs:

# vbmc list
+-------------+---------+---------+------+
| Domain name |  Status | Address | Port |
+-------------+---------+---------+------+
| OOO_Compute | running |    ::   | 7001 |
| OOO_Control | running |    ::   | 7002 |
+-------------+---------+---------+------+

Next, you need to allow access to the BMCs from the outside, including from the Director VM. CentOS uses firewalld by default:

# firewall-cmd --zone=public --add-port=7001/udp --permanent
# firewall-cmd --zone=public --add-port=7002/udp --permanent
# firewall-cmd --reload

Now you can test VBMC from the Director VM: try to power on the VMs or check their power status. By default, the virtualization host should be accessible at the IP address 192.168.122.1. For the compute VM:

[root@ooounder ~]# ipmitool -I lanplus -U admin -P openstack -H 192.168.122.1 -p 7001 power on
Chassis Power Control: Up/On
[root@ooounder ~]# ipmitool -I lanplus -U admin -P openstack -H 192.168.122.1 -p 7001 power status
Chassis Power is on

The same check should work for the control VM, the only difference being port 7002, as shown below. The rest of the Undercloud VM installation should be done in accordance with the documentation.
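
A quick power status check for the control VM (the same command as above with the port changed):

[root@ooounder ~]# ipmitool -I lanplus -U admin -P openstack -H 192.168.122.1 -p 7002 power status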

Add the required repos and update the OS:

[root@ooounder ~]# subscription-manager repos --enable=rhel-7-server-rpms --enable=rhel-7-server-extras-rpms --enable=rhel-7-server-rh-common-rpms --enable=rhel-ha-for-rhel-7-server-rpms --enable=rhel-7-server-openstack-10-rpms
[root@ooounder ~]# yum -y update
[root@ooounder ~]# init 6

Use the following commands to create the user named stack and set a password:

[root@ooounder ~]# useradd stack
[root@ooounder ~]# passwd stack

Disable password requirements for this user when using sudo:

[root@ooounder ~]# echo "stack ALL=(root) NOPASSWD:ALL" | sudo tee -a /etc/sudoers.d/stack
stack ALL=(root) NOPASSWD:ALL
[root@ooounder ~]# chmod 0440 /etc/sudoers.d/stack

The director also requires an entry for the system’s hostname and base name in /etc/hosts:

127.0.0.1   ooounder ooounder.test.local localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

Use the following command to install the director, then switch to the stack user:

[root@ooounder ~]# yum install -y python-tripleoclient
[root@ooounder ~]# su - stack

Next, you should prepare a basic undercloud.conf template to determine the required settings for your installation. You can use the documentation and the helpful tool https://github.com/cybertron/ucw. Here is my example for the lab:

[stack@ooounder ~]$ grep -o '^[^#]*' undercloud.conf
[DEFAULT]
undercloud_hostname = ooounder.test.local
local_ip = 192.168.24.1/24
network_gateway = 192.168.24.1
undercloud_public_host = 192.168.24.2
undercloud_admin_host = 192.168.24.3
undercloud_service_certificate =
generate_service_certificate = True
local_interface = eth1
local_mtu = 1500
network_cidr = 192.168.24.0/24
masquerade_network = 192.168.24.0/24
dhcp_start = 192.168.24.4
dhcp_end = 192.168.24.15
inspection_iprange = 192.168.24.16,192.168.24.17
scheduler_max_attempts = 10
[auth]

Run the following command:

[stack@ooounder ~]$ openstack undercloud install

At the end, you should get:

#############################################################################
Undercloud install complete.

The file containing this installation's passwords is at
/home/stack/undercloud-passwords.conf.

There is also a stackrc file at /home/stack/stackrc.

These files are needed to interact with the OpenStack services, and should be
secured.

#############################################################################


To initialize the stack user to use the command line tools, run the following command:

[stack@ooounder ~]$ source stackrc

The director requires several disk images for provisioning overcloud nodes. Obtain these images and import them into the director:

[stack@ooounder ~]$ mkdir ~/images
[stack@ooounder ~]$ sudo yum install rhosp-director-images rhosp-director-images-ipa
[stack@ooounder ~]$ cd ~/images
[stack@ooounder ~]$ for i in /usr/share/rhosp-director-images/overcloud-full-latest-10.0.tar /usr/share/rhosp-director-images/ironic-python-agent-latest-10.0.tar; do tar -xvf $i; done
[stack@ooounder ~]$ openstack overcloud image upload --image-path /home/stack/images/

Check the list of images in the CLI:

[stack@ooounder ~]$ openstack image list
+--------------------------------------+------------------------+--------+
| ID                                   | Name                   | Status |
+--------------------------------------+------------------------+--------+
| 1c8325f3-e13b-454d-ae1d-119bd8656013 | bm-deploy-ramdisk      | active |
| da47e44f-6b59-49ef-8373-9e4e88bf2b46 | bm-deploy-kernel       | active |
| 0150b142-16aa-41e0-b1eb-808e6d070c15 | overcloud-full         | active |
| bd4f8a2d-9e05-4546-8a7d-654d7eea3c62 | overcloud-full-initrd  | active |
| aa2a737a-727e-4891-bb0e-0c9d8493fe49 | overcloud-full-vmlinuz | active |
+--------------------------------------+------------------------+--------+

Overcloud nodes require a nameserver. Find the ID of the subnet and update its DNS server:

[stack@ooounder ~]$ neutron subnet-list
+--------------------------------------+------+-----------------+---------------------------------------------------+
| id                                   | name | cidr            | allocation_pools                                  |
+--------------------------------------+------+-----------------+---------------------------------------------------+
| 50f5a202-3c87-4f3c-b57f-3e3ca086be1c |      | 192.168.24.0/24 | {"start": "192.168.24.4", "end": "192.168.24.15"} |
+--------------------------------------+------+-----------------+---------------------------------------------------+
[stack@ooounder ~]$ neutron subnet-update 50f5a202-3c87-4f3c-b57f-3e3ca086be1c --dns-nameservers list=true 8.8.8.8
Updated subnet: 50f5a202-3c87-4f3c-b57f-3e3ca086be1c


The director requires a node definition template, which you create manually. This file contains the hardware and power management details for your nodes. I have two nodes with known IPMI details. You also need to add the MAC addresses of the VMs' NICs for PXE booting.
Here is my example:

[stack@ooounder ~]$ cat instackenv.json
{
    "nodes":[
        {
            "mac":[
                "52:54:00:d6:b4:bb"
            ],
            "cpu":"2",
            "memory":"4096",
            "disk":"100",
            "arch":"x86_64",
            "name":"compute",
            "pm_type":"pxe_ipmitool",
            "pm_user":"admin",
            "pm_password":"openstack",
            "pm_addr":"192.168.122.1",
            "pm_port":"7001"
        },
        {
            "mac":[
                "52:54:00:63:d8:a9"
            ],
            "cpu":"2",
            "memory":"4096",
            "disk":"100",
            "arch":"x86_64",
            "name":"controller",
            "pm_type":"pxe_ipmitool",
            "pm_user":"admin",
            "pm_password":"openstack",
            "pm_addr":"192.168.122.1",
            "pm_port":"7002"
        }
    ]
}

Import it into the director using the following command:

[stack@ooounder ~]$ openstack baremetal import --json ~/instackenv.json
Started Mistral Workflow. Execution ID: c3370562-1121-4d50-89ea-1b09f6f4058f
Successfully registered node UUID 6b56c6f5-c1d8-40a3-8d38-597fac9a1425
Successfully registered node UUID f6d78cf5-0308-45e9-b2ca-e109f0667bc6
Started Mistral Workflow. Execution ID: 59827dfa-dfaa-4bbd-ad0a-abc30a96de95
Successfully set all nodes to available.

Assign the kernel and ramdisk images to all nodes:

[stack@ooounder ~]$ openstack baremetal configure boot

The nodes are now registered and configured in the director. View a list of these nodes in the CLI:

[stack@ooounder ~]$ openstack baremetal node list
+--------------------------------------+------------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name       | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+------------+---------------+-------------+--------------------+-------------+
| 6b56c6f5-c1d8-40a3-8d38-597fac9a1425 | compute    | None          | power off   | available          | False       |
| f6d78cf5-0308-45e9-b2ca-e109f0667bc6 | controller | None          | power off   | available          | False       |
+--------------------------------------+------------+---------------+-------------+--------------------+-------------+


Set all nodes to a managed state:

[stack@ooounder ~]$ for node in $(openstack baremetal node list -c UUID -f value) ; do openstack baremetal node manage $node ; done

Run the following commands one by one to inspect the hardware attributes of each node:

[stack@ooounder ~]$ openstack overcloud node introspect 6b56c6f5-c1d8-40a3-8d38-597fac9a1425 --provide
[stack@ooounder ~]$ openstack overcloud node introspect f6d78cf5-0308-45e9-b2ca-e109f0667bc6 --provide

This process causes each node to boot an introspection agent over PXE. This agent collects hardware data from the node and sends it back to the director.
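
Introspection takes several minutes per node. One way to follow its progress is to tail the inspector logs on the Undercloud; the systemd unit names below are my assumption for this release:

[stack@ooounder ~]$ sudo journalctl -l -u openstack-ironic-inspector -u openstack-ironic-inspector-dnsmasq -f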

After registering and inspecting the hardware of each node, tag the nodes into specific profiles. These profile tags match your nodes to flavors, and the flavors are in turn assigned to deployment roles:

[stack@ooounder ~]$ ironic node-update compute add properties/capabilities='profile:compute,boot_option:local'
+------------------------+-------------------------------------------------------------------------+
| Property               | Value                                                                   |
+------------------------+-------------------------------------------------------------------------+
| chassis_uuid           |                                                                         |
| clean_step             | {}                                                                      |
| console_enabled        | False                                                                   |
| created_at             | 2017-05-07T11:16:37+00:00                                               |
| driver                 | pxe_ipmitool                                                            |
| driver_info            | {u'ipmi_port': u'7001', u'ipmi_username': u'admin', u'deploy_kernel': u |
|                        | 'da47e44f-6b59-49ef-8373-9e4e88bf2b46', u'ipmi_address':                |
|                        | u'192.168.122.1', u'deploy_ramdisk': u'1c8325f3-e13b-454d-ae1d-         |
|                        | 119bd8656013', u'ipmi_password': u'******'}                             |
| driver_internal_info   | {}                                                                      |
| extra                  | {u'hardware_swift_object': u'extra_hardware-                            |
|                        | 6b56c6f5-c1d8-40a3-8d38-597fac9a1425'}                                  |
| inspection_finished_at | None                                                                    |
| inspection_started_at  | None                                                                    |
| instance_info          | {}                                                                      |
| instance_uuid          | None                                                                    |
| last_error             | None                                                                    |
| maintenance            | False                                                                   |
| maintenance_reason     | None                                                                    |
| name                   | compute                                                                 |
| network_interface      |                                                                         |
| power_state            | power off                                                               |
| properties             | {u'memory_mb': u'4096', u'cpu_arch': u'x86_64', u'local_gb': u'199',    |
|                        | u'cpus': u'2', u'capabilities': u'profile:compute,boot_option:local'}   |
| provision_state        | available                                                               |
| provision_updated_at   | 2017-05-07T12:40:07+00:00                                               |
| raid_config            |                                                                         |
| reservation            | None                                                                    |
| resource_class         |                                                                         |
| target_power_state     | None                                                                    |
| target_provision_state | None                                                                    |
| target_raid_config     |                                                                         |
| updated_at             | 2017-05-07T12:40:14+00:00                                               |
| uuid                   | 6b56c6f5-c1d8-40a3-8d38-597fac9a1425                                    |
+------------------------+-------------------------------------------------------------------------+
[stack@ooounder ~]$ ironic node-update controller add properties/capabilities='profile:control,boot_option:local'
+------------------------+-------------------------------------------------------------------------+
| Property               | Value                                                                   |
+------------------------+-------------------------------------------------------------------------+
| chassis_uuid           |                                                                         |
| clean_step             | {}                                                                      |
| console_enabled        | False                                                                   |
| created_at             | 2017-05-07T11:16:37+00:00                                               |
| driver                 | pxe_ipmitool                                                            |
| driver_info            | {u'ipmi_port': u'7002', u'ipmi_username': u'admin', u'deploy_kernel': u |
|                        | 'da47e44f-6b59-49ef-8373-9e4e88bf2b46', u'ipmi_address':                |
|                        | u'192.168.122.1', u'deploy_ramdisk': u'1c8325f3-e13b-454d-ae1d-         |
|                        | 119bd8656013', u'ipmi_password': u'******'}                             |
| driver_internal_info   | {}                                                                      |
| extra                  | {u'hardware_swift_object': u'extra_hardware-f6d78cf5-0308-45e9-b2ca-    |
|                        | e109f0667bc6'}                                                          |
| inspection_finished_at | None                                                                    |
| inspection_started_at  | None                                                                    |
| instance_info          | {}                                                                      |
| instance_uuid          | None                                                                    |
| last_error             | None                                                                    |
| maintenance            | False                                                                   |
| maintenance_reason     | None                                                                    |
| name                   | controller                                                              |
| network_interface      |                                                                         |
| power_state            | power off                                                               |
| properties             | {u'memory_mb': u'4096', u'cpu_arch': u'x86_64', u'local_gb': u'199',    |
|                        | u'cpus': u'2', u'capabilities': u'profile:control,boot_option:local'}   |
| provision_state        | available                                                               |
| provision_updated_at   | 2017-05-07T12:42:55+00:00                                               |
| raid_config            |                                                                         |
| reservation            | None                                                                    |
| resource_class         |                                                                         |
| target_power_state     | None                                                                    |
| target_provision_state | None                                                                    |
| target_raid_config     |                                                                         |
| updated_at             | 2017-05-07T12:43:01+00:00                                               |
| uuid                   | f6d78cf5-0308-45e9-b2ca-e109f0667bc6                                    |
+------------------------+-------------------------------------------------------------------------+
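
To check that each node was tagged with the intended profile, you can list the assignments; I assume the openstack overcloud profiles list command is available in this release:

[stack@ooounder ~]$ openstack overcloud profiles list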

Now everything is ready to start the configuration and installation of the overcloud.
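
As a preview of that next step, a minimal deploy command for a lab like this might look as follows; the flavor names are the defaults created during the Undercloud installation, and this is a sketch rather than a tested command:

[stack@ooounder ~]$ openstack overcloud deploy --templates \
      --control-scale 1 --compute-scale 1 \
      --control-flavor control --compute-flavor compute \
      --ntp-server pool.ntp.org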

17 February 2017

My first article in the series on Docker

My first article in the series on Docker was published in the Russian magazine "Системный администратор" 1/2017 (pp. 36-40). Read the full version in print: http://samag.ru/archive/article/3358