Tuesday 30 January 2018

How to configure external connectivity for a CirrOS VM using Open vSwitch, in OpenStack installed on Oracle VirtualBox

Hi everyone. This post details the internals of network connectivity from a VM to the external world. The VM is built from a CirrOS image, and OpenStack (the Liberty release) is deployed all-in-one on a CentOS VM running in Oracle VirtualBox.


We will watch the changes in the OVS bridges as we proceed.

1) Initially, there is no VM and no network created.


[root@os ~(admin)]# nova list;
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+
[root@os ~(admin)]# neutron net-list

[root@os ~(admin)]# neutron subnet-list

[root@os ~(admin)]# neutron router-list

[root@os ~(admin)]# brctl show
bridge name     bridge id               STP enabled     interfaces
[root@os ~(admin)]#
[root@os ~(admin)]#

Initially, when there is no router and no network:

[root@os ~(admin)]# ovs-vsctl show
7a2fa294-0a32-4a8b-ac76-7a4456632c1c
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun ==============> interconnection with br-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-tun
        fail_mode: secure
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
    ovs_version: "2.4.0"
[root@os ~(admin)]# ip netns list

[root@os ~(admin)]#

2) Let's create an internal network and attach a subnet to it. The network type is "VXLAN".


[root@os ~(admin)]# neutron net-create internal_net
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 3bdec66c-db6c-4367-93fb-d3ce2d8bbe87 |
| mtu                       | 0                                    |
| name                      | internal_net                         |
| provider:network_type     | vxlan                                |
| provider:physical_network |                                      |
| provider:segmentation_id  | 45                                   |
| router:external           | False                                |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | e8c1a7908399409fa678e19cb3183b7b     |
+---------------------------+--------------------------------------+
[root@os ~(admin)]# neutron subnet-create internal_net 10.10.10.0/24 --name inetrnal_subnet --enable-dhcp --gateway 10.10.10.254
Created a new subnet:
+-------------------+------------------------------------------------+
| Field             | Value                                          |
+-------------------+------------------------------------------------+
| allocation_pools  | {"start": "10.10.10.1", "end": "10.10.10.253"} |
| cidr              | 10.10.10.0/24                                  |
| dns_nameservers   |                                                |
| enable_dhcp       | True                                           |
| gateway_ip        | 10.10.10.254                                   |
| host_routes       |                                                |
| id                | 9e01b1eb-5d21-41f3-b8a2-13e98da4f835           |
| ip_version        | 4                                              |
| ipv6_address_mode |                                                |
| ipv6_ra_mode      |                                                |
| name              | inetrnal_subnet                                |
| network_id        | 3bdec66c-db6c-4367-93fb-d3ce2d8bbe87           |
| subnetpool_id     |                                                |
| tenant_id         | e8c1a7908399409fa678e19cb3183b7b               |
+-------------------+------------------------------------------------+
[root@os ~(admin)]# ip netns
qdhcp-3bdec66c-db6c-4367-93fb-d3ce2d8bbe87  =======================>> DHCP namespace created

[root@os ~(admin)]# ip netns exec qdhcp-3bdec66c-db6c-4367-93fb-d3ce2d8bbe87 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
13: tap07381c2d-2f: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:19:9d:62 brd ff:ff:ff:ff:ff:ff
    inet 10.10.10.1/24 brd 10.10.10.255 scope global tap07381c2d-2f    ====================> DHCP has IP of 10.10.10.1
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe19:9d62/64 scope link
       valid_lft forever preferred_lft forever


[root@os ~(admin)]# ovs-vsctl show
7a2fa294-0a32-4a8b-ac76-7a4456632c1c
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port "tap07381c2d-2f"      =========================>> tab port created by dhcp for VM
            tag: 1
            Interface "tap07381c2d-2f"
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-tun
        fail_mode: secure
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
    ovs_version: "2.4.0"
[root@os ~(admin)]#

3) Spin up a CirrOS virtual machine.


[root@os ~(admin)]# nova boot cirros --flavor=m1.tiny --image=cirros --nic net-id=3bdec66c-db6c-4367-93fb-d3ce2d8bbe87
+--------------------------------------+------------------------------------------------+
| Property                             | Value                                          |
+--------------------------------------+------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                         |
| OS-EXT-AZ:availability_zone          |                                                |
| OS-EXT-SRV-ATTR:host                 | -                                              |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                              |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000002                              |
| OS-EXT-STS:power_state               | 0                                              |
| OS-EXT-STS:task_state                | scheduling                                     |
| OS-EXT-STS:vm_state                  | building                                       |
| OS-SRV-USG:launched_at               | -                                              |
| OS-SRV-USG:terminated_at             | -                                              |
| accessIPv4                           |                                                |
| accessIPv6                           |                                                |
| adminPass                            | Tim3qy2GeWYv                                   |
| config_drive                         |                                                |
| created                              | 2018-01-29T14:30:40Z                           |
| flavor                               | m1.tiny (5b9347ee-7a3b-4270-ac31-afdfd9c17eb0) |
| hostId                               |                                                |
| id                                   | 51e724d6-88ff-4033-bd83-0c288da65eb7           |
| image                                | cirros (1778216e-714d-4571-95a6-6ad83723d5de)  |
| key_name                             | -                                              |
| metadata                             | {}                                             |
| name                                 | cirros                                         |
| os-extended-volumes:volumes_attached | []                                             |
| progress                             | 0                                              |
| security_groups                      | default                                        |
| status                               | BUILD                                          |
| tenant_id                            | e8c1a7908399409fa678e19cb3183b7b               |
| updated                              | 2018-01-29T14:30:40Z                           |
| user_id                              | 808cb4bfeb944a088fe5b3977ac088d0               |
+--------------------------------------+------------------------------------------------+



[root@os ~(admin)]# brctl show
bridge name     bridge id               STP enabled     interfaces
qbra06bbf9c-ca          8000.7a3bbc5fbc73       no              qvba06bbf9c-ca
                                                        tapa06bbf9c-ca
[root@os ~(admin)]# ovs-vsctl show
7a2fa294-0a32-4a8b-ac76-7a4456632c1c
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port "qvoa06bbf9c-ca"            ===============> now br-int has connection with linux bridge
            tag: 1
            Interface "qvoa06bbf9c-ca"
        Port "tap07381c2d-2f"
            tag: 1
            Interface "tap07381c2d-2f"
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-tun
        fail_mode: secure
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
    ovs_version: "2.4.0"
[root@os ~(admin)]#
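
As an aside, the qbr/qvb/qvo/tap device names above all embed the first 11 characters of the VM's Neutron port ID, so they can be correlated with the port. A quick sketch (10.10.10.2 is the fixed IP this VM receives, as seen later):

[root@os ~(admin)]# neutron port-list | grep 10.10.10.2 =====> shows the port whose ID starts with a06bbf9c-ca
[root@os ~(admin)]# ovs-vsctl list-ports br-int         =====> the matching qvoa06bbf9c-ca appears here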

4) Let's create a router so the traffic can be routed to the outside world.


[root@os ~(admin)]# neutron router-create router

+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| distributed           | False                                |
| external_gateway_info |                                      |
| ha                    | False                                |
| id                    | 98a84588-9ce7-451a-9693-55eede6451f1 |
| name                  | router                               |
| routes                |                                      |
| status                | ACTIVE                               |
| tenant_id             | e8c1a7908399409fa678e19cb3183b7b     |
+-----------------------+--------------------------------------+
[root@os ~(admin)]# ip netns
qdhcp-3bdec66c-db6c-4367-93fb-d3ce2d8bbe87

5) Now add an interface to the router (the subnet of the internal network).


[root@os ~(admin)]# neutron router-interface-add router subnet=9e01b1eb-5d21-41f3-b8a2-13e98da4f835

[root@os ~(admin)]# ip netns
qrouter-98a84588-9ce7-451a-9693-55eede6451f1
qdhcp-3bdec66c-db6c-4367-93fb-d3ce2d8bbe87



[root@os ~(admin)]# ip netns exec qrouter-98a84588-9ce7-451a-9693-55eede6451f1 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
18: qr-9a6e1d84-3c: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:12:37:66 brd ff:ff:ff:ff:ff:ff
    inet 10.10.10.254/24 brd 10.10.10.255 scope global qr-9a6e1d84-3c==================================> this router port has the gateway IP of subnet created above
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe12:3766/64 scope link
       valid_lft forever preferred_lft forever
[root@os ~(admin)]#


[root@os ~(admin)]# ovs-vsctl show
7a2fa294-0a32-4a8b-ac76-7a4456632c1c
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port "qvoa06bbf9c-ca"
            tag: 1
            Interface "qvoa06bbf9c-ca"
        Port "tap07381c2d-2f"
            tag: 1
            Interface "tap07381c2d-2f"
                type: internal
        Port "qr-9a6e1d84-3c"
            tag: 1
            Interface "qr-9a6e1d84-3c"     ===============================>> created after router creation
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-tun
        fail_mode: secure
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
    ovs_version: "2.4.0"
[root@os ~(admin)]# brctl show
bridge name     bridge id               STP enabled     interfaces
qbra06bbf9c-ca          8000.7a3bbc5fbc73       no              qvba06bbf9c-ca
                                                        tapa06bbf9c-ca
[root@os ~(admin)]#

6) Since we are using OVS, let's configure the external bridge, br-ex, for external connectivity. The host machine where OpenStack is installed will act as the external server for this use case. To do this I will detach the native network interface, attach it to br-ex, flush the interface's IP address, and assign that address to br-ex.


        ifconfig enp0s3 0
        dhclient br-ex =====> make sure br-ex gets the same IP that enp0s3 had; if not, assign it manually:
        ifconfig br-ex 192.168.56.102/24 up
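
Note that the transcript above omits the command that actually attaches enp0s3 to br-ex; without it there is no external connectivity. A minimal sketch of the full sequence (the interface name enp0s3 and the address 192.168.56.102/24 are specific to this VirtualBox setup):

        ovs-vsctl add-port br-ex enp0s3 =====> enslave the host NIC to the external bridge
        ip addr flush dev enp0s3        =====> drop the IP from the physical interface
        ip addr add 192.168.56.102/24 dev br-ex
        ip link set br-ex up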

7) Create a public network 


[root@os ~(admin)]# neutron net-create public --provider:network_type vxlan --router:external=true
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 4c7d9cec-6013-488b-8d5d-0ed13c228876 |
| mtu                       | 0                                    |
| name                      | public                               |
| provider:network_type     | vxlan                                |
| provider:physical_network |                                      |
| provider:segmentation_id  | 44                                   |
| router:external           | True                                 |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | e8c1a7908399409fa678e19cb3183b7b     |
+---------------------------+--------------------------------------+
[root@os ~(admin)]#

[root@os ~(admin)]# neutron subnet-create public 192.168.56.0/24 --name pub-subnet --gateway 192.168.56.1 --disable-dhcp --allocation-pool start=192.168.56.200,end=192.168.56.210 =================================>> the allocation pool is mandatory here, as floating IPs will be assigned from it
Created a new subnet:
+-------------------+------------------------------------------------------+
| Field             | Value                                                |
+-------------------+------------------------------------------------------+
| allocation_pools  | {"start": "192.168.56.200", "end": "192.168.56.210"} |
| cidr              | 192.168.56.0/24                                      |
| dns_nameservers   |                                                      |
| enable_dhcp       | False                                                |
| gateway_ip        | 192.168.56.1                                         |
| host_routes       |                                                      |
| id                | 23db7b6d-8196-42e5-8283-9f38cfe9a1cb                 |
| ip_version        | 4                                                    |
| ipv6_address_mode |                                                      |
| ipv6_ra_mode      |                                                      |
| name              | pub-subnet                                           |
| network_id        | 4c7d9cec-6013-488b-8d5d-0ed13c228876                 |
| subnetpool_id     |                                                      |
| tenant_id         | e8c1a7908399409fa678e19cb3183b7b                     |
+-------------------+------------------------------------------------------+
[root@os ~(admin)]#

8) Set the gateway so the router gets an IP from the external network's allocation pool.

When one tenant network needs to talk to another, add both as interfaces on the router; but when the networks need to reach the external world, set the external network as the router's gateway.
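
The gateway-set command itself is not captured in the transcript below; on Liberty it is:

[root@os ~(admin)]# neutron router-gateway-set router public

This creates a qg- port in the router namespace and allocates 192.168.56.200 from the pool, as the following output shows.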

[root@os ~(admin)]# ip netns
qrouter-98a84588-9ce7-451a-9693-55eede6451f1
qdhcp-3bdec66c-db6c-4367-93fb-d3ce2d8bbe87
[root@os ~(admin)]# ip netns exec qrouter-98a84588-9ce7-451a-9693-55eede6451f1 bash
[root@os ~(admin)]# ifconfig
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

qg-75980405-14: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.56.200  netmask 255.255.255.0  broadcast 192.168.56.255
        inet6 fe80::f816:3eff:fe3c:22d4  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:3c:22:d4  txqueuelen 0  (Ethernet)
        RX packets 15  bytes 1715 (1.6 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 13  bytes 1074 (1.0 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

qr-9a6e1d84-3c: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.10.10.254  netmask 255.255.255.0  broadcast 10.10.10.255
        inet6 fe80::f816:3eff:fe12:3766  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:12:37:66  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 10  bytes 864 (864.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Now, from inside the router namespace, the external server is reachable:
[root@os ~(admin)]# ping 192.168.56.102
PING 192.168.56.102 (192.168.56.102) 56(84) bytes of data.
64 bytes from 192.168.56.102: icmp_seq=1 ttl=64 time=4.19 ms
64 bytes from 192.168.56.102: icmp_seq=2 ttl=64 time=0.247 ms

And from the DHCP namespace, the CirrOS VM is reachable:

[root@os ~(admin)]# ip netns
qrouter-98a84588-9ce7-451a-9693-55eede6451f1
qdhcp-3bdec66c-db6c-4367-93fb-d3ce2d8bbe87
[root@os ~(admin)]# ip netns exec qdhcp-3bdec66c-db6c-4367-93fb-d3ce2d8bbe87 bash
[root@os ~(admin)]# ifconfig
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 13  bytes 1408 (1.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 13  bytes 1408 (1.3 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

tap07381c2d-2f: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.10.10.1  netmask 255.255.255.0  broadcast 10.10.10.255
        inet6 fe80::f816:3eff:fe19:9d62  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:19:9d:62  txqueuelen 0  (Ethernet)
        RX packets 13  bytes 1522 (1.4 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 32  bytes 2497 (2.4 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@os ~(admin)]# ping 10.10.10.2
PING 10.10.10.2 (10.10.10.2) 56(84) bytes of data.
64 bytes from 10.10.10.2: icmp_seq=1 ttl=64 time=8.38 ms
64 bytes from 10.10.10.2: icmp_seq=2 ttl=64 time=3.51 ms

[root@os ~(admin)]# ssh cirros@10.10.10.2
cirros@10.10.10.2's password:
$ ifconfig
eth0      Link encap:Ethernet  HWaddr FA:16:3E:6B:C8:83
          inet addr:10.10.10.2  Bcast:10.10.10.255  Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:fe6b:c883/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1400  Metric:1
          RX packets:117 errors:0 dropped:0 overruns:0 frame:0
          TX packets:138 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:14765 (14.4 KiB)  TX bytes:14418 (14.0 KiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

From the VM, the external server is now reachable and SSH-able:

$ ping 192.168.56.102
PING 192.168.56.102 (192.168.56.102): 56 data bytes
64 bytes from 192.168.56.102: seq=0 ttl=63 time=24.940 ms
64 bytes from 192.168.56.102: seq=1 ttl=63 time=7.214 ms
^C
--- 192.168.56.102 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 7.214/16.077/24.940 ms
$ ssh root@192.168.56.102
root@192.168.56.102's password:
Last login: Tue Jan 30 03:02:34 2018 from 192.168.56.1
[root@os ~]# ls
anaconda-ks.cfg  ans.cfg  keystonerc_admin  liberty  sec-key.pem
[root@os ~]# hostname -I
192.168.56.102
[root@os ~]#


[root@os ~(admin)]# neutron router-show router
+-----------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field                 | Value                                                                                                                                                                                      |
+-----------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| admin_state_up        | True                                                                                                                                                                                       |
| distributed           | False                                                                                                                                                                                      |
| external_gateway_info | {"network_id": "4c7d9cec-6013-488b-8d5d-0ed13c228876", "enable_snat": true, "external_fixed_ips": [{"subnet_id": "23db7b6d-8196-42e5-8283-9f38cfe9a1cb", "ip_address": "192.168.56.200"}]} |
| ha                    | False                                                                                                                                                                                      |
| id                    | 98a84588-9ce7-451a-9693-55eede6451f1                                                                                                                                                       |
| name                  | router                                                                                                                                                                                     |
| routes                |                                                                                                                                                                                            |
| status                | ACTIVE                                                                                                                                                                                     |
| tenant_id             | e8c1a7908399409fa678e19cb3183b7b                                                                                                                                                           |
+-----------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
[root@os ~(admin)]#


9) So far we have made the external server reachable from the tenant VM, but the reverse is not yet true. To make the VM reachable from the external server, we need to create and associate a floating IP.

[root@os ~(admin)]# neutron  floatingip-create public
[root@os ~(admin)]# nova floating-ip-list
+--------------------------------------+----------------+-----------+----------+--------+
| Id                                   | IP             | Server Id | Fixed IP | Pool   |
+--------------------------------------+----------------+-----------+----------+--------+
| 68f29d37-c3ec-406e-b69a-c342efc7dcff | 192.168.56.201 | -         | -        | public |
+--------------------------------------+----------------+-----------+----------+--------+
[root@os ~(admin)]# nova floating-ip-associate cirros 192.168.56.201
[root@os ~(admin)]# nova list
+--------------------------------------+--------+--------+------------+-------------+-----------------------------------------+
| ID                                   | Name   | Status | Task State | Power State | Networks                                |
+--------------------------------------+--------+--------+------------+-------------+-----------------------------------------+
| 51e724d6-88ff-4033-bd83-0c288da65eb7 | cirros | ACTIVE | -          | Running     | internal_net=10.10.10.2, 192.168.56.201 |
+--------------------------------------+--------+--------+------------+-------------+-----------------------------------------+
[root@os ~(admin)]# ping 192.168.56.201
PING 192.168.56.201 (192.168.56.201) 56(84) bytes of data.
64 bytes from 192.168.56.201: icmp_seq=1 ttl=63 time=119 ms
64 bytes from 192.168.56.201: icmp_seq=2 ttl=63 time=4.15 ms

[root@os ~(admin)]# ssh cirros@192.168.56.201
The authenticity of host '192.168.56.201 (192.168.56.201)' can't be established.
RSA key fingerprint is 8e:b7:98:45:9c:57:ca:02:ae:82:21:a0:f8:9c:7b:1e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.56.201' (RSA) to the list of known hosts.
cirros@192.168.56.201's password:
$ ls
$ hostname
cirros
$ exit


[root@os ~(admin)]# ip netns exec qrouter-98a84588-9ce7-451a-9693-55eede6451f1 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
13: qr-9a6e1d84-3c: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:12:37:66 brd ff:ff:ff:ff:ff:ff
    inet 10.10.10.254/24 brd 10.10.10.255 scope global qr-9a6e1d84-3c
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe12:3766/64 scope link
       valid_lft forever preferred_lft forever
14: qg-75980405-14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:3c:22:d4 brd ff:ff:ff:ff:ff:ff
    inet 192.168.56.200/24 brd 192.168.56.255 scope global qg-75980405-14
       valid_lft forever preferred_lft forever
    inet 192.168.56.201/32 brd 192.168.56.201 scope global qg-75980405-14
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe3c:22d4/64 scope link
       valid_lft forever preferred_lft forever
[root@os ~(admin)]#
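
The floating IP is implemented as NAT in the router namespace. To inspect the actual rules (a sketch; expect a DNAT rule mapping 192.168.56.201 to 10.10.10.2 and a matching SNAT rule):

[root@os ~(admin)]# ip netns exec qrouter-98a84588-9ce7-451a-9693-55eede6451f1 iptables -t nat -S | grep 192.168.56.201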

Note that the floating IP lives as a /32 on the router's qg- port (seen above); inside the VM, eth0 still carries only the fixed IP:

$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc pfifo_fast qlen 1000
    link/ether fa:16:3e:6b:c8:83 brd ff:ff:ff:ff:ff:ff
    inet 10.10.10.2/24 brd 10.10.10.255 scope global eth0
    inet6 fe80::f816:3eff:fe6b:c883/64 scope link
       valid_lft forever preferred_lft forever
$


Tuesday 22 November 2016

OpenStack Services :: what runs where on MultiNode deployment

OpenStack is an open-source cloud infrastructure which is deployed as IaaS (Infrastructure as a Service). It was launched in July 2010 by NASA and Rackspace. OpenStack is a collection of various modules/projects. The first release of OpenStack, Austin, had only two components, Nova and Swift; later on many modules like Neutron, Horizon, Heat, Ceilometer, Glance, Cinder, and Keystone were added. The full list of OpenStack projects can be found at the link mentioned at the bottom of this page.

OpenStack can be deployed in various ways. All projects can be clubbed onto a single machine, which is called a SingleNode deployment, or projects can be functionally separated across machines: networking features hosted on one physical box, storage on a separate machine, compute on another, and so on. Each module or project of OpenStack consists of sub-services that run on machines according to the deployment model.

This page is based on a 4-node RDO deployment: one Controller node where the Nova, Keystone, Heat, Ceilometer, Cinder, and Glance services run; one Network node where only network-related services run; and two Compute nodes where the hypervisor services run.
Storage services can also be made to run on a separate physical box.
Below is the list of services on each node:

[Figure: OpenStack deployment topology]

Controller Node:


The Controller node has one dashboard service, one image store, and one identity service. This node also includes MySQL, RabbitMQ, and compute, block storage, and networking services.
  • neutron-server
    • Accepts API requests and then routes them to the appropriate Neutron plug-in for action.
  • ceilometer-alarm-evaluator
    • Determines when alarms fire due to the associated statistic trend crossing a threshold over a sliding time window.
  • ceilometer-alarm-notifier
    • Initiates alarm actions, such as calling out to a webhook with a description of the alarm state transition
  • ceilometer-api
    • Presents aggregated metering data to consumers such as billing engines and analytic tools
  • ceilometer-central
    • Polls public RESTful APIs of other OpenStack services such as Glance, Swift, Cinder, and Neutron to monitor resources
  • ceilometer-collector
    • Consumes AMQP notifications from agents and other OpenStack services and dispatches data to metering store
  • ceilometer-notification
    • A daemon that monitors the message bus for data provided by other OpenStack components such as Nova, Glance, Cinder, Neutron, Swift, Keystone, and Heat, as well as Ceilometer internal communication
  • cinder-api
    • Accepts API requests and routes them to cinder-volume
  • cinder-scheduler
    • Picks the optimal Block Storage provider node on which to create a volume
  • cinder-volume
    • Responds to requests and persists changes to the stateful database. It also interacts with other processes and can talk to a variety of storage providers through a driver architecture
  • glance-api
    • A server daemon that serves the Glance API, receiving RESTful requests from other components
  • glance-registry
    • Maintains image registries
  • heat-api-cfn
    • Provides AWS-style query API and processes API requests
  • heat-api
    • Provides OpenStack native REST API that processes requests
  • heat-engine
    • Orchestrates launch of templates and provides events back to API consumer
  • losetup
    • Creates loopback devices for Cinder
  • nova-api
    • Accepts and responds to end-user Compute API calls, processing the REST requests
  • nova-cert
    • nova-cert is a server daemon that serves the Nova Cert service for X509 certificates. It is used to generate certificates for euca-bundle-image and is only needed for the EC2 API.
  • nova-conductor
    • Proxies database connections primarily for Nova.
  • nova-consoleauth
    • Manages token authentication for both console proxies
  • nova-novncproxy
    • Allows Compute service to access instances through virtual network computing (VNC) clients
  • nova-scheduler
    • Handles VM instance request from the queue and determines where the VM should run [decides which host gets each instance]
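
A quick way to check which of these services are actually running on the Controller is to list the systemd units (a sketch, assuming the RDO unit naming convention):

    # systemctl list-units 'openstack-*' 'neutron-*' --type=service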

Network Node

The Network node runs the OpenStack Neutron module. Neutron provides "network connectivity as a service" between interface devices managed by other OpenStack services (most likely Nova). The service works by allowing users to create their own networks and then attach interfaces to them. Like many of the OpenStack services, Neutron is highly configurable due to its plug-in architecture.
Please be mindful that neutron-server runs on the Controller node, not here.
  • neutron-dhcp-agent
    • Provides DHCP services to tenant networks. This agent is the same across all plug-ins and is responsible for maintaining DHCP configuration. The neutron-dhcp-agent requires message queue access. Optional depending on plug-in.
  • neutron-l3-agent
    • Provides L3/NAT forwarding for external network access of VMs on tenant networks. Requires message queue access. Optional depending on plug-in.
  • neutron-metadata-agent:
    • Provides instances with their metadata by proxying requests to the Nova metadata service.
  • neutron-openvswitch-agent
    • Runs on each compute node and network node to manage the local Open vSwitch configuration. The plug-in that you use determines which agents run.

Compute Node1:

The Compute nodes are where VM instances run.
  • neutron-openvswitch-agent
    • Runs on each compute node and network node to manage the local Open vSwitch configuration. The plug-in that you use determines which agents run.
  • ceilometer-compute
    • Polls local libvirt daemon to acquire instance performance data and emits data as AMQP notifications
  • nova-compute
    • Creates and terminates VM instances via hypervisor APIs

Compute Node2:

  • neutron-openvswitch-agent
    • Runs on each compute node and network node to manage the local Open vSwitch configuration. The plug-in that you use determines which agents run.
  • ceilometer-compute
    • Polls local libvirt daemon to acquire instance performance data and emits data as AMQP notifications
  • nova-compute
    • Creates and terminates VM instances via hypervisor APIs
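
To confirm that the agents and services on all four nodes registered correctly, the standard listing commands help:

    # nova service-list  =====> nova-scheduler, nova-conductor, nova-compute etc., with their host and state
    # neutron agent-list =====> DHCP, L3, metadata and Open vSwitch agents, with their host and alive status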

Assumptions:

1) OpenStack release : Kilo
2) Only basic and important features are installed.
3) Operating System : RHEL7.1

References: 

OpenStack Command Line Basics

 

Configure a VM using command line in openstack

This page covers the basic operations in OpenStack using the command line. This is a beginner-level tutorial and explains only the fundamental commands for a novice. For more advanced commands please explore the links provided in the reference section at the bottom of this page. There are various ways to access the cloud: through the dashboard, the command line, an orchestrator, etc.

Assumptions:

1) OpenStack version used: Kilo
2) There are no VMs running and no networks or images created; basically the
    OpenStack cloud is freshly installed.
3) The image used for this demonstration is "cirros-0.3.4".
4) The tenant/project used is "admin", so all configuration will be reflected
    for the admin tenant only.

1) Source the .rc file for the admin user

To use the OpenStack cloud through the command line, we need to source the .rc file, which is keystonerc_admin. Sourcing this file sets the environment variables that are required to run OpenStack commands.

     #source keystonerc_admin
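
For reference, a packstack-generated keystonerc_admin typically looks roughly like this (the values here are illustrative placeholders):

     export OS_USERNAME=admin
     export OS_TENANT_NAME=admin
     export OS_PASSWORD=<admin-password>
     export OS_AUTH_URL=http://<controller-ip>:5000/v2.0/
     export PS1='[\u@\h \W(keystone_admin)]\$ '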

2) Create image:

Before launching an instance we need to specify the OS image, and this image has to be created and available in Glance.

     #glance image-create --name cirros --is-public True --disk-format qcow2 
       --container-format bare < /root/cirros-0.3.4-x86_64-disk.img
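
If the CirrOS image file is not already present on disk, it can be downloaded first from the official CirrOS site:

     #wget -O /root/cirros-0.3.4-x86_64-disk.img \
       http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img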

Image creation can be verified using:

     #glance image-list

3) Creating flavor:

There might be a few flavors already created after a fresh installation, like:
 
     #nova flavor-list

+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
 
Let's create our own:

     #nova flavor-create nano 6 256 2 1

    where :
        nano : is the name of the flavor
        6    : is the flavor ID, which can be any integer not already used
        256  : is the amount of RAM, in MB
        2    : is the amount of disk, in GB
        1    : is the number of virtual CPUs

      #nova flavor-list


+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
| 6  | nano      | 256       | 2    | 0         |      | 1     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+

4) Creating networks:

 There should be at least one network with a subnet before any VM can be launched.

      #neutron net-create private1
      #neutron net-create private2

      where:
          private1 and private2 are network names.

5) Creating and attaching subnets to both networks:

       #neutron subnet-create private1 10.10.10.0/24 --name private1-sub
       #neutron subnet-create private2 20.20.20.0/24 --name private2-sub

       where:
 
         ==> private1 and private2 are the network names which have already been created.
         ==> private1-sub and private2-sub are subnets attached to networks private1 and 
             private2 respectively.

6) Launch instance:

     I am launching two CirrOS VMs, one on each network:

     #nova boot vm1 --flavor nano --image cirros --nic 
                    net-id=729722af-cdb9-4e69-9c38-9d1520e8825e
 
     #nova boot vm2 --flavor nano --image cirros --nic 
                    net-id=eaeed01a-f9dd-489f-bc50-4ccb172de21e

       where:

               net-id : is the ID of network private1 or private2, respectively
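
The network IDs passed to --nic can be looked up beforehand:

     #neutron net-list =====> copy the id column values for private1 and private2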

7) Creating router:

 Since we have created two instances, vm1 and vm2, each on a different subnet, they need a
 router to communicate with each other. So let's create one.

     # neutron router-create router1

     where:
         router1 : is the name of the router.

8) Adding interfaces to router:

 Either ports or subnets can be added as router interfaces.

     # neutron router-interface-add router1 private1-sub
     # neutron router-interface-add router1 private2-sub

9) Adding security rules for ping and remote connection

By default, a security group named "default" is created when you install OpenStack using packstack. This group is applied when none is specified, and it does not allow the VMs to be pinged from outside, nor reached over SSH, because it blocks incoming ICMP and SSH traffic.
To make things work we need to add rules for ICMP and SSH.


     #openstack security group rule create default --protocol icmp --dst-port -1:-1 
      --src-ip 0.0.0.0/0
 
     #openstack security group rule create default --protocol tcp --dst-port 22:22 
      --src-ip 0.0.0.0/0
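
On Kilo, the same two rules can also be added with the legacy nova client, in case the unified openstack client is not available:

     #nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
     #nova secgroup-add-rule default tcp 22 22 0.0.0.0/0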

10) Verification

To check the VMs' status, type:

    #nova list

Both VMs (vm1 & vm2) should be in the ACTIVE state.

Now open the console of these VMs using the dashboard and log in. You can also ping one VM
from the other to verify that the router is functioning properly.
 
 

11) References:

          2) OpenStack Docs