Public Cloud vRack

Public Cloud instances also work with the vRack. This guide is based on an instance running Debian 8 (Jessie) as its OS. For an instance to use the vRack, the Public Cloud project it belongs to must first be added to the vRack, which can be done from the control panel.

Our guide assumes all instances reside in a single project in the SBG1 datacenter and use the VPS SSD 1 instance type. We will be using the IP range 192.168.0.0/16.

A prerequisite for this guide is that you have already followed the OpenStack guide.

The first step in getting the Public Cloud to work with the vRack is to define the private network. We can do this via the API at:

https://api.ovh.com/console/#/cloud/project/{serviceName}/network/private#POST

serviceName  = tenant/project ID

name         = the network name; we will call ours "vRack"

regions      = regions where the private network is activated. Omitting this parameter activates it in all regions (datacenters)

vlanId       = ID for the vlan, in our case we will be using 10
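Putting these parameters together, the request body for this POST call would look something like the following sketch (the name, region and vlanId shown are the values assumed in this guide; serviceName goes in the URL, not the body — adjust everything to your own setup):

```json
{
  "name": "vRack",
  "regions": ["SBG1"],
  "vlanId": 10
}
```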

While we now have the network, we still need to define the subnet for the private vRack network on the Public Cloud. We can do this via the API as well.

https://api.ovh.com/console/#/cloud/project/{serviceName}/network/private/{networkId}/subnet#POST

serviceName        = tenant/project ID

networkId          = the ID of the network created above. If our vRack in the manager is PN-100 and we specified the vlanId as 10, then our networkId is PN-100_10

dhcp               = left unset, as we will be assigning static IP addresses

end                = last IP address. With our IP block of 192.168.0.0/16, our end IP is 192.168.255.255

network            = our private network block, 192.168.0.0/16

noGateway          = left unset, because we will be using a gateway to connect the instances from other datacenters

region             = SBG1, as all instances in this project are in that datacenter

start              = Starting IP will be 192.168.0.2

The IP of 192.168.0.1 will be set as the gateway.
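Combined, the subnet creation call would use a request body along these lines (a sketch based on the values above; dhcp and noGateway are shown explicitly as false, corresponding to leaving them unset in the console, and networkId goes in the URL rather than the body):

```json
{
  "dhcp": false,
  "start": "192.168.0.2",
  "end": "192.168.255.255",
  "network": "192.168.0.0/16",
  "noGateway": false,
  "region": "SBG1"
}
```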

Now that we have defined our network for the Public Cloud under the vRack, we can verify it directly with the OpenStack client tools.

root@server-1:# nova net-list
+--------------------------------------+---------+------+
| ID                                   | Label   | CIDR |
+--------------------------------------+---------+------+
| 3f4e3b19-4a46-4672-aade-5654d1fc0705 | Ext-Net | None |
| 808b133f-4541-4c75-949a-e1922677bdc5 | vRack   | None |
+--------------------------------------+---------+------+

Here we can see two networks assigned. Ext-Net is the external network, which the instance uses for its public connections. The network named vRack is the private network for our Public Cloud.

We can also check the current status of the instance with:

root@server-1:# nova list
+-----------+----------+--------+------------+-------------+-------------------+
| ID        | Name     | Status | Task State | Power State | Networks          |
+-----------+----------+--------+------------+-------------+-------------------+
| 0989531b  | Server 1 | ACTIVE | -          | Running     | Ext-Net=PUBLIC IP |
+-----------+----------+--------+------------+-------------+-------------------+

To attach the new network to the instance, we execute the following command:

root@server-1:# nova interface-attach --net-id 808b133f-4541-4c75-949a-e1922677bdc5 Server\ 1

We can now add the static IP address to the Public Cloud instance with:

ip addr add 192.168.0.2/16 dev eth1

To bring the link up, we can run:

ip link set eth1 up

The instance now has its interface correctly set up for private networking.

If you wish to have the interface come up automatically at boot, you can configure it in /etc/network/interfaces. Since we created the subnet without DHCP, the interface must be configured statically; note that the interface name may be eth1 or ens8 depending on the image:

nano /etc/network/interfaces

auto eth1
iface eth1 inet static
    address 192.168.0.2
    netmask 255.255.0.0