Tuesday, December 8, 2015

Cumulus VX on Libvirt (CentOS 6)

You may have seen my earlier post on Cumulus VX in libvirt. I'm going to go a step further and try to emulate the environment described here:

https://docs.cumulusnetworks.com/display/VX/Using+Cumulus+VX+with+KVM

There are some significant differences though, because libvirt on CentOS 6 is pretty "feature light". Nevertheless, this should give you something workable to play with.

You will need to build a system that can serve DHCP and HTTP on your 'default' network, and then disable (or just delete) the dhcp section in /etc/libvirt/qemu/networks/default.xml. This is because while libvirt can create manual DHCP entries (sometimes called "static DHCP" entries) using the 'host' declaration, it cannot pass DHCP option 239, which is required for Zero Touch Provisioning. Cue sad trombone noise.
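For reference, here is roughly what that looks like on my setup. The <dhcp> stanza below is the stock one libvirt puts in default.xml; pull it out and bounce the network. The dhcpd.conf lines assume your DHCP/HTTP box runs ISC dhcpd, that the 'default' network keeps its usual 192.168.122.0/24 addressing, and that your web server ends up at 192.168.122.101:1234; the option name 'cumulus-provision-url' is just a local label I picked for option 239, so adjust all of that to match your environment.

Remove this from /etc/libvirt/qemu/networks/default.xml, then restart libvirtd and bounce the network:

<dhcp>
  <range start='192.168.122.2' end='192.168.122.254'/>
</dhcp>

sudo service libvirtd restart
sudo virsh net-destroy default
sudo virsh net-start default

Then, in /etc/dhcp/dhcpd.conf on the DHCP/HTTP box:

option cumulus-provision-url code 239 = text;

subnet 192.168.122.0 netmask 255.255.255.0 {
  option routers 192.168.122.1;
  option cumulus-provision-url "http://192.168.122.101:1234/my_ztp_script.sh";
}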

Next, you will want to build 4 additional networks in the Virtual Machine Manager (localhost > Details > Virtual Networks). I built isolated networks named testnet1 thru testnet4, but you don't need to follow my silliness. Be sure to restart libvirtd after you've completed this so it picks up all your changes, and maybe back up your XML files too, because there are situations where libvirt can blow them away. Once your DHCP server is running on the 'default' network, build 4 manual DHCP entries using the eth0 MAC addresses you'll find below (the ones that land on virbr0).
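If you'd rather not click through virt-manager, the same kind of isolated network can be defined from the CLI with an XML file like the one below (one per testnet, changing the name and bridge; virbr101 is just an unused bridge name I made up). The host entries are what I mean by manual DHCP entries: only the MACs matter, and they match the eth0 NICs in the virt-install commands further down, while the fixed addresses are just example picks from the 'default' subnet.

<network>
  <name>testnet1</name>
  <bridge name='virbr101' stp='on' delay='0'/>
</network>

sudo virsh net-define testnet1.xml
sudo virsh net-autostart testnet1
sudo virsh net-start testnet1

And in dhcpd.conf, alongside the subnet block from earlier:

host leaf1   { hardware ethernet 00:01:00:00:01:01; fixed-address 192.168.122.11; }
host leaf2   { hardware ethernet 00:01:00:00:01:02; fixed-address 192.168.122.12; }
host spine1  { hardware ethernet 00:01:00:00:01:03; fixed-address 192.168.122.13; }
host spine2  { hardware ethernet 00:01:00:00:01:04; fixed-address 192.168.122.14; }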

You will follow the instructions in the Cumulus docs, but instead of the raw KVM commands you can use virt-install to launch the guests (adjust the paths to the qcow2 files you copied):

sudo virt-install --os-variant=generic --ram=256 --vcpus=1 --network bridge=virbr0,model=virtio,mac=00:01:00:00:01:01 --network network=testnet1,model=virtio,mac=00:00:02:00:00:11 --network network=testnet2,model=virtio,mac=00:00:02:00:00:12 --boot hd --disk path=/home/user1/leaf1.qcow2,format=qcow2 --name=leaf1

sudo virt-install --os-variant=generic --ram=256 --vcpus=1 --network bridge=virbr0,model=virtio,mac=00:01:00:00:01:02 --network network=testnet3,model=virtio,mac=00:00:02:00:00:21 --network network=testnet4,model=virtio,mac=00:00:02:00:00:22 --boot hd --disk path=/home/user1/leaf2.qcow2,format=qcow2 --name=leaf2

sudo virt-install --os-variant=generic --ram=256 --vcpus=1 --network bridge=virbr0,model=virtio,mac=00:01:00:00:01:03 --network network=testnet1,model=virtio,mac=00:00:02:00:00:31 --network network=testnet3,model=virtio,mac=00:00:02:00:00:32 --boot hd --disk path=/home/user1/spine1.qcow2,format=qcow2 --name=spine1

sudo virt-install --os-variant=generic --ram=256 --vcpus=1 --network bridge=virbr0,model=virtio,mac=00:01:00:00:01:04 --network network=testnet2,model=virtio,mac=00:00:02:00:00:41 --network network=testnet4,model=virtio,mac=00:00:02:00:00:42 --boot hd --disk path=/home/user1/spine2.qcow2,format=qcow2 --name=spine2
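Once all four guests are defined, a quick sanity check that the point-to-point wiring came out the way you intended is to list each domain's interfaces, for example:

sudo virsh domiflist leaf1
sudo virsh domiflist spine1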

I will admit that the configs described in the Cumulus docs indicate that swp1 thru swp3 would be provisioned, but the virtual hardware configured here only provides swp1 and swp2. I'm going to ignore swp3 for now.

If you built option 239 correctly on the DHCP scope, then you should be able to point all of these instances at your ZTP script. Just remember that you do not need to run your web server on the default port 80, as ZTP will accept a URL like 'http://192.168.122.101:1234/my_ztp_script.sh'.
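For what it's worth, the ZTP script itself can start out trivially small; the one thing to remember is the CUMULUS-AUTOPROVISIONING marker string, which the ZTP agent looks for before it will run the script. Something like this is enough to prove the plumbing works (the log line is just my own sanity check):

#!/bin/bash
# CUMULUS-AUTOPROVISIONING
# prove that ZTP fetched and ran this script
date >> /var/log/ztp-ran.log
exit 0

And if you don't feel like standing up a full web server, the Python that ships with CentOS 6 will happily serve the directory holding the script on port 1234:

cd /path/to/your/ztp/scripts
python -m SimpleHTTPServer 1234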

I haven't set up a configuration management tool on my instances yet, but that will be coming via my ZTP script.


Friday, December 4, 2015

Boot Cumulus VX in a libvirt KVM

Why would I want to do that?

Well, if you are running CentOS 6 and want an easy way to play with Cumulus VX, here is a good start.

On a CentOS 6 system running a GUI:
#yum groupinstall "Virtualization*"

(this will install all your libvirt bits and pieces)
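If libvirtd isn't already running after the group install, starting it and enabling it at boot should look like this on a stock CentOS 6 box, and virsh net-list is a quick check that the 'default' network came up:

sudo service libvirtd start
sudo chkconfig libvirtd on
sudo virsh net-list --all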

Then download the Cumulus VX KVM image (it is a qcow2 image) and remember the path.
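A quick check that the download is intact and really is qcow2 (the path is just where I happened to put mine):

qemu-img info /home/user/cumulus.qcow2

With that confirmed, fire up the guest: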

#sudo virt-install --os-variant=generic --ram=256 --vcpus=1 --network bridge=virbr0,model=virtio,mac=00:01:00:00:01:00 --boot hd --disk path=/home/user/cumulus.qcow2 --name=cumulus1

This only provisions the eth0 (management) interface, but it should get you started playing with the product.
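One way to find and reach the instance afterwards, assuming eth0 pulled a lease from the stock dnsmasq on the 'default' network (the lease file path below is the usual CentOS 6 location), is to match on the MAC you assigned and then ssh in as the cumulus user (the default password is in the Cumulus VX docs):

sudo virsh list
sudo grep 00:01:00:00:01:00 /var/lib/libvirt/dnsmasq/default.leases
ssh cumulus@<the address from the lease file>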

Enjoy!!!