I ran into an interesting problem recently and, once I finally figured out what was going on, was able to work around it.
I was using the Ansible 'ipify' facts module to determine the public IP address of the host I was provisioning. It worked great, and I used that fact to find the ethernet interface attached to that public IP address.
I ran it on a new host and it just didn't work.
After far too long troubleshooting, I discovered my problem. The IP address provided by 'ipify' was not the public IP address of the host I ran the playbook on; it was the public IP address of the HTTP proxy that the system was using (...doh!!!). Don't get me wrong, this is the correct behavior for an application that uses HTTP and respects the proxy settings. However, ipify has no way to override the proxy setting.
Here is how I solved it:
- name: delete public ip address file
  file:
    state: absent
    path: /tmp/public_ip_address

- name: set_facts | get my public IP
  get_url:
    url: http://ifconfig.co/ip
    use_proxy: no
    dest: /tmp/public_ip_address

- name: Slurp file with public ip address
  slurp:
    src: /tmp/public_ip_address
  register: slurpfile

- name: set_facts | set fact from ip address in slurped file
  set_fact:
    _public_ip_address: "{{ slurpfile['content'] | b64decode | ipaddr }}"

- name: set_facts | interface for public ip
  set_fact:
    public_interface: "{{ item }}"
  when: >
    (hostvars[inventory_hostname]['ansible_%s' % item]|default({}))
    .get('ipv4', {}).get('address') == _public_ip_address
    or
    _public_ip_address in ((hostvars[inventory_hostname]['ansible_%s' % item]|default({}))
    .get('ipv4_secondaries'))|map(attribute='address')|list
  with_items:
    - "{{ ansible_interfaces }}"
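As an aside, the same lookup could probably be collapsed into a single step, since the uri module also honors use_proxy and can return the response body directly. A rough sketch (untested, and the variable names are my own):

- name: set_facts | get my public IP, bypassing the proxy
  uri:
    url: http://ifconfig.co/ip
    use_proxy: no
    return_content: yes
  register: _ip_lookup

- name: set_facts | set fact from the returned body
  set_fact:
    _public_ip_address: "{{ _ip_lookup.content | trim }}"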
Many thanks to user larsks, who published a good way to iterate through the interfaces:
Stack Overflow discussion about iterating over interface details in Ansible
And shoutout to Flowroute ( https://www.flowroute.com/ )
Monday, January 29, 2018
Using Docker to solve my JAVA woes
Occasionally you encounter a problem that resurfaces every 6 months or so that you wish never would come back again.
One of those problems is running multiple versions of Java (for various reasons) and toggling between them. jEnv works pretty well on a Mac, but once you start talking about libraries and Java plugins it goes back to being complex.
For this last round, I took a different approach: what if I created a Docker container that had everything I needed for that task? Then I could reuse that container whenever I needed it and would not need to make any changes to my host system. My challenge this time around was connecting to an Avocent KVM switch that could only support a very old version of Java Web Start.
I started with a basic Debian Wheezy container, added all the Java parts that I needed, then set up VNC to connect to the container. Downloading and launching the container is pretty easy if you follow the directions here: https://hub.docker.com/r/paklids/jnlp-helper/ .
After that, there are a few other steps to follow.
#1 VNC to the container. I'm using RealVNC on a Mac, but you may use another method (my coworker used the Safari browser)
#2 After the window opens (using the correct password), launch Firefox from the text console
#3 Use Firefox to browse to your Avocent KVM (or whatever site you need the older Java for - like those that use JNLPs)
#4 Accept any SSL exceptions. Even if you resolve the SSL problems with your appliance, you will still need to use an older version of Java, which in itself is likely insecure. Now log in to your device.
#5 Click on any of the links that use Java Web Start (like JNLP links). Use the "Open with" context menu and select Browse.
#6 Show other applications and then select "IcedTea Java Web Start"
At this point your Java Webstart application should start. I'll try to publish my Dockerfile so that if you need to tweak for your own purposes then you have a good base to start from. Enjoy!
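Until that Dockerfile is published, here is a rough sketch of what such an image could look like. This is my own guess at a starting point, not the actual file behind paklids/jnlp-helper: it assumes a Debian Wheezy base with an old OpenJDK, IcedTea (which provides the Web Start launcher), Iceweasel as the browser, and Xvfb/x11vnc for the VNC side. Package names and the Java version may need adjusting for whatever your appliance demands.

# Hypothetical starting point - not the published jnlp-helper image
FROM debian:wheezy

# Old JRE, IcedTea Web Start, a browser, and a VNC-able X stack
RUN apt-get update && apt-get install -y \
    openjdk-7-jre icedtea-netx icedtea-7-plugin \
    iceweasel x11vnc xvfb fluxbox && \
    apt-get clean

EXPOSE 5900

# Start a virtual display, a minimal window manager, and the VNC server
# (change the VNC password before using this anywhere real)
CMD Xvfb :1 -screen 0 1280x1024x16 & \
    sleep 2 && fluxbox -display :1 & \
    x11vnc -display :1 -forever -passwd changeme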
Saturday, January 13, 2018
My first iPXE adventure!
I recently encountered the need to rebuild some bare-metal servers in one of our datacenters and fell into a strange requirement. I've built tons of PXE boot systems before (all different variations of kickstart) so I was stubborn when the recommendation for iPXE came along.
"I can do this with standard PXE!" I said...
I was wrong.
This time around I needed iPXE to pass arguments to a dynamic build system. Initially I was resistant, but then I bit the bullet and jumped into using iPXE. Boy was I glad, because it gave me flexibility that I hadn't had before. I'll document the process I used to configure my build box later, but first I wanted to post my working iPXE boot config (it also doubles as the boot menu):
#!ipxe
# pulled from http://boot.ipxe.org/undionly.kpxe
set store mybuildserver.example.com
prompt --key 0x02 --timeout 1000 Press Ctrl-B for the iPXE command line... && shell ||
:boot_menu
menu iPXE Boot Menu
item localboot Boot From Local Disk
item --gap -- --------- Operating Systems -------------
item ubuntu1604 Wipe and Install Ubuntu 16.04.3
item coreos Wipe and Install CoreOS
item --gap -- --------- Utilities -------------
item gparted GParted Partition Manager
item DBAN Darik's Boot and Nuke NOTE: at end press Alt-F4 and reboot
item --gap -- --------- iPXE tools -------------
item shell iPXE Shell
item reboot Reload iPXE
choose --default localboot --timeout 30000 target && goto ${target} ||
echo __NOTE: Cancel Enter Select Menu, Exit
exit
:localboot
sanboot --no-describe --drive 0x80 || goto boot_menu
:ubuntu1604
echo Starting Ubuntu Xenial installer for ${mac}
kernel http://${store}/ubuntu/install/netboot/ubuntu-installer/amd64/linux
initrd http://${store}/ubuntu/install/netboot/ubuntu-installer/amd64/initrd.gz
imgargs linux auto=true url=http://${store}/auto-16-preseed.cfg priority=critical preseed/interactive=false netcfg/choose_interface=enp3s0 vga=788 live-installer/net-image=http://${store}/ubuntu/install/filesystem.squashfs
boot || goto boot_menu
:coreos
kernel http://${store}/coreos/coreos_production_pxe.vmlinuz
initrd http://${store}/coreos/coreos_production_pxe_image.cpio.gz
imgargs coreos_production_pxe.vmlinuz coreos.first_boot=1 coreos.autologin coreos.config.url=http://${store}:8080/ignition?mac=${mac:hexhyp}
boot || goto boot_menu
:gparted
kernel http://${store}/gparted/live/vmlinuz
initrd http://${store}/gparted/live/initrd.img
imgargs vmlinuz boot=live config components union=overlay username=user noswap noeject ip= vga=788 fetch=http://${store}/gparted/live/filesystem.squashfs
boot || goto boot_menu
:DBAN
kernel http://${store}/dban/dban.bzi
#imgargs dban.bzi nuke="dwipe"
imgargs dban.bzi nuke="dwipe --autonuke --method=zero" silent
boot || goto boot_menu
:shell
echo __NOTE: Type 'config' enter iPXE config setting, 'exit' return to boot menu.
shell
goto boot_menu
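For completeness, the way a config like this usually gets loaded is by chainloading iPXE from plain PXE. I'll cover my build box setup in a later post, but a typical ISC dhcpd stanza looks something like the sketch below (the next-server address is a placeholder; the hostname matches the 'store' variable above):

# dhcpd.conf sketch - adjust addresses for your environment
next-server 192.168.1.10;            # TFTP server hosting undionly.kpxe
if exists user-class and option user-class = "iPXE" {
    # second DHCP pass: the client is already iPXE, hand it the boot script
    filename "http://mybuildserver.example.com/boot.ipxe";
} else {
    # first pass: the plain PXE ROM, chainload the iPXE binary
    filename "undionly.kpxe";
}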
Tuesday, October 3, 2017
A reliable public API for (close to) free!
Sorry for the lack of posts lately, but I've been busy churning out projects (both personal and business related).
I recently went through an exercise that will definitely benefit some of the communities that I interact with (HAD and CoffeeOps), and I felt obligated to share because I'm super excited about it.
In the world of IoT and scalable applications, I don't need to spend much time hyping the benefits of a simple API. Along those lines I've been playing around with the Serverless Framework to spin up services (in my case, AWS Lambda), but then I asked myself three distinct questions:
1. How small is "too small" for an API?
2. How cheap could I make a small API?
3. How reliable (or performant) could I make an API?
This sounds a lot like the old argument about hiring a contractor ("You can have cheap, good, or fast...but you can only pick 2 of the 3"), but it turns out that when you go small you can get pretty close to all three.
With the Serverless Framework, it is pretty easy to set up an API Gateway in AWS that passes input to a Lambda function. It's really small and granular, and AWS can host it on their infrastructure really easily. That covers the small part.
And here is the plus for us - it's really cheap! You pay only for the number of hits to your API Gateway, and most AWS accounts include a minimum number of Lambda invocations for free (usually in the thousands or millions). But my problem was the last part - where do you store your data? You could spin up an RDS instance like PostgreSQL or MySQL, a Redis database, or even DynamoDB. But can you go cheaper than that? Yes....yes you can.
AWS offers a simple key-value store, most often used for configuration data, called SSM Parameter Store. In that KV store you can store strings of up to 4096 characters (characters, not bytes, as far as I can tell from what I've found online), which is more than enough for my purposes. Standard Parameter Store usage should be a free built-in service, so that covers the whole cheap argument.
So finally we get to the reliable and fast part. Depending on how you write your function, you can have an API that completes well under 100ms. As for reliability, AWS has a long history of reliable services, and that isn't even counting that this could run in multiple regions (as long as you could have a way to sync your data). Reliable....check.
So how do you do it? Here is my very basic working example (no authentication in this example BTW...only a proof of concept):
Step 1:
Learn about and install the Serverless Framework here: https://serverless.com/ . This assumes you already have an AWS account to work with; if not, you'll need to set one up.
Step 2:
You can create a serverless project using the example templates, but to start out you only need three things: a directory to work in, a serverless.yml file, and a handler (I'm using Python for mine).
Step 3:
Create a Parameter in the SSM Parameter Store. It's in the AWS console under the EC2 section. It's not hard to find, and you only need to set an unencrypted String. I'm using the name 'myParam'.
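If you'd rather not click around the console, the same parameter can be created from the AWS CLI (assuming the CLI is configured for the same account and region):

aws ssm put-parameter --name "myParam" --type "String" --value "hello world"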
Step 4: A bit of code (edit your account & region as needed):
serverless.yml
# Welcome to Serverless!
#
service: param-api # NOTE: update this with your service name
# frameworkVersion: "=X.X.X"

provider:
  name: aws
  runtime: python2.7
  stage: dev
  region: us-west-2
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "ssm:GetParameters"
        - "ssm:PutParameter"
      Resource:
        - "arn:aws:ssm:${self:provider.region}:${accountId}:parameter/myParam"

functions:
  get_myParam:
    handler: handler.get_myParam
    events:
      - http:
          path: get
          method: get
  put_myParam:
    handler: handler.put_myParam
    events:
      - http:
          path: put
          method: put
and my python functions in handler.py
import json
import boto3


def get_myParam(event, context):
    ssmResponse = boto3.client('ssm').get_parameters(
        Names=['myParam'],
        WithDecryption=False
    )
    body = {
        "message": ssmResponse['Parameters'],
        "input": event['body']
    }
    response = {
        "statusCode": 200,
        "body": json.dumps(body)
    }
    return response


def put_myParam(event, context):
    ssmResponse = boto3.client('ssm').put_parameter(
        Name='myParam',
        Description='this is a test',
        Value=event['body'],
        Type='String',
        Overwrite=True
    )
    response = {
        "statusCode": 200,
        "body": json.dumps(ssmResponse)
    }
    return response
Step 5:
Deploy using serverless. At the CLI: "serverless deploy -v"
Step 6:
Test. Curl against the URL that you were given after your deploy completed. This API now supports get and put, but you could add whatever you want as far as HTTP method and authentication.
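For example (the hostname below is a placeholder; use the endpoint URL that 'serverless deploy -v' printed for your stage):

# read the current value of myParam
curl https://abc123.execute-api.us-west-2.amazonaws.com/dev/get

# overwrite myParam with a new value
curl -X PUT -d 'my new value' https://abc123.execute-api.us-west-2.amazonaws.com/dev/put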
Step 7:
Profit!
What could you do with this? Any number of things. Need to share data across IoT devices? Check. Need to build a global service when you don't have datacenters across the globe? Check. The advantage is that the investment is very low and it scales extremely high (think of it on the micro level, like owning 50 million insects that you control all over the world).
Many thanks to my employer (https://www.flowroute.com/ ) for letting me work on this and my coworker Reed for helping me through the Python bits and bobs.
Also, thanks to the Seattle CoffeeOps meetup https://www.meetup.com/Seattle-CoffeeOps/ for listening to me rant about this stuff even though they are asking "What is he talking about ?!?"
Wednesday, March 2, 2016
Setting up an apt-mirror for Cumulus Linux
You may want to set up a local apt repository to pull packages from for your Cumulus infrastructure.
I followed the instructions found here:
https://www.howtoforge.com/local_debian_ubuntu_mirror
But I did find a few gotchas that I had to fix before I was able to use my local repo. Here is a sample of my /etc/apt/mirror.list:
############# config ##################
#
# set base_path /var/spool/apt-mirror
#
# set mirror_path $base_path/mirror
# set skel_path $base_path/skel
# set var_path $base_path/var
# set cleanscript $var_path/clean.sh
# set defaultarch
# set postmirror_script $var_path/postmirror.sh
# set run_postmirror 0
set nthreads 20
set _tilde 0
#
############# end config ##############
deb http://repo.cumulusnetworks.com CumulusLinux-2.5 main addons updates
deb http://repo.cumulusnetworks.com CumulusLinux-2.5 security-updates
# Uncomment the next line to get access to the testing component
# deb http://repo.cumulusnetworks.com CumulusLinux-2.5 testing
# Uncomment the next line to get access to the Cumulus community repository
# deb http://repo.cumulusnetworks.com/community/ CumulusLinux-Community-2.5 main addons updates
deb http://ftp.us.debian.org/debian wheezy-backports main
deb http://ftp.us.debian.org/debian/ wheezy main
# mirror additional architectures
#deb-alpha http://ftp.us.debian.org/debian unstable main contrib non-free
#deb-amd64 http://ftp.us.debian.org/debian unstable main contrib non-free
#deb-armel http://ftp.us.debian.org/debian unstable main contrib non-free
#deb-hppa http://ftp.us.debian.org/debian unstable main contrib non-free
#deb-i386 http://ftp.us.debian.org/debian unstable main contrib non-free
#deb-ia64 http://ftp.us.debian.org/debian unstable main contrib non-free
#deb-m68k http://ftp.us.debian.org/debian unstable main contrib non-free
#deb-mips http://ftp.us.debian.org/debian unstable main contrib non-free
#deb-mipsel http://ftp.us.debian.org/debian unstable main contrib non-free
#deb-powerpc http://ftp.us.debian.org/debian unstable main contrib non-free
#deb-s390 http://ftp.us.debian.org/debian unstable main contrib non-free
#deb-sparc http://ftp.us.debian.org/debian unstable main contrib non-free
clean http://ftp.us.debian.org/debian
I was able to use the instructions in the howto to build the mirror and present it via http. I did have some problems using it at first. Notice the error:
W: Failed to fetch http://mydebianrepo.mycompany.com/debian/dists/wheezy-backports/main/i18n/Translation-en Hash Sum mismatch
And then I read this:
http://askubuntu.com/questions/217502/speed-up-apt-get-update-by-removing-known-ignored-translation-en
So to fix this, put the line:
Acquire::Languages "none";
on the Cumulus node at the end of the conf file /etc/apt/apt.conf.d/70debconf
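One quick way to append it (on the Cumulus node):

echo 'Acquire::Languages "none";' | sudo tee -a /etc/apt/apt.conf.d/70debconf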
Here is my /etc/apt/sources.list
#
#
deb http://mydebianrepo.mycompany.com/CumulusLinux CumulusLinux-2.5 main addons updates
deb http://mydebianrepo.mycompany.com/CumulusLinux CumulusLinux-2.5 security-updates
# Uncomment the next line to get access to the testing component
# deb http://repo.cumulusnetworks.com CumulusLinux-2.5 testing
# Uncomment the next line to get access to the Cumulus community repository
# deb http://repo.cumulusnetworks.com/community/ CumulusLinux-Community-2.5 main addons updates
deb http://mydebianrepo.mycompany.com/debian wheezy-backports main
deb http://mydebianrepo.mycompany.com/debian wheezy main
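With both files in place, the day-to-day workflow is just a periodic apt-mirror run on the mirror host and a normal apt-get update on the switch. Something along these lines (paths assume the default apt-mirror layout from the howto):

# on the mirror host (typically from cron)
sudo apt-mirror /etc/apt/mirror.list

# on the Cumulus node, confirm everything resolves against the local mirror
sudo apt-get update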
Tuesday, March 1, 2016
Puppet 3 module for Cumulus Linux License
Here is a Puppet 3 module for applying the Cumulus Linux license to your nodes. You can change the logic to use $hostname, $fqdn, or any other Facter fact, but you will want to put some logic in there for future growth (i.e., will you ever move to 40Gb switches?).
The license file is dropped on the node at /etc/license. Be sure to name your module directory the SAME AS the name of the class in ../modulename/manifests/init.pp
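For reference, the module layout I'm describing looks roughly like this (the license file names match the source lines in the manifest below; yours will differ):

cumulus_license/
    manifests/
        init.pp
    files/
        my_10Gb_license
        my_1Gb_license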
# manifests/init.pp
class cumulus_license {

  ## BASIC CHECK - ARE 10GB SWITCHES THIS SETUP? THEN USE 10GB LICENSE
  ## OTHERWISE USE OTHER FACTER FACT TO DECIDE
  if ($architecture == 'x86_64') and ($operatingsystem == 'CumulusLinux') {
    file { '/etc/license':
      ensure => present,
      owner  => root,
      group  => root,
      source => 'puppet:///modules/cumulus_license/my_10Gb_license',
    }
  }

  ## AND USE THIS IDENTIFIER TO DECIDE ON 1GB LICENSE
  if ($architecture == 'ppc') and ($operatingsystem == 'CumulusLinux') {
    file { '/etc/license':
      ensure => present,
      owner  => root,
      group  => root,
      source => 'puppet:///modules/cumulus_license/my_1Gb_license',
    }
  }

  ## A CHANGE TO THE LICENSE FILE NOTIFIES THIS EXEC
  ## NOW RUN THE LICENSE COMMAND AND IF IT SUCCEEDS THEN DO NOTHING
  ## IF IT FAILS THEN THE HARDWARE IS UNLICENSED AND WILL INSTALL FROM PUPPET
  exec { 'cl-license-command':
    command     => '/usr/cumulus/bin/cl-license -i /etc/license',
    unless      => '/usr/cumulus/bin/cl-license',
    subscribe   => File['/etc/license'],
    refreshonly => true,
  }
}
A few notes about operation:
1. First the Puppet run validates that there is an existing valid license on the switch. If there is, it does nothing. If you want to apply a new license, update Puppet and run the cl-license command manually.
2. If there is no active license then it tries to apply the license found at /etc/license (pushed there by puppet). You can rename this to whatever you need.
3. Make sure your Cumulus license files are in the files directory of the module. These are just simple text files that get pulled in by the cl-license command during the run.
4. You /may/ have to reboot the switch after the new license is applied. Please refer to Cumulus docs for guidance there.
Tuesday, December 8, 2015
Cumulus VX on Libvirt (CentOS 6)
You may have seen my earlier post on Cumulus VX in libvirt. I'm going to go a step further and try to emulate the environment described here:
https://docs.cumulusnetworks.com/display/VX/Using+Cumulus+VX+with+KVM
There are some pretty significant differences though, because libvirt on CentOS 6 is pretty "feature light". Nevertheless, this should result in something to work with.
You will need to build a system that can serve DHCP and HTTP on your 'default' network, and then disable (or delete) the dhcp section in /etc/libvirt/qemu/networks/default.xml . This is because, while libvirt can create manual DHCP entries (sometimes called "static DHCP" entries) using the 'host' declaration, you will not be able to pass DHCP option 239, which is required for Zero Touch Provisioning. Cue sad trombone noise.
Next, you will want to build 4 additional networks in the Virtual Machine Manager ( localhost>details>Virtual Networks ). I built isolated networks named testnet1 thru testnet4, but you don't need to follow my silliness. Be sure to restart libvirtd after you've completed this to be sure that it has picked up all your changes. Maybe backup your XML files too, because there are situations where libvirt can blow them away. Once your DHCP server is running on the 'default' network then build 4 manual DHCP entries using the MAC addresses that you'll find below for eth0 (the ones that are on virbr0).
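Here is a sketch of what those manual entries can look like in ISC dhcpd, including the option 239 definition for ZTP. The option name is arbitrary (only the code matters), and the fixed addresses are examples on the 192.168.122.0/24 default network:

# dhcpd.conf sketch - define option 239 and pin each guest's eth0
option cumulus-provision-url code 239 = text;

host leaf1 {
    hardware ethernet 00:01:00:00:01:01;
    fixed-address 192.168.122.111;
    option cumulus-provision-url "http://192.168.122.101:1234/my_ztp_script.sh";
}
host leaf2 {
    hardware ethernet 00:01:00:00:01:02;
    fixed-address 192.168.122.112;
    option cumulus-provision-url "http://192.168.122.101:1234/my_ztp_script.sh";
}
# ...repeat for spine1 (00:01:00:00:01:03) and spine2 (00:01:00:00:01:04)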
You will follow the instructions in the cumulus docs but instead of the KVM commands you can use virt-install to initiate the guests (adjust path to qcow2 files you copied):
sudo virt-install --os-variant=generic --ram=256 --vcpus=1 --network bridge=virbr0,model=virtio,mac=00:01:00:00:01:01 --network network=testnet1,model=virtio,mac=00:00:02:00:00:11 --network network=testnet2,model=virtio,mac=00:00:02:00:00:12 --boot hd --disk path=/home/user1/leaf1.qcow2,format=qcow2 --name=leaf1
sudo virt-install --os-variant=generic --ram=256 --vcpus=1 --network bridge=virbr0,model=virtio,mac=00:01:00:00:01:02 --network network=testnet3,model=virtio,mac=00:00:02:00:00:21 --network network=testnet4,model=virtio,mac=00:00:02:00:00:22 --boot hd --disk path=/home/user1/leaf2.qcow2,format=qcow2 --name=leaf2
sudo virt-install --os-variant=generic --ram=256 --vcpus=1 --network bridge=virbr0,model=virtio,mac=00:01:00:00:01:03 --network network=testnet1,model=virtio,mac=00:00:02:00:00:31 --network network=testnet3,model=virtio,mac=00:00:02:00:00:32 --boot hd --disk path=/home/user1/spine1.qcow2,format=qcow2 --name=spine1
sudo virt-install --os-variant=generic --ram=256 --vcpus=1 --network bridge=virbr0,model=virtio,mac=00:01:00:00:01:04 --network network=testnet2,model=virtio,mac=00:00:02:00:00:41 --network network=testnet4,model=virtio,mac=00:00:02:00:00:42 --boot hd --disk path=/home/user1/spine2.qcow2,format=qcow2 --name=spine2
I will admit that the configs described in the Cumulus docs indicate that swp1 through swp3 would be provisioned, but the virtual hardware configured here only provides swp1 and swp2. I'm going to ignore swp3 for now.
If you built option 239 correctly on the DHCP scope, then you should be able to point all these instances to your ZTP script. Just remember that you do not need to run your web server on the default port 80, as ZTP will accept a path like 'http://192.168.122.101:1234/my_ztp_script.sh'
I haven't set up a configuration management tool on my instances yet, but that will be upcoming via my ZTP script.
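Nothing fancy there yet. For anyone following along, a ZTP script is just a shell script served over HTTP; my understanding is that Cumulus only executes it if it contains the CUMULUS-AUTOPROVISIONING marker. A bare-bones placeholder might look like this (the filename matches the URL above):

#!/bin/bash
# CUMULUS-AUTOPROVISIONING
# my_ztp_script.sh - bare-bones sketch; extend with a config management bootstrap
echo "ZTP ran at $(date)" >> /home/cumulus/ztp.log
exit 0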