Monday, January 29, 2018

Using Docker to solve my JAVA woes

Occasionally you encounter a problem that resurfaces every six months or so and that you wish would never come back. One of those problems is running multiple versions of Java (for various reasons) and toggling between those versions. jEnv works pretty well on a Mac, but once libraries and Java browser plugins get involved, things go back to being complex.

For this last round, I took a different approach: what if I created a Docker container that had everything I needed for the task? Then I could reuse that container whenever I needed it, without making any changes to my host system. My challenge this time around was connecting to an Avocent KVM switch that only supported a very old version of Java Web Start.

I started with a basic Debian Wheezy container, added all the Java parts that I needed, then set up VNC to connect to the container. Downloading and launching the container is pretty easy if you follow the directions here.
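If it helps, launching such a container looks roughly like the sketch below; the image name and VNC port are hypothetical placeholders, not the actual image from this post:

# image name and port mapping are hypothetical; substitute your own build
docker pull example/debian-wheezy-java-vnc
docker run -d -p 5901:5901 --name oldjava example/debian-wheezy-java-vnc
# then point your VNC client at localhost:5901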

After that, there are a few other steps to follow.

#1 VNC to the container. I'm using RealVNC on a Mac, but you may use another method (my coworker used the Safari browser).

#2 After the window opens (once you've entered the correct password), launch Firefox from the text console.
#3 Use Firefox to browse to your Avocent KVM (or whatever site you need the older Java for, like those that use JNLPs).
#4 Accept any SSL exceptions. Even if you resolve the SSL problems with your appliance, you will still need to use an older version of Java, which is itself likely insecure. Now log in to your device.
#5 Click on any of the links that use Java Web Start (like JNLP links). Use the "Open with" context menu and select browse.
#6 Show other applications and then select "IcedTea Java Web Start".
At this point your Java Web Start application should start. I'll try to publish my Dockerfile so that if you need to tweak it for your own purposes you have a good base to start from. Enjoy!

Saturday, January 13, 2018

My first iPXE adventure!

I recently encountered the need to rebuild some bare-metal servers in one of our datacenters and ran into a strange requirement. I've built tons of PXE boot systems before (all different variations of kickstart), so I was stubborn when the recommendation for iPXE came along.

"I can do this with standard PXE!" I said...

I was wrong.

This time around I needed iPXE to pass arguments to a dynamic build system. Initially, I was resistant. Then I bit the bullet and jumped into using iPXE. Boy, was I glad, because it gave me flexibility that I hadn't had before. I'll document the process I used to configure my build box, but first I wanted to post my working iPXE boot config (which also serves as the boot menu):


#!ipxe
# pulled from
set store

prompt --key 0x02 --timeout 1000 Press Ctrl-B for the iPXE command line... && shell ||

:boot_menu
menu iPXE Boot Menu
item localboot  Boot From Local Disk
item --gap --   --------- Operating Systems -------------
item ubuntu1604 Wipe and Install Ubuntu 16.04.3
item coreos     Wipe and Install CoreOS
item --gap --   ---------     Utilities     -------------
item gparted    GParted Partition Manager
item DBAN       Darik's Boot and Nuke      NOTE: at end, press Alt-F4 and reboot
item --gap --   ---------     iPXE tools    -------------
item shell      iPXE Shell
item reboot     Reload iPXE

choose --default localboot --timeout 30000 target && goto ${target} ||
echo NOTE: menu cancelled or timed out, falling through to local boot

:localboot
sanboot --no-describe --drive 0x80 || goto boot_menu

:ubuntu1604
echo Starting Ubuntu Xenial installer for ${mac}
kernel http://${store}/ubuntu/install/netboot/ubuntu-installer/amd64/linux
initrd http://${store}/ubuntu/install/netboot/ubuntu-installer/amd64/initrd.gz
imgargs linux auto=true url=http://${store}/auto-16-preseed.cfg priority=critical preseed/interactive=false netcfg/choose_interface=enp3s0 vga=788 live-installer/net-image=http://${store}/ubuntu/install/filesystem.squashfs 
boot || goto boot_menu

:coreos
kernel http://${store}/coreos/coreos_production_pxe.vmlinuz
initrd http://${store}/coreos/coreos_production_pxe_image.cpio.gz
imgargs coreos_production_pxe.vmlinuz coreos.first_boot=1 coreos.autologin coreos.config.url=http://${store}:8080/ignition?mac=${mac:hexhyp}
boot || goto boot_menu

:gparted
kernel http://${store}/gparted/live/vmlinuz
initrd http://${store}/gparted/live/initrd.img
imgargs vmlinuz boot=live config components union=overlay username=user noswap noeject ip= vga=788 fetch=http://${store}/gparted/live/filesystem.squashfs
boot || goto boot_menu

:DBAN
kernel http://${store}/dban/dban.bzi
#imgargs dban.bzi nuke="dwipe"
imgargs dban.bzi nuke="dwipe --autonuke --method=zero" silent
boot || goto boot_menu

:shell
echo NOTE: Type 'config' to enter the iPXE config settings, or 'exit' to return to the boot menu.
shell
goto boot_menu

:reboot
reboot
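As for getting a script like this in front of your machines, one common approach is to embed it in an iPXE binary that your existing DHCP/TFTP setup hands out. A minimal sketch, assuming the menu above is saved as boot.ipxe:

# build an iPXE binary with the menu script above embedded
git clone
cd ipxe/src
make bin/undionly.kpxe EMBED=/path/to/boot.ipxe
# then serve bin/undionly.kpxe as the DHCP 'filename' for legacy PXE clients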

Tuesday, October 3, 2017

A reliable public API for (close to) free!

Sorry for the lack of posts lately, but I've been busy churning out projects (both personal and business-related).

I recently went through an exercise that will definitely benefit some of the communities I interact with (HAD and CoffeeOps), and I felt obligated to share because I'm super excited about it.

In the world of IoT and scalable applications, I don't need to spend much time hyping the benefits of a simple API. Along those lines, I've been playing around with the Serverless Framework to spin up services (in my case, AWS Lambda), but then I asked myself three distinct questions:

1. How small is "too small" for an API?
2. How cheap could I make a small API?
3. How reliable (or performant) could I make an API?

This seems very similar to the old saying about hiring a contractor ("You can have cheap, good, or fast... but you can only pick two of the three"), but it turns out that when you go small you can get pretty close to all three.

With the Serverless Framework, it is pretty easy to set up an API Gateway in AWS that passes input to a Lambda function. It's really small and granular, and AWS can host it on their infrastructure easily. That covers the small part.

And here is the plus for us: it's really cheap! You pay only for the number of hits to your API Gateway, and most AWS accounts include a generous number of free Lambda invocations each month (on the order of a million requests). But my problem was the last part: where do you store your data? You could spin up an RDS instance (PostgreSQL or MySQL), a Redis database, or even DynamoDB. But can you go cheaper than that? Yes... yes you can.

AWS offers a simple key-value store, most often used for configuration data, called the SSM Parameter Store. In that KV store, you can store strings of up to 4096 characters (I don't think it is bytes, according to what I've found online), which is more than enough for my purpose. Using Parameter Store should be a free, built-in service, so that covers the whole cheap argument.

So finally we get to the reliable and fast part. Depending on how you write your function, you can have an API call that completes in well under 100ms. As for reliability, AWS has a long history of reliable services, and that isn't even counting the fact that this could run in multiple regions (as long as you have a way to sync your data). Reliable... check.

So how do you do it? Here is my very basic working example (no authentication in this example, by the way... it's only a proof of concept):

Step 1: 
Learn about and install the Serverless Framework here. I'm assuming you already have an AWS account to work with; if not, you'll need to set one up.

Step 2:  
You can create a serverless project using the example templates, but to start out you only need three things: a directory to work in, a serverless.yml file, and a handler (I'm using Python for mine).
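If you want a head start, the framework can scaffold these files from a template (the path/service name here is arbitrary):

# generate a python-based service skeleton (path name is arbitrary)
serverless create --template aws-python --path param-api
cd param-api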

Step 3: 
Create a parameter in the SSM Parameter Store. It's in the AWS console under the EC2 section. It's not hard to find, and you only need to set an unencrypted string. I'm using the name 'myParam'.
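If you prefer the CLI over the console, this should create the same parameter (the value is just a placeholder):

# create an unencrypted String parameter named myParam (placeholder value)
aws ssm put-parameter --name myParam --type String --value "hello world" --region us-west-2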

Step 4: A bit of code (edit your account & region as needed):


# Welcome to Serverless!

service: param-api # NOTE: update this with your service name

# frameworkVersion: "=X.X.X"

provider:
  name: aws
  runtime: python2.7
  stage: dev
  region: us-west-2
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "ssm:GetParameters"
        - "ssm:PutParameter"
      Resource:
        - "arn:aws:ssm:${self:provider.region}:${accountId}:parameter/myParam"

functions:
  get_myParam:
    handler: handler.get_myParam
    events:
      - http:
          path: get
          method: get

  put_myParam:
    handler: handler.put_myParam
    events:
      - http:
          path: put
          method: put

and my Python functions in

import json
import boto3

def get_myParam(event, context):
    # look up the parameter by name; 'myParam' matches the one created in Step 3
    ssmResponse = boto3.client('ssm').get_parameters(
        Names=['myParam']
    )
    body = {
        "message": ssmResponse['Parameters'],
        "input": event['body']
    }

    response = {
        "statusCode": 200,
        "body": json.dumps(body)
    }

    return response

def put_myParam(event, context):
    # write the request body back to the parameter; the Name/Value/Type/Overwrite
    # arguments are my assumptions, since the original post elided them
    ssmResponse = boto3.client('ssm').put_parameter(
        Name='myParam',
        Description='this is a test',
        Value=event['body'],
        Type='String',
        Overwrite=True
    )

    response = {
        "statusCode": 200,
        "body": json.dumps(ssmResponse)
    }

    return response

Step 5: 
Deploy using Serverless. From the CLI, run "serverless deploy -v".

Step 6: 
Test. Curl against the URL that you were given after your deploy completed. This API now supports GET and PUT, but you could add whatever you want as far as HTTP methods and authentication. A sample test is below.
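Something along these lines, where the endpoint URL is a placeholder for the one printed by your deploy:

# endpoint URL is a placeholder; use the one from your deploy output
curl -X PUT -d 'my new value' https://<your-api-id>
curl https://<your-api-id>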

Step 7: 

What could you do with this? Any number of things. Need to share data across IoT devices? Check. Need to build a global service when you don't have datacenters across the globe? Check. The advantage here is that the investment is very low and it scales extremely high (on the micro level, like owning 50 million insects that you control all over the world).

Many thanks to my employer for letting me work on this, and to my coworker Reed for helping me through the Python bits and bobs.
Also, thanks to the Seattle CoffeeOps meetup for listening to me rant about this stuff even though they were asking, "What is he talking about?!?"

Wednesday, March 2, 2016

Setting up an apt-mirror for Cumulus Linux

You may want to set up a local apt repository to pull packages from for your Cumulus infrastructure.

I followed the instructions found here: 

But I did find a few gotchas that I had to fix before I was able to use my local repo. Here is a sample of my /etc/apt/mirror.list:

############# config ##################
# set base_path    /var/spool/apt-mirror
# set mirror_path  $base_path/mirror
# set skel_path    $base_path/skel
# set var_path     $base_path/var
# set cleanscript $var_path/
# set defaultarch  
# set postmirror_script $var_path/
# set run_postmirror 0
set nthreads     20
set _tilde 0
############# end config ##############

deb CumulusLinux-2.5 main addons updates
deb CumulusLinux-2.5 security-updates

# Uncomment the next line to get access to the testing component
# deb CumulusLinux-2.5 testing

# Uncomment the next line to get access to the Cumulus community repository
# deb CumulusLinux-Community-2.5  main addons updates

deb wheezy-backports main
deb wheezy main

# mirror additional architectures
#deb-alpha unstable main contrib non-free
#deb-amd64 unstable main contrib non-free
#deb-armel unstable main contrib non-free
#deb-hppa unstable main contrib non-free
#deb-i386 unstable main contrib non-free
#deb-ia64 unstable main contrib non-free
#deb-m68k unstable main contrib non-free
#deb-mips unstable main contrib non-free
#deb-mipsel unstable main contrib non-free
#deb-powerpc unstable main contrib non-free
#deb-s390 unstable main contrib non-free
#deb-sparc unstable main contrib non-free


I was able to use the instructions in the howto to build the mirror and present it via HTTP. I did have some problems using it at first; notice the error:

W: Failed to fetch  Hash Sum mismatch

And then I read this:

So to fix this, add the line:

Acquire::Languages "none"; 

to the end of the conf file /etc/apt/apt.conf.d/70debconf on the Cumulus node, as shown in the snippet below.
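If you'd rather do it in one shot from a shell on the node, something like this works (it just appends the line):

# append the language setting to the end of 70debconf
echo 'Acquire::Languages "none";' | sudo tee -a /etc/apt/apt.conf.d/70debconf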

Here is my /etc/apt/sources.list:


deb CumulusLinux-2.5 main addons updates
deb CumulusLinux-2.5 security-updates

# Uncomment the next line to get access to the testing component
# deb CumulusLinux-2.5 testing

# Uncomment the next line to get access to the Cumulus community repository
# deb CumulusLinux-Community-2.5  main addons updates
deb wheezy-backports main
deb wheezy main
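Once sources.list points at the mirror, a quick refresh will confirm the node can actually use it:

# refresh the package indexes against the local mirror
sudo apt-get update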

Tuesday, March 1, 2016

Puppet 3 module for Cumulus Linux License

Here is a Puppet 3 module for applying the Cumulus Linux license to your nodes. You can change the logic to use $hostname, $fqdn, or any other Facter fact, but you will want to put some logic in there for future growth (i.e., will you ever move to 40Gb switches?).

The license file is dropped on the node at /etc/license. Be sure to name your directory the SAME AS the name of the class in ../modulename/manifests/init.pp.

# manifests/init.pp
class cumulus_license {

  if ($architecture == 'x86_64') and ($operatingsystem == 'CumulusLinux') {
    file { '/etc/license':
      ensure => present,
      owner  => root,
      group  => root,
      source => 'puppet:///modules/cumulus_license/my_10Gb_license',
    }
  }

  if ($architecture == 'ppc') and ($operatingsystem == 'CumulusLinux') {
    file { '/etc/license':
      ensure => present,
      owner  => root,
      group  => root,
      source => 'puppet:///modules/cumulus_license/my_1Gb_license',
    }
  }

  exec { 'cl-license-command':
    command     => '/usr/cumulus/bin/cl-license -i /etc/license',
    unless      => '/usr/cumulus/bin/cl-license',
    subscribe   => File['/etc/license'],
    refreshonly => true,
  }
}

A few notes about operation:

1. First, the Puppet run validates that there is an existing valid license on the switch. If there is, it does nothing. If you want to apply a new license, you will need to update Puppet and then run the cl-license command manually.
2. If there is no active license, it tries to apply the license found at /etc/license (pushed there by Puppet). You can rename this to whatever you need.
3. Make sure your Cumulus license files are in the files section of the module. These are just simple text files that get pulled in by the cl-license command during the run.
4. You /may/ have to reboot the switch after the new license is applied. Please refer to the Cumulus docs for guidance there.
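To check the result by hand, running cl-license with no arguments prints the currently installed license; this is the same check the exec resource above uses as its 'unless' guard:

# display the currently installed license on the switch
sudo /usr/cumulus/bin/cl-license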

Tuesday, December 8, 2015

Cumulus VX on Libvirt (CentOS 6)

You may have seen my earlier post on Cumulus VX in libvirt. I'm going to go a step further and try to emulate the environment described here:

There are some pretty significant differences, though, because libvirt on CentOS 6 is pretty "feature light". Nevertheless, this should result in something to work with.

You will need to build a system that can serve DHCP and HTTP on your 'default' network, and then disable the dhcp section (you could delete it) in /etc/libvirt/qemu/networks/default.xml. This is because, while libvirt can create manual DHCP entries (sometimes called "static DHCP" entries) using the 'host' declaration, you will not be able to pass DHCP option 239, which is required for Zero Touch Provisioning. Cue sad trombone noise.
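For reference, here is roughly how the option 239 piece might look with ISC dhcpd on the standalone DHCP box; the option name and URL are placeholders of my choosing, not the exact config from my setup:

# define and set DHCP option 239 for ZTP (option name and URL are placeholders)
sudo tee -a /etc/dhcp/dhcpd.conf <<'EOF'
option cumulus-provision-url code 239 = text;
option cumulus-provision-url "http://<your-server>/ztp.sh";
EOF
sudo service dhcpd restart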

Next, you will want to build 4 additional networks in the Virtual Machine Manager (localhost > Details > Virtual Networks). I built isolated networks named testnet1 thru testnet4, but you don't need to follow my silliness. Be sure to restart libvirtd after you've completed this to make sure it has picked up all your changes. Maybe back up your XML files too, because there are situations where libvirt can blow them away. Once your DHCP server is running on the 'default' network, build 4 manual DHCP entries using the MAC addresses that you'll find below for eth0 (the ones that are on virbr0).

You can follow the instructions in the Cumulus docs, but instead of the KVM commands you can use virt-install to initiate the guests (adjust the path to the qcow2 files you copied):

sudo virt-install --os-variant=generic --ram=256 --vcpus=1 --network bridge=virbr0,model=virtio,mac=00:01:00:00:01:01 --network network=testnet1,model=virtio,mac=00:00:02:00:00:11 --network network=testnet2,model=virtio,mac=00:00:02:00:00:12 --boot hd --disk path=/home/user1/leaf1.qcow2,format=qcow2 --name=leaf1

sudo virt-install --os-variant=generic --ram=256 --vcpus=1 --network bridge=virbr0,model=virtio,mac=00:01:00:00:01:02 --network network=testnet3,model=virtio,mac=00:00:02:00:00:21 --network network=testnet4,model=virtio,mac=00:00:02:00:00:22 --boot hd --disk path=/home/user1/leaf2.qcow2,format=qcow2 --name=leaf2

sudo virt-install --os-variant=generic --ram=256 --vcpus=1 --network bridge=virbr0,model=virtio,mac=00:01:00:00:01:03 --network network=testnet1,model=virtio,mac=00:00:02:00:00:31 --network network=testnet3,model=virtio,mac=00:00:02:00:00:32 --boot hd --disk path=/home/user1/spine1.qcow2,format=qcow2 --name=spine1

sudo virt-install --os-variant=generic --ram=256 --vcpus=1 --network bridge=virbr0,model=virtio,mac=00:01:00:00:01:04 --network network=testnet2,model=virtio,mac=00:00:02:00:00:41 --network network=testnet4,model=virtio,mac=00:00:02:00:00:42 --boot hd --disk path=/home/user1/spine2.qcow2,format=qcow2 --name=spine2

I will admit that the configs described in the Cumulus docs indicate that swp1 thru swp3 would be provisioned, but the virtual hardware configured here is only swp1 and swp2. I'm going to ignore swp3 for now.

If you built option 239 correctly on the DHCP scope, then you should be able to point all these instances at your ZTP script. Just remember that you do not need to run your web server on the default port 80, as ZTP will accept a path like ''.

I haven't setup a configuration management tool on my instances yet, but that will be upcoming via my ZTP script.

Friday, December 4, 2015

Boot Cumulus VX in a libvirt KVM

Why would I want to do that?

Well, if you are running CentOS 6 and want an easy way to play with Cumulus VX, here is a good start.

On a CentOS 6 system running a GUI
#yum groupinstall "Virtualization*"

(this will install all your libvirt bits and pieces)

Then download the Cumulus KVM image (it is a qcow2 image) and remember the path:

#sudo virt-install --os-variant=generic --ram=256 --vcpus=1 --network bridge=virbr0,model=virtio,mac=00:01:00:00:01:00 --boot hd --disk path=/home/user/cumulus.qcow2 --name=cumulus1

This only provisions the eth0 (management) interface, but it should get you started playing with the product. A quick check is below.
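Once virt-install returns, a couple of virsh commands are handy for checking on the guest (virsh comes with the packages installed above):

# confirm the guest is running, then attach to its serial console
sudo virsh list --all
sudo virsh console cumulus1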