Tuesday, October 3, 2017

A reliable public API for (close to) free!

Sorry for the lack of posts lately, but I've been busy churning out projects (both personal and business-related).

I recently went through an exercise that will definitely benefit some of the communities that I interact with (HAD and CoffeeOps) and felt obligated to share because I'm super excited about it.

In the world of IoT and scalable applications, I don't need to spend much time hyping the benefits of a simple API. Along those lines, I've been playing around with the Serverless Framework to spin up services (in my case AWS Lambda), but then I asked myself three distinct questions:

1. How small is "too small" for an API?
2. How cheap could I make a small API?
3. How reliable (or performant) could I make an API?

This sounds a lot like the old argument about hiring a contractor ("You can have it cheap, good, or fast...but you can only pick two of the three"), but it turns out that when you go small you can get pretty close to all three.

With the Serverless Framework, it is pretty easy to set up an API Gateway in AWS that passes input to a Lambda function. The result is small and granular, and AWS can host it on their infrastructure easily. That covers the small part.

And here is the plus for us - it's really cheap! You pay only for the number of hits to your API Gateway, and AWS accounts include a generous number of Lambda invocations for free (the free tier covers one million requests per month). But my problem was the last part - where do you store your data? You could spin up an RDS instance running PostgreSQL or MySQL, a Redis instance, or even a DynamoDB table. But can you go cheaper than that? Yes....yes you can.

AWS offers a simple key-value store that is most often used for configuration data: the SSM Parameter Store. In that KV store, a standard parameter can hold a value of up to 4 KB (4096 characters of plain ASCII), which is more than enough for the purpose I have. Standard parameters are a free built-in service, so that covers the whole cheap argument.

So finally we get to the reliable and fast part. Depending on how you write your function, you can have an API that responds in well under 100 ms. As for reliability, AWS has a long history of reliable services, and that isn't even counting that this could run in multiple regions, as long as you have a way to sync your data (a rough sketch of that follows). Reliable....check.
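As a rough illustration of the multi-region idea (this is not part of the proof of concept below, and the regions and value here are just placeholders), you could fan a write out to several regions with the AWS CLI and let each region serve reads locally:

# hypothetical fan-out: write the same value to Parameter Store in two regions
for region in us-west-2 us-east-1; do
  aws ssm put-parameter --region $region --name myParam \
      --value "some shared state" --type String --overwrite
done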

So how do you do it? Here is my very basic working example (no authentication in this example BTW...only a proof of concept):

Step 1: 
Learn about and install the Serverless Framework at https://serverless.com/. This assumes you already have an AWS account to work with; if not, you'll need to set one up.
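For reference, the framework installs globally via npm:

npm install -g serverless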

Step 2:  
You can create a serverless project using the example templates, but to start out you only need three things: a directory to work in, a serverless.yml file, and a handler (I'm using Python for mine).

Step 3: 
Create a parameter in the SSM Parameter Store. It's in the AWS console under the EC2 section. It's not hard to find, and you only need to create an unencrypted String parameter. I'm using the name 'myParam'.
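If you prefer the CLI over the console, the same parameter can be created with one command (assuming your AWS credentials and region are already configured):

aws ssm put-parameter --name myParam --value "hello world" --type String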

Step 4: A bit of code (edit your account & region as needed):

serverless.yml 

# Welcome to Serverless!
#

service: param-api # NOTE: update this with your service name

# frameworkVersion: "=X.X.X"

provider:
  name: aws
  runtime: python2.7
  stage: dev
  region: us-west-2
  iamRoleStatements:
    - Effect: "Allow"
      Action: 
        - "ssm:GetParameters"
        - "ssm:PutParameter"
      Resource:
        # NOTE: replace ${accountId} below with your own 12-digit AWS account ID
        - "arn:aws:ssm:${self:provider.region}:${accountId}:parameter/myParam"

functions:
  get_myParam:
    handler: handler.get_myParam
    events:
      - http:
          path: get
          method: get

  put_myParam:
    handler: handler.put_myParam
    events:
      - http:
          path: put
          method: put

And here are my Python functions in handler.py:

import json
import boto3

# create the SSM client once so warm Lambda invocations can reuse it
ssm = boto3.client('ssm')


def get_myParam(event, context):
    # read the parameter back out of the SSM Parameter Store
    ssmResponse = ssm.get_parameters(
        Names=['myParam'],
        WithDecryption=False
    )

    body = {
        "message": ssmResponse['Parameters'],
        "input": event['body']
    }

    response = {
        "statusCode": 200,
        "body": json.dumps(body)
    }

    return response


def put_myParam(event, context):
    # overwrite the parameter with the raw request body
    ssmResponse = ssm.put_parameter(
        Name='myParam',
        Description='this is a test',
        Value=event['body'],
        Type='String',
        Overwrite=True
    )

    response = {
        "statusCode": 200,
        "body": json.dumps(ssmResponse)
    }

    return response

Step 5: 
Deploy using Serverless. From the CLI, run "serverless deploy -v".

Step 6: 
Test. Curl against the URLs that you were given after your deploy completed, as shown below. This API now supports GET and PUT, but you could add whatever HTTP methods and authentication you want.
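For example (the API ID below is a placeholder; use the endpoint URLs from your own deploy output):

# read the parameter
curl https://abc123.execute-api.us-west-2.amazonaws.com/dev/get

# overwrite the parameter with a new value
curl -X PUT -d 'my new value' https://abc123.execute-api.us-west-2.amazonaws.com/dev/put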

Step 7: 
Profit!

What could you do with this? Any number of things. Need to share data across IoT devices? Check. Need to build a global service when you don't have datacenters across the globe? Check. The advantage here is that the investment is very low and it scales to extremely high volume at the micro level (like owning 50 million insects that you control all over the world).

Many thanks to my employer (https://www.flowroute.com/ ) for letting me work on this and my coworker Reed for helping me through the Python bits and bobs. 
Also, thanks to the Seattle CoffeeOps meetup ( https://www.meetup.com/Seattle-CoffeeOps/ ) for listening to me rant about this stuff even while they were asking "What is he talking about?!?"


Wednesday, March 2, 2016

Setting up an apt-mirror for Cumulus Linux

You may want to set up a local apt repository to pull packages from for your Cumulus infrastructure.

I followed the instructions found here: 
https://www.howtoforge.com/local_debian_ubuntu_mirror 

But I did find a few gotchas that I had to fix before I was able to use my local repo. Here is a sample of my /etc/apt/mirror.list:



############# config ##################
#
# set base_path    /var/spool/apt-mirror
#
# set mirror_path  $base_path/mirror
# set skel_path    $base_path/skel
# set var_path     $base_path/var
# set cleanscript $var_path/clean.sh
# set defaultarch  
# set postmirror_script $var_path/postmirror.sh
# set run_postmirror 0
set nthreads     20
set _tilde 0
#
############# end config ##############

deb http://repo.cumulusnetworks.com CumulusLinux-2.5 main addons updates
deb http://repo.cumulusnetworks.com CumulusLinux-2.5 security-updates

# Uncomment the next line to get access to the testing component
# deb http://repo.cumulusnetworks.com CumulusLinux-2.5 testing

# Uncomment the next line to get access to the Cumulus community repository
# deb http://repo.cumulusnetworks.com/community/ CumulusLinux-Community-2.5  main addons updates

deb http://ftp.us.debian.org/debian wheezy-backports main
deb http://ftp.us.debian.org/debian/ wheezy main

# mirror additional architectures
#deb-alpha http://ftp.us.debian.org/debian unstable main contrib non-free
#deb-amd64 http://ftp.us.debian.org/debian unstable main contrib non-free
#deb-armel http://ftp.us.debian.org/debian unstable main contrib non-free
#deb-hppa http://ftp.us.debian.org/debian unstable main contrib non-free
#deb-i386 http://ftp.us.debian.org/debian unstable main contrib non-free
#deb-ia64 http://ftp.us.debian.org/debian unstable main contrib non-free
#deb-m68k http://ftp.us.debian.org/debian unstable main contrib non-free
#deb-mips http://ftp.us.debian.org/debian unstable main contrib non-free
#deb-mipsel http://ftp.us.debian.org/debian unstable main contrib non-free
#deb-powerpc http://ftp.us.debian.org/debian unstable main contrib non-free
#deb-s390 http://ftp.us.debian.org/debian unstable main contrib non-free
#deb-sparc http://ftp.us.debian.org/debian unstable main contrib non-free

clean http://ftp.us.debian.org/debian



I was able to use the instructions in the howto to build the mirror and present it via HTTP, but I did have some problems using it at first. Notice the error:

W: Failed to fetch http://mydebianrepo.mycompany.com/debian/dists/wheezy-backports/main/i18n/Translation-en  Hash Sum mismatch

And then I read this:

http://askubuntu.com/questions/217502/speed-up-apt-get-update-by-removing-known-ignored-translation-en

So to fix this, put the line:

Acquire::Languages "none";

at the end of the conf file /etc/apt/apt.conf.d/70debconf on the Cumulus node.
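One quick way to append it (run as root on the node):

echo 'Acquire::Languages "none";' >> /etc/apt/apt.conf.d/70debconf

Run apt-get update again afterward and the hash sum mismatch should be gone.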


And here is my /etc/apt/sources.list on the Cumulus node, pointed at the new mirror:



#
#

deb http://mydebianrepo.mycompany.com/CumulusLinux CumulusLinux-2.5 main addons updates
deb http://mydebianrepo.mycompany.com/CumulusLinux CumulusLinux-2.5 security-updates

# Uncomment the next line to get access to the testing component
# deb http://repo.cumulusnetworks.com CumulusLinux-2.5 testing

# Uncomment the next line to get access to the Cumulus community repository
# deb http://repo.cumulusnetworks.com/community/ CumulusLinux-Community-2.5  main addons updates
deb http://mydebianrepo.mycompany.com/debian wheezy-backports main
deb http://mydebianrepo.mycompany.com/debian wheezy main




Tuesday, March 1, 2016

Puppet 3 module for Cumulus Linux License

Here is a Puppet 3 module for applying the Cumulus Linux license to your nodes. You can change the logic to use $hostname, $fqdn, or any other Facter fact, but you will want to put some logic in there for future growth (i.e., will you ever move to 40Gb switches?).

The license file is dropped on the node at /etc/license. Be sure to name your module directory the SAME AS the name of the class in ../modulename/manifests/init.pp.

# manifests/init.pp
class cumulus_license {

  ## BASIC CHECK - IS THIS A 10GB SWITCH? THEN USE THE 10GB LICENSE
  ## OTHERWISE USE ANOTHER FACTER FACT TO DECIDE
  if ($architecture == 'x86_64') and ($operatingsystem == 'CumulusLinux') {
    file { '/etc/license':
      ensure => present,
      owner  => root,
      group  => root,
      source => 'puppet:///modules/cumulus_license/my_10Gb_license',
    }
  }

  ## AND USE THIS IDENTIFIER TO DECIDE ON THE 1GB LICENSE
  if ($architecture == 'ppc') and ($operatingsystem == 'CumulusLinux') {
    file { '/etc/license':
      ensure => present,
      owner  => root,
      group  => root,
      source => 'puppet:///modules/cumulus_license/my_1Gb_license',
    }
  }

  ## A CHANGE TO THE LICENSE FILE NOTIFIES THIS EXEC
  ## IT THEN RUNS THE LICENSE COMMAND, BUT ONLY IF THE 'unless' CHECK
  ## SHOWS THE HARDWARE IS UNLICENSED
  exec { 'cl-license-command':
    command     => '/usr/cumulus/bin/cl-license -i /etc/license',
    unless      => '/usr/cumulus/bin/cl-license',
    subscribe   => File['/etc/license'],
    refreshonly => true,
  }
}



A few notes about operation:


1. First, the Puppet run validates that there is an existing valid license on the switch (the 'unless' check). If there is, it does nothing. If you want to apply a new license, update Puppet and run the cl-license command manually (see the example commands after this list).
2. If there is no active license, it tries to apply the license found at /etc/license (pushed there by Puppet). You can rename this to whatever you need.
3. Make sure your Cumulus license files are in the files section of the module. These are just simple text files that get pulled in by the cl-license command during the run.
4. You /may/ have to reboot the switch after the new license is applied. Please refer to the Cumulus docs for guidance there.
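For reference, the manual commands that the module wraps look like this (the bare command prints the current license, and it exits non-zero when the box is unlicensed, which is what the 'unless' check relies on):

# show the currently installed license
/usr/cumulus/bin/cl-license

# install the license file that puppet pushed
/usr/cumulus/bin/cl-license -i /etc/license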


Tuesday, December 8, 2015

Cumulus VX on Libvirt (CentOS 6)

You may have seen my earlier post on Cumulus VX in libvirt. I'm going to go a step further and try to emulate the environment described here:

https://docs.cumulusnetworks.com/display/VX/Using+Cumulus+VX+with+KVM

There are some pretty significant differences though, because libvirt on CentOS 6 is pretty "feature light". Nevertheless, this should result in something to work with.

You will need to build a system that can serve DHCP and HTTP on your 'default' network, and then disable the dhcp section (you could delete it) in /etc/libvirt/qemu/networks/default.xml. This is because while libvirt can create manual DHCP entries (sometimes called "static dhcp" entries) using the 'host' declaration, you will not be able to pass DHCP option 239, which is required for Zero Touch Provisioning. Cue sad trombone noise.
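If you haven't edited a libvirt network definition before, this is the general idea (the range shown is the stock default; yours may differ):

sudo virsh net-edit default

# then remove the dhcp block, which looks something like:
#   <dhcp>
#     <range start='192.168.122.2' end='192.168.122.254'/>
#   </dhcp>

Restart the network (or libvirtd) afterward so the change takes effect.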

Next, you will want to build 4 additional networks in the Virtual Machine Manager ( localhost>details>Virtual Networks ). I built isolated networks named testnet1 through testnet4, but you don't need to follow my silliness. Be sure to restart libvirtd after you've completed this so that it picks up all your changes. Maybe back up your XML files too, because there are situations where libvirt can blow them away. Once your DHCP server is running on the 'default' network, build 4 manual DHCP entries using the eth0 MAC addresses that you'll find below (the ones that are on virbr0).
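For reference, here's a rough sketch of what the ISC dhcpd config on that DHCP/HTTP box might look like. The custom option name and the fixed addresses are my own placeholders; the MACs and the ZTP URL style come from this post:

# declare DHCP option 239 as a text string (it carries the ZTP script URL)
option cumulus-provision-url code 239 = text;

subnet 192.168.122.0 netmask 255.255.255.0 {
  option routers 192.168.122.1;
  option cumulus-provision-url "http://192.168.122.101:1234/my_ztp_script.sh";

  # one manual ("static dhcp") entry per guest, keyed on its eth0 MAC
  host leaf1  { hardware ethernet 00:01:00:00:01:01; fixed-address 192.168.122.11; }
  host leaf2  { hardware ethernet 00:01:00:00:01:02; fixed-address 192.168.122.12; }
  host spine1 { hardware ethernet 00:01:00:00:01:03; fixed-address 192.168.122.13; }
  host spine2 { hardware ethernet 00:01:00:00:01:04; fixed-address 192.168.122.14; }
}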

You can follow the instructions in the Cumulus docs, but instead of the KVM commands you can use virt-install to initiate the guests (adjust the paths to the qcow2 files you copied):

sudo virt-install --os-variant=generic --ram=256 --vcpus=1 --network bridge=virbr0,model=virtio,mac=00:01:00:00:01:01 --network network=testnet1,model=virtio,mac=00:00:02:00:00:11 --network network=testnet2,model=virtio,mac=00:00:02:00:00:12 --boot hd --disk path=/home/user1/leaf1.qcow2,format=qcow2 --name=leaf1

sudo virt-install --os-variant=generic --ram=256 --vcpus=1 --network bridge=virbr0,model=virtio,mac=00:01:00:00:01:02 --network network=testnet3,model=virtio,mac=00:00:02:00:00:21 --network network=testnet4,model=virtio,mac=00:00:02:00:00:22 --boot hd --disk path=/home/user1/leaf2.qcow2,format=qcow2 --name=leaf2

sudo virt-install --os-variant=generic --ram=256 --vcpus=1 --network bridge=virbr0,model=virtio,mac=00:01:00:00:01:03 --network network=testnet1,model=virtio,mac=00:00:02:00:00:31 --network network=testnet3,model=virtio,mac=00:00:02:00:00:32 --boot hd --disk path=/home/user1/spine1.qcow2,format=qcow2 --name=spine1

sudo virt-install --os-variant=generic --ram=256 --vcpus=1 --network bridge=virbr0,model=virtio,mac=00:01:00:00:01:04 --network network=testnet2,model=virtio,mac=00:00:02:00:00:41 --network network=testnet4,model=virtio,mac=00:00:02:00:00:42 --boot hd --disk path=/home/user1/spine2.qcow2,format=qcow2 --name=spine2

I will admit that the configs described in the Cumulus docs indicate that swp1 through swp3 would be provisioned, but the virtual hardware configured here only provides swp1 and swp2. I'm going to ignore swp3 for now.

If you built option 239 correctly on the DHCP scope, then you should be able to point all these instances at your ZTP script. Just remember that you do not need to run your web server on the default port 80, as ZTP will accept a path like 'http://192.168.122.101:1234/my_ztp_script.sh'.
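A ZTP script can be nearly empty; as far as I know, the one hard requirement is that the string CUMULUS-AUTOPROVISIONING appears somewhere in the file. A trivial sketch:

#!/bin/bash
# CUMULUS-AUTOPROVISIONING
echo "ZTP ran at $(date)" >> /home/cumulus/ztp.log
exit 0

And serving it on that odd port can be as simple as running 'python -m SimpleHTTPServer 1234' from the directory holding the script.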

I haven't set up a configuration management tool on my instances yet, but that will be coming via my ZTP script.


Friday, December 4, 2015

Boot Cumulus VX in a libvirt KVM

Why would I want to do that?

Well, if you are running CentOS 6 and want an easy way to play with Cumulus VX, here is a good start.

On a CentOS 6 system running a GUI
#yum groupinstall "Virtualization*"

(this will install all your libvirt bits and pieces)

Then download the Cumulus KVM image (it is a qcow2 image) and remember the path

#sudo virt-install --os-variant=generic --ram=256 --vcpus=1 --network bridge=virbr0,model=virtio,mac=00:01:00:00:01:00 --boot hd --disk path=/home/user/cumulus.qcow2 --name=cumulus1

This only provisions the eth0 interface (mgmt), but it should give you a start playing with the product. You can connect to the guest's console as shown below.
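To log in and poke around, attach to the console through libvirt (if I remember right, the stock VX credentials are cumulus / CumulusLinux!, but check the docs for your image):

sudo virsh console cumulus1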

Enjoy!!!

Tuesday, September 15, 2015

Backup pfsense firewall (via SSH) using ONE script

I know there are other methods out there to back up a pfsense config. They do work, but I'm just not a fan of relying on the GUI to perform my config backups. Plus, SSH is my encrypted session of choice because it's secure, flexible, and available.

Once again I'm using Fabric to perform this backup. If you haven't already done it, go to the effort of setting up Fabric on your platform of choice...you won't regret it. If you want to do interesting things with Fabric, then get warmed up on your Python at Codecademy (https://www.codecademy.com/tracks/python).
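For reference, the 1.x series of Fabric used here is a pip install away:

pip install fabric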

This script uses an interactive prompt for you to enter the password, but you can simply provide the password either in the script itself (shame shame) or via another secure method.

Backups are pulled back to the system running the script into a directory named 'my_pfsense_backups' and are given a directory for each day. You can tweak this to suit your needs.


#!/usr/bin/python
#
# Designed and tested on pfsense v2.2
#
import getpass
from fabric.api import *
from datetime import datetime
#
myname = 'root'
# NOTE: pfsense uses a root user that has the same password as admin - required for sftp file access
theList = ['pfsense1.company.com', 'pfsense2.company.com']
#
i = datetime.now()
now_is = i.strftime('%Y%m%d-%H%M%S')
today_is = i.strftime('%Y%m%d')
print now_is
#
print ('')
print ('Username is ' + myname)
pw = getpass.getpass()
print ('')
#
how_many = len(theList)
#
print("This will backup " + str(how_many) + " systems:\n")
print (theList)
print ('')
#
env.user = myname
env.hosts = theList
env.password = pw
#
#@parallel(pool_size=5)
#
# generate the backup file on the pfsense system itself, this will take some time
def generate_and_pull_backup():
    env.warn_only = True
    backup_command_output = run( "/etc/rc.create_full_backup", shell=False )
    # parse the output of the create_full_backup command for the path of the new backup file
    file_generated_full_path = backup_command_output.rsplit(None, 1)[-1]
    filename_generated = file_generated_full_path.split('/')[-1]
    # pull the backup home to me
    get(file_generated_full_path, "./my_pfsense_backups/%s/%s-%s" % (today_is, env.host, filename_generated))
    # NOTE: configs can be restored via /etc/rc.restore_full_backup
    # delete the config backup just generated so the disk does not fill
    run( "rm -f %s" % file_generated_full_path, shell=False )
#
if __name__ == '__main__':
    execute(generate_and_pull_backup)

Hope you enjoy this as much as I have! Backing up my pfsense systems has always been far too manual and problem-prone, so I'm looking forward to putting that behind me.

Monday, September 14, 2015

Backup F5 units (LTM or GTM) using one script!

So in case you have gone down this rabbit hole: the BIG-IP series is great, but certain things about it require finesse. Case in point - the management interface does not always play well with other published services like TACACS or NTP. Sometimes these require custom routes, and that can become a hassle.

So backing up the config on an F5 can sometimes encounter these challenges too. I want to:

1. Generate a config backup on my F5 unit using tmsh commands
2. Securely transfer that config backup somewhere else, preferably on a secure network
3. I don't want to deploy a webserver or an entire development suite just to accomplish goals 1 and 2

Enter Python Fabric.

Yes, getting Fabric set up will take some work....but this is an environment that you will be able to use over and over again (not just for F5 backups).

The script you can grab below permits you to use Fabric to open an SSH session, generate the config backup and then pull it back (via SFTP) to the system where you originally ran the fabric script.

Be sure to edit the user (the "myname" variable) and the list of F5 units (the "theList" variable) that you want to run this on. This can be made fully automated and headless, but I'm going to let you work out those details yourself. Files are stored on the local box in ./my_F5_backups under a date directory for each day, or you could customize this script to go hand in glove with logrotate.

#!/usr/bin/python
#
# Designed and tested on LTM v11.x
#
import getpass
import sys
from fabric.api import *
from datetime import datetime
#
i = datetime.now()
now_is = i.strftime('%Y%m%d-%H%M%S')
today_is = i.strftime('%Y%m%d')
print now_is
#
myname = 'my_admin_user'
print ('')
print ('Username is ' + myname)
pw = getpass.getpass()
print ('')
#
theList = ['ltm01.company.com', 'gtm01.company.com']
#
how_many = len(theList)
#
# I am using someone else's yes/no logic here
def query_yes_no(question, default="yes"):
    valid = {"yes": "yes", "y": "yes", "ye": "yes",
             "no": "no", "n": "no"}
    if default == None:
        prompt = " [y/n] "
    elif default == "yes":
        prompt = " [Y/n] "
    elif default == "no":
        prompt = " [y/N] "
    else:
        raise ValueError("invalid default answer: '%s'" % default)

    while 1:
        sys.stdout.write(question + prompt)
        choice = raw_input().lower()
        if default is not None and choice == '':
            return default
        elif choice in valid.keys():
            return valid[choice]
        else:
            sys.stdout.write("Please respond with 'yes' or 'no' "\
                "(or 'y' or 'n').\n")
#
keep_going = query_yes_no("This will backup " + str(how_many) + " systems. Would you like to continue?\n (No will list the systems that it would have run on and exit)")
if keep_going == "no":
    print (theList)
    sys.exit()
print ('')
#
env.user = myname
env.hosts = theList
env.password = pw
#
#@parallel(pool_size=5)
#
# generate the backup file (ucs) on the F5 unit itself - will create some load on the F5 while running
def run_on_f5_first():
    run( "tmsh save sys ucs %s" % now_is, shell=True )
# copy the backup file (ucs) back to the local system
def file_get():
    get("/var/local/ucs/%s.ucs" % now_is, "./my_F5_backups/%s/%s-%s.ucs" % (today_is, now_is, env.host))
# delete the config backup just generated so the disk does not fill
def run_on_f5_last():
    run( "rm -f /var/local/ucs/%s.ucs" % now_is, shell=True )
#
if __name__ == '__main__':
    execute(run_on_f5_first)
    execute(file_get)
    execute(run_on_f5_last)

And there you go. One script that does something that I've meant to do for years and just didn't get around to until now. Boy am I glad that I waited until Fabric came along!