Friday, December 2, 2016

More things from AWS CLI

Exporting RDS Parameter Groups

aws rds --profile aws_profile describe-db-parameters --db-parameter-group-name parameter_group_name

Creating a Read-Replica of an RDS instance

aws rds create-db-instance-read-replica --db-instance-identifier $UPGRADE_TARGET --source-db-instance-identifier $SOURCE_INSTANCE --profile $AWS_PROFILE

Upgrade an RDS instance

aws rds modify-db-instance --db-instance-identifier $UPGRADE_TARGET --engine-version $UPGRADE_VERSION --profile $AWS_PROFILE --allow-major-version-upgrade --apply-immediately

Promote a Read-Replica

aws rds promote-read-replica --db-instance-identifier $UPGRADE_TARGET --profile $AWS_PROFILE

Rename an RDS instance

aws rds modify-db-instance --db-instance-identifier $SOURCE_INSTANCE --new-db-instance-identifier $SOURCE_SAVE_NAME --apply-immediately --profile $AWS_PROFILE

Enable Multi-AZ on an instance

aws rds modify-db-instance --db-instance-identifier $UPGRADE_TARGET --multi-az --apply-immediately --profile $AWS_PROFILE

Finding IPs in a Subnet

aws ec2 describe-network-interfaces --filters "Name=subnet-id,Values=subnet-subnetid" --region region_name
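The describe-network-interfaces output is verbose; often all you want is the private IPs. Here is a minimal local sketch of trimming it down. The JSON is a hand-made, heavily trimmed sample of what the call returns (the real payload has many more fields), and grep/cut keep it dependency-free, though jq would be tidier:

```shell
# Hypothetical, trimmed sample of `aws ec2 describe-network-interfaces` output
cat > /tmp/enis.json <<'EOF'
{
  "NetworkInterfaces": [
    {"NetworkInterfaceId": "eni-0001", "PrivateIpAddress": "10.0.1.15"},
    {"NetworkInterfaceId": "eni-0002", "PrivateIpAddress": "10.0.1.23"}
  ]
}
EOF
# Keep only the IP values
grep -o '"PrivateIpAddress": "[0-9.]*"' /tmp/enis.json | cut -d'"' -f4
```

In real use you would pipe the aws command straight into the filter instead of going through a file.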

MySQL Queries I don't want to forget

Filterable process list
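SHOW PROCESSLIST does not accept a WHERE clause, but information_schema.PROCESSLIST can be queried like any other table. The predicate below is only an example; filter on whatever column you need (user, db, state, etc.):

```sql
-- Filter the process list with ordinary SQL (the WHERE clause is an example)
SELECT id, user, host, db, command, time, state, info
  FROM information_schema.PROCESSLIST
 WHERE command <> 'Sleep'
 ORDER BY time DESC;
```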


Replication Commands

show master status;

show slave status\G

Finding old databases

select table_schema, table_name, create_time, datediff(create_time, now()) from information_schema.tables
   where table_schema like 'filter%' and table_name = 'known_table'
    and datediff(create_time, now()) < -7;

Friday, September 23, 2016

MySQL Metrics that matter.

Finding metrics that matter in MySQL.

Having now transitioned into a role that requires MySQL knowledge instead of Postgres knowledge, I find it is time to learn some new queries. 

Total database size

SELECT table_schema "Data Base Name",
     sum( data_length + index_length ) / 1024 / 1024 "Data Base Size in MB",
     sum( data_free )/ 1024 / 1024 "Free Space in MB"
 FROM information_schema.TABLES
 GROUP BY table_schema ;

Free Space here is an interesting notion. This "free space" is the space available inside the engine's (InnoDB or MyISAM) table spaces. This is not a reflection of total available disk space.

Top 20 tables by size

SELECT table_schema AS `Database`, table_name AS `Table`,
    round(((data_length + index_length) / 1024 / 1024), 2) AS `Size in MB`
  FROM information_schema.TABLES
  ORDER BY (data_length + index_length) DESC
  LIMIT 20;

If your database is small (relatively few tables), you can take out the limit clause and see all tables.

More updates as I find fun new queries.

Thursday, June 9, 2016

Finding stuff in the AWS CLI


Finding default routes: aws ec2 describe-route-tables --query "RouteTables[*].Routes[?DestinationCidrBlock==\`\`]"
Finding Routes that terminate in instances: aws ec2 describe-route-tables --query "RouteTables[*].Routes[?DestinationCidrBlock==\`\`].{Destination:DestinationCidrBlock, Instance:InstanceId, State:State}"
Finding the VPC name for a route that terminates on an instance: 
aws ec2 describe-vpcs --vpc-ids $(aws ec2 describe-route-tables  | 
     jq '.RouteTables[] |  { VpcId, Instance: .Routes[] | 
       select( .InstanceId != null)  }' | 
     jq '.VpcId' | sed 's/\"//g')  |
     jq '.Vpcs[] |
       { VpcId, Name: .Tags[] |
       select ( .Key == "Name") }'
Finding the newest snapshot
aws --profile=prod ec2 describe-snapshots --filter 'Name=volume-id,Values=vol-1c6e8b1a' | jq '.[]|max_by(.StartTime)|.SnapshotId'

Finding 0-byte files in an S3 bucket

aws s3 ls --recursive s3://<folder_name> | awk '{if ($3 == 0) print}'
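The awk filter keys off the third whitespace-separated column of the listing, which is the object size. A quick dry run on fabricated `aws s3 ls` lines (the dates, sizes, and keys are all invented) shows it picking out only the zero-byte objects:

```shell
# Fabricated `aws s3 ls --recursive` lines: date, time, size, key
printf '%s\n' \
  '2016-06-09 10:01:02    0 logs/empty-marker' \
  '2016-06-09 10:01:03  512 logs/app.log' \
  '2016-06-09 10:01:04    0 tmp/placeholder' |
  awk '{if ($3 == 0) print}'
```

Only the two zero-byte entries come through; append `| wc -l` if you want the count rather than the list.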

In my new gig, I'm learning lots more AWS, so I'll be sharing some AWS love here now. The last few weeks, I have been replacing NAT instances with NAT Gateways. They are a great way for your VPC VMs to get to the internet without having to manage a VM.

I thought I was done, so I decided to find a way to audit my assumptions. Well, I was wrong. The TL;DR above shows what it took to find the default routes that terminated specifically in instances, and from there the name of each VPC. Now that I have found those, I know how many more NAT Gateways I need to build. 

Based on a recommendation from a coworker, I added jq, a very powerful JSON processor that blends awk, sed, and grep for JSON objects. It was a bit complex to learn, but now that I have some skills, it will be very handy in the future!
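If you are new to jq, the heart of the pipeline above is select() running inside an object constructor. A toy version against inline JSON (the route data here is invented) shows the pattern, assuming jq is installed:

```shell
# One object per route that terminates in an instance; VPCs with no such
# route produce nothing at all.
echo '{"RouteTables":[{"VpcId":"vpc-aaa","Routes":[{"InstanceId":"i-123"},{"GatewayId":"igw-1"}]},{"VpcId":"vpc-bbb","Routes":[{"GatewayId":"igw-2"}]}]}' |
  jq '.RouteTables[] | { VpcId, Instance: .Routes[] | select(.InstanceId != null) }'
```

Only vpc-aaa survives, because vpc-bbb has no route terminating in an instance.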

Tuesday, May 10, 2016

Touchy DNS in Kitchen/Vagrant VMs?


Add to the driver section of your .kitchen.yml

    natdnshostresolver1: "on"


I often work remotely. It's a beautiful thing; however, "Thar be dragons!" Does your cookbook need private DNS? Resources behind the firewall? Well, let's connect to VPN. WooHoo... Wait... The chef-client install failed. It can't get to the internet? WHAT? 


My VPN client has split-tunnel DNS and split-tunnel networking. So... how do you work around it? Tell Kitchen to tell Vagrant to use your host workstation as the DNS server.
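Putting it together, the relevant corner of .kitchen.yml looks something like this with the kitchen-vagrant driver (the layout is a sketch; entries under customize are passed through to VBoxManage modifyvm for VirtualBox):

```yaml
driver:
  name: vagrant
  customize:
    natdnshostresolver1: "on"   # resolve DNS through the host workstation
```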

Git Pull Request Kung Fu


Merging via command line

If you do not want to use the merge button or an automatic merge cannot be performed, you can perform a manual merge on the command line.
Step 1: From your project repository, bring in the changes and test.
git fetch origin
git checkout -b tscalf-unit_tests origin/tscalf-unit_tests
git merge master
Step 2: Merge the changes and update.
git checkout master
git merge --no-ff tscalf-unit_tests
git push origin master
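The whole sequence can be rehearsed in a throwaway repository before touching a real one. Everything below is invented scratch data; the local branch `tscalf-unit_tests` stands in for origin/tscalf-unit_tests:

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email you@example.com
git config user.name "Example"
trunk=$(git symbolic-ref --short HEAD)   # master or main, depending on git version
echo base > file.txt
git add file.txt && git commit -qm "initial commit"
# Step 1: the feature branch (stand-in for origin/tscalf-unit_tests)
git checkout -qb tscalf-unit_tests
echo tests >> file.txt
git commit -qam "add unit tests"
# Step 2: merge back with an explicit merge commit
git checkout -q "$trunk"
git merge --no-ff -m "Merge branch 'tscalf-unit_tests'" tscalf-unit_tests
git log --oneline --merges
```

The --no-ff flag is what guarantees a merge commit even when a fast-forward would have been possible, which keeps the feature's history visible.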

Monday, May 9, 2016

CoffeeScript Substrings


run_list = [ 'role[flexo]','role[bender]','role[roberto]']

run_list.forEach (result) ->
  role = result.match /\[(\S*)\]/i
  alert(role[1]) # Should yield a message box with "flexo" then "bender" then "roberto"

What I was looking for:

Given a Chef run_list, I needed the name of each role via CoffeeScript. I am working on having Hubot tell our deployment processor to run a release, and I wanted to help the user find the application they were looking for.

Helpful Links:

Try CoffeeScript - paste your CoffeeScript and test it in the browser

Friday, April 22, 2016

Finding assets in Chef


 knife search node "manufacturer:Dell* OR manufacturer:HP*" -a fqdn -a dmi.system.manufacturer -a dmi.system.product_name -a dmi.system.serial_number 


Today I was asked if the data center inventory was up to date... Well, no, it wasn't... I was looking for ways to find out what was in the data center without having to make the hour round trip.

I thought, "I wonder what's in Ohai." Lo and behold, node[:dmi][:system][:manufacturer] is the name of the hardware manufacturer. WooHoo! A little knife search magic, and I get: 
 knife search node "manufacturer:Dell* OR manufacturer:HP*" -a fqdn -a dmi.system.manufacturer -a dmi.system.product_name -a dmi.system.serial_number
  dmi.system.manufacturer:  Dell Inc.
  dmi.system.product_name:  PowerEdge R720xd
  dmi.system.serial_number: XX0099AA

There it is, a one line asset inventory. This is specifically looking for Dell and HP hardware. It is pretty easy to extend this to any other hardware platform, including virtual guests. If you are using RedHat KVM, you can search "manufacturer:Red*".

There are a few formatting options as well; I tend to like JSON output, so throw in a -Fj and you get a nice pretty JSON object.

Monday, April 11, 2016

Curl, OpenSSL, NSS, and secure web servers.


Upgrade curl and NSS on the Fedora Box
Don't use -k if you need SNI from a Mac
Send an x.509 certificate with your curl request?
  • Fedora: curl --cert mycert.pem  --key mycert.key
  • Mac: curl -v --cert "mycert.pfx:password" (To merge a PEM file and a Key, see the instructions here)
Final commands:
  • Fedora: curl -v --cert ~/mycert.pem --key ~/mycert.key --tlsv1.2
  • Mac: curl -v --cert "mycert.pfx:password"
Another fun option:
  • You haven't migrated DNS for your Host-header content site?  
    • curl -v --resolve <host>:<port>:<ip>

Back story

For the past couple of months, I have been building a secure web server using Apache HTTPD and Chef. The cookbook went to UAT, and we were having a client problem, so we started digging deeply into the cookbook. Client requests are authenticated using an x.509 certificate, so I needed to include my test certificate in the curl request. And off I go, right???


First request:
$ curl -v --cert ~/mycert.pem --key ~/mycert.key --tlsv1.2
* Trying
* Connected to ( port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* Cannot communicate securely with peer: no common encryption algorithm(s).
* Closing connection 0 curl: (35) Cannot communicate securely with peer: no common encryption algorithm(s).

WHAT?? How can I have "NO_CYPHER_OVERLAP"? So, my boss tries it from his Mac OS laptop. Maybe I have done something dumb on my laptop... Not likely, but possible. His request fails too. What's going on? We check the logs on the server, and they show successful transactions by other clients. Good, the server is not broken. My requests aren't getting logged at all (no surprise; you can't send an HTTP request if you can't negotiate SSL). His requests are getting logged, but with no x.509 signature in the log (we use a custom log format that includes some x.509 stuff for this type of troubleshooting). What command are you using, and what output do you get?

$ curl -v -k --cert "mycert.pfx:password" ""
* Trying * Connected to ( port 443 (#0)
* TLS 1.2 connection using TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
* Server certificate: *
* Server certificate: GeoTrust SSL CA - G2
* Server certificate: GeoTrust Global CA
Wait... Why are you getting *'s certificate? We use a single IP address with SNI support. With curl and Mac OS, if you add -k to your curl command, it disables SNI support. Who knew???

So, I figure I'll upgrade curl. There have been some high-profile bugs found in SSL implementations (POODLE and BEAST, to name a couple). Yeah! New version of curl. More testing, and I get the same results. Finally, I read the error message again...
* Initializing NSS with certpath: sql:/etc/pki/nssdb
NSS???? I thought I was using OpenSSL... Let's upgrade NSS.
[thad.scalf@EDOLINUX-FS1 edo-httpd]$ curl -v --cert mycert.pem  --key mykey.key --tlsv1.2
*   Trying
* Connected to ( port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
*   CAfile: /etc/pki/tls/certs/ca-bundle.crt
  CApath: none
* SSL connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
* Server certificate:
* subject: CN=*,O=myOrg,L=Nashville,ST=Tennessee,C=US
* start date: Mar 29 00:00:00 2016 GMT
* expire date: Apr 12 23:59:59 2017 GMT
* common name: *
* issuer: CN=GeoTrust SSL CA - G3,O=GeoTrust Inc.,C=US
> GET /ping HTTP/1.1
> User-Agent: curl/7.40.0
> Host:
> Accept: */*
* NSS: client certificate from file
* subject: CN=mycert,OU=myOU,O=myOrg
* start date: Jun 02 21:31:27 2014 GMT
* expire date: Jun 01 21:31:27 2017 GMT
* common name: mycert
* issuer: CN=myOrg,OU=myOU,O=myOrg
< HTTP/1.1 200 OK
HA!! It works! And after removing the -k from my boss's Mac command, his works too!! It's like magic. 

Wednesday, April 6, 2016

Breathe - Just Breathe

This morning, I was woken by my clock radio, which was purchased in the late 90's or early 2000's. No, it does not have an iPhone connector or a micro USB adapter to play music from my phone. It is an AM/FM analog radio with a manual tuner and seven-segment red LED numbers. The song that was playing reminded me of visiting the Sears (now Willis) Tower.

Why was the then-tallest building in the world built? 

Construction of the tower began in 1970. It was built because the Sears and Roebuck Company had a problem. There were over 350,000 people who worked for Sears and Roebuck, many in Chicago and the surrounding area. They needed a central place for these employees to work: over three million square feet of working space. It cost Sears over $175 million to build.

Why is this song connected to this building?

The opening verse of the song talks about getting ready for a busy day, preparing for the stresses of daily life. The second verse continues the day with three cups of coffee and a desire to make the most of the singer's time that day: "I'm busy, busy, busy, and it’s no surprise to see / That I only have time for me, me, me." The Sears Company needed to enable people to get more done in a day, and their decision was to bring those people together in one big building. The person you needed was an elevator ride away. Their people were "busy, busy, busy". There were no video conferencing systems or instant messaging apps. In fact, there wasn't anything like a desktop computer in 1970; semiconductor memory was only just introduced on the IBM System/370 in 1970. Everything was done by hand in those days.

Then comes the chorus. The singer is reminded to "Breathe, Just Breathe". This is where the song really connected. Computer scientists are innovators. We change the world. We changed how information is shared. No more riding up elevators to share your latest discovery with the boss. We changed how news is consumed: 140 characters at a time and with a cliffhanger title. We even changed how you drive to work, with a smartphone and GPS. I have often heard it said that we hold more computing power in our hands than what sent man to the moon. With all the things that have changed our lives, the pressure to deliver the "next big thing" is really intense. The expectations of technology are huge!

The potential of technology is huge, but how does innovation happen? Innovation doesn't happen when you are heads down, hacking out code on a keyboard. It doesn't happen when you're in a data center racking servers. It doesn't happen when you are updating your KanBan board so you can show what you've accomplished for the week. Innovation, true innovation that could change the world, happens when you breathe. When you're a Sears accountant riding that elevator, you think, "Can I move this number here and that number there and make something different, something better?" That is the spark of innovation, and that is when the song really connected. When I breathe, just breathe, rest, and begin to unravel from the stress and the push/pull world of "deliver this" and "fix that" and "We gotta release code!" - that is when I can begin to innovate.

So, what is this magic song?

It is "Breathe" by Johnny Diaz.

Breathe, just Breathe. Take a moment today to rest in God's arms and breathe. Step away from the stresses at the keyboard. Ride the elevator to talk to your boss, instead of the IM. Take the time to think, unwind, and innovate. Team leaders, let your folks breathe. You will get better solutions. Take three and a half minutes, sing along, and breathe, just breathe. "Lay down what’s good and find what’s best!"

More Info
History of the Willis Tower
1970 | Timeline of Computer History

Monday, April 4, 2016

Formatting a git commit message in Vim

:set tw=72 fo=cqt wm=0 #Wrap at 72 characters when formatting
gggqG #Format the whole document
gqip #Format a paragraph

The longer version:

Git Fork Workflow

GitHub Fork
git clone
git checkout -b <username>-<feature>
git commit -am "This is what I did to your code."
git push origin <username>-<feature>
GitHub: create the pull request.
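To see the whole fork flow work without touching GitHub, a local bare repository can play the part of the fork (every name below is invented):

```shell
set -e
work=$(mktemp -d) && cd "$work"
git init -q --bare fork.git        # bare repo standing in for your GitHub fork
git clone -q fork.git project      # "Fork" + git clone
cd project
git config user.email you@example.com
git config user.name "Example"
echo hello > README
git add README && git commit -qm "initial commit"
git push -q origin HEAD            # seed the fork's default branch
git checkout -qb tscalf-myfeature  # <username>-<feature>
echo feature >> README
git commit -qam "This is what I did to your code."
git push -q origin tscalf-myfeature
git ls-remote --heads origin       # the feature branch is on the fork, ready for a PR
```

After the final push, the only step left is the one the command line can't do: opening the pull request in the web UI.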

Now for the longer Story
Learning my way through getting better at Git. I'm working with the team. We are a group of daytime DevOps engineers and evening development hobbyists. As such, we use GitHub for all our amazing ideas. Today, in order to be a better GitHub citizen, I decided to learn how to work in the Git fork workflow that is common on GitHub.

The fork workflow works as follows. Copyleft has a project I like, so I click "Fork". That gives me a copy of the master branch in my personal GitHub account. Woohoo, I have source code. Next, let's make some changes. 'git clone' is my tool of choice there, so I clone it locally. Now, let's change something. I should put it in a branch so the rest of my Copyleft friends know what I have done. 'git checkout -b <branch_name>' - that's my ticket. As a courtesy, I throw my username into my branch name, then some feature name.

Now for some code hacking... All finished there. It looks pretty? Good! Did you write tests? Be kind. Test your stuff. It makes us all better. Commit your changes with a good commit comment or few: 'git commit -am "This is my Feature" -m "This is more about what I did to your code and why."' It is important to describe your code well. Then I push it to my personal repo: 'git push origin <username>-<feature>'. If the organization is using tags, add a tag with 'git tag', then push your tags with 'git push --tags'.

Finally, you are ready for that all-important pull request. In the GitHub UI, you will see your branch and a button that prompts you to create a pull request against master. Go for it. Verify your commit message is well formatted and displayed, then submit.