/* steve jansen */

// another day in paradise hacking code and more

Adding Chef Encrypted Data Bags to Source Control


I’ve been using Chef for a bit now and am generally a huge fan of the new Chef workflow.

We are working hard to attain true continuous delivery and test driven development with Chef. The devil is in the details now.

One small wrinkle in our effort has been marrying encrypted data_bags with our chef-repo in GitHub.

I don’t want to type the optional argument --secret-file ~/.chef/encrypted_data_bag_secret every time I interact with a data bag. So, I added this option to my ~/.chef/knife.rb file:

knife[:secret_file]  = "#{current_dir}/encrypted_data_bag_secret"

However, this precludes me from easily exporting the edited file to disk. The export will always be the decrypted plaintext, not the encrypted ciphertext. Not exactly what you want to commit to GitHub.

knife data_bag create users jenkins
# DON'T COMMIT THIS... the exported file will be unencrypted
knife data_bag show users jenkins --format=json > data_bags/users/jenkins.json

So, I decided to create a bash alias to temporarily disable the knife.rb setting and export the data bag to a file:

My ~/.bash_profile file contains this alias:

function knife-ciphertext () {
   sed -e "s/knife\[\:secret_file\]/\#knife\[\:secret_file\]/"  -i .bak  ~/.chef/knife.rb
   knife "$@" --format=json
   mv  ~/.chef/knife.rb.bak  ~/.chef/knife.rb
}
alias knife-ciphertext=knife-ciphertext

This bash function comments out the secret file option in knife.rb using sed’s in-place editing.

Now I can commit the data bag in its encrypted format:

knife-ciphertext data_bag show users jenkins > data_bags/users/jenkins.json
git add data_bags/users/jenkins.json
git commit -m 'adding the latest jenkins data bag'
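
Before committing, a quick sanity check that the export really is ciphertext doesn’t hurt. A minimal sketch, assuming a recent encrypted data bag format that wraps each value in an encrypted_data field:

# warn if the exported data bag looks like plaintext
if grep -q '"encrypted_data"' data_bags/users/jenkins.json; then
  echo "OK: jenkins data bag appears to be encrypted"
else
  echo "WARNING: jenkins data bag looks like plaintext - do not commit!" >&2
fi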

Happy cooking!

A Better IIS Express Console Window


IIS Express is the de facto server to use for local development of ASP.NET MVC and Web Api apps. It’s just like its big brother IIS, minus a few features rarely used for local development. Unlike its big brother, IIS Express runs on demand as a regular console app under the security context of your current login. This makes it much easier to start and stop debugging sessions.

Being a console app is great – you can see System.Diagnostics.Debug.Print and System.Diagnostics.Trace.Write output right in the console alongside IIS’ usual log statements for HTTP requests.

A really useful trick is to create a Windows Explorer shortcut to iisexpress.exe, and open that shortcut iisexpress.exe.lnk file instead of directly opening iisexpress.exe. There are two benefits to this:

  1. iisexpress.exe gets a dedicated icon on the Windows taskbar. In the screenshot below, I can press WinKey + 5 to quickly switch to my IIS Express console output. (WinKey + N focuses/opens the Nth item on the taskbar; repeat as needed if you have multiple windows grouped for that taskbar icon.)

  2. I can customize the command prompt preferences for just iisexpress.exe. In the screenshot below, I’m using a smaller font in purple, with the window stretched across the entire 1600-pixel width of my display. This helps greatly with the readability of long lines of text in the console output.

Screenshot of the iisexpress.exe open in a custom window

Here’s a closer look at the console output:

Screenshot of the iisexpress.exe console output

Here are screenshots of the Explorer settings I used for C:\Program Files\IIS Express\iisexpress.exe.lnk:

Screenshots of the iisexpress.exe.lnk settings
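
If you’d rather script the shortcut than click through Explorer, a small batch sketch like this should work; it writes a throwaway VBScript that calls WScript.Shell’s CreateShortcut (the shortcut location, /path, and /port values are only examples):

@ECHO OFF
SETLOCAL
SET _VBS=%TEMP%\make-iisexpress-lnk.vbs
:: generate a tiny VBScript because batch cannot create .lnk files natively
>  "%_VBS%" ECHO Set sh = CreateObject("WScript.Shell")
>> "%_VBS%" ECHO Set lnk = sh.CreateShortcut("%USERPROFILE%\Desktop\iisexpress.exe.lnk")
>> "%_VBS%" ECHO lnk.TargetPath = "%ProgramFiles%\IIS Express\iisexpress.exe"
>> "%_VBS%" ECHO lnk.Arguments = "/path:C:\inetpub\myapp /port:8080"
>> "%_VBS%" ECHO lnk.Save
cscript //nologo "%_VBS%"
DEL "%_VBS%"
ENDLOCAL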

How to Verify Administrative Rights in a Windows Batch Script


While working on automated provisioning of a Jenkins slave server on Windows, I needed to verify that one of my batch scripts was running with administrative privileges.

Turns out this problem is easy to solve these days as long as you don’t need to support XP. Thanks to and31415 on SO for the great post on using fsutil!

Here’s a working example:

@ECHO OFF
SETLOCAL ENABLEEXTENSIONS

:: verify we have admin privileges
CALL :IsAdmin || (ECHO %~n0: ERROR - administrative privileges required && EXIT /B 1)

ECHO "Hello, Admin!"

:EXIT
EXIT /B

:: function to verify admin/UAC privileges
:: CREDIT: http://stackoverflow.com/a/21295806/1995977
:IsAdmin
IF NOT EXIST "%SYSTEMROOT%\system32\fsutil.exe" (
  ECHO %~n0: WARNING - fsutil command not found; cannot verify administrative rights
) ELSE (
  "%SYSTEMROOT%\system32\fsutil.exe" dirty query "%SystemDrive%" >NUL 2>&1
)
EXIT /B
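
If fsutil is ever unavailable, another widely used check is NET SESSION, which only succeeds from an elevated prompt. A hedged alternative sketch (it does depend on the Server service being enabled):

:: alternative admin check: NET SESSION fails with "Access is denied" unless elevated
:IsAdminNetSession
NET SESSION >NUL 2>&1
EXIT /B %ERRORLEVEL%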

Shameless plug – learn more tips and tricks for batch scripting in my Guide to Windows Batch Scripting!

Configuring Vagrant to Dynamically Match Guest and Host CPU Architectures


Today a work colleague put together a nice Vagrantfile to run a Linux dev environment on our laptops. Vagrant is sweet for DevOps. The Vagrantfile worked great on his Macbook Pro. But it was no dice running on my Windows box – the VM was a 64-bit Linux VM (why wouldn’t a server be 64-bit?) and I’m on a 32-bit laptop (don’t ask why my corporate IT still issues 32-bit Windows images on 64-bit hardware!).

To my surprise, VirtualBox can actually run a 64-bit guest VM on a 32-bit host OS:

If you want to use 64-bit guest support on a 32-bit host operating system, you must also select a 64-bit operating system for the particular VM. Since supporting 64 bits on 32-bit hosts incurs additional overhead, VirtualBox only enables this support upon explicit request.

Source: http://www.virtualbox.org/manual/ch03.html

However, I learned Vagrant cloud boxes may forget to explicitly declare they want VirtualBox to enable 64-on-32 support. While changing the box “Operating System Type” from “Ubuntu” to “Ubuntu (64 bit)” would be an easy fix, I decided to see if Vagrant could dynamically choose the right guest CPU architecture based on the host OS’ CPU architecture. Our app would run as either 32-bit or 64-bit, so it made sense to run 32 on 32 and 64 on 64, right?

Turns out it is quite easy. The power of ruby as the config language for Vagrant really shines here:

Here are the relevant changes to our Vagrantfile to get Vagrant to run a 64-bit Linux guest on 64-bit hosts, and a 32-bit Linux guest on 32-bit hosts:

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  config.vm.box = "hashicorp/precise64"
  config.vm.box_url = "https://vagrantcloud.com/hashicorp/precise64/current/provider/virtualbox.box"

  # support 32 windows hosts :(
  if ENV["PROCESSOR_ARCHITECTURE"] == "x86"
    puts "falling back to 32-bit guest architecture"
    config.vm.box = "hashicorp/precise32"
    config.vm.box_url = "https://vagrantcloud.com/hashicorp/precise32/current/provider/virtualbox.box"
  end

  # ... lots more vagrant plugin and chef goodness ...

end
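
One caveat to hedge on: a 32-bit Ruby/Vagrant process running on 64-bit Windows reports PROCESSOR_ARCHITECTURE as x86 and puts the real architecture in PROCESSOR_ARCHITEW6432, so the simple check above could under-detect a 64-bit host. A slightly more defensive sketch of the same idea, meant to live inside the same Vagrant.configure block:

  # prefer PROCESSOR_ARCHITEW6432 (set for 32-bit processes on 64-bit Windows),
  # then PROCESSOR_ARCHITECTURE, then uname for Mac/Linux hosts
  arch = ENV["PROCESSOR_ARCHITEW6432"] || ENV["PROCESSOR_ARCHITECTURE"] || `uname -m`.strip
  if %w(x86 i386 i686).include?(arch.downcase)
    puts "falling back to 32-bit guest architecture"
    config.vm.box     = "hashicorp/precise32"
    config.vm.box_url = "https://vagrantcloud.com/hashicorp/precise32/current/provider/virtualbox.box"
  end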

Tips for Vagrant on Windows


I learned some interesting things today about running Vagrant on a Windows machine. Vagrant is an amazing tool for running a VM on your local dev box with a target platform (e.g., Linux) provisioned by code (e.g., Chef/Puppet/shell scripts).

Spaces in Paths

A hard lesson about Vagrant on Windows was that Vagrant uses Ruby heavily, and Ruby on Windows really, really doesn’t like spaces in paths.

The Vagrant installer can’t comply with the Windows Installer and Logo requirement to default to the %ProgramFiles% folder due to Ruby’s known issues with spaces in paths like C:\Program Files.

I was able to work around this with a symlink:

IF NOT EXIST "%ProgramFiles%\Vagrant" MKDIR "%ProgramFiles%\Vagrant"
MKLINK /D "%SystemRoot%\vagrant" "%ProgramFiles%\Vagrant"

I then ran the VirtualBox-4.3.8-92456-Win.exe installer using all defaults except for deselecting the USB support and Python scripting options.

TIP: do not install VirtualBox’s USB drivers if you have an enterprise USB device blocker/filter

I then followed with installing Vagrant_1.4.3.msi to C:\vagrant.

TIP: the Vagrant v1.5.0 installer is broken for Windows; use v1.4.3 until v1.5.1 is released.

VirtualBox in XP SP3 compatibility mode

I needed to configure a few VirtualBox binaries to run in XP SP3 compatibility mode for my Windows 7 SP1 Enterprise laptop. YMMV.

REM run VirtualBox in XP SP3 mode
REG ADD "HKCU\Software\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers" ^
        /v "%ProgramFiles%\Oracle\VirtualBox\VirtualBox.exe" ^
        /t REG_SZ  ^
        /d WINXPSP3
REG ADD "HKCU\Software\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers" ^
        /v "%ProgramFiles%\Oracle\VirtualBox\VBoxSVC.exe" ^
        /t REG_SZ  ^
        /d WINXPSP3
REG ADD "HKCU\Software\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers" ^
        /v "%ProgramFiles%\Oracle\VirtualBox\VBoxManage.exe" ^
        /t REG_SZ  ^
        /d WINXPSP3

Spaces in your home folder path

If your Windows username (or %USERPROFILE% path) includes spaces, you will need to set an environment variable %VAGRANT_HOME% to a path that does not contain spaces. The spaces caused many non-obvious errors with vagrant plugin install berkshelf and vagrant plugin install omnibus.

A simple fix was setting %VAGRANT_HOME% to “C:\VagrantHome”
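
To make that setting stick for future command prompts, something like this should do it (SETX persists the variable to the user environment, while SET covers the current session):

IF NOT EXIST C:\VagrantHome MKDIR C:\VagrantHome
SETX VAGRANT_HOME C:\VagrantHome
SET VAGRANT_HOME=C:\VagrantHome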

Example running a simple 32-bit Ubuntu LTS box on 32-bit Windows 7 SP1

I don’t really need the omnibus plugin here, but it proves the setup can install a plugin that would otherwise fail with spaces in the %USERPROFILE% path.

SETLOCAL
IF NOT EXIST C:\VagrantHome MKDIR C:\VagrantHome
PUSHD C:\VagrantHome
SET VAGRANT_HOME=C:\VagrantHome
PUSHD %TEMP%
MKDIR VagrantTest
CD VagrantTest
vagrant init hashicorp/precise32
vagrant box add hashicorp/precise32 https://vagrantcloud.com/hashicorp/precise32/version/1/provider/virtualbox.box
vagrant plugin install omnibus
vagrant up --provision
PAUSE
vagrant halt
vagrant destroy --force
CD ..
RMDIR /S /Q "%TEMP%\VagrantTest"
POPD
ENDLOCAL

Binding Jenkins to Port 80 on SUSE Linux


I’ve been helping an awesome colleague on DevOps for our Jenkins farm, which we use for continuous integration and continuous deployment to our preproduction environments.

We are really trying to do it right:

  • Use Puppet to provision the Jenkins master, Linux VM build slaves, Windows VM slaves, and even OS X bare metal slaves (for iOS builds)
  • Automate backups of Jenkins config files to a private GitHub repo for disaster recovery
  • Patch the GitHub OAuth plugin to make sure you have the same collaborator permissions (read/write/admin) in a Jenkins job as you do in the GitHub repo
  • Maintain a Jenkins staging environment to test upgrades to Jenkins and plugins to avoid surprises
  • Run Jenkins on the Long Term Support (LTS) release channel to avoid surprises

I wish my shop used CentOS or Debian; sadly we are stuck on SUSE Enterprise. SUSE is really good at turning 5 minute tasks on CentOS or Debian into uber frustrating hour-long ordeals.

One of the glitches we faced was running the Jenkins web UI on port 80. SUSE lacks the authbind package for binding to ports below 1024 as a non-root user. We wanted to run the Jenkins daemon as a regular, unprivileged user, so running as root was not an option.

We are currently smoke testing this LSB /etc/init.d/jenkins.portforwarding script, which is just a wrapper around iptables. So far, it seems to get the job done.
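
The heart of the approach is an iptables NAT redirect from port 80 to Jenkins’ default HTTP port; roughly something like this, assuming Jenkins listens on 8080 (the init script wraps rules of this sort with start/stop/status handling):

# redirect inbound HTTP to the unprivileged Jenkins port
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080
# cover connections made from the Jenkins box itself over loopback
iptables -t nat -A OUTPUT -o lo -p tcp --dport 80 -j REDIRECT --to-ports 8080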

If all goes well, I will merge this logic into a pull request for the Jenkins init.d script for openSUSE.

A Better FTP Client for Windows You Already Have: Git Bash’s Curl Command


My shop has a couple of internal FTP servers to mirror commonly used installers for .Net devs. Installers for apps like Visual Studio can be huge, so GitHub isn’t the best place for this, and it would also smoke most of our Dropbox quotas. So an FTP server seems like the 3rd best option.

We are a geographically distributed team, with a VPN to access internal servers. Even with a reliable VPN session over an ISP fiber connection, I’ve experienced lots of reliability problems downloading large files with the native Windows Explorer / Internet Explorer FTP client.

The Windows ftp command line client can be a pain to work with. Fortunately, the Git bash emulator for Windows (msysgit) includes a MinGW port of the awesome curl utility. The curl utility has all kinds of awesome features for downloading large files.

Here are a few options I found really useful:

curl -C - -v -O "ftp://ftp.example.com/path/to/file.zip"
  • -C - tells curl to automatically continue an interrupted download, if the server supports this feature
  • -v prints verbose stats, including dynamic progress info
  • -O automatically saves the file to the current working directory using the remote file name

I crafted this gist to enable downloading a large number of binaries related to .Net development from our FTP server.

Be warned, this hack spawns a new command prompt window for each download, so it can get a bit crazy. This seemed like the best worst way to download in parallel while also making sense of each download’s status.
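
The parallel download trick boils down to START spawning a detached cmd.exe per curl invocation. A trimmed-down sketch of the idea (the FTP host and file names here are only placeholders):

@ECHO OFF
SETLOCAL
SET FTP_BASE=ftp://ftp.example.com/installers
:: one console window per file, each resuming on its own if interrupted
FOR %%F IN (vs2013.iso resharper.exe webdeploy.msi) DO (
  START "curl %%F" cmd /c curl -C - -v -O "%FTP_BASE%/%%F"
)
ENDLOCAL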

Breaking the 3GB Memory Barrier of 32-bit Windows


My corporate laptop has 6 GB of RAM installed, but only sees 3 GB of logical RAM. Why? My corporate IT department images laptops with the 32-bit flavor of Windows 7.

As you can see in this screenshot from my Control Panel’s System information applet, installing more memory hits a glass ceiling with Windows at ~3GB.

System information screenshot showing 3GB of RAM

My laptop has 6 GB of physical RAM installed, yet my user applications have access to less than half of the physical memory!

Hacking a Solution: “Physical Virtual Memory”

Fortunately, there is a solution to this problem. It’s a hack and it uses a reasonably priced piece of 3rd party commercial software.

The solution combines a feature of Windows known as Physical Address Extensions (PAE) in tandem with a RAMDISK as the storage “disk” for the virtual memory paging file. The result is a total hack – we’re using a page file to expose the address space of physical memory. It’s “physical virtual” memory. An oxymoron if I ever heard one!

A commercial software package called Primo Ramdisk Standard by Romex Software is needed to create the Ramdisk. It’s $30/seat.

This is the only Ramdisk driver I could find that:

  1. Supports Windows 7
  2. Supports PAE
  3. Supports the Intel/AMD physical memory remapping (“Invisible Memory”) chipset feature (read more)
  4. Is not flagged as a removable storage device by our corporate data loss prevention nanny software

Performance

Indeed, the performance of this hack to use “physical virtual memory” will be worse than just using a 64-bit O/S with its address space of 2^64 bytes. Nevertheless, paging to a RAMDISK will always beat paging to a magnetic hard drive, and will probably beat paging to an SSD as well.

I speculate there are a number of very good reasons why corporate IT would deploy 32-bit over 64-bit – the limited availability of 64-bit client software for VPNs, anti-malware, remote backup agents, remote support agents, and encryption policy engines; the difficulty of recreating and testing a new image from scratch; and the density of older 32-bit laptops still in use.

Known Issues

Caveat Emptor: You must disable hibernation mode. Hibernation sporadically crashes during shutdown or startup when using this hack. The good news is you will not miss much. My laptop clocked faster times with a normal shutdown/startup cycle compared to the time required to enter and exit hibernation. The disk IO was just too slow to copy 6 GB of RAM contents into and out of the C:\hiberfil.sys hibernation file.

Testing

This setup was tested successfully for over one year on a Lenovo ThinkPad T410 with 6 GB of RAM (2 GB + 4 GB DIMMs) as well as one year on a Lenovo T420s with 8 GB of RAM. Please test your setup. Should your machine fail to restart after following the steps below, boot into Windows Safe Mode and disable/uninstall the RAMDISK driver and paging file.

Setup (8 steps)

Step 1

Enable PAE in the Windows boot options, disable hibernation in the power options for Windows, and reboot the system.

Run the following commands in Command Prompt (cmd.exe). Note this will force a restart in 30 seconds, so save your work.

bcdedit /set pae ForceEnable 
bcdedit /enum | FINDSTR pae 
powercfg.exe /hibernate off 
shutdown /r /t 30 /d p:1:1 

Screenshot of command prompt usage in step 1

Step 2

Install the commercial software Primo Ramdisk Standard by a vendor named Romex. There is a $30/seat license cost. Romex offers a 30 day free trial.

Step 3

Launch the Primo Ramdisk configuration program. (“%ProgramFiles%\Primo Ramdisk Standard Edition\FancyRd.exe”)

Step 4

Launch the dialog to configure “Invisible Memory Management”

Click the icon in the lower right corner of the configuration program that resembles a blue SD card and a yellow wrench. On the dialog, click the “Enable IM” button. The default options worked successfully on a Lenovo ThinkPad T410 (BIOS) and a Lenovo T420s (UEFI). See the Romex documentation on front-end/back-end reserve if you experience video card problems on your hardware.

Screenshot of configuring "Invisible Memory Management" in step 4

Step 5

Define a new RAMDISK

a) Take note of the maximum amount of available invisible memory as displayed in the lower right hand corner of the main window. This will be the size of the RAMDISK.

b) Click the “Create a new disk” toolbar button to define a new persistent RAMDISK

c) Select “Direct-IO” as the disk type. This is the faster of the two options. It is also the only device type our Credant data loss prevention software ignores.

d) Assign a drive letter of “Z”. This can be changed; however, a later step will need to be adjusted to match.

e) Leave “One Time Disk” unchecked to make this disk persistent across boots.

f) On the next dialog screen, enable the option for “Use Invisible Memory”. Leave all other options unchecked/disabled.

g) On the final dialog screen, select the FAT32 format and label the device “RAMDISK”.

Screenshots:

Screenshots of defining a new RAMDISK in step 5

Step 6

Modify Windows’ Virtual Memory settings

a) Run “sysdm.cpl” to open System Properties

b) Open the virtual memory dialog by selecting Advanced > Performance > Settings > Advanced > Virtual Memory > Change

c) Uncheck/disable “Automatically manage paging file size for all drives”

d) Select the “C:” drive in the drive list, and select the “No paging file” option. Click the Set button.

e) Select the “Z:” drive in the drive list, and select “Custom” size of X for initial and maximum, where X is the space available listed for the drive. You may need to slightly reduce X by ~5 megabytes.

f) Click the “Set” button and confirm your settings resemble the screenshot below. Click the “Ok” button.

Screenshot of modifying Windows virtual memory settings in step 6
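
If you prefer to script step 6 rather than click through the dialogs, WMIC can manage paging file settings. A hedged sketch (run from an elevated prompt; the 3000 MB figures are placeholders, so substitute the size Primo Ramdisk reports for the Z: drive):

:: stop Windows from managing the page file automatically
wmic computersystem where name="%COMPUTERNAME%" set AutomaticManagedPagefile=False
:: remove the page file on C: (matches step 6d)
wmic pagefileset where name="C:\\pagefile.sys" delete
:: create a fixed-size page file on the RAMDISK (matches step 6e)
wmic pagefileset create name="Z:\pagefile.sys"
wmic pagefileset where name="Z:\\pagefile.sys" set InitialSize=3000,MaximumSize=3000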

Step 7

Hide the Z: drive from Explorer

Windows will be very annoying about the Z: drive being full. You can hide this drive from Explorer and the common dialogs with the following registry setting. Note you can still explicitly access this drive with a full file path in any open/save dialog (e.g., Z:\folder\file.ext). If you changed the drive letter for the RAMDISK from Z: to something else, you will need to adjust the hex value of the registry key; NoDrives is a bitmask of drive letters, with bit 0 = A: through bit 25 = Z:, so Z: alone is 0x02000000 (see TechNet for details).

Run the following commands in Command Prompt (cmd.exe):

REG add HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer /v NoDrives /t REG_DWORD /d 0x02000000
REG add HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer /v NoLowDiskSpaceChecks /t REG_DWORD /d 1

Screenshot of disabling Explorer disk space warnings for the new RAMDISK in step 7

Step 8

Reboot

It’s Windows, why not throw in a reboot?

Final Thoughts

My Windows setup recommends 3 GB of virtual memory. I’d like to try upgrading my physical RAM from 6 GB to 8 GB. This would let me add another gigabyte to the paging file. It would also leave another 1 GB of free space on Z:. I’m considering using this free space as an NTFS junction point for “%TEMP%” and “%SYSTEMROOT%\TEMP” to make the temp folders both fast and non-persistent between reboots. (Junction points are the Windows equivalent of *nix symlinks for directories. You can use the Sysinternals utility junction.exe or the Primo Ramdisk utility to define junction points.)
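
If I go the temp-folder route, the simplest sketch I can think of skips junction points entirely and just repoints the user-level TEMP/TMP variables at the RAMDISK (this assumes a Z:\Temp folder gets recreated at logon, since Z: contents are wiped on reboot; junction.exe or MKLINK /J would be the fallback for tools that hard-code the original paths):

IF NOT EXIST Z:\Temp MKDIR Z:\Temp
SETX TEMP Z:\Temp
SETX TMP Z:\Temp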

I also want to test setting my IIS document root to Z: to make tests of deployment packages lightning fast (i.e., relocating the IIS document root from C:\inetpub to Z:\inetpub). This will make disk I/O way faster for copying scores of little image and text files. It also forces me to run an automated build/package/deploy between reboots (since Z:\ is wiped between reboots).

Are Great Developers Both Left and Right Brain Expressive?


My wonderful wife pointed me to this outstanding visualization of left vs. right brain expression.

It made me think that a great developer is probably expressive on both sides: you clearly need the academic properties of the left brain – logic, analysis, objectivity.

But right-side creativity is also needed to create something worth using, something that impacts our daily lives, something with an outstanding user experience.

Creative Commons visualization by VaXzine


What do you think? Are great devs truly ambidextrous of the mind?

GitHub Sings the Praises of a Distributed Workforce


Tom Preston-Werner, co-founder of GitHub.com, highlights the competitive advantages behind a number of company virtues I admire. A few of these virtues are organic growth, outstanding user experience, and a distributed workforce.

Below is a video excerpt from a fireside chat interview with Mr. Preston-Werner from July 2013, speaking to the benefits of remote workers, particularly developers:

One of the most memorable quotations from the interview is:

“Companies that aren’t distributed can’t possibly say that they hire the best people.”

I have the privilege of working at a great employer that also “gets it”. Most of my colleagues are remote workers across nearly every time zone. Constraining your team to a single city is a self-imposed barrier, particularly for creative work like coding that fits brilliantly with remote collaboration.