Project Retrospective Part 3: tools for managing software projects

This is the third part in my series on the migration of to Umbraco.  This post will cover the tools and processes I used to manage this project.

Development Process Overview

Here is a high-level overview of the implementation process – some of the development steps were done in parallel:

  1. Interview stakeholders – understand their workflow and goals for the new site
  2. Perform in-depth site survey – build a sitemap to understand site structure
  3. Document types/page field survey – survey all fields (properties) used on each page, think about how they are related
  4. Template design – a combination of scraping the visual design of the old site and building static HTML for the new one
  5. Document type design – implement the document types in Umbraco based on the Page Field survey results
  6. Custom functionality – implement all the custom functionality not provided by Umbraco – payments, custom widgets, etc.
  7. Content migration – prepare a set of migration tools which would be run at the time of the switch
  8. Live release – execute the switch, including the final migration of content and media from old site to new.

Tools Overview:

  • Basecamp: used in lieu of email for informal discussions.
  • JIRA: manages all development tasks.
  • Bitbucket (git): contains source code & database scripts.
  • TeamCity: build server used for continuous integration of the dev servers and production releases.
  • Amazon Web Services: hosts the application, including the website, DB, email, storage, etc.
  • Hangouts/Skype: a permanent Hangout is used for ongoing discussion, with weekly video calls on Hangouts or Skype for team meetings.
  • Evernote: a project notebook contains interview notes and other technical snippets which I might need to refer to.
  • LastPass: password manager which I use to store all project credentials and share them with the team.

Project Tools in Detail:

JIRA Release Schedule:

A JIRA project is the first development artifact that I create.  It contains the development plan, all the individual dev tasks, time tracking, and links to git commits.  A project schedule is important to stakeholders, so I organize a high-level task list into pre-launch and post-launch releases:

JIRA Kanban Board

The JIRA Agile Board is how I organize my tasks.  If there is a dedicated PM who is familiar with scrum, I will use the scrum board; otherwise I will use the more flexible Kanban board.  I configured it into the standard four lanes:

JIRA Task Detail:

Pretty standard.  I link JIRA to git so the commits for each task are visible, and also to TeamCity, so that the build status for each commit is linked.  As I work on stories, I add screenshots and technical notes, for myself as much as for the tester/product owner.


I use a modified git-flow process – each commit is tagged with a build, releases are tagged by date, and released code is merged to master.
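A minimal sketch of that tagging scheme in git – the tag names here are illustrative, not the project's actual conventions:

```shell
# tag the current commit with its CI build number
git tag build-1042

# tag a release by date, then merge the released code back to master
git tag release-2015-09-26
git checkout master
git merge --no-ff develop   # assuming a git-flow style develop branch
```

Using annotated tags (`git tag -a`) instead would also record who cut the release and when.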



Each commit is automatically deployed to the dev site.  Live releases are triggered from TeamCity too.  I configure the Publish Web wizard in Visual Studio – this creates an msbuild configuration which I can trigger in the TeamCity build:

New Relic Monitoring

New Relic is pretty essential to running lots of websites without an ops team.  It sends alerts when there is any problem and makes it easy to identify problematic components.



Evernote contains technical notes, potential third-party components, client interviews, airplane tickets, and a lot of other information I may need to refer to:



Last but not least, I write regular emails and reports to communicate project status information to non-technical stakeholders and educate them about the development process.


How to regain access to an AWS EC2 instance that you’ve lost access to

Last night I accidentally locked myself out of a production EC2 instance. Arg! Panic!

How I regained access:

1: Take a snapshot of the instance.  (Note: if you require 100% uptime, this is a good time to restore the snapshot to a new instance and switch the Elastic IP to it while you fix the issue. )
2: Launch new Ubuntu recovery instance *in the same AZ* using a key file you have access to.
3: Start and SSH to your new recovery instance.
4: Create a new volume *in the same AZ* from your snapshot
5: Attach the volume you created as device /dev/sdf to your recovery instance. (You need to attach the volume after the instance is running because Linux may boot to the attached volume instead of the boot volume and you’ll still be locked out.)
6: On your new instance, run lsblk. You should see the default 8GB volume and the backed up volume you just attached.  (More @ AWS Support):

ubuntu@ip-172-31-3-214:~$ lsblk
xvda 202:0 0 8G 0 disk
└─xvda1 202:1 0 8G 0 part /
xvdf 202:80 0 100G 0 disk
└─xvdf1 202:81 0 100G 0 part

7: Mount the backed up volume using the name from lsblk:

sudo mkdir /recovery
sudo mount /dev/xvdf1 /recovery

8: Now you can cd /recovery/home and fix your login issue.
If you lost your access key, edit /recovery/home/ubuntu/.ssh/authorized_keys
You can copy the public key from the new Ubuntu instance that you know you have access to.  Worst case, copy the .ssh directory or the entire /home/ubuntu folder from the new instance to the locked-out volume.

9: Assuming you fixed your permission issue, stop the instance and detach the repaired volume.
10: Detach the old locked-out volume from your original instance and attach the repaired volume under /dev/sda1
11: Start the instance – you should have access now.  Whew.  Next time, take a snapshot before making configuration changes!
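Step 8 in shell form – a sketch, assuming the recovery instance's own key should also unlock the repaired volume (uid/gid 1000 is the default ubuntu user on stock Ubuntu AMIs):

```shell
# On the recovery instance, with the broken volume mounted at /recovery:
# replace the locked-out user's authorized_keys with one you control.
sudo cp ~/.ssh/authorized_keys /recovery/home/ubuntu/.ssh/authorized_keys

# sshd is strict about ownership and permissions on these files
sudo chown 1000:1000 /recovery/home/ubuntu/.ssh/authorized_keys
sudo chmod 600 /recovery/home/ubuntu/.ssh/authorized_keys
```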

Thanks to StackOverflow for help with some tricky bits.

Distributing .Net apps for Windows and OS X

I wanted to distribute the desktop client I made as a single .exe, without any installer or additional DLLs.  I used ILMerge, a tool from Microsoft Research, to merge all the assemblies into one file. (If you have the .Net 4.5 beta installed like me, read this  to target .Net 4.0.)  I merged the assemblies with ILMerge:

ILMerge.exe /target:CryptAByte /out:Crypt.exe /targetplatform:"v4,C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.NETFramework\v4.0" CryptAByte.exe CryptAByte.CryptoLibrary.dll CryptAByte.Domain.dll Ionic.Zip.dll

ILMerge has the added benefit of making the merged file slightly smaller than the sum of its parts.

I wanted my app to run on OS X too. After I stripped out the Entity Framework Code First data annotations from my data structures, it compiled and ran smoothly on OS X:

Actually, that doesn’t look very good at all. The Mono WinForms color scheme has some kind of sickly yellow tinge to it.  I want to distribute this without requiring Mono to be installed, so I used macpack:

macpack -m:2 -o:. -r:/Library/Frameworks/Mono.framework/Versions/Current/lib/ CryptAByte.Domain.dll CryptAByte.CryptoLibrary.dll Ionic.Zip.dll -n:CryptAByte -a:CryptAByte.exe

The resulting .app was over 300MB!  Looks like I’m going to have to write a native client for OS X.

Getting around packet-inspecting firewalls with free VPN+proxy tools

Surfing the Internet in China requires some creativity to work around the government’s packet sniffing firewall which monitors all traffic into the country. “Packet sniffing” means that a simple proxy will not work – you must encrypt the traffic to prevent the contents of the data from being inspected. Here is a quick tutorial.

The most important part is the tunneling VPN. For this, I chose Hamachi – a free VPN solution from LogMeIn.  Because the VPN only provides an encrypted tunnel, you still need a proxy server running on the outside.

You have many options for your proxy. Privoxy is easiest to configure and by default it blocks ads and other junk, improving your experience and saving you bandwidth.

Next, you need something to help you manage the Proxy settings on your machine. You can enable it manually, but generally you do not want the proxy enabled for 100% of your traffic. For this I suggest Proxy Switchy – a Google Chrome browser plugin to auto-proxy blocked sites. For Firefox there is Foxy Proxy, but it is not as easy to use. Proxy Switchy makes its settings global, so other apps also use its settings.

Here is my stack:

  1. My browser on a slow Chinese network
  2. Hamachi VPN tunneling to encrypt everything
  3. Squid on a fast connection inside the Great Firewall for high-speed local proxy
  4. Privoxy in the USA

Because my home connection is slow, I use Squid – a caching web proxy – to cache data on a computer near me. You can also run Squid on your local PC.

Other proxy servers which you can use to speed up your connection:

  • Polipo for DNS caching, HTTP optimization, pipelining, etc
  • Apache with PageSpeed for optimizing web page content (combining, inlining, minifying, image optimizing, etc.)

You can use these proxies instead of Privoxy or you can layer them together in sequence.
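Layering is straightforward with Squid’s cache_peer directive. Here is a sketch of a squid.conf fragment that forwards all requests to an upstream Privoxy instance – the host name is a placeholder, and 8118 is Privoxy’s default listening port:

```
# squid.conf: chain every request through an upstream Privoxy
cache_peer privoxy.example.com parent 8118 0 no-query default
never_direct allow all    # never bypass the parent proxy
```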

Addendum: how to configure Proxy Switchy:

Proxy Switchy is a proxy helper extension for the Google Chrome browser. It works with your existing VPN/proxy solution. The cool thing it does is automatically switch over to the proxy just for the sites that need it so you get a seamless transition. To get you started, it comes with a default switch rule list which works for most sites blocked by the GFC.  Even though this extension is for Google Chrome, it exports its settings to the system settings, so it works with any browser.

You can use the online rule list at

You can see how I configured some of the rules below:

Setting up continuous integration with TFS

This is a quick visual guide to setting up continuous integration with TFS 2010.

TFS is used to create build definitions for specific trigger criteria.  After the builds are copied to the drop folder, MS Build calls the MSDeploy publishing service to update the target website.

1: Configure a new TFS build controller and connect it to your TFS team project collection:


2: Add a new build definition:

3:  Choose continuous integration build triggers.  I set up two triggers: one for a rolling build, and one to run every day at 3 AM.

4: Specify the build controller and staging location (build output folder) for the builds

5:  Now you need to copy the builds from the build folder to the web server.  We can do this using MSDeploy.

In the “MS Build Arguments”  field, I put:

/p:DeployOnBuild=True /p:DeployTarget=MsDeployPublish /p:MSDeployServiceURL= /p:DeployIISAppPath="/OdinDaily" /p:CreatePackageOnPublish=False /p:MsDeployPublishMethod=RemoteAgent /p:AllowUntrustedCertificate=True /p:username=MYDOMAIN\SERVICE_ACCOUNT /p:password=PASSWORD

Note that TFS will run unit tests by default.

(TODO: I need to figure out how to use integrated authentication so I don’t have to save the credentials in plaintext.)


Now I need to configure the website on the target server:

6:  Setup the integration web server.  Use the same application path as the DeployIISAppPath above.

7:  Install the web deployment tool 2.0 from

When you install the tool, make sure the Remote Agent Service is included in the install.

8:  The Remote Agent Service is manual by default.  Change it to automatic.

9:   That should be it.  Now you should be able to “Queue a New Build” or (depending on your build trigger) check in some code and have your website updated!

You should be able to see your builds in TFS by double-clicking on the build definition.  Any successful or failed builds will show up here:


Closing Notes:

The TFS server, build server, build drop location, and web server can all be separate machines.  Using web deployment, we can locate the web server anywhere on the Internet, so we can use this method for one-click deployment to production as well.  However, we probably don’t want to save the credentials for the production server in the build definition to avoid accidental one-click deployment to live servers!



If you just want to deploy the build output to a folder or file share, you can modify the build process template. Add a CopyDirectory to the step after RunOnAgent.  (Source)


Great image optimization tools for Windows and Mac

Over the last few weeks, I’ve experimented with image optimization tools. Using these tools, I have rapidly eliminated gigabytes of image data from thousands of images without any quality loss. Over time, this should translate to many terabytes of bandwidth savings.

Because these tools can be run in batch mode on thousands of images at a time, they are useful for optimizing large, existing image libraries.   They are lossless and designed for bulk mode, which means you can safely run them without any loss in image quality. But be careful: test on small samples first and learn their specialties and quirks.

An alternative to running them locally is to use an online service from Yahoo! that “uses optimization techniques specific to image format to [losslessly] remove unnecessary bytes from image files” using the same tools. The best way to run it is via Yahoo! YSlow, an add-on for the Firebug add-on for Firefox. (By the way, the service renames GIF images to .gif.png when it shrinks them. I wrote a console app to rename them back to .gif.  Browsers only look at the image header to identify images, so it’s safe to serve up PNG images with a .gif extension.)
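A batch rename like the console app described above can also be done with a one-line shell loop (a sketch; run it in the folder containing the renamed files):

```shell
# strip the extra .png suffix the optimizer added to GIF files
for f in *.gif.png; do
  mv -- "$f" "${f%.png}"   # a.gif.png -> a.gif
done
```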


ImageOptim Screenshot

For OS X, all you need is ImageOptim, which optimizes JPEG, PNG, and GIF auto-magically.  Seriously awesome tool.  (Free.)

For lossy optimization, JPEGmini is amazing.  It uses knowledge of human visual perception to shrink JPEGs up to 5X without visible quality loss.  (Semi-free.)


RIOT (Radical Image Optimization Tool):

Though it has a batch mode, this is the best tool for optimizing single images, whether they are JPG, PNG, or GIF. I use RIOT to save every image I work on as well as to reduce the size of existing images that are too large. You can re-compress PNG and GIF images losslessly, but for JPG you want to save from the original file.

RIOT is available as a standalone version as well as a plugin for several image editors such as the excellent IrfanView.


RIOT - Radical Image Optimization Tool screenshot

The JPEG Reducer:

Run this tool in bulk on all your JPEG images to save ~10% of the file size. This is a GUI front end for jpegtran, which optimizes JPEG images by removing metadata and other non-display data. Because it is lossless, it is safe to run on all your images. It will ignore any files you add which are not really JPEG images.
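If you prefer the command line, jpegtran itself can do the same lossless optimization directly – a sketch, assuming libjpeg's jpegtran is installed and `in.jpg` exists:

```shell
# losslessly optimize a JPEG: strip metadata (-copy none) and
# rebuild the Huffman tables (-optimize)
jpegtran -copy none -optimize -outfile out.jpg in.jpg
```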

The JPEG reducer screenshot

PNG Gauntlet:

This tool is a front end for PNGOUT which will losslessly reduce the size of PNG images. Warning: if you add GIF or JPEG images to it, it will create PNG versions of those images. Sometimes you want to do this, but if not, don’t add images to the queue.

PNG gauntlet screenshot

Let me know if you have other tools or ideas for image optimization.


Running your first .Net app on OS X

Installing Mono will allow you to run .Net applications on OS X – as long as they are 100% .Net and do not use any native Windows APIs. However, if you try to simply double-click a .Net exe, OS X will not know what to do with it – or, if you have VMware or Wine installed, it will try to open the executable in one of those applications.

To open a .Net exe with mono, you must open a terminal window, switch to the application directory, and type “mono Application.exe.” If you get an error stating something like “System.DllNotFoundException: gdiplus.dll” it’s probably because your application uses System.Windows.Forms, which must run under X11.  To test X11, install the latest version and run the app again from the X11 terminal.  Did it work? Great!

You probably don’t want to open X11 and type a command in terminal every time you want to run a .Net application, so you can make a script to do it for you.  Open ScriptEditor (/Applications/Utilities/AppleScript/) and create a new script.  Type something like:

do shell script "mono /Users/YOU/Downloads/YourApplication.exe"

Save the script as an Application and you’re done!  Now you can run it just like any native OS X app.  You can try it with SharpChess – just download the exe version.  I was able to download the source of SharpChess, compile it with MonoDevelop on OS X, and run the exe I made on both Mono/OS X and Microsoft .Net/Windows.  Unfortunately, Mono’s implementation of WinForms does not use the native Cocoa API, so it doesn’t look very good – I’ll work on that later.

Customizing Terminal when compiling Mono apps

Pkg-config is a helper tool used when compiling applications and libraries.  If you want to build Mono apps from source using configuration scripts, you will need to put the Mono.pc path in your PKG_CONFIG_PATH environment variable.  If it’s not set, you will get an error like “configure: error: missing the mono.pc file, usually found in the mono-devel package” or “Failed to initialize the ‘Mono 3.5 Profile’ (mono-3.5) target framework”

To add the location of mono.pc, edit .profile or .bashrc in the root of your home folder and add this line:

export PKG_CONFIG_PATH=/Library/Frameworks/Mono.framework/Versions/Current/lib/pkgconfig/

Here is my full .profile file:

export PATH=/Applications/Windows/Darwine/Wine.bundle/Contents/bin:/opt/local/bin:/opt/local/sbin:$PATH
export PKG_CONFIG_PATH=/Library/Frameworks/Mono.framework/Versions/Current/lib/pkgconfig/
export DISPLAY=:0.0

(When using MacPorts, the path is /opt/local/lib/pkgconfig/)
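To check that the variable took effect, you can ask pkg-config directly – a sketch, assuming Mono is installed at the framework path above:

```shell
export PKG_CONFIG_PATH=/Library/Frameworks/Mono.framework/Versions/Current/lib/pkgconfig/
# prints Mono's version if mono.pc is found; exits non-zero otherwise
pkg-config --exists mono && pkg-config --modversion mono
```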

Tutorial #2: Organizing your music library

My music library began sometime in the late 90’s with hours spent waiting for MP3’s to download from the Internet or be ripped with my 4X CD-ROM and AMD K6-2 processor. Today, I mostly get my music from the Amazon MP3 store and the bargain shelf at Half-Price Books. When I was preparing to transfer my music to my new Mac, I wanted a way to clean up my music library and add correct meta data (song info and album art) to all my songs.

Tagging and organizing files:

Thanks to audio fingerprinting technology, it is now possible to quickly identify a song based on a short sample of its audio. There are a number of free and commercial tools for doing this. The one I used is MusicBrainz Picard – a free program for Mac and Windows. Picard comes with a graphical online tutorial, so I’ll just provide some additional tricks:

Be sure to go through Picard’s options as well as the list of plugins available online. I got the cover art downloader to get album art from some additional sources. If you are willing to do additional work, you can get plugins that automatically search Google Images for album art for Picard and iTunes.

Unfortunately, Picard does not support AAC files, so any files purchased from the iTunes music store will not be updated – but that’s OK, since iTunes music is already tagged. I enabled the rename-file option and set the Various Artists custom field to nothing. This moves songs from compilation CDs into the folders of their respective artists. I checked the Move Files option and picked a folder on my desktop so I could see just how much music was tagged. This also allowed me to delete all the empty folders.

Ultimately, Picard was able to look up about 90% of my music and tag 96%. (You can use existing tags to find album art even if the audio fingerprint is not recognized.) About the only tracks it did not recognize were AAC-encoded files and bootlegs recorded from radio and live events.

Fixing corrupt MP3’s:

After tagging all my music, I had to fix all my corrupt files. iTunes is very picky about only playing files that conform to the MP3 standard, and will skip any that don’t. This is a tricky problem to identify: iTunes silently skips over bad MP3 files in Windows and silently refuses to import them at all in OS X. I only noticed it was doing so because of the different song counts in iTunes. Ultimately, over 20% of my music was corrupt – if your MP3s came from the Internet, or are a number of years old, your count may be much higher.

The best tool I found to fix corrupt files was MP3 Validator. It’s a Windows application, but I was able to run it in CrossOver for Mac, and I assume it will work in other Wine distributions as well. Just follow the instructions to run it on all of your files before importing them into iTunes.

iTunes AppleScripts and helper apps:

Once your music is in iTunes, a few helper applications and scripts will keep it organized. Doug’s AppleScripts for iTunes has hundreds of scripts you can run to perform various operations in iTunes. My favorites are Super Remove Dead Tracks and a script to embed album art in files. I also use GimmeSomeTune to automatically look up lyrics for songs and control iTunes with my Apple Remote. In Windows, I also used TuneUp, a commercial music labeling and cover art application that is “coming this fall” to the Mac.

Tutorial #1: Using a Mac with a Windows PC

This post is about getting started: moving your files to the Mac (and back), sharing hardware, and running Windows applications on your Mac.

Sharing input devices between a Mac and Windows (or Linux) PC

I like working on my Mac, but I still have many things that I need my Windows PC for, so I dedicate a single monitor to each on my desk. I switch between OS X and Windows constantly, but I don’t want to keep two keyboards and two mice on my desk.

One solution is to use a KVM switch. A KVM switch is a gizmo that allows you to switch your keyboard, video, and mouse between two computers. But we already have two monitors, and so there is a free software option: Synergy. Synergy allows you to share input devices and clipboards between two computers, kind of like two monitors connected to a single computer.  Get it for Mac, Windows, and Linux.

After you install Synergy on both computers, you need to set up one computer as a client and the other as a server. I use my Windows PC as the server, since it’s always connected. LifeHacker has some detailed instructions for setting it up.

Sharing and synchronizing files between a Mac and a Windows PC

If you’re like me, there are some files you want to have copies of on both Windows and Mac, and some files you want to keep on one PC and access from both. For example, I want to take my music with me but also want to be able to play it at home. I have a huge photo collection that requires my Windows-licensed editing software, so I keep that on my PC. I also have my work files, which I keep on a portable hard drive so I can access them on any computer.

To share files:

First, get both computers online on the same network. Second, enable file sharing in OS X and in Windows XP or Vista. When specifying the folder to share, I shared the home folder of my respective Windows and OS X user accounts, which gave me access to my music, docs, etc. Now you can easily copy files and directories from one computer to another. I suggest you map the Mac share as a drive in Windows. In OS X, you can select Go -> “Connect to Server” and then type “smb://PCNAME”, where PCNAME is the Windows computer name (Computer -> Properties), to mount any of your shares in Finder.
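The same mount can be scripted from the OS X Terminal with mount_smbfs – a sketch, where PCNAME, user, and SHARE are placeholders for your own machine name, Windows account, and share name:

```shell
# create a mount point and mount the Windows share over SMB
mkdir -p /Volumes/pcshare
mount_smbfs //user@PCNAME/SHARE /Volumes/pcshare

# when finished:
umount /Volumes/pcshare
```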

To keep files in sync:

There are many neat OS-X exclusive tools for editing music (more on that in a later tutorial) so I want to be able to edit my music and synchronize it back to Windows, including any moved and deleted files. Furthermore, I want to synchronize just the changes, without having to delete my Windows version and copy everything every time. The best tool I’ve found for doing this is Microsoft SyncToy. It’s Windows-only, but you can use it to synchronize folders either way. There’s an outdated tutorial on this at LifeHacker, but I suggest you try all three synchronization options, and use the “Preview” option before every sync to verify that it’s doing what you want.

Share a backup drive:

You have automatic backup enabled on all your computers, right?  Right? Backing up your Mac is very simple: just pick a destination for your backups and enable Time Machine. In my case, I have a shared external drive I use for several Windows computers as well as my Mac.  Time Machine requires a natively formatted partition, so I had to shrink the NTFS partition in Windows, create an empty volume for the Time Machine partition, and then format that partition with Disk Utility to be a native OS X volume.  Now I can use a single drive for all my Windows and Mac backups.
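The formatting step might look something like this in Terminal – a sketch only: the disk identifier disk2s3 is hypothetical, so check yours with diskutil list first, and note that eraseVolume destroys everything on that volume:

```shell
# list disks to find the external drive's volume identifier
diskutil list

# format the empty volume (assumed to be disk2s3) as journaled HFS+
# so Time Machine can use it
diskutil eraseVolume JHFS+ TimeMachine disk2s3
```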

Accessing your files remotely from another computer:

A great free tool for accessing your files remotely from another computer is another Microsoft product – Windows Live FolderShare. It works quite well with both Windows and OS X, even behind firewalls. You can set up synchronized folders, or just browse the entire file system on the web.

Running Windows (and Linux) apps on a Mac

There are three ways you can run Windows applications on a Mac:

· Use Boot Camp to install Windows on a dedicated partition and reboot to switch operating systems.

· Run Windows in a virtual machine application such as VMware or Parallels. Both can use your Boot Camp partition, so you don’t need to keep two installations.

· Run Windows apps natively with Wine. Wine is a compatibility layer that allows you to run most Windows applications at full speed. I use the commercial product CrossOver for Mac, but you can also use the free Darwine. (Get the latest unofficial build of Darwine.) Using the unofficial TriX add-on that comes with Darwine, you can add native Windows components to Wine, increasing the range of Windows applications you can run.

Viewing Windows files on OS X

OS X does not support Windows Media files by default, but there is an easy fix for that: Flip4Mac adds Windows Media support to QuickTime, and the VLC media player will play pretty much everything else you’ve got. While you’re on the Microsoft website, you may want to get Microsoft Remote Desktop for OS X, so you can connect remotely to Windows PCs.

Using the Apple Keyboard in Windows and Logitech keyboards/mice with OS X

I have an Apple Keyboard which I occasionally use with Windows, so I got AutoHotkey and cobbled together a script to remap some of the keys in Windows to their usual positions on a Windows keyboard.

I also got Logitech Control Center for my Logitech keyboard and mice – unfortunately it only recognizes one of my two mice.