30 December 2008

Speech / Talking Results for RSpactor

After discovering that autospec was taking up a lot of CPU while it was "idle", I looked for alternatives. I found RSpactor, which doesn't take as much CPU, and is better since it's a dedicated window with nice GUI results and so on. My only gripe was that I'd gotten used to the spoken results output I'd rigged up for Autospec.

I really prefer the spoken results because it is not visually distracting, and doesn't require me to be paying attention to the area of the screen where they pop up (I use a 30" monitor, plus the 17" laptop monitor, so I'm not always looking at the right spot for the Growl notices, and I don't like the monitor-wide growls). Lucky for me, RSpactor is open source and is up on GitHub.

I thought it was an Objective-C Cocoa app, but as it turns out it's a RubyCocoa app. I'd built RubyCocoa apps before, so I was familiar with that, plus, of course, I know Ruby. It wouldn't have mattered either way (I'm fine working in ObjC as well), but this did make things a bit faster.

Anyway, I did a quick bit of work and got a new preferences panel for Speech added, and then rigged that up to the test results, so that I now have my desired spoken results. A slight improvement along the way: it (optionally) speaks the number of passing/failing/pending tests. Just insert a question mark in the string/phrase you want spoken for each, and it'll say the number at that spot.
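
As a toy illustration of how the substitution behaves (this isn't the actual RSpactor code, just the gist):

# Toy illustration only -- not RSpactor's actual implementation
phrase = "? examples failed"        # the phrase configured in the Speech prefs
count  = 3                          # the failure count from the test run
puts phrase.gsub('?', count.to_s)   # => "3 examples failed"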

I've sent a pull request to RubyPhunk, but no guarantees it will get added to the main line. In the meantime, if you're interested, grab it from my RSpactor fork on GitHub. Update: RubyPhunk integrated my changes into the main RSpactor code.

29 December 2008

DealRank™, Similar and Nearby Hotels and Deals, and "Interesting" Deals

Over the last couple weeks, we've built some cool new features on DealBase.com. Three in particular...

Interesting Deals Pages



This is fairly simple, but one of my favorites. There is a list of these pages on the home page in the right hand column, under "Interesting Deals". You can find the cheapest deals, the best deals at 5 star hotels, and so on. But, my favorite to check out is the Most Expensive Deals page. This is where we list the most expensive hotel packages, and you can get a glimpse of what the uber-rich might plunk down for. Deals currently run as much as $110,000/night, yes, per night. But go take a look and check out what those deals include. Things like a private jet for transportation, $40,000 in jewelry, and so on. Oh, and one includes a trip to Russia and, you know, to make it extra cute, a "Presidential Puppy"! Crazy.

This page is just a lot of fun to explore and see what some of the hotels come up with.

DealRank™



DealRank™ is our new way of ranking deals to show you the best value. This is an algorithmic/mathematical determination based on various criteria for deals. It factors in the nightly rate, percentage savings, and various other aspects. We expect this to be one of the best ways to sort deals and help you evaluate what is truly a great deal. Each person looking at hotels has different reasons and criteria for evaluating deals. Some want the absolutely best price, some want the best overall value, some are looking to get the biggest savings, or the most extras thrown in, and so on. DealRank™ should help provide a more holistic view on this. We've done a lot of testing and tweaking here, and will continue to do so as the site evolves, but I am pretty darn happy with the results so far. We still have many other ways to sort deals (see the sort menu in the upper right of any deal listing page), but this is our new default.
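
To give a flavor of the general idea, here's a purely hypothetical sketch of a weighted blend of normalized criteria (this is not the real DealRank formula or weights):

# Purely hypothetical -- not the real DealRank formula or weights
def toy_deal_rank(nightly_rate, percent_savings)
  rate_score    = 1.0 / (1.0 + nightly_rate / 100.0)  # cheaper nights score higher
  savings_score = percent_savings / 100.0             # bigger discounts score higher
  0.5 * savings_score + 0.5 * rate_score
end

toy_deal_rank(199.0, 45.0)  # => ~0.39; deals then sort by score, descending

The real version factors in more criteria than this, but the principle of blending normalized factors into one sortable score is the same.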

Check this out on say the New York City hotel deals page, where previously the most economical deals didn't always rise to the top (percentage savings wise, NYC tends to have the best deals at more expensive hotels).

Similar and Nearby Hotels and Deals



Another quite useful feature is the list of similar and nearby hotels or deals (depending on what kind of page you're on) at the bottom of some pages. This factors in various criteria, but one thing for sure is that all of the listed hotels/deals are always within a 50 mile radius. I personally used this when looking for deals on a trip I took the other weekend.

An example is the similar and nearby hotels list on, one of my favorites, the W hotel in Seattle. Amazon put me up in this hotel. If you've never stayed at a W, they're pretty cool. While working for Adobe for many years, I stayed at the Fairmont Seattle a ton, and you'll see that listed as the second hotel in the list of nearby hotels. But the W is more edgy: they tend to have cool bars, upscale, swank rooms, etc. Seattle is a favorite place of mine to travel to.

I'm working on a particularly cool feature right now, that I can't wait to get out there. Will take a bit longer as we want to get the UI right and make it of utmost use, but I think it will be something of great value to everyone. Stay tuned.

11 December 2008

Browser Testing Services (BrowserCam, CrossBrowserTesting, etc.) - What I Really Want

When testing web apps for a general/wide-ranging audience, one must test across a slew of different browsers and operating systems (and different versions of each). This is a real pain, even if you have an army of testers at hand (which most of us don't). I employ various tools to help me with this (CrossBrowserTesting.com, BrowserCam, VMware with different VM setups, etc.). But, the reality is that I don't feel like these really stack up to what I need. More specifically, BrowserCam and its competitors do not. CrossBrowserTesting is actually pretty awesome and does what it advertises quite nicely; it's a favorite tool right now.

What I'm after is a way to see what a survey of pages on my site looks like on different browsers and OS's. This is by no means a definitive test of a site, but more of a quick visual inspection of appearance, without having to fire up a dozen different pages across maybe 30 different browser/OS configurations. We use an automated test suite to test the bulk of other things, but appearance can't be tested that way.

BrowserCam helps, but honestly I find it quite lacking as a tool in this arena. Note, I'm picking on BrowserCam, as that's the one I use, but the others (BrowserShots, Litmus, etc.) all seem to suffer some or all of these problems, and further, they don't seem to have projects or ways to establish a standard suite of tests. Getting on with it, all of this really boils down to two primary things:


  • Ability to edit the settings on a project's images. As far as I can tell, once you select the URL's, image size, browsers, etc. for a project, there is no way to change any of that! What if I just want to change the window size? Can't; I have to create a new project (or add new images to the existing one), replicating all my previous work. This seems like an obvious hole.

  • Automation. I want to be able to automate regeneration of the project. I'd like something like a simple URL/HTTP API to do this, so that I can, from a command line, use curl or similar to issue a single HTTP request that will regenerate a project's images. Thus, the API would need to use HTTP authentication or similar, and specify which project to regenerate. With this, I would be able to automate requesting a regeneration of the project as part of/after deploying my application.
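
To make that concrete, the dream is a request as simple as this (a completely imaginary endpoint and URL; no such API exists today, as far as I know):

# Hypothetical API -- the host, path, and auth scheme are all imaginary
curl -u me@example.com:mypassword \
  "http://browser-testing-service.example.com/api/projects/12345/regenerate"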


I'm currently looking into creating my own script to drive the automation, but it's looking non-trivial due to the way BrowserCam's pages work: everything winds up on the same URL, and actions/links are all JavaScript that post forms with a lot of params; it will take some time to ascertain a) which params are required, and b) whether all the info in each parameter is needed, etc. If someone's already done this, let me know! Also, suggestions for tools to drive the automation? I've used the Ruby Mechanize library in the past for things like this, and this may work if I can determine the params, etc., and if Mechanize can even drive the JavaScript links (not sure it can - anyone?). I can't use something like Selenium or Windmill because this needs to work from a script/command line, and not rely on a browser.
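
For the curious, here's the rough shape of the Mechanize script I have in mind. Every URL, form, and param name below is a placeholder, since ascertaining the real ones is exactly the hard part:

require 'rubygems'
require 'mechanize'

agent = WWW::Mechanize.new

# Log in -- placeholder URL and field names
page = agent.get('http://www.browsercam.com/')
form = page.forms.first
form['username'] = 'me@example.com'
form['password'] = 'secret'
page = agent.submit(form)

# Mechanize won't execute the JavaScript links, so the idea is to replicate
# the form POST they build, once the required params are figured out
agent.post('http://www.browsercam.com/someaction',
           'project_id' => '12345', 'cmd' => 'regenerate')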

Finally, if anyone knows of a service similar to BrowserCam that solves these, even if it supports a few less browsers, do let me know. At the very minimum it would need to handle Internet Explorer and Firefox, preferably also Safari, and cover Windows XP, Vista, MacOS X 10.5, and one popular Linux flavor. Nice to haves would include Opera and various others.

02 December 2008

GitHub Post-Receive Hook for Pivotal Tracker

Over the holiday, I whipped up a quick GitHub Post-Receive Hook for use with Pivotal Tracker. This is just a small web service, implemented in Sinatra. It was my first time using Sinatra, so any suggestions on improvements are of course welcome (as they are in general; this is open source). I've put the code up on GitHub in the somewhat painfully named tracker_github_hook repo.

The service supports multiple GitHub repos and Tracker projects, so you can run a single service that integrates multiple projects. The service will figure out which commits go to which projects based on a config file on the server that associates a GitHub repo URL (make sure to use the http version of the URL, not https), to a Tracker project ID. For example:


tracker_github_hook:
  github_url: http://github.com/chris/tracker_github_hook
  tracker_api_token: a1234b56789c0defa12b3c4def56a78b
  tracker_project_id: 123


You will need to take care of running the service within your particular server setup. I'm personally running it via Thin/Rack, behind Nginx. I have it set up on the same server that runs our continuous integration system, so these two are differentiated by subdomain.
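
For reference, a minimal rackup sketch for that kind of setup (the file name here is an assumption; adjust to your layout):

# config.ru -- minimal rackup file for the Sinatra service
# (older Sinatra versions may want "run Sinatra.application" instead,
# and may need the built-in server disabled)
require File.dirname(__FILE__) + '/tracker_github_hook'
run Sinatra::Application

Then something like thin -R config.ru -p 9292 start gets it running, with Nginx proxying the subdomain to that port.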

It should be noted, I will not claim this thing is secure. You run it at your own risk, etc.

Aside from getting the service running on your own server, you'll need to add the URL to it as a GitHub post-receive hook for each project you want to integrate. To do that, go to the Admin tab of your GitHub repo, and then the Services tab. At the top you'll see where you put the URL in. The URL is just the root of the service. Also see GitHub's docs on post-receive hooks as it illustrates just how I built this, how to set it up, etc.

Hopefully others find this useful. Or, what I really hope is that the Pivotal guys get with the GitHub guys and add a standard integration service, where it's automatically configured on the Tracker side, and you just need to turn it on on the GitHub side much like the other service integrations.

19 November 2008

Using View Helpers and Controller Actions from Rails' script/runner

Recently I coded up a controller and view that produces a large data file. It was done this way because it needs to generate the proper URL's and take into account a fair bit of stuff at the view level in our application (like pagination and the custom URL's we have, so extensive use of our URL helpers and other things at this level). The action takes a long time to run (about 3 minutes), so of course I page cache it. However, 3 minutes breaks the web server and/or Mongrel timeouts in our environment, so the page won't get served up in production and staging environments.

The solution, and I'd love to hear other ways, as I certainly won't claim this is the best way, was to create a small script/runner script that simply executed this route within our app. However, script/runner doesn't normally give you controller and view layer access, plus it doesn't have the context of a web request, so it won't know say the host it's being run against and so on. However, one can leverage an Integration::Session and manually set the host to get that. Thus, the script becomes as simple as:


#!script/runner

require 'action_controller/integration'

# An integration session gives us controller/view layer access outside a web request
session = ActionController::Integration::Session.new
# Set the host so URL helpers generate the right absolute URLs
session.host = 'www.yourdomain.com'

session.get_via_redirect '/controller/action'

Note, I'm using get_via_redirect here because the particular action I call does a redirect after it expires the cached page of the action it's redirecting to. If you don't have a redirect going on, then you can just call get instead. This does not output anything of course, but for us, just caches the resulting page, which is exactly what we want.
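
With that shebang line, running it from the application root (the relative interpreter path requires it) looks like this; the script name here is hypothetical:

chmod +x script/cache_data_file.rb
RAILS_ENV=production ./script/cache_data_file.rb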

18 November 2008

Video of DealBase CEO's Demo/Presentation at PhoCusWright Show

Sam Shank, the CEO of DealBase, the hotel deals site I work on, unveiled the site officially at PhoCusWright 2008. The video of his presentation (as well as all the others) is now available. Go here, and then if you hover your mouse over the video you'll see a slide/icon along the top for each company that presented - scroll to the right until you find DealBase, and then click it to watch. The presentation is about 5 minutes, and is nearly all demo and discussion of our advantages, strengths, how the site works, business model aspects, etc.

I think the demo went great, and I've continued to hear tons of praise and useful feedback. The demo was completely live, no smoke-and-mirrors; Sam was not kidding when he told the audience to go check out the deal he just posted during the demo.

We somehow (I'm truly surprised, but of course I am a bit biased) didn't get picked as a top 6 for the show, but others disagreed. Tim Hughes, author of The Business of Online Travel blog, listed DealBase.com in his Top Six pick of 2008 PhoCusWright Travel Innovation Summit finalists. Regardless, interest has been outstanding, and we're really excited. We're still cranking away with new features, and various other improvements. It's a lot of fun!

14 November 2008

DealBase - Check out the new features

Since opening up initial public access to the DealBase.com site, you know, the web site that has the most hotel deals on the web (50x more than anyone else), we've been working hard on improving the experience for users. Today we rolled out the latest major revision to the site, which includes some pretty cool stuff. First, let me list these things out, and then I'll cover a few interesting technical bits that occurred along the way.


  • New home page: yes, a brand new, much nicer home page. This really helps tell you what's great about the site. It also has a new search box that I call the "omniscient search" - a single search field that auto-completes on our deal locations and hotels, which is much nicer than the separate search boxes we had before. This same search box is also used everywhere else on the site.

  • Deal filtering: this was a big one. Seems fairly simple at first blush, but lots of interesting stuff going on in the background. You can now filter deals on any deal listing page, by various criteria. You can combine filters too. So, for example, you can narrow down the deal listing by dates, prices, and hotel ratings, allowing you to, for example, look for 4 star hotel deals valid from April through August of 2009, in a certain price range, for a given city. Very handy. Here, take a look at the deals for Hawaii page as an example.

  • Deal sorting: in addition to the filtering, you can now sort the deal listing a myriad of ways, from most percent savings or most dollar savings to high or low rates, or most recently posted, etc. Combining this with filters allows you to really narrow down what deals are best for you.

  • Various UI/visual improvements. Meagan, our talented designer, has done a lot of work here.

  • Speed improvements. Various database queries and other operations for the site have been sped up, sometimes in small amounts, sometimes in extremely drastic ways.

  • More deals: we should probably have about 10,000 deals on the site by the time you read this. This is very exciting for us, and shows how serious we are. These are all very real deals, no link bait, no BS. These are true deals, checked by our team of editors. This gives us about 50x more deals than any other hotel deals site.

  • Comments! You can now comment on deals. No login required (just like the rest of the site). Comments do get reviewed, and we'll be watching for spam and so forth, but this is a great way to tell other people about a good deal or a hotel you like, etc. Comment box is at the bottom of a deal's page. For example, check out this wild personal fireworks show deal at the Ritz-Carlton in New York.

  • Chrome browser support. DealBase works and looks great in Chrome.


And now, since this is a geek blog anyway, a few technical bits...

  • We're running Rails 2.1.2, which is the latest release of Rails as of this writing. We try to stay up to date regardless, but this was a key release, as it fixed some tricky ActiveRecord named_scope issues when using SQL JOINs. Our filtering and other work requires various JOINs and the fixes here prevented us from having to explicitly hand craft a bunch of queries. Thanks Rails team.

  • My current favorite gem is Ryan Bates' scope-builder. This is just so nice for building up big, conditionally chained named_scopes (a sketch of the pattern follows this list). As you can imagine this is heavily used in building up combined filtering and sorting of deals.

  • More jQuery goodness. I continue to love jQuery, and use it extensively. It is used heavily in the filtering features, pulling in some nice slider UI elements, and also using it for the "updating" status and dimming effects when the AJAX filtering operations are running.

  • One issue we ran into with Chrome was using the :cache feature of Rails' javascript_include_tag. If we used this to combine and create a cache file of a bunch of separate JavaScript files, Chrome failed to properly load/parse the resulting JavaScript file. This broke pretty much everything JavaScript wise in Chrome, but the simple fix was to not use :cache to achieve this.

  • As a helpful, and economical testing tool, we've been using CrossBrowserTesting.com to give us VM's of a slew of different OS and browser combinations. I tend to run VMWare Fusion and do a lot that way, but it's also a pain to keep up a bunch of different VM images, or have to fire that up for a quick test, etc. We're also using BrowserCam.

  • Finally, another shout-out to the Hoptoad service/folks. This continues to be an outstanding service for us. It works really really well, and it's free, so to me it is the winner amongst the competitors.

  • We've done a fair number of modifications to our tagging plugin (acts_as_taggable_on_steroids), although they're all particular to our app, so not sure if any will get contributed back. Things like enforcing all our rules about tag naming and so on. The same goes for the will_paginate plugin. But in this case, I'm hoping to contribute these back as soon as I can properly contribute the patches and ensure they'll work in any app.
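
As promised above, here's a hand-rolled illustration of the conditional named_scope chaining that scope-builder tidies up. The named_scopes themselves are hypothetical stand-ins, not our actual ones:

# :in_city, :min_stars, and :priced_between are hypothetical named_scopes
def self.filtered(params)
  scope = self
  scope = scope.in_city(params[:city])                     if params[:city]
  scope = scope.min_stars(params[:stars])                  if params[:stars]
  scope = scope.priced_between(params[:min], params[:max]) if params[:min] && params[:max]
  scope
end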


I'm sure there's more, but that's what I can think of for the moment. It's been a busy couple weeks, and I'm really excited about the state of the site these days. We've been getting some great feedback, and I've had a few friends book deals they've found on the site (one friend saved $800 on a trip!). If you have feedback, don't hesitate to add a topic or question in our Get Satisfaction feedback system. This tells us what things you'd like to see, or any problems you're finding, etc.

04 November 2008

Speed Up (and named_scope) acts_as_taggable_on_steroids Finds

I use the acts_as_taggable_on_steroids plugin for tagging. I've been happy with it, but recently have been adding a lot of searching, sorting, filtering, etc. functionality to an app, and needed the find by tag functionality to work as a named_scope, so that I can have it within a chain of many named_scope finders.

This turned out to be trivially easy to do (without having to copy the SQL and put that into my named_scope). To add a "tagged_with" named_scope to your model that is already acts_as_taggable, you can just do this:


named_scope :tagged_with, lambda { |tags| YourModel.find_options_for_find_tagged_with(tags) }
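
So now it chains like any other named_scope (the other scopes here are hypothetical):

# :recent and :cheapest_first are hypothetical named_scopes on YourModel
YourModel.tagged_with('ruby').recent.cheapest_first.find(:all, :limit => 10)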


I've been doing a lot of benchmarking and performance improvements to our SQL as well, and decided to see if this was any different in performance compared to just doing the YourModel.find_tagged_with that acts_as_taggable_on_steroids adds. As it turns out, the named_scope version, which is really identical at the core, is faster, especially if called more than once (per thread/Rails request)! Here are the benchmarks to prove it:


Single query/1 iteration:
                      user     system      total        real
named_scope       0.040000   0.000000   0.040000  ( 0.045085)
find_tagged_with  0.020000   0.010000   0.030000  ( 0.108233)

10 iterations:
                      user     system      total        real
named_scope       0.030000   0.000000   0.030000  ( 0.040282)
find_tagged_with  0.420000   0.030000   0.450000  ( 2.040245)


I repeated these multiple times and got the same/similar results each time. So, for a single query it's only about 2x faster, but issue this same find multiple times per request and the difference is dramatic. I believe it's the Rails query caching that kicks in with named_scope, but apparently not with the generic find with all the options (I'd love to hear some commentary on this from someone who knows the details).

Regardless, using a named_scope is nice because now you can more easily chain a tag find together with other named_scope items.

16 September 2008

Mocked by Default, but Unmocking in Some Cases with RSpec

Uh, ya, another great blog title, but we'll get over it. We use geocoding in our app, and that's a relatively costly operation time-wise, especially when you may be doing it hundreds or thousands of times when your test suite runs. I can't stub out the objects that use it in many cases, so I wanted to stub out the actual geocode call unless I truly needed real geocoding (which is only when I'm testing the actual geocoding itself, and thus is a very small part of the test suite).

We use RSpec Specs and Stories and I wanted to mock out the geocoding by default, but unmock it in a few places. I asked about this on the mailing list, Googled and so on, but didn't find a solution that was working. So here is what I wound up doing...

In my spec_helper.rb file, I added:


require 'ostruct'  # for OpenStruct, if not already loaded

Spec::Runner.configure do |config|
  config.before(:each) do
    # Set up fake geocoding unless told not to
    unless @do_not_mock_geocoding
      fake_geocode = OpenStruct.new(:lat => 123.456, :lng => 123.456, :success => true)
      GeoKit::Geocoders::MultiGeocoder.stub!(:geocode).and_return(fake_geocode)
    end
  end
end


What this does is mock the geocoding unless a test has set the @do_not_mock_geocoding variable to true. One caveat, at least from what I've found, is that you need to set that to true in a before(:all) block in your tests, so that it happens before the before(:each). This is minor, as you can just have something like:

describe "with real geocoding" do
before(:all) do
@do_not_mock_geocoding = true
end

# your tests that want real geocoding
end


The impact this has had on our test suite is tremendous. I had already had some partial mocking of the geocoding in place, but was sweeping the system to put it in because the time it took to run our test suite was out of hand at about 13 minutes! Now that I've got this in, it runs in 2 minutes! Geocoding is used in two of our most core objects, which is why it has such a big impact on the test suite. This is one place mocking has really proved to be a massive value!

13 September 2008

Use the Lighthouse API for Mass Ticket Changes

Lighthouse is a nice issue tracker/bug database. We're using it at DealBase and some of my other projects. However, one thing it doesn't have is a way to set the state of a bunch of tickets at once, or assign a slew of tickets to a certain user, etc. I need to do this regularly, as anytime we do a deploy to our staging environment, I need to go mark any bugs that were set to 'fixed' to now have a state of 'deployed', and then assign them to someone else to be verified. Lighthouse API to the rescue...

You can whip up a quick Ruby script to automate operations like this. The script I use (tweaked to protect the innocent) marks all tickets that are set to "fixed" to now be "deployed" and assigns them to another user for verification. Obviously you can change this pretty easily to effect various other changes. I recommend playing with the Lighthouse API in IRB to see what the attribute names are for Tickets and so on.
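
A sketch of that kind of script, using the Lighthouse Ruby API (the account, token, and IDs below are placeholders, and the attribute names are worth confirming in IRB as mentioned):

#!/usr/bin/env ruby
require 'rubygems'
require 'lighthouse'  # the Lighthouse API gem

Lighthouse.account = 'youraccount'     # placeholder
Lighthouse.token   = 'your-api-token'  # placeholder

PROJECT_ID  = 12345  # placeholder
VERIFIER_ID = 67890  # placeholder: the user who will verify fixes

# Grab every ticket currently marked "fixed" in the project
tickets = Lighthouse::Ticket.find(:all,
  :params => { :project_id => PROJECT_ID, :q => "state:fixed" })

tickets.each do |ticket|
  ticket.state = 'deployed'
  ticket.assigned_user_id = VERIFIER_ID
  ticket.save
  puts "Updated ##{ticket.number}: #{ticket.title}"
end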

02 September 2008

We're Hiring for a Rails or Similar Developer

The startup I work for, DealBase, is in need of another developer. This is pretty exciting for us, and the business is quite exciting as well! We're looking for someone with Rails experience, as well as MySQL, Git, TDD (BDD is good too), agile development practices, and so on. You can read all the details in our posting on the 37signals Gig Board. Please make sure you respond to the ad, as opposed to sending me email (or asking via a blog comment). You can mention in your email though that you found it via my blog entry.

One thing I want to specifically note, you don't have to be a "Rock Star"! Sure, we want the best people, but I too am sick of this "Rock Star" designation. I'd like to see someone who's passionate about software development, web apps, technology, TDD, JavaScript, and so on.

Another thing to note: we're a partially distributed team. I myself live in Eugene, Oregon; others are in the Bay Area, and one person is in Boston.

29 August 2008

WordPress, Nginx, Subdirectories, and Separate Server Proxying

Ok, first, apologies for the title of this entry, there wasn't exactly a concise way to title what this is about :) So, here's a better description of what this entry is about...

I needed to setup a WordPress blog for a client, but they needed the blog to be located as a subdirectory of their primary domain name, as opposed to its own domain name or a subdomain. Furthermore, we use Nginx for all our web servers, and so needed to be able to configure Nginx both on the main domain name serving infrastructure, as well as the machine that would host the blog. There were some other writeups on the web, but none quite covered this case - basically the complication of proxying to it, as well as WordPress being in a "subdirectory" in terms of the path. So, here's how I got it working.

But first, a quick intermission. This was made possible by the fine support folks at EngineYard, in particular David S, but others as well. They provided the solution to the final problem we ran into, as well as a few tips along the way. Our primary servers and testing/staging systems are hosted at EngineYard, but the WordPress blog is actually on a Slicehost slice, so double thanks to them, as even though part of it involved their systems, part did not. As so many others have said, EngineYard is really superb, and absolutely worth the price. Getting on with it...

I'm going to do this not precisely in the order I did it, but the idea is to cut out my missteps and just give you (and me for future reference) a recipe for this.

Set Up the WordPress Server



For me this was an Ubuntu 8.04 (Hardy) machine, using MySQL as the DB, and WordPress 2.6.1, with Nginx as the web server. Presuming you have a base Ubuntu install, you may or may not need all these steps (e.g. you may already have PHP):

Install PHP CGI, and PHP MySQL components:

sudo aptitude install php5-cgi php5-mysql

Install Lighttpd to get the spawn-fcgi program that will take care of handling PHP FastCGI requests, but then stop Lighttpd and remove its daemon service:

sudo aptitude install lighttpd
sudo /etc/init.d/lighttpd stop
sudo rm /etc/init.d/lighttpd
sudo update-rc.d lighttpd remove


Next, setup the spawn-fcgi program to run as a daemon. You need to pick a port you'll use, in my case I happened to choose 53987 (at random). The following daemon script should be placed in /etc/init.d/spawn-fcgi or a name you like. Note that this is somewhat specific to Ubuntu, but likely works on similar distros or can be adapted for RedHat-based ones, etc.:


#! /bin/sh

### BEGIN INIT INFO
# Provides:          spawn-fcgi
# Required-Start:    $all
# Required-Stop:     $all
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: starts FastCGI for PHP
# Description:       starts FastCGI for PHP using start-stop-daemon
### END INIT INFO

PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
DAEMON=/usr/bin/spawn-fcgi
NAME=spawn-fcgi
DESC=spawn-fcgi
DAEMON_OPTS="-f /usr/bin/php-cgi -a 127.0.0.1 -p 53987"

test -x $DAEMON || exit 0

set -e

case "$1" in
  start)
    echo -n "Starting $DESC: "
    start-stop-daemon --start --quiet --pidfile /var/run/$NAME.pid \
      --exec $DAEMON -- $DAEMON_OPTS
    echo "$NAME."
    ;;
  stop)
    echo -n "Stopping $DESC: "
    start-stop-daemon --stop --quiet --pidfile /var/run/$NAME.pid \
      --exec $DAEMON
    echo "$NAME."
    ;;
  restart|force-reload)
    echo -n "Restarting $DESC: "
    start-stop-daemon --stop --quiet --pidfile \
      /var/run/$NAME.pid --exec $DAEMON
    sleep 1
    start-stop-daemon --start --quiet --pidfile \
      /var/run/$NAME.pid --exec $DAEMON -- $DAEMON_OPTS
    echo "$NAME."
    ;;
  reload)
    echo -n "Reloading $DESC configuration: "
    start-stop-daemon --stop --signal HUP --quiet --pidfile /var/run/$NAME.pid \
      --exec $DAEMON
    echo "$NAME."
    ;;
  *)
    N=/etc/init.d/$NAME
    echo "Usage: $N {start|stop|restart|reload|force-reload}" >&2
    exit 1
    ;;
esac

exit 0

You can download/get the above script here.

Then make it a daemon by doing:

cd /etc/init.d
sudo update-rc.d spawn-fcgi defaults


Now install the WordPress files and set up the database

Download the latest tarball, and unpack it where you want it to be. In my case, this was in /var/www/wordpress/blog. Note the sort of double directory there. That is because the subdirectory within the path that the blog is accessible at is "blog", i.e. http://www.example.com/blog. The key here is that the directory name that WordPress ultimately lives in, needs to be the same as the subdirectory in the path of the URL.
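
In other words:

/var/www/wordpress        <- the web root Nginx serves from
/var/www/wordpress/blog   <- the WordPress files, matching the /blog URL path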

Follow steps 1-5 of the WordPress install guide, where you setup the database, etc.

Install and Configure Nginx:

sudo aptitude install nginx

My configuration file for Nginx, which lives in /etc/nginx/sites-available/blog, and is symlinked from /etc/nginx/sites-enabled/blog, looks like this:


server {
  listen 80;
  server_name blog.example.com;

  root /var/www/wordpress;
  index index.php;

  access_log /var/log/nginx/blog.access.log;
  error_log /var/log/nginx/blog.error.log notice;

  location / {
    root /var/www/wordpress;
    index index.php;

    # this serves static files that exist without running other rewrite tests
    if (-f $request_filename) {
      expires 30d;
      break;
    }

    # this sends all non-existing file or directory requests to index.php
    if (!-e $request_filename) {
      rewrite ^(.+)$ /index.php?q=$1 last;
    }
  }

  location ~ \.php$ {
    include /etc/nginx/fastcgi_params;
    fastcgi_pass 127.0.0.1:53987;
    fastcgi_index index.php;

    fastcgi_param SCRIPT_FILENAME /var/www/wordpress$fastcgi_script_name;

    root /var/www/wordpress;
  }
}

You can download this file here.

In the above configuration file, if you didn't locate your WordPress install at /var/www/wordpress/blog, then you need to edit the various /var/www/wordpress paths you find. Also, adjust the port number as needed, and any of the paths to log files, etc.

To explain this config file, you have an initial location directive that has two parts within it. The first says that if the request is for a static file, just serve it up and ignore any further rules. The second says that if it doesn't find a static file, then rewrite it to an index.php based URL, and continue. This leads to the second location directive which processes PHP requests, sending them to the fastcgi processor, which is setup via the SCRIPT_FILENAME setting, as well as the common settings in the included fastcgi_params (which is part of a base Nginx install).

WordPress Install and Testing



Now we can fire everything up on the WordPress server, get it installed (WordPress wise), and make sure it's working, before moving on to setting up the proxying from the other server that handles the domain. The below depends somewhat on how you access your WordPress server directly (e.g. via IP or subdomain, etc.). I was doing it by IP while waiting for the DNS for the blog.example.com part to be working.

Start up the spawn-fcgi, and Nginx daemons:

sudo /etc/init.d/spawn-fcgi start
sudo /etc/init.d/nginx start


Proxy/Domain Server Setup



On the server(s) that host your domain, you need to setup the proxying in Nginx to send requests to the /blog path over to your WordPress server. To do that, I added the following to my Nginx configuration:


location /blog {
  proxy_set_header X-Real-IP $remote_addr;
  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  proxy_set_header Host $http_host;
  proxy_redirect false;
  proxy_pass http://YOUR-WORDPRESS-SERVER-IP;  # insert your WordPress server's IP here
}

This file is available here.

Note that you need to insert the IP Address of your WordPress server where indicated (or use the subdomain - I don't have that going yet, as I'm still waiting for my subdomain DNS to percolate through).

WordPress Installer and Config



Now surf over to the WordPress installer at:

http://IP ADDRESS/blog/wp-admin/install.php

This should fire up the WordPress installer and off you go. Now, big caveat here, as this isn't quite how I did it, so I can't say for certain that this install phase will work properly. To diverge for a minute, in case this happens to you: I initially had WordPress installed directly in /var/www/wordpress. I then hit http://IP ADDRESS/wp-admin/install.php, and did the install. Once installed, I went into the "Settings" tab of WordPress's admin area, and changed the site URL and home page URL's to be http://www.example.com/blog. And then, I moved the actual WordPress files from /var/www/wordpress to /var/www/wordpress/blog.

Back to the untested path above... In the Settings panel of the WP admin, make sure you set your blog URL's (site URL and homepage) to http://www.example.com/blog. This is so that WordPress will generate URL's that look like that (instead of ones with the IP address in them, or however you originally accessed the installer).

11 July 2008

Fork of acts_as_versioned to provide version diffs and more

Update: Added the earliest? and latest? methods, see below.

On my current project, I've recently begun using Technoweenie's acts_as_versioned (I also looked at simply_versioned which also looks great - you'll need to evaluate for your own needs). This project has some particular needs around versions that aren't covered by the existing plugin, so of course I forked it on GitHub and have been adding my enhancements to my fork. These are publicly available.

Right now there are three enhancements that may be of interest to others:


  1. For each version, there is now an updated_attributes field that stores an Array of the attribute names that were changed in the creation of this version. This is essentially the same array provided by the changed method from ActiveRecord (requires 2.0 (or 2.1?) or later for the "dirty" handling stuff). This provides a nice way of being able to show what changed between versions, without having to compute that yourself (as well as compute it each time you need to display it). Since the data is right there when making the version, I just store it off in a serialized column. This will not record "non-versioned columns". This obviously requires another column in your DB table, and I've amended the migration method, but if you already are using acts_as_versioned, you'd need to manually add the updated_attributes column (of type "text"); a migration sketch follows this list.

  2. A small tweak, but the revert_to! method now has an optional second parameter that, if set to true, will remove all newer versions. In our workflow, when you do a revert/rollback, you no longer want the newer versions, so we delete them. You can achieve this in your app without deleting them, but it just seems cleaner in our case. You can also do this manually by calling the same method that revert_to! uses, delete_newer_versions.

  3. Added the earliest? and latest? methods to the model's Version object as well, so that you can call these on an individual version instance - very convenient for me at least.
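
The migration sketch mentioned in point 1, for those already running acts_as_versioned ("widget_versions" is a placeholder for your model's versioned table, per the plugin's "<model>_versions" naming):

class AddUpdatedAttributesToVersions < ActiveRecord::Migration
  def self.up
    # "widget_versions" is a placeholder -- use your model's versioned table
    add_column :widget_versions, :updated_attributes, :text
  end

  def self.down
    remove_column :widget_versions, :updated_attributes
  end
end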



I have only tested this under Rails 2.1 and MySQL and SQLite (all the unit tests for acts_as_versioned use SQLite). I have not sent Rick a pull request, because I've only been using this for a few days, and I'd like to have it in use a bit longer to be sure of the approach and quality. For example, to really be ideal for the mainline distro, I suspect that adding an option to acts_as_versioned when you define it in your model would be best for the updated_attributes storage aspect, e.g. making this part optional. It's never optional for my needs, so I haven't spent the time to do this. If you use my fork and do that, please send me a pull request.

The next piece I'm looking at is supporting versioning of associated models. This will be a bit more involved. There is acts_as_versioned_association, but it says it is not maintained and doesn't work with Rails 2.x. However, I may be able to fork that and bring it up to speed for the latest Rails and acts_as_versioned...

01 July 2008

Update on Rails, jQuery, autocomplete

Today I changed my auto_complete_jquery plugin (which I blogged about previously) to work with a different jQuery autocomplete plugin (that's not confusing is it?!). Previously, I'd been using the "jquery-autocomplete" plugin, but had been having problems with it always being case sensitive, and with it being pretty darn slow. To solve that, I wound up switching to Dylan Verheul's jquery autocomplete plugin, which is fantastic! So, I had to update my plugin (literally changed two lines, along with the readme and some comments).

So, what does it all mean? First, things are no longer case-sensitive (although you can tweak that if you need it, see the docs for DV's jquery plugin). Second, the speed is near instant, and if it's taking any time at all, there's now a CSS style you can set to show an indicator while the AJAX call is running (nice!). Further, there are a slew of options you can set, but one of the coolest things is the ability to set a formatter JavaScript function to adjust the display of the returned results, but without affecting the actual value that is placed in your text field. This is really cool for providing further information about a given matched item. For example, I use it to display, on a line below the matched name, the location of the item. There's an interactive example of this on Dylan Verheul's site.

I highly recommend updating all around on this. For those that might be using my prior version, the changes you'd need to make are:


  • Change your JavaScript calls that make a HTML input into an auto-complete from this format:

    $("input#post_title").autocomplete({ ajax: "auto_complete_for_post_title" })

    to this:

    $("input#post_title").autocomplete("auto_complete_for_post_title")


  • Update to the HEAD of my plugin.

  • Remove the old jquery.ui.autocomplete*.js files, and install Dylan's single jquery.autocomplete.js file, updating your JS includes accordingly. Same goes for the stylesheet.



Note, to do the cool 2nd line output of auto-complete items, you will need to write your own auto-complete Rails action, which means you don't need my plugin :) I may update my plugin at some point so you can pass a block in to the autocomplete function to create this same scenario, but this is pretty specific stuff, so we'll see. As a very brief set of instructions: you can essentially copy-paste the auto_complete method that gets defined by my plugin, and then update what you return as text. To do the 2nd line bit, you want to return items in the format "item|2nd line stuff" (so use the pipe symbol to separate the two lines).
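
A rough sketch of what such an action might look like (the Hotel model and its fields here are hypothetical):

# Hypothetical example -- adapt the model, fields, and 2nd line to your app
def auto_complete_for_hotel_name
  hotels = Hotel.find(:all,
    :conditions => ['LOWER(name) LIKE ?', "%#{params[:hotel][:name].to_s.downcase}%"],
    :order => 'name ASC',
    :limit => 10)
  # one match per line, with the pipe separating the two display lines
  render :text => hotels.map { |h| "#{h.name}|#{h.city}" }.join("\n")
end

Then, you can use a simple JavaScript formatter function like: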

function formatAutocompleteItem(row) {
  return row[0] + "<br><i>" + row[1] + "</i>";
}


Then, update your call to the jQuery autocomplete that sets up a field as autocomplete to be:

$("input#post_title").autocomplete("auto_complete_for_post_title", { formatItem:formatAutocompleteItem })


Enjoy!

17 June 2008

RSpec View Testing Problems

I've been using RSpec exclusively on my latest project. I'd say I'm still fairly new to it, but it has won me over for the time being. However, the view testing has been a real problem. You can test views quite easily and nicely with RSpec; however, when you do something subtly wrong, it can cause RSpec to fail without any visible error! In fact, Autotest doesn't report a failure at all, and the only way I know it is failing is either that I pay close attention to the number of examples it said it ran, or, more likely, my cruisecontrol.rb CI server will actually fail (because it detects the command's exit status).

For example, while Autotest says everything is fine, the CI server will show something like this (note for easier reading I've trimmed this a bit):


/widgets/index.html.erb
- should render list of widgets

/bobbles/index.html.erb

Finished in 4.977586 seconds

175 examples, 0 failures
rake aborted!
Command /usr/bin/ruby1.8 -I"/var/cruisecontrolrb/projects/myproject/work/vendor/plugins/rspec/lib" "/var/cruisecontrolrb/projects/myproject/work/vendor/plugins/rspec/bin/spec" "spec/controllers/bobbles_controller_spec.rb" ... --options /var/cruisecontrolrb/projects/myproject/work/spec/cruisecontrol_rcov.opts failed

(See full trace by running task with --trace)


As you can see, RSpec is reporting 0 failures, yet rake fails. This is because in reality RSpec is returning an exit code of 1 instead of 0. But, looking at the output, it's certainly not revealing. Running with --trace is of no help either.

What I now know to pay attention to in situations like this is the fact that the last spec it ran, the /bobbles/index.html.erb one, has no examples listed under it. That is thus the culprit (you can also argue that it's the last thing to "run" before rake fails, so the problem is likely in it, etc.).

The real pain comes when you try to figure out what the heck is causing this. You have zero feedback, and no way that I know of to somehow debug or inspect the test to see what's failing. In my experience to date with these, it boils down to some problem in your mocks and stubs, but this can be difficult to figure out. I admit, the one that prompted me to write this blog entry is one I've still yet to figure out, and finally just punted on.

I've been hearing a lot about not testing views, or testing very little of the views. I agree in general on this, and am now looking into using Webrat to do integration testing to really test "view" functionality, and leave the rest of my view testing mostly to testing my helpers, controllers, and models. Here are a couple of blog entries related to all this:

Have Autotest speak to you

Update: the first .autotest I had in here was bogus, sorry about that. The .autotest file contents below work properly with RSpec tests at least.

I make extensive use of ZenTest's Autotest to constantly watch my test suite and ensure my app's tests are passing on my dev box/during development. Historically I've used Growl/growlnotify to get little popup notices indicating if my tests passed or failed. That's nice, and I've done the enhancements that add graphics and style it nicely, etc. But, in reality, I'm not always looking and sometimes don't see the messages. Plus, they can be somewhat distracting.

So, I've switched to using the handy say tool on the Mac (on Linux I think you could use "espeak", no clue on Windows, but then, uh, well, why are you doing dev work on Windows?! ;-)

The speaking is nice - I hear it, but don't get visually distracted. I use different voices for tests passed vs. failed too. This may not work great if you work in a cube farm, or even a cafe, but here at home, or in your own office, I think it's great. Here's my .autotest file as an example:


require 'autotest/redgreen'

module Autotest::Growl
  def self.growl(title, msg, img, pri=0, stick="")
    system "/usr/local/bin/growlnotify -n autotest --image #{img} -p #{pri} -m #{msg.inspect} #{title} #{stick}"
  end

  Autotest.add_hook :ran_command do |at|
    results = at.results.last

    unless results.nil?
      output = results[/(\d+)\s+examples?,\s*(\d+)\s+failures?(,\s*(\d+)\s+pending)?/]
      if output
        failures = $~[2].to_i
        pending  = $~[4].to_i
      end

      if failures > 0
        `/usr/bin/say -v Zarvox "you broke the code"`
      elsif pending > 0
        `/usr/bin/say -v Alex "Tests passed, with some pending"`
      else
        unless at.tainted
          `/usr/bin/say -v Victoria "all tests passed"`
        else
          `/usr/bin/say -v Victoria "tests passed"`
        end
      end
    end
  end
end


To see what voices your system has available, open the Speech system preference pane, pull down "System Voice" and select "Show More Voices" to see the full list.

11 June 2008

Changelogs and Deployment Notification for Capistrano and Git

Early warning: this is a hack, which doesn't mean it's bad, just that it's not polished. However, I am documenting my solution for myself thus far, as well as figured others might find it useful...

Update: Added my shell command for doing deploys (see end of this post).

I wanted a way to automate a few things around deployments, and integrate this a bit with my continuous integration server. I use CruiseControl for the CI server, and previously blogged about setting up CC.rb with Git. The goals for this next task, and subject of this blog post are:


  • Tag the code on successful deploys. My CI server already tags the code anytime it does a successful build, but since I didn't cover that previously, I'll mention it here as well.

  • Notify a list of people via email whenever a new deploy happens.

  • Generate a changelog, based on Git commit messages (better make sure they're suitable reading for whoever gets your deploy notices!), and include this changelog in the deploy emails.

  • Have the CI tag I want to deploy as the only required piece of info/parameter when issuing a deploy command.



Tagging


First, I tag the code on any successful CI run. This tag is what I can then use as the Git tag to deploy. Capistrano supports this via the branch variable (set its value to the tag name). As you can guess, you can use pretty much any Git ID/tag/branch name for this. To do this, add a task to your cruise.rake file (or similar - wherever you define your custom CruiseControl command), and then ensure you run that task during a CruiseControl session. Here's my task:

desc "Tag the code on successful CI build"
task :ci_tag do
timestamp = Time.now.strftime("%Y%m%d%H%M%S")
tag_name = "CI_#{timestamp}"
# Create an empty file with our tag name, so we can easily go grab the tagname
# from the CI output page and do deploys, etc.
system("touch #{File.join(ENV['CC_BUILD_ARTIFACTS'], tag_name)}")
system("git tag -a -m 'Successful continuous integration build on #{timestamp}' #{tag_name}")
system("git push --tags")
end


From the above, you can see that I'll get tags of the form: CI_timestamp. Next up, I want to tag a successful deploy to indicate which commit/tag actually got deployed and when. This is handled via an after task in my Capistrano deploy.rb:

after "deploy:restart", "tag_last_deploy"
task :tag_last_deploy do
set :timestamp, Time.now
set :tag_name, "deployed_to_#{rails_env}_#{timestamp.to_i}"
`git tag -a -m "Tagging deploy to #{rails_env} at #{timestamp}" #{tag_name} #{branch}`
`git push --tags`
puts "Tagged release with #{tag_name}."
end


This will create tags like deployed_to_staging_1213223458, and works for both staging and production (or any environment you're targeting - note the use of the rails_env variable - you may need to use something else). One thing to pay particular attention to is that this tag is actually tagging another tag, as defined by the branch variable (mentioned above). In order for this to work though, you need to ensure that your tags are up to date locally. Thus, somewhere in your workflow you'll need to do a git pull --tags if, like me, your CI server is elsewhere and is generating those tags.

Ok, we're all tagged up, let's move on...

Notification



It turns out there's a nifty new plugin called Cap Gun that will take care of emailing a list of folks on deploy. Setup is covered in their README, but the one bit they don't mention, is that you can include a comment in the email message that goes out. I wanted to include a changelog in these emails, so I tapped into this comment attribute, setting it to the text of my changelog. To use the comment, you can either set it via -s comment="my lovely comment" on your Capistrano deploy command, or you can set the comment variable in your Capistrano deploy.rb or included script. More on that in a minute.

Changelogs


My changelog, so far, is very simple, it just pulls the comments for the Git commits that occurred since the last deploy (for the appropriate target), up to the tag specified (which in this case will be the CI tag you are about to deploy). To handle this, I use a small Ruby script, combined with the great Grit gem that lets one manipulate Git via a nice Ruby API. The script simply spits out a simple chunk of text that will be what gets put into the comment Capistrano variable for our deployment notifications. This is in particular where the "hack" comes into play. This script is not robust, does essentially no error checking, etc, etc. Use at your own risk! And with that, here it is:

#!/usr/bin/env ruby

require 'rubygems'
require 'mojombo-grit'
include Grit

unless ARGV.length == 2
  puts "Usage: changelog.rb staging|production <commit-or-tag>"
  puts "       where commit-or-tag is the commit ID or tag you are planning to deploy"
  exit -1
end

repo_location = File.expand_path(File.dirname(__FILE__) + '/..')
target = ARGV[0]
about_to_deploy_commit = ARGV[1]
repo = Repo.new(repo_location)

# Find the tag for the last deploy to this target
tags = repo.tags.collect { |tag| tag.name }
tags.delete_if { |tag| !(tag =~ /^deployed_to_#{target}_/) }
tags.sort!
last_deployed_tag = tags[-1]

commits_for_changelog = repo.commits_between(last_deployed_tag, about_to_deploy_commit)
commits_for_changelog.reverse!

puts "Changes since last release:"
commits_for_changelog.each do |commit|
  puts " "
  puts "  #{commit.message}"
end


To run through it briefly, it takes two parameters (and clearly, you can change this for your own deployment targets, etc.): a deployment target, and a tag (which can actually be a tag, a commit ID, branch, etc.). It sets up a repo variable for your Git repository using Grit, and then proceeds to find the last deployed tag for that deployment target. After that, it gets all the commits between that last deployed tag and the tag you specified as the second script argument, and prints out the commit messages.

To integrate this, I added this line to my Capistrano deploy.rb:

set :comment, `script/changelog.rb staging #{branch}`

As you can see, that one is specific to my staging environment, and lives inside my "staging" task in deploy.rb. The same, appropriately edited, goes for production.

Deployment Command


Lastly, I define a simple shell function to do my deploys, which ensures I have done a git pull so I have all the tags, and makes the command easier to remember and get right, etc:

stagemyproject () {
  git pull
  cap -s branch=$1 staging deploy:migrations
}


You would thus have a command line to do a deploy like this:

stagemyproject CI_20080612052417

That's it, and if you've managed to read this far, congrats, and if you've not only managed to read this far, but paid attention and got value out of it, well, cool.

For anyone who uses/adapts this, please do let me know improvements you make, or suggestions, or tweaks/changes, and so on. I've been using this for all of about a half dozen deploys so far. If (more like when) I make improvements, I'll update.

06 June 2008

Perforce for CruiseControl.rb now on GitHub

After getting another request for my Perforce implementation for CruiseControl.rb, I've put it up on GitHub (cruisecontrolrb_perforce), and also updated my previous blog entry on the subject.

Just note, I haven't used this since August 2007, so use at your own risk :)

04 June 2008

Rails, jQuery, auto-complete, and a New Plugin

Update: I've switched which jQuery autocomplete plugin I use for this, see my newer blog entry.

The other day, I made a whole switch from Prototype & Scriptaculous to jQuery. I've had the bug to do this for a while, and this is a new project, so I went for it. I don't have anything against Prototype, so my main impetus for this was a move towards Unobtrusive JavaScript, and also the speed aspect (the site I'm currently working on, if things go according to plan, will do some pretty serious traffic). But, the unobtrusive JavaScript was the key, and really, my switch is more of a philosophy of approach rather than say a dislike for Prototype, etc. And, of course, it's something new to play with :)

Before I go any further, I'll state right now, I am not a JavaScript expert, and I've been using jQuery now for all of a couple hours.

One of the results of my switch however, was that I hacked DHH's auto_complete Rails plugin to work for jQuery. Simple change. I tweaked the controller macro, and then gutted the JS helpers, as you just don't need those when using jQuery in this way. It does require the jquery-autocomplete plugin for jQuery. I've published my Rails plugin for this on GitHub as auto_complete_jquery.

Circling back around, here's what I did to get all this going. I did run into one issue (see step 11 below) that I'm still tracking down (easy solution in the interim, but I'd like to understand what's happening, so if you have comments, please let me know):


  1. Removed the Prototype and Scriptaculous JS files from the public/javascripts dir of my Rails app. You don't have to do this, but I am no longer using them, so saw no need to keep them there, and it helps ensure I don't mistakenly use something from them or include them in the view. This includes: prototype.js, controls.js, dragdrop.js, and effects.js.

  2. Removed the prototype-based Rails auto_complete plugin from vendor/plugins.

  3. Installed the latest minified jQuery file in public/javascripts/jquery.

  4. Installed the JS files for the jquery-autocomplete plugin, and its dependencies: jquery.templating.js, and jquery.ui.autocomplete.js. (see the jquery-autocomplete plugin for these files).

  5. Added the jquery.ui.autocomplete.css file to public/stylesheets.

  6. Installed my auto_complete_jquery plugin.

  7. Put the proper includes for the CSS file and the JS files in my application layout file:


    <%= stylesheet_link_tag 'jquery.ui.autocomplete' %>
    <%= javascript_include_tag 'jquery/jquery.min', 'jquery/jquery.templating', 'jquery/jquery.ui.autocomplete.ext', 'jquery/jquery.ui.autocomplete', :cache => 'jquery' %>

    Note that I keep my jQuery JS files in a subdir for organizational purposes, but you can modify as needed.

  8. Changed my existing auto-complete text fields that used the Rails Prototype based auto_complete plugin's helpers to just be plain old text fields, such as:
    <%= coffee.text_field :drink, :autocomplete => "off" %>
    This is doing an auto-complete for the "drink" attribute of the Coffee model.

  9. I can simply leave any auto_complete_for calls that existed in my controller, as that works the same. If you had custom versions that were based on the code from the Prototype-based Rails plugin, just go look at the code in my plugin to see the differences, it's a simple change.

  10. Add the JavaScript that sets up the auto-complete for the given text field. This will typically look like:

    $(document).ready(function() {
      $("input#coffee_drink").autocomplete({ ajax: "auto_complete_for_coffee_drink" });
    });

    Where does this go? It depends. What I've been liking is using the JavaScript auto-include plugin, which creates a Rails-style convention for JavaScript files that pertain to individual actions, or are controller-wide. So in my case, this code would get placed in public/javascripts/views/coffees/new.js, or likely one directory up, as simply coffees.js (so that I can use it in any CoffeesController action that needs to auto-complete on coffee.drink). Without that plugin, you just put it in whatever JS file is appropriately included for the view you're using it in, etc. You can of course put it directly into the view in a script block, but then you aren't doing the whole Unobtrusive JavaScript thing as rigidly.
  11. Finally, what I found is that I had to add a route for this. This is the issue I mentioned above. It sort of makes sense, but what I'm unclear on is, why the prior standard/Prototype-based Rails auto_complete plugin didn't require a route. They both seem to use a GET, define the action the same way, and so on. I'm hoping I'm just missing something obvious. So, the route I added is:
    map.connect ':controller/auto_complete_for_coffee_drink', :action => 'auto_complete_for_coffee_drink', :format => 'json'



A bunch of steps, but pretty simple work. The app I'm doing this on is all of a few days old, so I hadn't gotten into using much else from Prototype, which made the wholesale switch easy.

If you'd like to learn more about any of these things, and as a comprehensive set of links:


Enjoy!

Update: I removed the jQuery Dimensions JS file and include for it in my layout, as this is now included in the latest jQuery JS file itself.

Update 2: I don't know how the standard auto_complete plugin manages to do without routes, but here is a generic route for all auto-complete actions across controllers:

map.auto_complete ':controller/:action',
  :requirements => { :action => /auto_complete_for_\S+/ },
  :conditions => { :method => :get }

I hesitate to put this into the plugin, as routes can be quite tricky in more complex apps, and I wouldn't want to auto-hose someone :)

02 June 2008

Fixing Capistrano 2.3.0 and Git Deploy Problem

If you upgrade to Capistrano 2.3.0 and are doing deploys from a Git repository, you may find that all of a sudden you can no longer deploy. This is the case if you have no tags in your Git repo. Cap 2.3.0 changed one of the Git commands it uses, and that apparently doesn't work right if you don't have tags. So, to solve the problem, you can simply create a single tag in your Git repository. The tag does not have to relate to your build at all; you only need one tag in the repo (not one per build or anything like that). Once you create the tag, you can deploy again.
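
For context, a Git-based deploy with Capistrano 2 is typically wired up along these lines in config/deploy.rb (a sketch; the application name and repository URL are placeholders for your own):

# config/deploy.rb (sketch)
set :application, "myapp"
set :scm,         :git
set :repository,  "git@github.com:you/myapp.git"  # placeholder repo
set :branch,      "master"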

To create a tag in Git (or, what I think is the "cooler" kind of tag, an annotated tag), you can do:

git tag -a tag_name

Replace "tag_name" with your tag name of course. The "-a" option says to make it an annotated tag, which lets you enter a comment about the tag. You can put whatever you want in there. I'm liking this potential use with my continuous integration server when it makes tags on successful builds. Lots of possibilities.

Finally, if you deploy from a remote repo, or have a remote repo (say on GitHub), you will need to push your tag. This does not happen automatically on a push; you need to add the "--tags" option to git-push to include your tags:

git push --tags

Now you'll have your tags on your remote repo, and listed under the "all tags" tab on GitHub.

28 May 2008

I'm at RailsConf 2008 This Week

I'll be at RailsConf this week. Looking forward to it, and to time in Portland, which I always enjoy. I'm already scouting out places to eat, though it appears I'll have to watch the time, given the many BoFs and other evening sessions that sound good. See you there...

23 May 2008

Jelly - work at my house

Jelly, what is it? It's not something you spread on toast - not this kind, anyway. Jelly is "casual coworking". It's popping up in different cities, and what it really boils down to is: someone opens up their house/apartment/some space, and folks get together to work. They aren't collaborating (although they could be); it's just a way for folks (like me) who work at home to get the social interaction they otherwise miss. You get to meet and interact with other interesting folks in your area, and maybe get a bit of work done too.

To learn more, check out the video NPR did, or the video from Amit Gupta (who started Jelly).

This is pretty interesting to me, and I'm actually thinking I might do it here in Eugene. I'll have to consider whether I want to invite a bunch of strangers (as great as you all might be, hopefully ;-) to overrun my house for the day, but it seems pretty interesting. I always have to balance this against privacy, and having a bunch of random folks I'm not directly acquainted with knowing my house/where I live, and so on. I consider it a bit different than, say, a 20-something single person living in a random apartment in a city. This is my house, where my family (wife & kids) live, etc. Still, it intrigues me. Plus, I have the space, WiFi/net, etc. to do it, and the tech folks (shout out to the Django group) I've met here in Eugene are all cool so far.

21 May 2008

Monitoring Mongrels on an F5 BigIP

Almost a year after I raised the issue of problems monitoring Mongrel instances on an F5 BigIP, I still get occasional emails asking whether I got it working, and how. So, I'm firing off this blog entry to hopefully provide a more findable solution (the mailman mongrel-users archive seems broken, and clearly people find my initial query on the subject, but not the response with the solution).

So, if you need to monitor Mongrel instances with an F5 BigIP, and are doing so with an HTTP monitor health check, you want to set your Send String to be along the lines of the following:


GET /heartbeat/index HTTP/1.1
\r\n
\r\n

The key point here is to have the "\r\n" lines. The "\r" is what I was missing, and many folks seem to get tripped up by the proper way to specify the line breaks needed for the HTTP request.
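
For completeness, the endpoint such a health check hits can be trivially simple. A sketch of a matching Rails controller (the /heartbeat/index URL comes from the Send String above; the implementation here is just an illustration):

class HeartbeatController < ApplicationController
  # Return a 200 with a tiny body; the BigIP monitor just needs a
  # successful HTTP response to mark this Mongrel instance as up.
  def index
    render :text => 'OK'
  end
end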

Here's a link to the solution provided by Jason Hoffman of Joyent, on the mongrel-users mailing list.

Note that I haven't worked with F5's since September 2007, so likely can't answer any further questions.

08 May 2008

Webcam Recommendations

I'm back to needing recommendations for a webcam. On the site (Basecamp Silverton), we will only show images/stills taken from the camera, probably at some interval in the "minutes" range. The camera will be pointed at the local mountain from a house in town (the mountain is pretty darn close, but still, it's a mountain in the distance, as compared to, say, a security or other similar application). We're looking for a camera that can automatically upload images to a server via FTP/SSH/SCP at regular intervals.

Our criteria are the following, and I was hoping folks could make some recommendations:

Requirements:


  • Video doesn't matter; it must be able to produce still images of "reasonable" quality

  • Lens and such must be sufficient to shoot a pretty high depth of field, to maximize focus across a long-ish range (as noted, it'll be pointing at a nearby mountain from a house in town)

  • Must do automated image upload via FTP and/or SSH/SCP.

  • Controllable interval on the automated upload (nothing too fancy, but we may want it as fine as, say, every 2 or 5 minutes)

  • Wireless/WiFi (in case we want to mount it outside, or just to make it easier inside)

  • web/browser based configuration, or configurable via a Mac (we do not have a Windows machine)

  • $500 or less. We may have some flexibility here, if for example it includes the outdoor enclosure/is weatherproof.

  • Really prefer not to have any separate software solution beyond what's in the camera (we won't have a machine to dedicate to running it), unless that really is the best way to get the FTP/SCP upload



Nice to haves:

  • Ability to be mounted in an outdoor enclosure, or to be weatherproof itself. This will be in Silverton, Colorado, where they routinely see temperatures below zero, snow, etc.

  • user adjustable zoom



We already had one camera, a TrendNet TV-IP201W, which was nice and cheap, but the thing appears to have just up and stopped working. Plus, we had a lot of problems with its FTP upload (and weird limitations on the length of user login names and so on). Also, this camera seemed to "crash"/need a reboot every so often (often enough that it was annoying).

06 May 2008

Bring Light's New Facebook App

At Bring Light, we recently released a Facebook application to serve as a companion to our main site. You can now use the Facebook application to show what projects you've contributed to, what your favorites are, what groups you're in, and tell the world about your philanthropic activities on Bring Light.

If you're already a Bring Light user, you can get to the Facebook application in two ways: if you're logged into www.bringlight.com, there is now a Facebook link in the upper right; or you can go directly to the app on Facebook at http://apps.facebook.com/bringlight/.

Here's a look at what the wide profile box on Facebook looks like after you add the app:

[Screenshot: the Bring Light wide profile box on a Facebook profile]

05 May 2008

Announcing ShipFu!

Have you ever needed to quickly compare UPS and USPS shipping rates, and just wanted a simple web page/app to do it? If so, check out my new ShipFu app. It's incredibly simple: no gratuitous web decoration, no hidden motivation to promote some other product or shipping service, just a simple calculator. Enter your package's origin and destination, dimensions, and weight, and voila, you'll be presented with all the rates available from the two services.

I built ShipFu to address a need of my own, and because I just didn't like any of the other solutions out there - too many hidden agendas, ungainly UIs, etc. I also built it as a bit of an experiment with a few technologies: it's a Rails app, running on the new-ish Thin server, and so on. I plan a follow-up post that goes into more depth on some of the fun things that occurred while building this super simple app.

Also, please, by all means, send me feedback (use the feedback link shown near the bottom of the site) if you'd like to see some feature or enhancement or what not. Or, just use it to help you quickly gauge shipping rates.

21 April 2008

Movable Type 4 Setup on Ubuntu

I am in the process of consolidating my blogs into a single system, and in doing so, am switching to Movable Type. This blog will eventually move, but the initial impetus was some other blogs running on WordPress where I was having problems (and spent way too much time trying to fix them - one of the key things I don't want to be doing with my blog system). In the process, I found the setup instructions for Movable Type a bit lacking, especially regarding multiple blogs, and how that works in terms of Apache and MT setup.

I also want to say a big thanks to Duncan for showing me his Apache config and briefly discussing multiple blogs with MT. The MT docs were really lacking here, as said, but Duncan's knowledge made it clear this was pretty easy and a nice way to go. Thanks Duncan - and check out his blog and site, and of course all his great pics on Flickr.

I am using a single installation of Movable Type supporting multiple blogs, with each blog having its own domain name, and the blog (or really the MT content) living at the root of that domain (this last part isn't required/essential for this tutorial; you can easily tweak the Apache config).

So, with that, given that it actually turns out to be relatively simple to set up once you know a few key bits, I figured I'd pass along what I did...

First, I created my standard Ubuntu slice at Slicehost. I host everything with them these days, and as such, I have a base system image that I've built for myself. It's built with their standard Ubuntu 7.10 choice, and then I make tweaks to the SSH setup, add a few bits I need, etc. But, I believe you could simply take pretty much a standard Ubuntu server install and use that. Please let me know if this guide doesn't work for you in that case.

Preparations



In preparation for the move, while my existing blogs were still running, I exported their data from WordPress using WordPress' Export feature, which produces a WXR file, and saved that to my local machine.

I also made sure I had my domain names secured, and the DNS for them set up. In particular, note that Movable Type is sort of a two-part system: you have the MT web application, which is a publishing application, but it is not what someone hits when visiting your blog(s). MT publishes your blog out as static content (that's the default option, anyway). So, I set up an "mt" subdomain on one of my domains where I will access MT (more on that below).

Apache and Perl Install



Apache 2 and mod_perl were not on my system by default, so I needed to install them. This amounts to:
sudo apt-get install apache2 libapache2-mod-perl2 libapache2-mod-perl2-doc

The above will not only install them, but also configure mod_perl for use in Apache, so you can now run Perl-based web apps (MT is developed in Perl). Also, depending on how you want to do email, you may need to install Perl's Mail::Sendmail (if using SMTP; if you use sendmail, you can choose that when setting up MT):

sudo perl -MCPAN -e "install Mail::Sendmail"



Create a Database



I created a database for MT using MySQL. I also set up a specific MySQL user and password, and granted that user rights to the database. You'll need this info later when configuring MT.

I have been using Navicat for nearly all my DB management. It works really well given it can do SSH tunneling (I don't open the MySQL port on my servers). It is a commercial app, but for a developer who works with DBs often, it has proven to be my tool of choice (I've tried many others, and this is the one that's worked best for me).

Apache Configuration



I have a relatively minimal Apache configuration file. The bulk of it is done with a file that sets up my virtual hosts (several domains point to a single machine). The setup for MT has a few critical pieces:

  • Setting a ScriptAlias for the "mt" directory, so mod_perl knows to execute .cgi files there.

  • Setting an Alias for the "mt-static" directory, which is MT's static content, and which you'll want to be referenced from all your blogs. You can also do this with a symlink, but I've done it with an Apache alias below so that I don't have to worry about a given blog's static content getting wiped out and breaking this (if the static content gets wiped out, you can just republish in MT to restore it, so I prefer to keep that purely maintained by MT).

  • Set up the proper options/settings for the MT directory.



Thus, my configuration for my virtual hosts looks like the following (fake domain names used), with notes after:


NameVirtualHost *
<VirtualHost *>
  ServerAdmin chris@example.com
  ServerName mt.example.com
  DocumentRoot /var/www/mt

  Alias /mt-static /var/www/mt/mt-static
  <Directory /var/www/mt/mt-static>
    Options Indexes FollowSymLinks MultiViews
    AllowOverride All
    Order allow,deny
    Allow from all
  </Directory>

  ScriptAlias / /var/www/mt/
  <Directory /var/www/mt>
    AllowOverride None
    Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
    Order allow,deny
    Allow from all
  </Directory>
</VirtualHost>

<VirtualHost *>
  ServerAdmin chris@example.com
  ServerName example.com
  ServerAlias www.example.com
  DocumentRoot /var/www/example

  Alias /mt-static /var/www/mt/mt-static
  <Directory /var/www/mt/mt-static>
    Options Indexes FollowSymLinks MultiViews
    AllowOverride All
    Order allow,deny
    Allow from all
  </Directory>
  Redirect /mt.cgi http://mt.example.com/mt.cgi
</VirtualHost>


The first VirtualHost block sets up where I'll access MT, thus at http://mt.example.com. There you will see both the Alias for mt-static, and also the ScriptAlias for mt. These are critical.

The second VirtualHost block defines one of the actual blogs, which will be accessible at http://example.com or http://www.example.com. For additional blogs, you would add another of these blocks per domain name. The key bits here are the DocumentRoot and the Alias for mt-static. The DocumentRoot is where you will have MT publish your static blog content; make sure that directory is writable by Apache/MT.

Finally, the Redirect sets things up so that when you are logged in, visit your blog, and see the various "Edit this content" links, those actually work (they point to mt.cgi, so this redirects them to the domain that serves MT).

Setup Movable Type



Next, visit http://mt.example.com/mt.cgi in your browser to begin setting up and configuring Movable Type. It will ask you about your database and a few other bits, then prompt you to create the first blog. Make sure the blog URL and directory match what you set up in your Apache configuration above. Beyond this, you will need to refer to the Movable Type docs for questions. But you should essentially have a blog running, and will just need to Publish it to have MT write the static files into the directory you've defined.

The last step for me was to import my blog content from my prior WordPress setup, using the WXR dump I created at the beginning of all this. One key note here: when you go into MT's Import page, first choose the blog you want to import into (even if you only have one). Only then, when you pick WXR for your import, will it give you the proper fields for the info the WXR import needs. Otherwise it'll set your import type back to Movable Type and claim it has read your import file fine (but you'll of course have no content).

I'm still tweaking my templates and doing a few bits to the sites I'm moving over, so I'm off to continue on that. Hope this helps someone.

16 April 2008

Working at Home, The Zone, and Importance of Equipment

Recently, there was a good writeup at Hivelogic on Offices and the Creativity Zone. This is partially a response to Jason Calacanis' post on how to save money, 37 Signals' response, and so on. I'm getting around to my own thoughts/response, as someone who has worked at home for about 60% of the last 10 years.

I'd like to comment on/respond to a few things, in particular:


  • chairs and desks

  • pair programming

  • "The Creativity Zone", as well as working in coffee shops

  • what I think is important for a home office - and in working at home

  • passion for your work, and the relation of that and hours put in at startups



Chairs and Desks



First, chairs and desks. As most folks will say, do NOT skimp on a chair. Go straight to a Herman Miller Aeron or a Humanscale chair; do not pass Go. I'll wait. I've had my posterior in an Aeron since I started at Adobe (thank you, Adobe!) in 1996. When I moved and was no longer working in the office, I used their program that allowed employees to buy these chairs at a discount, and picked one up for $500. I'd have gladly paid full price. Additionally, make sure you get the proper size - it makes a huge difference!

Following on that, I completely disagree on getting cheap desks, or doing the door/board-on-top-of-file-cabinets approach mentioned in Calacanis' post. I don't think you need to spend a lot on a desk; after all, you just need a good surface. However, the key is getting a desk that is the proper height. If you do the file cabinets thing, or buy your average stock desk, it will almost always be too tall. Take it from me: I'm 6'2" tall, and these desks are still too tall if you properly set your chair height (thighs level, feet flat on the ground, forearms level or close to it, etc.). So, I suggest finding adjustable-height desks, or if you are building your own, making sure you figure out the proper height. I've been using Anthro's AnthroBench desks, which are not cheap, but are kick ass. However, their height adjustment is non-trivial, so you mostly have to get it right the first time. I've since seen some desks consumers can buy that have more of an infinite height adjustment (which is what we had at Adobe, but I was unable to buy those).

Pair Programming



I'll cut to the chase: I'm not into it. I know folks who are and swear by it (e.g. Pivotal Labs does it the most and best I've ever seen). But, it's not for me. It doesn't fit with the way I think and work. I like a personalized environment, I like things quiet, and I like a bit more free flow. I also don't feel that it is a guarantee of better code quality.

Some of the complications, to me, are all the personalization developers like to do, whether that be fonts, keyboards, screen arrangements, colors, coding styles, and so on. Some of that can be worked around, but I'm simply not a fan, and I don't believe it's the big advantage various others believe it to be. On the same note, though, if it works for you, you prefer it, and you find someone/people to pair with that works well, then more power to you.

Also, I have a long history of doing remote work and working with other remote folks, and that either makes pair programming impossible or mostly defeats it (Pivotal may disagree, but I do know they've had some hardships in this area as well). Differing time zones are not friendly to pair programming.

All this also ties into the next topic...

The Creativity Zone



I think Hivelogic nails it with this:
Unfortunately, most people can’t simply step into The Zone. In the very same way you’d want to find the right time and place to read a book, creative types need to setup the specific conditions they need to enter The Zone. For some people, this might mean listening to a certain kind of music. It might be fueled by caffeine and a dark room late at night. Some people work best in the silence of the early morning. It all depends on the person.


As you can guess from my pair programming comments above, I agree about having the right environment, and that you can't just force the work to flow. I've worked with a lot of different folks. Some like to listen to music, some don't; those that do range from classical at low volume to metal at high volume. Some work best at night, others can do the 9-5 thing, etc. To me, this is similar to the situation of working in a cafe.

I think working in cafes is not good. I'm OK with popping in for an espresso, having a casual meeting there, or just using it to take a break from the office (whether that be a company office or a home office) and check email or read RSS. But I don't buy it for serious work, and secondarily, I think you people who do that suck. Yep, straight up, you suck. You go into a coffee shop, take up space, and then ignore everyone. Why are you there? And why do you think that's fair? You are in no way contributing to the "cafe culture" or environment of a cafe; you are detracting from it. I was glad to see Ritual take away outlets and such. You shouldn't be sitting there for hours on end leeching from them.

And furthermore, I simply don't buy it as a productive environment, even when you wall yourself off from what's around you - which by definition tells me you don't think it's a productive environment either; otherwise you wouldn't need to bring your headphones and ignore everyone.

Instead, make yourself a nice home office. There are a ton of resources on the web on how to do this if you need some pointers. Which leads me to...

What's Important in a Home Office - And In Working at Home



The above referenced articles already cover some of this, I'll try to be brief. Bring on the bullets:

  • Great chair and desk, see above

  • Proper lighting. By this I mean both the actual lights, and how windows affect your workspace. Do not face directly into a window, as awesome as the view may be. Usually you want windows to the side of you (not in front or behind). I have a nice forest, mostly, to look out on through my left-side window - easy enough to just turn my head when I need a break.

  • A separate room. Not everyone can do this, but I feel VERY strongly about this if you plan to do significant amounts of work at home. You need a space that you can go to that is your office, where you can make a shift into work mode, have some isolation, close doors (so phone calls are quiet and so you can work without distraction), etc. It doesn't have to be huge, but make it your office space.

  • Good machine and monitor(s). Big monitors are key. I use a MacBook Pro as my main machine, but have an external 24" monitor (I want to go to a 30" when I can) on it as well, and an external keyboard is good too.

  • I feel you shouldn't have a beverage bar in your office; just keep it in the kitchen, save electricity, or whatever. For me, this is a good way to force myself to get up and walk a bit, and it allows some different thinking time. I almost always have a glass of water on my desk, but I get up to make an espresso, or maybe grab some fruit, or whatever. The break is always good.

  • Ok, I used to laugh at this recommendation, but I'm now one who does it, although I don't think it's required... Get up, take a shower, get dressed, etc. I mention this for the two reasons I need to do it (but if you don't, no biggy): I am not a morning person; I need to wake up a bit slower, and I prefer less interaction with people when I first get up. So what I've found great is to get up and go shower. It is my way of having a slower re-entry. But it also helps shift me into work mode (even if I don't wind up "going to work" for another hour or two). It flips that switch more explicitly for me.



That's all for now, as I want to get on to the last point...

Passion



In some of the referenced articles (and this has grown to be discussed a lot in reference to them), there is the question of whether folks need to be workaholics to have a successful startup. 37 Signals says to fire them; Calacanis says mostly the opposite. I'm very strongly in the 37 Signals camp on this: to me it all comes down to passion. I believe this beyond startups as well, and it's one reason I just have no interest in working for larger companies anymore - I feel the logistics simply make it a lot harder for everyone to be passionate. In the end, the folks I want to work with are passionate about their work/the project. This is how I want to be with what I'm working on. Sure, there are always parts that aren't as fun, but the overall idea is to have an overarching passion for what you're doing. To me that produces the best result, regardless of actual hours worked. In fact, I'd argue that you will get FAR better results from passionate folks working moderate hours than from a box of people putting in massive hours.

I recall we used to joke about the notion that at Oracle (or substitute various others), all the engineers worked 80-hour weeks. That was BS, of course. (Note: I haven't worked at Oracle, but I know folks who did, although that's somewhat beside the point.) They may have been at the office 80 hours a week, but there is no way they were productively cranking out great work for all 80. No, they were going to the gym, eating in the cafes, goofing off, or half awake at the keyboard. Recipe for burnout.

Now, as long as you have the passion, that's the key to me. After that, if you want to put in some epic hours because you're so psyched to be moving some great project forward, that's cool. I've done it. I don't think it's something that's sustainable long term, but bursts of this are great, go for it.

Right now, to share a bit, I'm making less money than I have in a long, long time, but I'm more psyched than I probably ever have been about the work I'm doing. I'm hoping the money part changes as the startups I'm working on grow, but I'm just loving it. Working with others who are passionate, working on cool stuff, running the show myself or being involved at fundamental levels - this is why I left the mothership, and I really just wish I'd left sooner. I can get into that Creative Zone every day, and I look forward to doing so!

So, I recommend you think hard about your work environment, and how to make it productive and enjoyable. But most of all, shoot for the passion, and mold your environment to support and foster it.

11 April 2008

Shoulda and object_daddy Sitting in a Tree, t-e-s-t-i-n-g

Like some other folks working with Rails, I've been a bit frustrated with fixtures. Foxy Fixtures, Rathole, and such things, including what's in Rails 2, have helped a lot. However, the two biggest frustrations for me come down to the fragility of fixtures, and knowing what fixtures you have and how they relate to their associated fixtures. I would find myself thinking, "hmm, which user do I use when I want the one with X associations?" and the like. Naming your fixtures well helps, but only so much.

Lately, I've come across two plugins that I am really loving: Shoulda and object_daddy. Shoulda seems to be gaining popularity in the Rails community, which doesn't surprise me. It gives you some of the best syntax of RSpec without having to use RSpec (which I am not in love with, unlike various others), as well as some nice "should" methods and other features I'll get into in a minute.

object_daddy, I think, is rather obscure. I only found out about it from Tammer Saleh's presentation (video link) on Shoulda at the 2008 MountainWest RubyConf. object_daddy has improved over its short life, and is quite useful today. What it does is provide a factory/generator mechanism for creating model objects. I've done this in the past with object constructors or factories, but object_daddy organizes all this and provides a slick mechanism, called "exemplars", for specifying how model attributes are defined when generating objects - and more importantly, when generating multiple objects. Tammer covers this in his presentation, which I highly recommend watching.
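
A sketch of an exemplar, from my recollection of object_daddy's API (the file location, the :start option, and the attribute here are all illustrative; double-check the object_daddy README for the exact options):

# test/exemplars/user_exemplar.rb (illustrative)
class User
  # Each User.generate call gets a unique login ('user_a', 'user_b',
  # ...), so uniqueness validations don't collide when generating
  # multiple objects.
  generator_for :login, :start => 'user_a' do |prev|
    prev.succ
  end
end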

It's taken me a bit of time/use of Shoulda to really get into it, but like so many things, use it a bit and the light bulb not only goes on, but seems to erupt with light. The big one for me was learning to leverage contexts - and by that I really mean nested contexts. My tests now not only read better, but can be written and organized in a much nicer fashion. On the organization front, for example, I now often have two top-level contexts in a functional test: one for testing actions without a user logged in, and one for when a user is logged in. A great way to group them.

But what's got me most excited lately is the combination of these two testing tools, and what it's done to my tests. First, I've darn near eliminated fixtures on the project where I'm using this the most. This has removed the fragility, and it's just FAR easier to understand a test case's setup/scenario. I use Shoulda's contexts and setup, combined with generated objects from object_daddy, to create "scenarios" (to steal the term from the plugin that provides this kind of thing for fixtures). The benefit is that all of the info about your test is right there in one place in front of you. You don't need to bounce between multiple fixture files and your test code to ascertain what's being used in your test. Plus, you can be very specific about the data being used in that particular test (or tests).
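
Here's a small sketch of what that combination looks like in practice (the model, action, and login_as helper are hypothetical; note that with stock object_daddy you'd also need to save the generated objects - see the fork discussion at the end of this post):

class CoffeesControllerTest < ActionController::TestCase
  context "with a logged in user" do
    setup do
      # The whole scenario is visible right here - no fixture files.
      @user   = User.generate
      @coffee = Coffee.generate
      login_as @user  # hypothetical auth test helper
    end

    should "show the coffee" do
      get :show, :id => @coffee.id
      assert_response :success
    end
  end
end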

I heartily recommend you try this out. Two other things of note:


  • Shoulda can be mixed in with existing TestUnit. So, you can slowly convert to it, or just build new tests with it, etc. Very nice. And, it doesn't require anything special to run the tests (it really is just a method generator for building TestUnit tests).

  • Check out Shoulda's "should_eventually" method. I'm making more and more use of it, as I take a test-first approach: I go in and write lots of tests, using "should_eventually" as I think of things to test and functionality I need. Then, as I determine how to write each test, and following on from that, the implementation, I simply remove the "_eventually" and let it rip (see the sketch just after this list).
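
A tiny sketch of that workflow (a hypothetical test):

should_eventually "reject a blank drink name" do
  # Reported as pending, and the body is skipped, until this is
  # renamed to plain `should` - at which point it runs for real.
end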



Note: if you are an RSpec fan, you can of course achieve the same as Shoulda's contexts and such (I think, anyway), so just pull in object_daddy and leverage that aspect.

And last, but not least, I've forked object_daddy to make one tiny change (a single line - actually, a single method call name change!) that's made a big difference for me (comments very welcome). The change is to have the generate method that object_daddy adds to your ActiveRecord objects call create by default, instead of new. This avoids what I found to be the common pattern of:

my_new_object = SomeModel.generate
my_new_object.save
my_new_object # or some use of it

Now you can simply call SomeModel.generate and use the result inline, knowing it's saved in the DB. I want to look at adding options to generate, or additional generate methods, to provide the flexibility to use new or create! for the cases where those are needed. My fork is hosted on GitHub and is public, so feel free to check it out: http://github.com/chris/object_daddy.

p.s. For those who wonder what I still have in fixtures, having "all but eliminated" them: the only things are standard data, which in my case amounts to a couple of specific users and a couple of Roles and Permissions. These normally get set up in the DB migrations, but I'm working through what Rails does when you run tests and it wipes the DB clean (and thus doesn't pick up seed data from migrations), and other such issues.