End of the line

For almost seven years, Brighter Planet has been a constant source of inspiration for me and, I’m proud to say, for the broader sustainability community. This makes it all the more difficult to announce that the company and its Brighter Planet Visa cards and other services are coming to the end of the line.

We thank our customers, our partners, and all those with whom we’ve worked over the years to move the needle on building a clean-energy future. It’s phenomenal what knowing your carbon footprint — and what you can do about it, down to simple choices like which card to pull out of your wallet — can accomplish. Because of you, we offset over 350 million lbs. of CO2, supporting the development of 24 renewable energy and sustainable forestry projects across the US. This is the equivalent of over 20,000 homes being electrified by renewable energy for a year, removing over 10 million cars from the road for a day, or every cardholder turning off all the lights in their home for more than a year.

We also thank our investors, leadership boards, and experts who have supported and advised us over the years. As a business that came to be profitable with almost 200,000 customers, we hope that we have helped demonstrate, along with all of our compatriots in the social entrepreneurship world, that you can certainly do well while also doing good.

I must thank our team too, past and present, the soul of Brighter Planet. Andy, Seamus, Robbie, Ian H., Nancy, Matt K., Derek, Jon, Jake, Adam, Carolyn, Rich St., Daniel, Emily, Ashley, Bruce, David, Ian W., Kerry, Matt V., Rich Sa., Remy, and others. Some were with us for more than five years, others were interns for a few months, all played a role in our many successes.

You may be wondering why we are liquidating at this point. Having led businesses large and small, I can tell you that winding a company down is the toughest decision to make. It was not one that Brighter Planet’s Board or shareholders took lightly.

A perfect storm of regulatory upheaval made Brighter Planet’s business model increasingly challenging and financially unsustainable going forward. The passage of the Durbin Act, intended as a plus for U.S. consumers and merchants, upended the card market, effectively precluding economically viable rewards cards that provide revenue for cause-based organizations including Brighter Planet. At the same time, the government’s inaction on meaningful cap-and-trade and the rise and fall of the U.S. carbon markets negated the need for and value of computational sustainability tools for businesses—tools that we pioneered in developing. Finally, the country’s economic meltdown these past few years naturally impacted discretionary spending by consumers and businesses, in turn impacting companies offering products and services like Brighter Planet.

Since getting our start in a Middlebury College classroom in 2005, so much has changed. The powerful call to action of An Inconvenient Truth came the year after Brighter Planet was created, and left millions of Americans searching for ways to make a difference in climate change, the most important challenge our world faces today. Back then, there weren’t many significant organizations devoted to the global warming challenge. Brighter Planet in partnership with the Bank of America and Visa offered a straightforward first step for people to reduce their impact on our planet’s atmosphere, and we appreciate the bold steps that these partners took along with us.

Now in 2013, I see a myriad of ways for people and businesses to take action. With organizations building the largest climate movement the world has seen, or CERES mobilizing business leadership for a sustainable world, I think it’s safe to say that while we certainly haven’t solved the climate problem, the community has made a serious dent in the participation problem. That you no longer need a credit or debit card to join the fight is cause for celebration.

So while Brighter Planet is wrapping up, I’m happy to count hundreds more capabilities that have sprung up from the ingenuity of the community as well as thousands of organizations both big and small that transformed themselves to embrace sustainability within their core businesses. Even several of Brighter Planet’s former employees, including one of its founders, have formed a new startup, Faraday, to apply Big Data technology to customer acquisition of energy efficiency solutions. What so many for-profits, not-for-profits, and individuals are doing today in 2013 is nothing short of amazing.

I’d like to close with some of my favorite Brighter Planet highlights from these past several years:

  • Managed the country’s largest consumer carbon offset program with almost 200,000 customers, outlasting every competitor in our space

  • Supported the development of 24 projects such as wind power in public school districts and open lands, cow power on family farms, and conservation of redwood forests, decreasing our dependence on carbon-emitting dirty energy sources like coal burning power plants and removing carbon from the atmosphere

  • Modeled over 27.5 million carbon, energy, and resource impacts of real-life carbon emissions sources, the most in the country

  • Conducted groundbreaking research in the aviation and hotel sectors, examining key drivers of energy efficiency and analyzing economic and environmental benefits

  • Assessed employee sustainability engagement in corporations and identified best practices

  • Provided grants to support community climate projects such as school gardens

  • Ran industry-defining social media campaigns

  • Successfully built and sold a social giving platform

  • Named the country’s Best Small Business by Discovery Channel’s Treehugger

  • Won the Financial Times Social Innovation Award

  • Won the EPA’s Apps for the Environment Award

  • Recognized as thought leaders, presenting the company’s work to diverse audiences such as the technology, corporate travel, energy, and open government communities

Thank you all.


A new offset project

We’ve got a new offset project, and once again it’s the first of its kind for Brighter Planet. The Rentech Fertilizer Plant project supports a new N2O destruction system at a Midwestern fertilizer plant, and is certified under the Climate Action Reserve.


Rentech’s fertilizer plant in East Dubuque, Illinois, produces nitric acid and ammonia, two of the main ingredients in fertilizer. The plant used to release nitrous oxide (N2O), a greenhouse gas about three hundred times more powerful than CO2, as a byproduct of the nitric acid production process. With this project the tail gas from nitric acid production is piped through a special chamber where a catalyst converts between 95% and 99% of the N2O into nitrogen and oxygen. A continuous emissions monitoring system directly measures the amount of N2O in the tail gas exiting the chamber to ensure the catalyst is functioning effectively.

This project is the first of its kind in the United States and serves as an example of how the fertilizer industry can reduce the impact of its operations while continuing to serve the needs of farmers.

Learn more about the Rentech fertilizer plant or our other offset projects.

Back to the Browser - A JavaScript Workflow for UNIX Nerds

When Apple announced Mac OS X Lion, their tagline was “Back to the Mac” as they were bringing some features from iOS into the desktop-oriented Mac OS. In the JavaScript world, a similar thing has happened: innovations in the Node.js space can be brought back to the browser. These innovations have made JavaScript development faster and cleaner with command-line tools and the npm packaging system.

As I began writing serious JavaScript libraries and apps, I wanted the same kind of workflow I enjoy when writing Ruby code. I wanted to write my code in vi, run tests in the command line, organize my code into classes and modules, and use versioned packages similar to Ruby gems. At the time, the standard way to write JavaScript was to manage separate files by hand and concatenate them into a single file. One of the only testing frameworks in town was Jasmine, which required you to run tests in the browser. Since then, there has been an explosion of command-line code packaging and testing frameworks in the Node.js community that have lent themselves well to client side development. What follows is the approach I find to be the most productive.

Here’s a list of the tools that correspond to their Ruby world counterparts:

Application            Ruby                 JavaScript
Testing                RSpec                vows, buster
Package management     rubygems, bundler    npm, browserify
Code organization      require              CommonJS
Build tools            jeweler, rubygems    browserify

By installing Node.js, you have access to a command-line JavaScript runtime, testing, package management, and application building. Running tests from the command-line allows you to more easily use tools like guard, run focused unit tests, and easily set up continuous integration.
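For example, guard can watch your files and rerun the test suite on every save. A sketch of a Guardfile using the guard-shell plugin (the file globs and test path are assumptions about your project layout, not a prescription):

```ruby
# Guardfile: rerun vows whenever a source or test file changes
# (assumes the guard and guard-shell gems are installed)
guard :shell do
  watch(%r{^(lib|test)/.*\.js$}) { `vows test/*-test.js` }
end
```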


Many JavaScripters run Jasmine in the browser for testing. While it does the job, its syntax is extremely verbose and it breaks the command-line-only workflow. There is a Node.js package for running Jasmine from the command line, but I have found it to be buggy and not as feature-rich as a typical command-line testing tool. Instead I prefer vows or buster.js. Each supports a simpler hash-based syntax, as opposed to Jasmine’s verbose syntax:

// Jasmine

describe('MyClass', function() {
  describe('#myMethod', function() {
    beforeEach(function() {
      this.instance = new MyClass('foo');
    });

    it('returns true by default', function() { /* ... */ });
    it('returns false sometimes', function() { /* ... */ });
  });
});

// Vows

vows.describe('MyClass').addBatch({
  '#myMethod': {
    topic: new MyClass('foo'),

    'returns true by default': function(instance) { /* ... */ },
    'returns false sometimes': function(instance) { /* ... */ }
  }
}).export(module);

Vows and buster can be used just like rspec to run tests from the command line:

> vows test/my-class-test.js
OK >> 22 honored

One advantage that buster has over vows is that it can run its tests both from the command line and from a browser in case you want to run some integration tests in a real browser environment.

For mocks and stubs, you can use the excellent sinon library, which is included by default with buster.js.

Integration testing

In addition to unit testing, it’s always good to run a full integration test. Since every browser has its own quirks, it’s best to run integration tests in each browser. I write cucumber tests using capybara to automatically drive either a “headless” (in-memory) webkit browser with capybara-webkit and/or GUI browsers like Firefox and Chrome with selenium.

In features/support/env.rb you can define which type of browser is used to run the tests by defining custom drivers:

require 'selenium-webdriver'

Capybara.register_driver :selenium_chrome do |app|
  Capybara::Selenium::Driver.new(app, :browser => :chrome)
end

Capybara.register_driver :selenium_firefox do |app|
  Capybara::Selenium::Driver.new(app, :browser => :firefox)
end

if ENV['BROWSER'] == 'chrome'
  Capybara.current_driver = :selenium_chrome
elsif ENV['BROWSER'] == 'firefox'
  Capybara.current_driver = :selenium_firefox
else
  require 'capybara-webkit'
  Capybara.default_driver = :webkit
end

Now you can choose your browser with an environment variable: BROWSER=firefox cucumber features

If you are testing an app apart from a framework like Sinatra or Rails, you can use Rack to serve a static page that includes your built app in a <script> tag. For example, you could have an html directory with an index.html file in it:

<html>
  <head>
    <title>Test App</title>
    <script type="text/javascript" src="application.js"></script>
  </head>
  <body><div id="app"></div></body>
</html>

When you’re ready to run an integration test, compile your code into application.js using browserify:

> browserify -e lib/main.js -o html/application.js

Then tell cucumber to load your test file as the web app to test:

# features/support/env.rb
require 'rack'
require 'rack/directory'

Capybara.app = Rack::Builder.new do
  run Rack::Directory.new(File.expand_path('../../../html/', __FILE__))
end.to_app

Once cucumber is set up, you can start writing integration tests just as you would with Rails:

# features/logging_in.feature

Feature: Logging in

Scenario: Successful log-in
  Given I am on the home page
  When I log in as derek
  Then I should see a welcome message
# features/step_definitions/log_in_steps.rb

Given %r{I am on the home page} do
  visit '/index.html'
end

When %r{I log in as derek} do
  find('#login').click
  fill_in 'username', :with => 'derek'
  fill_in 'password', :with => 'secret'
  find('input[type=submit]').click
end

Then %r{I should see a welcome message} do
  page.should have_content('Welcome, derek!')
end

Package management

One of the joys of Ruby is its package manager, rubygems. With a simple gem install you can add a library to your app. There has been an explosion of JavaScript package managers lately. Each one adds the basic ability to gather all of your libraries and application code, resolve the dependencies, and concatenate them into a single application file. I prefer browserify over all the others for two reasons. First, you can use any Node.js package, which opens you up to many more utilities and libraries than other managers. Second, it uses Node.js’ CommonJS module system, which is a very simple and elegant module system.

In your project’s root, place a package.json file that defines the project’s dependencies:

{
  "dependencies": {
    "JSONPath": "0.4.2",
    "underscore": "*",
    "jquery": "1.8.1"
  },
  "devDependencies": {
    "browserify": "*",
    "vows": "*"
  }
}

Run npm install and all of your project’s dependencies will be installed into the node_modules directory. In your project you can then make use of these packages:

var _ = require('underscore'),
    jsonpath = require('JSONPath'),
    myJson = "...";

_.each(jsonpath(myJson, '$.books'), function(book) {
  // ...
});

If you’re looking for packages available for certain tasks, simply run npm search <whatever> to find packages related to your search terms. Some packages are tagged with “browser” if they are specifically meant for client side apps, so you can include “browser” as one of your search terms to limit your results accordingly. Many of the old standbys, like jquery, backbone, spine, and handlebars are there.

Code organization

As JavaScript applications get more complex, it becomes prudent to split your code into separate modules, usually placed in separate files. In the Ruby world, this was easily done by require-ing each file. Node.js introduced many people (including me) to the CommonJS module system. It’s a simple and elegant way to modularize your code and allows you to separate each module into its own file. Browserify allows you to write your code in the CommonJS style and it will roll all of your code up into a single file appropriate for the browser.

Ruby structure

For example, my Ruby project may look like:


Where lib/my_library.rb looks like:

require 'my_library/book'

class MyLibrary
  def initialize(foo)
    @book = Book.parse(foo)
  end
end

And lib/my_library/book.rb looks like:

require 'jsonpath'

class MyLibrary
  class Book
    def self.parse(foo)
      JSONPath.eval(foo, '$[0]')
    end
  end
end

And spec/my_library/book_spec.rb looks like:

require 'json'
require 'helper'
require 'my_library/book'

describe MyLibrary::Book do
  describe '.parse' do
    it 'parses a book object' do
      json = File.read('support/book.json')
      book = MyLibrary::Book.parse(JSON.parse(json))
      book.title.should == "Breakfast at Tiffany's"
    end
  end
end

JavaScript structure

A JavaScript project would look similar:


Where lib/my-library.js looks like:

var Book = require('./my-library/book');

var MyLibrary = function(foo) {
  this.book = new Book(foo);
};

module.exports = MyLibrary;

And lib/my-library/book.js looks like:

var jsonpath = require('jsonpath');

var Book = {
  parse: function(foo) {
    return jsonpath(foo, '$[0]');
  }
};

module.exports = Book;

And test/my-library/book-test.js looks like:

var fs = require('fs'),
    assert = require('assert'),
    vows = require('vows'),
    helper = require('../helper'),
    Book = require('../../lib/my-library/book');
    // NOTE: there are ways to set up your modules
    // to be able to use relative require()s but
    // it is beyond the scope of this article

vows.describe('Book').addBatch({
  '.parse': {
    'parses a book object': function() {
      var json = fs.readFileSync('support/book.json'),
          book = Book.parse(JSON.parse(json));
      assert.equal(book.title, "Breakfast at Tiffany's");
    }
  }
}).export(module);

Build tools

Browserify will build concatenated JavaScript files when you’re ready to deploy your code on a website or as a general-purpose library. Its usage is simple:

> browserify -e <main_application_startup_code> -o <path_to_built_file>

Building a library

If we were building the library in the section above, we could run browserify -e lib/my-library.js -o build/my-library.js. Then, any user of your library can use your library with the require function:

<script type="text/javascript" src="jquery.js"></script>
<script type="text/javascript" src="my-library.js"></script>
<script type="text/javascript">
  var myLibrary = require('my-library');
  $.ajax('/lib.json', function(data) {
    // ...
  });
</script>

You can also save the library user some time with a custom entry point for browsers:

// in /browser.js
    window.MyLibrary = require('my-library');

Then run browserify -e browser.js -o build/my-library.js

And the library user would use it thusly:

<script type="text/javascript" src="jquery.js"></script>
<script type="text/javascript" src="my-library.js"></script>
<script type="text/javascript">
  $.ajax('/lib.json', function(data) {
    // MyLibrary is available on window here
  });
</script>

Building a web app

A spine app might look something like:

// in app/main.js
var $ = require('jquery'),
    Spine = require('spine');

Spine.$ = $;

var MainController = require('./controllers/main-controller');

var ApplicationController = Spine.Controller.sub({
  init: function() {
    var main = new MainController();
    Spine.Route.add({
      '/': function() { main.active(); }
    });
  }
});

new ApplicationController();
Spine.Route.setup({ history: true });
It would be built with browserify -e app/main.js -o build/application.js and the application.js added to your website with a <script> tag.

You can extend browserify with plugins like templatify, which precompiles HTML/Handlebar templates into your app.

Together, npm packages, command-line testing and build tools, and modular code organization help you quickly build non-trivial JavaScript libraries and applications just as easily as it was in Ruby land. I’ve developed several in-production projects using this workflow, such as our CM1 JavaScript client library and our flight search browser plugin, among others.

Split XML files with `sgrep`, a classic UNIX utility from 1995

sgrep is better than split or csplit for breaking up XML files by element – you can even use it to create a constant-memory streaming “parser.”

$ sgrep -o "XXXSTART%rSTOPXXX" '"<TourismEntity" .. "</TourismEntity"' transmission_file.xml
    <State>New York</State>
    <Saying>Is For Lovers</Saying>
    <Saying>America's Dairyland</Saying>

(see below for why that output is useful)


sgrep and a simple Ruby program (given below) let you stream XML elements into an #emit method that can do whatever you want. What’s more, the memory usage is constant (and small); it doesn’t grow as it would if you parsed the entire XML document into memory with nokogiri.

Using sgrep to split XML

Combine sgrep with, for example, a Ruby program:

#!/usr/bin/env ruby

# your target element here
ELEMENT_START = '<TourismEntity'
ELEMENT_STOP = '</TourismEntity'

# magic tokens that mark element boundaries in sgrep's output
MAGIC_START = 'XXXSTART'
MAGIC_STOP = 'STOPXXX'

# your emit code here - in this case I'm just writing it to a separate file named tourism_entity-NUM.txt
def emit(tourism_entity)
  $tourism_entity_count ||= 0
  $tourism_entity_count += 1
  File.open("tourism_entity-#{$tourism_entity_count}.txt", 'w') { |f| f.write tourism_entity }
end

SGREP_BIN = %w{ sgrep sgrep2 }.detect { |bin| `which #{bin}`; $?.success? }

leftover = ''
IO.popen([ SGREP_BIN, '-n', '-o', "#{MAGIC_START}%r#{MAGIC_STOP}", %{"#{ELEMENT_START}" .. "#{ELEMENT_STOP}"}, ARGV[0] ]) do |io|
  while additional = io.read(65536) # read the pipe in 64KB chunks
    buffer = leftover + additional
    while (start = buffer.index(MAGIC_START)) and (stop = buffer.index(MAGIC_STOP))
      element_body = buffer[(start + MAGIC_START.length)...stop] + '>'
      # what "emit" does is up to you
      emit element_body
      buffer = buffer[(stop + MAGIC_STOP.length)..-1]
    end
    leftover = buffer
  end
end

So let’s go back to the example, transmission_file.xml:

    <State>New York</State>
    <Saying>Is For Lovers</Saying>
    <Saying>America's Dairyland</Saying>

You will get:

$ ruby emit_tourism_entity.rb transmission_file.xml 
$ tail +1 tourism_entity-*
==> tourism_entity-1.txt <==
    <State>New York</State>
==> tourism_entity-2.txt <==
    <Saying>Is For Lovers</Saying>
==> tourism_entity-3.txt <==
    <Saying>America's Dairyland</Saying>

What’s happening is:

  1. Ruby spawns sgrep using a pipe
  2. sgrep spits out a stream of element bodies separated by “XXXSTART” and “STOPXXX” into the pipe
  3. Ruby reads from the pipe and watches for element bodies separated by the aforementioned magic tokens
  4. When Ruby sees a whole element body, it runs #emit

Why are you so amazed by this program from 1995?

Because just look at that beautiful syntax:

$ sgrep '"{" .. "}"' eval.c

And because memory usage is really low, and it’s really fast.

I have less than 100 elements and just want to split up the file

Both of these will break up the XML file into separate files without the need for a Ruby wrapper:

$ split -p '<TourismEntity' transmission_file.xml
$ csplit -s -k transmission_file.xml '/<TourismEntity/' '{100}'

But there are little problems with this approach: for example, you max out at 100 separate files (i.e. elements).

Upsert for MySQL, PostgreSQL, and SQLite3 (and Ruby)

Our upsert library for Ruby gives you NoSQL-like upsert functionality in traditional RDBMS databases. How?

50%–80% faster than ActiveRecord

New in 0.4.0: When used in PostgreSQL mode, database functions are re-used, so you don’t have to be in batch mode to get the speed advantage.

You don’t need ActiveRecord to use it, but it’s benchmarked against ActiveRecord and found to be 50% to 80% faster than traditional techniques for emulating upsert:

# postgresql (pg library)
Upsert was 78% faster than find + new/set/save
Upsert was 78% faster than find_or_create + update_attributes
Upsert was 88% faster than create + rescue/find/update

# mysql (mysql2 library)
Upsert was 46% faster than find + new/set/save
Upsert was 63% faster than find_or_create + update_attributes
Upsert was 74% faster than create + rescue/find/update
Upsert was 28% faster than faking upserts with activerecord-import (which uses ON DUPLICATE KEY UPDATE)

# sqlite3
Upsert was 72% faster than find + new/set/save
Upsert was 74% faster than find_or_create + update_attributes
Upsert was 83% faster than create + rescue/find/update

(run the tests on your own machine to get these benchmarks)

What is a selector? What is a document?

upsert was inspired by the MongoDB upsert method – AKA mongo-ruby-driver’s update method – and involves a “selector” (how to find the row to be inserted or updated) and a “document” (attributes that should be set once the record has been found).

Example 1

  • Selector: :name => 'Jerry'
  • Document: :age => 5
  • Expression: upsert.row({:name => 'Jerry'}, :age => 5)

Example 2

  • Selector: :id => 45
  • Document: :updated_at => Time.now
  • Expression: upsert.row({:id => 45}, :updated_at => Time.now)
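The selector/document semantics can be sketched in plain Ruby against an in-memory table of hashes. This is an illustration only — a hypothetical upsert_row helper, not the library’s implementation, which does the equivalent in a single SQL round trip:

```ruby
# Semantic sketch of upsert.row(selector, document):
# the selector finds or creates the row, and the document
# sets attributes on whichever row was found.
def upsert_row(table, selector, document)
  row = table.find { |r| selector.all? { |k, v| r[k] == v } }
  table << (row = selector.dup) if row.nil?
  row.merge!(document)
  row
end

pets = []
upsert_row(pets, {:name => 'Jerry'}, {:age => 5})  # inserts a new row
upsert_row(pets, {:name => 'Jerry'}, {:age => 6})  # updates it in place
# pets is now [{:name => 'Jerry', :age => 6}]
```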

Unfortunately, you currently can’t do things like :counter => 'counter + 1'.


One record at a time:

require 'upsert'

connection = Mysql2::Client.new([...])
upsert = Upsert.new connection, 'pets'
upsert.row({:name => 'Jerry'}, :breed => 'beagle')

With ActiveRecord helper: (first require 'upsert/active_record_upsert')

Pet.upsert({:name => 'Jerry'}, :breed => 'beagle')

In batch mode, which is the fastest:

connection = Mysql2::Client.new([...])
Upsert.batch(connection, 'pets') do |upsert|
  upsert.row({:name => 'Jerry'}, :breed => 'beagle')
  upsert.row({:name => 'Pierre'}, :breed => 'tabby')
end

Graphite and statsd – beyond the basics

The Graphite and statsd systems have been popular choices lately for recording system statistics, but there isn’t much written beyond how to get the basic system set up. Here are a few tips that will make your life easier.

Graphite + statsd - the rundown

The graphite and statsd system consists of three main applications:

  • carbon: a service that receives and stores statistics
  • statsd: a node server that provides an easier and more performant, UDP-based protocol for receiving stats which are passed off to carbon
  • graphite: a web app that creates graphs out of the statistics recorded by carbon

Use graphiti

Several alternative front-ends to graphite have been written. I chose to use graphiti because it had the most customizable graphs. Note that graphiti is just a facade on top of graphite - you still need the graphite web app running for it to work. Graphiti makes it easy to quickly create graphs. I’ll cover this later.

The flow looks like:

|App| ==[UDP]==> |statsd| ==> |carbon| ==> |.wsp file|

|.wsp file| ==> |graphite| ==> |graphiti| ==> |pretty graphs on your dashboard|

Use chef-solo to install it

If you’re familiar with chef, you can use the cookbooks that the community has already developed for installing graphite and friends. If not, this would be a good opportunity to learn. You can use chef-solo to easily deploy graphite to a single server. I plan to write a “getting started with chef-solo” post soon, so stay tuned!

Chef saved me a ton of time setting up python, virtualenv, graphite, carbon, whisper, statsd, and many other tools since there are no OS-specific packages for some of these.

Use sensible storage schemas

The default chef setup of graphite stores all stats with the following storage schema rule:

priority = 0
pattern = ^.*
retentions = 60:100800,900:63000

The retentions setting is the most important. It’s a comma-delimited list of data resolutions and amounts.

  • The number before the colon is the size of the bucket that holds data, in seconds. A value of 60 means that 60 seconds worth of data is grouped together in the bucket. A larger number means the data is less granular, but more space efficient.
  • The number after the colon is the number of data buckets to store at that granularity. 100800 will cover (100800 * 60) = 70 days of data. That’s (100800 * 12) = 1.2MiB of space for those 70 days. A bigger number means more disk space and longer seek times.

Alternatively, you can specify retentions using time format shortcuts. For example, 1m:7d means “store 7 days worth of 1-minute granular data.”
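For instance, a storage-schemas.conf rule using those shortcuts might look like this (the my_app namespace and tier choices are hypothetical) — 1-minute data for a week, thinning to 10-minute data for 5 years:

```
[my_app]
priority = 1
pattern = ^stats\.my_app\.
retentions = 1m:7d,10m:5y
```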

Use a good stats client

In the Ruby world, there are two popular client libraries: fozzie and statsd-ruby. Both provide the standard operations like counting events, timing, and gauging values.

Fozzie differs in that it integrates with Rails or rack apps by adding a rack middleware that automatically tracks timing statistics for every path in your web app. This can save time, but it also has the downside of sending too much noise to your statsd server and can cause excessive disk space consumption unless you implement tight storage schema rules. It also adds a deep hierarchy of namespaces based on the client machine name, app name, and current environment. This can be an issue on heroku web apps where the machine name changes frequently.

If you want more control over your namespacing, statsd-ruby is the way to go. Otherwise, fozzie may be worth using for its added conveniences.

Make sure you don’t run out of disk space

Seriously, if you do run out of disk, the graphite (whisper) data files can become corrupted and force you to delete them and start over. I learned this the hard way :) Make sure your storage schemas are strict enough because each separate stat requires its own file that can be several megabytes in size.

Use graphiti for building graphs and dashboards

Graphiti has a great interface for building graphs. You can even fork it and deploy your own custom version that fits your company’s needs and/or style. It’s a small rack app that uses redis to store graph and dashboard settings. There’s even a chef cookbook for it!

When setting up graphiti, remember to set up a cron job to run rake graphiti:metrics periodically so that you can search for metric namespaces from graphiti.

Use graphite’s built-in functions for summarizing and calculating data

Graphite provides a wealth of functions that run aggregate operations on data before it is graphed.

For example, let’s say we’re tracking hit counts on our app’s home page. We’re using several web servers for load balancing and our stats data is namespaced by server under stats.my_app.server-a.production.home-page.hits and stats.my_app.server-b.production.home-page.hits. If we told graphite to graph results for stats.my_app.*.production.home-page.hits we would get two graph lines – one for server-a and one for server-b. To combine them into a single measurement, use the sumSeries function. You can then use the alias function to give it a friendlier display name like “Home page.”
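Outside graphiti, the same combination can be written directly in graphite's target syntax, with functions nested around the series name:

```
alias(sumSeries(stats.my_app.*.production.home-page.hits), "Home page")
```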

Graphiti has a peculiar way of specifying which function to use. In a normal series list, you have the following structure:

"targets": [
  ["stats.my_app.*.production.home-page.hits", {}]
]

The {} is an object used to specify the list of functions to apply, in order, on the series specified in the parent array. Each graphite function is specified as a key and its parameters as the value. A true value indicates the function needs no parameters and an array is provided if the function requires multiple parameters.

You’ll notice in the function documentation that each function usually takes two initial arguments, a context and a series name. In graphiti, you won’t need to specify those first two arguments.

Here’s an example of sumSeries and alias used together. Note that the order matters!

"targets": [
  ["stats.my_app.*.production.home-page.hits", {
    "sumSeries": true,
    "alias": "Homepage hits"
  }]
]
Different graph areaMode for different applications

While not well documented, graphite has a few options for displaying graph lines, controlled by the areaMode parameter. The “stacked” area mode stacks each measurement on top of the others into an area chart, combining multiple measurements as uniquely shaded bands. This can be good for seeing grand totals. Leaving areaMode blank (“none”) plots each measurement as a line on a line chart. This is preferable for comparing measurements.
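These correspond to the areaMode query parameter of graphite's render API (valid values include none, first, all, and stacked). For example, using the series from earlier:

```
# lines, for comparing the two servers
/render?target=stats.my_app.server-a.production.home-page.hits&target=stats.my_app.server-b.production.home-page.hits

# shaded, stacked areas, for seeing the grand total
/render?target=stats.my_app.server-a.production.home-page.hits&target=stats.my_app.server-b.production.home-page.hits&areaMode=stacked
```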

Different metrics for different events

Each stats recording method provided by statsd-ruby and fozzie has different behavior, which isn’t well documented anywhere.

  • Stats.count is the base method for sending a count of some event for a given instance. It’s rarely used alone.
  • Stats.increment and Stats.decrement will adjust a count of an event. It’s useful for counting things like number of hits on a page, number of times an activity occurs, etc. It will be graphed as “average number of events per second”. So if your web app runs Stats.increment 'hits' 8 times over a 1 second period, the graph will draw a value of 8 for that second. Sometimes you will see fractional numbers charted. This is because graphite may average the data over a time period based on your schema storage settings and charting resolution.
  • Stats.timing will take a block and store the amount of time the code in the block took to execute. It also keeps track of average, min, and max times, as well as standard deviation and total number of occurrences.
  • Stats.gauge tracks absolute values over time. This is useful for tracking measurements like CPU, memory, and disk usage.
  • Fozzie provides Stats.event 'party' to track when an event happens. This is useful for tracking things like deploys or restarts. Equivalent functionality can be obtained in statsd-ruby by running Stats.count 'party', 1.
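
Under the hood, all of these methods serialize to statsd’s tiny UDP text protocol ("name:value|type"). A minimal sketch, with hypothetical metric names, of what ends up on the wire:

```ruby
require 'socket'

# Format a statsd packet: counters are "|c", timers "|ms", gauges "|g".
def statsd_packet(name, value, type)
  "#{name}:#{value}|#{type}"
end

counter = statsd_packet('hits', 1, 'c')      # what Stats.increment 'hits' sends
timer   = statsd_packet('render', 42, 'ms')  # what Stats.timing records, in milliseconds
gauge   = statsd_packet('cpu', 87, 'g')      # what Stats.gauge 'cpu', 87 sends

# statsd clients fire these off over UDP and never wait for a reply.
sock = UDPSocket.new
[counter, timer, gauge].each { |pkt| sock.send(pkt, 0, '127.0.0.1', 8125) }
```

Because it’s fire-and-forget UDP, instrumenting hot paths costs almost nothing even when no statsd server is listening.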

Bonus tip: Graphs on your Mac dashboard

If you’re using a Mac, you can add your favorite graphs to your Dashboard. Create a graph in graphiti, then view it on the graphiti dashboard in Safari. Click File->Open in Dashboard… and select the graph image with the selection box. Now you can quickly see important graphs at the press of a button!

Overall, statsd is a great tool and can add great visibility into your applications.

Analyze CREATE TABLE SQL with pure Ruby

You can use the new create_table library to analyze and inspect CREATE TABLE statements (what is the primary key? what are the column data types? what are the defaults?). You can also generate equivalent SQL for different databases.

>> require 'create_table'
=> true
>> c = CreateTable.new(%{
  CREATE TABLE employees
  (employeeid INTEGER NOT NULL,
  lastname VARCHAR(25) NOT NULL,
  firstname VARCHAR(25) NOT NULL,
  reportsto INTEGER NULL)
})
=> #<CreateTable>
>> c.columns.map(&:name)
=> ["employeeid", "lastname", "firstname", "reportsto"]
>> c.columns.map(&:allow_null)
=> [false, false, false, true]

(grabbed that example from the entry on CREATE TABLE SQL, thanks!)

Uses Ragel for parsing

The library uses Ragel internally for parsing.

Check out the column parser code, for example.

Translates among MySQL, PostgreSQL, and SQLite3

Early versions target MySQL, PostgreSQL, and SQLite.

>> require 'create_table'
=> true
>> c = CreateTable.new(%{
  CREATE TABLE cats (
    id INTEGER AUTO_INCREMENT, /* AUTO_INCREMENT with an underscore is MySQL-style... */
    nickname CHARACTER VARYING(255),
    birthday DATE,
    license_id INTEGER,
    price NUMERIC(5,2),
    PRIMARY KEY ("id")
  )
})
=> #<CreateTable>
>> c.to_mysql
=> ["CREATE TABLE cats ( `id` INTEGER PRIMARY KEY AUTO_INCREMENT, nickname CHARACTER VARYING(255), birthday DATE, license_id INTEGER, price NUMERIC(5,2) )"]
>> c.to_postgresql
=> ["CREATE TABLE cats ( \"id\" SERIAL PRIMARY KEY, nickname CHARACTER VARYING(255), birthday DATE, license_id INTEGER, price NUMERIC(5,2) )"]
>> c.to_sqlite3
=> ["CREATE TABLE cats ( \"id\" INTEGER PRIMARY KEY AUTOINCREMENT, nickname CHARACTER VARYING(255), birthday DATE, license_id INTEGER, price NUMERIC(5,2) )"]

Obviously there’s a web service

You can POST statements to the web service and get the results back as JSON:

$ curl -i -X POST -H "Accept: application/json" --data "CREATE TABLE cats ( id INTEGER AUTO_INCREMENT, nickname CHARACTER VARYING(255), birthday DATE, license_id INTEGER, price NUMERIC(5,2), PRIMARY KEY (\"id\") )"
HTTP/1.1 201 Created
Cache-Control: max-age=0, private, must-revalidate
Content-Type: application/json; charset=utf-8
Date: Fri, 24 Aug 2012 22:24:52 GMT
Etag: "f13513b9126eb1fb909229e828c6a7cd"
Server: thin 1.4.1 codename Chromeo
X-Rack-Cache: invalidate, pass
X-Runtime: 0.051092
X-Ua-Compatible: IE=Edge,chrome=1
Content-Length: 1490
Connection: keep-alive

{"statement":{"original":"CREATE TABLE cats ( id INTEGER AUTO_INCREMENT, nickname CHARACTER VARYING(255), birthday DATE, license_id INTEGER, price NUMERIC(5,2), PRIMARY KEY (\"id\") )","mysql":"CREATE TABLE cats ( `id` INTEGER PRIMARY KEY AUTO_INCREMENT, nickname CHARACTER VARYING(255), birthday DATE, license_id INTEGER, price NUMERIC(5,2) )","postgresql":"CREATE TABLE cats ( \"id\" SERIAL PRIMARY KEY, nickname CHARACTER VARYING(255), birthday DATE, license_id INTEGER, price NUMERIC(5,2) )","sqlite3":"CREATE TABLE cats ( \"id\" INTEGER PRIMARY KEY AUTOINCREMENT, nickname CHARACTER VARYING(255), birthday DATE, license_id INTEGER, price NUMERIC(5,2) )","columns":[{"name":"id","data_type":"INTEGER","allow_null":false,"default":null,"primary_key":true,"unique":true,"autoincrement":true,"charset":null,"collate":null},{"name":"nickname","data_type":"CHARACTER VARYING(255)","allow_null":true,"default":null,"primary_key":false,"unique":false,"autoincrement":false,"charset":null,"collate":null},{"name":"birthday","data_type":"DATE","allow_null":true,"default":null,"primary_key":false,"unique":false,"autoincrement":false,"charset":null,"collate":null},{"name":"license_id","data_type":"INTEGER","allow_null":true,"default":null,"primary_key":false,"unique":false,"autoincrement":false,"charset":null,"collate":null},{"name":"price","data_type":"NUMERIC(5,2)","allow_null":true,"default":null,"primary_key":false,"unique":false,"autoincrement":false,"charset":null,"collate":null}]}}

Submissions are recorded so that you can add errata to them—like this one.

Madison Ruby Conf 2012 Healthy Hacker Fun Run

Thanks to those who showed up for the Fun Run! Photos coming soon.

How to parse quotes in Ragel (and Ruby)

The key to parsing quotes in Ragel is ( [^'\\] | /\\./ )*, as found in the rlscan example. Think of it as ( not_quote_or_escape | escaped_something )*.
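
The same loop ports directly to a Ruby regexp, which is a handy way to sanity-check the pattern before writing the Ragel version:

```ruby
# The quoted-string machine as a regexp: (?:[^"\\]|\\.) means
# "any char except quote or backslash, OR a backslash plus anything".
QUOTED = /"(?:[^"\\]|\\.)*"/

text = 'say "a\"bc" and "done"'
matches = text.scan(QUOTED)
# The escaped quote stays inside the first match rather than ending it.
puts matches.length  # 2
```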

Making it work with single and double quotes

Here’s the heart of a working example that covers both single and double quotes:

  machine not_scanner;

  action Start {
    s = p
  }

  action Stop {
    quoted_text = data[s...p].pack('c*')
    # do something with the quoted text!
  }

  squote = "'";
  dquote = '"';
  not_squote_or_escape = [^'\\];
  not_dquote_or_escape = [^"\\];
  escaped_something = /\\./;
  ss = space* squote ( not_squote_or_escape | escaped_something )* >Start %Stop squote;
  dd = space* dquote ( not_dquote_or_escape | escaped_something )* >Start %Stop dquote;

  main := (ss | dd)*;

Why does it work?

Use this example string:

"a\"bc"
Follow it on the graph: (notice the symmetry… the “top” processes double quotes and the “bottom” processes single quotes)

thumbnail of the graph of the state machine

… tl;dr …

   "      a     \     "     b      c      "
➇  →  ➁  →  ➂  →  ➃  →  ➂  →  ➂  →  ➂  →  ➇

State ➃ is eating the escaped double quote and therefore preventing the machine from stopping—that’s the key!

You can also do it with a scanner

Here’s what you would do in a scanner:

  machine scanner;

  action GotOne {
    quoted_text = data[(ts+1)...(te-1)].pack('c*')
    # do something with quoted text!
  }

  squote = "'";
  dquote = '"';
  not_squote_or_escape = [^'\\];
  not_dquote_or_escape = [^"\\];
  escaped_something = /\\./;

  main := |*
    squote ( not_squote_or_escape | escaped_something )* squote => GotOne;
    dquote ( not_dquote_or_escape | escaped_something )* dquote => GotOne;
  *|;

Simple, clean reports in Ruby

Our report library for Ruby is the shortest path between

mysql> select * from employees;
| id | first_name | last_name | salary  | birthdate  | role    |
|  1 | Deirdre    | Irish     |   45000 | 1960-09-10 | Liaison |
|  2 | Gregor     | German    | 16000.5 | 1950-09-09 | Tech    |
|  3 | Spence     | Scot      |    5000 | 1955-12-11 | Joker   |
|  4 | Vincent    | French    | 8000.99 | 1947-04-17 | Fixer   |
|  5 | Sam        | American  | 16000.5 | 1930-04-02 | Planner |

and simple, clean reports like

screenshot of the .xlsx version of the TPS report

90% of the way by default

Did you notice these little details?

  1. Business-class typography: Arial 10pt, left-aligned text and dates, right-aligned numbers and currency
  2. Auto-fit to contents: always enabled
  3. Autofilters: always added to your column headers
  4. Freeze pane: always frozen beneath your column headers

Here’s the code that generated it:

class Tps < Report
  table 'Hierarchy' do
    head do
      row 'TPS code', :code
      row 'Date', :date
      row 'Section', 'Hierarchy'
    end
    body do
      rows :employees, ['last_name ASC']
      column('Full name') { first_name + ' ' + last_name }
      column 'Role'
      column 'Salary', :type => :Currency
    end
  end
  table 'Seniority' do
    head do
      row 'TPS code', :code
      row 'Date', :date
      row 'Section', 'Seniority'
    end
    body do
      rows :employees, ['birthdate DESC']
      column('Full name') { first_name + ' ' + last_name }
      column 'Birthdate'
      column 'Over 70?'
    end
  end
  attr_reader :code
  def initialize(code)
    @code = code
  end
  def employees(order)
    Employee.order(order).each { |employee| yield employee }
  end
  def date
    Date.today.to_s
  end
end

And here’s how you generate it:

>> t = Tps.new('ABC123')
=> #<Tps:0x00000101426998 @code="ABC123">
>> t.xlsx.path
  Employee Load (36.1ms)  SELECT `employees`.* FROM `employees` ORDER BY last_name ASC
  Employee Load (0.3ms)  SELECT `employees`.* FROM `employees` ORDER BY birthdate DESC
=> "/var/folders/Nm/Nm0HFsHAH90Zyt7NLDUZUE+++TI/-Tmp-/133538737_Report__Xlsx.xlsx"
>> t.pdf.path
  Employee Load (0.2ms)  SELECT `employees`.* FROM `employees` ORDER BY last_name ASC
  Employee Load (0.1ms)  SELECT `employees`.* FROM `employees` ORDER BY birthdate DESC
=> "/var/folders/Nm/Nm0HFsHAH90Zyt7NLDUZUE+++TI/-Tmp-/133548261_Report__Pdf.pdf"
>> t.csv.paths
  Employee Load (0.3ms)  SELECT `employees`.* FROM `employees` ORDER BY last_name ASC
  Employee Load (0.2ms)  SELECT `employees`.* FROM `employees` ORDER BY birthdate DESC
=> ["/var/folders/Nm/Nm0HFsHAH90Zyt7NLDUZUE+++TI/-Tmp-/133553761_Report__Csv__Table_Hierarchy.csv", "/var/folders/Nm/Nm0HFsHAH90Zyt7NLDUZUE+++TI/-Tmp-/133553767_Report__Csv__Table_Seniority.csv"]

You get paths to files in the tmp dir. Only the CSV output format gives you multiple files.

Three output formats: XLSX, PDF, and CSV

You’ve already seen the XLSX output format - it’s currently the most advanced.

The PDF output format starts each table on its own page:

screenshot of the .pdf version of the TPS report

The CSV output format puts each table into its own file:

TPS code,ABC123

Full name,Role,Salary
Sam American,Planner,16000.5
Vincent French,Fixer,8000.99
Gregor German,Tech,16000.5
Deirdre Irish,Liaison,45000.0
Spence Scot,Joker,5000.0


How the DSL works

See the following for a line-by-line analysis…

# don't forget to inherit from Report
class Tps < Report

  # this is the sheet name in excel
  table 'Hierarchy' do

    head do
      # calling Tps#code
      row 'TPS code', :code

      # calling Tps#date
      row 'Date', :date

      # no calls are made
      row 'Section', 'Hierarchy'
    end

    body do
      # Tps#employees('last_name ASC')
      rows :employees, ['last_name ASC']

      # instance_eval'ing the proc on Employee... getting Employee#first_name + ' ' + Employee#last_name
      column('Full name') { first_name + ' ' + last_name }

      # Employee#role
      column 'Role'

      # formatted as currency where available (currently only XLSX output)
      column 'Salary', :type => :Currency
    end
  end
end

You can apply formatting to the XLSX and PDF output formats according to what the underlying libraries support:

class Tps < Report
  # [...]

  # Where 72 = 1 inch
  format_pdf(
    :stamp => File.expand_path('../acme_letterhead/report_template_landscape.pdf', __FILE__),
    :body => { :width => (10*72), :header => true },
    :document => {
      :top_margin => 118,
      :right_margin => 36,
      :bottom_margin => 72,
      :left_margin => 36,
      :page_layout => :landscape
    }
  )

  # Whatever is supported by xlsx_writer
  format_xlsx do |xlsx|
    acme_logo = xlsx.add_image(File.expand_path('../acme_letterhead/acme_logo.emf', __FILE__), 118, 107)
    acme_logo.croptop = '11025f'
    acme_logo.cropleft = '9997f'
    brighterplanet_logo = xlsx.add_image(File.expand_path('../acme_letterhead/brighterplanet_logo.emf', __FILE__), 116, 36)
    xlsx.header.left.contents = acme_logo
    xlsx.header.right.contents = 'Corporate TPS Reporting Program'
    xlsx.footer.left.contents = 'Confidential'
    xlsx.footer.center.contents = [ 'Powered by ', brighterplanet_logo ]
    xlsx.footer.right.contents = :page_x_of_y
    xlsx.page_setup.top = 1.5
    xlsx.page_setup.header = 0
    xlsx.page_setup.footer = 0
  end
end


What’s next

  1. Finalize the DSL - do you like it?
  2. Finish documenting all the methods
  3. Make sure XLSX output format renders on all versions of Microsoft Office above 2007

The Green Button that could have been

What’s the first step in discovering efficiency opportunities? Data, data, data. That’s what we always say here at Brighter Planet, where we’re trying to compute our way to a more hopeful environmental future.

So I’m sure it won’t surprise you when I say that the key to the energy challenge also starts with data: how much we’re using and when and where we’re using it. Which makes it all the more poignant to write this critique of the much-lauded Green Button program, which ostensibly is all about opening up energy data.

The truth is that Green Button, as a government advocacy program, has not succeeded in unleashing the gold rush of energy efficiency magic we know is locked up in the heads of entrepreneurs. Developers aren’t building apps, consumers aren’t using them, and utilities aren’t playing ball. Frankly it’s hard to blame them. Luckily there’s a better way to do Green Button, using modern technology to truly empower energy consumers in a lasting, meaningful way. But first . . .

Back to the beginning

Green Button started as a gleam in U.S. CTO Aneesh Chopra’s eye. At the GridWeek2011 Smart Grid event, Aneesh challenged the energy industry to provide consumers with a “green button” you can click to download detailed energy use data. Here’s the bit from his keynote:

His oratory is certainly inspiring, and, had I been there, I would have been roused to standing applause. But now I can’t help but wonder if the seasoned (grizzled?) energy infrastructure veterans in the crowd immediately saw the flaws in the Green Button challenge.

I’ve met Aneesh twice (at Strata and then later at CleanWebHackathon) and both times said to myself, here is a sharp guy. In conversation he is refreshingly blunt and he clearly does not put up with nonsense.

So although Green Button is now deserving of this critique, I do not believe Aneesh intended for it to be the empty tease it is today. As he said himself in the keynote, “Nothing’ll be perfect on day one,” and so, in that spirit, I will explain where I think this simple promising concept went so completely wrong.

Buttons galore

If you rewind the video a bit, you’ll see that before Aneesh issues his challenge, he describes a program called Blue Button, launched at the Veterans Administration, that allows patients to download their medical history with the proverbial click of a blue button.

Using the VA program as inspiration was Green Button’s original sin. If you look at Blue Button promotional materials, you’ll find phrases like “human readable” and “easy-to-read.” Sure, we can program computers to interpret Blue Button files, but their primary purpose is to inform humans. Don’t believe me? Take a look at a sample file:

Allergy Name: Pollen
Date:         18 Mar 2010
Severity:     Mild    
Diagnosed:    Yes

Reaction: Watery eyes, itchy nose 

Comments: Took an over the counter antihistamine

Eyes watering yet?

A smart move

One thing Green Button got right was its data exchange format. Even though the data we’re talking about sounds simple (just a bunch of meter readings with timestamps) there has to be an agreed-upon way to serialize this data, and it turns out there’s an existing standard—NAESB’s ESPI—for doing just that.

Green Button’s adoption of ESPI was an excellent step in the right direction. The data files are really quite simple to parse and use with existing tools. It would have been all too easy to follow Blue Button too closely and specify a newline-delimited or otherwise “human readable” file format. Although who knows what your average human would be able to do with this:

2012-03-01 00:00 to 2012-03-01 00:15 0.302 kWh $0.01
2012-03-01 00:15 to 2012-03-01 00:30 0.302 kWh $0.01
2012-03-01 00:30 to 2012-03-01 00:45 0.302 kWh $0.01
2012-03-01 00:45 to 2012-03-01 01:00 0.302 kWh $0.01
2012-03-01 01:00 to 2012-03-01 01:15 0.302 kWh $0.01
2012-03-01 01:15 to 2012-03-01 01:30 0.302 kWh $0.01
2012-03-01 01:30 to 2012-03-01 01:45 0.302 kWh $0.01
2012-03-01 01:45 to 2012-03-01 02:00 0.302 kWh $0.01
2012-03-01 02:00 to 2012-03-01 02:15 0.302 kWh $0.01
2012-03-01 02:15 to 2012-03-01 02:30 0.302 kWh $0.01
2012-03-01 02:30 to 2012-03-01 02:45 0.302 kWh $0.01

Manual labor

With a human endpoint in mind, Blue Button’s “download this file to your computer and use it how you’d like” interaction model makes perfect sense. Patients will likely email the file to their doctor or print it out and bring it to an appointment.

But with energy data, where real value comes from continuous computer analysis, this model is truly inappropriate. Here’s what I, as an energy consumer, must do to leverage the Green Button program:

  1. Discover an interesting energy app
  2. Open a new tab
  3. Find my utility’s website
  4. Log in
  5. Find the green button (if it exists!) and click it
  6. Choose a place on my computer to save the file
  7. Go back to the first tab
  8. Click the app’s “upload” link
  9. Find the location where I saved my data file on my computer

I’ll admit this list is a bit verbose, but you get my point: it’s a bore. The energy app in step 1 is going to have to be incredibly interesting to drive a user through this interaction. And here’s the really awful part: you have to go through all of these steps every time you want your energy app to have access to new data.

Imagine if, in order to use your personal-finance app, you had to visit every one of your online banking accounts every morning, download exports of your transactions, and upload them to your profile. That’s what we’re dealing with when it comes to today’s Green Button.

To put it simply, this is just an unworkable model for developers for anything beyond experimentation. It’s ridiculous to expect your users to go through this process (repeatedly) to draw value from your application, which means a user base of approximately zero, which means a revenue stream of roughly the same amount.

I’ll be honest that my first reaction to hearing about the Apps for Energy contest was to be a bit insulted. Why should developers spend their valuable time building software on a platform that by its very nature precludes meaningful, lasting user relationships? Well, at least there were cash prizes.

A way forward

What we need is a system that allows us to tell our utility that it’s OK for an energy app we like to have ongoing access to our electricity usage data. This is how we’re able to allow apps like Tweetdeck to look at our Twitter feeds without downloading all of our tweets from Twitter and uploading them to Tweetdeck. It’s how we can add apps to our Facebook accounts that do cool things with our social data.

This paradigm—applications talking to other applications, about us, with our permission—is the enduring reward of Web 2.0, and it’s exactly what we need in energy land.

If we can achieve this, it won’t look much like today’s Green Button at all. We’d probably stick with ESPI for data exchange, but there wouldn’t even be a “green button” involved. (Recall that to add an app to a given platform, you start with the app, which sends you over to a “yes/no” authorization page on the platform.)

There’s already a time-tested, secure, best-practice way of doing this (OAuth) that will allow energy app developers to accept data in a universal format with a common authorization process, regardless of the user’s utility. All the developer will need is a database listing the OAuth endpoints for each utility in the country.
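
To make this concrete, here’s a minimal sketch of the developer experience such a registry would enable. Every name and URL below is hypothetical; the point is that one lookup table plus standard OAuth would cover any utility:

```ruby
# Hypothetical registry of per-utility OAuth authorization endpoints.
UTILITY_OAUTH = {
  'example-power'   => 'https://auth.example-power.example/oauth/authorize',
  'sample-electric' => 'https://auth.sample-electric.example/oauth/authorize'
}

# Build the URL that sends a user to their utility's "yes/no" page;
# after one approval, the app can pull ESPI data continuously.
def authorization_url(utility_slug, client_id, redirect_uri)
  base = UTILITY_OAUTH.fetch(utility_slug)
  "#{base}?response_type=code&client_id=#{client_id}&redirect_uri=#{redirect_uri}"
end

puts authorization_url('example-power', 'my-energy-app', 'https://app.example/callback')
```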

Making the case

And, of course, that’s the rub. Adoption by utilities of the Green Button program has been underwhelming to say the least: only five have actually posted the clickable green button to their sites. I don’t think we can ask each utility to voluntarily expose a usage data endpoint via OAuth authentication and expect a more enthusiastic response.

I know that Aneesh liked to fly a bit below the radar (alas he’s not CTO anymore), championing solutions that don’t require budget outlays, laws, executive orders, or political dogfights. But this is a place where we really need government direction: all U.S. electric utilities should be required to offer ESPI data at an OAuth endpoint by Jan. 1, 2014.


But of course we developers don’t like to sit still. Theoretically we could build a proxy system that automatically “logs in” to utility accounts, downloads Green Button data, and feeds this data to authorized applications on the user’s behalf.

This is what we did with Sparkwire, and boy was it a bear. We were only able, in fact, to produce a driver for one utility—PG&E—and for that we had to whip out the nuclear weapon of application integration: screen scraping. Behold:

What a mess. And we’d have to do this for each and every utility around the country that we want to support. But in the end, I suppose, that’s better than expecting every one of our users to go through the download/upload song-and-dance every time they wanted some new insight.

In conclusion

If we truly want to empower energy consumers, we must replace Green Button’s “download/upload” torture with a modern, OAuth-powered “app and platform” model. To do this, we either have to prevail upon utilities nationwide to get with the program, or we use a proxy like Sparkwire and write drivers for the 3,000-odd electric utilities around the country.

Either way, it’s daunting. But it’s gotta happen. Who’s in?


How incredible is it that they’re even thinking about something as obscure as energy use data exchange formats in the top advisory levels of the White House? Heady times!

My CompSust'12 Presentation

Here are the slides from my talk on our Flight emitter at CompSust’12.

Presentation slides

I basically gave a quick overview of the model and then summarized some findings from the Air Travel Carbon and Energy Efficiency paper we released last year.

People were surprised that so much variation in flight efficiency exists since airlines have a financial incentive to reduce fuel use. They were also interested in how much of the variation was due to seat configuration and wondered to what extent the average passenger would support increased efficiency at the cost of smaller seats.

Our Careplane browser plugin was a big hit, and a few people asked if we could extend it into an intermodal comparison tool.

I also presented this poster at the poster session.



I’m just back from the 3rd international conference on computational sustainability in Copenhagen, Denmark. A beautiful Scandinavian city, a jazz festival, and the latest on using machine learning and linear programming to develop intelligent energy management systems, integrate electric vehicles into the electricity grid, and optimize logistics - who could ask for more?

One of the things I liked was the conference’s size. At a bit less than 100 people it was big enough to have a variety of sessions but small enough that you could track down anyone you’d missed or had questions for. It was also a refreshing change to go to an academic conference as opposed to a professional one.

Some of the presentations I particularly enjoyed included:

Developing an intelligent system that automatically learns the characteristics of your home and your energy use patterns and then provides energy-saving tips (Nick Jennings).

A model that improves short-term local wind forecasts by analysing spatio-temporal correlations in forecast versus actual wind speed across all of Denmark (Julija Tastu).

Optimizing purchasing and distribution logistics for biomass power plants (Niels Kjeldsen).

Automating the repositioning of a shipping fleet (Kevin Tierney).

Predicting future vegetation cover in the arctic (Theo Damoulas).

All in all a great summary of how computational techniques are being applied to sustainability issues.

A Deep Dive Into the New Automobile Emitter

We’ve made some exciting changes to our automobile emitter. Let’s take a look!

All the fuels

We now support automobile variants like flex fuel and diesel vehicles. If you want to specify, for instance, a flex fuel Ford F-150, you can specify “FFV” as part of the model name, as in this query: FFV&year=2012

Alternatively, you can specify an automobile_fuel and the correct model will be used by CM1:

Alternative fuels like electricity, compressed natural gas (CNG), and hydrogen are also supported. One of the most difficult tasks facing anyone trying to calculate automobile emissions is converting the EPA’s miles-per-gallon rating to alternative fuel consumption, then converting that fuel use into emissions. We do all of this for you; all you need to tell us is distance travelled.

For electric cars, if you specify a country, we will use that country’s average emissions from electricity generation. Otherwise, a global average is used.

Activity Years

We now keep track of “activity years” which determine a range of years that an automobile is used. What this means is that when you say “I drove a Volkswagen Rabbit in 2010,” we can get the average emissions for a typical Rabbit that would have been in use in 2010.

Using distance-based emission factors

Previously, we had used per-unit-fuel emission factors like “2.3 kilograms of CO2 per liter of gasoline.” Now we follow the EPA’s GHG inventory methodology, which calculates the amount of CO2 per unit of distance. This better accounts for an automobile’s engine characteristics and air conditioning use.
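
To see the difference, here’s a quick sketch with illustrative numbers only (the real factors come from the EPA inventory methodology and our data):

```ruby
distance_km = 100.0

# Old, fuel-based approach: estimate fuel burned, then apply a per-liter
# factor (roughly 2.3 kg CO2 per liter of gasoline).
liters_per_km = 1 / 10.0                        # hypothetical 10 km/L car
fuel_based = distance_km * liters_per_km * 2.3  # about 23 kg CO2

# New, distance-based approach: one per-km factor that already folds in
# engine characteristics and air conditioning use.
kg_co2_per_km = 0.25                            # hypothetical factor
distance_based = distance_km * kg_co2_per_km    # about 25 kg CO2

puts fuel_based
puts distance_based
```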

As always, we are available for support in using CM1. If you’re on IRC, you can join us in the #brighterplanet room. You can also email us at or tweet us @brighterplanet.

Striving for a great API client

I wanted to take a moment to share some of the principles and technologies we used to build client libraries for our CM1 web service. As developers, we know how frustrating it can be to learn a new API, and we keep that in mind as we design our client libraries to spare others the same frustration.

Make the API client simple

Give first-time developers early wins by avoiding signups. Don’t waste developers’ time with weird object instantiation patterns. A good API is one that takes very little configuration and setup. If I want to run a query, I want to do it with as few lines of code as possible. With our carbon gem you can be up and running with a single function call:

result = Carbon.query('Flight', {
  :origin_airport => 'MSN',
  :destination_airport => 'ORD'
})
puts "Carbon for my cross country flight: #{result.carbon}"

No account setup is needed until you’re in production. Once you’re ready, you can sign up for an API key and set it:

Carbon.key = 'MyKeyABC'

Our JavaScript client works similarly:

var CM1 = require('cm1');

CM1.impacts('flight', {
  origin_airport: 'IAD',
  destination_airport: 'PDX'
},
function(err, impacts) {
  console.log('Carbon for my cross-country flight: ',
    impacts.carbon);
});
Another benefit of a simple API is that it’s easier to mock out when testing an application against it.

Craft well-written documentation

Document your client library’s code, README, and website. Never assume a new user is familiar with all of the terminology your API uses and explain it well. Note that the simpler your API is, the easier it’ll be to write documentation.

Ruby docs with YARDoc

You may be familiar with Ruby’s RDoc documentation generator. YARDoc is similar, but adds some handy directives that better format your documentation.

The @param directive - @param [<type>] <name> <description> - defines a parameter that a function accepts.

The @return directive - @return [<type>] - specifies the return value of the function.

The @overload directive - @overload <function>(<params>) - tells YARDoc that the function can be called with different method signatures.

Here’s an example from the carbon gem:

# @overload query(emitter, params)
#   The simplest form.
#   @param [String] emitter The { emitter name}.
#   @param [optional, Hash] params Characteristics like airline/airport/etc., your API key (if you didn't set it globally), timeframe, compliance, etc.
#   @return [Hashie::Mash] The API response, contained in an easy-to-use +Hashie::Mash+
# @overload query(obj)
#   Pass in a single query-able object.
#   @param [#as_impact_query] obj An object that responds to +#as_impact_query+, generally because you've declared {Carbon::ClassMethods#emit_as} on its parent class.
#   @return [Hashie::Mash] The API response, contained in an easy-to-use +Hashie::Mash+
def Carbon.query(*params)

And here’s the YARDoc output:

yardoc output sample

These directives are important for dynamically typed languages like Ruby.


Docco provides “narrative” style documentation that reads more like a manual than your typical API reference. It turns your comments into documentation on one side of the page, with the actual code on the other side. CM1.js uses Docco to great effect:

a docco sample


Having a dedicated web page for your client or API can be a big help. For example, our CM1 site, rather than being a brochure for our service, is a guide to using our API and an introduction to our language-specific API clients.

Eat your own dog food

It should go without saying, but by using your API for your own projects, you instantly become a constructive critic of your own work. The great benefit is that because you own the API, you get to change it if you don’t like it! This is particularly useful in early stages of API development.

Use VCR or other HTTP mocking libraries

VCR is a great testing tool that fakes out HTTP requests so that your tests run quickly while still running against real recorded responses. A nice feature of VCR is that you can configure it to refresh response data, say, every month, so you can verify that your client still works with your latest API.

Here’s an example from the carbon gem:

describe Carbon do
  describe '.query' do
    it "calculates flight impact" do
      VCR.use_cassette 'LAX->SFO flight', :record => :once do
        result = Carbon.query('Flight', {
          :origin_airport => 'LAX', :destination_airport => 'SFO'
        })
        result.decisions.carbon.object.value.should be_within(50).of(200)
      end
    end
  end
end

In JavaScript land, a tool called replay provides similar functionality.

Do the multithreading for them

It’s best to save developers the trouble of handling performance issues by providing a solution. This goes hand-in-hand with eating your own dog food. We took our own pattern of parallelizing CM1 requests and baked it into the carbon gem. Simply pass Carbon.query an array of calculations to perform, and we’ll use the amazing Celluloid gem to parallelize the requests. Celluloid provides a pool of threaded workers for this task.

The carbon gem first creates a Celluloid worker pool:

require 'celluloid'

module Carbon
  class QueryPool
    include Celluloid

    def perform(query)
      # ... execute the query ...
    end
  end
end
Then it hands out each query to workers in the pool:

queries.each do |query|
  pool.perform! query
end

This is a super-simple way to provide parallelism to your API users.
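
If you don’t want a Celluloid dependency, the same fan-out pattern is a sketch away with plain stdlib threads (method names and the worker count here are ours, not the gem’s):

```ruby
# Drain a queue of work items across a fixed pool of threads,
# preserving the order of results. (A sketch; the carbon gem itself
# uses Celluloid, not this.)
def parallel_map(items, workers = 4)
  queue = Queue.new
  items.each_with_index { |item, i| queue << [item, i] }
  results = Array.new(items.size)
  threads = Array.new(workers) do
    Thread.new do
      begin
        loop do
          item, i = queue.pop(true) # non-blocking; raises ThreadError when drained
          results[i] = yield(item)
        end
      rescue ThreadError
        # queue is empty; let this worker exit
      end
    end
  end
  threads.each(&:join)
  results
end

p parallel_map([1, 2, 3, 4]) { |n| n * n }  # [1, 4, 9, 16]
```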

Make it asynchronous

An interesting trend among API providers has been the idea of providing a queued interface. This makes asynchronous processing much easier for developers and also takes some load off of your web servers. We even played around with an SQS-based client at one time with our carbon gem. In the future, we could see a Socket.IO-based, asynchronous API for our JavaScript client.

Writing XLSX from Ruby

Our xlsx_writer library for Ruby lets you create spreadsheets compatible with Microsoft Excel 2007 and above.


  1. Essential cell types: general, currency, date, integer, float (decimal)
  2. Standardized formatting: Arial 10pt, left-aligned text and dates, right-aligned numbers and currency
  3. Auto-fit to contents: always enabled
  4. Autofilters: just give it a range of cells
  5. Header and footer print styles: margins, arbitrary text, page numbers, and vector logos (.emf)

Basic example

require 'xlsx_writer'
doc =
sheet1 = doc.add_sheet 'Sheet1'
sheet1.add_row ['header1', 'header2', 'header3']
sheet1.add_row ['a', 'b', 'c']
sheet1.add_row [1, 2, 3]
require 'fileutils'
FileUtils.mv doc.path, "/path/to/desired/location"

Advanced example

require 'xlsx_writer'
doc = XlsxWriter.new
sheet1 = doc.add_sheet("People")

# First add data...

sheet1.add_row([
  # ... header cells ...
  "Average citations per paper"
])
sheet1.add_row([
  Date.parse("July 31, 1912"),
  "Milton Friedman",
  "Economist / Statistician",
  {:type => :Currency, :value => 10_000},
  # ... final cell ...
])

# Then add autofilters and page styles...

sheet1.add_autofilter 'A1:E1'

# (To find your croptop and cropleft values, mock up the layout in Excel,
# then unzip the resulting .xlsx file and copy the .emf files and crop
# values from its XML.)

left_header_image = doc.add_image('image1.emf', 118, 107)
left_header_image.croptop = '11025f'
left_header_image.cropleft = '9997f'
center_footer_image = doc.add_image('image2.emf', 116, 36)

doc.page_setup.top = 1.5
doc.page_setup.header = 0
doc.page_setup.footer = 0
doc.header.right.contents = 'Corporate Reporting'
doc.footer.left.contents = 'Confidential'
doc.footer.right.contents = :page_x_of_y
doc.header.left.contents = left_header_image
doc.footer.center.contents = [ 'Powered by ', center_footer_image ]

# Finally you can generate the file.

require 'fileutils'
FileUtils.mv doc.path, 'myfile.xlsx'

# don't forget
doc.cleanup

Debugging utilities

The library comes with two scripts:

  1. unpack.rb: Takes an XLSX file, unzips it, and reformats the XML it contains to be more readable.
  2. repack.rb: Takes a directory, converts the XML files to DOS line endings, and zips it into out.xlsx.

They have been useful in the past for debugging Excel crashes. You create a file in xlsx_writer and also in Excel, then unpack both of them and do a line-by-line comparison of the XML files within using diff -r.
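The reformatting step that unpack.rb performs can be sketched with Ruby's stdlib REXML pretty-printer (the sample XML below is illustrative, not actual xlsx_writer output):

```ruby
require 'rexml/document'

# Excel writes its XML as one long compact line; pretty-printing it
# makes a line-by-line diff between two workbooks possible.
compact = '<sheet><row r="1"><c t="s"><v>0</v></c></row></sheet>'
doc = REXML::Document.new(compact)

formatter = REXML::Formatters::Pretty.new(2) # 2-space indentation
formatter.compact = true                     # keep text nodes inline
out = String.new
formatter.write(doc, out)
puts out
```

Once both files are unpacked this way, `diff -r` points straight at the element or attribute that makes Excel unhappy.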

Vote for Sparkwire!

We want YOU to vote for Sparkwire, our Apps for Energy challenge entry. It allows any app to access your Green Button data without you having to download it yourself from your utility.

Green what?

Green Button is a new standard, led by the Department of Energy, that allows anyone to download their energy usage data from their utility. The DOE started the Apps for Energy contest in April to help drive use of the new standard. Currently, 15 utilities serving 27 million homes have committed to providing Green Button downloads. With this data, there are many ways apps could help us save energy: from targeted conservation tips, to comparisons with neighbors, to verification of energy-efficiency investments, to rewards for greener behavior. Many companies and individuals have already developed great new energy apps to help all of us make sense of this data, reduce our energy use, and save money.

There is a catch, though: for these energy apps to work well, they have to be able to get our Green Button data without us having to log in and download the data every month/hour/minute. Our experience in helping individuals reduce their environmental impact has taught us that removing barriers to entry is critical. So we created our own app, Sparkwire, to easily share your Green Button data with other apps. You just provide Sparkwire with your utility site login and it does the rest. Any app connected to Sparkwire can pull in your latest data without any additional log-in or download. Don’t worry, your login info is encrypted and can only be used when an app you authorize requests access. This video demonstrates everything.

Please give us your support by voting today (and every day thereafter :)) at

2012 Fuel Economy Data

Another data update - you can now find the fuel economy and other info for all 2012 model-year automobiles sold in the US in our public data repository.

This data release took unusually long to process because we decided to improve the way we handle dual-fuel and alternative-fuel vehicles in light of their increasing popularity. We now store all EPA-reported fuel efficiencies for every vehicle. Using the Automobile or Automobile Trip impact models on CM1, you can specify a particular fuel with the ‘automobile fuel’ characteristic.
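As a rough sketch of what such a request could look like, here is one way to build the query string; note that the endpoint path and parameter names below are illustrative assumptions, not documented CM1 API:

```ruby
require 'uri'

# Hypothetical parameters for an Automobile impact estimate that pins
# the 'automobile fuel' characteristic to a specific fuel.
params = {
  'automobile_fuel[name]' => 'E85',
  'year' => 2012
}
uri = URI('http://impact.brighterplanet.com/automobiles.json')
uri.query = URI.encode_www_form(params)
puts uri
```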

Electric Utility Data

We’ve just added US electric utilities to our public data repository. Our current data includes which zip codes each utility serves (from the EIA) and whether the utility has committed to or already begun giving customers access to their own energy information through the Green Button program.

EcoHackNYC, spring 2012 edition

Last weekend we were proud to sponsor another round of the fantastic EcoHackNYC green hackathon event.

Photo of the hackathon

It’s always amazing to see another crop of apps for the environment emerge over the course of just one day. Personally, I worked with a team on CSA Builder, a tool for local food organizations to figure out where to establish CSA pick-up locations for maximum effect.

Looking forward to next time!

Copyright © 2003-2016 Brighter Planet, Inc. All Rights Reserved.