
Going to JSConf.eu

Posted on 1/10/09 by Felix Geisendörfer

Tim and I just got our tickets for the upcoming JSConf.eu in Berlin. It's almost sold out now, but in case you already have a ticket, let us know so we can hang out.

I'm very thrilled about the speaker lineup (Amy Hoy, Thomas Fuchs, John Resig, Dion Almaer & Ben Galbraith, Douglas Crockford, Ryan Dahl, Kevin Dangoor, Kyle Simpson, Steve Souders). It's especially exciting to see that Ryan Dahl has been selected as a speaker to talk about node.js, my new favourite open source project!

I will try my best to cover his talk, as well as some of the others, for the blog!

-- Felix Geisendörfer aka the_undefined


Streaming file uploads with node.js

Posted on 28/9/09 by Felix Geisendörfer

Update: I just updated the code so it works with node v0.1.18.

Not excited by hello world in node.js? No problem.

Let's say you are a startup focusing on upload technology and you want the maximum level of control for your file uploads. In our case that means having the ability to directly interact with the multipart data stream as it comes in (so we can abort the upload if something isn't right, which beats the hell out of letting the user wait an hour only to tell him afterwards that something went wrong).

Here is a complete example on how to accomplish this in node.js (you'll need the bleeding edge git version):

var http = require('http');
var multipart = require('multipart');
var sys = require('sys');

var server = http.createServer(function(req, res) {
  switch (req.uri.path) {
    case '/':
      display_form(req, res);
      break;
    case '/upload':
      upload_file(req, res);
      break;
    default:
      show_404(req, res);
      break;
  }
});
server.listen(8000);

function display_form(req, res) {
  res.sendHeader(200, {'Content-Type': 'text/html'});
  res.sendBody(
    '<form action="/upload" method="post" enctype="multipart/form-data">'+
    '<input type="file" name="upload-file">'+
    '<input type="submit" value="Upload">'+
    '</form>'
  );
  res.finish();
}

function upload_file(req, res) {
  req.setBodyEncoding('binary');

  var stream = new multipart.Stream(req);
  stream.addListener('part', function(part) {
    part.addListener('body', function(chunk) {
      var progress = (stream.bytesReceived / stream.bytesTotal * 100).toFixed(2);
      var mb = (stream.bytesTotal / 1024 / 1024).toFixed(1);

      sys.print("Uploading "+mb+"mb ("+progress+"%)\015");

      // chunk could be appended to a file if the uploaded file needs to be saved
    });
  });
  stream.addListener('complete', function() {
    res.sendHeader(200, {'Content-Type': 'text/plain'});
    res.sendBody('Thanks for playing!');
    res.finish();
    sys.puts("\n=> Done");
  });
}

function show_404(req, res) {
  res.sendHeader(404, {'Content-Type': 'text/plain'});
  res.sendBody('You r doing it rong!');
  res.finish();
}
The code is rather straightforward. First of all we include the multipart.js parser, a library that has just been added to node.js.

Next we create a server listening on port 8000 that dispatches incoming requests to one of our 3 functions: display_form, upload_file or show_404. Again, very straightforward.

display_form serves a very short (and invalid) piece of HTML that will render a file upload form with a submit button. You can get to it by running the example (via: node uploader.js) and pointing your browser to http://localhost:8000/.

upload_file kicks in as soon as you select a file and hit the submit button. It tells the request object to expect binary data and then passes the work on to the multipart Stream parser. The result is a new stream object that emits two kinds of events: 'part' and 'complete'. 'part' is emitted whenever a new element is found within the multipart stream; you can find all the information about it by looking at the first argument's headers property. In order to get the actual contents of this part we attach a 'body' listener to it, which gets called for each chunk of bytes getting uploaded. In our example we just use this event to render a progress indicator on the command line, but we could also append each chunk to a file, which would eventually become the entire file as uploaded from the browser. Finally, the 'complete' event sends a response to the browser indicating the file has been uploaded.
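To make the chunk-handling idea concrete, here is a minimal sketch in plain JavaScript (no node-specific APIs, and the function name makeUploadTracker is made up for illustration) of how a 'body' listener could accumulate chunks and compute the same progress figure the example prints:

```javascript
// Accumulates upload chunks and reports progress, mirroring the math
// in the 'body' listener above. Chunks are plain strings here; in the
// real server they would be binary data written to a file instead.
function makeUploadTracker(bytesTotal) {
  var chunks = [];
  var bytesReceived = 0;
  return {
    onChunk: function(chunk) {
      chunks.push(chunk);
      bytesReceived += chunk.length;
      // same formula as in the upload_file example
      return (bytesReceived / bytesTotal * 100).toFixed(2);
    },
    result: function() {
      return chunks.join(''); // with real files you'd append to disk
    }
  };
}

var tracker = makeUploadTracker(10);
tracker.onChunk('hello'); // "50.00"
tracker.onChunk('world'); // "100.00"
tracker.result();         // "helloworld"
```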

show_404 handles all unknown URLs by returning an error response.

As you can see, the entire process is pretty simple, yet gives you a ton of control. You can also easily use this technique to show an AJAX progress bar for the upload to your users. The multipart parser also works with non-HTTP requests: just pass a {boundary: '...'} option to the constructor and use stream.write() to feed it some data to parse. Check out the source of the parser if you're curious how it works internally.
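For a feel of what such a parser does with that boundary option, here is a deliberately simplified sketch in plain JavaScript. This is not the multipart.js implementation (which parses incrementally, chunk by chunk); it just shows the core idea of splitting a raw body on the boundary marker:

```javascript
// Splits a complete multipart body on its boundary and returns the
// individual parts (each still containing its own headers and body).
// A real streaming parser does this incrementally instead.
function splitMultipart(body, boundary) {
  var delimiter = '--' + boundary;
  return body
    .split(delimiter)
    .map(function(part) { return part.replace(/^\r\n|\r\n$/g, ''); })
    .filter(function(part) { return part && part !== '--'; });
}

var raw = '--abc\r\nHeader: x\r\n\r\nhello\r\n' +
          '--abc\r\nHeader: y\r\n\r\nworld\r\n' +
          '--abc--';
splitMultipart(raw, 'abc');
// [ 'Header: x\r\n\r\nhello', 'Header: y\r\n\r\nworld' ]
```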

-- Felix Geisendörfer aka the_undefined



Posted on 24/9/09 by Felix Geisendörfer

What happens if you take an insanely fast JavaScript engine and build an asynchronous I/O library around it? You get node.js, an up-and-coming project that brings you bloat-free server-side JavaScript.

This enables you to write any kind of "backend" service with just a few lines of JavaScript. But don't take my word for it, here is how a "Hello World" example looks in node.js:

node.http.createServer(function (req, res) {
  setTimeout(function () {
    res.sendHeader(200, {"Content-Type": "text/plain"});
    res.sendBody("Hello World");
    res.finish();
  }, 2000);
}).listen(8000);
puts("Server running at http://127.0.0.1:8000/");

This code creates a new http server and tells it to listen on port 8000. Whenever a request comes in, the closure function passed to createServer is fired. In this example we are waiting 2 seconds before sending a "Hello World" response back to the client.

However, during those 2 seconds the server will continue to accept new connections. That gives you a very scalable hello world server. Talking about performance: Ryan (the awesome guy behind node.js) has done some initial benchmarks which show how fast of a beast we are talking about here. To me the most notable aspect is the extremely low response times you can achieve with node.js (pretty much 0-1ms).
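The non-blocking behavior described above can be sketched in plain JavaScript (runnable in any modern JS runtime). The "requests" are simulated with a log array rather than real HTTP, and the timings are shortened, but the ordering is the point: work queued behind a timer doesn't stop new work from being handled.

```javascript
// While "request 1" waits on its timer (like the 2-second setTimeout
// above), the event loop keeps servicing new requests.
var log = [];

// request 1 arrives; its response is delayed
setTimeout(function() { log.push('response 1 sent'); }, 20);

// requests 2 and 3 arrive while request 1 is still waiting;
// they run immediately, not after the 20ms timer
log.push('request 2 handled');
log.push('request 3 handled');

setTimeout(function() {
  // by now the first timer has fired:
  // "request 2 handled, request 3 handled, response 1 sent"
  console.log(log.join(', '));
}, 40);
```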

Here is another chart from a req/sec benchmark among various server-side JS libs:

Now you may think this is all very nice and stuff, but what do you actually need it for? Well, it depends. There are a few people writing web frameworks for node.js, but at this point it would be rather adventurous to build a full-blown web application on top of node. However, node has incredible potential if you want to develop chat applications, check out this one for example. We are using node.js to run the backend for a project we'll share more details about in the future. One could also easily write one's own key-value store in node.js - let's say you'd like to have something like Memcached but with a REST interface.
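To illustrate that last idea, here is a hedged sketch of the "Memcached with a REST interface" concept: an in-memory store plus a dispatcher mapping HTTP-style method/path pairs to store operations. The names (handle, store) are made up for this example; in a real node server this logic would live inside the createServer callback.

```javascript
// In-memory key-value store with REST-style semantics:
// PUT stores a value, GET retrieves it, DELETE removes it.
var store = {};

function handle(method, path, body) {
  var key = path.slice(1); // '/greeting' -> 'greeting'
  switch (method) {
    case 'PUT':
      store[key] = body;
      return {status: 200, body: 'OK'};
    case 'GET':
      return (key in store)
        ? {status: 200, body: store[key]}
        : {status: 404, body: 'Not Found'};
    case 'DELETE':
      delete store[key];
      return {status: 200, body: 'OK'};
    default:
      return {status: 405, body: 'Method Not Allowed'};
  }
}

handle('PUT', '/greeting', 'hello'); // {status: 200, body: 'OK'}
handle('GET', '/greeting');          // {status: 200, body: 'hello'}
```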

Trying out node.js requires a Linux/Unix machine, but it's pretty much as simple as downloading the code and running:

./configure
make
make install

The documentation is pretty excellent and there are lots of friendly people on the mailing list to help if you have any problems.

To me the most exciting thing about node.js is its incredible potential to reunite the web development communities. You can have long debates about using Python, Ruby, PHP, Java, ASP or Perl for a web project, but nobody will debate that JavaScript is *the* language of the web. Now imagine the smartest people from all those communities creating code in a single language that works on your browser as well as your server ...

I'd love to hear your thoughts on node.js and any questions you might have. Just leave a comment!

-- Felix Geisendörfer aka the_undefined


Fixing non-atomic commits in git

Posted on 15/9/09 by Felix Geisendörfer

Let's say somebody else made a commit that mixes a bug fix and a new feature together. This sucks if you only want to take the bugfix to merge it into your stable branch (using git cherry-pick).

If you were using SVN you'd be screwed now. However, if you are using git you can actually clean this mess in an elegant and transparent fashion:

git revert --no-edit <sha1-of-bad-commit>
git revert --no-edit HEAD
git reset HEAD^
git add -p # or other commands to partially stage stuff from the current tree
git commit -m "Commit A"
git add -p
git commit -m "Commit B"

Let's look at this step by step:

git revert --no-edit <sha1-of-bad-commit>

This command creates a new commit that reverts the bad commit.

git revert --no-edit HEAD

The second 'revert' creates a new commit that basically redoes the bad commit. Why? Because now you have a "clone" of the bad commit sitting cleanly on top of your HEAD, waiting for you to be messed with. This is much better than using git reset to go back to it, because you are not deleting history (which is very likely to give you merge conflicts).

git reset HEAD^

The git reset takes you back to the moment before the "clone" commit was created, but still leaves all of its changes in your tree (= local file system). This essentially takes you back in time to the moment before your evil teammate hit the commit button and gives you the ability to yell "Nooooooo! Let me tell you about atomic commits, buddy".

Now you can use 'git add' or 'git add -p' to stage all your changes for the first atomic change and commit it. Repeat the process until you've split the bad commit into nice atomic pieces.

All that is left to do now is to explain to your teammate that the next time he accidentally makes a bad commit, he should run 'git reset HEAD^' right away. This way the problem can be fixed before 'git push' distributes the bad commit to the team.

Let me know if this makes sense or if you have any questions. I also accept "Git Challenges". That is, if you have a problem you'd like to know how to solve in Git, just let me know in the comments and I'll blog about it : ).

-- Felix Geisendörfer aka the_undefined


The open source business model

Posted on 12/9/09 by Felix Geisendörfer

Seth Godin explains it perfectly this morning:

You need to make something else abundant in order to gain attention. Then, and only then, will you be able to sell something that's naturally scarce.

Tim and I stumbled across this model by accident when we started our blogs a few years ago. Since then our business has been a 99% byproduct of the "free" stuff we are doing.

-- Felix Geisendörfer aka the_undefined
