
Expressive JavaScript: Node.js

by admin


A student asked: "Programmers in the old days used only simple computers and programmed without languages, but they made great programs. Why do we use complicated computers and programming languages?" Fu-Tzu answered: "Builders in the old days used only sticks and clay, but they made beautiful huts."
Master Yuan-Ma, "The Book of Programming"

At this point, you’ve been learning JavaScript and using it in a single environment: the browser. In this and the next chapter we will briefly introduce you to Node.js, a program that allows you to apply JavaScript skills outside the browser. You can write everything from command line utilities to dynamic HTTP servers with it.
These chapters are dedicated to teaching you the important ideas that make up Node.js and are intended to give you enough information to write useful programs in this environment. They do not attempt to be comprehensive guides to Node.
The code from the previous chapters could be written and run directly in a browser, but the code in this chapter is written for Node and will not work in a browser.
If you want to run the code in this chapter right away, start by installing Node from the nodejs.org website for your operating system. You’ll also find documentation on Node and its built-in modules on that site.

Introduction

One of the most difficult problems when writing systems that communicate over a network is handling input and output: reading and writing data to and from the network, the disk, and other devices. Moving data around takes time, and scheduling these activities intelligently can make a big difference in how quickly a system responds to the user or to network requests.
In the traditional approach to input and output, it is common for a function such as readFile to start reading a file and return only when the file has been read completely. This is called synchronous I/O (input/output).
Node was conceived to make asynchronous I/O easier and simpler to use. We have seen asynchronous interfaces before, such as the browser's XMLHttpRequest object discussed in Chapter 17. Such an interface allows the script to continue running while the interface does its work, and calls a callback function when it finishes. This is how all I/O works in Node.
JavaScript fits a system like Node well. It is one of the few programming languages that does not have a built-in I/O system, so JavaScript can be fitted onto Node's rather eccentric approach to I/O without ending up with two inconsistent input/output systems. In 2009, when Node was being designed, people were already doing callback-based I/O in the browser, so the community around the language was used to the asynchronous programming style.

Asynchrony

Let me try to illustrate the difference between synchronous and asynchronous approaches to I/O with a small example, where a program has to fetch two resources from the Internet and then do something with the data.
In a synchronous environment, the obvious way to solve the problem is to make the requests one after the other. This method has a disadvantage: the second request starts only after the first one has finished, so the total time taken is at least the sum of the two response times. This is an ineffective use of the machine, which will be idle most of the time while data is being transferred over the network.
The solution to this problem in a synchronous system is to start additional threads of control (we already discussed them in Chapter 14). A second thread can start the second request, and then both threads wait for their results to come back, after which they resynchronize to combine their results.
In the diagram, the bold lines represent the time the program spends running normally, and the thin lines represent time spent waiting for I/O. In the synchronous model, the time taken by I/O is part of the timeline of each thread. In the asynchronous model, starting an I/O action causes the timeline to branch: the thread that started the I/O continues running, and the I/O happens alongside it, calling a callback function when it finishes.
Program flow for synchronous and asynchronous I/O
Another way to express the difference: in the synchronous model, waiting for I/O to finish is implicit, while in the asynchronous model it is explicit and under our direct control. But asynchrony cuts both ways. It makes it easier to express programs that do not follow a straight line of control, but it makes it harder to express programs that do.
In chapter 17 I already touched on the fact that callbacks introduce a lot of noise and make the program less orderly. Whether this approach is generally a good idea is debatable. Either way, it takes time to get used to it.
But for a JavaScript-based system, I would say that using asynchrony with callbacks makes sense. One of the strengths of JavaScript is simplicity, and trying to add multiple threads to a program would lead to a lot of complexity. While callbacks don’t make the code simple, their idea is very simple and yet strong enough to write high-performance web servers.
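To make the asynchronous model concrete, here is a minimal sketch. It uses no real network code – getResource is a stand-in that simulates a slow I/O operation with setTimeout, and the URLs are made up – but it shows how two requests can be started at once and how a callback combines the results only when both have arrived.

function getResource(url, callback) {
  // Stand-in for real I/O: pretend the data arrives after half a second
  setTimeout(function() {
    callback("contents of " + url);
  }, 500);
}

var results = [], remaining = 2;
["http://example.com/one", "http://example.com/two"].forEach(function(url, i) {
  // Both requests are started immediately and run alongside each other
  getResource(url, function(data) {
    results[i] = data;
    remaining -= 1;
    if (remaining == 0) // runs only after both callbacks have fired
      console.log("Got both resources:", results.join(" | "));
  });
});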

The node command

When you have Node.js installed on your system, you get a program called node that runs JavaScript files. Say you have a file hello.js containing the following code:

var message = "Hello world";
console.log(message);

You can execute your program from the command line:

$ node hello.js
Hello world

The console.log method in Node acts the same way as in the browser. It outputs a piece of text. But in Node it outputs text to the standard output, not to the JavaScript console in the browser.
If you run node without giving it a file, it presents you with a prompt where you can type JavaScript code and immediately see the results.

$ node
> 1 + 1
2
> [-1, -2, -3].map(Math.abs)
[1, 2, 3]
> process.exit(0)
$

The process variable, like console, is available globally in Node. It provides several ways to inspect and manipulate the current program. The exit method ends the process and can be given an exit status code, which tells the program that started node (in this case, the command line shell) whether the program completed successfully (code zero) or encountered an error (any other code).
To access the command line arguments passed to your script, you can read process.argv, which is an array of strings. Note that it also includes the name of the node command and the name of your script, so the actual arguments start at index 2. If showargv.js contains only the statement console.log(process.argv), you can run it like this:

$ node showargv.js one --and two
["node", "/home/marijn/showargv.js", "one", "--and", "two"]
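
Combining the two, here is a small sketch of a script – call it greet.js, a made-up name – that reads an argument and uses an exit code to tell the shell whether it succeeded:

// greet.js – a made-up example combining process.argv and process.exit
if (process.argv.length < 3) {
  console.log("Usage: node greet.js <name>");
  process.exit(1); // non-zero exit status signals failure to the shell
}
console.log("Hello, " + process.argv[2]);
process.exit(0); // zero signals success (also the default when the script just ends)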

All the standard JavaScript global variables – Array, Math, JSON – are also in the Node environment. But there is no browser-related functionality, such as document or alert.
The global scope object, which is called window in the browser, has a more meaningful name in Node: global.

Modules

Apart from the few variables like console and process mentioned above, Node keeps little functionality in the global scope. You have to go to the module system to access the rest of the built-in features.
The CommonJS system, based on the require function, was described in Chapter 10. This system is built into Node and is used to load everything from built-in modules and downloaded libraries to files that are part of your program.
When you call require, Node has to resolve the given string to an actual file to load. Paths that start with "/", "./", or "../" are resolved relative to the current module's path: "./" stands for the current directory, "../" for the directory above it, and "/" for the root of the file system. If you ask for "./world/world" from the file /home/marijn/elife/run.js, Node will try to load /home/marijn/elife/world/world.js. The .js extension may be omitted.
When a string that does not look like a relative or absolute path is passed to require, it is assumed to refer either to a built-in module or to a module installed in a node_modules directory. For example, require("fs") will give you Node's built-in file system module, while require("elife") will try to load the library found in node_modules/elife/. A common way to install such libraries is with NPM, which I will come back to shortly.
To demonstrate, let's set up a small project consisting of two files. The first one is called main.js and defines a script that can be called from the command line to distort a string.

var garble = require("./garble");

// Index 2 contains the first command line argument of the program
var argument = process.argv[2];

console.log(garble(argument));

The garble.js file defines a string distortion library that can be used both by the previously defined command line program and by other scripts that need direct access to the garble function.

module.exports = function(string) {
  return string.split("").map(function(ch) {
    return String.fromCharCode(ch.charCodeAt(0) + 5);
  }).join("");
};

Replacing module.exports, rather than adding properties to it, allows us to export a specific value from a module. In this case, the result of requiring our module is the distortion function itself.
The function splits the string it is given into single characters, using split with an empty string, and then replaces each character with the character whose code is 5 points higher. Finally, it joins the result back into a string.
Now we can call our tool:

$ node main.js JavaScript
Of{fXhwnuy

Installation via NPM

NPM, mentioned in passing in Chapter 10, is an online repository of JavaScript modules, many of which are written specifically for Node. When you put Node on your computer, you get an npm program that gives you a handy interface to this repository.
For example, one of the modules on NPM is called figlet, and it converts text into "ASCII art" – pictures made up of text characters. Here's how to install it:

$ npm install figlet
npm GET https://registry.npmjs.org/figlet
npm 200 https://registry.npmjs.org/figlet
npm GET https://registry.npmjs.org/figlet/-/figlet-1.0.9.tgz
npm 200 https://registry.npmjs.org/figlet/-/figlet-1.0.9.tgz
figlet@1.0.9 node_modules/figlet
$ node
> var figlet = require("figlet");
> figlet.text("Hello world!", function(error, data) {
    if (error)
      console.error(error);
    else
      console.log(data);
  });
  _   _      _ _                            _     _ _
 | | | | ___| | | ___   __      _____  _ __| | __| | |
 | |_| |/ _ \ | |/ _ \  \ \ /\ / / _ \| '__| |/ _` | |
 |  _  |  __/ | | (_) |  \ V  V / (_) | |  | | (_| |_|
 |_| |_|\___|_|_|\___/    \_/\_/ \___/|_|  |_|\__,_(_)

After running npm install, NPM will have created a directory called node_modules. Inside it is a figlet directory containing the library. When we start node and call require("figlet"), the library is loaded, and we can call its text method to draw some big letters.
Interestingly, instead of simply returning the string that makes up the big letters, figlet.text takes a callback function to which it passes its result. It also passes another argument, error, which will hold an error object when something goes wrong and null on success.
This is the principle adopted throughout Node. To render the letters, figlet has to read the file containing the letter shapes from disk. Reading a file is an asynchronous operation in Node, so figlet.text cannot immediately return its result. Asynchrony is contagious: any function that calls an asynchronous function must itself become asynchronous.
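To illustrate how that spreads, here is a small sketch: bannerFor is a made-up helper that wraps figlet.text, and because figlet.text is asynchronous, bannerFor cannot simply return the banner – it has to take a callback too.

var figlet = require("figlet");

// Made-up wrapper: it cannot return the banner, only pass it to a callback
function bannerFor(name, callback) {
  figlet.text("Hello " + name + "!", function(error, banner) {
    callback(error, banner);
  });
}

bannerFor("world", function(error, banner) {
  if (error)
    console.error(error);
  else
    console.log(banner);
});
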
NPM is more than just npm install. It reads package.json files, which contain JSON-encoded information about a program or library, such as which other libraries it depends on. Running npm install in a directory that contains such a file will automatically install all the dependencies, as well as their dependencies. The npm tool is also used to publish libraries to NPM's online repository so that other people can find, download, and use them.
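For example, a minimal package.json for the string-distortion project might look something like this (the name, version, and description are invented for illustration):

{
  "name": "garble-cli",
  "version": "1.0.0",
  "description": "Command line string distorter",
  "dependencies": {
    "figlet": "1.0.9"
  }
}
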
We won’t go into any more details about using NPM. Go to npmjs.org for documentation and a simple library search.

File system module

One of the most commonly used built-in modules in Node is the "fs" module, which stands for "file system". It provides functions for working with files and directories.
For example, there is a function called readFile, which reads a file and then calls a callback with the file's contents.

var fs = require("fs");
fs.readFile("file.txt", "utf8", function(error, text) {
  if (error)
    throw error;
  console.log("The file contained:", text);
});

The second argument to readFile indicates the character encoding used to decode the file into a string. There are several ways in which text can be encoded to binary data, but most modern systems use UTF-8, so unless you have reason to believe another encoding is used, passing "utf8" when reading a text file is a safe bet. If you do not pass an encoding, Node will assume you are interested in the binary data and will give you a Buffer object instead of a string. This is an array-like object containing the bytes from the file.

var fs = require("fs");
fs.readFile("file.txt", function(error, buffer) {
  if (error)
    throw error;
  console.log("The file had", buffer.length, "bytes.",
              "The first byte is:", buffer[0]);
});

A similar function, writeFile, is used to write a file to disk.

var fs = require("fs");
fs.writeFile("graffiti.txt", "Node was here", function(err) {
  if (err)
    console.log("It didn't work, and here's why:", err);
  else
    console.log("File written successfully.");
});

You don't need to specify the encoding here, because writeFile assumes that if it is given a string to write, rather than a Buffer object, it should write it out as text using its default character encoding, UTF-8.
The "fs" module contains a lot of useful stuff: the readdir function returns a list of files in the directory as an array of strings, stat returns information about the file, rename renames the file, unlink deletes the file, etc. See the documentation at nodejs.org.
Many "fs" functions have both synchronous and asynchronous versions. For example, there is a synchronous variant of the readFile function called readFileSync.

var fs = require("fs");
console.log(fs.readFileSync("file.txt", "utf8"));

Synchronous functions require less ceremony and can be useful in simple scripts, where the responsiveness provided by asynchronous I/O is irrelevant. But note that while a synchronous operation is being performed, your program stops entirely. If it should be responding to user input or to other programs over the network, being stuck on synchronous I/O produces annoying delays.

HTTP module

Another central module is called "http". It provides functionality for running HTTP servers and making HTTP requests.
Here's everything you need to run a simple HTTP server:

var http = require("http");
var server = http.createServer(function(request, response) {
  response.writeHead(200, {"Content-Type": "text/html"});
  response.write("<h1>Hello!</h1><p>You have requested <code>" +
                 request.url + "</code></p>");
  response.end();
});
server.listen(8000);

If you run this script on your own machine, you can point your browser at localhost:8000/hello to make a request to your server. It will respond with a small HTML page.
The function passed as an argument to createServer is called whenever an attempt is made to connect to the server. The request and response variables are objects representing input and output data. The former contains the request information, for example the url property contains the request URL.
To send something back, the methods of the response object are used. The first, writeHead, writes out the response headers (see Chapter 17). You give it a status code (200 for "OK" in this case) and an object containing header values. Here we tell the client to expect an HTML document.
Then the response body (the document itself) is sent via response.write. This method can be called several times if you want to send the response in chunks, for example by streaming data as it arrives. Finally, response.end signals the end of the response.
The call to server.listen causes the server to start waiting for connections on port 8000. This is why you have to connect to localhost:8000 in your browser, rather than just localhost (which would use the default port, 80), to talk to this server.
To stop such a Node script, which doesn’t terminate automatically because it’s waiting for the next events (in this case, connections), you have to press Ctrl-C.
A real web server does a lot more than the example above. It looks at the request's method (the method property) to see what action the client is trying to perform and at the request's URL to find out which resource this action should be applied to. You will see a more advanced server later in this chapter.
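As a tiny preview of that idea, the following sketch (the path and the responses are invented) dispatches on the method and URL of each incoming request:

var http = require("http");
http.createServer(function(request, response) {
  if (request.method == "GET" && request.url == "/hello") {
    response.writeHead(200, {"Content-Type": "text/plain"});
    response.end("Hello!");
  } else {
    // Anything else is reported as missing
    response.writeHead(404, {"Content-Type": "text/plain"});
    response.end("Nothing here: " + request.url);
  }
}).listen(8000);
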
To act as an HTTP client, we can use the request function from the "http" module.

var http = require("http");
var request = http.request({
  hostname: "eloquentjavascript.net",
  path: "/20_node.html",
  method: "GET",
  headers: {Accept: "text/html"}
}, function(response) {
  console.log("The server responded with status code", response.statusCode);
});
request.end();

The first argument to request configures the request, telling Node which server to talk to, what path to request, which method to use, and so on. The second is the function that will be called when the response comes in. It is given an object that holds information about the response, such as its status code.
Just like the response object on the server, the object returned by request allows you to stream data into the request body with the write method and finish the request with the end method. The example does not use write because GET requests should not contain data in their request body.
For requests to secure URLs (HTTPS), Node offers the https module which has its own request function, similar to http.request.
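A minimal sketch of such an HTTPS request, analogous to the HTTP example above, might look like this:

var https = require("https");
https.request({
  hostname: "eloquentjavascript.net",
  path: "/20_node.html",
  method: "GET"
}, function(response) {
  console.log("HTTPS status code:", response.statusCode);
}).end();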

Streams

We have seen two examples of streams in the HTTP examples: the response object, which the server can write to, and the request object, which is returned from http.request.
Writable streams are a widespread concept in Node interfaces. All writable streams have a write method that can be passed a string or a Buffer object. Their end method closes the stream and, if given an argument, writes out that piece of data before closing. Both methods can also be given a callback function as an additional argument, which they will call when writing or closing has finished.
It is possible to create a writable stream that points at a file with the fs.createWriteStream function. Then you can use the stream's write method to write the file one chunk at a time, rather than all in one go as with fs.writeFile.
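For example, a minimal sketch of writing a file in several chunks (the file name and its contents are made up for illustration):

var fs = require("fs");

var output = fs.createWriteStream("numbers.txt");
for (var i = 1; i <= 5; i++)
  output.write(i + "\n"); // each call appends another chunk to the file
output.end(function() {
  console.log("Finished writing numbers.txt");
});
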
Readable streams are a little more involved. Both the request variable passed to the HTTP server's callback function and the response variable passed to the HTTP client's callback are readable streams. (A server reads requests and then writes responses, whereas a client first writes a request and then reads a response.) Reading from a stream is done using event handlers, rather than methods.
Objects that emit events in Node have an on method that is similar to the addEventListener method in the browser. You give it an event name and then a function, and it registers that function to be called whenever the given event occurs.
Readable streams have "data" and "end" events. The first is fired every time some data comes in, and the second is fired when the stream has ended. This model is best suited for streaming data, which can be processed immediately even when the whole document is not yet available. A file can be read as a readable stream by using the fs.createReadStream function.
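Here is a small sketch that reads a file as a stream and counts how many characters it contains (file.txt is assumed to exist, as in the earlier examples):

var fs = require("fs");

var input = fs.createReadStream("file.txt", {encoding: "utf8"});
var size = 0;
input.on("data", function(chunk) {
  size += chunk.length; // process each piece as soon as it arrives
});
input.on("end", function() {
  console.log("file.txt contains", size, "characters");
});
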
The following code creates a server that reads request bodies and sends them back as a stream of uppercase text.

var http = require("http");
http.createServer(function(request, response) {
  response.writeHead(200, {"Content-Type": "text/plain"});
  request.on("data", function(chunk) {
    response.write(chunk.toString().toUpperCase());
  });
  request.on("end", function() {
    response.end();
  });
}).listen(8000);

The chunk variable passed to the data handler will be a binary Buffer that can be converted into a string by calling its toString method, which decodes it from the default encoding (UTF-8).
The following code, running at the same time as the server, will send a request to the server and display the response received:

var http = require("http");
var request = http.request({
  hostname: "localhost",
  port: 8000,
  method: "POST"
}, function(response) {
  response.on("data", function(chunk) {
    process.stdout.write(chunk.toString());
  });
});
request.end("Hello server");

The example writes to process.stdout (the process's standard output, which is a writable stream) instead of using console.log. We can't use console.log because it adds an extra newline character after each piece of text it writes, which is not appropriate here.

Simple file server

Let’s combine our new knowledge of HTTP servers and working with the file system, and build a bridge between the two: an HTTP server that provides remote access to files. This server has many uses. It allows web applications to store and share data, or it can give a group of people access to a set of files.
When we treat files as HTTP resources, the GET, PUT and DELETE methods can be used to read, write and delete files. We will interpret the path in the request as the file path.
We don't want to share our whole file system, so we will interpret these paths as starting in the server's working directory, which is the directory it was started in. If I run the server from /home/marijn/public/ (or C:\Users\marijn\public\ on Windows), then a request for /file.txt should refer to /home/marijn/public/file.txt (or C:\Users\marijn\public\file.txt).
We’ll build the program incrementally, using the methods object to store the functions that handle the different HTTP methods.

var http = require("http"), fs = require("fs");

var methods = Object.create(null);

http.createServer(function(request, response) {
  function respond(code, body, type) {
    if (!type) type = "text/plain";
    response.writeHead(code, {"Content-Type": type});
    if (body && body.pipe)
      body.pipe(response);
    else
      response.end(body);
  }
  if (request.method in methods)
    methods[request.method](urlToPath(request.url),
                            respond, request);
  else
    respond(405, "Method " + request.method +
            " not allowed.");
}).listen(8000);

This code starts a server that only returns 405 errors, the code used to indicate that the requested method is not handled by the server.
The respond function is passed to the functions that handle the various methods and acts as a callback to finish the request. It takes an HTTP status code, a body, and optionally a content type as arguments. If the value passed as the body is a readable stream, it will have a pipe method, which is used to forward the readable stream to the writable response stream. If not, it is assumed to be either null (no body) or a string, in which case it is passed directly to the response's end method.
To get a path from the URL in the request, the urlToPath function parses the URL using Node's built-in "url" module. It takes the pathname, which will be something like /file.txt, decodes it to get rid of %20-style escape codes, and prefixes it with a dot to produce a path relative to the current directory.

function urlToPath(url) {
  var path = require("url").parse(url).pathname;
  return "." + decodeURIComponent(path);
}

Do you think the urlToPath function is insecure? You’re right. Let’s come back to that question in the exercises.
We will set up the GET method to return a list of files when reading a directory and the file's content when reading a regular file.
The tricky question is what Content-Type header we should return when reading a file. Since the file could contain anything, the server can't simply return the same type for every file. But NPM can help with that. The mime module (content type indicators like text/plain are also called MIME types) knows the correct type for a huge number of file extensions.
Running the following npm command in the directory where the server script lives will allow you to use require("mime") to query the type library.

$ npm install mime
npm http GET https://registry.npmjs.org/mime
npm http 304 https://registry.npmjs.org/mime
mime@1.2.11 node_modules/mime
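
Once installed, the module's lookup function maps a file name to a MIME type, for example:

var mime = require("mime");
console.log(mime.lookup("image.png")); // → image/png
console.log(mime.lookup("notes.txt")); // → text/plain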

When a requested file does not exist, the correct HTTP error code to return is 404. We will use fs.stat, which looks up information about a file, to find out both whether the file exists and whether it is a directory.

methods.GET = function(path, respond) {
  fs.stat(path, function(error, stats) {
    if (error && error.code == "ENOENT")
      respond(404, "File not found");
    else if (error)
      respond(500, error.toString());
    else if (stats.isDirectory())
      fs.readdir(path, function(error, files) {
        if (error)
          respond(500, error.toString());
        else
          respond(200, files.join("\n"));
      });
    else
      respond(200, fs.createReadStream(path),
              require("mime").lookup(path));
  });
};

Because it has to touch the disk and thus might take a while, fs.stat is asynchronous. When the file does not exist, fs.stat will pass an error object with a code property of "ENOENT" to its callback. It would be nice if Node defined different subtypes of Error for different kinds of errors, but there is no such thing; instead, it gives us cryptic Unix-style codes.
We report all unexpected errors with status code 500, which indicates that the problem exists in the server, as opposed to codes starting with 4, which indicate a problem with the request. This won't be entirely accurate in some situations, but for a small example program it will have to be good enough.
The stats object returned by fs.stat tells us everything about a file: for example, size gives the file size and mtime the modification date. Here we want to know whether it is a directory or a regular file, which the isDirectory method tells us.
To read the list of files in the directory, we use fs.readdir, and through another callback, return it to the user. For normal files, we create a read stream via fs.createReadStream and pass it back, along with the content type that the "mime" module produced for that file.
The code for handling DELETE requests is slightly simpler:

methods.DELETE = function(path, respond) {
  fs.stat(path, function(error, stats) {
    if (error && error.code == "ENOENT")
      respond(204);
    else if (error)
      respond(500, error.toString());
    else if (stats.isDirectory())
      fs.rmdir(path, respondErrorOrNothing(respond));
    else
      fs.unlink(path, respondErrorOrNothing(respond));
  });
};

You might be wondering why trying to delete a nonexistent file returns status 204 rather than an error. One could say that when you try to delete a file that isn't there, since the file is already gone, the request's purpose has already been fulfilled. The HTTP standard also encourages people to make requests idempotent – that is, repeating the same request multiple times produces the same result as making it once.

function respondErrorOrNothing(respond) {
  return function(error) {
    if (error)
      respond(500, error.toString());
    else
      respond(204);
  };
}

When the HTTP response contains no data, you can use the status code 204 ("no content"). Since we need to provide callback functions that either report an error or return a 204 response in different situations, I wrote a special function respondErrorOrNothing that creates such a callback.
Here is the PUT request handler:

methods.PUT = function(path, respond, request) {
  var outStream = fs.createWriteStream(path);
  outStream.on("error", function(error) {
    respond(500, error.toString());
  });
  outStream.on("finish", function() {
    respond(204);
  });
  request.pipe(outStream);
};

We don't need to check whether the file exists here – if it does, we simply overwrite it. Again we use pipe to move data from a readable stream to a writable one, in this case from the request to the file. If the stream cannot be written, an "error" event is raised on it, which we report in our response. When the data has been transferred successfully, pipe closes both streams, which fires the "finish" event on the write stream. At that point we can report success to the client with a 204 response.
The full server script is available at eloquentjavascript.net/code/file_server.js. You can download it and run it with Node to start your own file server. And of course, you can modify and extend it to solve the exercises or to experiment.
The command line tool curl, widely available on Unix-like systems, can be used to make HTTP requests. The following session briefly tests our server. The -X option is used to set the request's method, and -d is used to include a request body.

$ curl http://localhost:8000/file.txt
File not found
$ curl -X PUT -d hello http://localhost:8000/file.txt
$ curl http://localhost:8000/file.txt
hello
$ curl -X DELETE http://localhost:8000/file.txt
$ curl http://localhost:8000/file.txt
File not found

The first request for file.txt fails because the file does not exist yet. The PUT request creates the file, and sure enough, the next request successfully retrieves it. After deleting it with a DELETE request, the file is missing again.

Error handling

There are six places in the file server's code where we explicitly route exceptions that we don't know how to handle. Because exceptions are not automatically propagated to the callback functions, but rather passed to them as arguments, they have to be handled explicitly every time. This negates the main advantage of exception handling, namely the ability to centralize the handling of failure conditions.
What happens when something actually throws an exception in this system? Since we are not using any try blocks, the exception will propagate to the top of the call stack. In Node, that aborts the program and writes information about the exception (including a stack trace) to the program's standard error stream.
This means that our server will crash when a problem occurs in the server's own code, as opposed to asynchronous problems, which are passed as arguments to the callback functions. If we wanted to handle all exceptions raised while handling a request, to make sure we always send a response, we would have to add try/catch blocks to every callback.
This is bad. A lot of Node programs are written to use as little exception handling as possible, implying that if an exception occurs, the program can’t handle it and therefore must crash.
Another approach is to use promises, which were described in Chapter 17. Those catch exceptions raised by callback functions and propagate them as failures. It is possible to load a promise library in Node and use it to manage your asynchronous control flow. Few Node libraries integrate promises, but it is usually trivial to wrap them. The excellent "promise" module from NPM contains a function called denodeify, which takes an asynchronous function like fs.readFile and converts it to a promise-returning function.

var Promise = require("promise");
var fs = require("fs");

var readFile = Promise.denodeify(fs.readFile);
readFile("file.txt", "utf8").then(function(content) {
  console.log("The file contained: " + content);
}, function(error) {
  console.log("Failed to read file: " + error);
});

For comparison, I've written another version of the file server based on promises, which you can find at eloquentjavascript.net/code/file_server_promises.js. It is slightly cleaner because functions can now return their results, rather than having to call callbacks, and the routing of exceptions is implicit rather than explicit.
Here are a few lines from there to demonstrate the difference in styles.
The fsp object used in the code contains promise-style variants of a number of fs functions, wrapped with Promise.denodeify. The object returned from the method handler, with its code and body properties, becomes the final result of the chain of promises, and it is used to determine what kind of response to send to the client.

methods.GET = function(path) {
  return inspectPath(path).then(function(stats) {
    if (!stats) // Does not exist
      return {code: 404, body: "File not found"};
    else if (stats.isDirectory())
      return fsp.readdir(path).then(function(files) {
        return {code: 200, body: files.join("\n")};
      });
    else
      return {code: 200,
              type: require("mime").lookup(path),
              body: fs.createReadStream(path)};
  });
};

function inspectPath(path) {
  return fsp.stat(path).then(null, function(error) {
    if (error.code == "ENOENT") return null;
    else throw error;
  });
}

The inspectPath function is a simple wrapper around fs.stat, which handles the case where the file is not found. In that case, we replace the failure with a success that yields null. All other errors are allowed to propagate. When the promise returned from these handlers fails, the HTTP server responds with status code 500.
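For instance, the fsp object could be built by wrapping the needed "fs" functions one by one – a minimal sketch, assuming the "promise" module's denodeify function shown earlier (which functions to wrap is our choice):

var Promise = require("promise");
var fs = require("fs");

var fsp = {
  stat: Promise.denodeify(fs.stat),
  readdir: Promise.denodeify(fs.readdir)
};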

Summary

Node is a nice, simple system that lets you run JavaScript outside the browser. It was originally designed for network tasks, to play the role of a node in a network. But it lends itself to all kinds of scripting tasks, and if you enjoy programming in JavaScript, automating everyday tasks with Node works great.
NPM provides libraries for everything you can think of (and even for something you can’t think of), and it lets you download and install them with a simple command. Node also comes with a set of built-in modules, including "fs" to handle the file system, and "http" to run HTTP servers and make HTTP requests.
All input and output in Node is done asynchronously unless you use an explicitly synchronous version of a function such as fs.readFileSync. You provide callback functions, and Node calls them at the right time when the I/O operations are done.

Exercises

Content negotiation, again

In Chapter 17, the first exercise was to make several requests to eloquentjavascript.net/author, asking for different types of content by passing different Accept headers.
Do this again using Node's http.request function. Ask for at least the media types text/plain, text/html, and application/json. Remember that headers for a request can be given as an object in the headers property of http.request's first argument.
Write out the content of each response.

Repairing leaks

For easy access to some files, I kept the file server running on my machine, in /home/marijn/public. One day I discovered that someone had gained access to all the passwords I had stored in my browser. What happened?
If this doesn't make sense to you, think back to the urlToPath function, which was defined as:

function urlToPath(url) {
  var path = require("url").parse(url).pathname;
  return "." + decodeURIComponent(path);
}

Now consider the fact that paths passed to the "fs" functions can be relative – they may contain "../" to go up a directory. What happens if a client sends requests for URLs like the following?
myhostname:8000/../.config/config/google-chrome/Default/Web%20Data
myhostname:8000/../.ssh/id_dsa
myhostname:8000/../../../etc/passwd
Change the urlToPath function to fix this problem. Take into account that Node on Windows allows both forward slashes and backslashes to separate path components.
Also, meditate on the fact that once you put a raw system on the internet, bugs in the system can be used against you and your computer.

Creating directories

Although the DELETE method works for deleting directories (via fs.rmdir), so far the server does not provide a way to create directories.
Add support for the MKCOL method, which should create a directory by calling fs.mkdir. MKCOL is not one of the basic HTTP methods, but it does exist, for this very purpose, in the WebDAV standard, which specifies a set of extensions to HTTP that make it suitable for writing resources, not just reading them.

Public space on the net

Since the file server gives out any files and even returns the correct Content-Type header, it can be used to serve a website. Since it allows everyone to delete and replace files, it would be an interesting website — one that can be modified, defaced, and deleted by anyone who can create a valid HTTP request. But it would still be a website.
Write a basic HTML page that includes a simple JavaScript file. Put the files in the directory served by the file server and open them in your browser.
Then, as an advanced exercise, combine all the knowledge you’ve gained from the book to build a more user-friendly interface for modifying a website from within the site itself.
Use the HTML form (Chapter 18) to edit the files that make up the site, allowing the user to update them on the server via HTTP requests, as described in Chapter 17.
Start with one file that you are allowed to edit. Then make it so that you can select the file to edit. Use the fact that our file server returns file lists on directory request.
Don't work directly on the files exposed by the file server – if you make a mistake, you are likely to corrupt them. Instead, keep your work in a directory that is not accessible from outside and copy files there after testing.
If your computer is connected to the Internet directly, without a firewall, router, or other device in between, you may be able to invite a friend to use your website. To check, go to whatismyip.com, copy the IP address it gives you into your browser's address bar, and add :8000 after it to select the right port. If that brings you to your site, it is online for everybody to see.
