Sunday, August 18, 2013

Simple Callbacks - Executing parallel and dependent tasks using async without the spaghetti

Some bloggers have been complaining lately about the callback pattern and its spaghetti code structure.  Some even compare it to GOTO statements, although that post is less about coffee/javascript.  The beauty of javascript, especially written in coffee-script, is that you can conceptualize parallel and dependent tasks based on code indentation.  Commands at the same depth are executed at the same time, while code resting deeper (in the particular case of callback patterns) is deferred until the dependent execution is complete.

The following image on the left depicts what I am explaining above.  Blue lines are executed at the same time.  Red lines are dependent on their blue parent's completion before they execute, and so on.

I then take the idea a step further and mix batches of parallel tasks with tasks that depend on the whole batch completing.

This is a sample gist of personal work I am doing on a Project Management webApp. The goal of this gist is to show how non-dependent tasks can be parallelized and how dependent tasks can run after those parallel dependencies have finished. The eachTask iterator takes a task object and a callback. It uses the Function prototype method call to pass scope to each parallel task.
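Since the gist embed may not render here, below is a minimal sketch of the same pattern (the function and task names are mine, not the gist's): fire off a batch of independent tasks at once, pass each a shared scope via Function.prototype.call, and run the dependent callback only when the whole batch has reported back.

```javascript
// Run every task in parallel, then call `done` once all have completed.
// `scope` is handed to each task through Function.prototype.call.
function runParallel(scope, tasks, done) {
  var remaining = tasks.length;
  var results = [];
  tasks.forEach(function (task, i) {
    // every task starts now; none waits on a sibling
    task.call(scope, function (err, result) {
      results[i] = result;
      remaining -= 1;
      // the dependent work runs only when the whole batch is finished
      if (remaining === 0) done(null, results);
    });
  });
}

// usage: two parallel tasks, then a dependent step
runParallel({ project: "demo" }, [
  function (cb) { cb(null, this.project + ":taskA"); },
  function (cb) { cb(null, this.project + ":taskB"); }
], function (err, results) {
  // dependent task: both parallel tasks are done, results are in order
});
```

Nesting another `runParallel` call inside the `done` callback gives exactly the indentation-as-dependency picture described above: deeper code waits on the batch above it.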



Tuesday, August 6, 2013

Async HTML5 File Uploads with Node / Express Framework

File uploads with HTML5 these days are easy as pie, especially when you have a framework that handles everything for you, internally.  Combining Node, the Express framework and a little HTML5 magic with FormData and XMLHttpRequest, we can create a very simple file uploader that can support large files.

The Express framework supports multi-part uploads through its bodyParser middleware which utilizes the node-formidable module internally.



Start off with the client side.


Add the button click event with a bootstrap progress indicator
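Since the original snippet is not embedded here, this is a sketch of how that click handler and progress bar might be wired up. The element ids and the /upload route are assumptions; the percent math is split into its own helper so the progress update is clear.

```javascript
// Turn XHR progress numbers into a width for the bootstrap progress bar.
function percentDone(loaded, total) {
  return Math.round((loaded / total) * 100);
}

// Only wire up the DOM when actually running in a browser.
if (typeof document !== "undefined") {
  document.getElementById("upload-btn").addEventListener("click", function () {
    var file = document.getElementById("file-input").files[0];
    var data = new FormData();
    data.append("file", file);              // field name the server route reads

    var xhr = new XMLHttpRequest();
    xhr.upload.onprogress = function (e) {
      // drive the bootstrap .bar width from the upload progress event
      document.getElementById("progress-bar").style.width =
        percentDone(e.loaded, e.total) + "%";
    };
    xhr.open("POST", "/upload");
    xhr.send(data);                         // async, large-file friendly
  });
}
```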



Add the server route.


Thursday, July 11, 2013

Basic CKEditor 4 Image Uploader with Express and NodeJS



I recently found an interesting article on a simple hack to get the existing image upload feature in CKEditor 3 enabled and functioning with a PHP server.  I took his idea and applied it to the latest version, which is currently 4 with a NodeJS and Express framework backend.

It basically requires editing two files inside the ckeditor sdk:  image.js and config.js.

Edit ckeditor/plugins/image/dialogs/image.js and look for "editor.config.filebrowserImageBrowseLinkUrl" around line 975.  A few lines below will be "hidden: true".  Change this to "hidden: false".  Further down is another "hidden: true" near "id: 'Upload'", which needs to be changed to "hidden: false".  Once you are done with the changes, image.js should look like this.

Next, we need to edit the config.js file to point to where the upload POST route is.  Edit ckeditor/config.js and add config.filebrowserUploadUrl = '/upload';
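The whole config.js can be as small as this sketch (CKEDITOR.editorConfig is the standard hook; only the upload URL line is required for our purposes):

```javascript
// ckeditor/config.js
CKEDITOR.editorConfig = function (config) {
  config.filebrowserUploadUrl = '/upload';
};
```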



Next, we need to create our Express POST route to handle the upload.  I am taking the temp file name and prepending it to the actual file name and saving it under ./public/uploads.  Since public is a default static route in Express, any uploaded image will be immediately available in the CKEditor UI.  The important part here is to return a script block, instructing CKEditor to take the new image.






Finally, route it through express:

var fn = require("./upload.js");
app.post("/upload", fn);

Thursday, June 13, 2013

AWS Coffee-Script Web Stack - Part 1: The Development Stack



Welcome to a multi-part series on setting up an Amazon Web Service Coffee-Script Web Server using NodeJS and the Express framework.

Part 1 of this series consists of building up the server components in a nice stack on top of ubuntu linux.  By the end of this blog entry, you will have a basic personal website with the twitter bootstrap look and feel.  We will then expand on its capabilities and features as I write more posts in the series.  Some of these posts will include:

  • adding connect-assets, which allows you to include compiled coffee-script in the rendered templates
  • session management with redis-server for persistent session state across server reboots, using oAuth providers such as google
  • combining sessions with socket.io for seamless authentication validation
  • simple M(odel) V(iew) C(ontroller) structure with classical inheritance design

The stack will consist of three main NodeJS components: node modules, server code, and client code.  It will run on two environments:  Internet facing AWS running Ubuntu Server 12.04 and a development environment consisting of Ubuntu Desktop 13.04 with JetBrains Webstorm 6 Integrated Development Environment (IDE).  The code will be stored in a git repository which we use to keep a history of all our changes.



Setup Development Environment

So, let's get going and set up our development environment.  First, download the latest Ubuntu Linux Desktop, currently 13.04.  While you wait for the ISO, go ahead and start building the Virtual Machine instance.  The following screens relate to my Virtual Host based on VMWare Fusion 5.



Virtual Hardware
The minimum to get going smoothly is pretty lightweight compared to other environments such as Microsoft's SharePoint 2013, which requires a good 24GB of RAM for a single development environment.  This virtual machine should have at least 2 vCPUs and 2GB of RAM.  Depending on how big your project gets (how many files Webstorm needs to keep an eye on), you might need to bump up to 4GB of RAM.





OS Volume
For the OS volume, 20GB should suffice.  If you plan on projects that accept uploads, provision more storage.






Virtual Machine Network Adapter

I prefer to keep the virtual machine as its own entity on the network.  This clears up any issues related to NATing ports and gives our environment the most realistic production feel.










Install Ubuntu OS From Media

Boot up the virtual machine and install Ubuntu.



Update OS 

I prefer to keep my linux machines patched.

$ sudo apt-get update
$ sudo apt-get upgrade

Install Dependencies

Before we can really get going with Webstorm, we need to get vmware-tools installed for all that full screen resolution glory.  Once installed, we need to pick up g++.

$ sudo apt-get install g++

Now that we have our Ubuntu Linux Desktop 13.04 development environment set up, install nodejs, and also install our global node modules/binaries.

Install git, set up a bare repository, and use /opt for development

$ sudo apt-get install git
$ sudo mkdir /git
$ sudo mkdir /opt
$ sudo chown <username> /opt

$ sudo chown <username> /git

$ mkdir /git/myFirstWebServer.git
$ cd /git/myFirstWebServer.git
$ git --bare init

Install Node v0.10.15 (latest)
$ wget http://nodejs.org/dist/v0.10.15/node-v0.10.15.tar.gz

$ tar zxvf node-v0.10.15.tar.gz
$ cd node-v0.10.15
$ ./configure
$ make
$ sudo make install

Install global node modules and their binaries

$ sudo npm install -g coffee-script express bower

Creating our basic Express Web Server


I prefer to keep all my custom code inside /opt, so we will set up our Express web server there.  Set permissions as necessary.  We will also initialize git at this time.

$ cd /opt
$ express -s myFirstWebserver
$ cd myFirstWebserver
$ git init
$ npm install

Great.  Now that we have our basic Express Web Server, we can run it with 'node app.js' and visit it by going to http://localhost:3000 in a web browser.

Next, we want to convert all of our assets to coffee.  Use an existing js2coffee converter, such as http://js2coffee.org.  Convert and clean up your js files into coffee.

Your app.coffee code could look something like this:
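As a rough sketch, the converted app.coffee for an Express 3 generated app might read like this (middleware order follows the generator's defaults; treat it as a starting point rather than the exact js2coffee output):

```coffeescript
express = require "express"
http    = require "http"
path    = require "path"

app = express()
app.set "port", process.env.PORT or 3000
app.set "views", __dirname + "/views"
app.set "view engine", "jade"
app.use express.favicon()
app.use express.logger "dev"
app.use express.bodyParser()
app.use express.methodOverride()
app.use app.router
app.use express.static path.join(__dirname, "public")

app.get "/", (req, res) -> res.render "index", title: "myFirstWebserver"

http.createServer(app).listen app.get("port"), ->
  console.log "Express server listening on port " + app.get("port")
```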


Configuring bower and components

We will use bower to install client libraries.  It's like npm, and helps streamline deployments.   For our particular configuration, we choose the project folder's public directory to store components since it's already routed through express.
Configure bower (docs)

edit /home/<username>/.bowerrc:
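A minimal .bowerrc might look like this (the directory and json keys follow the bower docs of the era; the component path is our choice so express serves it):

```json
{
  "directory": "public/components",
  "json": "component.json"
}
```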



Now that we have our basic rc setup for bower, we need to specify our client library dependencies in our project.

edit /opt/myFirstWebserver/component.json
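Something along these lines (package names are real bower packages; the version ranges are examples):

```json
{
  "name": "myFirstWebserver",
  "dependencies": {
    "jquery": "~1.10.2",
    "bootstrap": "~2.3.2"
  }
}
```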

Bower is completely set up and ready to install packages.

$ cd /opt/myFirstWebserver
$ bower install

Now that we have our client libraries, let's add references to them in the website's layout.
edit ./views/layout.jade:
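A sketch of the layout (the component paths depend on how each bower package lays out its compiled assets, so adjust to match what lands in public/components):

```jade
doctype 5
html
  head
    title= title
    link(rel='stylesheet', href='/components/bootstrap/docs/assets/css/bootstrap.css')
    link(rel='stylesheet', href='/stylesheets/style.css')
  body
    block content
    script(src='/components/jquery/jquery.js')
    script(src='/components/bootstrap/docs/assets/js/bootstrap.js')
```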





Commit the changes and push to the code repository

Before we commit our changes, we need to set up a .gitignore file so we don't blast the code repository with files that bower and npm manage.
edit /opt/myFirstWebserver/.gitignore
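Two entries cover what npm and bower generate in this setup:

```
node_modules/
public/components/
```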


Now that we have successfully created a starter bootstrap-enabled website, it's time to commit our changes and push them to our source repository.  First, we will set up the repository origin.

$ cd /opt/myFirstWebserver
$ git remote add origin /git/myFirstWebServer.git

Next, we will commit the changes and push them to origin:
$ git add .
$ git commit -a
$ git push -u origin master


Sunday, February 10, 2013

Node E-Mail Contact Form Submission Rate Limiting With Sessions

Using mongoose and coffee-script, I will layout a simple approach to limiting POST queries by client session id.

There are specific requirements for this POST route:

  • E-mail uses Gmail service account (provided by Node-Mailer)
  • Provide user feedback in real time (res.send)
  • Rate limit user's ability to POST over a defined interval (timestamps)


Let's create our model.  This collection will store our users' submissions by session id.


Now let's define our app.post function.
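To keep the sketch below self-contained, the mongoose lookup is replaced with an in-memory object keyed on session id, and the Node-Mailer Gmail step is reduced to a comment; the interval and field names are assumptions.

```javascript
var LIMIT_MS = 5 * 60 * 1000;   // example: one submission per five minutes
var lastPost = {};              // sessionID -> timestamp (stand-in for the collection)

// Allow a POST if this session has never posted, or the interval has elapsed.
function canSubmit(sessionID, now) {
  var last = lastPost[sessionID];
  return last === undefined || now - last >= LIMIT_MS;
}

// Shape of the app.post("/contact", ...) handler.
function handleContact(req, res) {
  var sid = req.sessionID;
  if (!canSubmit(sid, Date.now())) {
    // real-time feedback straight back to the user
    return res.send({ ok: false, error: "Please wait before submitting again." });
  }
  lastPost[sid] = Date.now();
  // here the real app records the submission in mongo and fires off
  // Node-Mailer's Gmail service transport, then reports back via res.send
  res.send({ ok: true });
}
```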

Sunday, September 16, 2012

MicroServer ESXi WhiteBox Part 3 - VM Backup GUI with ghettoVCB.js

Hello World.  I've spent the last few weeks working on a GPL interface for ghettoVCB called ghettoVCB.js.  This interface uses NodeJS (Express + MongoDB) + ExtJS 4.1 for its main components.  This will probably be the last ExtJS project I do for a while.  I think the time has come for me to begin learning objective-c for OSX/iOS apps.  That, and I will focus on Twitter's Bootstrap and jQuery for a lighter-weight set of widgets than what ExtJS offers me.  I also shouldn't settle in with a single product, i.e. ExtJS/Sencha.  Interacting with the company has left a lot to be desired, but enough of that.

ghettoVCB.js

A little history, first.  This project was inspired by a bioinformatics pipeline workflow web app I wrote called scriptQueue, which remotely ran fastQC jobs on a compute node through SSH, pulled the HTML results with SSHFS, and displayed them through the web service.  The goal of that project was to allow for additional workflow types, and the workflows themselves were to be absolutely and completely flexible.

scriptQueue used the child_process module that comes with NodeJS.  With the spawning of a child SSH process, I realized through the events of watching its standard output and process status (running/not running), I was able to keep complete control of not only any arbitrary command line application on the host OS, but on any remote host through SSH piping.  Awesome!

Back to ghettoVCB.js, which is the topic of this post.  Naturally, one of the workflows that came to mind for scriptQueue was a ghettoVCB workflow for my MicroServer ESXi WhiteBox.  Since scriptQueue was designed around the workflow concept (both server side and client side), forking the project was as easy as adding a new workflow form client side and a workflow route server side.  Bam.

For the ghettoVCB workflow, the webapp creates a folder './ghettoVCB' in the apps running path.  It then uses SSHFS to mount the remote ESXi script path which contains ghettoVCB.sh.  When the "Submit ghettoVCB" button is clicked, the webapp will try and save a configuration file (ghettoVCB.js.conf) and a list of virtual machines to backup (vms_to_backup.conf) on the remote ESXi host through the ghettoVCB mount.

Once the files are saved, the web app will send off a remote child process with the proper command line parameters pointing to ghettoVCB.sh script, the configuration and list of vm files.



Below is a view of the History Grid, which visualizes all the meta-data stored for a child process.  This is where mongoDB comes in.  I store all standard output and standard error results in the meta-data of the child process in mongoDB through the mongoose API.  I also keep tabs on the PID number, its status, and start and end dates.

As you can see, the display is a little interesting when we get to the Clone percentage.  The normal stdout replaces the same line with an updated number, but the way I currently save output, it appends a new line.  No biggy.



So there you have it.  That's ghettoVCB.js, a GPL ExtJS 4.1 / NodeJS WebApp GUI for running ghettoVCB.sh on a remote ESXi host.










Monday, September 10, 2012

MicroServer ESXi WhiteBox Part 2 - VM Backups easily with ghettoVCB

Hello world.  Part two of my MicroServer ESXi WhiteBox series will be discussing how to back up those precious virtual machines made with our free ESXi server.  Fortunately for us, the community has provided a very powerful ghettoVCB script to do just that.

You can find the documentation here.  




Installation



First, make sure you can SSH to the ESXi box.  Enable remote console via the vSphere client.

You can download the tar.gz here.  Extract the ghettoVCB folder into a scripts folder on a datastore using the vSphere client's Datastore Browser.

SSH to your ESXi box.  Make sure the script itself is executable (ghettoVCB.sh), then edit the configuration file (ghettoVCB.conf).  Finally, you will need to create a list of virtual machines.  I keep this part simple (for more options, like targeting specific disks, see the docs): I create a file called vms_to_backup and, per line, include the names of the virtual machines I intend to back up and maintain two copies of.

# cd /vmfs/volumes/2TB_Spindle/scripts/ghettoVCB
# chmod 755 ghettoVCB.sh
# vi ghettoVCB.conf
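A minimal ghettoVCB.conf might look like this (the backup volume path and most values are examples; VM_BACKUP_ROTATION_COUNT=2 keeps the two copies mentioned above):

```
VM_BACKUP_VOLUME=/vmfs/volumes/2TB_Spindle/backups
DISK_BACKUP_FORMAT=thin
VM_BACKUP_ROTATION_COUNT=2
POWER_VM_DOWN_BEFORE_BACKUP=0
ENABLE_HARD_POWER_OFF=0
ENABLE_COMPRESSION=0
```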



# vi vms_to_backup
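vms_to_backup is just one virtual machine display name per line; these names are examples (vCenter matches the log output further down):

```
vCenter
webServer
```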



Good enough for my backup sanity.  Let's give the script a dry run to see if we have any problems.

#  ./ghettoVCB.sh -f vms_to_backup -g ./ghettoVCB.conf -d dryrun


2012-09-09 17:54:15 -- info: ###### Final status: OK, only a dryrun. ######





If all looks good, let's give it a whirl.


# ./ghettoVCB.sh -f vms_to_backup -g ./ghettoVCB.conf


2012-09-09 21:22:55 -- info: Successfully completed backup for vCenter!

2012-09-09 21:22:56 -- info: ###### Final status: All VMs backed up OK! ######

2012-09-09 21:22:56 -- info:  ghettoVCB LOG END 






The next step is to add a crontab entry and run it at a specific interval of your choosing.  Since the conf file sets VM_BACKUP_ROTATION_COUNT=2, it will keep two run results of the backup script.
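A crontab entry on the ESXi host might look like this (the nightly schedule and the log path are examples):

```
# run the backup nightly at 1am
0 1 * * * /vmfs/volumes/2TB_Spindle/scripts/ghettoVCB/ghettoVCB.sh -f /vmfs/volumes/2TB_Spindle/scripts/ghettoVCB/vms_to_backup -g /vmfs/volumes/2TB_Spindle/scripts/ghettoVCB/ghettoVCB.conf > /vmfs/volumes/2TB_Spindle/scripts/ghettoVCB/backup.log 2>&1
```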


Good luck.