Thursday, July 11, 2013

Basic CKEditor 4 Image Uploader with Express and NodeJS



I recently found an interesting article describing a simple hack to enable the existing image upload feature in CKEditor 3 against a PHP server.  I took that idea and applied it to the latest version, currently 4, with a NodeJS and Express framework backend.

It basically requires editing two files inside the CKEditor SDK:  image.js and config.js.

Edit ckeditor/plugins/image/dialogs/image.js and look for "editor.config.filebrowserImageBrowseLinkUrl" around line 975.  A few lines below will be "hidden: true".  Change this to "hidden: false".  Further down is another "hidden: true" near "id: 'Upload'", which needs to be changed to "hidden: false".  Once you are done with the changes, image.js should look like this.

Next, we need to edit the config.js file to point at the upload POST route.  Edit ckeditor/config.js and add config.filebrowserUploadUrl = '/upload';
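For reference, the stock config.js wraps its settings in CKEDITOR.editorConfig, so the change ends up looking roughly like this (only the upload URL line is new):

CKEDITOR.editorConfig = function (config) {
    // Point CKEditor's upload tab at our Express POST route.
    config.filebrowserUploadUrl = '/upload';
};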



Next, we need to create our Express POST route to handle the upload.  I take the temp file name, prepend it to the actual file name, and save the result under ./public/uploads.  Since public is a default static route in Express, any uploaded image is immediately available in the CKEditor UI.  The important part here is to return a script block instructing CKEditor to pick up the new image.
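The original gist is no longer embedded, so here is a minimal sketch of what that handler can look like.  It assumes Express 3 with the bodyParser/multipart middleware enabled (so req.files is populated) and that ./public/uploads already exists; the "upload" field name and the CKEditorFuncNum query parameter are what CKEditor sends.

// upload.js -- a minimal sketch of the upload handler described above.
var fs = require("fs");
var path = require("path");

module.exports = function (req, res) {
    var tmpFile = req.files.upload;  // "upload" is the form field CKEditor posts
    // Prepend the temp file name to the original file name to avoid collisions.
    var newName = path.basename(tmpFile.path) + "_" + tmpFile.name;
    var newPath = path.join(__dirname, "public", "uploads", newName);

    fs.rename(tmpFile.path, newPath, function (err) {
        var funcNum = req.query.CKEditorFuncNum;
        var url = "/uploads/" + newName;
        var msg = err ? "Upload failed" : "Image uploaded";
        // CKEditor expects a script block that hands the new image URL back to the editor.
        res.send("<script type='text/javascript'>" +
            "window.parent.CKEDITOR.tools.callFunction(" + funcNum + ", '" + url + "', '" + msg + "');" +
            "</script>");
    });
};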






Finally, route it through Express:

var fn = require("./upload.js");
app.post("/upload", fn);

Thursday, June 13, 2013

AWS Coffee-Script Web Stack - Part 1: The Development Stack



Welcome to a multi-part series on setting up an Amazon Web Services (AWS) Coffee-Script web server using NodeJS and the Express framework.

Part 1 of this series consists of building up the server components in a nice stack on top of Ubuntu Linux.  By the end of this blog entry, you will have a basic personal website with the Twitter Bootstrap look and feel.  We will then expand on its capabilities and features as I write more posts in the series.  Some of those posts will cover: adding connect-assets, which lets you include compiled CoffeeScript in the rendered templates; session management with redis-server for persistent session state across server reboots, using OAuth providers such as Google; combining sessions with socket.io for seamless authentication validation; and a simple M(odel) V(iew) C(ontroller) structure with classical inheritance design.

The stack will consist of three main NodeJS components: node modules, server code, and client code.  It will run in two environments:  an Internet-facing AWS instance running Ubuntu Server 12.04, and a development environment consisting of Ubuntu Desktop 13.04 with the JetBrains WebStorm 6 Integrated Development Environment (IDE).  The code will be stored in a git repository, which we use to keep a history of all our changes.



Setup Development Environment

So, let's get going and set up our development environment.  First, download the latest Ubuntu Linux Desktop, currently 13.04.  While you wait for the ISO, go ahead and start building the virtual machine instance.  The following screens relate to my virtual host, based on VMware Fusion 5.



Virtual Hardware
The minimum to get going smoothly is pretty lightweight compared to other environments such as Microsoft's SharePoint 2013, which requires a good 24GB of RAM for a single development environment.  This virtual machine should have at least 2 vCPUs and 2GB of RAM.  Depending on how big your project gets (how many files WebStorm needs to keep an eye on), you might need to bump up to 4GB of RAM.





OS Volume
For the OS volume, 20GB should suffice.  If you plan on building projects that accept uploads, provision more storage.






Virtual Machine Network Adapter

I prefer to keep the virtual machine as its own entity on the network.  This avoids any issues related to NATing ports and gives our environment the most realistic production feel.










Install Ubuntu OS From Media

Boot up the virtual machine and install Ubuntu.



Update OS 

I prefer to keep my Linux machines patched.

$ sudo apt-get update
$ sudo apt-get upgrade

Install Dependencies

Before we can really get going with WebStorm, we need to get vmware-tools installed for all that full-screen resolution glory.  Once that is installed, we need to pick up g++.

$ sudo apt-get install g++

Now that we have our Ubuntu Linux Desktop 13.04 development environment set up, we can install NodeJS along with our global node modules and binaries.

Install git, set up a bare repository, and use /opt for development

$ sudo apt-get install git
$ sudo mkdir /git
$ sudo mkdir /opt
$ sudo chown <username> /opt

$ sudo chown <username> /git

$ mkdir /git/myFirstWebServer.git
$ cd /git/myFirstWebServer.git
$ git --bare init

Install Node v0.10.15 (latest)
$ wget http://nodejs.org/dist/v0.10.15/node-v0.10.15.tar.gz

$ tar zxvf node-v0.10.15.tar.gz
$ cd node-v0.10.15
$ ./configure
$ make
$ sudo make install

Install global node modules and their binaries

$ sudo npm install -g coffee-script express bower

Creating our basic Express Web Server


I prefer to keep all my custom code inside /opt, so we will set up our Express web server there.  Set permissions as necessary.  We will also initialize git at this time.

$ cd /opt
$ express -s myFirstWebserver
$ cd myFirstWebserver
$ git init
$ npm install

Great.  Now that we have our basic Express web server, we can run it with 'node app.js' and visit it by going to http://localhost:3000 in a web browser.

Next, we want to convert all of our assets to CoffeeScript.  Use an existing js2coffee converter, such as http://js2coffee.org, to convert and clean up your js files into coffee.

Your app.coffee code could look something like this:
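The original embed is missing, so here is a sketch of roughly what the generated app.js becomes after conversion, assuming the stock Express 3 skeleton (jade views, a ./routes/index module, static files under ./public):

# app.coffee -- Express 3 skeleton converted to CoffeeScript (a sketch)
express = require 'express'
routes  = require './routes'
http    = require 'http'
path    = require 'path'

app = express()

app.set 'port', process.env.PORT or 3000
app.set 'views', __dirname + '/views'
app.set 'view engine', 'jade'
app.use express.favicon()
app.use express.logger 'dev'
app.use express.bodyParser()
app.use express.methodOverride()
app.use app.router
app.use express.static path.join(__dirname, 'public')

# Show stack traces while developing.
app.use express.errorHandler() if 'development' is app.get 'env'

app.get '/', routes.index

http.createServer(app).listen app.get('port'), ->
  console.log "Express server listening on port #{app.get 'port'}"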


Configuring bower and components

We will use bower to install client libraries.  It's like npm, and helps streamline deployments.   For our particular configuration, we choose the project folder's public directory to store components, since it is already served statically by Express.
Configure bower (docs)

edit /home/<username>/.bowerrc:
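The embedded example is missing; a minimal .bowerrc only needs to point bower's install directory at the project's public folder (the path below assumes the /opt/myFirstWebserver project created above):

{
  "directory": "/opt/myFirstWebserver/public/components"
}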



Now that we have our basic rc file set up for bower, we need to specify our client library dependencies in our project.

edit /opt/myFirstWebserver/component.json
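The embed is missing here as well; bower versions of this era read dependencies from component.json, so a sketch might look like this (the package versions are just examples):

{
  "name": "myFirstWebserver",
  "version": "0.0.1",
  "dependencies": {
    "bootstrap": "~2.3.2",
    "jquery": "~1.10.2"
  }
}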

Bower is now completely set up and ready to install packages.

$ cd /opt/myFirstWebserver
$ bower install

Now that we have our client libraries, let's add references to them in the website's layout.
edit ./views/layout.jade:
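The original template is not shown; a sketch of the layout with the bower components wired in could look like the following.  The exact asset paths inside each component depend on how the packages are laid out, so treat them as placeholders:

doctype 5
html
  head
    title= title
    link(rel='stylesheet', href='/components/bootstrap/docs/assets/css/bootstrap.css')
    link(rel='stylesheet', href='/stylesheets/style.css')
  body
    block content
    script(src='/components/jquery/jquery.js')
    script(src='/components/bootstrap/docs/assets/js/bootstrap.js')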





Commit the changes and push to the code repository

Before we commit our changes, we need to set up a .gitignore file so we don't blast the code repository with the packages bower and npm manage.  
edit /opt/myFirstWebserver/.gitignore
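A minimal .gitignore for this layout just needs to exclude what npm and bower pull in:

node_modules/
public/components/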


Now that we have successfully created a starter Bootstrap-enabled website, it's time to commit our changes and push them to our source repository.  First, we will set up the repository origin.

$ cd /opt/myFirstWebserver 
$ git remote add origin /git/myFirstWebServer.git

Next, we will commit the changes and push them to origin: 
$ git add .
$ git commit -a
$ git push -u origin master


Sunday, February 10, 2013

Node E-Mail Contact Form Submission Rate Limiting With Sessions

Using mongoose and coffee-script, I will lay out a simple approach to limiting POST requests by client session ID.

There are specific requirements for this POST route:

  • E-mail is sent through a Gmail service account (via Node-Mailer)
  • Provide user feedback in real time (res.send)
  • Rate limit the user's ability to POST over a defined interval (timestamps)


Let's create our model.  This collection will store our users' submissions by session ID.
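The original model is not embedded; here is a sketch of what it could look like (the schema and file names are assumptions):

# models/submission.coffee -- one document per contact form submission
mongoose = require 'mongoose'

submissionSchema = new mongoose.Schema
  sessionId: { type: String, index: true }      # express session id of the submitter
  email:     String
  message:   String
  createdAt: { type: Date, default: Date.now }  # timestamp used for rate limiting

module.exports = mongoose.model 'Submission', submissionSchema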


Now let's define our app.post function.
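The route itself is also missing from the embed; the sketch below follows the three requirements above.  The Gmail credentials, route path, and 15-minute window are placeholders, it assumes express sessions are enabled so req.sessionID is available, and nodemailer here is the older 0.x API that takes 'SMTP' as the transport type.

nodemailer = require 'nodemailer'
Submission = require './models/submission'

RATE_LIMIT_MS = 15 * 60 * 1000   # one submission per session every 15 minutes

transport = nodemailer.createTransport 'SMTP',
  service: 'Gmail'
  auth: { user: 'you@gmail.com', pass: 'your-password' }

app.post '/contact', (req, res) ->
  since = new Date(Date.now() - RATE_LIMIT_MS)
  # Look for a submission from this session inside the rate-limit window.
  Submission.findOne { sessionId: req.sessionID, createdAt: { $gt: since } }, (err, recent) ->
    return res.send 'Please wait a while before sending another message.' if recent?
    submission = new Submission
      sessionId: req.sessionID
      email: req.body.email
      message: req.body.message
    submission.save (saveErr) ->
      mailOptions =
        from: req.body.email
        to: 'me@example.com'
        subject: 'Contact form submission'
        text: req.body.message
      transport.sendMail mailOptions, (mailErr) ->
        # Real-time feedback to the user via res.send
        if mailErr then res.send 'Could not send e-mail.' else res.send 'Thanks! Your message was sent.'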

Sunday, September 16, 2012

MicroServer ESXi WhiteBox Part 3 - VM Backup GUI with ghettoVCB.js

Hello World.  I've spent the last few weeks working on a GPL interface for ghettoVCB called ghettoVCB.js.  This interface uses NodeJS (Express + MongoDB) + ExtJS 4.1 for its main components.  This will probably be the last ExtJS project I do for a while.  I think the time has come for me to begin learning Objective-C for OSX/iOS apps.  That, and I will focus on Twitter's Bootstrap and jQuery for a more lightweight set of widgets than what ExtJS offers me.  I also shouldn't settle in with a single product, i.e. ExtJS from Sencha.  Interacting with the company leaves a lot to be desired, but enough of that.

ghettoVCB.js

A little history first.  This project was inspired by a bioinformatics pipeline workflow web app I wrote called scriptQueue, which remotely ran fastQC jobs on a compute node through SSH, pulled the HTML results with SSHFS, and displayed them through the web service.  The goal of that project was to allow for additional workflow types, and the workflows themselves were to be absolutely and completely flexible.

scriptQueue used the child_process module that comes with NodeJS.  By spawning a child SSH process and watching its standard output and process status (running/not running), I realized I could keep complete control of not only any arbitrary command line application on the host OS, but also of any remote host through SSH piping.  Awesome!

Back to ghettoVCB.js, which is the topic of this post.  Naturally, one of the workflows that came to mind for scriptQueue was a ghettoVCB workflow for my MicroServer ESXi WhiteBox.  Since scriptQueue was designed around the workflow concept (both server side and client side), forking the project was as easy as adding a new workflow form on the client side and a new workflow route on the server side.  Bam.

For the ghettoVCB workflow, the webapp creates a folder './ghettoVCB' in the app's running path.  It then uses SSHFS to mount the remote ESXi script path which contains ghettoVCB.sh.  When the "Submit ghettoVCB" button is clicked, the webapp will try to save a configuration file (ghettoVCB.js.conf) and a list of virtual machines to back up (vms_to_backup.conf) on the remote ESXi host through the ghettoVCB mount.

Once the files are saved, the web app will kick off a remote child process with the proper command line parameters pointing to the ghettoVCB.sh script, the configuration file, and the list of VMs.
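As a rough sketch of that step, the child_process piece boils down to spawning ssh with the ghettoVCB arguments and listening to its output events (the host name and paths below are placeholders):

// A sketch of the remote child process described above.
var spawn = require("child_process").spawn;

var child = spawn("ssh", [
    "root@esxi-host",
    "/vmfs/volumes/datastore/scripts/ghettoVCB/ghettoVCB.sh",
    "-f", "/vmfs/volumes/datastore/scripts/ghettoVCB/vms_to_backup.conf",
    "-g", "/vmfs/volumes/datastore/scripts/ghettoVCB/ghettoVCB.js.conf"
]);

// Standard output/error chunks get appended to the job's meta-data document in MongoDB.
child.stdout.on("data", function (chunk) { console.log(chunk.toString()); });
child.stderr.on("data", function (chunk) { console.error(chunk.toString()); });
child.on("exit", function (code) { console.log("ghettoVCB finished with code " + code); });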



Below is a view of the History Grid, which visualizes all the meta-data stored for a child process.  This is where MongoDB comes in.  I store all standard output and standard error results in the meta-data of the child process in MongoDB through the mongoose API.  I also keep tabs on the PID number, its status, and its start and end dates.

As you can see, the display gets a little interesting when we reach the clone percentage.  The normal stdout replaces the same line with an updated number, but the way I currently save output appends a new line each time.  No biggy.



So there you have it.  That's ghettoVCB.js, a GPL ExtJS 4.1 / NodeJS WebApp GUI for running ghettoVCB.sh on a remote ESXi host.










Monday, September 10, 2012

MicroServer ESXi WhiteBox Part 2 - VM Backups easily with ghettoVCB

Hello world.  Part two of my MicroServer ESXi WhiteBox series discusses how to back up those precious virtual machines made with our free ESXi server.  Fortunately for us, the community has provided a very powerful script, ghettoVCB, to do just that.

You can find the documentation here.  




Installation



First, make sure you can SSH to the ESXi box.  Enable remote console via the vSphere client.

You can download the tar.gz here.  Extract the ghettoVCB folder into a scripts folder on a datastore using the vSphere client's Datastore Browser.

SSH to your ESXi box.  Make sure the script itself is executable (ghettoVCB.sh), then edit the configuration file (ghettoVCB.conf).  Finally, you will need to create a list of virtual machines.  I keep this part simple (for more options, like targeting specific disks, see the docs).  I create a file called vms_to_backup and include, one per line, the names of the virtual machines I intend to back up, keeping two copies of each.

# cd /vmfs/volumes/2TB_Spindle/scripts/ghettoVCB
# chmod 755 ghettoVCB.sh
# vi ghettoVCB.conf
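My configuration file is not shown here, but the handful of settings that matter look something like this (the backup volume path is specific to my datastore; VM_BACKUP_ROTATION_COUNT=2 gives the two copies mentioned above):

VM_BACKUP_VOLUME=/vmfs/volumes/2TB_Spindle/backups
DISK_BACKUP_FORMAT=thin
VM_BACKUP_ROTATION_COUNT=2
POWER_VM_DOWN_BEFORE_BACKUP=0
ENABLE_COMPRESSION=0
VM_SNAPSHOT_MEMORY=0
VM_SNAPSHOT_QUIESCE=0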



# vi vms_to_backup
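The list file is simply one VM name per line, for example (names other than vCenter are placeholders):

vCenter
dc01
storage01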



Good enough for my backup sanity.  Let's give the script a dry run to see if we have any problems.

#  ./ghettoVCB.sh -f vms_to_backup -g ./ghettoVCB.conf -d dryrun


2012-09-09 17:54:15 -- info: ###### Final status: OK, only a dryrun. ######





If all looks good, let's give it a whirl.


# ./ghettoVCB.sh -f vms_to_backup -g ./ghettoVCB.conf


2012-09-09 21:22:55 -- info: Successfully completed backup for vCenter!

2012-09-09 21:22:56 -- info: ###### Final status: All VMs backed up OK! ######

2012-09-09 21:22:56 -- info:  ghettoVCB LOG END 






The next step is to add a crontab entry to run the script at an interval of your choosing.  Since the conf file says VM_BACKUP_ROTATION_COUNT=2, it will keep the results of two runs of the backup script.
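On ESXi the root crontab lives at /var/spool/cron/crontabs/root; a weekly entry might look like this (the schedule and paths below are just an example to adapt):

0    1    *    *    0    /vmfs/volumes/2TB_Spindle/scripts/ghettoVCB/ghettoVCB.sh -f /vmfs/volumes/2TB_Spindle/scripts/ghettoVCB/vms_to_backup -g /vmfs/volumes/2TB_Spindle/scripts/ghettoVCB/ghettoVCB.conf > /vmfs/volumes/2TB_Spindle/scripts/ghettoVCB/ghettoVCB.log 2>&1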


Good luck.

Wednesday, August 15, 2012

MicroServer ESXi WhiteBox Part 1 - The Hardware

MicroServer ESXi White(silver)Box Part1

I recently researched a dedicated home office ESXi 5 server.  The goal was to have the capacity to run 4-8 virtual machines.  My implementation includes two dedicated Windows Server domain controllers, two Linux servers for a continuously integrated development environment, and a dedicated storage appliance (running Ubuntu Linux) to host fast SSD and large spindle disk storage.  This "datacenter in a box" was to have the capacity to add disks, pass through PCIe devices, and potentially team with more boxes (and a central NAS) for a complete "cloud" environment on the cheap.  I wanted this datacenter in a box to simply "plug n' play": no dependency on keyboard, mouse, or display.  Everything would exist virtually via the network, and thus power and ethernet are its only requirements.  Ultimately this box would end up hidden somewhere in a closet or under a desk, out of sight.  The greatest requirement was to be small, quiet, and use as little power as possible.

After a bit of research on the concept of creating one of these home server whiteboxes (http://ultimatewhitebox.com/), I found a nice blog entry called The Baby Dragon:  http://rootwyrm.us.to/2011/09/better-than-ever-its-the-babydragon-ii/, which detailed a fairly recent (as of 2011) configuration for a home office whitebox ESXi server.  From there I compiled an order on Newegg.

Part 1 of this series will detail the hardware procurement and installation.  Later parts will discuss ESXi installation, maintenance and caveats.




Motherboard, Processor, and RAM
The most critical aspect in terms of ESXi 5 is to have as many enterprise features as possible for extendability.  Virtualization features such as hardware passthrough are among the most important.  With passthrough, we can pass PCI Express or USB devices directly to a virtual machine.  That means I could add a RAID card, drop in a few disks, and add it to a Linux storage virtual machine, or add a TV tuner card to a Windows 7 Media Center server.

One of the most popular server boards at Newegg right now is from Supermicro, the X9SCM-F-O.  This MicroATX motherboard supports VT-d (via the Xeon processor), 32GB of RAM, and IPMI, to name a few features.  Like the Baby Dragon II, I will be using the Xeon 1230 processor.  For RAM, I chose Kingston's 8GB Hynix modules, which worked out to be a great deal.  I have only bought 16GB so far, since I am far from maximizing that much RAM in my setup, and I am currently encountering overheating issues with the CPU's stock heatsink and fan, which is preventing me from adding more VMs.  Overheating occurred while I was doing massive Windows Updates on two Server 2008 installs running on the fast SSD datastore.  Apparently the SSD is so fast that the processor had a hard time keeping up without getting too hot.

I am not entirely sure I have a real overheating issue, because Supermicro dumbs down the IPMI interface to the CPU temperature sensor, giving you OK, WARN, and CRITICAL values instead of the actual temperature.  That is not very useful when you are trying to see whether CRITICAL means it has reached Intel's recommended maximum temperature or not.  Currently, this is my biggest issue with the setup.  When CRITICAL is reached, the motherboard beeps very annoyingly until you lessen its load.  According to Intel's specs, the maximum operating temperature is 69 degrees C, which I could be reaching with the stock heatsink and fan.


Chassis and Power Supply
Chassis
For the chassis I chose the Lian-Li V354A MicroATX desktop case.  The case supports up to seven 3.5 inch drives and two 2.5 inch drives.  The image to the left shows a 2TB spindle disk on top and a 240GB SSD on the bottom of the drive cage.  The lower drive cage has been removed.

Power Supply
For the power supply, I chose the SeaSonic X650 Gold EPS 12V.  It is modular and rarely runs its fan.



ESXi Installation
For my build, as well as for the Baby Dragon II, a USB thumb drive was used for the ESXi operating system.  Since the motherboard has an onboard USB port, it seemed entirely logical.  It's slightly faster than a spindle disk and definitely takes up less space.

I used VMware Fusion to install ESXi 5 on my thumb drive.  VMware Player will work as well.  Pass the USB thumb drive through to a new blank virtual machine.  Do not worry about adding a hard drive to this VM, as you won't need it.  Boot up the blank VM with the ESXi installer ISO and follow the prompts, choosing the USB thumb drive as the install destination.

This image shows the thumb drive on the left side.  In the middle are the 6 SATA ports (2 SATA3 and 4 SATA2).  In the way of the shot is the lower drive cage.

Once the system boots up with the thumb drive, you can use the management console (via IPMI) to configure networking.


ESXi Updating without vCenter (or Update Manager)
Since I am running the free license, I will need to fill the gaps that vCenter Server would otherwise fill, and patching is one of them.  Thankfully, the process is quite easy.  First, we need to know what updates are available.

SSH into your ESXi server and run the following command:

# esxcli software sources vib list -d http://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml | grep Update

A list of updates (if there are any) will be returned.  Pick up the latest cumulative update from the VMware website, http://www.vmware.com/patchmgr/download.portal, and upload it to a datastore on the ESXi host.

# esxcli software vib update -d /vmfs/volumes/datastore_name/patch_location

In another console session, launch the following to keep an eye on the status:

# watch tail -n 50 /var/log/esxupdate.log

Issues with updating?
If you have any issues installing the updates, and they are related to "altbootbank", then you will need to "fix" a few partitions.  

# dosfsck -v -r /dev/disks/partition_id

The ID of the bad partition on my thumb drive was "mpx.vmhba32:C0:T0:L0:5".  To fix it, I answered Y at the prompt when it notified me of errors.

Thursday, July 26, 2012

Creating SharePoint 2010 reports with Javascript and Twitter Bootstrap

SharePoint 2010 is a decent CMS from Microsoft.  It is great for customized document storage, customized data lists, and of course Office Web Apps for cross-browser, web-based viewers/editors (Word, Excel, PowerPoint, Visio, etc).  SharePoint 2010 can also be extended with open source technologies rather easily through its REST services in order to create customized HTML reports, rather than viewing everything through the clunky ribbon interface in SharePoint itself.

In this presentation, I will use NodeJS, Express, Jade, coffee-script, and Twitter's Bootstrap to construct a simple list report generator that page-breaks on every item.  We will pretend that the SharePoint 2010 data is a custom list with three required columns, Title (string), Content (rich text), and Feedback (rich text), plus two optional columns that may not contain data, TextArea1 and TextArea2.

Imagine that this custom list contains a lot of text in each column.  Viewing it in SharePoint would show it as a grid, with each associated property listed horizontally (think spreadsheet).  There would be much scrolling or resizing of the browser.  Why can't we view this data like a book or a white paper?  That is what I intend to accomplish in this presentation.

First of all, you'll need to pick up all the tools.  I am using Node 0.8.3 with the following modules:  Express (3.0.0beta7), Jade (0.27.0), coffee-script (1.3.3), and request (2.9.203).  All of the modules are on npm.  See here for instructions on running coffee-script from the command line.

Next, pick up the latest Bootstrap and its JavaScript plugins.  Finally, go get jQuery.


$ mkdir reportGenerator
$ cd reportGenerator
$ mkdir public routes views



Now that we have our public folder, let's put the client-side stuff in there.

$ mv ~/Downloads/bootstrap ./public
$ mv ~/Downloads/jQuery.min.js ./public/bootstrap/js


Let's set up our web server.

$ vim app.coffee
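The file contents are not embedded; a sketch under the module versions listed above (Express 3 beta, Jade) could be:

# app.coffee -- web server for the report generator (a sketch)
express = require 'express'
routes  = require './routes/index'
http    = require 'http'
path    = require 'path'

app = express()

app.set 'port', process.env.PORT or 3000
app.set 'views', __dirname + '/views'
app.set 'view engine', 'jade'
app.use express.logger 'dev'
app.use app.router
app.use express.static path.join(__dirname, 'public')

app.get '/', routes.index

http.createServer(app).listen app.get('port'), ->
  console.log "Report generator listening on port #{app.get 'port'}"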




Now we need to create the route app.coffee depends on.

$ vim ./routes/index.coffee
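The route pulls the list items from SharePoint 2010's ListData.svc REST service and hands them to the template; the host, list name, and authentication are placeholders you would adapt to your farm:

# routes/index.coffee -- fetch the custom list and render the report (a sketch)
request = require 'request'

exports.index = (req, res) ->
  options =
    url: 'http://sharepoint.example.com/_vti_bin/ListData.svc/MyCustomList'
    headers: { Accept: 'application/json' }
    json: true

  request options, (err, response, body) ->
    return res.send 500, 'Could not reach SharePoint' if err?
    # ListData.svc wraps the rows in an OData "d" envelope.
    res.render 'index', { title: 'List Report', items: body.d.results }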



Cool.  So this route renders a Jade template.  Let's make the layout and index templates.

$ vim ./views/layout.jade
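A sketch of the layout, pulling in the Bootstrap assets copied into ./public earlier, plus a small print rule so each item starts a new page (the .report-item class name is my own):

doctype 5
html
  head
    title= title
    link(rel='stylesheet', href='/bootstrap/css/bootstrap.min.css')
    style
      | .report-item { page-break-after: always; }
  body
    .container
      block content
    script(src='/bootstrap/js/jQuery.min.js')
    script(src='/bootstrap/js/bootstrap.min.js')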


$ vim ./views/index.jade
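And a sketch of the index template, iterating over the items returned from the route above; the unescaped != buffering keeps SharePoint's rich text HTML intact, and the optional columns only render when they contain data:

extends layout

block content
  each item in items
    .report-item
      h1= item.Title
      h3 Content
      div!= item.Content
      h3 Feedback
      div!= item.Feedback
      if item.TextArea1
        div!= item.TextArea1
      if item.TextArea2
        div!= item.TextArea2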




Now that the templates are done, we are ready to rock.  Start it up:

$ coffee app.coffee

Print to PDF.  Look at that!  A nice report, generated with page breaks on each main topic (item), based on SharePoint 2010 data, in a simple yet expressive way.