Sunday, September 16, 2012

MicroServer ESXi WhiteBox Part 3 - VM Backup GUI with ghettoVCB.js

Hello World.  I've spent the last few weeks working on a GPL-licensed interface for ghettoVCB called ghettoVCB.js.  This interface uses NodeJS (Express + MongoDB) and ExtJS 4.1 for its main components.  This will probably be the last ExtJS project I do for a while; I think the time has come for me to begin learning Objective-C for OSX/iOS apps.  Beyond that, I will focus on Twitter's Bootstrap and jQuery for a more lightweight set of widgets than what ExtJS offers me.  I also shouldn't settle on a single product like Sencha's ExtJS.  Interacting with that company leaves a lot to be desired, but enough about that.

ghettoVCB.js

A little history first.  This project was inspired by a bioinformatics pipeline workflow web app I wrote called scriptQueue, which remotely ran fastQC jobs on a compute node through SSH, pulled the HTML results back with SSHFS, and displayed them through the web service.  The goal of that project was to allow for additional workflow types, and the workflows themselves were to be completely flexible.

scriptQueue used the child_process module that ships with NodeJS.  By spawning a child SSH process and watching its standard output and process status (running/not running) through events, I realized I could keep complete control of not only any arbitrary command line application on the host OS, but of one on any remote host through SSH piping.  Awesome!

Back to ghettoVCB.js, the topic of this post.  Naturally, one of the workflows that came to mind for scriptQueue was a ghettoVCB workflow for my MicroServer ESXi WhiteBox.  Since scriptQueue was designed around the workflow concept (both server side and client side), forking the project was as easy as adding a new workflow form on the client side and a new workflow route on the server side.  Bam.

For the ghettoVCB workflow, the webapp creates a './ghettoVCB' folder in the app's running path.  It then uses SSHFS to mount the remote ESXi script path, which contains ghettoVCB.sh.  When the "Submit ghettoVCB" button is clicked, the webapp tries to save a configuration file (ghettoVCB.js.conf) and a list of virtual machines to back up (vms_to_backup.conf) onto the remote ESXi host through the ghettoVCB mount.

Once the files are saved, the web app sends off a remote child process with the proper command line parameters pointing to the ghettoVCB.sh script, the configuration file, and the VM list file.
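The assembled invocation might look something like this sketch; the datastore path is an example from my setup, and the file names mirror the ones the app saves.

```javascript
// Sketch: assembling the remote ghettoVCB command line.  The script
// directory is an assumption matching my datastore layout.
function ghettoVcbCommand(scriptDir) {
  return [
    scriptDir + '/ghettoVCB.sh',
    '-f', scriptDir + '/vms_to_backup.conf',
    '-g', scriptDir + '/ghettoVCB.js.conf',
  ].join(' ');
}
```

The resulting string is then handed to an SSH child process targeting the ESXi host.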



Below is a view of the History Grid, which visualizes all the metadata stored for a child process.  This is where MongoDB comes in: I store all standard output and standard error results in the child process metadata in MongoDB through the mongoose API.  I also keep tabs on the PID, its status, and its start and end dates.

As you can see, the display gets a little interesting when we reach the clone percentage.  The script normally overwrites the same stdout line with an updated number, but the way I currently save output, each update is appended as a new line.  No biggie.
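One possible cleanup (a sketch of an idea, not what the app currently does) would be to split incoming chunks on carriage returns before appending, so each progress update lands on its own tidy line:

```javascript
// Sketch: ghettoVCB's clone progress rewrites a single line using
// carriage returns; treating \r like \n keeps each update as its own
// log line instead of a run-on append.
function appendOutput(log, chunk) {
  return log.concat(chunk.split(/\r\n|\r|\n/).filter(Boolean));
}
```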



So there you have it.  That's ghettoVCB.js, a GPL-licensed ExtJS 4.1 / NodeJS web app GUI for running ghettoVCB.sh on a remote ESXi host.

Monday, September 10, 2012

MicroServer ESXi WhiteBox Part 2 - VM Backups easily with ghettoVCB

Hello world.  Part two of my MicroServer ESXi WhiteBox series discusses how to back up those precious virtual machines running on our free ESXi server.  Fortunately for us, the community has provided the very powerful ghettoVCB script to do just that.

You can find the documentation here.  




Installation



First, make sure you can SSH to the ESXi box: enable remote SSH access via the vSphere client.

You can download the tar.gz here.  Extract the ghettoVCB folder into a scripts folder on a datastore using the vSphere client's Datastore Browser.

SSH to your ESXi box.  Make sure the script itself (ghettoVCB.sh) is executable, then edit the configuration file (ghettoVCB.conf).  Finally, you will need to create a list of virtual machines.  I keep this part simple (for more options, like targeting specific disks, see the docs): I create a file called vms_to_backup and, per line, include the names of the virtual machines I intend to back up and maintain two copies of.

# cd /vmfs/volumes/2TB_Spindle/scripts/ghettoVCB
# chmod 755 ghettoVCB.sh
# vi ghettoVCB.conf



# vi vms_to_backup
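For reference, the two files look roughly like this.  The backup volume path and VM names are examples from my setup, and the conf is abridged; see the ghettoVCB docs for the full option list.

```
# ghettoVCB.conf (abridged, example values)
VM_BACKUP_VOLUME=/vmfs/volumes/2TB_Spindle/backups
DISK_BACKUP_FORMAT=thin
VM_BACKUP_ROTATION_COUNT=2

# vms_to_backup -- one VM display name per line (example names)
vCenter
storage01
```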



Good enough for my backup sanity.  Let's give the script a dry run to see if we have any problems.

#  ./ghettoVCB.sh -f vms_to_backup -g ./ghettoVCB.conf -d dryrun


2012-09-09 17:54:15 -- info: ###### Final status: OK, only a dryrun. ######





If all looks good, let's give it a whirl.


# ./ghettoVCB.sh -f vms_to_backup -g ./ghettoVCB.conf


2012-09-09 21:22:55 -- info: Successfully completed backup for vCenter!

2012-09-09 21:22:56 -- info: ###### Final status: All VMs backed up OK! ######

2012-09-09 21:22:56 -- info:  ghettoVCB LOG END 

The next step is to add a crontab entry to run the script at an interval of your choosing.  Since the conf file says VM_BACKUP_ROTATION_COUNT=2, it will keep the results of the last two backup runs.
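As an example, a weekly run might look like the entry below.  The schedule and paths are illustrations for my setup; on ESXi the root crontab lives at /var/spool/cron/crontabs/root.

```
# run ghettoVCB at 02:00 every Saturday (example schedule)
0 2 * * 6 /vmfs/volumes/2TB_Spindle/scripts/ghettoVCB/ghettoVCB.sh -f /vmfs/volumes/2TB_Spindle/scripts/ghettoVCB/vms_to_backup > /dev/null 2>&1
```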


Good luck.

Wednesday, August 15, 2012

MicroServer ESXi WhiteBox Part 1 - The Hardware

MicroServer ESXi White(silver)Box Part1

I recently researched a dedicated home office ESXi 5 server.  The goal was the capacity to run 4-8 virtual machines.  My implementation includes two dedicated Windows Server domain controllers, two Linux servers for a continuously integrated development environment, and a dedicated storage appliance (running Ubuntu Linux) hosting both fast SSD and large spindle-disk storage.  This "datacenter in a box" was to have the capacity to add disks and pass through PCIe devices, with the potential to team up with more of the same (plus a central NAS) for a complete "cloud" environment on the cheap.  I wanted this datacenter in a box to simply plug and play: no dependency on keyboard, mouse, or display.  Everything would exist virtually via the network, so power and ethernet are its only requirements.  Ultimately this box would end up hidden somewhere in a closet or under a desk, out of sight.  The greatest requirement was to be small, quiet, and use as little power as possible.

After a bit of research on the concept of creating one of these home server whiteboxes (http://ultimatewhitebox.com/), I found a nice blog entry called The Baby Dragon:  http://rootwyrm.us.to/2011/09/better-than-ever-its-the-babydragon-ii/, which detailed a fairly recent (as of 2011) configuration for a home office whitebox ESXi server.  From there I compiled an order on Newegg.

Part 1 of this series will detail the hardware procurement and installation.  Later parts will discuss ESXi installation, maintenance and caveats.




Motherboard, Processor, and Ram
The most critical requirement for ESXi 5 is to have as many enterprise features as possible for extensibility, and hardware passthrough is one of the most important.  With passthrough, we can hand PCI Express or USB devices directly to a virtual machine.  That means I could add a RAID card, drop in a few disks, and attach it to a Linux storage virtual machine, or add a TV tuner card to a Windows 7 Media Center server.

One of the most popular server boards on Newegg right now is Supermicro's X9SCM-F-O.  This MicroATX motherboard supports VT-d (via the Xeon processor), 32GB of RAM, and IPMI, to name a few features.  Like the Baby Dragon II, I will be using the Xeon E3-1230 processor.  For RAM, I chose Kingston's 8GB Hynix modules, which worked out to be a great deal.  I only bought 16GB so far, since I am far from maximizing that much RAM in my setup, and I am currently encountering overheating issues with the CPU's stock heatsink and fan, which is preventing me from adding more VMs.  The overheating occurred while I was running massive Windows Updates on two Server 2008 installs hosted on the fast SSD datastore.  Apparently the SSD is so fast that the processor had a hard time keeping up without getting too hot.

I am not entirely sure I have a real overheating issue, because Supermicro dumbs down the IPMI view of the CPU temperature sensor, giving you OK, WARN, and CRITICAL values instead of the actual temperature.  That is not very useful when you are trying to determine whether CRITICAL means the CPU has reached Intel's recommended maximum temperature.  Currently, this is my biggest issue with the setup: when CRITICAL is reached, the motherboard beeps very annoyingly until you lessen its load.  According to Intel's specs, the maximum operating temperature is 69 degrees C, which I could be reaching with the stock heatsink and fan.


Chassis and Power Supply
Chassis
For the chassis, I chose the Lian Li V354A MicroATX desktop case.  The case supports up to seven 3.5-inch drives and two 2.5-inch drives.  The image to the left shows a 2TB spindle disk on top and a 240GB SSD on the bottom of the drive cage.  The lower drive cage has been removed.

Power Supply
For the power supply, I chose the SeaSonic X650 Gold (EPS 12V).  It is modular and rarely spins up its fan.



ESXi Installation
For my build, as for the Baby Dragon II, a USB thumb drive hosts the ESXi operating system.  Since the motherboard has an onboard USB port, this seemed entirely logical: it's slightly faster than a spindle disk and definitely takes up less space.

I used VMware Fusion to install ESXi 5 onto my thumb drive; VMware Player works as well.  Pass the USB thumb drive through to a new, blank virtual machine.  Do not worry about adding a hard drive to this VM, as you won't need it.  Boot the blank VM with the ESXi installer ISO and follow the prompts, choosing the USB thumb drive as the install destination.

This image shows the thumb drive on the left side.  In the middle are the six SATA ports (two SATA3 and four SATA2).  The lower drive cage is in the way of the shot.

Once the system boots up with the thumb drive, you can use the management console (via IPMI) to configure networking.


ESXi Updating without vCenter (or Update Manager)
Since I am doing the free thing, I will need to fill the gaps that vCenter Server would fill, and patching is one of them.  Thankfully, the process is quite easy.  First, we need to know what updates are available.

SSH into your ESXi server and run the following command:

# esxcli software sources vib list -d http://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml | grep Update

A list of updates (if there are any) will be returned.  Pick up the latest cumulative update from the VMware patch portal:  http://www.vmware.com/patchmgr/download.portal, and upload it to a datastore on the ESXi host.

# esxcli software vib update -d /vmfs/volumes/datastore_name/patch_location

In another console session, launch the following to keep an eye on the status:

# watch tail -n 50 /var/log/esxupdate.log

Issues with updating?
If you have any issues with installing the updates, and its related to "altbootbank", then you will need to "fix" a few partitions.  

# dosfsck -v -r /dev/disks/partition_id

The bad partition on my thumb drive is called "mpx.vmhba32:C0:T0:L0:5".  To fix it, I answered Y at the prompt when it notified me of errors.

Thursday, July 26, 2012

Creating SharePoint 2010 reports with Javascript and Twitter Bootstrap

SharePoint 2010 is a decent CMS from Microsoft: great for customized document storage, customized data lists, and of course Office Web Apps for cross-browser web-based viewers/editors (Word, Excel, PowerPoint, Visio, etc.).  SharePoint 2010 can also be extended with open source technologies rather easily via its REST services, in order to create customized HTML reports instead of viewing data through the clunky ribbon interface in SharePoint itself.

In this presentation, I will use NodeJS, Express, Jade, coffee-script, and Twitter's Bootstrap to construct a simple list report generator that page-breaks on every item.  We will pretend that the SharePoint 2010 data is a custom list with three required columns, Title (string), Content (rich text), and Feedback (rich text), plus two optional columns that may not contain data, TextArea1 and TextArea2.
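For context, SharePoint 2010 exposes list data over REST at _vti_bin/listdata.svc; a query for our pretend list might be built like the sketch below.  The site URL and list name are placeholders.

```javascript
// Sketch: building a listdata.svc query URL for the custom list.
// The `request` module would then GET this with Accept: application/json.
function listUrl(site, list, columns) {
  return site + '/_vti_bin/listdata.svc/' + list +
    '?$select=' + columns.join(',');
}
```

For example, `listUrl('http://sp2010', 'Reports', ['Title', 'Content', 'Feedback'])` selects just the columns the report needs.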

Imagine that this custom list contains a lot of text in each column.  Viewing it in SharePoint would show a grid, with each item's properties listed horizontally (think spreadsheet).  There would be much scrolling or resizing of the browser.  Why can't we view this data like a book or a white paper?  That is what I intend to accomplish in this presentation.

First of all, you'll need to pick up all the tools.  I am using Node 0.8.3 with the following modules:  Express (3.0.0beta7), Jade (0.27.0), coffee-script (1.3.3), and request (2.9.203).  All of the modules are on npm.  See here for instructions on running coffee-script from the command line.

Next, pick up the latest bootstrap and its javascript plugins.  Finally go get jQuery.


$ mkdir reportGenerator
$ cd reportGenerator
$ mkdir public routes views



Now that we have our public folder, let's put the client-side stuff in there.

$ mv ~/Downloads/bootstrap ./public
$ mv ~/Downloads/jQuery.min.js ./public/bootstrap/js


Let's set up our web server.

$ vim app.coffee




Now we need to create the route that app.coffee depends on.

$ vim ./routes/index.coffee



Cool.  So this route renders a Jade template.  Let's make the layout and index templates.

$ vim ./views/layout.jade


$ vim ./views/index.jade




Now that the templates are done, we are ready to rock.  Start it up:

$ coffee app.coffee

Print to PDF.  Look at that!  A nice report, generated with page breaks on each main topic (item), based on SharePoint 2010 data, in a simple yet expressive way.




Saturday, July 21, 2012

Installing coffee-script cli


Today I will be updating my box to the latest NodeJS and, after doing so, installing the coffee-script command line interface, which allows coffee-script to be run from such things as crontab, init.d, or even Nagios (see my other post).  I will also be installing nodemon, which will act as a daemon for my coffee-script web applications.  Nodemon watches for changes and restarts the application when it finds any, which is pretty neat.

$ wget http://nodejs.org/dist/v0.8.3/node-v0.8.3.tar.gz
$ tar zxvf node-v0.8.3.tar.gz

$ cd node-v0.8.3


$ ./configure

{ 'target_defaults': { 'cflags': [],
                       'default_configuration': 'Release',
                       'defines': [],
                       'include_dirs': [],
                       'libraries': []},
  'variables': { 'host_arch': 'x64',
                 'node_install_npm': 'true',
                 'node_install_waf': 'true',
                 'node_prefix': '',
                 'node_shared_openssl': 'false',
                 'node_shared_v8': 'false',
                 'node_shared_zlib': 'false',
                 'node_use_dtrace': 'false',
                 'node_use_etw': 'false',
                 'node_use_openssl': 'true',
                 'target_arch': 'x64',
                 'v8_no_strict_aliasing': 1,
                 'v8_use_snapshot': 'true'}}
creating  ./config.gypi
creating  ./config.mk



$ make
$ sudo make install
$ sudo npm install -g coffee-script
$ sudo npm install -g nodemon

Saturday, July 14, 2012

Sending custom Nagios notification emails with coffeescript

Let's imagine for a second that you are using Nagios to monitor systems and services.  Nagios can send you notifications upon events, but the default email notification command is kind of boring.

Let's spice it up with coffeescript!


Nagios


define contact{
        contact_name                    mike
        alias                           Mike Kunze
        service_notification_period     24x7
        host_notification_period        24x7
        service_notification_options    w,u,c,r
        host_notification_options       d,r
        service_notification_commands   notify-service-with-nodejs
        host_notification_commands      notify-host-by-email
        email                          
}


define command {
    command_name    notify-service-with-nodejs
    command_line    /opt/bin/notify-service.coffee "$HOSTNAME$" "$SERVICEDESC$" "$HOSTADDRESS$" "$NOTIFICATIONTYPE$" "$SERVICESTATE$" "$LONGDATETIME$" "$SERVICEOUTPUT$"
}



CoffeeScript



This script will be executed by Nagios and will receive the macros provided by the Nagios command.
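A sketch of what such a script might do, written in plain JavaScript here rather than the original coffee-script.  The seven macros arrive as arguments in the order the command definition above passes them; the formatting is my own illustration.

```javascript
// Sketch: build an email body from the Nagios macros passed as argv.
function formatNotification(args) {
  var host = args[0], service = args[1], address = args[2],
      type = args[3], state = args[4], when = args[5], output = args[6];
  return type + ': service ' + service + ' on ' + host +
    ' (' + address + ') is ' + state + '\n' +
    'Time: ' + when + '\n' +
    'Info: ' + output;
}
```

In the real script you would call `formatNotification(process.argv.slice(2))` and then hand the body to sendmail or an SMTP module of your choice.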

Saturday, June 16, 2012

Windows 7 Updates (service not running error)

Today, I brought back up an old machine of mine running Windows 7.  It had been a good six months since I last used it, so naturally, Windows Update was not working.

I managed to find a nice thread on social.technet:  http://social.technet.microsoft.com/Forums/en/w7itprogeneral/thread/6a8889a8-65b2-4012-9cf8-2689f47b21e4


stop Windows Update Service


delete C:\windows\softwaredistribution\*.*


start Windows Update Service


check for updates




Hope this helps if you run into the same issue.

Monday, June 11, 2012

NodeJS and OpenShift, a PortalGNU presentation

Today, I am going to get my platform service configured and ready for development.  For PortalGNU, I will be using Red Hat OpenShift.  

First we select a web cartridge.  In my case, I will select Node.JS.



Give the application a name.


The next screen includes the necessary information for setting up git and public keys.  I installed the OpenShift toolkit (rhc), so I will continue there.

OpenShift includes shared databases and other features, called cartridges.  In my case, I want a MongoDB cartridge.

$ rhc app cartridge add -a portal -c mongodb-2.0



RESULT:

MongoDB 2.0 database added.  Please make note of these credentials:

       Root User:  root
   Root Password:  *******

   Database Name:  portal

Connection URL: mongodb://127.0.0.1:27017

You can manage your new MongoDB by also embedding rockmongo-1.1


So at this point, I have mongoDB installed.  Let's check out rockmongo, too.


$ rhc app cartridge add -a portal -c rockmongo-1.1


RESULT:

rockmongo-1.1 added.  Please make note of these MongoDB credentials again:

   RockMongo User    : root
   RockMongo Password: *******

URL: https://portal-portalgnu.rhcloud.com/rockmongo/


Sweet, a web interface.  Now I will create a new database for the Portal CMS.



Cool.  From now on, package.json in my git repository will dictate how this application works.  For now, I have a cool default landing page:


Thursday, March 22, 2012

Creating a content driven continuously tested and integrated source controlled website with NodeJS and Express


The next set of blog entries will be dedicated to bringing my pet project PortalGNU.com/org online.

Portal GNU will contain content dedicated to the open source movement.  The website will act as a blueprint for spawning your own personal NodeJS + Express + MongoDB driven website.

Before PortalGNU goes live, my blog will be dedicated to describing the process from github to travis-ci to amazon EC2.

Right now this is a work in progress.  More to come.

Friday, March 16, 2012

Server 8 Beta - Installing Core, Install GUI later

I'm beta testing Windows Server 8, which is now in public beta (link).

This is the first time I've tried installing just the core, but it seems to make sense now.



It is an extremely quick way to get a box online and exposing RDP without the hassle of navigating a GUI.

To quickly get your hostname, networking, and RDP configured, simply run sconfig.cmd



You may still use the GUI if you'd prefer.  The new Windows 8 interface plays well with the Apple trackpad.  I'll warn you, though: this will take a while.  To install the GUI tools later, simply open PowerShell and issue the following:

Add-WindowsFeature Server-Gui-Shell



Finally you'll reboot.


Friday, February 17, 2012

Get Blogger Posts with NodeJS

Today I wrote a blogger scraper for my new NodeJS website:  http://mikekunze.info

This script connects to my blog's RSS/JSON feed and sends its data to a mongolab-hosted MongoDB instance.

Pretty nifty.  Now I have content for my website, and still can use blogger's API to create that content.

One of the bonuses of this script is that it will not duplicate entries.  This feature checks title names.  If we wanted to be more dynamic, we would create hashes of the entry content... maybe some day.

Wednesday, February 1, 2012

Massive Bang.js updates

Hello internet.

I have been spending quite a bit of time refactoring the server-side code for bang.js lately.  I started a total conversion to coffeescript.  It's quite amazing how easy the translation is, since coffeescript natively supports classical inheritance.

bang.coffee is our main file, which is executed by the iced coffee-script compiler.

abstractServer.coffee and server.coffee contain the coffee ingredients to boot up the server.

You'll also notice I've made significant improvements to the layout of the project.  There are now two main folders: client and server.

Tuesday, January 17, 2012

Books I am reading

Hello!

First, I'll recommend JavaScript: The Good Parts.

Currently, I am reading the following:

JavaScript Web Applications by Alex MacCaw
JavaScript Patterns by Stoyan Stefanov

The JavaScript Patterns book is exceptional for getting an idea of the power and customization JavaScript offers.

Wednesday, January 4, 2012

ExtJS 4.0 MVC dynamic controllers

ExtJS 4 has been designed with a class-based, inheritance-driven MVC core framework alongside its rich set of UI controls.  The current ExtJS 4.0.7 framework supports the concept of an Application.  This application class, in my experience, works well when there is just one defined and initialized.

An application can have many controllers, however.  Each controller can take models, stores, and views.  The controller is the meat of an application and contains the logic for its views.  It is the controller we want to add dynamically at run time.

Let's set up our Ajax getJS loader.  Then let's get the interface, which will contain the application definition.



The long and short of it: you need to keep a reference to the application itself.  In my case, Ext.bang.util.app is where I store it.




Now, when I want to add another "sub" application (in this case, a login window), we use the application reference to call getController('login'), then call init() to run its init method.
remotejs.getJS({ js: 'login.js', app: 'bang'}, Ext.bang.util.run);




The controller does the rest. Take a look at the bang client code to see how it all fits together.

Bang.js

Hello.  It's been a while.  I took a break from coding up until around December, and since then I've been working on a new framework called bang.js.

I will incorporate many concepts from portalstack for its security and UI.  Take a look at my github page:

https://github.com/mikekunze/bang.js