How I cut the cord to subscription TV!

I recently reviewed our family budget, trying to find areas to trim the fat, and one of the things that just ate me up was how much money we were paying for subscription TV. Our monthly TV payment was $115 for DirecTV, and I can assure you I rarely, if ever, got $115 worth of use out of it!

A friend of mine suggested that I look into a Roku, which is essentially a small device that connects to your Wi-Fi network and serves content to your TV via the internet. You can install scores of “channels” on it, such as Netflix, Hulu Plus, Crackle, the NASA channel, Pandora, CNET, and many more. Many channels are free, but some premium channels such as Netflix and Hulu Plus have their own subscription fees. In addition to the official channels offered in the Roku Channel Store, there are many private channels that you can install. Here is a list of private channels, compiled in April 2011, that I came across. You can piece together a channel list that suits your needs. I decided to purchase the Roku 2 XD, which offered everything I wanted for a one-time payment of $79, with no recurring subscription fee. With just a power cord and an HDMI cable, I was in business.

One thing that happens when you switch to a Roku, or internet TV in general, is that you tend to quit using your TV as background noise. With subscription TV, my family had a tendency to leave the TV on until something vaguely interesting came on, and then sit and watch it. Under this model, we instead seek out the programs that we want to watch and watch them when we wish. Using Hulu Plus, we have access to entire series of many of the shows we would typically watch. Often there is a several-day delay between the live run of a program and the time that it shows up on Hulu, but considering how frequently we would previously DVR shows and view them much later, very little has changed here. With Netflix, we have access to the entire Netflix library on demand.

Roku home screen showing channel selection with Netflix focused.

Even with these pieces in place, I knew that I didn’t want to be totally cut off from live TV. I still plan on watching every Dallas Cowboys game as it happens, breaking news, certain live shows, and more. Consequently, I decided to get an HD TV antenna so that I could watch all OTA (over-the-air) channels as well. I mounted this antenna, which I found at Best Buy, to my roof using the existing coax in my house. Depending on how close you are to the broadcast towers, you might be able to get by with less of an antenna. You can determine your exact needs by entering your address at www.antennaweb.org, which will tell you how far you are from various stations and the exact compass heading at which to point your antenna. If you plan on splitting the line to multiple TVs, you may want to look into using a line amplifier (around $20) to reduce attenuation. Since I am currently only serving one TV, I haven’t installed an amplifier at the splitter, but when I bring more TVs online in my house I may opt to do so.

As a side note, I found it kind of interesting determining which ends of the coax on the outside of the house went to specific rooms in the house.  That is probably a post worthy of its own space!

Once the antenna was installed, I was shocked by how many OTA channels there are! For example, on standard cable/satellite, and on the old analog antenna before that, my local channel 8 WFAA (ABC affiliate) consisted of a single channel. Now that channel exists as 8-1, along with 8-2, which is constant weather from Channel 8, and 8-3, which is… well, I don’t even know yet, but it’s some kind of programming also provided by WFAA. I have found that many of the channels have sub-channels like this. After letting the TV scan and find channels, we were going through everything it found, and my kids started watching 62-2, a children’s channel called Qubo. Until now, I had always thought Qubo was a subscription channel, and I had no idea there were OTA channels that high up the dial.

I have also found that the picture from the HD antenna is stunning! From what I have read on the topic, because of the compression techniques that the cable/satellite companies use for broadcasting, you will never see from them the picture quality that you can get from a digital antenna.

So, what is missing?  One thing that I plan on adding is a DVR solution for broadcast TV.  I haven’t really figured out exactly what I am going to do here, but it seems that there are numerous options.  They do sell standalone DVR units like this, but I am considering setting up a media server.  That way I can pretty much play anything I want from the media server, through the Roku using something like Plex or Firefly.

Another point worth mentioning is that not all network shows are available on Hulu Plus, although almost all of the ones we have been interested in are there. What many people apparently do is subscribe to torrent sites which download specific shows for you once they become available (usually almost immediately after airing). The files are automatically placed onto your media server, and then you can watch them via your Roku. I can’t speak to the legality of this approach, but it is a method that I have seen nonetheless. I believe that a DVR like the one mentioned above would eliminate much of the need for this approach.

Bottom line… with both Netflix and Hulu Plus, my total monthly expense is now under $15.00. Amazingly, I am saving $1,200 per year over what I was spending previously! If there is a trade-off in usability, and many would argue that there isn’t one, it is FAR outweighed by the savings over time.

Video blogging on the cheap – not as easy as it should be!

I just recorded two screencast videos last night that I wanted to use as video blog entries. Seems easy, right? Just find a video host!

Unfortunately, “easy” is far from the way I would describe my experience, and I am somewhat exasperated by the process at the moment. So here is the detail: I have two videos, one 9:17 long and the other 15:02. Both of these are recorded as OGV files, which is part of the free, open, cross-platform OGG media container format. All I need to do is find a service to host and stream them. So far so good, right?

I decided that I would try to look around for a video hosting solution other than YouTube, since I have posted screencasts on there before and the video degradation was horrendous.  After some googling and reading reviews, I started down a spiraling path of services leading to nowhere, beginning with….

  • Vimeo (verdict: fail) – Vimeo seemed like a great place to start. Any time I have seen their videos, I have never noticed any degradation. They offer HD and the service is free – kind of. In actuality, there were three issues for me here.
    • bad: They do not support the OGV file format, so I had to convert the OGV to an AVI before uploading it (a conversion sketch follows this list). Of course, they don’t actually tell you this until you have sat through an entire upload first! Some degradation occurred during that conversion, so even after uploading it, the quality wasn’t as good as I would have liked.
    • bad: Free accounts are only allowed to upload a single HD video per week.  Already in my first try I had two, so that is a show stopper.
    • good: The HD version that was uploaded was better than many of the alternatives.
    • bad: You can’t embed the HD version. If users wish to see it, they have to click through the player and watch it on the Vimeo site.
  • Viddler (verdict: fail) – Viddler seemed like a good alternative to Vimeo.  However, ultimately it doesn’t seem to be the direct fit either.
    • bad: Just as with Vimeo, they do not support OGV.
    • good: As opposed to Vimeo, at least they tell you about the lack of OGV support as soon as you attempt to upload!
    • bad:  Since I had already converted one of my videos to AVI, I went ahead and tried it.  Even in full screen mode, the degradation was bad enough that I couldn’t see what I was typing in the video, which is kind of the point!
  • YouTube (verdict: fail) – After nixing Viddler,  I thought “why not at least try YouTube again?”, and I was soon reminded of exactly why not.
    • good: They support OGV!
    • bad: Even my 9:41 video was deemed “too long” and was promptly removed.
    • bad: I couldn’t even get far enough to report on degradation!
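Since a couple of these services forced me into an OGV-to-AVI conversion, here is a rough sketch of the kind of command-line conversion I mean, using ffmpeg. The filenames and quality setting are placeholders, and your ffmpeg build may use slightly different flags:

# Convert an OGV screencast to AVI before uploading.
# -qscale sets variable-bitrate quality (lower means better quality);
# 2 keeps the re-encode loss low at the cost of a larger file.
ffmpeg -i screencast.ogv -qscale 2 screencast.avi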

So just as I began typing this blog entry to air my dissatisfaction with things in general, I came across this post praising the combination of using Jing to record and Screencast.com to host the video. The video clarity on his example was really impressive. “Ha!” I thought, “finally!” So I now have one more to add to my list:

  • Screencast.com (verdict: fail)
    • good: They allow you to upload any file type whatsoever! (I think anyway)
    • bad: They only embed a few different file types into players.  OGV is again not supported.

So here I sit, still without a good solution to what initially seemed like it should be a no-brainer of a problem to solve.  The amount of time that I have wasted to still be sitting at square one is terribly aggravating.  Between upload times and service-specific encoding times, I am more hours deep into this than I care to think about.

HTML5 to the rescue?

One thing that came out of this search is that I learned that HTML5 natively supports OGG/OGV using the <video> tag (more here), and based on an example on this page, it looks very cool! The only fundamental thing holding me back at the moment is that there doesn’t appear to be any option that allows your user to ‘full screen’ the video out of the player. So close, yet… still no solution!
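For reference, here is a minimal sketch of what that markup looks like; the filename and dimensions are placeholders. Browsers that can decode Ogg/Theora (Firefox, Chrome, and Opera at the time of writing) play it natively, while others fall back to the text inside the tag:

<!-- Minimal HTML5 video element pointing at an OGV screencast. -->
<video src="screencast.ogv" width="640" height="480" controls>
  Sorry, your browser does not support the HTML5 video element.
</video>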

If anyone has any good recommendations, feel free to leave them in the comments.

How to: write the last N Linux terminal commands to a file

Sometimes blog entries are for you.  Sometimes they are for me.  This one is the latter.

The other day I asked the following question on Twitter:

Anyone know a way to write out the last N commands run in the #linux terminal to a file?


I got a plethora of responses within minutes, but by far the most complete and tricked out response came from Joseph Lamoree @jlamoree, who gave the following solution:


history | tail -n 10 | sed -E 's/^ +[0-9]+ +//' | grep -vE '^history$' > cmds


In that command, the “10” represents the last 10 commands, and “cmds” is the filename that the output will be written to. The sed strips the leading history line numbers, and the grep drops any bare history commands from the output. Since there isn’t the remotest chance in hell that I would ever remember this, and Twitter is about the worst place for me to go back later to find technical information, I am putting it here on my blog for future reference. Thanks Joseph!
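If you want to reuse this without retyping the pipeline, a small bash function is one way to go. This is just a sketch under my own naming and defaults (lastcmds, 10 commands, a file called cmds), not part of Joseph’s solution:

# Write the last N shell commands (default 10) to a file (default "cmds").
# The sed strips history line numbers; the grep drops bare "history" entries.
# Note: the call to lastcmds itself will appear in the captured output.
lastcmds() {
    local n="${1:-10}"
    local out="${2:-cmds}"
    history | tail -n "$n" | sed -E 's/^ +[0-9]+ +//' | grep -vE '^history$' > "$out"
}

# usage: lastcmds 25 recent-commands.txt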

Open Letter: Stepping down as DFWCFUG Manager

At Tuesday night’s meeting (3/8/2011),  I announced that after 55 meetings at the helm, I am stepping down as manager of the Dallas Ft. Worth ColdFusion User Group. 

Am I tired of doing it?  Am I leaving the language?  NO, and NO!

As an Adobe UGM, one of my responsibilities is to endorse and evangelize Adobe ColdFusion (ACF). For numerous reasons over the past year or so, I have found myself at growing odds with this task. As competing open source engines such as Railo and OpenBD gain in functionality, stability, and performance, while being made freely available to the CFML community, it is impossible to ignore them as true contenders in this space. Where they were once viewed merely as free alternatives, they have moved into the position of driving change and driving features that I would like to see in ACF. I wholly feel that these engines are the future of our community and should be given equal attention rather than be viewed as just an alternative. Given that, it would be disingenuous for me to continue in my role as an Adobe UGM.

As of its inaugural meeting on April 5, 2011 at the Paladin Consulting office in Dallas, I am going to serve as coordinator of the DFW CFML User Group, a non-product-specific user group composed of enthusiasts of the CFML language, regardless of the engine that runs it.  Without the pressure of promoting one company’s product over another, we can focus on what is really important to us, which is the power of the CFML language and the diverse ways that it can be used across various platforms.

It is important to note that the new group will not be strictly an “open source” group, nor is this any kind of swipe at Adobe itself. The group is simply not going to endorse a single product as the only viable solution for writing enterprise-level applications in CFML. Our content will doubtlessly include Adobe ColdFusion, but will not be exclusive to it.

So where does this leave the DFWCFUG? Adrian Moreno has served as co-manager of the group for several years now. Adobe mandated this hierarchical approach to how their groups are organized so that, in the event of the departure of a manager, the group can carry on without interruption with the co-manager taking over. I have spoken with Adrian at length on this topic, and he does not share my vision for the DFW CFML User Group; he feels that it is important to have a product-focused user group under Adobe. As a result, he has opted to take the role of group manager effective immediately and will be leading the DFWCFUG.

I want to make it abundantly clear that this will not be an “us vs. them” scenario between the two groups.  We are in this together as one community with varying interests and it is in all of our interests to positively promote both groups.

Fortunately, I think that this leaves the DFW CFML developers with some excellent options!

I plan on sharing much more about the new DFW CFML User Group in the near future.  Please follow us on Twitter at @dfwcfml and look for upcoming announcements in the next few days.

Lastly, thanks for letting me serve as leader of the DFWCFUG all these years.  It has been an honor and a privilege to do so.

~Dave Shuck
@dshuck
daveshuck.com

Refactoring: avoiding nested conditional statements

Recently I was given the task of adding a new validation routine to an existing validation process. In this piece of code, the requirements mandated that a series of sequential tests be run, and in the event of a failure of any of them, the process would kick out, set an error state, provide user feedback, and do whatever other tasks needed to occur. We have all seen processes like this before. Essentially it looked like this:

error = true;
if ( testOne() ) {
    if ( testTwo() ) {
        if ( testThree() ) {
            if ( testFour() ) {
                error = false;
                doAllTestsPassedStuff();
            }
        }
    }
}
if ( error ) {
    handleErrorCondition();
}

Looking at this block of code, the intent is pretty obvious as we progressively run tests as long as the previous test returned true, eventually firing the doAllTestsPassedStuff() method.  If any of the tests failed, we would call handleErrorCondition().  While this approach is completely functional, the maintainability of it is no fun, and it just feels wrong to me.  For the task I was given, I had to add a new test davesSuperTest()  between the 2nd and 3rd conditional blocks.   If I were to follow the previous approach, I would insert it there, and tab out the previous testThree() and testFour() conditional statements further to the right.  In my opinion this is an ugly block that is getting uglier by the minute.

By altering the approach to use try/catch blocks, we can still maintain the same level of control and order of operations as dictated by the requirements, but each condition becomes insulated from the others, like this:

try {
    if ( !testOne() ) {
        throw "fail:testOne";
    }
    if ( !testTwo() ) {
        throw "fail:testTwo";
    }
    if ( !davesSuperTest() ) {
        throw "fail:davesSuperTest";
    }
    if ( !testThree() ) {
        throw "fail:testThree";
    }
    if ( !testFour() ) {
        throw "fail:testFour";
    }
    // if we reach this point, then all of the above tests passed.
    doAllTestsPassedStuff();
}
catch(e) {
    handleErrorCondition();
}

Given this approach, it is very simple to add or remove conditions without disrupting the others. Even better, I don’t have to scroll to my second monitor on the right!

Piecing together optional view elements at runtime with Mach-II

Often in web development, you run across a case where a display contains optional view elements that are determined at runtime. Perhaps there is a section of a form that is only available to residents of the US. Maybe only users with a certain level of group access can see a section of a page that can be partially viewed by other user types. I am sure you can think of numerous cases that you have come across in your own work. In a current project at our company, one of our developers was tasked with rewriting an old piece of legacy code in which logged-in agents can select one or more of multiple reports to display. The legacy code was a complete nightmare that would probably be worthy of an entire series on what not to do, but that is for another day! To boil this piece down a bit, the agent essentially has a series of checkboxes for specific reports, and “to” and “from” date inputs to provide a date range. Depending on what the agent selects, the submission page might show a single report or a series of several reports in line.

One approach to this would be to have an event defined in which you compile each piece of data into some kind of data collection:

if ( [Report1 was selected] )
     get data for Report1
if  ( [Report2 was selected] )
     get data for Report2
(... and so on...)

Then on the view, you could do something like:

if ([we have report data for Report1])
     show Report1
if ([we have report data for Report2])
     show Report2
(... and so on ...)

Well, we could, but it would be wrong! Why? For one thing, we would now have conditional logic about each report built into multiple places in our application. From a complexity and maintenance standpoint, we have just made it, at a minimum, twice as complex as it needs to be. There is also a strong argument to be made (and I would make it!) that your view shouldn’t be responsible for determining what it’s supposed to display. It should simply display!

So what is another approach to this? How can we employ MVC techniques without the individual components involved becoming intertwined, creating yet another administrative issue? Here is the solution that I proposed to our developer.
First, let’s start with our form, which is remarkably simple:

<h3>Select the reports you would like to view</h3>
<cfoutput>
<form name="reportform" action="#buildUrl( "viewreports" )#" method="post">
<input type="checkbox" name="reportList" value="report1" /> Report 1<br />
<input type="checkbox" name="reportList" value="report2" /> Report 2<br />
<input type="checkbox" name="reportList" value="report3" /> Report 3<br />
<input type="checkbox" name="reportList" value="report4" /> Report 4<br />
<input type="checkbox" name="reportList" value="report5" /> Report 5<br />
<input type="submit" value="run reports">
</form>
</cfoutput>

As you can see, we are going to load up an event-arg on the submission named “reportList” that will be a comma-separated list of the reports we will be displaying. For instance, if we make the selections you see below, then on the viewreports event, event.getArg( “reportList” ) will be: report1,report3,report5

Report Selection form output

I decided that generally I wanted it to behave with a flow like this:

Reports flow diagram

It is a good goal in application development to keep specific knowledge of the application’s flow out of any component (view, service, or otherwise) that is not responsible for it, in order to avoid coupling issues. For instance, our report display page shouldn’t understand flow, should it? (hint: “no”) Our service layer that is responsible for retrieving data shouldn’t either, should it? (you guessed it: “no”)

So where does that responsibility lie?  I place it squarely on the front controller framework at hand, namely Mach-II in this case.

If that is the approach we are going to follow, then how can we have freely operating pieces, and create a composite view without the individual pieces having any knowledge of each other, nor any knowledge of their role in the bigger picture?   We achieve this by creating small encapsulated pieces that are individually responsible for their limited role, and count on our framework to do the rest.

If you look at the flow diagram above, you will see that we start with a conditional statement on our submission event: “Has form output been generated?”

In our Mach-II configuration we can achieve this by doing the following:

<event-handler event="viewreports" access="public">
 <event-mapping event="noData" mapping="multi_report1" />
 <filter name="checkForReportData" />
 <view-page name="selectreports" contentArg="form" />
 <view-page name="reports" />
</event-handler>

Let’s talk about what those pieces are doing.  First, we are defining an event-mapping “noData“.  What this means is that anywhere further in this event, if someone announces “noData“, the event that we are really going to announce is “multi_report1“.   By doing this, we don’t bury specific knowledge into our component responsible for the announcing, but more on that in a moment.  Next, we are calling a filter named checkForReportData. Filters are Mach-II components that contain a single public method filterEvent() which returns a boolean value telling Mach-II whether or not it should continue further within this event.  In the code above, if the filter returns “false”, the <view-page/> nodes will not be processed.   So let’s take a look at the filterEvent() method.

<cffunction name="filterEvent" access="public" returntype="boolean" output="false" hint="I am invoked by the Mach II framework.">
     <cfargument name="event" type="MachII.framework.Event" required="true" hint="I am the current event object created by the Mach II framework." />
     <cfargument name="eventContext" type="MachII.framework.EventContext" required="true" hint="I am the current event context object created by the Mach II framework." />

     <cfset var result = event.isArgDefined( "reportOutput" ) />

     <cfif NOT result>
          <cfset announceEvent( "noData", event.getArgs() ) />
     </cfif>

     <cfreturn result />
</cffunction>

Very simply, we are saying: Is there an event-arg named reportOutput defined? If there is, we return true, telling the event to continue. If not, we announce an event named noData and return false. By announcing a generic event named “noData”, and then defining what “noData” means in the XML config, we have insulated this filter from change. For instance, right now the <event-mapping/> says that this means we should announce “multi_report1”. If this ever changes to another report, then we only have to change the config. Additionally, we might be able to repurpose this filter in another way in the future and announce a completely different event by using a different event-mapping.

So in our example, we have no reportOutput on our first pass through this method, so we are being rerouted to the event “multi_report1“.  Here is what it looks like:

<event-handler event="multi_report1" access="private">
     <event-arg name="reportName" value="report1" />
     <event-mapping event="nextEvent" mapping="multi_report2" />
     <filter name="checkIncludeReport" />
     <notify listener="ReportListener" method="getData" resultArg="data" />
     <view-page name="reports.report1" contentArg="reportOutput" append="true" />
     <announce event="nextEvent" copyEventArgs="true" />
 </event-handler>

On the second line, all we are doing is defining an event-arg named “reportName” and assigning it a value of “report1”. We will be using this value in a moment. Before we get to that, and now that you understand what event-mappings are doing, the third line should be clear. We are just telling Mach-II “if someone or something announces nextEvent within the context of this event, announce multi_report2 instead”. Again, this allows our components to announce generic events which are explicitly defined in the config. Next, we check whether report1 has been selected in the form by calling a filter named checkIncludeReport. If the report was not selected in the form, we will kick out and announce nextEvent, aka multi_report2. However, if the report is included, we will continue down the line, calling a method on our listener to retrieve data and then using that data in a view named “reports.report1”. We take that generated HTML and append it into an event-arg named “reportOutput”. If you look at our code above, you will be reminded that this is the argument we were testing for in the checkForReportData filter. Here is a look at our checkIncludeReport filter, which makes the decision to include this report or not.

<cffunction name="filterEvent" access="public" returntype="boolean" output="false" hint="I am invoked by the Mach II framework.">
     <cfargument name="event" type="MachII.framework.Event" required="true" hint="I am the current event object created by the Mach II framework." />
     <cfargument name="eventContext" type="MachII.framework.EventContext" required="true" hint="I am the current event context object created by the Mach II framework." />    

     <cfset var result = ListFindNoCase( event.getArg( "reportList" ), event.getArg( "reportName" ) ) />

     <cfif NOT result>
          <cfset announceEvent( "nextEvent", event.getArgs() ) />
     </cfif>

     <cfreturn result />   
</cffunction>

All this filter is doing is checking the event-arg reportList, which is a comma-separated list of reports, and seeing whether the value of the event-arg reportName (which was defined on line 2 above) exists in that list. Based on our example of selecting reports 1, 3, and 5, the plain English translation of this comparison is: if the list “report1,report3,report5” contains “report1”, return true; otherwise announce “nextEvent” and return false. As you surely know by now, in this case “nextEvent” translates to “multi_report2”.

Essentially we just repeat this exact pattern for the next 4 events, with a minor change in the last event:

<event-handler event="multi_report2" access="private">
    <event-arg name="reportName" value="report2" />
    <event-mapping event="nextEvent" mapping="multi_report3" />
    <filter name="checkIncludeReport" />
    <notify listener="ReportListener" method="getData" resultArg="data" />
    <view-page name="reports.report2" contentArg="reportOutput" append="true" />
    <announce event="nextEvent" copyEventArgs="true" />
</event-handler>

<event-handler event="multi_report3" access="private">
     <event-arg name="reportName" value="report3" />
     <event-mapping event="nextEvent" mapping="multi_report4" />
     <filter name="checkIncludeReport" />
     <notify listener="ReportListener" method="getData" resultArg="data" />
     <view-page name="reports.report3" contentArg="reportOutput" append="true" />
     <announce event="nextEvent" copyEventArgs="true" />
</event-handler>

<event-handler event="multi_report4" access="private">
     <event-arg name="reportName" value="report4" />
     <event-mapping event="nextEvent" mapping="multi_report5" />
     <filter name="checkIncludeReport" />
     <notify listener="ReportListener" method="getData" resultArg="data" />
     <view-page name="reports.report4" contentArg="reportOutput" append="true" />
     <announce event="nextEvent" copyEventArgs="true" />
</event-handler>

<event-handler event="multi_report5" access="private">
     <event-arg name="reportName" value="report5" />
     <event-mapping event="nextEvent" mapping="viewreports" />
     <filter name="checkIncludeReport" />
     <notify listener="ReportListener" method="getData" resultArg="data" />
     <view-page name="reports.report5" contentArg="reportOutput" append="true" />
     <announce event="nextEvent" copyEventArgs="true" />
</event-handler>

As I mentioned, there is a slight change in multi_report5 in that nextEvent is defined as “viewreports”. By doing this, we have ended our report generation and are redirecting the flow back to the initial event that kicked this process off. Since we now have reportOutput data, we are directed to the page that outputs it all. Quite simply, our big, massive, magnificent multi-report display page looks like this:

these are the reports:

<cfoutput>#event.getArg( "reportOutput" )#</cfoutput>

There is no conditional nonsense, and the view simply outputs all of the generated output that was appended into the event-arg reportOutput. Additionally, if you reflect on what we have done, nowhere are we explicitly saying “if the user selected report1, do something”. We have left it all fairly generic and hopefully have created some potentially reusable components. For instance, let’s say that we now have a requirement for an event that only displays report2. No problem! All we need to do is add an additional event like this:

<event-handler event="report2" access="public">
     <notify listener="ReportListener" method="getData" resultArg="data" />
     <view-page name="reports.report2" />
</event-handler>

Easy, huh!

Lastly,  I know that some of the more astute of you may have noticed a fatal flaw in the design above.  What happens when no reports are selected?   In the interest of keeping this example as stripped down as I could, I let that one go, but it is a very simple fix.  What would you do?  Where would you put it?   Feel free to post your fix in the comments, along with any other thoughts you have on this solution.

download fully-functional example files – NOTE: doesn’t include the Mach-II framework

Help the homeless. Scriptalizer.com is being evicted!

I am cross-posting this for Aaron. Scriptalizer is a GREAT project that needs a home. If you have a server, VPS, etc. running a CFML engine (Railo, OpenBlueDragon, or ColdFusion), and wouldn’t mind adding another site to your server, I am sure Aaron would be grateful.

For details contact Aaron in the comments of this post.

End of an era – Turning the lights off at InstantSpot

As of the first week of 2011, we are permanently turning off the lights at InstantSpot. I felt that it was fitting to give it at least some type of obituary and to share my reflections on what has been a four-and-a-half-year story. We learned much along the way, and hopefully we can share some of what we learned with others.

In 2006, Aaron Lynch and I started on a modest plan to revolutionize the blogging world. OK, so that isn’t really true – it happened like this…

In the beginning… a simple CMS

In 2005, Aaron was making a career change, and after years of being friends through the Jeep and 4×4 community, we found ourselves in the same line of work as ColdFusion developers. Around the beginning of 2006, he joined my company, which gave us way too much time to discuss and conspire on how we might take over the world. We decided to begin by making a simple no-database CMS in ColdFusion that we could distribute and sell for a nominal fee. Of the ones that we deployed, we found that no one was really interested in hosting it themselves, and we ended up putting a few instances on our server.

A network was born

Once a small handful of them were running, we created a single directory page that listed and linked to all of the individual sites. Even though it was exceptionally simple and rudimentary, we realized that in essence we had a “network”. Given that no one really had an interest in hosting the software themselves, we quickly saw the fallacy of duplicating the code base for each new site. We set out to externalize the settings more effectively, persist the configuration centrally in a shared database, and run all of the sites from a single set of code. We quickly developed methods of skinning individual sites and using subdomain URLs. The methods we came up with are almost exactly the same methods that we would end up using for the next four years.

The addition of blogs

Almost immediately we decided that we needed to add blogs to the sites.  In a move that would haunt us to some degree for a long time to come, we chose to implement the open source BlogCFC application created and maintained by Raymond Camden.  While there was nothing inherently wrong with BlogCFC itself, it was just a poor choice on our part for several reasons:

  • We already had a functioning CMS built with the Mach-II framework, using its own database. Adding multi-site support to the blogs (which I don’t think BlogCFC had at the time, if I recall correctly) and tying that into our existing application proved to be a ton of work and felt like a goofy solution.
  • Combining the two databases into a single one, and the vastly different approaches of data persistence (naming schemes, UUID vs integer keys, etc…) made our code base look schizophrenic.
  • We actually had two complete applications and had to manage session replication between them.
  • We ended up using only a really small portion of the actual BlogCFC code base. If we had just created the functionality on our own from scratch, we would have spent far less time and ended up with something far less of a hodgepodge.

Eventually we got it working, and did so somewhat successfully, but it really felt like a hodgepodge of methodologies.  However, we started growing our user base, got some great community feedback and support, and had a relatively well performing blogging network.

The dirty secret

One of the seriously embarrassing dirty secrets that we had was the original hardware/server architecture of InstantSpot. When Aaron and I both started out writing the code, we were not the Linux nerds that we grew into being later. We were both Windows developers and had never paid as close attention to file-name casing issues as we would later. Unfortunately, the first time that we tried to run Spot under Linux, it blew up severely. We started trying to work through it issue by issue, but decided we would do a temporary fix by throwing it into a Windows VM on our co-located Linux server. However, this temporary work-around ended up sticking around much longer than we anticipated. So, for the first year or so, InstantSpot chugged along on a single PC that was little more than a common desktop, running Ubuntu Server, with a Windows VM in VirtualBox! The fact that it supported as much traffic as it did was always just a little bit comical to us.

The inevitable rewrite

As we continued working to bring in a continual array of new enhancements – many of which were implemented but never even seen by our users! – we found more and more that our architecture sucked badly enough that we would actually have to do something about it. In addition to the goofy hardware/OS issue, the maintenance of dealing with the two separate applications became a growing elephant in the room. As our user base and our exposure continued to grow, we decided it would be worth a complete rewrite from the ground up. When we first started slapping stuff together, we didn’t have a defined target in mind, so surely a rewrite would be less time consuming now that we actually had a goal, right?! We set out around mid-2007 to find out. Around that time, Railo had hit RC 2.0, and Aaron and I were both enamored with the idea of a completely open source version of InstantSpot. We busted our asses the last quarter of the year and were finally ready to roll right at the turn of the year in 2008. I will never forget how many people scoffed at us for using Railo. I mean seriously, given its enormous growth in 2010, you wouldn’t believe all the snickering we put up with about it early on. However, we found huge performance increases over ColdFusion and felt like we were really breaking new ground.

Disaster strikes

In January 2008, we rolled out the newly rewritten InstantSpot with HUGE expectations. As soon as we flipped the switch at 1:00am CST on January 13, we immediately started seeing very strange errors. Some of these came in the form of ColdSpring returning improper service object bean types, Mach-II throwing errors that made absolutely NO sense at all, and errors in code that was too simple to fail! Often when someone would visit a blog, someone else’s content would show up. (not cool!) It began to look like we had some serious threading issues. Immediately we discovered one issue in which the getBean() method of ColdSpring (1.x?) was not thread safe! (Here is an email thread discussing it.) We also realized that we were using getBean() in a few inappropriate places, so we moved code around so that the only place it was used was in the bootstrapping process, and we made those processes thread-safe on our own. Even then, however, we continued to see things spiraling out of control in what increasingly appeared to be a concurrency issue within Railo itself. Gert Franz actually contacted us as soon as we brought the issue to light and wanted to help us resolve it by having us send him our application. Given the complexity of the application, however, we felt that we didn’t have the option of spending time going down that road without knowing that a solution would come out of it quickly. We made a snap decision to purchase a license of ColdFusion 8 and cut over as soon as we could. Two nights later we made the cut-over to CF8, and all of our threading issues completely disappeared. (blog entry from January 2008 discussing this in detail)

Immediately we breathed easier knowing that our months’ worth of work was actually paying off, but unfortunately this gave some “I told you so” ammo to all the previously mentioned skeptics of our use of Railo, and we were eating crow. As it turned out, Railo put out a patch within the week that would have completely fixed the problem we had, and in hindsight, I really wish we had held out and continued down that path.

Easy days

So there we were… we had our new codebase running… and running… and running… with almost zero time spent on maintenance issues. Before too long we had amassed somewhere around 1000 blogs, with less than 5% of them being relatively active. We couldn’t have been more pleased with the results of the rewrite, as it made our lives so easy. This period, for better or worse, allowed Aaron and me to put a huge focus on client work in our moonlighting hours. While that was a temporarily lucrative venture, the focus was definitely off of InstantSpot, since it really didn’t demand any of our attention on a daily basis. One result of this was that we essentially quit marketing the network. Aaron and I are both really cheap dudes… I mean really… ask our wives. Consequently, we never spent money to push InstantSpot beyond word of mouth. We had this idea that if we made it an obvious tool, organic growth was inevitable. While this was true to some extent, the flaw in this thinking was the real industry giants that we were up against. I mean, what had we made, really? A blogging network that, from a feature perspective, was a competitor to services like Google, WordPress, etc. These giants obviously had no idea that we even existed; they were competitors only in our wildest imaginations, and only on paper from a feature-by-feature perspective rather than in any true sense. We had an obvious niche in the CFML development community, but the fact is that many developers want a more hands-on solution than what an out-of-the-box service such as a blogging network would provide.

What about revenue?

So, we have never really talked publicly about InstantSpot revenue. We spent a lot of time trying to figure out how to make money off of it, but never really got it worked out. I mean, it paid for itself (until recently), but it never really made any money at all. In periods of our largest traffic, we were still only making about $20.00/day. So, with our colo fees of about $160/month, we were netting somewhere around $440.00/month. For those who wondered why we didn’t bust our asses a little more, perhaps now you understand our lack of motivation for doing so! It was kind of frustrating to have worked so hard, and to have put so much time and effort in, for such a small monetary reward. The thing that was really frustrating was that about 75% or more of that revenue was being generated from one very spammy blog that was set up with keywords all revolving around prepaid wireless. It felt kind of dirty, but it was the one consistent blog that kept us in the black. Although technical blogs like mine, Aaron’s, Sam Farmer’s, and others certainly brought more traffic, the click-through rates on that content were dismal.

The decline

Over 2010, we saw a steady decline of anything positive from InstantSpot. What was initially one of the HUGE benefits that we created with InstantSpot ultimately turned into a negative for us. Our users enjoyed really ridiculous search engine exposure. A brand new blog could be created and, with a relatively well crafted title and content, could easily hit page 1 of Google on a wide array of subjects. This was due in large part to the fact that we started off with a few blogs that already had PR5 ranks, and we then used what we felt was a really smart way of linking pieces together so that new blogs were made more relevant by being tied to solidly ranked blogs as they came in. If you think I am overstating this, here are a couple of examples I can think of…

  • When Saddam Hussein was executed, we had a blogger that put up a YouTube video of it (yeah, the appropriateness of that was definitely discussed, but we had never really established a TOS that restricted specific content). This blog entry made page one of Google and was picked up by El Tiempo, a huge Latin American news source, and linked as the first blog to carry the info. We hit somewhere around 100K impressions by mid-morning, and our server absolutely croaked.
  • We had a blogger that posted about State Farm Life Insurance and for a period, that blog link was higher in Google search results than statefarm.com with the search “state farm life insurance”.  That one was mind blowing to me!

There were a lot more anecdotal examples like this, and we were continually shocked by what we were seeing in the way of search relevance. Unfortunately, we were not the only ones that noticed it. We ended up with a couple of worthless spam sites that capitalized on this. They would create a blog and then generate scores of blog entries every day with complete bullshit content stuffed with keywords and links to various products. Over time, our valid content was overshadowed by our spam content as more and more (and more… and more… and MORE) spam sites were created. Along with them came thousands upon thousands of spam comments. We were pretty effective at blocking the bots, but there were a surprising number of manually typed spam comments, which were much more difficult to stop. Eventually, when you looked at our recent entries across the network, it was just shameful. The more spam issues that came up, the more performance issues we had as well. Unfortunately, both Aaron and I were so tied up with our families and our jobs that neither one of us could deal with it effectively, and things just spiraled downward.

Realization

At some point it became quite clear that we were either going to have to invest much more effort, and certainly much more money, to make a real run at it, or we could just continue sailing along as long as it would go and see what happened. It became clear to us that the kind of traffic we would need for InstantSpot to actually support us was so far beyond reality that we didn’t even know what to do about it. Considering that both Aaron and I have full-time jobs and each supports a family of five, quitting our jobs, taking loans, and going full bore never seemed like the right decision. “And who knows… maybe someday it will just take off!” Obviously, we never made that jump or commitment, and we both feel that we have hit “… as long as it would go”. With the drop in revenue we have seen, we are now actually spending money to keep InstantSpot up. We have experienced more and more performance issues, and we don’t have the available time – or, to be honest, the will – to keep things on track and at the level they should be. After some discussions, we have decided that it’s time to pull the plug.

But, it’s not all bad

It really has been a great experience, and we have learned so much about so many things. It has given us not only a great tool that boosted our education and skill set, but also a great reason to be passionate about what we do and to strive to do it well. Hopefully we can find another project on the near horizon that gives us the same opportunity.

Thanks!

In closing, I would like to say thanks… Thanks first and foremost to my family, who were supportive day in and day out and believed in what we were trying to do simply because we believed in it. Thanks to all of the early adopters and people who stuck with us even when things were less than ideal with our system. Thanks to the many in the ColdFusion community who gave us really great support. This has been a really great run, and I am happy to have been a part of it.

Invisible blog post

This is my first post after moving my blog over to WordPress.  Considering that the new feed isn’t in the Adobe Feeds, ColdFusion Bloggers, FullAsAGoog, etc., this will effectively be a tree falling in the forest with no one around.   Quite honestly, I am not even sure that I will request the update to the aggs.  When I am not listed in tech feeds, I can have stupid meaningless posts like this. :)

Solved: Strange sun.awt.X11.XToolkit exception with Mach-II/Railo/Tomcat/Ubuntu

I am running a BER version of Railo to experiment with the Hibernate ORM functionality for a new project.  I set up a Mach-II app from the 1.8 skeleton using Mach-II 1.8.1 on Railo under Tomcat6 on Ubuntu… whew!  That’s a mouthful huh?

The simple skeleton came up just fine, but after a little bit of customization, I ran across a strange issue.  I had created an event, in which a listener pulled a new entity from a ColdSpring bean, and persisted it using EntitySave().  Somewhere in that process, I started getting exceptions related to sun.awt.X11.XToolkit.

The first time the error would occur, I would see this:

Can't connect to X11 window server using ':0.0' as the value of the DISPLAY variable.

 /var/lib/tomcat6/webapps/railo-orm/MachII/properties/HtmlHelperProperty.cfc: line 142

    140: <!--- Configure auto-dimensions for addImage() --->
    141: <cfif StructKeyExists(serverInfo, "productLevel") AND serverInfo.productLevel NEQ "Google App Engine">
    142: <cfset variables.AWT_TOOLKIT = CreateObject("java", "java.awt.Toolkit").getDefaultToolkit() />
    143: <cfelse>
    144: <!--- Some hosts (such as GAE) do not support java.awt.* package so replace with mock function --->

On subsequent requests, I would get the following:

Could not initialize class sun.awt.X11.XToolkit, again with the specific exception pointing to HtmlHelperProperty.cfc: line 142.

After some Googling, I came across a similar-sounding issue in which someone had added parameters to his app server’s JVM. I added them to mine, and the error went away. If you come across this yourself, try adding -Djava.awt.headless=true to JAVA_OPTS (in catalina.sh for Tomcat).
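For the record, the change amounts to something like the line below. Treat this as a sketch: the exact file varies by install (the Ubuntu tomcat6 package often uses /etc/default/tomcat6, while a plain Tomcat install would use bin/catalina.sh or bin/setenv.sh), and Tomcat needs a restart afterwards for the new JVM argument to take effect.

# Tell the JVM there is no display attached so java.awt runs in headless mode.
JAVA_OPTS="$JAVA_OPTS -Djava.awt.headless=true"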