Piecing together optional view elements at runtime with Mach-II

Often in web development you run across a case where a display contains optional view elements that are determined at runtime.  Perhaps a section of a form is only available to residents of the US.  Maybe only users with a certain level of group access can see a section of a page that other user types can only partially view.  I am sure you can think of numerous cases that you have come across in your own work.

In a current project at our company, one of our developers was tasked with rewriting an old piece of legacy code in which logged-in agents can select one or more of several reports to display.  The legacy code was a complete nightmare, probably worthy of an entire series on what not to do, but that is for another day!  To boil this piece down a bit: the agent has a series of checkboxes for specific reports, plus "to" and "from" date inputs to provide a date range.  Depending on what the agent selects, the submission page might show a single report, or several reports in line.

One approach would be to define an event in which you compile each piece of data into some kind of data collection:

if ( [Report1 was selected] )
     get data for Report1
if ( [Report2 was selected] )
     get data for Report2
(... and so on ...)

Then on the view, you could do something like:

if ( [we have report data for Report1] )
     show Report1
if ( [we have report data for Report2] )
     show Report2
(... and so on ...)

Well, we could, but it would be wrong!  Why?  For one thing, we now have conditional logic about each report built into multiple places in our application.  From a complexity and maintenance standpoint, we have just made it, at a minimum, twice as complex as it needs to be.  There is also a strong argument to be made (and I would make it!) that your view shouldn't be responsible for determining what it's supposed to display.  It should simply display!

So what is another approach to this?  How could we employ MVC techniques without the individual components involved becoming intertwined, creating yet another administrative issue?  Here is the solution that I proposed to our developer.
First, let’s start with our form, which is remarkably simple:

<h3>Select the reports you would like to view</h3>
<cfoutput>
<form name="reportform" action="#buildUrl( 'viewreports' )#" method="post">
<input type="checkbox" name="reportList" value="report1" /> Report 1<br />
<input type="checkbox" name="reportList" value="report2" /> Report 2<br />
<input type="checkbox" name="reportList" value="report3" /> Report 3<br />
<input type="checkbox" name="reportList" value="report4" /> Report 4<br />
<input type="checkbox" name="reportList" value="report5" /> Report 5<br />
<input type="submit" value="run reports">
</form>
</cfoutput>

As you can see, we are going to load up an event-arg on the submission named "reportList" that will be a comma-separated list of the reports we will be displaying. For instance, if we make the selections you see below, then on the viewreports event, event.getArg( "reportList" ) will be: report1,report3,report5

Report Selection form output
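
As a side note, this works because ColdFusion collapses multiple submitted form fields that share the same name into a single comma-delimited value, and Mach-II then exposes that value as an event-arg. If you ever want to confirm what actually arrived, a throwaway line like the one below (purely a debugging aid, not part of the solution) dropped into the target view will show it:

<!--- Temporary debugging aid: dumps all event-args so you can see reportList arrive as "report1,report3,report5" --->
<cfdump var="#event.getArgs()#" />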

I decided that generally I wanted it to behave with a flow like this:

Reports flow diagram

It is a good goal in application development to keep specific knowledge of application flow out of any component (view, service, or otherwise) that is not responsible for it, in order to avoid coupling issues.  For instance, our report display page shouldn't understand flow, should it?  (hint: "no")  Our service layer, which is responsible for retrieving data, shouldn't either, should it? (you guessed it: "no")

So where does that responsibility lie?  I place it squarely on the front controller framework at hand, namely Mach-II in this case.

If that is the approach we are going to follow, then how can we have freely operating pieces and create a composite view without the individual pieces having any knowledge of each other, or of their role in the bigger picture?  We achieve this by creating small encapsulated pieces that are individually responsible for their limited roles, and counting on our framework to do the rest.

If you look at the flow diagram above, you will see that we start with a conditional statement on our submission event: “Has form output been generated?”

In our Mach-II configuration we can achieve this by doing the following:

<event-handler event="viewreports" access="public">
 <event-mapping event="noData" mapping="multi_report1" />
 <filter name="checkForReportData" />
 <view-page name="selectreports" contentArg="form" />
 <view-page name="reports" />
</event-handler>

Let's talk about what those pieces are doing.  First, we are defining an event-mapping, "noData".  What this means is that anywhere further in this event, if someone announces "noData", the event we are really going to announce is "multi_report1".  By doing this, we don't bury specific knowledge in the component responsible for the announcing, but more on that in a moment.  Next, we are calling a filter named checkForReportData. Filters are Mach-II components that contain a single public method, filterEvent(), which returns a boolean value telling Mach-II whether or not it should continue further within this event.  In the code above, if the filter returns "false", the <view-page /> nodes will not be processed.  So let's take a look at the filterEvent() method.

<cffunction name="filterEvent" access="public" returntype="boolean" output="false" hint="I am invoked by the Mach II framework.">
     <cfargument name="event" type="MachII.framework.Event" required="true" hint="I am the current event object created by the Mach II framework." />
     <cfargument name="eventContext" type="MachII.framework.EventContext" required="true" hint="I am the current event context object created by the Mach II framework." />

     <cfset var result = event.isArgDefined( "reportOutput" ) />

     <cfif NOT result>
          <cfset announceEvent( "noData", event.getArgs() ) />
     </cfif>

     <cfreturn result />
</cffunction>

Very simply, we are asking: is there an event-arg named reportOutput defined? If there is, we return true, telling the event to continue.  If not, we announce the event noData and return false.  By announcing a generic event named "noData", and then defining what "noData" means in the XML config, we have insulated this filter from change.  For instance, right now the <event-mapping /> says that "noData" means we should announce "multi_report1".  If that ever changes to another report, we only have to change the config.  Additionally, we might be able to repurpose this filter in another way in the future and announce a completely different event by using a different event-mapping.
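
To make that last point concrete, here is a purely hypothetical handler (the event, mapping, and view names below are made up for illustration) that reuses the same checkForReportData filter but maps "noData" to a completely different event:

<event-handler event="someOtherEvent" access="public">
     <!--- Same filter as above, but here "noData" means something entirely different --->
     <event-mapping event="noData" mapping="showNoDataMessage" />
     <filter name="checkForReportData" />
     <view-page name="someOtherView" />
</event-handler>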

So in our example, we have no reportOutput on our first pass through this method, so we are being rerouted to the event “multi_report1“.  Here is what it looks like:

<event-handler event="multi_report1" access="private">
     <event-arg name="reportName" value="report1" />
     <event-mapping event="nextEvent" mapping="multi_report2" />
     <filter name="checkIncludeReport" />
     <notify listener="ReportListener" method="getData" resultArg="data" />
     <view-page name="reports.report1" contentArg="reportOutput" append="true" />
     <announce event="nextEvent" copyEventArgs="true" />
 </event-handler>

On the second line, all we are doing is defining an event-arg named "reportName" and assigning it a value of "report1".  We will be using this value in a moment.  Before we get to that, and now that you understand what event-mappings are doing, the third line should be clear.  We are just telling Mach-II, "if someone or something announces nextEvent within the context of this event, announce multi_report2 instead".  Again, this allows our components to announce generic events which are explicitly defined in the config.  Next, we check whether report1 was selected in the form by calling a filter named checkIncludeReport.  If the report was not selected in the form, we kick out and announce nextEvent, aka multi_report2.  However, if the report is included, we continue down the line, calling a method on our listener to retrieve data, and then using that data in a view named "reports.report1".  We take the generated HTML and append it to an event-arg named "reportOutput".  If you look at our code above, you will be reminded that this is the argument we were testing for in the checkForReportData filter.  Here is a look at our checkIncludeReport filter, which makes the decision to include this report or not.

<cffunction name="filterEvent" access="public" returntype="boolean" output="false" hint="I am invoked by the Mach II framework.">
     <cfargument name="event" type="MachII.framework.Event" required="true" hint="I am the current event object created by the Mach II framework." />
     <cfargument name="eventContext" type="MachII.framework.EventContext" required="true" hint="I am the current event context object created by the Mach II framework." />    

     <cfset var result = ListFindNoCase( event.getArg( "reportList" ), event.getArg( "reportName" ) ) />

     <cfif NOT result>
          <cfset announceEvent( "nextEvent", event.getArgs() ) />
     </cfif>

     <cfreturn result />   
</cffunction>

All this filter is doing is checking the event-arg reportList, which is a comma-separated list of reports, to see if the value of the event-arg reportName (which was defined on line 2 above) exists in the list.  Based on our example of selecting reports 1, 3, and 5, the plain-English translation of this comparison is: if the list "report1,report3,report5" contains "report1", return true; otherwise announce "nextEvent" and return false. As you surely know by now, in this case "nextEvent" translates to "multi_report2".
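
If you are not familiar with ListFindNoCase(), it returns the position of the matching element in the list (1 or greater) when a match is found, and 0 when there is none, which is why the result can be treated directly as a boolean:

<!--- Evaluates to 1 (truthy): "report1" is the first element of the list --->
<cfset selected = ListFindNoCase( "report1,report3,report5", "report1" ) />

<!--- Evaluates to 0 (falsy): "report2" is not in the list, so the filter would announce "nextEvent" --->
<cfset notSelected = ListFindNoCase( "report1,report3,report5", "report2" ) />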

Essentially we just repeat this exact pattern for the next 4 events, with a minor change in the last event:

<event-handler event="multi_report2" access="private">
    <event-arg name="reportName" value="report2" />
    <event-mapping event="nextEvent" mapping="multi_report3" />
    <filter name="checkIncludeReport" />
    <notify listener="ReportListener" method="getData" resultArg="data" />
    <view-page name="reports.report2" contentArg="reportOutput" append="true" />
    <announce event="nextEvent" copyEventArgs="true" />
</event-handler>

<event-handler event="multi_report3" access="private">
     <event-arg name="reportName" value="report3" />
     <event-mapping event="nextEvent" mapping="multi_report4" />
     <filter name="checkIncludeReport" />
     <notify listener="ReportListener" method="getData" resultArg="data" />
     <view-page name="reports.report3" contentArg="reportOutput" append="true" />
     <announce event="nextEvent" copyEventArgs="true" />
</event-handler>

<event-handler event="multi_report4" access="private">
     <event-arg name="reportName" value="report4" />
     <event-mapping event="nextEvent" mapping="multi_report5" />
     <filter name="checkIncludeReport" />
     <notify listener="ReportListener" method="getData" resultArg="data" />
     <view-page name="reports.report4" contentArg="reportOutput" append="true" />
     <announce event="nextEvent" copyEventArgs="true" />
</event-handler>

<event-handler event="multi_report5" access="private">
     <event-arg name="reportName" value="report5" />
     <event-mapping event="nextEvent" mapping="viewreports" />
     <filter name="checkIncludeReport" />
     <notify listener="ReportListener" method="getData" resultArg="data" />
     <view-page name="reports.report5" contentArg="reportOutput" append="true" />
     <announce event="nextEvent" copyEventArgs="true" />
</event-handler>

As I mentioned, there is a slight change in multi_report5 in that nextEvent is defined as "viewreports".  By doing this, we have ended our report generation and are redirecting the flow back to the initial event that kicked this process off.  Since we now have reportOutput data, we are directed to the page that outputs it all.  Quite simply, our big, massive, magnificent multi-report display page looks like this:

these are the reports:

<cfoutput>#event.getArg( "reportOutput" )#</cfoutput>

There is no conditional nonsense, and the view simply outputs all of the generated output that was appended to the event-arg reportOutput.  Additionally, if you reflect on what we have done, nowhere are we explicitly saying "if the user selected report1, do something".  We have left it all fairly generic and hopefully have created some potentially reusable components.  For instance, let's say that we now have a requirement for an event that only displays report2. No problem!  All we need to do is add an additional event like this:

<event-handler event="report2" access="public">
     <notify listener="ReportListener" method="getData" resultArg="data" />
     <view-page name="reports.report2" />
</event-handler>

Easy, huh!

Lastly, I know that some of the more astute among you may have noticed a fatal flaw in the design above.  What happens when no reports are selected?  In the interest of keeping this example as stripped down as I could, I let that one go, but it is a very simple fix.  What would you do?  Where would you put it?  Feel free to post your fix in the comments, along with any other thoughts you have on this solution.
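
To get you thinking, here is one rough sketch of a direction you could take; the filter logic, the generic "noReports" event, and where it gets mapped are all hypothetical, and this is only one of several reasonable options. The idea is to register a filter like this on the viewreports event, ahead of checkForReportData, and add an <event-mapping event="noReports" mapping="..." /> pointing at whatever event re-displays the selection form:

<cffunction name="filterEvent" access="public" returntype="boolean" output="false" hint="I stop the event when no reports were selected at all.">
     <cfargument name="event" type="MachII.framework.Event" required="true" />
     <cfargument name="eventContext" type="MachII.framework.EventContext" required="true" />

     <!--- True only when the reportList event-arg exists and actually contains at least one report --->
     <cfset var result = event.isArgDefined( "reportList" ) AND ListLen( event.getArg( "reportList" ) ) GT 0 />

     <cfif NOT result>
          <!--- "noReports" is a generic announcement; the XML config decides what it really means --->
          <cfset announceEvent( "noReports", event.getArgs() ) />
     </cfif>

     <cfreturn result />
</cffunction>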

download fully-functional example files – NOTE: doesn’t include the Mach-II framework

Help the homeless. Scriptalizer.com is being evicted!

I am cross-posting this for Aaron.  Scriptalizer is a GREAT project that needs a home.  If you have a server, VPS, etc. running a CFML engine (Railo, OpenBlueDragon, or ColdFusion), and wouldn't mind adding another site to your server, I am sure Aaron would be grateful.

For details contact Aaron in the comments of this post.

End of an era – Turning the lights off at InstantSpot

As of this first week of 2011, we will be permanently turning off the lights at InstantSpot. I felt that it was fitting to give it at least some type of obituary, and to share my reflections on what has been a 4-and-a-half-year story.  We learned much along the way, and hopefully we can share some of what we learned with others.

Starting in 2006, Aaron Lynch and I set out on a modest plan to revolutionize the blogging world.  OK, so that isn't really true – it happened like this…

In the beginning… a simple CMS

In 2005, Aaron was making a career change and after years of being friends through the Jeep and 4×4 community we found ourselves in the same line of work as ColdFusion developers.  Around the beginning of 2006, he joined my company which gave us way too much time to discuss and conspire on how we might take over the world.  We decided to begin by making a simple no-database CMS in ColdFusion that we could distribute and sell for a nominal fee.  Of the ones that we deployed, we found that no one was really interested in hosting it themselves and we ended up putting a few instances on our server.

A network was born

Once a small handful of them were running, we created a single directory page that listed and linked to all of the individual sites.  Even though it was exceptionally simple and rudimentary, we realized that in essence we had a "network".  Given that no one really had an interest in hosting the sites themselves, we quickly realized the fallacy of duplicating the code base for each new site, so we set out to externalize the settings more effectively, persist the configuration centrally in a shared database, and run all of the sites from a single set of code.  We quickly developed methods for skinning individual sites and using subdomain URLs.  The methods we came up with are almost exactly the same methods we would end up using for the next 4 years.

The addition of blogs

Almost immediately we decided that we needed to add blogs to the sites.  In a move that would haunt us to some degree for a long time to come, we chose to implement the open source BlogCFC application created and maintained by Raymond Camden.  While there was nothing inherently wrong with BlogCFC itself, it was just a poor choice on our part for several reasons:

  • We already had a functioning CMS built using the Mach-II framework, with its own database.  Adding multi-site support to the blogs (which I don't think BlogCFC had at the time, if I recall correctly) and tying that into our existing application proved to be a ton of work and felt like such a goofy solution.
  • Combining the two databases into a single one, and the vastly different approaches of data persistence (naming schemes, UUID vs integer keys, etc…) made our code base look schizophrenic.
  • We actually had two complete applications and had to manage session replication between them.
  • We ended up using only a really small portion of the actual BlogCFC code base.  If we had just created the functionality on our own from scratch, we would have spent far less time and ended up with something far less cobbled together.

Eventually we got it working, and did so somewhat successfully, but it really felt like a hodgepodge of methodologies.  However, we started growing our user base, got some great community feedback and support, and had a relatively well-performing blogging network.

The dirty secret

One of the seriously embarrassing dirty secrets that we had was the original hardware/server architecture of InstantSpot.  When Aaron and I started out writing the code, we were not the Linux nerds that we grew into later.  We were both Windows developers and had never paid much attention to case-sensitivity issues.  Unfortunately, the first time we tried to run Spot under Linux, it blew up severely.  We started trying to work through it issue by issue, but decided we would do a temporary fix by throwing it into a Windows VM on our co-located Linux server.  However, this temporary work-around ended up sticking around much longer than we anticipated.  So, for the first year or so, InstantSpot chugged along on a single PC that was little more than a common desktop, running Ubuntu Server with a Windows VM in VirtualBox!  The fact that it supported as much traffic as it did was always just a little bit comical to us.

The inevitable rewrite

As we continued working to bring in a continual array of new enhancements – many of which were implemented but never even seen by our users! – we found more and more that our architecture sucked badly enough that we would actually have to do something about it.  In addition to the goofy hardware/OS issue, the maintenance of dealing with the two separate applications became a growing elephant in the room.  As our user base and our exposure continued to grow, we decided it would be worth a complete rewrite from the ground up.  When we first started slapping stuff together in the beginning, we didn't have a defined target in mind, so surely a rewrite would be less time consuming since we actually had a goal now, right?!  We set out about mid-2007 to find out.  Around that time, Railo had hit RC 2.0, and Aaron and I were both enamored with the idea of a completely open source version of InstantSpot.  We busted our asses the last quarter of the year and were finally ready to roll right at the turn of the year in 2008.  I will never forget how many people scoffed at us for using Railo.  I mean seriously, given its enormous growth in 2010, you wouldn't believe all the snickering we put up with about it early on.  However, we found huge performance increases over ColdFusion and felt like we were really breaking new ground.

Disaster strikes

In January 2008, we rolled out the newly rewritten InstantSpot with HUGE expectations.  As soon as we flipped the switch at 1:00am CST on January 13, we immediately started seeing very strange errors.  Some of these errors came in the form of ColdSpring returning improper service object bean types, Mach-II throwing errors that made absolutely NO sense at all, and errors in code that was too simple to fail!  Often when someone would visit a blog, someone else's content would show up.  (Not cool!)  It began looking like we had some serious threading issues.  Immediately we discovered one issue in which the getBean() method of ColdSpring (1.x?) was not thread safe!  (Here is an email thread discussing it.)  We also realized that we were using getBean() in a few inappropriate places, and moved code around so that the only place it was being used was in the bootstrapping process, and we made those processes thread-safe on our own.  Even then, however, things continued to spiral out of control, and it became increasingly clear that we were dealing with a concurrency issue within Railo itself.  Gert Franz actually contacted us as soon as we brought the issue to light and offered to help resolve it if we sent him our application.  Given the complexity of the application, however, we felt that we didn't have the option to spend time going down that road without knowing that we would see a solution come out of it quickly.  We made a snap decision to purchase a license of ColdFusion 8 and cut over as soon as we could.  Two nights later we made the cut-over to CF8, and all of our threading issues completely disappeared.  (Blog entry from January 2008 discussing this in detail.)

Immediately we breathed easier knowing that our months' worth of work was actually paying off, but unfortunately this gave some "I told you so" ammo to all the previously mentioned skeptics of our use of Railo, and we were eating crow.  As it turned out, Railo put out a patch within the week that would have completely fixed the problem we had, and in hindsight, I really wish we had the foresight to hold out and continue down that path.

Easy days

So there we were… we had our new codebase running… and running… and running… with almost zero time spent on maintenance issues.  Before too long we had amassed somewhere around 1000 blogs, with less than 5% of them being relatively active.  We couldn't have been more pleased with the results of the rewrite, as it made our lives so easy.  This period, for better or worse, allowed Aaron and me to put a huge focus on client work in our moonlighting hours.  While this was a temporarily lucrative venture, the focus was definitely off of InstantSpot, since it really didn't demand any of our attention on a daily basis.  One result of this was that we essentially quit marketing the network.  Aaron and I are both really cheap dudes… I mean really… ask our wives.  Subsequently, we never spent money to push InstantSpot beyond word of mouth.  We had this idea that if we made it an obviously useful tool, organic growth was inevitable.  While this was true to some extent, the flaw in this thinking was the real industry giants that we were up against.  I mean, what had we really made?  A blogging network that, from a feature perspective, was a competitor to services like Google, WordPress, etc.  These giants obviously had no idea that we even existed; they were competitors only in our wildest imaginations, and only on paper from a feature-by-feature perspective, rather than in any true competition.  We had an obvious niche in the CFML development community, but the fact is that many developers want a more hands-on solution than what an out-of-the-box offering such as a blogging network would provide.

What about revenue?

So, we have never really talked publicly about InstantSpot revenue.  We spent a lot of time trying to figure out how to make money off of it, but never really got it worked out right.  I mean, it paid for itself (until recently), but it never really made any money at all.  In periods of our largest traffic, we were still only making about $20.00/day.  So, with our colo fees of about $160/month, we were netting somewhere around $440.00/month.  For those that wondered why we didn't bust our asses a little more, perhaps you now understand our lack of motivation for doing so!  It was kind of frustrating to have worked so hard, and to have put so much time and effort in, for such a small monetary reward.  The thing that was really frustrating was that about 75% or more of that revenue was being generated from one very spammy blog that was set up with keywords all built around prepaid wireless.  It felt kind of dirty, but it was the one consistent blog that kept us in the black.  Although technical blogs like mine, Aaron's, Sam Farmer's, and others certainly brought more traffic, the click-through rates on that stuff were dismal.

The decline

Over the course of 2010, we saw a steady decline in anything positive coming from InstantSpot.  What was initially one of the HUGE benefits that we created with InstantSpot ultimately turned into a negative against us.  Our users enjoyed really ridiculous search engine exposure.  A brand new blog could be created, and with a relatively well-crafted title and content, it could easily hit page 1 of Google on a wide array of subjects.  This was due in large part to the fact that we started off with a few blogs that already had PR5 ranks, and we then used what we felt was a really smart way of linking pieces together so that new blogs were made more relevant by being tied to solidly ranked blogs as they came in.  If you think I am overstating this, here are a couple of examples I can think of…

  • When Saddam Hussein was executed, we had a blogger that put up a YouTube video of it (yeah, the appropriateness of that was definitely discussed, but we really never established terms of service that restricted specific content).  This blog entry made page one of Google and was picked up by El Tiempo, a huge Latin American news source, which linked to it as the first blog to carry the story.  We hit somewhere around 100K impressions by mid-morning and our server absolutely croaked.
  • We had a blogger that posted about State Farm life insurance, and for a period that blog's link ranked higher in Google search results than statefarm.com for the search "state farm life insurance".  That one was mind-blowing to me!

There were a lot more anecdotal examples like this, and we were continually shocked by what we were seeing in the way of search relevance.  Unfortunately, we were not the only ones that noticed it.  We ended up with a couple of worthless spam sites that capitalized on it.  They would create a blog and then generate scores of blog entries every day with complete bullshit content stuffed with keywords and links to various products.  Over time, our valid content was overshadowed by our spam content as more and more (and more… and more… and MORE) spam sites were created.  Along with them came thousands upon thousands of spam comments.  We were pretty effective at blocking the bots, but there were a surprising number of manually typed spam comments, which were much more difficult to stop.  Eventually, when you looked at our recent entries across the network, it was just shameful.  The more spam issues that came up, the more performance issues we had as well.  Unfortunately, both Aaron and I were so tied up with our families and our jobs that neither one of us could deal with it effectively, and things just spiraled downward.

Realization

At some point it became quite clear that we were either going to have to invest much more effort, and certainly much more money, to make a real run at it, or we could just continue sailing along as long as it would go and see what happened.  It became clear to us that the kind of traffic we would need for InstantSpot to actually support us was so far beyond reality that we didn't even know what to do about it.  Considering that both Aaron and I have full-time jobs and each supports a family of 5, quitting our jobs, taking loans, and going full bore never seemed like the right decision.  "And who knows… maybe someday it will just take off!"  Obviously, we never made that jump or commitment, and we both feel that we have hit "… as long as it would go".  With the drop in revenue we have seen, we are now actually spending money to keep InstantSpot up.  We have experienced more and more performance issues, and we don't have the available time – or, to be honest, the will – to keep things on track and at the level they should be.  After some discussions, we have decided that it's time to pull the plug.

But, it’s not all bad

It really has been a great experience, and we have learned so much about so many things.  It not only gave us a great tool that boosted our education and skill set, but also a great reason to be passionate about what we do and to strive to do it well.  Hopefully we can find another project on the near horizon that gives us the same opportunity.

Thanks!

In closing, I would like to say thanks…  Thanks first and foremost to my family, who were supportive day in and day out, and believed in what we were trying to do simply because we believed in it.  Thanks to all of the early adopters and people who stuck with us even when things were less than ideal with our system.  Thanks to the many in the ColdFusion community who gave us really great support.  This has been a really great run, and I am happy to have been a part of it.