Parsing CSV files with Grails

One of the arguments that I often make for my use of CFML is how much you can do with so little code. Seemingly every time I attempt something in Grails that I haven't done before, I find that argument holds less water than I thought, as I can often do it even more easily in Grails.

For a current project, we have an occasionally updated CSV document that contains codes related to the customer's industry. Given that this file will change as additional codes are added while the app is in early development, we decided to keep it in our application config directory and ensure that any new codes are added during the application bootstrap routine. Here is what I came up with:


// Insert new codes
def csv = new File("grails-app/conf/code.list.csv")
csv.splitEachLine(',') { row ->
   Code.findByLabel(row[1]) ?: new Code(
      code: row[0],
      label: row[1]
   ).save(failOnError: true, flush: true)
}

Essentially, the above is saying:

  • Read the CSV file.
  • Loop over each line, where each line is referred to as "row" in the closure.
  • Search the database for a code with the same label.
  • If the code does not yet exist in the system, create a new instance of Code, passing in property values from the row in the CSV file.
  • Save the new code to the database.

As you can see, I have added line breaks for readability, but I was able to get the result I was looking for in THREE lines of code! I figured I would share this in case anyone is looking for a similar solution.
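If the file ever grows a header row or stray whitespace, a slight variation could guard against both. This is an untested sketch; the check for a first column of literally 'code' is an assumed header convention, not something from the actual file:

```groovy
def csv = new File("grails-app/conf/code.list.csv")
csv.splitEachLine(',') { row ->
    if (row[0].trim() == 'code') return   // skip a header row, if present (assumed label)
    def label = row[1].trim()
    Code.findByLabel(label) ?: new Code(
        code: row[0].trim(),
        label: label
    ).save(failOnError: true, flush: true)
}
```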

JRun wsconfig error - Security alert: attempt to connect to JRun server from host

I was experimenting with the Railo 3.3 installer, which includes an IIS connector to Tomcat that works really well. Too well, in fact! When I ran it, it unmapped all my existing IIS ISAPI mappings to JRun and started sending all requests to Tomcat.

I decided the quickest fix to this would be to simply open up /JRun4/bin/wsconfig.exe and remap the sites that were no longer connected.  However, when I did this, I received the following error:

Could not connect to JRun/ColdFusion servers on host localhost.

Knowing perfectly well that I had an instance of JRun running, I went to the terminal to look at the standard out and saw this:

Security alert: attempt to connect to JRun server from a 10.252.11.207 host

I suspect that because I am attached to a Wi-Fi connection with an IP address on 192.168.*, and then VPN'd into my company with a second address of 10.252.*, JRun assumes the connection attempt is coming from outside the subnet.

I went digging through files in JRun4/lib and came across security.properties.  In this file, there is a default setting:

jrun.subnet.restriction=255.255.255.0
jrun.trusted.hosts=
jrun.subnet.restriction.ipv6=[ffff:ffff:ffff:ffff:0:0:0:0]

I altered that restriction setting from "255.255.255.0" to "*" like this:

jrun.subnet.restriction=*
jrun.trusted.hosts=
jrun.subnet.restriction.ipv6=[ffff:ffff:ffff:ffff:0:0:0:0]

Once I did this and restarted the server, I was able to use wsconfig without issue.  And my ACF sites are pointed to JRun, my Railo sites are pointed to Tomcat, and all is right in the world again.

NOTE: DO NOT DO THIS ON A PRODUCTION MACHINE! If you do it at all, I strongly recommend making it a very temporary change.
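A narrower alternative that I have not tested: the same file exposes a jrun.trusted.hosts setting, so it may be possible to leave the subnet restriction intact and whitelist only the offending VPN address:

```properties
jrun.subnet.restriction=255.255.255.0
jrun.trusted.hosts=10.252.11.207
jrun.subnet.restriction.ipv6=[ffff:ffff:ffff:ffff:0:0:0:0]
```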

CFML wishlist: All collections should extend Iterator

Have you ever really given a second thought to the fact that in ColdFusion/CFML you have to loop queries, arrays, and structures in completely different ways? For example, each of these loops accomplishes essentially the same task:

<!--- looping our query --->
<cfloop query="myQuery">
	<cfset doStuff() />
</cfloop>
<!--- looping our array --->
<cfloop array="#myArray#" index="i">
	<cfset doStuff() />
</cfloop>
<!--- or --->
<cfloop from="1" to="#ArrayLen(myArray)#" index="i">
	<cfset doStuff() />
</cfloop>

<!--- looping our structure --->
<cfloop collection="#myStruct#" item="i">
	<cfset doStuff() />
</cfloop>

In each of these cases, we are looping over a collection that contains multiple items and acting on each iteration. I have always liked the fact that a ColdFusion array can be converted to a Java iterator like this:

<cfset iterator = myArray.iterator() />
<cfloop condition="#iterator.hasNext()#">
	<cfset thisIteration = iterator.next() />
	<cfset doStuff() />
</cfloop>

However, given the fact that <cfloop array="#myArray#" index="i"> is already an abstraction, it doesn't make sense to use this in most cases. But wouldn't it be cool if you could call myQuery.iterator() or myStruct.iterator() and have the same functionality? Or even better, why not have those collections all extend an iterator class so that it could be simplified even further with myQuery.hasNext() or myStruct.hasNext()?
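Interestingly, you can get part of the way there today: since ColdFusion structures are backed by java.util.Map, the underlying Java iterator is often reachable directly. This is an engine-dependent, untested sketch, not something I would lean on in production:

```cfml
<!--- Engine-dependent sketch: CF structs implement java.util.Map,
      so the underlying Java iterator may be reachable directly --->
<cfset iterator = myStruct.entrySet().iterator() />
<cfloop condition="#iterator.hasNext()#">
	<cfset entry = iterator.next() />
	<!--- entry.getKey() and entry.getValue() expose the current pair --->
	<cfset doStuff() />
</cfloop>
```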

Keep in mind, this discussion is only coming from the perspective of the programming interface itself, and I am not going to get into the behind-the-scenes differences of how a query result is actually a set of arrays of columns, or how an array differs from a struct. My point is simply that since we have these abstractions, it sure would be cool if they were consistent while we were still able to call type-specific functions on them like ArrayFind(), StructDelete(), etc. If we had this ability, our loops in CFSCRIPT would be a lot more consistent as well.

With that said… I will leave you with my wishful implementation for writing loops in CFML:

<cfloop condition="#myQuery.hasNext()#">
	<cfset thisRow = myQuery.next() />
	<cfset doStuff() />
</cfloop>

<cfloop condition="#myArray.hasNext()#">
	<cfset thisItem = myArray.next() />
	<cfset doStuff() />
</cfloop>

<cfloop condition="#myStruct.hasNext()#">
	<cfset thisItem = myStruct.next() />
	<cfset doStuff() />
</cfloop>

ColdFusion 9 catch() is not thread-safe!

I almost hate to admit this… no, I really hate to admit this. For some reason, I have always been under the impression that when you do a catch() in CFSCRIPT, the variable you define in the catch is protected within the catch block. However, it hit me today that it is written to the variables scope by default. Not only that: as I tested further, I believe I have discovered that it is not thread-safe at all! (edit: this problem applies to CFCATCH too. See notes at bottom and in comments)

Want to test this?  Open up a dummy CFM file and run the following:

(edit: I have modified this example since originally posting, putting the try/catch within a method so that it is consistent with the other examples)

WriteDump(ourFunction());	

public void function ourFunction()	{
	try	{
		local.a = b;
	}
	catch( any e )	{
		killE();
		WriteDump(e);
	}
}

public void function killE()	{
	StructDelete(variables,"e");
}

In case what we are doing isn't abundantly obvious, we are creating a forced exception by referencing variable "b", which doesn't exist. In the catch(), we are saving the exception structure as variable "e" so that we can handle it however our business rules dictate. In our example, we are calling a method named killE(), which, as you can see, deletes "e" from the variables scope of the current template. On the following line in the catch(), we dump out the exception details. However, rather than a nice exception telling us that "b" wasn't defined, we get the following:

ColdFusion exception

So all we need to do is change "e" to "local.e" and it will be thread-safe, right?

WRONG!

This is apparently invalid syntax in Adobe ColdFusion (as of v.9).  Take this example:

WriteDump(ourFunction());	

public void function ourFunction()	{
	try	{
		local.a = b;
	}
	catch( any local.e )	{
		killE();
		WriteDump(local.e);
	}
}

public void function killE()	{
	StructDelete(variables,"e");
}

When we run this example, the following occurs:

What?! So apparently we have to use the "var" scope to define our exception, then? So how about this attempt… this should do it, right?

WriteDump(ourFunction());	

public void function ourFunction()	{
	var e = "";

	try	{
		local.a = b;
	}
	catch( any e )	{
		killE();
		WriteDump(e);
	}
}

public void function killE()	{
	StructDelete(variables,"e");
}

WRONG!

If we take this approach, we receive the following:

My read of this exception is that ColdFusion is trying to write a variable into the variables scope that is already defined in the var scope, and it is complaining about the collision.

So then, what are we left with?

I am admittedly not the sharpest crayon in the box, but to me this indicates that catch() is 100% not thread-safe! If anyone sees a problem with my diagnosis or has any thoughts, I am all ears. If this is true, it throws huge wrenches into my work, and I am sure it does for others as well.

EDIT: As Henry has pointed out, Adobe was notified of this bug in July of 2010 – almost 16 months ago! It affects at least versions 8 and 9, and is not limited to CFSCRIPT; it apparently affects CFCATCH as well. I have looked through the tech notes of ColdFusion 9.0.1 Cumulative Hot Fix 1 and Cumulative Hot Fix 2 and found no evidence that it has ever been addressed.

EDIT (again): I have tested this in both Railo and OpenBD. OpenBD does not fail the first test, because it puts variables within methods in the var scope by default. Railo does fail, since it uses the variables scope like ColdFusion does. However, Railo allows you to write catch( any local.e ), and also to var-scope e ahead of time. So in a nutshell, ColdFusion is the only engine that gives us no way to do thread-safe error handling.

How I cut the cord to subscription TV!

I recently reviewed our family budget trying to find areas to trim the fat, and one of the things that just ate me up was how much money we were paying for subscription TV. Our monthly TV payment was $115 for DIRECTV, and I can assure you I rarely, if ever, got $115 worth of use out of it!

A friend of mine suggested that I look into a Roku, which is essentially a small device that connects to your Wi-Fi network and serves internet content to your TV. You can install scores of "channels" on it, such as Netflix, Hulu Plus, Crackle, the NASA channel, Pandora, CNET, and many more. Many channels are free, but some premium channels such as Netflix and Hulu Plus have their own subscription fees. In addition to the official channels offered in the Roku Channel Store, there are many private channels that you can install. Here is a list of private channels that I came across, compiled in April 2011. You can piece together a channel list that suits your needs. I decided to purchase the Roku 2 XD, which offered everything I wanted for a one-time payment of $79, with no recurring subscription fee. With just a power cord and an HDMI cable, I was in business.

One thing that occurs when you switch to a Roku, or internet TV in general, is that you tend to quit using your TV as background noise.  With subscription TV, my family had a tendency to leave the TV on until something vaguely interesting came on and they would sit and watch it.  Instead, under this model, we actually seek out the programs that we want to watch and watch them when we wish.  Using Hulu Plus, we have access to entire series of many of the shows we would typically watch.  Often there is a several day delay between the live program run and the time that it shows up on Hulu, but considering how frequently we would previously DVR shows we wanted to watch and view them much later, very little has changed here.  With Netflix, we have access to the entire Netflix library on demand.

Roku home screen showing channel selection with Netflix focused.

Even with these pieces in place, I knew that I didn't want to be totally cut off from live TV. I still plan on watching every Dallas Cowboys game as it happens, breaking news, certain live shows, and more. Consequently, I decided to get an HD TV antenna so that I could watch all OTA (over-the-air) channels as well. I mounted this antenna, which I found at Best Buy, to my roof using the existing coax in my house. Depending on how close you are to the broadcast towers, you might be able to get by with less of an antenna. You can determine your exact needs by putting your address into www.antennaweb.org, which will tell you how far you are from various stations and the exact compass heading your antenna should point. If you plan on splitting the line to multiple TVs, you may want to look into a line amplifier (around $20) to reduce attenuation. Since I am currently only serving one TV, I haven't installed an amplifier at the splitter, but when I bring more TVs online in my house I may opt to do so.

As a side note, I found it kind of interesting determining which ends of the coax on the outside of the house went to specific rooms in the house.  That is probably a post worthy of its own space!

Once the antenna was installed, I was shocked by how many OTA channels there are! For example, on standard cable/satellite, and on analog antenna before that, my local channel 8 WFAA (ABC affiliate) consisted of a single channel. Now that channel exists as 8-1, alongside 8-2, which is constant weather from Ch 8, and 8-3, which is… well, I don't even know yet, but it's some kind of programming also provided by WFAA. Many of the channels have sub-channels like this. After letting the TV scan for channels, we were going through all that it found, and my kids started watching 62-2, a children's channel called Qubo. Until now, I always thought Qubo was a subscription channel, and I had no idea there were OTA channels that high up the dial.

I have also found that the picture from the HD antenna is stunning! From what I have read on the topic, due to the compression techniques the cable/satellite companies use for broadcasting, you will never see from them the picture quality that you can get from a digital antenna.

So, what is missing?  One thing that I plan on adding is a DVR solution for broadcast TV.  I haven’t really figured out exactly what I am going to do here, but it seems that there are numerous options.  They do sell standalone DVR units like this, but I am considering setting up a media server.  That way I can pretty much play anything I want from the media server, through the Roku using something like Plex or Firefly.

Another point worth mentioning is that not all network shows are available on Hulu Plus, although almost all that we have been interested in are there. What many people apparently do is subscribe to torrent sites, which download specific shows for you once they become available (usually almost immediately after airing). They are automatically placed onto your media server, and then you can watch them via your Roku. I can't speak to the legality of this approach, but it is a method that I have seen nonetheless. I believe that if I had a DVR like the one I mentioned above, it would eliminate much of the need for this approach.

Bottom line… with both Netflix and Hulu Plus, my total monthly expense is now under $15.00. Amazingly, I am saving $1,200 per year over what I was spending previously! If there is a trade-off in usability (and many would argue that there isn't), it is FAR outweighed by the savings over time.

Video blogging on the cheap – not as easy as it should be!

I just recorded two screencast videos last night that I wanted to use as video blog entries. Seems easy right? Just find a video host!

Unfortunately, “easy” is far from the way I would describe my experience, and I am somewhat exasperated by the process at the moment. So here is the detail: I have two videos, one being 9:17 long, and the other being 15:02. Both of these are recorded as OGV files, which is part of the free, open, cross-platform OGG media container format.  All I need to do is find a service to host them and stream them.  So far so good right?

I decided that I would try to look around for a video hosting solution other than YouTube, since I have posted screencasts on there before and the video degradation was horrendous.  After some googling and reading reviews, I started down a spiraling path of services leading to nowhere, beginning with….

  • Vimeo (verdict: fail) – Vimeo seemed like a great place to start.  Any time I have seen their videos, I have never noticed a degradation.  They offer HD and the service is free – kind of.  In all actuality, there were three issues for me here.
    • bad: They do not support the OGV file format, so I had to convert the OGV to an AVI before uploading it. Of course, they don't actually tell you this until you have sat through an entire upload first! Some degradation occurred during that conversion, so even after uploading, the quality wasn't as good as I would like.
    • bad: Free accounts are only allowed to upload a single HD video per week.  Already in my first try I had two, so that is a show stopper.
    • good: The HD version that was uploaded looked better than many of the alternatives.
    • bad: You can't embed the HD version. If users wish to see it, they have to click through the player and watch it on the Vimeo site.
  • Viddler (verdict: fail) – Viddler seemed like a good alternative to Vimeo.  However, ultimately it doesn’t seem to be the direct fit either.
    • bad: Just as with Vimeo, they do not support OGV.
    • good: As opposed to Vimeo, at least they tell you about the lack of OGV support as soon as you attempt the upload!
    • bad:  Since I had already converted one of my videos to AVI, I went ahead and tried it.  Even in full screen mode, the degradation was bad enough that I couldn’t see what I was typing in the video, which is kind of the point!
  • YouTube (verdict: fail) – After nixing Viddler,  I thought “why not at least try YouTube again?”, and I was soon reminded of exactly why not.
    • good: They support OGV!
    • bad: Even my 9:41 video was deemed “too long” and was promptly removed.
    • bad: I couldn’t even get far enough to report on degradation!

So just as I began typing this blog entry to air my dissatisfaction with things in general, I came across this post, praising the combination of using Jing to record, and Screencast.com to host the video.  The video clarity on his example was really impressive.  “Ha!” I thought, “finally!”.  So I now have one more to add to my list:

  • Screencast.com (verdict: fail)
    • good: They allow you to upload any file type whatsoever! (I think anyway)
    • bad: They only embed a few different file types into players.  OGV is again not supported.

So here I sit, still without a good solution to what initially seemed like it should be a no-brainer of a problem to solve.  The amount of time that I have wasted to still be sitting at square one is terribly aggravating.  Between upload times and service-specific encoding times, I am more hours deep into this than I care to think about.

HTML5 to the rescue?

One thing that came out of this search is that I learned that HTML5 natively supports OGG/OGV via the <video/> tag (more here), and based on an example on this page, it looks very cool! The only fundamental thing holding me back at the moment is that there doesn't appear to be any option to let the user take the video full-screen from the player. So close, yet… still no solution!
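For reference, the markup side is about as simple as it gets. A minimal sketch (the filename is hypothetical, and only OGG-capable browsers will play it):

```html
<!-- Minimal sketch: the filename is hypothetical -->
<video src="myscreencast.ogv" controls width="640">
	Sorry, your browser does not support the HTML5 video tag.
</video>
```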

If anyone has any good recommendations, feel free to leave them in the comments.

How to: write the last N linux terminal commands to a file

Sometimes blog entries are for you.  Sometimes they are for me.  This one is the latter.

The other day I asked the following question on Twitter:

Anyone know a way to write out the last N commands run in the #linux terminal to a file?

 

I got a plethora of responses within minutes, but by far the most complete and tricked out response came from Joseph Lamoree @jlamoree, who gave the following solution:

 

history | tail -n 10 | sed -E 's/^ +[0-9]+ +//' | grep -vE '^history$' > cmds

 

In that command, the "10" represents the last 10 commands, and "cmds" is the filename the output will be written to. Since there isn't the remotest chance in hell that I would ever remember this, and Twitter is about the worst place to go back and find technical information later, I am putting it here on my blog for future reference. Thanks Joseph!
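To unpack Joseph's one-liner: `history` lists numbered commands, `tail -n 10` keeps the last ten, the `sed` strips the leading history numbers, and the `grep -v` drops the `history` command itself. You can watch the stages work against some simulated history output (the commands below are made-up examples; only the redirection to the file is omitted so the result prints to the terminal):

```shell
# Simulated `history` output, piped through the same stages as the one-liner
printf '  501  ls -la\n  502  git status\n  503  history\n' \
  | tail -n 10 \
  | sed -E 's/^ +[0-9]+ +//' \
  | grep -vE '^history$'
```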

Open Letter: Stepping down as DFWCFUG Manager

At Tuesday night’s meeting (3/8/2011),  I announced that after 55 meetings at the helm, I am stepping down as manager of the Dallas Ft. Worth ColdFusion User Group. 

Am I tired of doing it?  Am I leaving the language?  NO, and NO!

As an Adobe UGM, one of my responsibilities is to endorse and evangelize the Adobe ColdFusion (ACF) product. For numerous reasons over the past year or so, I have found myself at growing odds with this task. As competing open source engines such as Railo and OpenBD gain in functionality, stability, and performance, and are made freely available to the CFML community, it is impossible to ignore them as true contenders in this space. Where they were once viewed as free alternatives, they have moved into the position of driving change and driving features that I would like to see in ACF. I wholly believe that these engines are the future of our community and should be given equal attention rather than be viewed as mere alternatives. Given that, it would be disingenuous for me to continue in my role as an Adobe UGM.

As of its inaugural meeting on April 5, 2011 at the Paladin Consulting office in Dallas, I am going to serve as coordinator of the DFW CFML User Group, a non-product-specific user group composed of enthusiasts of the CFML language, regardless of the engine that runs it.  Without the pressure of promoting one company’s product over another, we can focus on what is really important to us, which is the power of the CFML language and the diverse ways that it can be used across various platforms.

It is important to note that the new group will not be strictly an "open source" group, nor is this a swipe of any kind at Adobe itself. The group simply will not endorse a single product as the only viable solution for writing enterprise-level applications in CFML. Our content will doubtless include Adobe ColdFusion, but will not be exclusive to it.

So where does this leave the DFWCFUG? Adrian Moreno has served as co-manager of the group for several years now. Adobe mandated this hierarchical approach to how their groups are organized so that, in the event of the departure of a manager, the group can carry on without interruption with the co-manager taking over. I have spoken with Adrian at length on this topic, and he does not share my vision for the DFW CFML User Group; he feels that it is important to have a product-focused user group under Adobe. As a result, he has opted to take the role of group manager effective immediately and will be leading the DFWCFUG.

I want to make it abundantly clear that this will not be an “us vs. them” scenario between the two groups.  We are in this together as one community with varying interests and it is in all of our interests to positively promote both groups.

Fortunately, I think that this leaves the DFW CFML developers with some excellent options!

I plan on sharing much more about the new DFW CFML User Group in the near future.  Please follow us on Twitter at @dfwcfml and look for upcoming announcements in the next few days.

Lastly, thanks for letting me serve as leader of the DFWCFUG all these years.  It has been an honor and a privilege to do so.

~Dave Shuck
@dshuck
daveshuck.com

Refactoring: avoiding nested conditional statements

Recently I was given the task of adding a new validation routine to an existing validation process. In this piece of code, the requirements mandated that a series of sequential tests be run; in the event of a failure of any of them, the process would kick out, set an error state, provide user feedback, and perform whatever other tasks needed to occur. We have all seen processes like this before. Essentially it looked like this:

error = true;
if ( testOne() ) {
    if ( testTwo() ) {
        if ( testThree() ) {
            if ( testFour() ) {
                error = false;
                doAllTestsPassedStuff();
            }
        }
    }
}
if ( error ) {
    handleErrorCondition();
}

Looking at this block of code, the intent is pretty obvious: we progressively run tests as long as the previous test returned true, eventually firing the doAllTestsPassedStuff() method. If any of the tests fail, we call handleErrorCondition(). While this approach is completely functional, maintaining it is no fun, and it just feels wrong to me. For the task I was given, I had to add a new test, davesSuperTest(), between the 2nd and 3rd conditional blocks. If I were to follow the previous approach, I would insert it there and push the testThree() and testFour() conditional blocks even further to the right. In my opinion, this is an ugly block that is getting uglier by the minute.

By altering the approach to use a try/catch block, we can maintain the same level of control and order of operations dictated by the requirements, but each condition becomes insulated from the others, like this:

try {
	if ( !testOne() )	{
		throw "fail:testOne";
	}
	if ( !testTwo() )	{
		throw "fail:testTwo";
	}
	if ( !davesSuperTest() )	{
		throw "fail:davesSuperTest";
	}
	if ( !testThree() )	{
		throw "fail:testThree";
	}
	if ( !testFour() )	{
		throw "fail:testFour";
	}
	// if we reach this point, then all of the above tests passed.
	 doAllTestsPassedStuff()
}	
catch(e)	{
	handleErrorCondition()
}

Given this approach, it is very simple to add or remove conditions without disrupting the others. Even better, I don't have to scroll to my second monitor on the right!
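As a side benefit, the thrown strings already identify which test failed, so the catch block can pass that along to the error handler. A sketch (assuming, as in CFSCRIPT, that the thrown string surfaces as the exception message; the message-taking handleErrorCondition() signature is hypothetical):

```cfml
try {
	if ( !davesSuperTest() )	{
		throw "fail:davesSuperTest";
	}
	doAllTestsPassedStuff();
}
catch(e)	{
	// e.message carries the thrown string, e.g. "fail:davesSuperTest",
	// so the handler can report exactly which test failed
	handleErrorCondition( e.message );
}
```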

Piecing together optional view elements at runtime with Mach-II

Often in web development, you run across a case where a display contains optional view elements that are derived at runtime. Perhaps a section of a form is only available to residents of the US. Maybe only users with a certain level of group access can see a section of a page that is partially visible to other user types. I am sure you can think of numerous cases from your own work.

In a current project at our company, one of our developers was tasked with rewriting an old piece of legacy code in which logged-in agents can select one or more of multiple reports to display. The legacy code was a complete nightmare that would probably be worthy of an entire series on what not to do, but that is for another day! To boil this piece down a bit: the agent has a series of checkboxes for specific reports, plus "to" and "from" date inputs to provide a date range. Depending on what the agent selects, the submission page might show a single report or several reports in line.

One approach would be to have an event defined in which you compile each piece of data into some kind of data collection:

if ( [Report1 was selected] )
     get data for Report1
if  ( [Report2 was selected] )
     get data for Report2
(... and so on...)

Then on the view, you could do something like:

if ( [we have report data for Report1] )
     show Report1
if ( [we have report data for Report2] )
     show Report2
(... and so on ...)

Well, we could, but it would be wrong! Why? For one thing, we now have conditional logic about each report built into multiple places in our application. From a complexity and maintenance standpoint, we have just made it, at a minimum, twice as complex as it needs to be. There is also a strong argument to be made (and I would make it!) that your view shouldn't be responsible for determining what it is supposed to display. It should simply display!

So what is another approach? How could we employ MVC techniques without the individual components becoming intertwined and creating yet another maintenance issue? Here is the solution that I proposed to our developer.

First, let's start with our form, which is remarkably simple:

<h3>Select the reports you would like to view</h3>
<cfoutput>
<form name="reportform" action="#buildUrl( 'viewreports' )#" method="post">
<input type="checkbox" name="reportList" value="report1" /> Report 1<br />
<input type="checkbox" name="reportList" value="report2" /> Report 2<br />
<input type="checkbox" name="reportList" value="report3" /> Report 3<br />
<input type="checkbox" name="reportList" value="report4" /> Report 4<br />
<input type="checkbox" name="reportList" value="report5" /> Report 5<br />
<input type="submit" value="run reports">
</form>
</cfoutput>

As you can see, we are going to load up an event-arg on the submission named "reportList" that will be a comma-separated list of the reports we will display. For instance, if we make the selections you see below, then on the viewreports event, event.getArg( "reportList" ) will be: report1,report3,report5

Report Selection form output

I decided that generally I wanted it to behave with a flow like this:

Reports flow diagram

It is a good goal in application development to move specific knowledge of application flow out of any component (view, service, or otherwise) that is not responsible for it, to avoid coupling issues. For instance, our report display page shouldn't understand flow, should it? (hint: "no") Our service layer that is responsible for retrieving data shouldn't, should it? (you guessed it: "no")

So where does that responsibility lie?  I place it squarely on the front controller framework at hand, namely Mach-II in this case.

If that is the approach we are going to follow, then how can we have freely operating pieces and create a composite view without the individual pieces having any knowledge of each other, or of their role in the bigger picture? We achieve this by creating small encapsulated pieces that are individually responsible for their limited roles, counting on our framework to do the rest.

If you look at the flow diagram above, you will see that we start with a conditional statement on our submission event: “Has form output been generated?”

In our Mach-II configuration we can achieve this by doing the following:

<event-handler event="viewreports" access="public">
 <event-mapping event="noData" mapping="multi_report1" />
 <filter name="checkForReportData" />
 <view-page name="selectreports" contentArg="form" />
 <view-page name="reports" />
</event-handler>

Let's talk about what those pieces are doing. First, we are defining an event-mapping, "noData". This means that anywhere further in this event, if something announces "noData", the event that will actually be announced is "multi_report1". By doing this, we don't bury specific knowledge in the component responsible for the announcing (more on that in a moment). Next, we call a filter named checkForReportData. Filters are Mach-II components that contain a single public method, filterEvent(), which returns a boolean telling Mach-II whether or not it should continue within the current event. In the code above, if the filter returns "false", the <view-page/> nodes will not be processed. So let's take a look at the filterEvent() method.

<cffunction name="filterEvent" access="public" returntype="boolean" output="false" hint="I am invoked by the Mach II framework.">
     <cfargument name="event" type="MachII.framework.Event" required="true" hint="I am the current event object created by the Mach II framework." />
     <cfargument name="eventContext" type="MachII.framework.EventContext" required="true" hint="I am the current event context object created by the Mach II framework." />

     <cfset var result = event.isArgDefined( "reportOutput" ) />

     <cfif NOT result>
          <cfset announceEvent( "noData", event.getArgs() ) />
     </cfif>

     <cfreturn result />
</cffunction>

Very simply, we are asking: is there an event-arg named reportOutput defined? If there is, we return true, telling the event to continue.  If not, we announce the event “noData” and return false.   By announcing a generic event named “noData”, and then defining what “noData” means in the XML config, we have insulated this filter from change.  For instance, right now the <event-mapping/> says that “noData” means we should announce “multi_report1”.  If this ever changes to another report, then we only have to change the config.  Additionally, we might be able to repurpose this filter in another way in the future and announce a completely different event by using a different event-mapping.
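As a quick sketch of that repurposing (the printreports event and the view names here are hypothetical, just to illustrate the point), the same filter could be dropped into a completely different event where “noData” means something entirely different:

```xml
<event-handler event="printreports" access="public">
     <!-- in this context, "noData" reroutes to a hypothetical error page
          rather than kicking off report generation -->
     <event-mapping event="noData" mapping="showPrintError" />
     <filter name="checkForReportData" />
     <view-page name="reports.print" />
</event-handler>
```

The filter itself never changes; only the config gives “noData” its meaning.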

So in our example, we have no reportOutput on our first pass through this method, so we are being rerouted to the event “multi_report1“.  Here is what it looks like:

<event-handler event="multi_report1" access="private">
     <event-arg name="reportName" value="report1" />
     <event-mapping event="nextEvent" mapping="multi_report2" />
     <filter name="checkIncludeReport" />
     <notify listener="ReportListener" method="getData" resultArg="data" />
     <view-page name="reports.report1" contentArg="reportOutput" append="true" />
     <announce event="nextEvent" copyEventArgs="true" />
</event-handler>

On the second line, all we are doing is defining an event-arg named “reportName” and assigning it a value of “report1”.   We will be using this value in a moment.  Before we get to that, and now that you understand what event-mappings are doing, the third line should be clear: we are just telling Mach-II “if someone or something announces nextEvent within the context of this event, announce multi_report2 instead”.  Again, this allows our components to announce generic events which are explicitly defined in the config.   Next, we are calling a filter named checkIncludeReport to see if report1 has been selected in the form.   If the report was not selected in the form, we will kick out and announce nextEvent, aka multi_report2.   However, if the report is included, we will continue down the line, calling a method on our listener to retrieve data, and then using that data in a view named “reports.report1”.  We take that generated HTML and append it onto an event-arg named “reportOutput”.   If you look at our code above, you will be reminded that this is the argument we were testing for in the checkForReportData filter.   Here is a look at our checkIncludeReport filter, which makes the decision to include this report or not.

<cffunction name="filterEvent" access="public" returntype="boolean" output="false" hint="I am invoked by the Mach II framework.">
     <cfargument name="event" type="MachII.framework.Event" required="true" hint="I am the current event object created by the Mach II framework." />
     <cfargument name="eventContext" type="MachII.framework.EventContext" required="true" hint="I am the current event context object created by the Mach II framework." />    

     <cfset var result = ListFindNoCase( event.getArg( "reportList" ), event.getArg( "reportName" ) ) />

     <cfif NOT result>
          <cfset announceEvent( "nextEvent", event.getArgs() ) />
     </cfif>

     <cfreturn result />   
</cffunction>

All this filter is doing is checking the event-arg reportList, which is a comma-separated list of reports, to see if the value of the event-arg reportName (which was defined on line 2 above) exists in the list.  Based on our example of selecting reports 1, 3, and 5, the plain-English translation of this comparison is:  if the list “report1,report3,report5” contains “report1”, return true; otherwise announce “nextEvent” and return false. As you surely know by now, in this case “nextEvent” translates to “multi_report2”.
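If using ListFindNoCase() as a boolean seems odd, here is a quick standalone sketch of what it returns (the variable names are mine; the return values follow the standard CFML behavior of the function):

```cfml
<cfset selectedReports = "report1,report3,report5" />

<!--- ListFindNoCase() returns the 1-based position of the match, or 0 if no match --->
<cfset foundPos = ListFindNoCase( selectedReports, "report1" ) />   <!--- 1 --->
<cfset missingPos = ListFindNoCase( selectedReports, "report2" ) /> <!--- 0 --->

<!--- CFML evaluates 0 as false and any non-zero number as true,
      so the position doubles as our boolean filter result --->
```

Since CFML treats 0 as false, the position returned by ListFindNoCase() can be handed straight back as the filter’s boolean result, which is exactly what filterEvent() does.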

Essentially, we just repeat this exact pattern for the next four events, with a minor change in the last one:

<event-handler event="multi_report2" access="private">
    <event-arg name="reportName" value="report2" />
    <event-mapping event="nextEvent" mapping="multi_report3" />
    <filter name="checkIncludeReport" />
    <notify listener="ReportListener" method="getData" resultArg="data" />
    <view-page name="reports.report2" contentArg="reportOutput" append="true" />
    <announce event="nextEvent" copyEventArgs="true" />
</event-handler>

<event-handler event="multi_report3" access="private">
     <event-arg name="reportName" value="report3" />
     <event-mapping event="nextEvent" mapping="multi_report4" />
     <filter name="checkIncludeReport" />
     <notify listener="ReportListener" method="getData" resultArg="data" />
     <view-page name="reports.report3" contentArg="reportOutput" append="true" />
     <announce event="nextEvent" copyEventArgs="true" />
</event-handler>

<event-handler event="multi_report4" access="private">
     <event-arg name="reportName" value="report4" />
     <event-mapping event="nextEvent" mapping="multi_report5" />
     <filter name="checkIncludeReport" />
     <notify listener="ReportListener" method="getData" resultArg="data" />
     <view-page name="reports.report4" contentArg="reportOutput" append="true" />
     <announce event="nextEvent" copyEventArgs="true" />
</event-handler>

<event-handler event="multi_report5" access="private">
     <event-arg name="reportName" value="report5" />
     <event-mapping event="nextEvent" mapping="viewreports" />
     <filter name="checkIncludeReport" />
     <notify listener="ReportListener" method="getData" resultArg="data" />
     <view-page name="reports.report5" contentArg="reportOutput" append="true" />
     <announce event="nextEvent" copyEventArgs="true" />
</event-handler>

As I mentioned, there is a slight change in multi_report5 in that nextEvent is defined as “viewreports”.   By doing this, we have ended our report generation and are redirecting the flow back to the initial event that kicked this process off.  Since we now have reportOutput data, we are directed to the page that outputs it all.  Quite simply, our big, massive, magnificent multi-report display page looks like this:

these are the reports:

<cfoutput>#event.getArg( "reportOutput" )#</cfoutput>

There is no conditional nonsense, and the view simply outputs all of the generated output that was appended into the event-arg reportOutput.   Additionally, if you reflect on the things we have done, nowhere are we explicitly saying “if the user selected report1, do something”.  We have left it all fairly generic and have hopefully created some potentially reusable components.    For instance, let’s say that we now have a requirement for an event that only displays report2. No problem!  All we need to do is add an additional event like this:

<event-handler event="report2" access="public">
     <notify listener="ReportListener" method="getData" resultArg="data" />
     <view-page name="reports.report2" />
</event-handler>

Easy, huh!

Lastly, I know that some of the more astute among you may have noticed a fatal flaw in the design above.  What happens when no reports are selected?   In the interest of keeping this example as stripped down as I could, I let that one go, but it is a very simple fix.  What would you do?  Where would you put it?   Feel free to post your fix in the comments, along with any other thoughts you have on this solution.
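For what it’s worth, here is one possible shape for that fix (a sketch only — the checkReportsSelected filter and the selectReportsForm mapping are hypothetical names, not part of the example files).  With nothing selected, every checkIncludeReport filter returns false, multi_report5 eventually announces nextEvent back to viewreports, checkForReportData still finds no reportOutput, and round we go again.  One way out is to guard the submission event so that an empty reportList reroutes back to the selection form before any report events run:

```xml
<event-handler event="viewreports" access="public">
     <event-mapping event="noSelection" mapping="selectReportsForm" />
     <event-mapping event="noData" mapping="multi_report1" />
     <!-- hypothetical filter: announces "noSelection" and returns false
          when the reportList event-arg is an empty list -->
     <filter name="checkReportsSelected" />
     <filter name="checkForReportData" />
     <view-page name="selectreports" contentArg="form" />
     <view-page name="reports" />
</event-handler>
```

The guard follows the same pattern as the other filters: announce a generic event, and let the config decide what it means.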

download fully-functional example files – NOTE: doesn’t include the Mach-II framework