The Unbridled 4th Branch of Government

In a follow-up to my post yesterday about the redundancy of federal agencies such as the EPA, one point I didn’t make is that they must continually create new rules in order to justify their existence (rule-making being the role of the legislature, not of the 4th branch of government that is the bureaucracy). Their very existence ensures that they will continually restrict freedoms, since that is the only way they can remain. Here is a perfect example. Starting in 2015, the federal government will dictate what kind of wood-burning stove you can have… for your health, of course. I don’t care if you are liberal, conservative, libertarian, or politically agnostic. This is a mockery of the ideals that this country was founded upon.

The fact that we have unaccountable federal agencies acting as a legislative branch of government, creating tens of thousands of binding regulations each year with no real control in the hands of the citizenry, is madness. We are in a heap of trouble.

At what point are we collectively not going to be OK with this?

Why eliminating federal agencies is our Constitutional obligation

I have heard a couple of conversations lately that made me realize how many people simply don’t understand this basic principle. For the casual bystander who hears recommendations to eliminate the EPA, the Dept. of Education, and other federal agencies, it is almost understandable that they may recoil, thinking “Don’t they care about our environment?” or “Don’t they care about education?”, if they don’t understand the Constitution and the idea of a federalist nation.

Eliminating those agencies has nothing whatsoever to do with those specific topics being vital or not vital in any way. The Tenth Amendment to the United States Constitution reads: “The powers not delegated to the United States by the Constitution, nor prohibited by it to the States, are reserved to the States respectively, or to the people.”

That means, effectively, ‘If it isn’t mentioned in the Constitution, it is up to the states to manage as they deem fit’, period. Education, the environment, and countless other roles now grabbed by our federal leviathan were never enumerated in the US Constitution, and are therefore the responsibilities of the states themselves.

If you take time to read The Federalist Papers or the historical documents around the creation of this nation, you will understand that we were never intended to have top-down, one-size-fits-all blanket solutions applied across this country. States were given the responsibility to create and manage their needs. As we stand today, we have morphed into a system in which we are ruled from afar on affairs that should be handled at the state level, through heavy-handed blanket rules that make no allowance for differences in regional needs. With a top-down approach, where federal rules dictate everything down to the type of light bulb you are allowed to buy, what is the point of having individual states other than to have your own flag?

The fact is that those federal agencies are redundant and serve as a drain on our systems, both financially and by ignoring local needs, making it more difficult for states to act in the ways that best fit them. Why are they redundant? I will speak directly about Texas, since that is what I know, but the same applies to every state in the union. Here we have the TCEQ (Texas Commission on Environmental Quality), whose mission statement is “… to protect our state’s public health and natural resources consistent with sustainable economic development. Our goal is clean air, clean water, and the safe management of waste.” If we already have state-level agencies with the goal of managing these things, then there is no need for an EPA. None. What happens instead is that we send a huge amount of our money out of state to the federal government to support these redundant systems, and allow far less to remain in our states, where the management should be focused in the first place.

This is the same for many other agencies. If this is not a familiar concept to you, I would encourage you to remember these things as you hear politicians talk about eliminating programs, and to understand it for what it is: bringing us back closer to the federalist system we were intended to have.

Parsing CSV files with Grails

One of the arguments that I often make for my use of CFML is how much you can do with so little code. Yet seemingly every time I attempt something that I haven’t done previously with Grails, I find that argument holds less water than I thought, as I can often do it even more easily in Grails.

For a current project, we have an occasionally updated CSV document that contains codes related to the customer’s industry. Given that this file will keep changing as new codes are added while the app is in early development, we decided to keep it in our application config directory and ensure that any new codes are added during the application bootstrap routine. Here is what I came up with:


// Insert new codes
def csv = new File("grails-app/conf/code.list.csv")
csv.splitEachLine(',') { row ->
   // row[0] holds the code, row[1] its label; create a Code only
   // when no record with that label already exists
   Code.findByLabel(row[1]) ?: new Code(
      code: row[0],
      label: row[1]
   ).save(failOnError: true, flush: true)
}

Essentially, the above is saying:

  • Read the CSV file
  • Loop each line, where each line is referred to as “row” in the closure.
  • Search the database for a code with the same label
  • If the code does not yet exist in the system, create a new instance of Code passing in property values from the row in the CSV file.
  • Save the new code to the database.

As you can see, I added line breaks for readability, but I was able to get the result I was looking for in THREE lines of code! I figured I would share in case anyone is looking for a similar solution.
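
For the curious, here is roughly what the same routine might look like in CFML, purely as a sketch for comparison. Note the assumptions: “codeDAO” is a hypothetical persistence object standing in for what GORM gives you for free, and the CSV is assumed to contain no quoted commas.

<cfloop file="#ExpandPath('conf/code.list.csv')#" index="line">
	<cfif NOT codeDAO.existsByLabel(ListGetAt(line, 2))>
		<cfset codeDAO.save(code = ListGetAt(line, 1), label = ListGetAt(line, 2)) />
	</cfif>
</cfloop>

Comparable in size, but the Grails version hands me the dynamic finder and the domain class without my having to write a DAO at all.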

JRun wsconfig error – Security alert: attempt to connect to JRun server from host

I was experimenting with the Railo 3.3 installer, which includes an IIS connector to Tomcat that works really well.  Too well, in fact!  When I ran it, it actually unmapped all my existing IIS ISAPI mappings to JRun and was sending all requests to Tomcat.

I decided the quickest fix to this would be to simply open up /JRun4/bin/wsconfig.exe and remap the sites that were no longer connected.  However, when I did this, I received the following error:

Could not connect to JRun/ColdFusion servers on host localhost.

Knowing perfectly well that I had an instance of JRun running, I went to the terminal to look at the standard out and saw this:

Security alert: attempt to connect to JRun server from a 10.252.11.207 host

I suspect that because I am attached to a WIFI connection with an IP address on 192.168.*, and am then VPN’d into my company with a second address of 10.252.*, JRun assumes that the connection attempt is coming from outside the subnet.

I went digging through files in JRun4/lib and came across security.properties.  In this file, there is a default setting:

jrun.subnet.restriction=255.255.255.0
jrun.trusted.hosts=
jrun.subnet.restriction.ipv6=[ffff:ffff:ffff:ffff:0:0:0:0]

I altered that restriction setting from "255.255.255.0" to "*" like this:

jrun.subnet.restriction=*
jrun.trusted.hosts=
jrun.subnet.restriction.ipv6=[ffff:ffff:ffff:ffff:0:0:0:0]

Once I did this and restarted the server, I was able to use wsconfig without issue.  And my ACF sites are pointed to JRun, my Railo sites are pointed to Tomcat, and all is right in the world again.

NOTE: DO NOT DO THIS ON A PRODUCTION MACHINE!   If you must, I strongly recommend that it be a very temporary change.
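
As an aside, judging purely by the other property names in that same file, a narrower fix might be to leave the subnet restriction alone and instead list the offending VPN address in jrun.trusted.hosts. I have not tested this myself, so consider it a guess:

jrun.subnet.restriction=255.255.255.0
jrun.trusted.hosts=10.252.11.207
jrun.subnet.restriction.ipv6=[ffff:ffff:ffff:ffff:0:0:0:0]

If that works, it would avoid opening the server up to connection attempts from everywhere.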

CFML wishlist: All collections should extend Iterator

Have you ever really given a second thought to the fact that in ColdFusion/CFML you have to loop queries, arrays, and structures in completely different ways?   For example, each of the following loops is essentially doing the same thing:

<!--- looping our query --->
<cfloop query="myQuery">
	<cfset doStuff() />
</cfloop>
<!--- looping our array --->
<cfloop array="#myArray#" index="i">
	<cfset doStuff() />
</cfloop>
<!--- or --->
<cfloop from="1" to="#ArrayLen(myArray)#" index="i">
	<cfset doStuff() />
</cfloop>

<!--- looping our structure --->
<cfloop collection="#myStruct#" item="i">
	<cfset doStuff() />
</cfloop>

In each of these cases we are looping a collection that contains multiple items and acting on each iteration.  I have always liked the fact that the ColdFusion array can be converted to a Java iterator like this:

<cfset iterator = myArray.iterator() />
<cfloop condition="#iterator.hasNext()#">
	<cfset thisIteration = iterator.next() />
	<cfset doStuff() />
</cfloop>

However, given the fact that <cfloop array="#myArray#" index="i"> is already an abstraction, it doesn’t make sense to use this in most cases.   But wouldn’t it be cool if you could call myQuery.iterator() or myStruct.iterator() and have the same functionality?  Or even better, why not have those collections all extend an iterator class so that it could be simplified even further with myQuery.hasNext() or myStruct.hasNext()?

Keep in mind, this discussion is only coming from the perspective of the programming interface itself; I am not going to get into the behind-the-scenes differences of how a query result is actually a set of column arrays, or how an array differs from a struct.  My point is simply that if we have these abstractions, it sure would be cool if they were consistent, while still letting us call type-specific functions like ArrayFind(), StructDelete(), etc.  If we had this ability, our loops in CFSCRIPT would be a lot more consistent as well (see the CFSCRIPT sketch after the tag examples below).

With that said… I will leave you with my wishful implementation for writing loops in CFML:

<cfloop condition="#myQuery.hasNext()#">
	<cfset thisRow = myQuery.next() />
	<cfset doStuff() />
</cfloop>

<cfloop condition="#myArray.hasNext()#">
	<cfset thisItem = myArray.next() />
	<cfset doStuff() />
</cfloop>

<cfloop condition="#myStruct.hasNext()#">
	<cfset thisItem = myStruct.next() />
	<cfset doStuff() />
</cfloop>
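
And as promised above, the same imaginary API would clean up our CFSCRIPT loops as well. To be clear, this is purely wishful thinking; hasNext() and next() do not exist on these types in any current engine:

// wishful CFSCRIPT -- hasNext()/next() are the methods I wish existed
while( myQuery.hasNext() ) {
	thisRow = myQuery.next();
	doStuff();
}

while( myStruct.hasNext() ) {
	thisItem = myStruct.next();
	doStuff();
}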

ColdFusion 9 catch() is not thread-safe!

I almost hate to admit this… no, I really hate to admit this.   I have always been under the impression that when you do a catch() in CFSCRIPT, the variable you define in the catch is protected within the catch block.   However, it hit me today that it is written to the variables scope by default.  Not only that: as I tested further, I believe I have discovered that it is not thread-safe at all!   (edit: this problem applies to CFCATCH too. See notes at bottom and in comments)

Want to test this?  Open up a dummy CFM file and run the following:

(edit: I have modified this example since originally posting, putting the try/catch within a method so that it is consistent with the other examples)

WriteDump(ourFunction());	

public void function ourFunction()	{
	try	{
		local.a = b;
	}
	catch( any e )	{
		killE();
		WriteDump(e);
	}
}

public void function killE()	{
	StructDelete(variables,"e");
}

In case what we are doing isn’t abundantly obvious: we are creating a forced exception by referencing variable “b”, which doesn’t exist.  The catch() saves the exception structure as variable “e” so that we can handle it however our business rules dictate.  In this example we then call a method named killE(), which, as you see, deletes “e” from the variables scope of the current template, simulating what a competing thread could do.  On the following line in the catch() we dump out the exception details.  However, rather than a nice exception telling us that “b” wasn’t defined, we get the following:

[Screenshot: ColdFusion exception]

So all we need to do is change “e” to “local.e” and it will be thread-safe, right?

WRONG!

This is apparently invalid syntax in Adobe ColdFusion (as of v.9).  Take this example:

WriteDump(ourFunction());	

public void function ourFunction()	{
	try	{
		local.a = b;
	}
	catch( any local.e )	{
		killE();
		WriteDump(local.e);
	}
}

public void function killE()	{
	StructDelete(variables,"e");
}

When we run this example, the following occurs:

[Screenshot: ColdFusion syntax error]

What?!  So apparently we have to use the “var” scope to define our exception, then.  So how about this attempt… this should do it, right?

WriteDump(ourFunction());	

public void function ourFunction()	{
	var e = "";

	try	{
		local.a = b;
	}
	catch( any e )	{
		killE();
		WriteDump(e);
	}
}

public void function killE()	{
	StructDelete(variables,"e");
}

WRONG!

If we take this approach, we receive the following:

[Screenshot: ColdFusion exception]

My read of this exception is that ColdFusion is trying to write the catch variable into the variables scope, sees the name already defined in the var scope, and barks about it.

So then, what are we left with?

I am admittedly not the sharpest crayon in the box, but to me this indicates that catch() is 100% not thread-safe!   If anyone sees a problem with my diagnosis, or has any thoughts, I am all ears.  If this is true, it throws a huge wrench into my work, and I am sure it does for others as well.
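
The best partial defense I have come up with is to copy the caught exception into a differently named var-scoped variable as the very first statement of the catch. I want to be clear that this is only a sketch of a mitigation: it narrows the race window, but another thread can still clobber the variables-scope “e” between the implicit write and the copy.

ourFunction();

public void function ourFunction()	{
	var safeE = "";
	try	{
		local.a = b;
	}
	catch( any e )	{
		// copy the shared variables-scope "e" into the var scope immediately;
		// this narrows, but does not close, the race window
		safeE = e;
		killE();
		WriteDump(safeE);
	}
}

public void function killE()	{
	StructDelete(variables,"e");
}

Run against the same killE() test above, this version should dump the original exception, since safeE survives the deletion of variables.e.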

EDIT: As has been pointed out by Henry, Adobe was notified of this bug in July of 2010 – almost 16 months ago!  It affects at least versions 8 and 9 and is not limited to CFSCRIPT; it apparently affects CFCATCH as well.  I have looked through the tech notes of ColdFusion 9.0.1 Cumulative Hot Fix 1 and ColdFusion 9.0.1 Cumulative Hot Fix 2 and find no evidence that it has ever been addressed.

EDIT (again): I have tested this in both Railo and OpenBD.  OpenBD, because it puts variables within methods in the var scope by default, does not fail the first test.  Railo does fail it, since it uses the variables scope like ColdFusion; however, Railo allows you to do catch( any local.e ), or to var-scope e ahead of time.  So in a nutshell, ColdFusion is the only engine that gives us no way to do thread-safe error handling.

How I cut the cord to subscription TV!

I recently reviewed our family budget trying to find areas to trim the fat, and one of the things that just ate me up was how much money we were paying for subscription TV.  Our monthly TV payment was $115 for DirecTV, and I can assure you I rarely, if ever, got $115 worth of use out of it!

A friend of mine suggested that I look into a Roku, which is essentially a small device that connects to your WiFi network and serves content to your TV via the internet.  You can install scores of “channels” on it, such as Netflix, Hulu Plus, Crackle, the NASA channel, Pandora, CNET, and many more.  Many channels are free, but some premium channels such as Netflix and Hulu Plus have their own subscription fees.  In addition to the official channels offered in the Roku Channel Store, there are many private channels that you can install; here is a list of private channels I came across, compiled in April 2011.  You can piece together a channel list that suits your needs.  I decided to purchase the Roku 2 XD, which offered everything I wanted for a one-time payment of $79, with no recurring subscription fee.  With just a power cord and an HDMI cable, I was in business.

One thing that happens when you switch to a Roku, or internet TV in general, is that you tend to quit using your TV as background noise.  With subscription TV, my family had a tendency to leave the TV on until something vaguely interesting came on, and they would sit and watch it.  Under this model, we instead seek out the programs we want to watch and watch them when we wish.  Using Hulu Plus, we have access to entire series of many of the shows we would typically watch.  There is often a several-day delay between the live airing and the time a show appears on Hulu, but considering how frequently we previously DVR’d shows and viewed them much later, very little has changed here.  With Netflix, we have access to the entire Netflix library on demand.

[Screenshot: Roku home screen showing channel selection with Netflix focused]

Even with these pieces in place, I knew that I didn’t want to be totally cut off from live TV.  I still plan on watching every Dallas Cowboys game as it happens, breaking news, certain live shows, and more.  So I decided to get an HD TV antenna to pick up all the OTA (over the air) channels as well.  I mounted this antenna, which I found at Best Buy, to my roof using the existing coax in my house.  Depending on how close you are to the broadcast towers, you might be able to get by with less of an antenna.  You can find your exact needs by entering your address at www.antennaweb.org, which will tell you how far you are from various stations and the exact compass heading at which to point your antenna.  If you plan on splitting the line to multiple TVs, you may want to look into a line amplifier (around $20) to offset the signal loss.  Since I am currently only serving one TV, I haven’t installed an amplifier at the splitter, but when I bring more TVs online in my house I may opt to do so.

As a side note, I found it kind of interesting determining which ends of the coax on the outside of the house went to specific rooms in the house.  That is probably a post worthy of its own space!

Once the antenna was installed, I was shocked by how many OTA channels there are!  For example, on standard cable/satellite, and on analog antenna before that, my local channel 8 WFAA (ABC affiliate) consisted of a single channel.  That channel now exists as 8-1, alongside 8-2, which is constant weather from Channel 8, and 8-3, which is… well, I don’t even know yet, but it is some kind of programming also provided by WFAA.  Many of the channels have sub-channels like this.  After letting the TV scan for channels, we went through all that it found, and my kids started watching 62-2, a children’s channel called Qubo.  Until now I always thought Qubo was a subscription channel, and I had no idea there were OTA channels that high up the dial.

I have also found that the picture from the HD antenna is stunning! From what I have read on the topic, due to the compression techniques the cable/satellite companies use for broadcasting, you will never see from them the picture quality that you can get from a digital antenna.

So, what is missing?  One thing I plan on adding is a DVR solution for broadcast TV.  I haven’t figured out exactly what I am going to do here, but there seem to be numerous options.  They do sell standalone DVR units like this, but I am considering setting up a media server instead.  That way I can play pretty much anything I want from the media server through the Roku, using something like Plex or Firefly.

Another point worth mentioning is that not all network shows are available on Hulu Plus, although almost all that we have been interested in are there.  What many people apparently do is subscribe to torrent sites that download specific shows for you once they become available (usually almost immediately after airing).  They are automatically placed onto your media server, and then you can watch them via your Roku.  I can’t speak to the legality of this approach, but it is a method I have seen nonetheless.  I believe that a DVR like the one mentioned above would eliminate much of the need for it.

Bottom line… with both Netflix and Hulu Plus, my total monthly expense is now under $15.00.  Amazingly, I am saving $1,200 per year over what I was spending previously!  If there is a trade-off in usability, and many would argue that there isn’t, it is FAR outweighed by the savings over time.

Video blogging on the cheap – not as easy as it should be!

I just recorded two screencast videos last night that I wanted to use as video blog entries. Seems easy, right? Just find a video host!

Unfortunately, “easy” is far from how I would describe my experience, and I am somewhat exasperated by the process at the moment. So here is the detail: I have two videos, one 9:17 long and the other 15:02. Both are recorded as OGV files, part of the free, open, cross-platform Ogg media container format.  All I need is a service to host and stream them.  So far so good, right?

I decided to look for a video hosting solution other than YouTube, since I have posted screencasts there before and the video degradation was horrendous.  After some googling and reading reviews, I started down a spiraling path of services leading to nowhere, beginning with…

  • Vimeo (verdict: fail) – Vimeo seemed like a great place to start.  Any time I have seen their videos, I have never noticed degradation.  They offer HD, and the service is free – kind of.  In actuality, there were three issues for me here.
    • bad: They do not support the OGV file format, so I had to convert the OGV to an AVI before uploading.  Of course, they don’t actually tell you this until you have sat through an entire upload first! Degradation occurred during that conversion, so even after uploading, the quality wasn’t as good as I would like.
    • bad: Free accounts are only allowed to upload a single HD video per week.  Already in my first try I had two, so that is a show stopper.
    • good: The HD version that was uploaded was better than many of the alternatives.
    • bad: You can’t embed the HD version.  If users wish to see it, they have to click through the player and watch it on the Vimeo site.
  • Viddler (verdict: fail) – Viddler seemed like a good alternative to Vimeo.  However, ultimately it doesn’t seem to be the direct fit either.
    • bad: Just as with Vimeo, they do not support OGV.
    • good: As opposed to Vimeo, at least they tell you about the lack of OGV support as soon as you attempt an upload!
    • bad: Since I had already converted one of my videos to AVI, I went ahead and tried it.  Even in full-screen mode, the degradation was bad enough that I couldn’t see what I was typing in the video, which is kind of the point!
  • YouTube (verdict: fail) – After nixing Viddler,  I thought “why not at least try YouTube again?”, and I was soon reminded of exactly why not.
    • good: They support OGV!
    • bad: Even my 9:17 video was deemed “too long” and was promptly removed.
    • bad: I couldn’t even get far enough to report on degradation!

So just as I began typing this blog entry to air my dissatisfaction with things in general, I came across this post praising the combination of Jing to record and Screencast.com to host the video.  The video clarity of his example was really impressive.  “Ha!” I thought, “finally!”  So I now have one more to add to my list:

  • Screencast.com (verdict: fail)
    • good: They allow you to upload any file type whatsoever! (I think anyway)
    • bad: They only embed a few different file types into players.  OGV is again not supported.

So here I sit, still without a good solution to what initially seemed like it should be a no-brainer of a problem to solve.  The amount of time that I have wasted to still be sitting at square one is terribly aggravating.  Between upload times and service-specific encoding times, I am more hours deep into this than I care to think about.

HTML5 to the rescue?

One thing that came out of this search is that I learned that HTML5 natively supports OGG/OGV using the <video/> tag (more here), and based on an example on this page, it looks very cool!  The only fundamental thing holding me back at the moment is that there doesn’t appear to be any option to allow your user to ‘full screen’ your video out of the player.  So close, yet… still no solution!
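
For reference, the markup itself is trivial. A minimal sketch, where “screencast.ogv” is just a placeholder for your own file:

<video controls width="640">
	<source src="screencast.ogv" type="video/ogg" />
	Your browser does not support the HTML5 video element.
</video>

If the full-screen limitation gets solved, this may well be the way I end up going.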

If anyone has any good recommendations, feel free to leave them in the comments.

How to: write the last N Linux terminal commands to a file

Sometimes blog entries are for you.  Sometimes they are for me.  This one is the latter.

The other day I asked the following question on Twitter:

Anyone know a way to write out the last N commands run in the #linux terminal to a file?

 

I got a plethora of responses within minutes, but by far the most complete and tricked-out response came from Joseph Lamoree (@jlamoree), who gave the following solution:

 

history | tail -n 10 | sed -E 's/^ +[0-9]+ +//' | grep -vE '^history$' > cmds

 

In that command, the "10" represents the last 10 commands, and "cmds" is the filename that the output will be written to: history lists the shell history, tail -n 10 keeps the last ten entries, sed strips the leading entry numbers that history prepends, grep drops the history command itself from the list, and the redirect writes the result to the file. Since there isn’t the remotest chance in hell that I would ever remember this, and Twitter is about the worst place for me to go back and find technical information later, I am putting it here on my blog for future reference. Thanks, Joseph!

Open Letter: Stepping down as DFWCFUG Manager

At Tuesday night’s meeting (3/8/2011),  I announced that after 55 meetings at the helm, I am stepping down as manager of the Dallas Ft. Worth ColdFusion User Group. 

Am I tired of doing it?  Am I leaving the language?  NO, and NO!

As an Adobe UGM, one of my responsibilities is to endorse and evangelize the product that is Adobe ColdFusion (ACF).  For numerous reasons over the past year or so, I have found myself at growing odds with this task.  As competing open source engines such as Railo and OpenBD gain in functionality, stability, and performance, and are made freely available to the CFML community, it is impossible to ignore them as true contenders in this space.  Where they were once viewed as free alternatives, they have moved to the position of driving change and driving features that I would like to see in ACF. I wholly feel that these engines are the future of our community, and should be given equal attention rather than be viewed as just an alternative.  Based on that, it would be disingenuous for me to continue in my role as an Adobe UGM.

As of its inaugural meeting on April 5, 2011 at the Paladin Consulting office in Dallas, I am going to serve as coordinator of the DFW CFML User Group, a non-product-specific user group composed of enthusiasts of the CFML language, regardless of the engine that runs it.  Without the pressure of promoting one company’s product over another, we can focus on what is really important to us, which is the power of the CFML language and the diverse ways that it can be used across various platforms.

It is important to note that the new group will not be strictly an “open source” group, nor is this a swipe of any kind at Adobe itself.  The group simply will not endorse a single product as the only viable solution for writing enterprise-level applications in CFML.  Our content will doubtlessly include Adobe ColdFusion, but will not be exclusive to it.

So where does this leave the DFWCFUG?  Adrian Moreno has served as co-manager of the group for several years now.  Adobe mandated this hierarchical approach to how their groups are organized so that, in the event of the departure of a manager, the group can carry on without interruption with the co-manager taking over.  I have spoken with Adrian at length on this topic; he does not share my vision for the DFW CFML User Group, and feels that it is important to have a product-focused user group under Adobe.  As a result, he has opted to take the role of group manager effective immediately and will be leading the DFWCFUG.

I want to make it abundantly clear that this will not be an “us vs. them” scenario between the two groups.  We are in this together as one community with varying interests and it is in all of our interests to positively promote both groups.

Fortunately, I think that this leaves the DFW CFML developers with some excellent options!

I plan on sharing much more about the new DFW CFML User Group in the near future.  Please follow us on Twitter at @dfwcfml and look for upcoming announcements in the next few days.

Lastly, thanks for letting me serve as leader of the DFWCFUG all these years.  It has been an honor and a privilege to do so.

~Dave Shuck
@dshuck
daveshuck.com