My pointless wish for ColdFusion 9

Before continuing, I am aware that the likelihood of this ever happening is remote at best, but that won’t keep me from dreaming. If I could choose 1 thing – just 1! – for ColdFusion 9, it would have nothing at all to do with new features. In fact, it would actually mean fewer features, in a way.

What is this wish?

… a totally stripped down free version of ColdFusion.

Why would they do this? It would make ColdFusion far more accessible to the masses and would encourage a much larger user base. If Adobe were to offer a stripped-down version for free, the natural progression would be for people to build more mature applications that need the advanced features available only in the non-free versions, Standard and Enterprise. Small shops might be able to get away with the stripped-down version, and as more applications are developed, more developers are created. More developers mean bigger and better applications, all pointing to the eventual goal of more Enterprise licenses sold. In addition to the potential upgrades to paid versions, this would open up a new stream of revenue for the paid support that Adobe already offers.

So what changes should be made in the free version? Let’s start with a Standard license as the comparison point. Obviously none of the Enterprise features should be available. Here are the other changes I would recommend in my imaginary world in which a free version of ColdFusion would exist.

  • Limit the number of datasources that can be added. Heck, even just allow 1. I think that this limitation alone would keep Adobe from losing a majority of its pay customers to the free license. It would still make it a useful server and could definitely encourage an up-sell.
  • No <cfdocument /> functionality. There is no need to offer this powerful tool for free, and again this would be a good up-sell point for people who need that functionality (see the quick sketch after this list).
  • No <cfsearch /> functionality. It is my understanding that some fraction of the consumer cost of ColdFusion goes to pay for Verity licensing contained within ColdFusion. Strip that out and make it a pay feature.
  • No <cfajax /> functionality. If they want to use Ajax, they can roll their own.
  • No <cfchart /> functionality
  • No Flex Remoting support
  • No LiveCycle integration
  • No Event Gateways
  • No scheduled tasks. If they want to schedule something, they can always make a cron job.
  • No WebService support (I am on the fence with this one, but let’s throw that in for good measure)
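
For anyone who hasn't used it, here is a rough idea of why <cfdocument /> is worth paying for: a minimal, hypothetical sketch (the HTML content is just placeholder text) that renders a chunk of markup and streams it back to the browser as a PDF.

<!--- Minimal sketch: render a scrap of HTML as a PDF and stream it to the browser --->
<cfdocument format="pdf">
    <h1>Quarterly Report</h1>
    <p>Any HTML/CFML output placed inside the tag is rendered into the PDF.</p>
</cfdocument>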

Am I crazy, or couldn’t taking this approach be a smart move for Adobe? I would be interested in others' thoughts about my imaginary world where a free version of ColdFusion is a reality.

Publishing blog entries with ScribeFire using XMLRPC API

In the most recent release of InstantSpot, we added XMLRPC support so that blog administration can happen from anywhere, using any client that supports XMLRPC. We are thinking it might be fun to create an AIR app for this purpose down the line (or better yet… someone else! hint…hint…), but until that time, there are a variety of clients that can be used, since we followed the MetaWeblog API standards.
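
For the curious, here is roughly what one of those calls looks like on the wire. This is a hand-rolled, hypothetical sketch of a metaWeblog.newPost request in CFML – the blog id, credentials, and post content are placeholders, and the only struct members shown are the standard title and description:

<!--- Hypothetical metaWeblog.newPost call - blog id, credentials and content are placeholders --->
<cfsavecontent variable="requestBody"><?xml version="1.0"?>
<methodCall>
  <methodName>metaWeblog.newPost</methodName>
  <params>
    <param><value><string>1</string></value></param>
    <param><value><string>you@example.com</string></value></param>
    <param><value><string>yourPassword</string></value></param>
    <param><value><struct>
      <member><name>title</name><value><string>Hello from XML-RPC</string></value></member>
      <member><name>description</name><value><string>The post body goes here.</string></value></member>
    </struct></value></param>
    <param><value><boolean>1</boolean></value></param>
  </params>
</methodCall></cfsavecontent>

<!--- Post it to the InstantSpot MetaWeblog endpoint and dump the raw response --->
<cfhttp url="http://www.instantspot.com/gospot/remote.metaweblogAPI" method="post" result="apiResponse">
    <cfhttpparam type="header" name="Content-Type" value="text/xml" />
    <cfhttpparam type="body" value="#requestBody#" />
</cfhttp>
<cfoutput>#apiResponse.fileContent#</cfoutput>

The five parameters are the standard MetaWeblog signature: blog id, username (your email address on InstantSpot), password, the post struct, and a publish flag.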

I am actually trying this out for the first time on our live instance with this blog entry, using a *sweet* Firefox plugin called ScribeFire. Considering that we haven’t really publicized this ability, I thought it might make sense to do a walk-through of setting it up and using it.

First, let’s walk through the ScribeFire Account Wizard. Once you have installed the ScribeFire plugin (available here), click on the little text pad icon in the bottom corner of your browser window and start the Account Wizard. You should see a window that looks like the one below. Choose “Manually Configure” and continue.

You will then be presented with a number of options of various inferior blogging services. Choose “Custom Blog” and continue.

On the screen you see below, choose “MetaWeblog API” and enter this URL into the Server API URL input box: http://www.instantspot.com/gospot/remote.metaweblogAPI
Leave “Advanced Settings” unchecked and continue.

On the following screen you will be prompted for your username and password. Since the new release of InstantSpot, your email address is now used as your username.

That’s it! If you did this correctly you should see your blog listed in the following screen like this:

Now, to the right of the ScribeFire window, you should see a list of your categories (labeled in the interface as “tags”), your blog posts, and, in the future, saved “Notes”, which are drafts stored locally by ScribeFire, like this:


From here, the interface is pretty simple. One noteworthy thing is that we even support uploading and inserting images through ScribeFire. When you click on the image icon in the editor, you will see a window that looks like this:

Choose “Image Upload”, then after browsing to your file, select “Upload Via API”.

When it completes you will see the following window. Choose “Insert Image” and you will see your image inserted into your text.

Now you can start posting away to your heart’s content!

Powered by ScribeFire.

`c->xlib.lock’ failed error on Java applications

I am currently using the Alpha 3 release of Ubuntu 8.04 Hardy Heron. Considering that it is an alpha release, I tend not to get worked up over little errors that might occur. However, I have found one that I just couldn’t get around. I use Aqua Data Studio as my database client, and since loading Hardy Heron I have been unable to run it.

When I would start it from a terminal, I would get a dump that looked like this:

#0 /usr/lib/libxcb-xlib.so.0 [0x90d00767]
#1 /usr/lib/libxcb-xlib.so.0(xcb_xlib_unlock+0x31) [0x90d008b1]
#2 /usr/lib/libX11.so.6(_XReply+0xfd) [0x9039429d]
#3 /usr/lib/jvm/java-6-sun-1.6.0.04/jre/lib/i386/xawt/libmawt.so [0x9063e8ce]
#4 /usr/lib/jvm/java-6-sun-1.6.0.04/jre/lib/i386/xawt/libmawt.so [0x9061b067]
#5 /usr/lib/jvm/java-6-sun-1.6.0.04/jre/lib/i386/xawt/libmawt.so [0x9061b318]
#6 /usr/lib/jvm/java-6-sun-1.6.0.04/jre/lib/i386/xawt/libmawt.so(Java_sun_awt_X11GraphicsEnvironment_initDisplay+0x2f) [0x9061b61f]
#7 [0xb4cff3aa]
#8 [0xb4cf7f0d]
#9 [0xb4cf7f0d]
#10 [0xb4cf5249]
#11 /usr/lib/jvm/java-6-sun-1.6.0.04/jre/lib/i386/server/libjvm.so [0x637338d]
#12 /usr/lib/jvm/java-6-sun-1.6.0.04/jre/lib/i386/server/libjvm.so [0x64fd168]
#13 /usr/lib/jvm/java-6-sun-1.6.0.04/jre/lib/i386/server/libjvm.so [0x6373220]
#14 /usr/lib/jvm/java-6-sun-1.6.0.04/jre/lib/i386/server/libjvm.so(JVM_DoPrivileged+0x363) [0x63c90d3]
#15 /usr/lib/jvm/java-6-sun-1.6.0.04/jre/lib/i386/libjava.so(Java_java_security_AccessController_doPrivileged__Ljava_security_PrivilegedAction_2+0x3d) [0xb7d1096d]
#16 [0xb4cff3aa]
#17 [0xb4cf7da7]
#18 [0xb4cf5249]
#19 /usr/lib/jvm/java-6-sun-1.6.0.04/jre/lib/i386/server/libjvm.so [0x637338d]
java: xcb_xlib.c:82: xcb_xlib_unlock: Assertion `c->xlib.lock' failed.
Aborted (core dumped)

Considering that I was using the sun-java6-jdk package from the Ubuntu repository, I decided to try the self-extracting bin that is available on http://java.sun.com. After swapping to that JVM, I still received the same dump and abort. After a bit of searching, I came across a script in one of the bug-reporting forums that patches the JVM library and prevents this error from occurring. I ran it, and now everything works as it should. If you are receiving this error, create a shell script with the following content and run it. Assuming that it runs successfully, you should then be able to open the Java application that was failing.

#!/bin/sh
# S. Correia
# 2007 11 21
# A simple script to patch the java library in order
# to solve the problem with "Assertion 'c->xlib.lock' failed."
# see bug http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6532373
LIB_TO_PATCH=libmawt.so
for f in `find /usr/lib/jvm -name "$LIB_TO_PATCH"`
do
echo "Patching library $f"
sudo sed -i 's/XINERAMA/FAKEEXTN/g' "$f"
done

Big thanks to “S. Correia” for getting me back on my feet!

Free as in money, not as in pain – InstantSpot moves to ColdFusion 8

As I mentioned in several previous blog entries, on January 12 we re-launched InstantSpot after a complete bottom-to-top rewrite. In addition to a completely new code base, we made the unlikely choice of using Railo as our CFML processing engine.

“Why?” you ask? (You aren’t the first)

The reasons were several, and I will detail a few of the key points that went into our decision.

  • It’s free – InstantSpot is basically a small project of big ideas by two developers doing this out of our own pockets and – how can I put this delicately? – we are poor and cheap! Unfortunately, despite how much we will it to be so, InstantSpot has not made us bazillions of dollars (at least as of the time of this posting). From the beginning we have made an effort not to make InstantSpot a financial burden on our families, as they are already paying dearly in the time we spend tied up in code till all hours of the night, and we like to cut financial corners everywhere we can. A free CFML processing engine? That is an obvious avenue to at least explore.
  • It’s fast – No lie… Railo is fast. From our very first development and tests, it just seemed to blow other engines away in the speed at which it processed code. This was backed up in test after test. Not only does it shine in processing speed, but it also has a tiny footprint on the server. In our environment, running as a Tomcat application, you almost wouldn’t even guess it was there.
  • It’s CFMX 7 compatible – To us this meant that we didn’t have to code anything differently simply because we chose Railo over ColdFusion. We had no issues whatsoever using the normal data model patterns we use in any other application, and we used BER releases of ColdSpring and Mach-II without the slightest hiccup. Eventually we found a couple of small places where we had to work around something (three that I can think of), but they were without question edge cases, and the workarounds were easy and didn’t feel as though we were compromising the application.
  • It’s the underdog – If you were to poll the ColdFusion community at large, you would find that many people don’t even know there are any other choices besides Adobe ColdFusion, and many of those may have only heard of New Atlanta’s BlueDragon. Railo hardly gets a mention in most circles. We thought it might be fun to be an advocate by example and help promote what we felt was a great alternative. Additionally, Aaron and I have a tendency to choose the road less traveled, and this certainly fit our m.o.

Sounds reasonable, right? Since we made that choice around June of ’07 and started moving forward with the rewrite, we had felt overwhelmingly positive about our decision.

All of that began to change at about 1:00am on January 13.

After making the DNS changes, as traffic began redirecting to the new server, we started seeing absolutely inexplicable errors. The closer we examined them, the more obvious it became that we had some *serious* threading issues in our application. We are extremely careful in this regard when it comes to our code, so this was very surprising. However, this *was* brand new code, and of course there could have been a hole somewhere, right?

As more traffic started coming in, the errors escalated. We started seeing errors at least every minute, each of which generated a painful new email to both Aaron and me. It became clear quite rapidly that the errors actually had nothing to do with our code. We started seeing errors from both Mach-II and ColdSpring that simply couldn’t happen. For instance, here is one we started seeing from ColdSpring:

Message: variable [beandefinition] doesnt exist
Tag Context: /www/instantspot/www/coldspring/beans/AbstractBeanFactory.cfc (211)

Really? That is pretty interesting since line 210 is:

<cfset var beanDefinition = getBeanDefinition(arguments.beanName) />

And how about this one from Mach-II?

Message: variable [nextevent] doesnt exist
Tag Context: /www/instantspot/www/MachII/framework/RequestHandler.cfc (115)

Oh yeah? Well… this is line 114:

<cfset nextEvent = appManager.getEventManager().createEvent(result.moduleName, result.eventName, eventArgs, result.eventName, result.moduleName) />

Clearly even our worst var-scoping misstep couldn’t have created those errors, and furthermore, these are well-tested frameworks used in hundreds if not thousands of applications. If these threading errors existed in them, Aaron and I would not be the ones discovering them in January 2008. We were also seeing some of our own objects attempting to call methods of other objects, an obvious sign of serious threading issues. In two instances, a person’s RSS feed actually contained someone else’s content.

We began to wonder whether Railo even recognized var-scoping at all. I pulled up an old blog entry of mine that included a simple example of a var-scoping error, set up a Railo scribble pad, and ran the test against Railo. It did pass, which tells me that Railo at least manages var-scoping on a cursory level. However, under the load of our application, it appeared that we were looking at something bigger than just the var-scoping of a few object methods.
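
For anyone unfamiliar with this class of bug, here is a boiled-down, hypothetical sketch (not the code from that old entry) of the kind of var-scoping error I was testing for. In a CFC instance that is shared across requests (which is exactly how framework and service objects are typically used), an un-var'd local variable leaks into the component's variables scope, where concurrent requests can stomp on each other:

<cfcomponent output="false">

    <cffunction name="buildListBroken" access="public" returntype="string" output="false">
        <cfset var result = "" />
        <!--- BUG: "i" is not var-scoped, so it lands in the shared variables scope
              and two simultaneous requests can clobber each other's counter --->
        <cfloop index="i" from="1" to="10">
            <cfset result = listAppend(result, i) />
        </cfloop>
        <cfreturn result />
    </cffunction>

    <cffunction name="buildListFixed" access="public" returntype="string" output="false">
        <cfset var result = "" />
        <!--- FIX: declaring the index with "var" keeps it local to this function call --->
        <cfset var i = 0 />
        <cfloop index="i" from="1" to="10">
            <cfset result = listAppend(result, i) />
        </cfloop>
        <cfreturn result />
    </cffunction>

</cfcomponent>

A single request returns the same result from either method, which is why a simple scribble-pad test passes; it is only under concurrent load that the broken version falls apart.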

At this point, there was no longer any question that in order to get out of this tailspin we needed to do something drastic. We quickly decided that the most logical step was to switch to Adobe ColdFusion 8. The cheap gene so deeply embedded in our DNA had to be thrown out the window, and we had to act by getting a ColdFusion 8 license and implementing it asap. One immediate concern was how much we had modified the Railo WEB-INF for the URL handling we had implemented, since we not only use mod_rewrite in Apache but also have another Java application in the mix. After installing ColdFusion 8 Standard and digging into /wwwroot/WEB-INF, we found that we could painlessly apply the same pieces to our ColdFusion application, and with some very small changes we had InstantSpot running in our development environments.

After doing some heavy but rapid testing throughout the application, we felt that we could make the switch. Even if an error or two was discovered later, the benefits would strongly outweigh the utter nonsense we were dealing with at that time. So around midnight last Wednesday night, we pushed up the ColdFusion implementation of our application, crossed our fingers, held our breath, flipped the switch, and……

….. silence.

After the application initialized, suddenly there was peace… no errors… no emails… just an application purring along as it was intended. In fact, we had to push up a test template with broken code to ensure that our error notification was still working. Since the move, we have not seen a single error occur in our application, across well over 100K page requests.

I want to be clear that this post is not meant to be an attack on Railo. I am sure that Gert and crew work extremely hard, and I tend to believe that Railo will mature into a nice alternative if they keep up the effort they have shown to date. However, I do hope that this post serves as a warning, as we found that there are huge implications to using it as it stands today.

Wow… rough move from Ubuntu to PCLinuxOS!

Seeing as it has been a few months since I tried out a new distro, I got a wild hair today and decided to give PCLinuxOS (Gnome version) a shot. Because of the way I keep my drives partitioned – specifically, keeping my /home directory on a separate partition – swapping distros is usually a pretty painless endeavor, and I can be back up and running within an hour or so, with all my old apps in place and all my preferences still intact.

As I booted to the PCLinuxOS Live CD, everything seemed to be business as usual. The only notable point was that PCLinuxOS has a nice default theme and icon set in the Gnome version of the distro.

So without too much hesitation, I went ahead and began the install process. After choosing the appropriate keyboard and timezone settings, I was presented with the choice of how I would like my partitions set up, which by default uses the entire physical disk. By selecting the “do it yourself” mode, I expected to be able to choose my smaller /dev/sda2 partition as my / mount point, format it for the OS, and leave my /dev/sda3 alone, mounting it as /home.

I entered what appeared to be a nice little partition configuration tool (DiskDrake, I think?), which appeared to be exactly what I needed. I then selected the /dev/sda2 partition as the place I wanted my / mount point, and chose /dev/sda3 as my /home mount point. When I chose the option to format my / mount point, I got an error message that said the partition could not be formatted. Considering that my plan was to wipe it out anyway, I went ahead and removed that partition and re-added it using that utility. As I tried to move forward, I got a message indicating that I needed to reboot, restart the installation process, then choose “Use existing partitions”. Simple enough, right?

I then rebooted to the live CD and entered the installation again. This time I was presented with a new option: “Choose the partitions you would like to format”, and it listed only my larger /dev/sda3 partition with a checkbox next to it, with no mention of my /dev/sda2. I found this a bit interesting, and after carefully removing the checkbox I moved forward. As I entered the next step, I was taken to a screen that said “Copying files…”…. wait… huh? To where?

Apparently it now considered my /dev/sda3 (which I intended to be /home) to be the only drive. I cancelled the process and opened a terminal. After browsing to that partition, I found new /usr and /boot directories there, which confirmed my suspicions.

Things then began moving downhill and picking up speed….

I opened the partitioning tool GParted and was surprised to see that not only did my 15GB /dev/sda2 not exist anymore, but that /dev/sda3 was now a 145GB partition of unallocated space. NOT GOOD, considering that about 110GB of it is *very* allocated with data that I didn’t intend to lose.

Even with all the steps I have taken since, I have been unable to mount /dev/sda2. I even popped in Damn Small Linux to attempt some quick surgery, and even it was unable to save me. I then tried an Ubuntu live CD, and it didn’t recognize anything on /dev/sda at all. At some point during this process I noticed that I was getting “bad magic number” and “corrupted superblock” type messages in relation to that device.

After putting the PCLinuxOS live CD back in, I was a bit relieved to see that it auto-mounted /dev/sda3 as /media/disk, and that I could at least access the files that were once safe and sound in my /home directory.

So…. here I sit waiting for GBs upon GBs of data to upload via FTP to various servers so that I can wipe the entire friggin thing out and start over. Tomorrow I get the fun of retrieving it all and piecing my laptop world back together….   <sigh/>

More to come….

Installing the JRE plugin in Firefox on Ubuntu

I have now been using Ubuntu for about 2 years, and oddly enough one thing that has always evaded me is how to properly set up the JRE plugin in Firefox. It *seems* like that ought to be an easy process, but it is one of those annoying little things that just hasn’t worked for me, although it has never been important enough for me to chase down.

Yesterday I had to do a Webex presentation that required the JRE plugin, so I decided it was time to hack my way through it. One thing that I was thinking might be a factor is that I use Swiftfox instead of Firefox. I decided to take that out of the equation just to make sure, so I went ahead and removed it. When running Firefox and hitting about:plugins in the address bar, I could clearly see that the Java plugin was not in the list. I looked in ~/.mozilla/plugins and saw a libjavaplugin.so in there, but it was obviously not doing its job.

So, after a lot of floundering, here are the basic steps I took that got me going…

  • First, I completely uninstalled Firefox:
    $ sudo apt-get --purge remove firefox
  • I then reinstalled it:
    $ sudo apt-get install firefox
  • Next, I had previously installed the sun-java6-bin package, so I wanted to wipe all evidence of that and reinstall it. I did the following:
    $ sudo apt-get --purge remove sun-java6-bin sun-java6-jre sun-java6-plugin
  • To reinstall it I did:
    $ sudo apt-get install sun-java6-bin sun-java6-jre sun-java6-plugin
  • After doing this I opened Firefox and put about:plugins and still didn’t see the Java stuff. At this point, I went into my ~/.mozilla/plugins directory. From earlier attempts I had some libjavaplugin.so and libjavaplugin-[something I don't remember].so. I decided to kill those off:
    $ sudo rm libjavaplugin*
  • At this point looking around I found a file /etc/alternatives/firefox-javaplugin.so that seemed like a decent candidate, so I did a symlink like this:
    $ ln -s /etc/alternatives/firefox-javaplugin.so ./libjavaplugin.so

At this point I restarted the browser, hit about:plugins and was thrilled to see an entirely new section for Java!

Now, take the steps above with a grain of salt. I certainly don’t want to imply that this is by any means the right way to get it working, but it is the series of steps that finally got it working for me. Hopefully someone else might get something out of it as well.

My first online presentation – Webex is pretty cool!

Mike Kollen asked if I would be willing to do a Webex presentation about Mach-II to a group he is teaching at Boeing. Specifically, he was interested in the presentation that I gave to the Dallas/Ft. Worth ColdFusion User Group earlier this month, which covered the steps it takes to add simple user authentication to a Mach-II application. I also covered the features that have been added with the 1.5/1.6 releases. Apparently I was following Brian Rinaldi, who had covered the greatness of the Illudium PU-36 CFC Generator the day before, so it sounds like Mike is heading them down a great path!

A couple of things intrigued me about this opportunity. First and foremost, I love the Mach-II framework and always enjoy preaching the gospel. Secondly, although I have done countless in-person presentations, somehow I have never gotten the opportunity to do a web presentation. I have to admit that the idea of not being able to see my audience’s faces and reactions seemed a little scary at first, but I feel that it went fairly well.

Mike’s group is using Webex for their training class, as the participants are apparently scattered throughout various cubicles and offices. This meant that I would be using Webex for desktop sharing, and Mike set it up so that the audio would be done via a toll-free call-in number, although it does support VoIP as well. Webex uses a Java browser plugin, which is nice considering that I am unable to do Connect presos due to the fact that Adobe won’t create a Linux presenter client for Connect! (Do you hear me, Adobe?!?!) One thing that seemed to work well was that Mike served as a moderator of sorts, letting me know as questions came up and then relaying them. Having both of us with live mics seemed to work well and kept me from missing questions as they arose.

So – to the group at Boeing, thanks for breaking me in. Hope you guys and gals enjoyed it!

25 pounds gone in 5 weeks.

My wife and I are headed off on a cruise the first week of March. Both of us decided that we had been a bit lazy and let some extra pounds creep up, and that the cruise was a good excuse to get back to where we need to be. I decided to set my goal at dropping from 217 to 185, for a total of 32 pounds, knowing that if I ever saw 190 I would be pretty happy. Today is 5 weeks into our efforts, and I have dropped 25 pounds, seeing 192 on the scale this morning for the first time in a LONG time. I now have 45 days to go and 7 more pounds to lose to meet my goal. That sure sounds doable at this point.

For me, the approach has been 100% diet-related, specifically following the South Beach Diet. The first phase, which lasts 2 weeks, is pretty hardcore, with absolutely no carbs. We saw some dramatic changes in that period, so we went ahead and tacked on a 3rd week of it, which in hindsight I feel was a good decision. Now that we have moved into Phase 2 of the diet, it is much easier to live with. I never realized how damn good fruit tasted until I cut out carbs entirely! At this point, my wife is making dinners that are on the diet yet still a healthy meal for our whole family, which I think is an important factor. When it is just one person trying to maintain a diet, it can be a pain in the butt, not only having to eat separate meals but also watching everyone around you eat all the stuff you shouldn’t be eating. When the whole family is eating the same healthy foods, it is an easier path to follow, and hopefully we are setting some decent patterns for our kids.

Unfortunately, with the release of InstantSpot 2, a full-time job, lots of contracting work, and family, exercise has not fit into my schedule during this period, unlike my wife, who has been running 2-3 miles every couple of days and hitting the gym regularly. This week, now that the dust has settled a bit on the InstantSpot rollout – well… aside from our plans to rip the guts out and swap CFML processors this weekend – I hope to find more time to get myself back in the gym as well. It has been a few years since I was fairly serious about working out, and I definitely miss that feeling.

Hopefully I will have another positive report soon.

Aaron West’s entry on SES URLs with Apache mod_rewrite

Somehow I totally missed this entry when Aaron posted it, until he mentioned it on the Mach-II email list today. He has written a really nice and very detailed blog entry on how to configure your application to use SES URLs by using mod_rewrite in Apache, and then goes on to show how the flow continues into his Mach-II application. We took some similar approaches with the URLs you see here on InstantSpot.

For those wondering how all these pieces fit together, I strongly recommend you check out his blog entry entitled Using Apache’s mod_rewrite: SES URL’s and More.

Nice new usability feature in Flex Builder for Linux Alpha 2

OK, I say “new” because I am almost certain that this wasn’t there before…

I just installed the Alpha 2 release after realizing that my Alpha 1 installation had expired. As I opened up one of my AS files in the ActionScript editor, I noticed that when you single-click on a string, it highlights all matching strings in the file. That is pretty cool, and it immediately helped me out when looking for a place where a value was being set in that file. Now, this may become annoying, but for the moment it seems like a pretty cool addition. I have a feeling a bunch of Windows/Mac people are probably rolling their eyes going “Whoopdee dooo, we have had that all along”. Well… now I do too! :)