Strange behavior with ColdFusion ExpandPath() when using Symbolic Links

I was playing around with the Quicksilver framework last night, and for some reason it was unable to find and instantiate my CFCs properly. After digging into the framework a bit and determining where it was breaking, I discovered something strange about the way ColdFusion interprets ExpandPath() when the calling template lives in a directory that is defined as a symbolic link. I am not sure whether the same behavior exists on Macs, but I would imagine it does. If someone could confirm that, I would be interested.

For starters, I usually have a ‘www’ directory in my user home directory. This way, when I carry my user profile from distro to distro, my development work comes along in my home directory. For ease of configuration, I typically have a symbolic link in my OS that points /www/ —> /home/dshuck/www/. Then when I create a new web project called ‘davescode’, I put it in /home/dshuck/www/davescode, but my Apache config points to /www/davescode. For the past several years this approach has worked well for me. That is, until last night when experimenting with Quicksilver.

When Quicksilver loads, it builds a list of the service CFCs in the application, such that if I had Foo.cfc in a directory ‘com’ in the root of my davescode site, it would resolve to /home/dshuck/www/davescode/com/Foo.cfc. When I initialized the application, I got an error that it couldn’t find the CFC home/dshuckcom/Foo.cfc. Essentially, the framework was taking the full path of each CFC and replacing the path to the root of the site with “”. In a perfect world, the value after the string replace would have been com/Foo.cfc. Unfortunately, that was not so. Here’s why!
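Before getting to the why, here is a minimal shell sketch of the mangling itself, using the paths from the error above. The variable names and the sed call are my reconstruction, not Quicksilver’s actual code:

```shell
# Hypothetical reconstruction of the framework's string replace.
site_root="/www/davescode/"                         # what ExpandPath("/") returned
cfc_path="/home/dshuck/www/davescode/com/Foo.cfc"   # the CFC's true on-disk path
# Replace the site root with "" to get a relative path... in theory.
relative=$(printf '%s' "$cfc_path" | sed "s|$site_root||")
echo "$relative"   # /home/dshuckcom/Foo.cfc
```

Because the site root that ExpandPath(“/”) returned matches the middle of the resolved path rather than its beginning, the replace splices the path instead of stripping a prefix.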

I put a test file called path.cfm in the root of my davescode site that consisted of the following:

<cfoutput>#ExpandPath("./")#</cfoutput>
<br/>
<cfoutput>#ExpandPath("/")#</cfoutput>

The result was very surprising!

/home/dshuck/www/davescode/
/www/davescode/

For some reason, when you do ExpandPath(“/”) it reports the symbolic link path, but when you do ExpandPath(“./”), it reports the true file path. My best guess is that “/” is expanded against the webroot exactly as Apache is configured (the symlinked /www/davescode), while “./” is resolved relative to the executing template’s canonical location on disk, but I can’t say for certain. If anyone has a definitive explanation, I would be all ears!
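You can reproduce the same logical-versus-physical split outside of ColdFusion with a throwaway symlink; the /tmp paths here are purely illustrative:

```shell
# Build a directory and a symlink pointing at it, mirroring the /www setup.
mkdir -p /tmp/pathdemo/home_www/davescode
ln -sfn /tmp/pathdemo/home_www /tmp/pathdemo/www

cd /tmp/pathdemo/www/davescode
pwd      # logical path: keeps the symlink, like ExpandPath("/")
pwd -P   # physical path: resolves the symlink, like ExpandPath("./")
```

Whether a resolver honors the symlink or follows it to the real directory gives two different answers for the same location, which looks like exactly the split between the two ExpandPath() calls.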

How to install KDE 4.1 on Ubuntu Hardy 8.04 and my impressions of it

With yesterday’s announcement of the 4.1.0 release of KDE, my willpower did not allow me to go another day without giving KDE 4 another shot. For a bit of history, I have been using Gnome for several years. In the past three months or so, I used KDE 3.5.9 long enough to begin enjoying it and to realize that it is a nice desktop environment as well. At this point I really have no favorite between the two and like different things about both of them.

When the first releases of KDE 4 started showing up several months ago, I gave it a shot but was extremely underwhelmed. While it came with warnings from the community that the 4.0 release was nothing more than the introduction of a new platform for developers to expand on, some of the basic pieces of it just felt wrong. I can clearly say that after spending most of an afternoon using it, running my development environment, and doing basic daily functions, 4.1 is light years beyond the first peek I had of it. I am *really* enjoying it so far, and I have a feeling it will be my desktop of choice for the immediate future.

One thing that doesn’t seem to be common knowledge is that you can just install it and try it out without affecting your existing desktop environments, be it KDE 3.5.x or Gnome. For example, my current installation started out as Kubuntu 8.04 Hardy Heron. Shortly after installing, I added Gnome by running:

sudo apt-get install ubuntu-desktop

Then in the GDM or KDM login window, I had options for either logging in using Gnome or the default KDE. Thankfully the two coexist without bothering each other, and I can switch back and forth at will. I took the same approach today when installing KDE 4.1, planning to keep both Gnome and KDE 3.5 as fallback positions or simply to use when I am in the mood.

If this type of setup sounds like something you want to try, first add this repo to your /etc/apt/sources.list file:

deb http://ppa.launchpad.net/kubuntu-members-kde4/ubuntu hardy main

After adding that, update your package lists by running:

sudo apt-get update

Then to install KDE 4.1 you will run the following:

sudo apt-get install kubuntu-kde4-desktop kdeplasma-addons amarok-kde4 kontact-kde4 kate-kde4 kmail-kde4

Notice that I am also installing KDE 4 versions of several applications, such as Kate, Amarok, KMail, and Kontact. The kdeplasma-addons package also brings you some extra goodies beyond the base install. During the installation you will be prompted to choose your login manager; KDE 4 brings yet another option beyond GDM and KDM. I chose it, and it has a really nice, clean look that I recommend trying. Once the installation completes, restart X or reboot and have fun!

Solving java.lang.SecurityException: Seed must be between 20 and 64 bytes. Only 8 bytes supplied.

Recently I began working with JMS and ColdFusion: we are building a system that subscribes to an enterprise JMS server, picks up the messages relevant to its needs, and acts on them. I had my proof of concept working with the open source Apache ActiveMQ server and was very pleased with the results. However, in our production environment, the powers that be decided to use the very non-free SonicMQ server.

As I tried to convert the event gateway over to the SonicMQ server, it failed on initialization with the following exception:

javax.naming.NamingException [Root exception is java.lang.SecurityException: Seed must be between 20 and 64 bytes. Only 8 bytes supplied.]
	at com.sonicsw.jndi.mfcontext.MFConnectionManager.connect(Unknown Source)
	at com.sonicsw.jndi.mfcontext.MFConnectionManager.<init>(Unknown Source)
	at com.sonicsw.jndi.mfcontext.MFConnectionManager.getManager(Unknown Source)
	at com.sonicsw.jndi.mfcontext.MFContext.<init>(Unknown Source)
	at com.sonicsw.jndi.mfcontext.MFContextFactory.getInitialContext(Unknown Source)
	at javax.naming.spi.NamingManager.getInitialContext(NamingManager.java:667)
	at javax.naming.InitialContext.getDefaultInitCtx(InitialContext.java:288)
	at javax.naming.InitialContext.init(InitialContext.java:223)
	at javax.naming.InitialContext.<init>(InitialContext.java:197)
	at examples.JMS.JMSConsumer.start(Unknown Source)
	at examples.JMS.JMSGateway.startGateway(Unknown Source)
	at coldfusion.eventgateway.GenericGateway.start(GenericGateway.java:118)
	at coldfusion.eventgateway.EventServiceImpl$GatewayStarter.run(EventServiceImpl.java:1428)

In my research on this problem, I found several people reporting similar errors, each on CF8 and each talking to third-party tools. Eventually I found the solution through an email discussion between one of the developers in my company and an Adobe developer. Apparently CF8 added FIPS security, which disables the Sun JCE (encryption libraries). To solve this error, you need to add the following line to the java.args in your jvm.config file in JRun:

-Dcoldfusion.disablejsafe=true
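For context, java.args in jvm.config is a single line of space-separated JVM flags, so the new flag simply gets appended to whatever is already there. The path and the other flags below are illustrative only; your file will differ:

```
# {jrun_root}/bin/jvm.config (location varies by install)
java.args=-server -Xmx512m -Dcoldfusion.disablejsafe=true
```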

Now restart your server and try again!

Firebug with Firefox 3 in Ubuntu Hardy Heron

Several months ago, when I first tried out Firefox 3, I found that I couldn’t get Firebug to work. At that time I was still on 7.10 (Gutsy), so I just rolled back to Firefox 2 and carried on about my business. Once I upgraded to 8.04 (Hardy), where the default Firefox is FF3, I tried again. No matter which “fix” I came across, I was never able to open Firebug in a panel, only in a separate window.

That all changed this morning! I was looking through packages and discovered that there is a Firebug package in the Ubuntu repos. I promptly uninstalled Firebug from the extension settings in Firefox and closed my browser. I went to a terminal and typed:

$ sudo apt-get install firebug

… I then opened up Firefox 3 and BAM! It works exactly like it should. I have no idea what the difference is in this version of Firebug, but for whatever reason, my problems are solved.

CF8 error after upgrading to Ubuntu 8.04 Hardy Heron – libstdc++.so.5

This afternoon I did an upgrade from Gutsy to Hardy on my main development environment. I experienced *almost* no disruption to my system, with one exception (so far!). When I started a ColdFusion 8 application that instantiates a webservice in onApplicationStart, I received the following exception:

jikes: error while loading shared libraries: libstdc++.so.5: cannot open shared object file: No such file or directory

Jikes! Well fortunately the fix is quite simple. Go to a terminal and install libstdc++5 like so:

$ sudo apt-get install libstdc++5

Restart your application and carry on! I am not sure what changed between the distros, but apparently the libraries that ColdFusion uses for invoking webservices depend on this package.

Adding spell checking to Evolution mail client

I am not sure why I never pursued this until today, but for some reason I had never spent the time to figure out why I didn’t have spell checking in my Evolution mail client. I knew that Evolution used the aspell and gnome-spell packages, which I already had installed, so why wasn’t it working?

When I went into the composer settings in the Evolution preferences, I saw a big empty box that was the list of dictionaries Evolution was using. You would think there would be some method of adding them from there, but unfortunately it isn’t quite that obvious. To add the English dictionary, I had to install the package aspell-en. Once I did, I reopened Evolution and bam!

There it is. For the copy/paste inclined, try the following:

$ sudo apt-get install aspell gnome-spell aspell-en

How to set JAVA_HOME environment variable in Ubuntu

I am actually creating this blog entry as a bookmark for myself, but since I know that I never remember how to do it, others might benefit as well!

One way to set your JAVA_HOME variable and add it to your PATH is by doing the following. Using sudo, open up /etc/bash.bashrc and add the following to the end of the file. NOTE: set the Java path to the actual path in your environment if it does not match /usr/lib/jvm/java.

JAVA_HOME=/usr/lib/jvm/java
export JAVA_HOME
PATH=$PATH:$JAVA_HOME/bin
export PATH
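If you want the change in your current session without rebooting, you can export the same variables by hand (same /usr/lib/jvm/java assumption as above):

```shell
# Apply the same settings to the current shell session only.
export JAVA_HOME=/usr/lib/jvm/java
export PATH="$PATH:$JAVA_HOME/bin"
echo "$JAVA_HOME"   # /usr/lib/jvm/java
```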

After you reboot (or open a new shell), try running the following:

$ echo $JAVA_HOME
/usr/lib/jvm/java
$ echo $PATH
[probably lots of paths]:/usr/lib/jvm/java/bin

Playing with my new webcam under Linux – watch me work!

I made an impulse buy this past week ordering a Tripp-Lite clip-on webcam for my laptop. My wife and I are leaving next weekend to go on a week-long cruise without our kids, and I thought it might be fun to post some video blog entries for them while we are gone so they (and ultimately you as well) can see what we are up to.

I chose the Tripp-Lite camera due to pretty consistently positive cost/value reviews, although I was a bit worried that I couldn’t find a single instance of anyone on the internet actually using one under Linux. Why should that stop me, huh? When it arrived I plugged it in and… nada… nothing! Although my laptop could see the device, I couldn’t seem to get the drivers to work. After doing some digging around I found that it uses the Z-Star Microelectronics Corp. ZC0301 WebCam chipset, which seems to be very common in the cheapo-Chinese-made webcam space. There is an unbelievably awesome project out there where a guy named Michel Xhaard has written drivers for tons of webcam chipsets, and although mine was included I just couldn’t seem to get it to work, no matter what I did.

Eventually it hit me that since I am using an alpha version of Ubuntu (Hardy Heron), perhaps I should roll back to a release version and see what happens. Given how easy it is to swap distros in Linux, I decided to roll back to a 7.04 (Feisty) remaster disc that was lying around. Upon plugging in my camera on the new distro, it just worked natively! YAY.

So, now I am playing with the apps a bit. I found Camorama which does video captures and can FTP them to a server at regular intervals. I thought it might be fun to create a custom pod on my blog that shows a current picture of me working – or zoning out… picking my nose… whatever. So, the pic of me you see on the left is the most recent of those. The timestamp text is a little small when I resize the pic, but if you view it in full size (or pull out your magnifying glass), you can see the date.

As for recording video in Linux, I created a launcher that allows me to record AVI files with audio using mencoder. For those interested in doing that, you will first need to install mencoder:

$ sudo apt-get install mencoder

I then created a shortcut icon that starts the recording:

mencoder tv:// -tv driver=v4l:width=320:height=240:device=/dev/video0:forceaudio:adevice=/dev/dsp -ovc lavc -oac mp3lame -lameopts cbr:br=64:mode=3 -o /home/dshuck/Desktop/webcam.avi

Then I have another shortcut icon to stop the video:

killall mencoder

Look for pointless videos in the near future…

`c->xlib.lock’ failed error on Java applications

I am currently using the Alpha 3 release of Ubuntu 8.04 Hardy Heron. Considering that it is an alpha release, I tend not to get worked up over little errors that might occur. However, I have found one that I just couldn’t get around. I use Aqua Data Studio as my database client, and since loading Hardy Heron I have been unable to run it.

When I would start it from a terminal, I would get a dump that looked like this:

#0 /usr/lib/libxcb-xlib.so.0 [0x90d00767]
#1 /usr/lib/libxcb-xlib.so.0(xcb_xlib_unlock+0x31) [0x90d008b1]
#2 /usr/lib/libX11.so.6(_XReply+0xfd) [0x9039429d]
#3 /usr/lib/jvm/java-6-sun-1.6.0.04/jre/lib/i386/xawt/libmawt.so [0x9063e8ce]
#4 /usr/lib/jvm/java-6-sun-1.6.0.04/jre/lib/i386/xawt/libmawt.so [0x9061b067]
#5 /usr/lib/jvm/java-6-sun-1.6.0.04/jre/lib/i386/xawt/libmawt.so [0x9061b318]
#6 /usr/lib/jvm/java-6-sun-1.6.0.04/jre/lib/i386/xawt/libmawt.so(Java_sun_awt_X11GraphicsEnvironment_initDisplay+0x2f) [0x9061b61f]
#7 [0xb4cff3aa]
#8 [0xb4cf7f0d]
#9 [0xb4cf7f0d]
#10 [0xb4cf5249]
#11 /usr/lib/jvm/java-6-sun-1.6.0.04/jre/lib/i386/server/libjvm.so [0x637338d]
#12 /usr/lib/jvm/java-6-sun-1.6.0.04/jre/lib/i386/server/libjvm.so [0x64fd168]
#13 /usr/lib/jvm/java-6-sun-1.6.0.04/jre/lib/i386/server/libjvm.so [0x6373220]
#14 /usr/lib/jvm/java-6-sun-1.6.0.04/jre/lib/i386/server/libjvm.so(JVM_DoPrivileged+0x363) [0x63c90d3]
#15 /usr/lib/jvm/java-6-sun-1.6.0.04/jre/lib/i386/libjava.so(Java_java_security_AccessController_doPrivileged__Ljava_security_PrivilegedAction_2+0x3d) [0xb7d1096d]
#16 [0xb4cff3aa]
#17 [0xb4cf7da7]
#18 [0xb4cf5249]
#19 /usr/lib/jvm/java-6-sun-1.6.0.04/jre/lib/i386/server/libjvm.so [0x637338d]
java: xcb_xlib.c:82: xcb_xlib_unlock: Assertion `c->xlib.lock' failed.
Aborted (core dumped)

Since I had used the Ubuntu sun-java6-jdk package from the Ubuntu repository, I decided to try the self-extracting bin that is available on http://java.sun.com. After swapping to that JVM, I still received the same dump and abort. After a bit of searching, I came across a script in one of the bug-reporting forums that effectively patches your JVM and prevents this error from occurring. I ran the patch, and now everything works as it should. If you are receiving this error, create a shell script with the following content and run it. Assuming it runs successfully, you should then be able to open the Java application that was failing.

#!/bin/sh
# S. Correia
# 2007 11 21
# A simple script to patch the java library in order
# to solve the problem with "Assertion 'c->xlib.lock' failed."
# see bug http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6532373

LIB_TO_PATCH=libmawt.so
for f in `find /usr/lib/jvm -name "$LIB_TO_PATCH"`
do
    echo "Patching library $f"
    sudo sed -i 's/XINERAMA/FAKEEXTN/g' "$f"
done

Big thanks to “S. Correia” for getting me back on my feet!

Wow… rough move from Ubuntu to PCLinuxOS!

Seeing as it has been a few months since I tried out a new distro, I got a wild hair today and decided to give PCLinuxOS (Gnome version) a shot. Because of the way I keep my drives partitioned, specifically keeping my /home directory on a separate partition, swapping distros is usually a pretty painless endeavor, and I can be back up and running within an hour or so with all my old apps in place and all my preferences still intact.

As I booted the PCLinuxOS live CD, everything seemed to be business as usual. The only notable point was that PCLinuxOS has a nice default theme and icon set in the Gnome version of the distro.

So without too much hesitation, I went ahead and began the install process. After choosing the appropriate keyboard and timezone settings, I was presented with the choice of how I would like my partitions set up, which by default uses the entire physical disk. By selecting the “do it yourself” mode, I expected to be able to choose my smaller /dev/sda2 partition as my / mount point, format it for the OS, and leave my /dev/sda3 alone, mounting it as /home.

I entered what appeared to be a nice little partition configuration tool (DiskDrake, I think?), which seemed to be exactly what I needed. I selected the /dev/sda2 partition as the place I wanted my / mount point and chose /dev/sda3 as my /home mount point. When I chose the option to format my / mount point, I got an error message saying the partition could not be formatted. Considering that my plan was to wipe it out anyway, I went ahead and removed that partition and re-added it using that utility. As I tried to move forward, I got a message indicating that I needed to reboot, restart the installation process, and then choose “Use existing partitions”. Simple enough, right?

I then rebooted to the live CD and entered the installation again. This time I was presented with a new option, “Choose the partitions you would like to format”, and it listed only my larger /dev/sda3 partition with a checkbox next to it, with no mention of my /dev/sda2. I found this a bit interesting, and after carefully removing the checkbox I moved forward. The next step took me to a screen reading “Copying files…”…. wait… huh? To where?

Apparently it now considered my /dev/sda3 (which I intended to be /home) as the only drive. I cancelled the process and opened the terminal. After browsing to that directory, I found new /usr and /boot directories there, which confirmed my suspicions.

Things then began moving downhill and picking up speed….

I opened the partitioning tool GParted and was surprised to see that not only did my 15GB /dev/sda2 not exist anymore, but /dev/sda3 was now a 145GB partition of unallocated space. NOT GOOD, considering that about 110GB of it was *very* allocated with data that I didn’t intend on losing.

Even with all the steps I have taken since, I have been unable to mount /dev/sda2. I even popped in Damn Small Linux to attempt some quick surgery, and even it was unable to save me. I then tried an Ubuntu live CD, and it didn’t recognize anything on /dev/sda at all. At some point during this process I noticed that I was getting “bad magic number” and “corrupted superblock” type messages in relation to that device.

After putting the PCLinuxOS live CD back in, I was a bit relieved to see that it auto-mounted /dev/sda3 as /media/disk, and that I could at least access the files that were once safe and sound in my /home directory.

So…. here I sit, waiting for GBs upon GBs of data to upload via FTP to various servers so that I can wipe the entire friggin thing out and start over. Tomorrow I get the fun of retrieving it all and piecing my laptop world back together…. <sigh/>

More to come….