Sunday, December 30, 2007

Visual C++/Studio: Application configuration incorrect?

If you have just written a program in Microsoft Visual C++ or Visual Studio (2005 and above, I believe), tried to run it on another machine, and got the error message “This application has failed to start because the application configuration is incorrect. Reinstalling the application may fix this problem.”, then you'll want to read on. If you just want to see me rant at Microsoft, read on as well.

The problem is really simple. If you write a C++ program, it links dynamically to the C Runtime Library, or CRT for short. This library contains your printf, your malloc, your strtok, etcetera, and lives in a file called MSVCR80.DLL. This file is not installed by default on a Windows system, hence the application cannot run.

The solution? Either install the DLL on the target machine through VCREDIST.EXE (the Visual C++ Redistributable Package), or link to the CRT statically (plug the actual code for the used functions straight into your EXE).

Distributing and installing VCREDIST along with a simple application is a pain in the arse, so I went for the second option: static linking. It's really easy: go to your project's properties, unfold C/C++, click Code Generation, and set the Runtime Library to one of the non-DLL options. That's all there is to it.

Now comes the rant part: how much effort it took me to figure all this out. You have been warned.

  1. “This application has failed to start because the application configuration is incorrect. Reinstalling the application may fix this problem.” What kind of error message is that? It's like saying “sorry, your car does not work, because the engine won't start; please buy a new car.”
    At the very least, you could tell me that I'm missing a DLL. Preferably also tell me which particular DLL.
  2. Reinstalling the application, like the error message suggested, did of course not fix my problem. But the message also does not give a hint where to go looking for the error. It took some web searching to figure that out: the Event Log. Itself hidden quite well inside Windows, it told me which particular DLL was missing on the system. The Dependency Walker, which also comes with Visual Studio, told me that MSVCR80.DLL was indeed the culprit.
    The error message should at least point toward the more useful information: “See the Event Log for details.”
  3. I searched around on the web for MSVCR80.DLL and found its purpose. It turned out to be possibly the most basic library any C programmer could wish for. So why the heck is it not installed on every Windows system? It turns out that some older versions of the CRT are installed with Windows, but these are really ancient and buggy, and I honestly wouldn't know how to make Visual Studio link against them.
    So why, in these days of automatically updating systems and always-on internet connections, is this small (612 kB) but very essential DLL not included in service packs, or in Windows Update?
  4. Now, to fix the problem, I had to install the DLL on the target system. Simply dropping it alongside my application didn't work, because nowadays DLLs actually need to be installed. This is because modern DLLs are what Microsoft calls Side-by-side (SxS) Assemblies, which have been introduced in a brave attempt to diminish DLL hell. I don't know the gory details; it's something to do with manifests, and probably lots of candles, pentagrams and holy water as well.
    Anyway, you cannot download the VCREDIST installer straight from the Microsoft website, because there's only an old version there. Or is there? A newer page does give you what you want, and there's a 2008 version too.
  5. Thinking that it must be possible to link statically to the C Runtime Library, I looked into the project options in Visual Studio. I could not find the option there. Not very surprising, considering its name: “Runtime Library.” What runtime library? The Grand Unified DLL of Making Coffee? Or is this just a general option relating to runtime libraries? The default value (“Multi-threaded DLL”) seems to suggest this. I dared not touch this option for fear of breaking my application. It's a common problem with Microsoft: they often use a very generic-sounding name for something very specific.
    Had the thing been called “C Runtime Library linkage” instead, I would immediately have grasped its meaning.

On a not completely unrelated note, I'm pleased to announce that this bug in Taekwindow has finally been resolved.

Sunday, December 16, 2007

My ideal filesystem

I have a file server. It has multiple disks of various sizes. Some are old and likely to fail soon, others are brand new and will hopefully fail less soon. The file server contains nearly half a terabyte of data. Some data are big, others are small. Some are important, others I could do without.

The problem: it takes a lot of manual labour to manage all this. I need to decide which data goes where, keep an eye on the free space of each drive, make sure backups are made regularly, shuffle around data when I add a new disk, etcetera. Highly inconvenient.

The solution: My Ideal Filesystem, MIFS for short. Unlike other filesystems, MIFS is not stored on a single disk (or partition, if you like): it is spread out over multiple partitions. Unlike filesystems on a RAID or LVM array, MIFS actually has knowledge of the underlying structure of its disks (or partitions) and uses this knowledge to its advantage.

MIFS presents itself to the operating system simply as one filesystem. You can therefore mount it at a single mount point. There is only one small extension to the interface that normal filesystems expose to the OS: you can tag a file with a number that indicates the ‘importance’ of the file. This number indicates how bad it is if the file gets lost. So I can tag, for example, a thesis that I'm working on as very important, whereas a television series that I downloaded can easily be downloaded again and is therefore less important. There is also a number which specifies a ‘minimum redundancy’ for the file. If no number is specified, it is inherited from the parent directory.

Additionally, the disks comprising the filesystem each have a tag with their relative reliability, so you can indicate which disks are likely to fail soon. This number might be extracted from the SMART data that the disk itself presents, combined with a database of reliabilities of different disk models, if it is possible to build a database like that.

Now when I write a file to this filesystem, MIFS will decide what to do with it, depending on its importance. When the array is mostly empty, MIFS can afford to write files to each and every one of the disks, achieving maximum redundancy and complete recovery even if all disks but one fail. When the array fills up, the files that are less important will be erased from some of the disks to make room for more important files. The ‘minimum redundancy’ tag ensures that my important thesis will always be on at least three of the disks. The filesystem is only full when all files are at their minimum redundancy level.
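The placement policy described above can be caricatured in a few lines of Java. This is purely my own toy model of a filesystem that does not exist; the function, its parameters and the numbers are all made up for illustration:

```java
// Toy model of MIFS placement: decide how many disks a file should live on,
// given its importance, its minimum redundancy, and how full the array is.
public class MifsPlacement {
    static int replicaCount(int importance, int minRedundancy,
                            int diskCount, double fullness) {
        // When the array is mostly empty, spread the file over every disk.
        // As it fills up, fall back towards the file's minimum redundancy,
        // shedding copies of unimportant files first.
        double room = 1.0 - fullness;                     // fraction of free space
        int extra = (int) Math.round(room * importance);  // extra copies we can afford
        int copies = minRedundancy + extra;
        return Math.max(minRedundancy, Math.min(diskCount, copies));
    }

    public static void main(String[] args) {
        // A thesis (importance 5, min redundancy 3) on a 4-disk, nearly empty array:
        System.out.println(replicaCount(5, 3, 4, 0.10)); // 4: every disk
        // The same thesis when the array is 95% full:
        System.out.println(replicaCount(5, 3, 4, 0.95)); // 3: its minimum
        // A downloaded series (importance 1, min redundancy 1), array 95% full:
        System.out.println(replicaCount(1, 1, 4, 0.95)); // 1 copy
    }
}
```

A real implementation would of course also have to decide *which* disks get a copy, weighting against the per-disk reliability tags; this sketch only captures the how-many question.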

One could even go a step further, and put some of the disks in a machine across a network or even the internet. That would essentially give you automatic, real-time backups in case one of the machines gets fried along with all of its disks.

MIFS has only one huge drawback: it does not exist. Of course there are many technical difficulties to be overcome when implementing MIFS; I am not blind to that. But I think it should be possible. Anyone who writes this filesystem will earn my eternal gratitude.

Friday, December 14, 2007

Using MS Word – the right way

It's been quite some time since I last used Microsoft Word for any serious document. Nowadays I mostly use LaTeX. Which of the two is “better” is not a discussion I want to get into: each has its own pros and cons and is suitable for a different purpose.

For those who are for some reason stuck with Word, I've been wanting to write an article on “proper Word usage” which takes away much of the pain of using the program. However, I just discovered that this article already exists (and how could it not?): Living with Microsoft Word: Tips for survival.

No Word user, frequent or occasional, should be without the knowledge in this article. Spread the word! — Erm… sorry.

Thursday, December 6, 2007

A case against student presentations

For my master's in computing science, I am currently taking two courses which largely consist of presentations given by the students themselves. The idea is that students research one topic in-depth, and learn about the other topics from others.

I've attended four such presentations today. One was quite good, one was mediocre and two were downright embarrassing. In view of my past experiences with such presentations, I found this a decent score.

Why doesn't this system work?

Firstly, students often don't know the material well enough. The presentation can then go one of two ways: either the material that is not understood is skimmed over, or it is left out. If the hard stuff is only skimmed, we see slides with many complicated formulas, algorithms, graphs and numbers, but the presenter hardly touches upon them. When we ask a question to dig up more information, only stutters come out. Equally bad, if the hard stuff is completely left out, we end up with a presentation so shallow that it is nearly without content. When we ask for more detail, it turns out that the presenter knows no more than he told.

Secondly, most people cannot teach. Understandable, because teaching and explaining is hard. Why else would teachers have to go through years of training before they are allowed in front of a full classroom? And even then, most teachers are mediocre. University professors, despite knowing their subject very well, have received hardly any training at all, and are usually worse. Therefore students cannot be expected to be able to explain something properly. Those who can are the exception, not the rule.

It is already hard enough to get complicated material into your own mind. To get it into someone else's is much, much harder. Forcing people to attempt both at the same time is a recipe for failure.

Friday, November 30, 2007

50,000: Better late than never!

There. I've done it. I just wrote a 50,000 word novel in one month, as I set out to do. I just typed out the 50,078th word, and made it to the finish line in the nick of time.

I wrote last month that the novel was going to suck. Now, I really don't know anymore. It is so hard to judge a work that you've been working on for hours nearly every day: you become blind to its flaws. But the story may not even suck as badly as I thought it would.

Although the story is finished, the novel is not done yet. The scenes are written out of order and do not connect logically to each other. There is exactly one scene per chapter, and the chapter heading gives a perfect spoiler of what happens in that scene. Inconsistencies crept in while I changed my mind about how things should work. Things that should have been said only once are said twice, thrice, while things that need to be mentioned remain unwritten. As I wrote in English, which is not my native language, the vocabulary I used is probably limited. And I may even have made an occasional typo. Much work remains.

I hope to get this all fixed up during December. Then, if and only if I'm satisfied, I will print out copies for some friends. Whatever I do with the story afterwards depends much on their reaction. If it's total rubbish I might put it online for free; if it's brilliant I might try to get it published. The truth is probably in between, so neither might happen. We'll see.

NaNo taught me a lot of lessons. I might write more about all that later. I might also write up the “making of” story for posterity and future WriMos. But all that will have to wait. Right now, my arms are starting to complain of RSI again, and I have a much neglected social life to attend to.

But I did it!

Thursday, October 18, 2007


There. I've done it. I've signed up for NaNoWriMo, the “National” Novel Writing Month. I'll be writing a 50,000 word novel through November.

It's going to suck. I must keep telling myself that. I'll have to keep that terrible, terrible perfectionist inside me under control. Quantity, not quality. Editing comes later.

Writing nearly one thousand six hundred and sixty seven words each day is not going to be easy. I will need your support and encouragement. I will need a kick in the butt when I'm about to give up. You can see my progress on my profile page.

Sunday, October 7, 2007


I just registered an account on blipfoto, a site where you build a photo blog by uploading one photo each day. My account name is ttc.

Like this blog, it is intended as a learning experience. I'm not sure I'll be posting each day. Let's see how it goes. Here is my very first blipfoto.

(I was told that one of the great things about blipfoto is the feedback you get, but I hadn't quite expected this much. Even though my photo has only been up for an hour or so, there are already 9 comments!)

Monday, September 17, 2007

The Matrix Revisited

In the Wachowski brothers' movie The Matrix, it is suggested that the world we experience is only a computer simulation. Our real bodies are in coffins on an Earth ruled by robots; our nervous system is connected to the computer to make us experience this simulated world.

The robots keep us that way to use us as batteries for their energy needs. Considering the Law of Conservation of Energy, this is of course utter bullshit. But another theory might be less far-fetched: suppose that the entire universe around us, including ourselves, is just the product of one big computer simulation.

This article gives a fairly rigorous argument that we probably live in a simulation, provided that most civilizations live to be capable of, and interested in, running such a simulation. It also draws some interesting conclusions from this.

The article does not claim that we are simulated beings, and neither do I. Unless some bright hacker rises, follows small mammals around and swallows brightly coloured drugs, we're probably never going to know if we live inside a computer. But there is some empirical evidence. If you look closely at our fundamental laws of physics, it turns out that some of them are very convenient for computer simulation… as if they've been designed that way.

  1. Relativity. In particular, the fact that all movement of matter, energy and information is limited to the speed of light. Consider that our own computers become more and more parallel. It is reasonable to believe that the ultra-advanced computer, let's call it Deep Thought, is also a highly parallel computer. The weak point of parallel computers is the locality of data: transferring data between processors is relatively time-consuming. Limiting communication to the speed of light allows you to run sections of the universe on different processors with minimal communication between them.
  2. Quantization. Energy is quantized: it only occurs in discrete packets. No fractional energy quanta can exist. Any computer we know is discrete in operation, and is unable to store numbers to arbitrary precision. Possibly Deep Thought suffers from the same limitation. I would not be surprised if one day we found that time and distance exist in discrete units as well, and the real numbers turn out to be a concept that exists only in mathematicians' minds.
  3. Schrödinger's cat. One of the main ideas of quantum physics is that the state of an object is undetermined (the living and dead states of the cat are superposed) until someone takes an observation, thereby interfering with the experiment. How convenient: Deep Thought does not even need to compute what's going on with the cat until that information is actually needed! This is called lazy evaluation and is used in many of our own programming languages to save computational effort.
  4. Dark matter. Astronomers found that objects in the universe do not move the way they should, considering the amount of matter attracting them. The difference is attributed to dark matter, which cannot be observed directly, but does produce a gravitational effect. It seems like a hack made up by astronomers, but what if it's actually a hack made up by the programmers of Deep Thought? Suppose you are a superbeing who, after making up all these beautiful laws of physics for your toy universe, finds that it is not stable. Back to the drawing board – or just add some invisible, intangible matter for some extra gravitation? (It might be possible for us to compute whether our universe could be stable without any dark matter, so this argument could be verified or falsified.)
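The lazy evaluation mentioned in point 3 is a real programming technique, and can be sketched in a few lines of Java (a toy illustration of the concept, nothing more):

```java
import java.util.function.Supplier;

// A value that is only computed when first observed, then cached forever after.
class Lazy<T> {
    private Supplier<T> supplier;
    private T value;
    private boolean evaluated = false;

    Lazy(Supplier<T> supplier) { this.supplier = supplier; }

    T get() {
        if (!evaluated) {
            value = supplier.get(); // the "observation" forces the computation
            evaluated = true;
            supplier = null;        // let the unused thunk be garbage-collected
        }
        return value;
    }
}

public class Cat {
    public static void main(String[] args) {
        // The cat's fate is not computed until somebody looks in the box.
        Lazy<String> cat = new Lazy<>(() -> Math.random() < 0.5 ? "alive" : "dead");
        System.out.println(cat.get()); // first observation fixes the state
        System.out.println(cat.get()); // same state ever after
    }
}
```

Until get() is called, no work is done at all; once it is called, the outcome is fixed. That is exactly the convenient behaviour of the cat in the box.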

The problem with this and many similar theories is that they are very hard or impossible to disprove. This particular theory can never be disproved. It might be proved, though, by exploiting the fact that a computer system is involved. Any computer system we know contains bugs, so perhaps Deep Thought is not flawless either. If we find a way to trigger a bug in the programming of the universe, we might have evidence of what lies beyond. Or we manage to crash the complete system, and hope that their system administrators have made a decent backup…

Sunday, August 26, 2007

OOXML: defective, but don't exaggerate

In a recent article titled Microsoft Office XML Formats? Defective by design, self-proclaimed file format expert Stéphane Rodriguez explains 13 reasons why Microsoft's Office Open XML (OOXML) format should not become an ISO standard. Although I completely agree with his conclusion, Rodriguez seemingly got carried away by his rant, uttering nonsense in some places. Still, most arguments make sense, so I'll cover only those that don't, below.

1) Self-exploding spreadsheets. Here, he modifies an Excel file manually, and is surprised that even this “simple” change breaks the file. He does not once refer to the specification to see whether the thing he changes may depend on things elsewhere in the file. So, probably, the file he created is not at all according to spec. Is it strange, then, that Excel goes boom? An Office document is a lot more complex than you would think at first sight, and the storage format is bound to reflect this.

2) Entered versus stored values. Clearly, the values get processed internally as (binary) floating-point numbers, which explains numbers like 1234.1233999999999 cropping up when converted back to the decimal storage needed for XML. If an implementation simply uses the IEEE floating-point format, like pretty much any CPU does, there is no problem. Besides, the fact that Excel writes out (very slightly) inaccurate numbers does not mean that the OOXML standard is flawed, only Excel's implementation thereof.
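The effect is easy to reproduce. This Java snippet (my own illustration; it has nothing to do with Excel's code) shows the exact binary value hiding behind an innocent-looking decimal:

```java
import java.math.BigDecimal;

public class StoredValue {
    public static void main(String[] args) {
        double entered = 1234.1234; // what the user types

        // Shortest decimal string that round-trips to the same double:
        System.out.println(Double.toString(entered)); // 1234.1234

        // The exact value actually stored in the IEEE 754 double
        // (a long string of digits starting 1234.1233999...):
        System.out.println(new BigDecimal(entered));

        // It is not exactly the decimal the user entered:
        System.out.println(
            new BigDecimal(entered).compareTo(new BigDecimal("1234.1234")) == 0); // false
    }
}
```

1234.1234 has a factor of 5 in its denominator when written as a fraction, so no binary floating-point number can represent it exactly; any program that writes out the stored value with enough digits will expose the difference. That's IEEE 754, not a flaw in the file format.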

6) International, but US English first and foremost. Another complaint of Rodriguez is that the numbers get stored in US English locale format (1,234.56), and not the localised format (e.g. 1 234,56). Also, formulas always use English function names, like SUM. Rodriguez claims that this canonicalisation makes processing more complex. Wait, what? You want us to go store things differently depending on how some people would like to view them on the screen? You think a file will be easier to process when we introduce diversity in locales?

12) Document backwards compatibility subject to neutrino radioactivity. So, Excel 2007 cannot properly import graphs from earlier versions. What else is new? Anyway, I do not see how that can be regarded as a flaw in OOXML; the quality of the Microsoft Office suite is something quite different from the standardisation of the OOXML format.

Unrelated to Rodriguez's rant, there's one gem I would not want to keep from you. The folks at Google discovered [PDF link], buried in the over 6000 pages of the OOXML spec, 51 pages of stuff like the following:

If anyone dares to say these specs aren't bloated, point them to part 4, section 2.18.4, pages 1632–1682. I honestly don't know whether to laugh or cry.

If you like, you can have a look at the standard yourself. It is known as ECMA-376. The specs can be downloaded in either zipped PDF format or in a mysterious format called DOCX. For the latter, alas, no fully functional implementation appears to exist.

Wednesday, August 8, 2007


Shortly after my previous post, Trust your gut, I read about the book Blink: The Power of Thinking Without Thinking by Malcolm Gladwell. Both my conscious and my subconscious mind told me to purchase it. It took me only two days to finish the book.

Gladwell makes essentially the same point as I did in my previous post. In his words, much of our thinking happens behind a “locked door”: you can only sense the outcome, but not where it came from. If you try to figure out why you took that particular decision, only nonsense will come out. If you try to follow the reasoning while your subconscious is deciding, your conscious thoughts appear to interfere with it, and the subconscious is essentially disabled.

This subconscious reasoning is not magic. It relies on subtle clues that are too numerous for your conscious mind to process, and on tacit knowledge that comes from previous experience. For example, I am quite good at troubleshooting computer problems, but only if I'm there to witness them. If somebody tells me “my computer is doing this-and-this, do you know how to fix it?” I can give some general pointers, but it's only at the keyboard that I really get the insights. Not only is the text of an error message meaningful to me, but also its looks, its responses and the precise thing I was doing at the moment it popped up. It's not magic, it's simply lots of experience. And no, I will not fix your computer.

Of course Gladwell took much more time for his research than I did for my blog post, and he also gives examples of situations where you shouldn't trust what your subconscious tells you. For example, when you are in a dangerous situation, your mind goes into a state that psychologists call “arousal”. It shuts down parts of the brain that are deemed non-essential at that moment, including the one that recognises emotions on human faces. This explains why police officers shot a man because they thought he had a gun, while in fact he thought he was being robbed and reached for his wallet in mortal fear.

Whether you like it or not, a lot of our thinking happens subconsciously. I think that we are not aware of over 99% of our own thinking, our consciousness being just a flimsy layer on top of that. I pulled that number straight out of my arse, of course. But my arse may well be a lot smarter than you'd think …

Tuesday, July 17, 2007

Trust your gut

Note: I'm not a psychologist. The following is based on personal experience only, probably including generalizations and skewed perceptions.

With my recent move to another place, I've had to make many decisions, both big and small. Most people think that, once you've gathered enough information, you make your decisions as follows:

  1. think about the pros and cons of each option;
  2. make decision based on the balance between the pros and cons.

But I noticed that the process is usually more like this:

  1. make decision;
  2. pretend to think about the pros and cons;
  3. reinforce decision by stressing the pros and diminishing the cons.

The catch is that the making of the decision happens unconsciously: you have often already decided before you're even aware of it. All thinking after that point only serves to rationalize it for yourself or others.

I also noticed that my unconscious decisions very often turn out to be right. If I overrule them with some rational argument, it often turns out that the rational argumentation was overlooking some important point. Apparently, more thinking happens “behind the scenes” than you know. And once you've decided, how important is the reasoning anyway?

So, I try to keep an eye on my subconscious. When I notice that it has already made the decision, I often go with it. It saves a lot of time and effort spent on needless thinking.

God does play dice

Einstein was wrong. God does play dice. I heard them rolling tonight.

It was nice enough weather for most of the day. A bit warm and clammy, but not much to complain about. Until, at half past nine, I looked out the window and saw it getting at least three stops darker over the course of less than eight minutes.

A quick check of the radar pictures confirmed that a pretty heavy thunderstorm was rapidly coming my way.

I had tried to photograph lightning the day before, but that had been in the afternoon. I couldn't use a long enough exposure time because the daylight was still too bright. Out of hundreds of pictures I only got one with a small lightning bolt on it. This time, it was rapidly getting darker, and not only because evening was falling. I must be able to do better.

I grabbed my camera and tripod and ran up the stairs to the walkway on the 7th floor. Out there the wind was getting stronger, and I had trouble keeping the camera steady on my light tripod. I removed the cord from the camera to make it catch less wind.

Varying the exposure time between 1/4th of a second and 15 seconds, I snapped many pictures in a row. The camera was a bit too slow for the task, so I didn't want to waste time looking whether I got any decent pictures. I took care to heavily underexpose them, so that when lightning arrived, it would not overexpose the picture.

Thunder rolled time and time again. It was as if someone up there was rolling his massive dice over the clouds each time the light flashed. I wondered whether even God would know the outcome of His roll in advance—if so, why was He even playing?

In the meantime it had started to rain, and the walkway on one side of the flat was pretty open to it. I took a few steps back to avoid getting drops on the lens and ruining my pictures. Many awesome lightning bolts flashed before my eyes, but often right in between two exposures. It seemed like the dice were loaded against me.

When the best part of the storm seemed over, I folded up the tripod and turned off the camera. I smiled as I saw, right before the automatic lens cap closed, that the lens was dry.

As soon as I was back in my apartment, I pulled the 684 pictures off the card and started flicking through them. Many of the pictures had a light, pink sky, but no directly visible lightning. But I had gotten lucky a few times.

Then, while sorting out the pictures, I saw more flashes out of the corner of my eye. It wasn't over yet! So I mounted the camera on the tripod again and, not taking the time for all the stairs, went out onto the balcony. I could hear from the many shouts and exclamations that I was not the only one watching the show.

Suddenly, there was a flash brighter than all the others had been. Even before the light had died out, a cracking, deafening thunderclap sounded. “Fucking hell!” I exclaimed. The impact couldn't have been much over a hundred metres away.

Then I figured that, after this close call and my profanity, I'd better get back inside. God does play dice. I heard them rolling tonight. And I wouldn't want them to come down on my head.

Thursday, May 31, 2007

JVC Everio GZ-MG575 review

Last week, I bought another camera: the JVC Everio GZ-MG575. It set me back € 995. I'll be returning it today, and I do not have time to write a full review like I did with the Panasonic NV-GS320. But I can give you an impression. Short story: Do not buy this camera. Ever.

Contents of the package

Apart from the camera, the most important part is the dock, which connects to the camera through a single connection at the bottom. There is the mandatory remote control (including battery). The Dutch manual is translated quite badly and there was no English version in my package. There's a USB cable, an A/V cable, a power adaptor and a shoulder strap.

Picture quality

The picture quality is reasonable, but not outstanding. Widescreen is fake, simply chopping off the top and bottom of the picture. Instead of making the viewing angle wider, it actually makes the picture less tall.

The picture looks a bit greenish, and there's no setting to correct for this. It is not a matter of white balance.

The (digital) image stabilizer only works if your picture is already nearly still, and even then results in a motion that feels choppy. Once you've worked with an optical, physical stabilization system, you realize how much better that works.

Autofocus is very slow. It focuses solely on the centre, and does not pay attention to other factors, so apart from slow it is also stupid. As is to be expected without an optical viewfinder and a lens ring, manual focus sucks. Your only option is to point where you want to focus, wait for autofocus to kick in, then switch to manual focus to lock the focus.

Auto white balance detection is reasonable, but… it is not smooth. When walking from inside to outside, for example, it will instantly switch the white balance from indoors to outdoors. Something you definitely do not want.

Low-light performance

I haven't tested the low-light performance, but I expect that it does a pretty good job here, judging from the large lens opening.


In a well-lit room, with big windows and a lot of outside light, setting the aperture to F3.5, the camera tried to use a shutter speed of… 1/5 second. In case you're not into photography: it will be impossible to get a sharp picture at that speed, and even cheap and crappy compact cameras like mine do a lot better. I did not bother trying to take any more photos after this insanity.


I haven't tested the internal microphone thoroughly. If it turns out to be crap, you at least have the option to plug in an external microphone.

Recording medium

The camera records to a hard disk. Compressed like a dvd. Even on the maximum quality setting. You're probably going to export to dvd sooner or later anyway, so it may not be a big problem, but I like to start with the highest possible quality of material.

There's also an SD card slot that I haven't tested. Apparently you can record photos as well as video to this card.


The LCD has a pretty limited view angle, especially in the vertical direction. I haven't tested it in direct sunlight, but if it fails there's no viewfinder to fall back on.


The user interface is downright horrible. Let me give you an example. When set to manual mode, with the little wheel at the top, you can switch to manual focus by pressing the joystick downwards. Now switch to, say, S mode where you can set the shutter speed. Manual focus is retained. But you set the shutter speed—you guessed it—by pressing the joystick up and down: you can no longer turn off manual focus!

The menu wraps around like a cylinder, so you don't know when you've seen all options. All menus are animated, and rather slow and annoying too.

All buttons seem to be in just the wrong places, and there's no way you can control this camera with one hand. Many buttons have different functions depending on the mode. All of it just fails to make sense.

There's a “drop detection” that will shut down the camera when it detects a falling motion, to protect the hard disk. It also shuts down if you simply lower it a bit too quickly. So I turned the feature off; the best way of keeping your hard drive intact is still simply not dropping it. But at startup, the camera keeps warning me that drop detection is off, and an annoying blue icon keeps flashing on the screen all the time.

Battery life

I'm not sure how this compares to other hard disk cameras, but in itself, I found the life of the accompanying battery quite limited. Apparently there's a good reason that the box (that you see even before you buy the thing) already states: “Don't Forget A Back Up Battery!”

Capturing and editing

Here's a point where I do like this camera. It comes with a dock, to which you can connect a power supply, USB, FireWire, S-Video and AV. A welcome change from having to plug in two or three cables each time.

The camera shows itself to Windows as the hard disk it contains. You can simply copy off the .mod files. These are, according to the manual, a “proprietary format”, but actually they are simply .vob files like on a dvd. After renaming, Premiere has no problem importing them. No external software is needed.
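Since the camera shows up as an ordinary disk, the renaming step can even be scripted. Here's a quick sketch in Java, purely for illustration (the directory argument is a placeholder for wherever you copied the .mod files to; I'm assuming a plain rename to .vob is all that's needed, as described above):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class RenameMod {
    public static void main(String[] args) throws IOException {
        // Placeholder path: the folder you copied the camera's .mod files into.
        Path dir = Paths.get(args.length > 0 ? args[0] : ".");
        try (Stream<Path> files = Files.list(dir)) {
            files.filter(p -> p.toString().toLowerCase().endsWith(".mod"))
                 .forEach(p -> {
                     String name = p.getFileName().toString();
                     // Swap the .mod extension for .vob; the content is untouched.
                     Path target = p.resolveSibling(
                         name.substring(0, name.length() - 4) + ".vob");
                     try {
                         Files.move(p, target);
                         System.out.println(name + " -> " + target.getFileName());
                     } catch (IOException e) {
                         System.err.println("Could not rename " + name + ": " + e);
                     }
                 });
        }
    }
}
```

Renaming by hand in Explorer works just as well, of course; this only saves some clicking for a disk full of files.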

I had expected editing to go less smoothly with compressed material, but I haven't noticed anything of the kind. It all works just as well as with raw DV videos, as far as my (admittedly quick and short) editing session could tell.


Horrible usability, and the picture quality does not make up for it. Do not buy. Definitely not worth your money. I'll be going back to my good old Panasonic, external microphone input or no.

Side note: during the entire time I'm writing this, the camera is erasing its hard disk. Although it says “formatting”, I hope it is actually overwriting all the bits seven times, because nothing else accounts for half an hour of formatting time…

Friday, May 4, 2007

Equality checking on interrelated objects

Most, if not all, object-oriented programming languages allow you to define what it means for two objects to be “equal”. In C# and Java, this is done by overriding the Equals or equals function, respectively.

But what to test when comparing two objects? You'll obviously want to compare (most) instance fields, for example the name of a person or the number of wheels of a car. But it's less obvious when an object is (conceptually) part of another object, or dependent on it in some other way. I ran into such a situation today.

Consider the class Pig. A Pig is associated with a Farmer, the owner. Each farmer has, in turn, a list of ownedPigs. (I use Java capitalization conventions here to make a clear distinction between types and fields/methods.)

Suppose we want to check whether two Pigs are equal. (Do not just return true. Some animals are more equal than others.) Clearly, two Pigs are not equal if they are owned by a different Farmer, so we need to compare their owner fields. Reference equality on this field is not enough: if we can have multiple Pig objects representing the same pig, then the owner fields may well refer to different objects representing the same farmer. So we do a value comparison of the owner fields, by calling owner.equals.

Of course, two Farmers are not equal unless they own the same Pigs. We have to call Pig.equals for each owned pig. This, in turn, calls Farmer.equals, which calls Pig.equals … and so on ad infinitum (which is Latin for “stack overflow”).

How to solve this problem? The key is, when equality of type A depends on equality of A's fields of type B and vice versa, to check not the entire B objects for equality, but only compare the parts we're interested in.

For example, a Pig couldn't care less what the social security number of its owner is. It does, however, care where he lives: a pig on the South Pole will have very different life circumstances from one in the Sahara. In Pig.equals we would then only compare the addresses of the owners, and not the entire Farmer objects.

With this modification, the cycle is already broken. We could also tackle the problem from the other side: the Farmer cares only about how fat his pigs are, but not about their political preference. When comparing two Farmers we could look only at the weight of all their ownedPigs, disregarding their vote field.
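Here's what the first fix might look like in Java. This is a minimal sketch: the fields (name, city) are invented for illustration, and a real implementation would also need null checks and matching hashCode logic (included here only in skeletal form). The point is that Pig.equals compares the owner's address instead of calling Farmer.equals, so the mutual recursion never starts:

```java
import java.util.Objects;

class Address {
    final String city;
    Address(String city) { this.city = city; }
    @Override public boolean equals(Object o) {
        return o instanceof Address && Objects.equals(((Address) o).city, city);
    }
    @Override public int hashCode() { return Objects.hash(city); }
}

class Farmer {
    final Address address;
    Farmer(Address address) { this.address = address; }
}

class Pig {
    final String name;
    final Farmer owner;
    Pig(String name, Farmer owner) { this.name = name; this.owner = owner; }

    @Override public boolean equals(Object o) {
        if (!(o instanceof Pig)) return false;  // a Farmer argument yields false, as it should
        Pig other = (Pig) o;
        // Compare only the part of the owner a Pig cares about (the address),
        // not the whole Farmer: this breaks the equals() cycle.
        return Objects.equals(name, other.name)
            && Objects.equals(owner.address, other.owner.address);
    }
    @Override public int hashCode() { return Objects.hash(name, owner.address); }
}
```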

I admit this example is a little contrived. I'm not programming an animal farm. But if you ever do, and your Pig.equals method gets called with a Farmer object, remember to return false.

Thursday, May 3, 2007

My goal in life

The standard impossible-to-answer philosophical question, which has become very much a cliché, is “What is the meaning of life?” I've answered this for myself ages ago: life has no meaning by itself. Everybody can create as much or as little meaning for his or her own life as they please.

It helps to have an ultimate goal in life. For every decision you need to make, you can check whether it aligns with your life goal. Having a goal also gives your life purpose and motivates you to keep going.

When I read a post by Steve Pavlina a couple of months back, I was reminded of this. What was my goal in life? Steve suggests writing down potential goals until one makes you cry: that is the goal you're looking for. I discovered I didn't need this; the answer popped right into my head, and had probably been there for a long time.

All that remains is to write it down. My goal in life is to continuously keep improving myself, and thereby the world around me.

I'm a pretty altruistic person. Yes, I want to improve the world. But I also have a selfish part. I want to grow, to learn, to become a better person in every sense. These goals don't have to be in conflict. A “good” me also cares about his surroundings, helps other people, tries to make a difference for the better. That's the person I want to be.

Wednesday, May 2, 2007

Checks, exceptions and assertions

When programming in most modern languages, there are basically three different ways to deal with errors or abnormal situations: checking, exceptions and assertions. However, some people seem to misunderstand when to use which.

A check is the oldest trick in the book. It's simply an if (or equivalent) statement that checks whether a certain error condition holds. An exception can be raised (thrown) at the point where the error occurs, and then handled (caught) by any function lower down the call stack. An assertion is also a sort of check, but usually in the form of a library call or a language construct. A failed assertion usually results in terminating the program. Assertions can often be disabled by a compiler parameter.

Now here's the flowchart that'll tell you which way of error handling to use:

  1. Do you think this error will ever occur?
    1. No » Use an assertion.
      Assertions reflect a claim made by the programmer: “this will never happen”. If it does happen, it must signify a bug. If you use assertions for anything other than validating things that you think must be true, you're using them incorrectly. Also, it's perfectly okay for a program to abort (with a meaningful message, mind you) when it encounters a bug during testing.
    2. Yes »
      Will this error occur in normal circumstances?
      1. No » Use an exception.
        Exceptions are made for, well, exceptional circumstances. They're meant for situations that you overlooked, or chose to overlook because they seemed (rightly or wrongly) rare enough that a check wasn't worth the effort. The great thing about exceptions is that they can be handled at a much higher level than the place where the error occurred, a power that simple checks don't have. You can use this to catch the rarer errors at a high level, without having to scatter error checks all over the place.
      2. Yes » Use a check.
        Verify whether a file given on the command line exists. Do make sure that a person's last name does not contain any quotes. These are things that are actually expected to go wrong once in a while, even under normal circumstances. You can nearly always handle them locally, so there is no need to throw an expensive (performance-wise) exception around.
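Translated into code, the three branches might look like this. A sketch in Java: the file-copying scenario and all the method names are invented; only the choice of mechanism in each case follows the flowchart.

```java
import java.io.File;
import java.io.IOException;

public class ErrorHandlingDemo {
    // A check: a missing input file is expected to happen in normal use,
    // so test for it with a plain if and handle it locally.
    static boolean inputExists(String path) {
        if (!new File(path).exists()) {
            System.err.println("No such file: " + path);
            return false;
        }
        return true;
    }

    // An exception: running out of disk space can happen, but not normally;
    // throw, and let a caller further down the call stack decide what to do.
    static void writeOutput(String path, long bytesFree) throws IOException {
        if (bytesFree == 0) {
            throw new IOException("Disk full while writing " + path);
        }
        // ... actual writing elided ...
    }

    // An assertion: a negative length "can never happen" here;
    // if it does anyway, that is a bug, and aborting is fine.
    static void process(int length) {
        assert length >= 0 : "length must be non-negative; this is a bug";
        // ... actual processing elided ...
    }
}
```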

It's quite simple, really.

Monday, April 30, 2007

Blogger is bad, mmkay?

Blogger is bad. I chose to use it because I already had a Google account, I'm quite satisfied with Google's other services, especially Gmail and Google Maps, and hoped that their purchase of Blogger/Blogspot would mean that this stuff was equally good. I was wrong.

First of all, Blogger is slow. This I knew before I joined up, by just looking at other Blogger blogs. I hoped that they'd move stuff over to Google's servers so it'd become faster, but it seems this hasn't happened yet.

Second, it's buggy. I used to have a label called “software”, but after some sequence of actions it indicated the wrong post count, and I couldn't get it right again. I'm not the only one with this problem. I reported it; it took them twelve days to respond and seven weeks to acknowledge the issue on the “Known Issues” list, and as far as I know it still hasn't been fixed. And there are more bugs and annoyances beside this one.

Third, and most importantly, the administration panel is terrible.

  • I have to type this post into a tiny box of 17 lines high and 37 characters wide. What did I buy a 24" monitor for again?
  • I have to type HTML tags because the HTML editing mode screws things up – I daren't even click it to find out anymore.
  • The preview uses a different style sheet than the actual blog. No idea what a definition list or a blockquote will look like unless I publish an immature post.
  • You can't get a preview in a separate window without actually publishing the post. This means you cannot edit and preview side-by-side.
  • I cannot save a post without navigating away from the editing page. Yet, given the fleetingness of a browser's edit box, I like to save regularly.
  • The settings panel contains settings with incomplete or unclear descriptions. Getting from there to the help page (which isn't all that helpful) feels like the old Windows 3.1 days all over again. This is the web – what about a hyperlink directly to the relevant page?
  • Because the headings h1 through h3 are used by the Blogger interface, internal post headings should start at h4. However, these are way too small, and h5 is almost unreadable. I had to modify the template to fix this.
  • Uploading an image inserts a thumbnail into the post, which is in JPEG format even if the image is a GIF or PNG, even if the image is small enough not to need any resizing at all.
  • The image thumbnail is placed at the top of the post, not where the cursor was when I clicked the “Add Image” button. (Okay, may be a browser issue, but Opera 9.20 is not really an obscure browser.)

Oh, and whatever I try (cookies, cache, Javascript, …), I cannot log in using Opera 9.20 under Linux.

I got so fed up after writing this post today that I gave WordPress a try. It feels a lot smoother and less clunky than Blogger, and it can import a Blogger blog.

However, the import from Blogger to WordPress does not copy the uploaded images, instead hotlinking them and giving funny preview popups. I'd have to copy all images by hand. I am too lazy for that. Making the switch would also mean that my readers have to update their feed readers. I will assume that you're too lazy for that. Finally, links from other sites to my blog would stop working. People would have to try and find the new location of, especially, this post, and they are probably too lazy for that. That post is linked from a number of locations where I couldn't even edit the link if I weren't too lazy for that.

Recommendation for new bloggers: click here. In the meantime, I'll go pray that somebody at Google will wake up and fix this mess.

Update: I tried to be helpful. I tried to contact the Blogger team and give them the URL to this post. But the closest I could find would be posting in the Blogger Help Group, and as you may understand I'm a bit reluctant to do that. There seems to be no way to contact the developers or even the support people directly …

Review: Free C# code analysis tools

Over the upcoming days, I'll be reviewing the C# code I'm working on for my Bachelor's thesis. It consists of nearly 9,000 lines of code (over 300 kB), so I felt somewhat reluctant to read it all through. I decided to try and identify the most obvious problems using automated code analysis tools first. Because I use Visual Studio 2005 under Windows for C# development, the reviews will mainly be targeted at this environment.

I tried the following programs:

Potential problem detection
FxCop, Code Analyzer, Gendarme, devAdvantage
Quality metrics
devMetrics, NDepend, SourceMonitor, vil
Code coverage
NCover
Similarity detection
Simian

None of the following reviews is very comprehensive, because I only played around with each tool for a little while, but they should give you a good indication of what to use and what to ignore.

FxCop
The FxCop program by Microsoft themselves checks assemblies for compliance with the .NET design guidelines, and identifies potential problems within the code.


FxCop checks a lot of issues with Microsoft guidelines. It gives a certainty percentage for each issue found, and also clearly explains what the problem is, and how to fix it. Some of the more interesting issues that are checked for:

  • Are exceptions raised that should not be raised by user code? For example, System.Exception should not be thrown directly.
  • Is an IFormatProvider supplied when converting strings to numbers? Very important if your code is to behave correctly under other locales.
  • Are reference parameters of methods checked to be non-null before use?
  • Are fields initialized to default values that are already assigned by the runtime, like null for reference fields? This would result in an unnecessary extra assignment.
  • Does the string argument to ArgumentOutOfRangeException contain the name of the argument?
  • Are there any unused local variables?
  • Are you using public nested classes? These are considered harmful by the guidelines.
  • Do abstract types have a public constructor? This should be made protected.
  • Are there any unused methods?
  • Are variables like fileName capitalized correctly? “filename” is wrong since “file” and “name” are (apparently) separate words.
  • Are you using a derived type as a method parameter where a base type would suffice?

This list goes on and on. FxCop found 557 issues in my code from dozens of different rules.
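The IFormatProvider rule is worth illustrating, because the same trap exists outside C#. Here's a hedged sketch in Java (the input string is my own example): the same text parses to wildly different numbers depending on the locale.

```java
import java.text.NumberFormat;
import java.text.ParseException;
import java.util.Locale;

public class LocaleDemo {
    // Parse a user-typed number under an explicit locale.
    static double parse(String s, Locale locale) throws ParseException {
        return NumberFormat.getInstance(locale).parse(s).doubleValue();
    }

    public static void main(String[] args) throws ParseException {
        String input = "3,14";  // a decimal as a Dutch or German user would type it
        System.out.println(parse(input, Locale.GERMANY));  // 3.14: the comma is the decimal separator
        System.out.println(parse(input, Locale.US));       // 314.0: the comma is read as a grouping separator
    }
}
```

In C#, the fix FxCop suggests is analogous: pass an explicit CultureInfo (which implements IFormatProvider) when parsing or formatting.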

You can jump directly from an issue to the corresponding source line(s) in Visual Studio or another application of your choice.


I'm very impressed by the comprehensive list of flaws that this program detects. It is definitely very useful for anything larger than a toy application.

Code Analyzer

At first glance, the Code Analyzer tool seems to do similar things to FxCop. And indeed the website provides us with a useful list of the differences, broken English included:

FxCop advantages (comparing to Code Analyzer):
  • Extensive set of rules available out of box. Code Analyzer provides just limited set of sample rules.
  • Since it works with assembly metadata works with code created in any .NET language. Code Analyzer works now just with C# sources.
Code Analyzer advantages (comparing to FxCop):
  • FxCop is limited to assembly metadata, Code Analyzer works with source code and provides more functionality like comments, position in source code and more.
  • FxCop has flat rules structure, which makes orientation in policy more difficult for larger policies. Code analyzer has hierarchical structure, based on logical rules categories.
  • FxCop provides only one type of report, Code Analyzer is flexible and provides more report types and users can create their own report types.

Especially the first advantage of Code Analyzer, source code inspection (as opposed to assembly inspection) seems worthwhile. Unfortunately, the program crashed on startup so I am unable to test it.

Gendarme
Powered by the Cecil code inspection library, Gendarme tries to identify points of improvement in your code based on a certain set of rules. There is no binary version yet; you'll have to build it yourself from an SVN checkout.


There is no GUI or IDE plugin, so you're stuck with the command line. Gendarme is run on assemblies, so it does not inspect the actual source code. On my program, it identified the following problems at multiple points in my code:

  • You should use String.Empty instead of the literal "", because it gives better performance.
  • A static field is written to by an instance method. (This was intentional: each object gets a unique ID, and I increment the “next ID” field in the constructor.)
  • Newline literals (\r\n or \n) in strings are not portable; use Environment.NewLine instead.

All in all, useful, but nothing spectacular.


This could become a very useful tool, if the rule set is expanded. At the moment it will not identify very much. The lack of a decent user interface limits its practical use.

devAdvantage and devMetrics

devAdvantage is a Visual Studio add-in that helps you identify areas that might use refactoring. devMetrics is an add-in to compute code complexity metrics. The Community Editions can be downloaded for free. The programs look interesting, but do not work on Visual Studio 2005. Bummer.

NDepend
NDepend is a very feature-rich quality measurement tool, also powered by Cecil. It operates on .NET assemblies, but because it also extracts debug information it can link this back to the original source code. A free one-month version can be downloaded for trial, academic and open-source use. You can view the getting-started animation to get an idea of the possibilities.


NDepend uses CQL, the Code Query Language, to extract information about the code. It allows you to construct your own queries if you're willing to invest the time to learn this. CQL is similar to SQL; take a look at this demo (3 minutes 30 seconds, Flash). For example, you can find all methods with over 200 intermediate language instructions, and sort them by descending number of instructions, using the following query:

SELECT METHODS WHERE NbILInstructions > 200 ORDER BY NbILInstructions DESC

NDepend comes with a few dozen built-in CQL queries that measure certain aspects of your code and can be used to quickly spot potential problems.

NDepend will spit out an HTML file like this one with humongous amounts of information on your project, most of which is just detailed factual information that is almost entirely useless. In my situation, NDepend failed to include the CQL results in the HTML file for some unknown reason.

The HTML file does, however, contain some useful information. It provides you with a table where the worst statistics are highlighted per method. It also lists warnings that, as far as I could tell, are not produced with CQL queries. In my case these were mainly “method so-and-so is protected and could be made private” warnings.

Other interesting features are the TypeRank and MethodRank, computed like the proven Google PageRank. It shows which types and methods are the most important in your program. On my program it did indeed give a very good indication.

The main part of the program is VisualNDepend. It produces a two-dimensional chart much like the disk-space charts from SequoiaView (among others). The area of each rectangle indicates the value of some metric; by default this is the number of lines of code of the respective class or method, but you can also select metrics like the MethodRank or the cyclomatic complexity.

Unfortunately, there is no easy way to ignore certain source files or methods, e.g. designer-generated code. You'll want to ignore these while scanning the results, because generated code usually makes for terrible metrics. You can use CQL to do this, but you'll have to modify each of the predefined CQL quality metrics.


NDepend is a difficult tool to work with at first. It can give you a wealth of useful information once you get the hang of it, but for a quick inspection it is less practical.

SourceMonitor
SourceMonitor is a simple free program to compute quality metrics on your code. Apart from C#, it can also be used for C, C++, Java, Visual Basic, VB.NET, Delphi and (strangely) HTML.


SourceMonitor produces a table view of some quality metrics of your code, organized per source file. In the table view, you can double-click on any source file to get more detailed information about this file. This produces, among others, a chart showing how many statements are at a particular “block depth”, the number of brace pairs surrounding it.

The program creates a checkpoint for each measurement, so you can easily track the (hopefully) downward slope of your program's complexity while you are refactoring.


A simple, yet useful tool. Very easy to use and understand.

Vil
Everything that Vil does, according to its web site, is done better by NDepend. Also, Vil has no GUI yet and gives the impression of being abandoned. I won't bother.

NCover
NCover is a code coverage tool. Its main purpose is to determine how much of your code is covered by your unit tests. It does this by simply running the program or tests and seeing which lines are actually executed.


NCover is a simple command line tool without many bells and whistles. Simply tell it which program to run. It generates a large XML file with the output data (445 kB already in my relatively small program). The XML can be viewed with an accompanying XSLT style sheet, which you have to copy over to the right directory yourself.

The resulting view gives you a percentage bar for each class, showing the amount of code executed in that class. Clicking the class name expands it, breaking it down into methods. Clicking a method name breaks it down further into its individual lines.

There are ways to run NCover periodically and monitor the coverage of your tests. I haven't tried this.


A simple tool, but more useful than I thought at first glance. It can give you a good indication which parts (especially, which if branches) your unit tests have missed. (Then again, this turns unit testing more into a white-box test when it was intended to be black-box.)

Simian
Simian identifies regions of code that are similar. It is a little Java-oriented, but also supports many other popular languages. Simian is free for non-commercial projects and for trial purposes.


There is no GUI or Visual Studio plugin, you'll have to work from the command line. This hugely diminishes the ease of use, especially because the Windows command prompt is so clunky. Simian produces a list with entries like the following:

Found 11 duplicate lines in the following files:
  Between lines 30 and 63 in maths\MatrixAlgebra.cs
  Between lines 28 and 61 in maths\Vector.cs

Correct: MatrixAlgebra.cs was split up into Matrix.cs and Vector.cs, and should be removed entirely.


A duplicate code finder sounds very useful, but its use turned out to be limited: it found nothing useful on my project, though results may vary for other coders. In any case, the lack of IDE integration for .NET analysis makes using this tool more effort than it's worth.


If you care about the details, don't look any further than FxCop. It's very comprehensive and easy to use. Code Analyzer may complement FxCop nicely, if you can get it to run.

For a more general view on things, NDepend can be very useful, if you're willing to invest some hours to get acquainted with it. For a quick overview, SourceMonitor can be a better alternative.

If there's any free program that I've overlooked, please let me know so I can include it!

Monday, April 23, 2007

The importance of teeth brushing

I am not a dentist. I'm not going to tell you how brushing your teeth is good for you because it removes plaque and is healthy for your gums. For me, teeth brushing has another, entirely different use.

My morning ritual looks more or less like this: get up, go to the bathroom, have a shower, get dressed, eat breakfast, brush teeth. After that, I'd head off to university; currently, I'm mostly working and studying from home. Somewhere in between, there's often a lot of e-mail reading, blog reading, forum reading, etcetera. Being a blog reader yourself, you know how time-consuming these things can be if you don't put a stop to them. Sometimes half of the morning has already passed before I close the web browser and start working.

It turns out that these are also the times that I neglect to brush my teeth. Teeth brushing signifies the end of my morning ritual, and thereby, the start of work time. The fresh taste in my mouth is the physical reminder of that. When I notice that I'm wasting time on reading blogs, I just have to get up and brush my teeth, and the sense of urgency to start working increases significantly, often up to the point that I start right away.

This conditioning may have been with me from the time I started to go to school. Even back then, brushing my teeth was one of the last things I did before I left. No wonder that the association is ingrained so deeply. A hack though it may be, it's very useful and I must take care to keep it.

Brushing my teeth is also the last thing I do before I go to sleep. It would be interesting to see whether it also makes me sleep better. Perhaps I can combine this experiment with some future power napping experiments.

Sunday, April 15, 2007

Panasonic NV-GS320 review

Last Saturday I bought the NV-GS320 digital video camera from Panasonic. Since this camera is pretty new, I couldn't find many decent reviews on the web, so I decided to write one of my own. (The NV-GS320 seems to be the same camera as the PV-GS320 but with some different names for the features. The spelling of “colour” on the Panasonic web page suggests that the NV was made for the European market.)

This camera sells for 500–600 euros, which places it in the medium- to high-end consumer range. I bought mine at Media Markt for € 578 (prices as of April 2007).

The combination of 3CCD and MiniDV makes this camera almost unique in its price range. Most other 3CCD cameras start around € 1000. How did Panasonic do this? Probably a tape deck is cheaper than a hard disk or a dvd writer. But what else did they leave out? Let's find out.

Contents of the package

Apart from the camera itself and a battery, the package includes a remote control which, for a nice change, includes the required button cell battery. There is a manual (Dutch in my case, no English version included) which is comprehensive and relatively decent, though not excellent. Also included in the package are a USB connector cable (large to small mini A plug), an adaptor and the necessary cables, and an A/V cable to output to S-Video and three phono connectors. A MiniDV tape and a FireWire cable are not included.

Picture quality

The NV-GS320 is one of the few cameras in its price range sporting three CCD sensors (3CCD). This is supposed to give a clearer picture with more vibrant colours. The sensor allows for hardware widescreen (16:9) ratio, without losing quality compared to standard 4:3. The camera uses a Leica Dicomar lens with a maximum zoom factor of 10×.

My first impressions of the picture quality were excellent. The images are very sharp and colourful in daylight:
(Click to enlarge.)
In the enlarged version it looks a bit pixelated, but this is a result of the deinterlacing. Of course, I cannot compare the image quality to that of other cameras, but in the absolute sense these pictures are very good.

The camera can either be set to automatic or manual mode. When switching to manual, the current settings of the automatic mode appear to be retained, which is very handy.

The automatic white balancing can take a few seconds to kick in, but usually finds the right balance. The same holds for the aperture and shutter speed – sudden changes in lighting are not picked up immediately. Whether that is good or bad depends on the situation. Autofocus works just fine and I haven't noticed any unexpected hiccups. Filming through a dirty window, however, is not recommended.

In manual mode, you can configure the aperture, gain (only when the aperture is fully open), shutter speed, and white balance. I have not used the manual mode much, as automatic seemed to work just fine in all conditions.

There is an option called “backlight compensation” which brightens the input at the cost of saturating a light background. This works fairly well and can be very handy when shooting, e.g., a portrait against a bright sky.

Panasonic's O.I.S. (optical image stabilizer), done in hardware by wiggling the lens, promises excellent correction for shaking. Many other cameras do this in software, slightly degrading picture quality along the way. My finding is that the image stabilizer manages very well to correct for small vibrations; if you hold the camera properly, it is possible to compose a fairly stable shot at the full 10× zoom. Larger shaking is not compensated for, but even these motions seem a bit smoother than usual.

Low-light performance

One of the most important factors in a camera is how it performs under bad lighting conditions, like lamp light, candlelight or worse.

Performance under indoor lamp light seems alright:

The picture does tend to get a bit blurry when moving, so shooting from a tripod whenever possible is recommended. However, as you can see, the level of noise is very acceptable. The above picture was taken with the maximum aperture and gain settings (18 dB); apparently the black areas were too difficult even then.

Additionally, there is a feature called “Colour night view” for shooting really dark scenes. The catch is that the framerate drops; I've observed factors between 4 (which may be acceptable sometimes) and 18 (which isn't). The other catch is that anything that moves becomes a big blur. The third catch is that light areas bleed a lot into dark areas.

If you can live with all of that, the night shot is pretty impressive from what I've seen. Here's a shot in the dark, lit only by a TFT monitor:

Admittedly, I tried to hold the camera very still while taking this shot.

Of course, TFT light is a little extreme, so I took the camera out to film by street light:
Left: without colour night view — Right: with colour night view
This seems to be one of the few situations where the white balancing screws up, resulting in a very reddish picture. I could not correct this by setting it to lamp light manually – street light is a different beast altogether. Manual white balancing would probably have fixed it, but I forgot to bring something white along. Also, you can clearly see the light bleeding into the dark areas.


As with many digital video cameras, the Panasonic NV-GS320 is capable of taking still photographs. The maximum resolution is 2048×1512.

Unfortunately, this resolution is quite pointless. Even in bright light, when the aperture can be nearly closed, the photos taken are not very sharp:

When looking at them up close, it even seems that software sharpening has taken place, judging from the halos:
I suspect the picture is taken at a lower resolution and then scaled up in software.


One of the biggest weaknesses of this camera is the lack of an input for an external microphone, as well as a headphone output. If you don't like the sound of the internal stereo microphone, you're out of luck.

That being said, the internal mic is quite decent. Any noise, from the tape motor or otherwise, got drowned out by the environment noise in places where I filmed. Handling of buttons (especially zoom) goes nearly unnoticed as well.

There is a setting to “zoom” the microphone. This, however, means applying gain to the signal, not altering the area over which sound is picked up. The microphone also picks up a lot of sound from the environment, which can be a good or a bad thing depending on circumstance.

The camera includes a wind noise filter. It's hard for me to judge how well this works; when shooting straight against the wind, noise is certainly there, but this may be normal. During 15 minutes of shooting outside on a medium-windy day, wind noise occurred only a few times, so it's not too bad overall.

Recording medium

This camera is one of the few that still record to MiniDV tapes; most cameras nowadays record to either a hard disk or some mini-dvd format. MiniDV still has some advantages: its compression factor is lower, supposedly resulting in better image quality, and the tapes are quite cheap and widely available. Its prime disadvantage is the linearity of a tape, and the limited capacity (just over one hour). But tapes can be swapped, while hard disks cannot.

Still photographs are recorded to an SD card up to 2 GB or an SDHC card up to 4 GB.

LCD and viewfinder

The NV-GS320, unlike many of its colleagues, still has a viewfinder. Many other cameras nowadays rely on the LCD display alone. Not only does this drain your battery, but the picture can also be hard to see in bright light.

However, LCD technology has come a long way, and even in broad daylight with the sun right behind me, I could still see the picture on the LCD quite well. But unless you look at it from just the right angle, the LCD tends to show clipped whites, suggesting over-exposure that isn't there. Looking straight at the screen, the problem disappears, but it's something to keep in mind, especially when fine-tuning in manual mode.


The controls of the camera take some getting used to, because nearly everything is controlled by a little 4-way joystick which also functions as a push button. Once you get the hang of it, it's really quite easy and intuitive. The joystick controls an on-screen pie menu with options relating to the current mode (filming, playback etc.). The joystick is also used in the configuration menu.

The pie menu contains a tiny help feature, explaining the meaning of the little icons. This is convenient, because the text labels of the options cannot be seen before you activate or deactivate them. On the other hand, toggling an option to find out what it does is faster than calling up the help menu.

My overall impression of the menu structure is that it's okay, though not perfect. But the menu is not deep, and you'll quickly learn where to find every feature.

Manual focus has to be done with the joystick, which is not half as convenient as having a proper focus ring, and on a small LCD it's hard to see whether you have focused properly. The LCD does not zoom in to assist you, nor does it show to what distance the focus is currently set.

Another annoyance is that you cannot hold down the button to increment or decrement values in manual mode; you have to keep wiggling the joystick to make large adjustments.

Some of the buttons cannot be reached when filming with one hand, most notably the menu button and the auto/manual switch. But you won't use these buttons while recording anyway.

The camera comes with a remote control, which duplicates most of the buttons on the camera, allowing for nearly complete control. There are also dedicated buttons for playback mode. One feature that is only accessible through the remote is “audio dub”, allowing you to create a voiceover right there on the camera. If you recorded the audio in 12 bits instead of 16, you'll be able to record the voiceover on a separate track without losing the original audio of the filmed material.

Battery life and power supply

According to the manual, the packaged battery lasts 30 minutes of active use and requires 1 hour and 40 minutes to recharge. Batteries with an effective lifetime of up to 1 hour and 45 minutes can be bought separately. However, I found that 30 minutes is not as bad as it sounds: I've been out filming for over an hour, shooting about 15 minutes of film in that time, and the battery was still nearly full.

The accompanying adaptor can be used to power the camera directly, or to charge the battery while it's not in the camera, but not both at the same time. Unfortunately the FireWire and USB connections on the camera are located below the battery, so you'll have to switch to the adaptor while capturing to the computer, which means you can't recharge the battery at the same time. This could be problematic in some situations.

Capturing and editing

The camera can be connected to a computer using either USB 2.0 (cable included) or FireWire (cable not included). Adobe Premiere fans will prefer the FireWire, because Premiere Pro 2.0 is not really suitable for USB capturing. The accompanying software does a better job at USB capturing, because device control works. However, to use scene detection (placing each captured clip into its own file), the software rewinds the tape a bit at every splitting point. I can't imagine that this is good for the tape or the tape mechanism, and it's also completely unnecessary, because Premiere has no problem capturing and splitting it all in one go.

Two editing programs are supplied: SweetMovieLife for basic editing and MotionDV STUDIO for (slightly) more advanced work. Because Premiere is my preferred piece of editing software, I only used MotionDV STUDIO for the USB capturing. First (and last) impression: I've seen worse.


If image quality is your primary concern, this is the camera for you. In low light, too, it remains very usable. The picture stabilizer works pretty well. But don't buy this camera to take still photographs.

Audio quality is decent, but the mic picks up sound from all around the camera. The lack of a microphone input is a severe shortcoming.

The rest of the feature set is excellent, and the backlight compensation and night shot are nice additions. Remember to buy a FireWire cable if you intend to use anything but the accompanying software.

On the usability front, this camera is decent, but not excellent. If you're afraid of buttons and menus I'd recommend looking elsewhere, but anyone with a little bit of technical experience will have no problem controlling this camera.

Personally, I'll be returning this beast because I know I'll want to plug in an external microphone at some point. But if it weren't for that … I'd definitely go for it.

Saturday, March 17, 2007

Getting better response to your e-mails

People don't read. This is especially true for e-mail. People often check their mail in between other activities and will not take the time to properly read what you've written — especially if you want something from them. So here are some little psychological tricks to make your e-mails more effective. (This applies to some other things, like forum posts, as well.) I'm not sure how much of this I figured out myself, but in my experience these techniques work.

First, and this may seem very obvious, be very clear on what you want. If I receive an e-mail that is vague about the precise answer or action expected from me, I'll happily ignore it — less work for me! Here's a real-life example:

I used to be able to access your website from my mobile phone. Now this no longer works. Have you changed anything?

My response:

Not that I know of, and I should know if anything had changed.

Does that make me a bastard for not helping him? Doesn't matter, the point is that you don't want to be the person on the other side of this conversation. He could have written instead:

I used to be able to access your website from my mobile phone. Now this no longer works. Do you have any idea what could cause this problem?

This would force me to think about this problem and come up with some possible causes and solutions. (Yes, I could also have answered that with a simple “yes” or “no”, but I'm not that big a bastard.)

You may know that the first and last sentences of a paragraph are the most important. In my experience, it mostly comes down to the last one, and the last sentence in your e-mail in particular. This is the sentence that keeps on ringing in people's heads after they read (or even skimmed!) your mail. So make sure the last sentence counts.

If you need a piece of information from someone, or you need someone to do something for you, start your e-mail in whatever way you wish. You'll probably want to explain what you need and why. People will refer back to this when answering your mail. But if you don't want your e-mail to end up marked as ‘read’ and forgotten, always finish it with a concrete question. For example, would you feel more inclined to respond to this:

You showed part of a movie on YouTube in your presentation, but I cannot find it. But your presentation was great! I really loved the very visual way in which you presented this matter.

or to this:

Your presentation was great! I really loved the very visual way in which you presented this matter. You also showed part of a YouTube movie, but I cannot find it. Could you please send me the URL?

If your mail is a reply in a thread where you've asked the question before, it can be a good idea to restate the question.

If you want to ask multiple things, you cannot put them all in the last sentence. Instead, put them into a list, either bulleted or (preferably) numbered. This makes it very hard for someone to (consciously or unconsciously) ignore one of the items. Not so long ago, I wrote something like:

You could send me a file containing a test case, saved from the program. […] I would also like to know from Ben what can be improved on the export function. […]

I never got a reply. This could better be formulated as:

  1. Could you please save a file from the program and send it to me?
  2. Could you please ask Ben what he thinks can be improved on the export function?

Observing these simple rules in the e-mails you send can make your life just a little bit easier.

Saturday, March 10, 2007

The Coffee Paradox

Apparently, the phrase “coffee paradox” is used in economics to mean something like coffee trade making rich, coffee-consuming countries richer while poor, coffee-producing countries get poorer. I'm not sure I got that exactly right, and it doesn't matter anyway, because it's not what I mean when I use the phrase.

The more down-to-earth variety of Coffee Paradox is this: if you get up in the morning, but are only physically awake and need coffee to get your mind going, how is coffee ever going to happen? And yet it happens. Somehow we manage to complete a huge number of mind-bogglingly difficult steps, and usually in the right order too: filling the coffee pot with water (“How much water again? There are no lines on this damn thing.”), pouring the water into the water tank (and not next to it), putting the pot back in its place (making sure it is in just the right place), putting in a filter (possibly needing to throw out the old one), putting the coffee into the filter (how many spoons again?), closing all necessary lids and doors, and hitting the switch. It really is paradoxical that we can accomplish this when we've only been awake for a minute or so. (Americans would probably go to a Starbucks. By car. Which leads to the so-called Driving to Starbucks Paradox.)

For a long time, I thought I was the inventor of the phrase Coffee Paradox. I was pleased to see it appear on a web forum, written by someone I know in real life. It's really cool when your inventions go viral.

But today, when I mentioned this among some friends, someone else also claimed to be the inventor of the Coffee Paradox. And someone else else said that someone else else else (hm, perhaps I should've named them) had been using the phrase for many years, before I had ever made a pot of coffee myself.

So it's possible that I heard about the Coffee Paradox from someone else else else, forgot about it, and came up with it later thinking it was my own invention.

In science, there's also the problem of not knowing where something came from. They have a solution for it. If you state something, clearly indicate whether it is your own, and if not, give a reference. We should genetically enhance our brains to do the same. That would get rid of this nasty Coffee Paradox Paradox.

Wednesday, March 7, 2007

Workrave: more than an RSI prevention tool

Workrave is a little program (for Linux as well as Windows) that aims to prevent RSI. With its default settings, it encourages you to take a 30-second “micro-break” every 3 minutes and a 10-minute “rest break” every 45 minutes. It also implements a “daily limit” of 4 hours. The breaks themselves do not count for these times, nor does time that you're not using the computer — in fact, if you do not touch the mouse and keyboard for long enough this will count as a break and reset the timer.

I've been using Workrave ever since I felt a strange kind of pain in my wrists and elbows. As a computer science student whose main hobbies also involve computers, I cannot afford problems with RSI. Not that I really feel scared of it, but I know that I should. Not being able to use a computer would drastically change my life. Oh wait, I don't have one. Never mind.

The following experiences come mainly from days of programming, i.e. writing actual source code. Note that this is very different from, for example, the work of a typist, who hammers away at the keyboard all the time, or a Photoshop artist, who uses the mouse a lot but also switches to the keyboard for shortcuts.

The first thing I noticed was that a daily limit of 4 hours is plenty for a full workday of coding. This may sound odd, but apparently I spend at least another 4 hours just looking at the code, or just thinking without looking at the screen at all. This goes to show that coding is a difficult business indeed.

The second thing I noticed came as a surprise. You might think that a tool like Workrave is bad for your productivity. I can say from experience that the opposite is true. Because a break is always just around the corner, you do your best to do as much as possible in the little time you have left. It is easier not to get distracted because you know you can allow yourself to be distracted in just a few minutes.

Thirdly, a break brings about a change in perspective. During the breaks, especially the micro-breaks, my mind shifts from the gory one-line-at-a-time perspective to a higher level of the work: the entire function, class, namespace or program architecture. It is very refreshing to look at your work in this way. I see problems that I would otherwise have noticed only later, when they would need fixing instead of preventing. Or I would not have noticed them at all. I'm quite sure that using Workrave improved the quality of my code.

Fourth, what is a 10-minute rest break good for? It's one of those little time slots that you can fill with one of those infinitely many little things that need doing. Make a phone call, clean out your wastebasket, tidy your desk... there are always these little things to do, and rest breaks encourage you to do them. But just as often I end up pouring lots and lots of tea into me, which is not bad either.

I heard from several people that they have similar experiences with the program. So, whether or not you ever had any RSI symptoms: if you do any kind of work at the computer that involves thinking, you really should give Workrave a try.

I just have time to read over this post once more before my rest break. Then I'll have a nice cup of tea.

Tuesday, March 6, 2007

Redefining a movie's "realism"

Many people have an aversion to science fiction “because it's not realistic”. For them, a movie (I'll stick to movies and TV series here, but this applies to any story) has to be consistent with the real world. If there are any space ships, lightsabers, monsters or elves, the movie “does not make sense” and they quit. Let's call this kind of consistency external consistency.

Internal consistency is then consistency with the universe of the story. It's okay that Jedi have sword-like things while the rest of the universe uses guns, because a lightsaber is clearly more useful for a Jedi. It is not okay if a movie pretends to be set in the here and now but, say, planes keep landing at an airport during a bomb threat.

Whether or not something is internally consistent also depends on how seriously the story takes itself. In Star Trek, for example, the transporter needs to break down, be jammed, be out of power or be stolen by the Ferengi every other episode, or the crew could just have beamed out of every dangerous situation. If the transporter were still online, this would violate internal consistency. Doctor Who, on the other hand, often ridicules itself and clearly takes itself a lot less seriously. As a result, pretty absurd things can happen without anyone caring: the story is itself a bit of a joke.

I don't care one bit whether a story is externally consistent. I like (some) science fiction and fantasy as much as anything else. But when a story is internally inconsistent, I start to dislike it pretty quickly.

This happened a few weeks ago when I saw Spielberg's “Close Encounters of the Third Kind”. Although it's not realistic that aliens would come to visit the Earth, I don't mind that. It's perfectly consistent with a story of, well, aliens coming to visit the Earth. The first thing that disappointed me, though, was UFOs being chased by police cars. What would you do if you were a UFO pilot, able to move freely in all three dimensions, and you were being chased by a police car on a road, able to move in only one dimension? Saying that aliens are stupid doesn't cut it, because they clearly have the ability to build space ships. Also, if you're an alien race trying to make contact, how can you have learned how latitude and longitude work but not be able to say “Hello World”? And if you're a benevolent race, would you really kidnap people, shift them through time and throw them out nearly half a century later? None of this makes sense internally, and it made the movie quite disappointing for me. (Apart from all the strange things that are left entirely unexplained, but seem to be there only for the sake of overall strangeness.)

Of course this is not the only factor by which I judge movies. I don't mind a little bit of internal inconsistency here and there. I really did enjoy Casino Royale.

Monday, March 5, 2007

Why Vista file tagging has to suck

Windows Vista has the ability to add tags (labels) to files, much like Gmail and nearly all photo management programs. Gina Trapani on Lifehacker has written a nice article on how to use this feature.

However, you cannot tag each and every file: the file type has to support metadata. So you can tag Office documents, JPEG files and MP3s, but forget about tagging, say, txt files, TeX files or source code, or even file types that do support metadata but are not recognized by Vista (though I suppose extra types could be added through plugins).

This is a severe limitation, but I understand the decision, and I agree with Microsoft that it is the best way to do it. There are basically five options for implementing file tagging:

  1. Store metadata in the file system. WinFS would have supported this, but this new filesystem (oh, sorry, “future storage”) was eventually dropped from Longhorn. NTFS, in fact, also supports tagging, and Windows XP already has an interface for it, though it seems few people know this.
    However, you would only be able to tag files residing on certain filesystems, which would trash the tags if the file is copied to a USB drive (FAT32), a cd or dvd, uploaded, e-mailed, zipped, stored in Subversion, placed on a Novell or Samba network share, backed up, etcetera, etcetera. Vista could issue a warning if a copy or move action would destroy the metadata, but these warnings would be annoying, and also confusing to users who have never used the tagging to begin with.
  2. Store metadata in a database per filesystem. You would need one database per volume, let's say c:\metadata.dat. This would of course be a hidden and system file, so it would not get in the way.
    This approach has the advantage of working on every filesystem, including USB drives and network shares, but still wouldn't work if you zip or e-mail a file or burn it to a DVD. As long as the web, e-mail, CDs and DVDs don't support external metadata, we cannot ever expect this to be possible. With “we” I mean “we software developers”: the average user will expect his metadata to be retained!
    Also, this option has some implementation issues that need a lot of thought: the central database will get big, so you'll probably want some sort of caching, but in these mobile and Plug'n'Play days the OS cannot rely on a filesystem being available all the time. They could probably work something out, but it wouldn't be perfect in all situations.
    Moreover, what if the file is moved, changed or removed by a system that does not understand the metadata file? The database would get out of sync with the actual contents of the files and the filesystem, and things would basically become a mess.
  3. Store metadata in a database per directory. Like the desktop.ini files used to store folder settings, a metadata.dat file could be added to every folder that contains tagged files.
    This approach has the same advantages and drawbacks of the previous one.
  4. Store metadata in a file per file. When you save a file named index.html from Internet Explorer, it creates a directory index.html_files containing all dependencies of the HTML file (images, style sheets, Javascripts etc.). Windows treats the HTML file and its associated directory as a unit, based on their filenames. Something similar could be done for metadata: for every file.ext, add a hidden and system file named file.ext_metadata that is always copied and moved along with the file.
    As long as you use Vista's Explorer to copy and move files along, this will be fine. But again, even when working only under Vista, some programs will still drop metadata without notice: think of cd burning tools, backup tools or compression tools. All these applications would need to be updated for Vista, which will take time.
  5. Store metadata in the files themselves. This is the most localized approach, and it is the one that Microsoft decided to use.
    Its one big advantage over the previous methods is that metadata will never, ever get lost in a file transfer. If you tag a file, it will remain tagged for the rest of its life, no matter if you zip it, export it to punch cards, or send it to Jupiter and back. But... only some types of files can be tagged.
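To make option 4 a bit more concrete, here's a minimal Python sketch of the sidecar idea (all names are hypothetical, and real-world Vista does not work this way): tags live in a hidden companion file next to the tagged file, and every tag-aware tool has to remember to carry the sidecar along on each copy or move. Any program that forgets this step silently drops the tags, which is exactly the weakness described above.

```python
import shutil
from pathlib import Path

# Hypothetical naming convention, mirroring "file.ext_metadata" from the text.
METADATA_SUFFIX = "_metadata"

def write_tags(path, tags):
    """Store tags, one per line, in a sidecar file next to the target file."""
    Path(str(path) + METADATA_SUFFIX).write_text("\n".join(tags), encoding="utf-8")

def read_tags(path):
    """Read tags back from the sidecar file; no sidecar means no tags."""
    sidecar = Path(str(path) + METADATA_SUFFIX)
    if not sidecar.exists():
        return []
    return sidecar.read_text(encoding="utf-8").splitlines()

def copy_with_tags(src, dst):
    """Copy a file together with its sidecar, if any -- the extra step that
    every tag-aware copy/move operation would have to perform."""
    shutil.copy2(src, dst)
    sidecar = Path(str(src) + METADATA_SUFFIX)
    if sidecar.exists():
        shutil.copy2(sidecar, str(dst) + METADATA_SUFFIX)
```

A plain `shutil.copy2(src, dst)` here would produce an untagged copy without any warning, which illustrates why all existing tools would need updating under this scheme.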

All solutions above will confuse users at a certain point. People will inevitably start to rely on tagging features, so we shouldn't treat tags as an ‘extra’ which can be discarded lightly. The first four options will allow you to tag anything (including directories, incidentally), or at least anything on certain filesystems. But using this approach, the tags may get lost in mysterious ways that the average user won't understand. Tags getting lost will lead to a lot of very unhappy people. And I haven't even started on the lock-in that results if the tag database format is not open.

On the other hand, some users (myself included) will not be very happy if they are only allowed to tag certain files but not others. Many people will not understand the technical reasons for this. But if we consider that the “average user” uses her computer for office work, digital photography or a music collection, we see that these kinds of files are all taggable, and she may never even notice this limitation.

In short, I'd rather be safe but a bit limited, than randomly and unexpectedly losing my tags. Software should treat the user's data as the most precious thing on Earth.