Recently at Scalathon I was made aware of this article, detailing a data structure for high-throughput data processing. I could not help thinking that the ideas expressed in that article sounded very familiar: while doing my PhD research at the COMPASS experiment at CERN, I implemented an online event filter in 2004 which utilized the same data structure for the central stream buffer. I briefly described it in my PhD thesis (page 38ff) and in an IEEE conference record for NSS/MIC 2007. We named the application “Cinderella” after the fairy tale, as its purpose was to sort the bad (i.e. uninteresting) events out of the stream of good ones before it hit the disk buffers.
Recently, I came across the need to generate test data for a protocol converter: a pair of functions converting one set of classes into another and back. To give you a bit more background, each set of classes represents the set of messages which can be exchanged between a service and a network client: one set which is used by the service internally, and the other set which is tied to the specific serialization protocol implementation (in this case Google protocol buffers). The protocol converter then boils down to some large pattern matches with quite boring code, hence the need for good and thorough verification.
In former times, I would have written down a smallish set of test data by hand, hoping to cover most cases. From that grew a technique where I explicitly convert some of these test instances into functions, leaving out one or two of the arguments and applying randomly generated data instead. This approach does not scale, so, inspired by the new Future.flow mechanism in the upcoming Akka 1.1 release (based on delimited continuations and monadic composition), I decided to try something new: monadic test data generators.
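The core idea can be sketched without any Scala machinery. Below is a hypothetical, minimal `Gen` type in Python whose `flat_map` plays the role of monadic bind, so that a later generator can depend on an earlier generated value and complex generators compose from simple ones:

```python
import random

class Gen:
    """A minimal monadic generator: wraps a function from a Random to a value."""
    def __init__(self, run):
        self.run = run

    def map(self, f):
        return Gen(lambda rnd: f(self.run(rnd)))

    def flat_map(self, f):
        # Monadic bind: the next generator may depend on the previous value.
        return Gen(lambda rnd: f(self.run(rnd)).run(rnd))

    def sample(self, seed=None):
        return self.run(random.Random(seed))

def choose(lo, hi):
    """Generate an integer uniformly between lo and hi (inclusive)."""
    return Gen(lambda rnd: rnd.randint(lo, hi))

def list_of(n, gen):
    """Generate a list of n values drawn from gen."""
    return Gen(lambda rnd: [gen.run(rnd) for _ in range(n)])

# Compose: first pick a length, then generate a list of that length --
# a dependency that plain map cannot express, but flat_map can.
list_gen = choose(1, 5).flat_map(lambda n: list_of(n, choose(0, 9)))
```

Sampling with a fixed seed keeps the generated data reproducible, which matters when a randomly found counterexample needs to be replayed.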
Fink has been very helpful over the years—I started using it when I bought my first MacBook in 2003—but I always dreaded making the upgrade to Snow Leopard, especially after having shunned the upgrade to Leopard before. It turns out that I was wrong: starting from scratch, it took a handful of keystrokes and 10 hours of compiling to get everything up and running again.
Inspired by Higher Order Perl, I once tackled the problem of merging several log-files into one common time-line. While the solution itself is useful, this article also discusses the iterator pattern in Perl.
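The heart of that merge can be sketched in a few lines. Here is a Python version (with invented sample data) in which `heapq.merge` plays the role of the hand-rolled Perl iterators: each source is consumed lazily, with only one pending entry per log held in memory at a time:

```python
import heapq

# Two pre-sorted "log files" (hypothetical data); each entry is (timestamp, line).
web_log = [(1, "GET /"), (4, "GET /about"), (9, "GET /404")]
db_log  = [(2, "connect"), (4, "query"), (7, "disconnect")]

# heapq.merge interleaves already-sorted iterables into one sorted stream,
# pulling from each source only as needed -- the iterator pattern in miniature.
merged = list(heapq.merge(web_log, db_log, key=lambda entry: entry[0]))
```

Because the inputs are iterables, the same code works unchanged on file handles yielding parsed lines, so arbitrarily large logs can be merged without loading them whole.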
Shell-level access to a Linux host enables a rich universe of possibilities, even without (initial) administrative privileges. If you find yourself having to grant access to certain applications or functions to someone you do not trust, whether on technical or other grounds, a custom login shell might be one possible solution.