Recently, I came across the need to generate test data for a protocol converter: a pair of functions converting one set of classes into another and back. For background, each set of classes represents the messages that can be exchanged between a service and a network client: one set is used by the service internally, while the other is tied to the specific serialization protocol implementation (in this case Google Protocol Buffers). The protocol converter thus boils down to some large, quite boring pattern matches, hence the need for good and thorough verification.
In former times, I would have written down a smallish set of test data by hand, hoping to cover most cases. From that evolved a technique where I explicitly converted some of these test instances into functions, leaving out one or two of the arguments and applying randomly generated data to them. This approach does not scale, so, inspired by the new Future.flow mechanism in the upcoming Akka 1.1 release (based on delimited continuations and monadic composition), I decided to try something new: monadic test data generators.
The principle is quite simple: provide generators for simple types and support composition for building up complex generators. The latter is naturally supported by Scala's for-comprehensions and based upon implementing map and flatMap. The generator skeleton looks surprisingly simple:
def map[TT](ff: T => TT) = Generator(() => ff(f.apply))
def flatMap[TT](ff: T => Generator[TT]) = Generator(() => ff(f.apply).apply)
Luckily (though I don't really believe in coincidence), the generator can be covariant in its generated type. The parameterless apply is the single-element generator, while apply(Int) generates a sequence of elements. A generator for one type can be turned into a generator for another by passing a conversion function to map; flatMap does the same for functions that themselves return generators, and is what makes for-comprehensions work.
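Filling in the pieces around map and flatMap, a minimal self-contained sketch of the class might look like this. Only map and flatMap are quoted above; the class wrapper, the constructor field f, and the two apply methods are my reconstruction from the description:

```scala
// Reconstruction: only map and flatMap appear in the post; the rest
// of the class is inferred from the surrounding description.
case class Generator[+T](f: () => T) {
  def apply: T = f()                            // single-element generator
  def apply(n: Int): Seq[T] = Seq.fill(n)(f())  // sequence of n elements
  def map[TT](ff: T => TT): Generator[TT] =
    Generator(() => ff(f()))
  def flatMap[TT](ff: T => Generator[TT]): Generator[TT] =
    Generator(() => ff(f()).apply)
}
```

Covariance (+T) is legal here because T only ever appears in result positions.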
The next ingredient is a seed of simple generators:
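The seed snippet itself is not reproduced in this copy of the post; reconstructed from the description and the examples below (the generator names, the fields of A, and the compact Generator class are my assumptions), it plausibly looked something like:

```scala
import scala.util.Random

// Compact Generator, repeated here so the snippet compiles standalone
// (reconstructed from the map/flatMap skeleton in the post).
case class Generator[+T](f: () => T) {
  def apply: T = f()
  def apply(n: Int): Seq[T] = Seq.fill(n)(f())
  def map[TT](ff: T => TT) = Generator(() => ff(f()))
  def flatMap[TT](ff: T => Generator[TT]) = Generator(() => ff(f()).apply)
}

// Hypothetical message class; field types inferred from the examples below.
case class A(flag: Boolean, n: Int, id: Long)

// Seed generators for simple types (names assumed).
val booleans = Generator(() => Random.nextBoolean())
val ints     = Generator(() => Random.nextInt())
val longs    = Generator(() => Random.nextLong())

// Compose them into a Generator[A] ...
val as: Generator[A] =
  for (flag <- booleans; n <- ints; id <- longs) yield A(flag, n, id)

// ... and draw twelve sample values.
val twelveAs: Seq[A] = as(12)
```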
The for-comprehension binds the three arguments of A's constructor to the three seed generators, thereby creating a new generator. As an example, the result is then applied 12 times to produce a sequence of twelve possible values of type A. The monadic structure allows this pattern to be nested without limit to construct arbitrarily complex generators. Compare this to my previous naive approach:
val someAs = 0 to 10 map (x => A(true, Random.nextInt, 1234L))
val someMore = 0 to 10 map (x => A(false, 42, Random.nextLong))
I am new to functional programming, and I have the following question: why should the monadic approach be preferred to:
val someAs = 0 to 10 map (x => A(Random.nextBoolean, Random.nextInt, Random.nextLong))
This one is much simpler to code/understand/maintain.
It is true that in the end my code executes the same operations, and I started out exactly as you propose. But then I wanted to combine generators for A's, B's, and some primitive types into a generator for C's, which produced more visual clutter than I liked. That made me remember that composition is the one thing monads are supposed to be good at, so I tried it.
In my specific use case this approach actually did save some typing and the result looks less confusing.
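To make the clutter comparison concrete: assuming hypothetical classes B and C (neither is shown in the thread) and seed generators like those in the post, the two styles side by side might look like this sketch:

```scala
import scala.util.Random

// Compact Generator (as sketched in the post; repeated so this compiles alone).
case class Generator[+T](f: () => T) {
  def apply: T = f()
  def apply(n: Int): Seq[T] = Seq.fill(n)(f())
  def map[TT](ff: T => TT) = Generator(() => ff(f()))
  def flatMap[TT](ff: T => Generator[TT]) = Generator(() => ff(f()).apply)
}

// Hypothetical message classes; B and C do not appear in the thread.
case class A(flag: Boolean, n: Int, id: Long)
case class B(name: String, n: Int)
case class C(a: A, b: B, tag: Int)

val booleans = Generator(() => Random.nextBoolean())
val ints     = Generator(() => Random.nextInt())
val longs    = Generator(() => Random.nextLong())
val strings  = Generator(() => Random.alphanumeric.take(8).mkString)

val as = for (f <- booleans; n <- ints; id <- longs) yield A(f, n, id)
val bs = for (s <- strings; n <- ints) yield B(s, n)

// Composition stays flat, whatever the nesting depth of C:
val cs = for (a <- as; b <- bs; t <- ints) yield C(a, b, t)

// versus inlining every Random call at each use site:
val someCs = 0 to 10 map { _ =>
  C(A(Random.nextBoolean(), Random.nextInt(), Random.nextLong()),
    B(Random.alphanumeric.take(8).mkString, Random.nextInt()),
    Random.nextInt())
}
```

The nested-constructor version has to repeat every Random call wherever a C is needed, while the generator for C reuses as and bs unchanged.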