[Shootout-list] Directions of various benchmarks

Einar Karttunen ekarttun@cs.helsinki.fi
Wed, 18 May 2005 17:11:35 +0300


skaller <skaller@users.sourceforge.net> writes:
> How the heck can you judge if a solution in C or Ocaml
> or Haskell is 'the same way' as a Java program when one
> language is imperative with variables and the other
> is functional?

This should not be a hard problem if the "way" is described textually.
Of course it will still be prone to people finding new interpretations
of the specification.

> This isn't a rhetorical question or philosophising: already
> Isaac had the audacity to ban the Felix solution to the
> EH test because it was orders of magnitude faster than every
> other solution, with the excuse that "by my (skaller's) own 
> admission it wasn't using the same mechanism as C++/Java" 
> -- but failed to similarly ban the C solution and 
> the Scheme solution because they
> both also use variants of the same technique -- all three
> languages use continuation passing, C with jmp_bufs,
> Scheme with Call/CC, and Felix with a non-local goto.

Wasn't the problem that Felix said "jump to handler X" while the other
languages were saying "jump to the topmost handler for this
exception"? (I don't remember exactly, so I may be wrong too.)

> Similarly 'concurrency' tests are in question, because
> a requirement to use POSIX threads is absurd: should we
> really measure OS thread handling limitations? 

The requirement is not POSIX threads but pre-emptive threads.
There is a semantic difference between those and coroutines.
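A small sketch of that difference in GHC Haskell: `forkIO` threads are
pre-empted by the runtime (at allocation checkpoints), so a busy worker
need not yield explicitly for the others to make progress, whereas with
purely cooperative coroutines a loop that never yields would block
everything else.

```haskell
import Control.Concurrent

-- A loop with no explicit yield point; the scheduler can still
-- pre-empt it wherever the code allocates.
busyWork :: Int -> IO ()
busyWork 0 = return ()
busyWork n = busyWork (n - 1)

main :: IO ()
main = do
  done1 <- newEmptyMVar
  done2 <- newEmptyMVar
  _ <- forkIO (busyWork 500000 >> putMVar done1 ())
  _ <- forkIO (busyWork 500000 >> putMVar done2 ())
  takeMVar done1
  takeMVar done2
  putStrLn "both workers finished"
```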

> OTOH if you allow user space context switching, as supported by 
> several systems (including Felix and I think Haskell?), and also
> supported by Ocaml bytecode interpreter and Python .. then there's
> no way to know if you will really get concurrency .. and in the
> test cases there was none .. the Felix optimiser just eliminated
> the overhead, again outperforming other solutions because even
> the dumb optimiser I have implemented is smarter than the testers :)
> <no offense intended>

All the other languages could switch to coroutines and get a similar
performance boost - but then we would be measuring something
different.

> Example. Suppose you want to say 'sort this file using
> a bubble sort'. Do it 'the same way' at the algorithmic level.

This is fine - we all know (or should know) how to do this.

> Well you CANNOT sanely require that. So, what you require instead
> is 'sort this array with less than 5 lines of code without allocating
> more than a constant amount of memory'.

You cannot easily get a sorting algorithm like that in Haskell: constant
memory requires mutable arrays, and so the LOC will climb.
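For illustration, here is a sketch of an in-place bubble sort on a mutable
unboxed array in the ST monad. It uses constant extra space, but it is
noticeably longer than the list-based one-liners below:

```haskell
import Control.Monad (forM_, when)
import Control.Monad.ST (ST, runST)
import Data.Array.ST (STUArray, newListArray, readArray,
                      writeArray, getElems)

-- In-place bubble sort: constant extra space, but many more
-- lines than a list-comprehension sort.
bubbleSort :: [Int] -> [Int]
bubbleSort xs = runST $ do
  let n = length xs
  arr <- newListArray (0, n - 1) xs :: ST s (STUArray s Int Int)
  forM_ [n - 1, n - 2 .. 1] $ \i ->
    forM_ [0 .. i - 1] $ \j -> do
      a <- readArray arr j
      b <- readArray arr (j + 1)
      when (a > b) $ do
        writeArray arr j b
        writeArray arr (j + 1) a
  getElems arr
```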

> And you can add that 'the intent is to use a bubble sort'. 
> (Maybe it's 6, but you can't implement a quicksort in such a small LOC :)

Actually the classic (inefficient) Haskell quicksort is 5 lines:

qsort [] = []
qsort (x:xs) = qsort small ++ mid ++ qsort large
   where small = [y | y <- xs, y < x]
         mid   = [y | y <- xs, y == x] ++ [x]
         large = [y | y <- xs, y > x]


> So anywhere you really want to specify a particular algorithm,
> it can almost always be done by specifying limits on the space
> and time complexity order plus perhaps a constraint on the LOC of code.
> There is a way to get what we want *without* gratuitous and arbitrary
> limitations.

Limitations like that can result in unexpected problems.

- Einar Karttunen