[Shootout-list] Re: Coding For Speed
Shae Matijs Erisson
Thu, 24 Mar 2005 21:55:21 +0100
I think the essence of this email is something I've been wondering about:
> When engaging in the exercise of "benchmarking" inherent in that
> exercise (to me) is to identify coding techniques and paradigms
> within the idiom of the language being used to perform the
> stated tasks as fast as possible.
Similarly, my priorities are understandability and programmer time.
For a given task, I want to be able to turn out, in as little time as possible,
a working solution that readers can easily understand.
To me, source code is primarily communication with other programmers, most
often myself some years after I wrote the original code.
But there's a much greater value in code as communication.
It's hard for me to find library code that fits my clients' needs, partially
because I don't know their needs ahead of time (usually, neither do they).
When I find code that does part of what I need, if it's well-written, if it
communicates well, then I get the greatest value because I can quickly change
or extend that code to do the rest of the task.
> Thus, I disagree that benchmarks are about applying some
> arbitrary programming paradigm to a set of languages, just so
> the "look" of the programs are similar. Benchmarks should
> illustrate how a given language can be used to perform a
> given task (the benchmark) in that language, particularly
> using the best programming techniques and idioms unique to
> that language.
> Thus, to me, the primary essence of benchmarking is comparing
> how different languages can be uniquely applied to optimally
> perform the same tasks, and comparing those results.
> If there are other characteristics you are really trying to
> compare between languages, it would be best to explicitly state
> them (such as shortest program, etc) and not confuse issues
The problem I see is that it's easy to directly test LOC, CPU time, and RAM
usage, but it's really hard to test readability and good use of idioms.
At some level that's like unit testing poetry.
There are code complexity tests; you could use metrics like cyclomatic
complexity. But I don't know a satisfactory way to test code for elegance.
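As a sketch of what a complexity metric looks like in practice (a minimal
illustration, not a production tool): McCabe's cyclomatic complexity can be
approximated by counting branch points in a parse tree, M = decisions + 1.
Here is one way to do that for Python source using the standard `ast` module:

```python
import ast

# Node types that add a decision point (a rough approximation of McCabe's
# rules; a real tool would also weigh 'and'/'or' operands, ternaries, etc.)
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.BoolOp, ast.ExceptHandler)

def cyclomatic_complexity(source: str) -> int:
    """Approximate cyclomatic complexity: decision points + 1."""
    tree = ast.parse(source)
    decisions = sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))
    return decisions + 1

sample = """
def classify(n):
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    return "positive"
"""
# Two 'if' nodes (the elif desugars to a nested If), so M = 2 + 1 = 3.
print(cyclomatic_complexity(sample))
```

The point being: this kind of number is trivial to compute and compare across
entries, which is exactly why it gets measured, while "is this code a pleasure
to read?" is not.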
(On the subject of cross-language code complexity, Serguey Zefirov suggested …)
But I would most value a shootout that has a 'code as literature' section.
I don't know how that would work, and I've never heard of a multi-language
website that critiques benchmarks for elegance,
but I do know I would want to read from and contribute to such a website.
Is a 'code as literature' section a possible addition to the Shootout?
I'd like some option that encourages clarity and understandability.
Programming is the Magic Executable Fridge Poetry, | www.ScannedInAvian.com
It is machines made of thought, fueled by ideas. | -- Shae Matijs Erisson