[Shootout-list] Directions of various benchmarks
Bengt Kleberg
bengt.kleberg@ericsson.com
Tue, 31 May 2005 17:09:54 +0200
On 2005-05-25 14:28, John Skaller wrote:
...deleted
> In my opinion, on the benchmarking side (as opposed
> to the web site), it is quite easy to write any
> code to do anything. The HARD part is design (as usual).
how about the following file system layout?
<top>/src
     /bin
     /computers
     /limits
     /tests
            /wc
               /n
               /expected_result
               /description
               /setup
            /x
               /n
               /expected_result
               /description
               /setup
     /languages
            /c
              /translator
              /src
                   /wc.c
                   /x.c
            /awk
              /translator
              /header
              /src
                   /wc
                   /x
     /metrics
            /c-wc-<n>
            /c-x-<n>
            /awk-wc-<n>
            /awk-x-<n>
     /doc
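a layout like this could be bootstrapped with a short shell sketch. the directory and file names below come straight from the proposal; the choice of ./shootout as the top directory is just an example:

```shell
#!/bin/sh
# sketch: create the proposed benchmark tree (top directory name is arbitrary)
top=shootout

mkdir -p "$top/src" "$top/bin" "$top/computers" "$top/limits" "$top/doc"

# one directory per test, each with its per-n files
for t in wc x; do
    mkdir -p "$top/tests/$t"
    touch "$top/tests/$t/expected_result" \
          "$top/tests/$t/description" \
          "$top/tests/$t/setup"
done

# per-language translator/source areas; only awk needs a header
mkdir -p "$top/languages/c/src" "$top/languages/awk/src"
touch "$top/languages/c/translator" \
      "$top/languages/awk/translator" \
      "$top/languages/awk/header"

mkdir -p "$top/metrics"
```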
<top>/tests/x/expected_result could be several files, since the result
is different for each ''n''. some results are so big that it might be a
good idea to have each result in a separate file, even though most
results for many ''n'' would fit in a single file.
<top>/metrics/awk-x-<n> would be the time-stamped metric from the
''n'' test run. the last line of each of these files would be the
latest result.
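the append-and-read-back cycle could look like this. the metric file name follows the proposed <language>-<test>-<n> naming; the line format (ISO time stamp plus value) and the value 0.42 are assumptions for illustration:

```shell
#!/bin/sh
# sketch: append a time-stamped metric line and read back the latest
# (line format and the measured value are assumed, not from the proposal)
metric=metrics/awk-x-1000
mkdir -p metrics

# append one run: UTC time stamp followed by the measured value
printf '%s %s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "0.42" >> "$metric"

# the latest result is simply the last line
tail -n 1 "$metric"
```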
<top>/tests/x/setup could build the largest needed environment (files?
servers running? etc). it could take ''n'' as an argument, or it could
be several files (one for each ''n'').
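the argument-taking variant could be as simple as this. generating ''n'' lines of input is an assumed example workload, and the input file name is made up for illustration:

```shell
#!/bin/sh
# sketch: a setup script that takes n as an argument and builds the
# test environment; writing n lines of input is an assumed workload
n=${1:-5}    # n from the command line (default 5 so the sketch runs as-is)

# write n lines of sample input for the test run to consume
i=0
while [ "$i" -lt "$n" ]; do
    echo "line $i"
    i=$((i + 1))
done > "input-$n.txt"
```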
<top>/languages/awk/header would be the place to find the awk
interpreter (#! /bin/awk). i think this would be easier, i.e. to build
the script (translator) from header + source, instead of editing all
the sources with the proper location for the interpreter.
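that build step could be a simple concatenation. the paths follow the proposed layout; the header and source contents below are assumptions (note that an awk shebang needs -f so the kernel hands the script file to awk):

```shell
#!/bin/sh
# sketch: build a runnable awk script from header + source
# (file contents are assumed; paths follow the proposed layout)
mkdir -p languages/awk/src

# a site-specific header with the interpreter location
printf '#!/usr/bin/awk -f\n' > languages/awk/header

# the portable source, kept free of any interpreter path
printf '{ words += NF } END { print words }\n' > languages/awk/src/wc

# concatenate header + source into an executable script
cat languages/awk/header languages/awk/src/wc > languages/awk/wc
chmod +x languages/awk/wc
```

this way only the one header file has to change when a machine keeps awk somewhere other than /usr/bin/awk.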
bengt