[Bash-completion-devel] Roadmap proposal

Freddy Vulto fvulto at gmail.com
Sun Feb 15 22:27:10 UTC 2009


Freddy Vulto <fvulto <at> gmail.com> writes:

> Isn't there a concern that "splitting every command into its own file" is going
> to make bash_completion slower?  If so, maybe we'd better release
> bash_completion-2 (or ..200902xx) with the improvements made so far, stop
> supporting bash-2, branch and make the "splitting every command into its own
> file" part of the new branch: bash_completion-3?

I did some math regarding the concern that "splitting every command into its own
file" is going to make bash_completion slower:

Currently, sourcing `bash_completion' (HEAD) is taking my machine .842s:

   $ time . bash_completion
   real	0m0.842s
   user	0m0.764s
   sys	0m0.080s

Removing half the functions (from `mutt' through `minicom') reduces this by
.252s, to .590s:

   $ time . bash_completion2
   real	0m0.590s
   user	0m0.520s
   sys	0m0.072s

Sourcing an average example file, e.g. `contrib/gkrellm', a hundred times takes
about .155s:

   $ time for (( i=0; i < 100; i++)); do . contrib/gkrellm; done
   real	0m0.155s
   user	0m0.148s
   sys	0m0.008s

I count 247 function definitions in the main `bash_completion' file:

   $ grep '^[^[:space:]]\+()' bash_completion | wc -l
   247

So, should we "split every command", say 180 functions, that is going to add
1.8 * .155 = ~.3s to sourcing bash_completion, but by removing those functions
from the main file we save at least .3s as well.
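For what it's worth, the estimate can be double-checked with awk, using the
figures measured above (100 sourcings of an average file took .155s, and 180
files would be split out):

```shell
# Estimated startup cost of sourcing 180 split files, given that
# sourcing one average file 100 times took .155s (figures from above).
awk 'BEGIN {
    per_file = 0.155 / 100            # seconds per single source
    files    = 180
    printf "%.3f\n", files * per_file # 0.279, i.e. ~.3s
}'
```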

Conclusion: overall performance is not going to make bash_completion much
slower if "every command is split into its own file", i.e. moved from
`bash_completion' to `contrib'.
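For reference, a minimal sketch of what the split layout would cost at startup:
source every per-command file from the `contrib' directory. (This is only an
illustration of the scenario being timed above; an actual implementation might
filter the files by installed commands.)

```shell
# Sketch: source each per-command completion file from contrib/,
# skipping unreadable entries.  This loop is the startup cost the
# 1.8 * .155 estimate above tries to quantify.
for f in contrib/*; do
    [ -r "$f" ] && . "$f"
done
```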

Freddy




