[sane-devel] Network protocol packet sizes

Oliver Rauch oliver.rauch@rauch-domain.de
Fri, 15 Mar 2002 13:01:02 +0100


Hi Dave,


(I am quoting the full mail because it did not make it to the SANE-DEVEL mailing list;
please use "Reply all" so your mails are also CC'd to the mailing list.)


Dave Close wrote:
> 
> >The frontend does not need to double buffer the full image in this
> >case. It only has to buffer one scan line. And in this point there
> >is no difference if the frontend or the backend does buffer this line.
> 
> By "double-buffer", I meant that it needs a separate buffer for the
> response to a read and the actual scan line it is processing. Neither
> needs to be a full image. If it only has the scan line buffer, it
> can't allow a read to return more than will fit into that area.
> 
> >Please tell me where there is a lot of overhead?
> 
> I guess I'm just old-fashioned. I don't believe that extra work is a
> good idea, even if, on a fast processor, it doesn't take long to do.
> More than once in my experience I've encountered a short cut which
> seemed innocuous at the time only to be a pain later.
> 
> >If you have any performance problems
> 
> I have no performance problems. I was merely surprised by the variation
> in behavior between a local scanner and the same one via a network. And
> I fail to see the logic in defining a block with a size word, then
> not returning that entire block in a single read. Given that the size
> word has been set, the backend must already have all the data. So
> if I were calling the backend directly, I would presumably get the
> whole block. The network forces the division into packets but the net
> backend doesn't have to accept that and could put the packets back
> together to mimic the behavior of the true backend.
> 
> It seems to me, as a general proposition, that network protocols should
> be designed to make the network transparent. Whatever happens behind
> the scenes should not be so apparent to the audience. In this case,
> network usage can be easily guessed from the read returns.

There is no relevant difference between the network backend and a local scanner
backend. When you call sane_read on a local scanner backend, the maximum amount
of data you get is the buffer size of the stream that is used (for a pipe that is
4096 bytes on Linux; it may be different on other systems). For the net backend
you also get at most the size of the stream that is used, but in this case it is
not a local pipe but a network stream with a block size of 1xxx bytes.
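
To make the consequence for a frontend concrete, here is a minimal sketch
(not part of your or my code; the 32 KB request size is just an example)
of a read loop that works identically against a local backend and the net
backend, because it never assumes that one sane_read returns a full block:

#include <stdio.h>
#include <sane/sane.h>

/* Read one frame after sane_start(); sane_read() may return fewer
   bytes than requested (up to one pipe buffer locally, one network
   block with the net backend), so we keep calling until EOF. */
static SANE_Status
read_whole_frame (SANE_Handle handle, FILE *out)
{
  SANE_Byte buffer[32 * 1024];
  SANE_Int len;
  SANE_Status status;

  for (;;)
    {
      status = sane_read (handle, buffer, (SANE_Int) sizeof (buffer), &len);

      if (status == SANE_STATUS_EOF)
        return SANE_STATUS_GOOD;      /* end of frame */
      if (status != SANE_STATUS_GOOD)
        return status;                /* real error */

      if (len > 0)                    /* len can be 0 .. sizeof (buffer) */
        fwrite (buffer, 1, (size_t) len, out);
    }
}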

> 
> Are you the maintainer for net? If so, I guess we'll just have to
> agree to disagree. Cordially, I hope.

There were two or three people who cared a bit about the net backend
in the last year. I can not say if anyone feels responsible as the
"active maintainer" of the net backend at the moment.

But things that are of public interest, like meta backends (net, dll),
have to be discussed publicly. And adding a buffering routine to a
meta backend can hurt the responsiveness of a GUI frontend.
The way it is done now makes sense: the frontend wants to get all
data that is available in one sane_read, but it does not want to
wait for any data. In general we have the opposite situation implemented
with blocking/non-blocking mode (wait if no data is available, or don't).
What you suggest is a function that waits even though there is data
available. That is very bad for GUI frontends.
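
As a minimal sketch (assumed usage, not taken from any existing frontend;
it presumes sane_start() has already been called), this is the pattern a
GUI frontend uses instead: switch to non-blocking mode, get the backend's
select fd, and only call sane_read() when data is ready, so the GUI never
stalls waiting for a "full" block:

#include <sys/select.h>
#include <sane/sane.h>

static SANE_Status
read_when_ready (SANE_Handle handle, SANE_Byte *buf, SANE_Int maxlen,
                 SANE_Int *len)
{
  SANE_Int fd;
  fd_set readfds;

  /* Both calls may return SANE_STATUS_UNSUPPORTED; in that case we
     simply fall back to a normal blocking sane_read().  */
  if (sane_set_io_mode (handle, SANE_TRUE) != SANE_STATUS_GOOD
      || sane_get_select_fd (handle, &fd) != SANE_STATUS_GOOD)
    return sane_read (handle, buf, maxlen, len);

  FD_ZERO (&readfds);
  FD_SET (fd, &readfds);

  /* A real frontend would plug fd into its GUI main loop; a plain
     select() stands in for that here. */
  if (select (fd + 1, &readfds, NULL, NULL, NULL) < 0)
    return SANE_STATUS_IO_ERROR;

  /* Returns whatever is available right now (possibly 0 bytes),
     never waiting for more data to arrive. */
  return sane_read (handle, buf, maxlen, len);
}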

The net backend certainly has several places where it needs to be
improved, but this is not one of them.

Bye
Oliver

-- 
Homepage:	http://www.rauch-domain.de
sane-umax:	http://www.rauch-domain.de/sane-umax
xsane:		http://www.xsane.org
E-Mail:		mailto:Oliver.Rauch@rauch-domain.de