[sane-devel] memory problem

Oliver Schirrmeister oschirr@abm.de
Thu, 26 Aug 2004 09:42:20 +0200


Hi,

On Wednesday 25 August 2004 18:30, Gerhard Jaeger wrote:
> Hi,
>
> as long as you run this stuff on Linux, I suggest using the process
> model and NOT the pthread model. The pthread implementation
> on Linux is somewhat crappy.

Should I avoid pthreads on Linux altogether, or only the sanei_thread
functions? Do you know of any references on that topic?
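
(By "the sanei_thread functions" I mean this interface - quoted from memory
of include/sane/sanei_thread.h, so the exact types may not match the header
in my tree:)

    void        sanei_thread_init       (void);
    SANE_Bool   sanei_thread_is_forked  (void);
    SANE_Pid    sanei_thread_begin      (int (*func)(void *args), void *args);
    int         sanei_thread_kill       (SANE_Pid pid);
    int         sanei_thread_sendsig    (SANE_Pid pid, int sig);
    SANE_Pid    sanei_thread_waitpid    (SANE_Pid pid, int *status);
    SANE_Status sanei_thread_get_status (SANE_Pid pid);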

I'm calling the SANE functions from Java, and using fork() duplicates the
whole, rather large Java process.

> Anyway, it should not happen that all of your memory gets used up,
> so I'm pretty sure that you missed something obvious.
> What about linking the libs statically and using valgrind or efence?


I've linked the SANE backends statically into the 'frontend'; the problem
stays the same. Only the pthread lib is still linked dynamically.
I've also tried valgrind. It revealed only two 'real' memory leaks, in some
init routines of sanei_usb, the test backend and the pthread library,
nothing severe.
Under valgrind the VM size doesn't increase at all!?
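
I ran it roughly like this (the scanadf and fujitsu option names are from
memory, so they may need adjusting):

    valgrind --leak-check=yes --trace-children=yes \
        scanadf -d fujitsu --source "ADF Duplex" -o page-%d.pnm -e 10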

Ciao

Oliver

> Ciao,
>   Gerhard
>
> On Wednesday 25 August 2004 17:49, Oliver Schirrmeister wrote:
> > Hi,
> >
> > I'm trying to find a memory leak in the fujitsu backend.
> > When I do a duplex scan (for example with scanadf), the vm-size
> > of the process increases by approximately the size of one image
> > (for example ~4MB for DIN A5 gray and ~8MB for DIN A4 gray
> > at 300dpi). So after about 250 sheets the process reaches the 2GB
> > limit and I run into problems.
> >
> > I've configured SANE to use pthreads and use sanei_thread_begin
> > to start a reader thread that reads the data from the scanner and
> > writes it into two pipes (one for the front side and one for the
> > back side).
> >
> > The scanner returns the image data in alternating order: one block
> > front side, one block back side, and so on.
> > So I write the front-side data directly to the front-side pipe and
> > copy the back-side data into a buffer the size of the image.
> > When the transfer is complete, I write that buffer into the back-side
> > pipe.
> > I've checked this several times: I really do free that buffer, and the
> > address I free is exactly the address the allocation returned.
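
Roughly, in simplified C the scheme is the following (the names here are
placeholders, not the real identifiers from the fujitsu backend, and error
handling is omitted):

    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    static int
    reader_task (void *args)
    {
      struct scanner *s = args;                /* placeholder handle type */
      size_t img_size = s->bytes_per_image;    /* ~8 MB for A4 gray, 300 dpi */
      unsigned char *back_buf = malloc (img_size);
      size_t back_used = 0;
      unsigned char block[BLOCK_SIZE];         /* BLOCK_SIZE: placeholder */
      int side;
      ssize_t n;

      /* the scanner delivers blocks alternating front/back */
      while ((n = read_block_from_scanner (s, block, sizeof (block), &side)) > 0)
        {
          if (side == SIDE_FRONT)
            write (s->front_pipe_fd, block, n);        /* front goes straight out */
          else
            {
              memcpy (back_buf + back_used, block, n); /* back side is buffered */
              back_used += n;
            }
        }

      write (s->back_pipe_fd, back_buf, back_used);    /* flush buffered back side */
      free (back_buf);                                 /* same address malloc returned */
      return 0;
    }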
> >
> > If I use a file instead of a pipe for the back side (so I don't have
> > to allocate that buffer), there is no problem.
> >
> > When I don't use threads (--enable-fork-process=YES) there is no
> > problem, because the reader process terminates and all of its memory
> > is freed. So I think the problem is in the reader thread.
> >
> > I'm not killing the reader thread; I assume it terminates when the
> > reader procedure returns.
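
Whether the thread also has to be reaped explicitly is part of what I'm
unsure about; the sanei_thread way would be roughly this (again a simplified
sketch with placeholder names, not the literal backend code):

    /* cleanup at end of scan - reader_pid is what sanei_thread_begin returned */
    static void
    finish_reader (struct scanner *s)
    {
      int exit_status;

      if (s->reader_pid != -1)
        {
          /* a real waitpid() in the fork model, pthread_join() in the
             pthread model, as far as I understand sanei_thread */
          sanei_thread_waitpid (s->reader_pid, &exit_status);
          s->reader_pid = -1;
        }
    }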
> >
> > Any suggestions are welcome.
> >
> > Oliver