Bug#1016369: IO::Handle ->error does not work, always saying "fine"

Ian Jackson ijackson at chiark.greenend.org.uk
Sat Aug 6 20:44:18 BST 2022


Niko Tyni writes ("Re: Bug#1016369: IO::Handle ->error does not work, always saying "fine""):
> Hi, thanks Ian for the report and Damyan for looking into the issues.

Indeed, thanks to you and to Damyan.
> > > Actual output
> > > 
> > >     0 Bad file descriptor
> > >     0 No space left on device
> 
> 
> FWIW I get 
> 
>     0 Is a directory 
>     1 No space left on device 
> 
> on sid (perl_5.34.0-5). I'm not sure why you'd expect -1. The documentation
> for IO::Handle::error() only mentions it reporting a true value.

Interesting.  I forget the details (can't easily check now) but some
test that passed for me gave -1.  I agree that 1 is better.

> The first issue (reading a directory as a plain file) seems to be about
> the error flag getting cleared when reading past EOF or something like
> that.

I used reading from a directory as an example for two reasons:

Firstly, it was the situation that actually happened to me.  I was
writing a program and it had a bug that caused it to erroneously read
a directory, rather than some file within it.  The program became
convinced the "file" was empty, leading to strange malfunctions.  I
would have expected the file reading error checks to detect this
earlier.

Secondly, having discovered this bug, reading from a directory is one
of the very few ways to get something you can open() for reading which
then returns errors when you call read(2).
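
For reference, a minimal sketch of that situation (the directory path
is just illustrative):

    use strict;
    use warnings;
    use IO::Handle;

    my $dir = "/etc";                  # any directory will do
    open my $fh, "<", $dir or die "open $dir: $!";

    my $line = <$fh>;                  # underlying read(2) fails, EISDIR
    printf "defined: %d  error: %d  errno: %s\n",
        defined($line) ? 1 : 0,
        $fh->error     ? 1 : 0, $!;

On an affected perl the error flag stays unset even though every
read(2) on the handle failed.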

Another way to obtain this erroneous behaviour is to pass a perl
script a stdin which is not, in fact, open for reading (easily
achieved with shell redirection).  However, I didn't use that for my
bug report (even though it does repro the bug) because STDIN seems
like it will have special magic.
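
For example (hypothetical shell session; "0>/dev/null" makes the shell
open stdin for writing only, so read(2) on fd 0 fails with EBADF):

    $ perl -MIO::Handle -e \
        'my $l = <STDIN>;
         printf "error: %d  errno: %s\n", STDIN->error ? 1 : 0, $!' \
        0>/dev/null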

I would love a more portable name for something which can be opened
for reading, but which can't then be read.

I think the reasons so few people have reported this bug before are as
follows:

1. This can only occur if you have something open for reading but
which turns out not to be readable.  In the absence of strange bugs,
that usually means a hardware failure resulting in EIO, which is very
rare.

2. If and when someone does get data loss due to this bug in a
situation where they got EIO due to hardware failure, they are very
likely to (a) be preoccupied with the disaster that is no doubt
unfolding, and (b) be unable to precisely reproduce the situation or
indeed to have precise and accurate records.
    
3. Very few people are so careful about error handling anyway, sadly.

So IMO the fact that this bug has been rarely reported does not mean
that it isn't a data loss bug.  Rather, it's a data loss bug which
mostly has the effect of burning up more things when your system is
already on fire.

> Anyway, the data loss argument seems misplaced given we've read all the
> data there is when we are at EOF?

No.

The handle in the test case can never be successfully read, so it has
*not* reached EOF.  All the calls to read(2) return errors.  If perl
claims that the handle has reached EOF, that is itself a bug.  (A data
loss bug, because a false EOF causes the rest of the data to be
ignored.)

I am almost certain that this bug would trip if you got EIO while
trying to read a plain file.  (But this is hard to test.)
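
Concretely, what I mean by a false EOF is something like this (sketch,
using the directory case again):

    use strict;
    use warnings;

    open my $fh, "<", "/etc" or die "open: $!";
    my $line = <$fh>;     # read(2) fails with EISDIR; no data was read
    # An honest handle would report "error, not EOF" here; the bug is
    # that eof() can come back true and error() false, so a caller
    # believes it has read everything when it has read nothing.
    printf "eof: %d  error: %d\n", $fh->eof ? 1 : 0, $fh->error ? 1 : 0;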

> See also https://github.com/Perl/perl5/issues/12782 which has not
> received attention in almost ten years.

Yikes.  That does seem like the very same bug.

> The second issue (writing to /dev/full) is indeed fixed in sid / Perl
> 5.34.  It was https://github.com/Perl/perl5/issues/6799 and reportedly
> only affects things like character devices (including /dev/full) and
> sockets. I've verified that trying to write to a normal file on a full
> filesystem does set the error() flag on stable / Perl 5.32.

I wonder what is different about plain files.  I suspect the fact that
it "works" (correctly returning errors) for you in this case may be
due to luck (the precise series of calls).

> I think that makes the issue less severe, and I'm not very inclined to
> fix it in stable.  But in case we end up doing that anyway, these would
> be the commits needed:
> 
>     https://github.com/Perl/perl5/commit/89341f87f9fc65c4d7133e497bb04586e86b8052
>     https://github.com/Perl/perl5/commit/8a2562bec7cd9f8eff6812f340f99dddd028bb33
> 
> Downgrading the severity, but let me know what you think based on the above.

I still think this is a data loss bug.  (Two bugs.)

I will think about trying to produce more convincing repros.  Would
you be convinced by something involving an LD_PRELOAD that causes
"syscalls" to fail?  Something involving ptrace?

Damyan writes:
> Note that the recommended way to read files line by line is (perldoc 
> -f readline):
> 
>     while ( ! eof($fh) ) {
>         defined( $_ = readline $fh ) or die "readline failed: $!";
>         ...
>     }

I don't find this particularly convincing.  This argument seems to be
saying "never use <> to read lines", which is pretty strange.  Surely
it should be possible to use "<>" in its line-reading mode, without
data loss.  (And, with autodie, without having to do an explicit error
check.)  The behaviour of ->error() seems contrary to the
documentation, and the "use <> and then check ->error()" idiom seems
to me to be both justifiable by the text, and reasonable.
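
That is, the idiom I have in mind is simply (sketch; $path is whatever
file you are reading):

    use strict;
    use warnings;
    use IO::Handle;

    my $path = "some-file";
    open my $fh, "<", $path or die "open $path: $!";
    while (my $line = <$fh>) {
        # process $line ...
    }
    # Per the documentation, a failed read should leave the error flag
    # set, so this one check ought to be sufficient:
    $fh->error and die "read $path: $!";
    close $fh or die "close $path: $!";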

Thanks,
Ian.

-- 
Ian Jackson <ijackson at chiark.greenend.org.uk>   These opinions are my own.  

Pronouns: they/he.  If I emailed you from @fyvzl.net or @evade.org.uk,
that is a private address which bypasses my fierce spamfilter.



