Greg Hudson wrote:
> On Tue, 2004-01-13 at 04:16, Russell Yanofsky wrote:
>> Then surely you can see that what the Linux people did is pretty
>> reasonable. Instead of changing off_t from 32 bits to 64 when they
>> introduced 64-bit file I/O, they decided that making 64 bits the
>> default wasn't worth breaking backwards compatibility over.
> Perhaps you're not familiar with "the *BSD route" I mentioned above.
> For each libc symbol accepting an off_t or struct stat, they created a
> new symbol and used header file redirects to point newly-compiled
> source files at the new symbol.
Well, sure. The Linux C library actually does the exact same thing when
__USE_FILE_OFFSET64 is defined. And, as you say, this doesn't provide
binary compatibility with third-party libraries which expose off_t in their
interfaces. So they took a conservative position: keep the POSIX
interfaces 32-bit, and provide a parallel 64-bit interface for
applications that need it (off64_t, struct stat64, lseek64, etc.).
>> IMO, it is perl that is at fault here. It would be fine if perl just
>> used __USE_FILE_OFFSET64 internally to do 64-bit I/O on Linux, but it
>> has no business imposing nonstandard library options on libraries
>> that call into it.
> Well, perl may be at fault for doing that, but Linux is delegating an
> awful lot of complexity to libraries here. It's not at all surprising
> that some will get it wrong, or will just ignore the problem and only
> provide 32-bit file I/O.
Linux developers only need to know two things:
1) sizeof(off_t) == 4
2) there's a separate, non-POSIX interface for 64-bit I/O
Surely we could expect the big brains at perl.com to handle this
information. But instead they decided to mess with glibc library options to
wangle 64-bit I/O. Messing with library options is fine if you are, say,
writing a prototype app or supporting legacy software, but it's not the
brightest thing to do when you are building something that needs to
interact with third-party libraries.
>>> My input:
>>> * I'm waffling on what we should do with the svn_io APIs which
>>> simply wrap APR functions. I think we want to fix those as well
>>> (i.e. use svn_filesize_t), and then if APR ever gets a largefile
>>> story we should be in a good position to make use of it.
>> If APR gets a large filesize story, won't they just make the
>> definition of apr_off_t 64 bits? If so, then we're already in a
>> position to make use of it. Why complicate the wrapper functions if
>> they are already portable?
> If APR simply bashes apr_off_t on Linux to 64 bits, breaking ABI
> compatibility, then by avoiding apr_off_t, at least our own ABIs won't
> break, although that would be small consolation to apps which use APR
> functions in addition to svn functions.
> If APR makes apr_off_t on Linux 64 bits for new compiles but uses
> #defines so as to leave around 32-bit functions for old compiles (the
> *BSD route), then by avoiding apr_off_t, our ABI won't break when it
> otherwise would.
> If APR makes apr_off_t 64 bits only when a special symbol is defined
> (the Linux route), then we'll be free to define that symbol and get
> 64-bit I/O without fear of breaking our ABI.
I was assuming that APR would take the apr_off_t bashing route. Is this not
a safe assumption?
Received on Tue Jan 13 19:30:11 2004