On Tue, 2004-01-13 at 04:16, Russell Yanofsky wrote:
> Greg Hudson wrote:
> > Well, Linux has certainly created a mess here. Pity they didn't go
> > the *BSD route and actually wind up with a 64-bit off_t.
> I agree with you here, but wow. This is from the same Greg Hudson that
> expects the 1.0 subversion API to be supported 5 years from now? The one who
> goes around saying that if a library isn't backwards compatible it might as
> well not exist at all? :)
> Then surely you can see that what these Linux people did is pretty
> reasonable. Instead of changing off_t from 32 bits to 64 when they
> introduced 64-bit file I/O, they decided that making 64 bits the default
> wasn't worth breaking backwards compatibility over.
Perhaps you're not familiar with "the *BSD route" I mentioned above.
For each libc symbol accepting an off_t or struct stat, they created a
new symbol and used header file redirects to point newly-compiled source
files at the new symbol. Other system libraries that used off_t or
struct stat had to be handled similarly. Old binaries continued to
work just fine. There was a compatibility issue with third-party
libraries using off_t or struct stat (you had to recompile them before
you could link against them from newly-compiled source, and without
special effort, the recompiled library would not be compatible with old
binaries, so you had to bump the major rev or something), but Back in
the Day, there weren't such a huge number of third-party libraries
running around, so it wasn't very traumatic.
In today's age of GNOME's towering library dependency chart and so on,
the damage would probably be too great.
(I'm not certain what form the "header file redirects" were. They might
have been straight #defines, but they might have used compiler magic so
that they didn't have to infect all namespaces.)
> IMO, it is perl that is at fault here. It would be fine if perl just used
> __USE_FILE_OFFSET internally to do 64 bit I/O on linux, but it has no
> business imposing nonstandard library options on libraries that call into
Well, perl may be at fault for doing that, but Linux is delegating an
awful lot of complexity to libraries here. It's not at all surprising
that some will get it wrong, or will just ignore the problem and only
provide 32-bit file I/O.
> > My input:
> > ...
> > * I'm waffling on what we should do with the svn_io APIs which
> > simply wrap APR functions. I think we want to fix those as well
> > (i.e. use svn_filesize_t), and then if APR ever gets a largefile
> > story we should be in a good position to make use of it.
> If APR gets a large filesize story, won't they just make the definition of
> apr_off_t 64 bits? If so, then we're already in a position to make use of
> it. Why complicate the wrapper functions if they are already portable?
If APR simply bashes apr_off_t on Linux to 64 bits, breaking ABI
compatibility, then by avoiding apr_off_t, at least our own ABIs won't
break, although that would be small consolation to apps which use APR
functions in addition to svn functions.
If APR makes apr_off_t on Linux 64 bits for new compiles but uses
#defines so as to leave around 32-bit functions for old compiles (the
*BSD route), then by avoiding apr_off_t, our ABI won't break when that
change happens.
If APR makes apr_off_t 64 bits only when a special symbol is defined
(the Linux route), then we'll be free to define that symbol and get
64-bit I/O without fear of breaking our ABI.
Received on Tue Jan 13 17:22:28 2004