On Mon, Mar 08, 2004 at 01:33:05PM -0500, Alvin Thompson wrote:
> Andreas Kostyrka wrote:
> >-) Handling of files >2GB.
> so pick a format that handles files over 2GB
Well, your argument was that there are more than enough ready-to-use
libraries to pull off this feat: please name one that works well enough for this
(>2GB support, apr-style portability, etc.).
> first, as i mentioned before virtually any processor still in use can
"Virtually any" is not the same as "all". But you are probably right that
decompression is not a problem performance-wise; compression is a bigger one.
And managing changes in the files without rewriting the whole file from scratch
after every update makes the problem even thornier ;)
> keep up with a light compression algorithm. you don't need the densest
> possible compression algorithm for this application.
> second, it's not everyday that you need to compress/decompress the
> entire thing. most compression schemes today are smart about which
> 'hunks' it needs to play with.
Well, again, you still haven't named any candidates for a ready-to-use library.
> >-) Not as portable as apr/svn -> new portability problems.
> how does this affect portability? explain.
Well, it's trivial: svn basically lives in the "apr" world, so svn is
as portable as apr. Your magical beast would therefore have to be as portable
as apr, or else it would limit the portability of svn.
Actually, it would probably be best if it were implemented on top of apr,
so that any future apr ports are automatically supported, but that's not
realistic to expect.
> >So basically we need a new format (.zip files don't cut the 2GB limit), and
> >new tools to examine the working copy, new tools for everything.
> >For little gain. And these are real problems, as you might have noticed svn
> >is quite "high-quality" source code. :)
> as i said, there are about a billion compression libraries out there.
> and as i said, most would *not* require a substantial rewrite of code.
> the only difference would mostly be how it gets its file streams.
Well, again, just name some that are:
-) portable (apr-style),
-) able to deal with all kinds of files (>2GB, etc.),
-) easily interoperable with apr (apr, for example, more or less mandates memory
pools; a library can work without them, but that makes for uglier code).
> please reread that thread. basically, since it is essentially a branch
> of Tortoise CVS, it uses much of the same code. since cvs doesn't have
> to worry about pristine copies or (usually) large numbers of files in
> the .cvs directory, it uses a simple recursive algorithm when scanning
> for files for that icon overlay feature. that slows Tortoise SVN down
> because it doesn't ignore the .svn directories. while they probably
> will/should fix this, it is yet another difference between the two which
> makes the code harder to maintain. and once again, wouldn't it make more
> sense to implement this than to require that every other program on the
> planet be SVN aware?
So you are again citing a bug (albeit a performance bug) in Tortoise SVN
as an argument for changing the clean and nice way svn works now?
> what? 'management areas inside a working copy aren't svn speciality'?
> why on earth wouldn't they be? how does that answer that question? i
> assume you don't have an answer so you just randomly strung together
> some words. :P
Well, don't get personal:
RCS: RCS symlinks
CVS: CVS dirs.
SVN: .svn dirs.
Sniff: .sniff (well, Sniff is not actually a CMS, but it includes CMS functionality)
So basically all these CMSes have the concept of a management directory inside
the working copy; there is nothing special about Subversion here.
> >And as to the ".NET/ant" Wheenies, they've got the source code, so they can
> >solve their problems themselves? Or did I miss somewhere the declaration
> >working around bugs in certain MS IDEs is a high priority item for svn?
> i will repeat this yet again: wouldn't it make more sense to implement
> this than to require that every other program on the planet be SVN aware?
Well, most tools (including the custom build systems of closed-source shops)
already have the concept of ignoring management directories.
And how would your idea solve that? It still leaves some binary file
in the working-copy area that some tools might find offensive. Actually, to some
tools this file might be even more offensive, as the current .svn area doesn't
contain binary data (at least not as long as you do not manage binary files).
So how does replacing the trivial open/read/write/close operations with calls
to some library (one you haven't yet specified; you have merely stipulated that
such a beast exists) that works on a human-unreadable file make the situation
better?
Received on Mon Mar 8 20:04:26 2004