Meant to send this to the list...
------- Start of forwarded message -------
To: Greg Stein <email@example.com>
Subject: Re: dir/file plus more (was: Re: Are svn_fs_dir_t and svn_fs_file_t worth it?)
From: Jim Blandy <firstname.lastname@example.org>
Date: 08 Nov 2000 20:40:03 -0500
> This would be fine with me. There are points where I need a "file" to fetch
> file-like information or content. Since these can already return an error,
> there aren't any additional checks for me. But even better: I already know
> whether it is a file or dir before I ever try to do that; therefore, I
> shouldn't be messing up what I try to do with a node.
> Of course, the svn_fs_node_is_dir() must stick around.
Great. 'twill be done.
> On a separate note, there are some gaps in the API that it would be nice to
> have filled and/or explained ("how to do [because I'm missing it]"):
> *) fetch the svn_fs_id_t for a given node
> *) when I call svn_fs_file_contents(), I get back a stream baton. is there
> some way to mark that as "no longer needed"? Note that reading to EOF is
> not the answer, as I might read a portion of the stream. [is it relevant
> to mark it as unneeded?]
> *) svn_fs__file_from_skel() appears to copy the file contents into memory.
> This isn't going to scale to multi-megabyte (or gigabyte!) files. It
> would be good to have a way that directly maps from DB3 to a seek/read
> function. The ideal interface for me will allow me to seek to a point in
> the "stream" and then read "n" bytes from it. Preferably, the read would
> not allocate memory (say, if DB3 mmap'd the record, then I'd just get a
> pointer into that mmap).
> Basically, I'm looking at a case where we have a gigabyte file stored
> into Subversion. The client requests bytes 100-120 and bytes
> 2000000-2000100. For optimum Apache behavior, I could get just those
> bytes without any memory allocation. (beyond what mmap does, or possibly
> reading from a file descriptor into some allocated memory (of just the
> right size))
> What kinds of mechanisms does DB3 support for reading large content? Can
> you give me a pointer to the doc/API? (so that I can be a bit more
> intelligent in my request here)
This is a TODO item.
I'll be adding a new table, `strings', which will hold file content.
There will be two forms used to refer to content: (string KEY),
which means the text whose key is KEY in the `strings' table, and
(file NAME), where the text is stored in an external file named NAME.
We'll have to bind that into the transactions somehow.
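For illustration, a node's content reference under this scheme might read like the following (the key and filename are invented, and the trailing annotations are mine, not skel syntax):

```
(string 42)          content is the text under key 42 in the `strings' table
(file "huge.iso")    content lives in an external file named huge.iso
```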
------- End of forwarded message -------
Received on Sat Oct 21 14:36:14 2006