
dir/file plus more (was: Re: Are svn_fs_dir_t and svn_fs_file_t worth it?)

From: Greg Stein <gstein_at_lyra.org>
Date: 2000-11-09 01:20:52 CET

On Wed, Nov 08, 2000 at 07:01:20PM -0500, Jim Blandy wrote:
>...
> So I think I'd like to eliminate svn_fs_dir_t and svn_fs_file_t, and
> use svn_fs_node_t throughout. It would make svn_fs_node_t more like
> Unix file descriptors --- a uniform way to reference whatever's out
> there.
>
> What do folks think? Greg, I'm especially interested in your opinion,
> since you've actually written code that uses the fs interface.

This would be fine with me. There are points where I need a "file" to fetch
file-like information or content. Since those calls can already return an
error, this doesn't add any new checks for me. Even better: I already know
whether it is a file or a dir before I ever try to do that, so I shouldn't be
messing up what I try to do with a node.

Of course, the svn_fs_node_is_dir() must stick around.
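
Roughly, the calling pattern I'd expect with a single node type looks like
this (the opener and the two handlers are made up for illustration, and I'm
assuming is_dir stays a simple predicate):

    svn_fs_node_t *node;
    svn_error_t *err;

    /* hypothetical opener: one call, whatever kind of thing lives there */
    err = svn_fs_open_node (&node, fs, "/trunk/README", pool);
    if (err)
      return err;

    if (svn_fs_node_is_dir (node))
      err = handle_directory (node, pool);   /* list entries, read props, ... */
    else
      err = handle_file (node, pool);        /* fetch contents, read props, ... */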

On a separate note, there are some gaps in the API that it would be nice to
have filled, and/or explained "how to do" (because I may just be missing it):

*) fetch the svn_fs_id_t for a given node

*) when I call svn_fs_file_contents(), I get back a stream baton. Is there
   some way to mark that as "no longer needed"? Note that reading to EOF is
   not the answer, since I might read only a portion of the stream. [is it
   even relevant to mark it as unneeded?]

*) svn_fs__file_from_skel() appears to copy the file contents into memory.
   This isn't going to scale to multi-megabyte (or gigabyte!) files. It
   would be good to have a way that directly maps from DB3 to a seek/read
   function. The ideal interface for me will allow me to seek to a point in
   the "stream" and then read "n" bytes from it. Preferably, the read would
   not allocate memory (say, if DB3 mmap'd the record, then I'd just get a
   pointer into that mmap).

   Basically, I'm looking at a case where we have a gigabyte file stored
   in Subversion. The client requests bytes 100-120 and bytes
   2000000-2000100. For optimum Apache behavior, I could get just those
   bytes without any memory allocation (beyond what mmap does, or possibly
   a read from a file descriptor into a buffer of just the right size).

   What kinds of mechanisms does DB3 support for reading large content? Can
   you give me a pointer to the doc/API? (so that I can be a bit more
   intelligent in my request here)
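
To make the request concrete, here is roughly the shape of read interface I'm
after (every name below is illustrative only, not a proposal for the real
API):

    /* Read LEN bytes of NODE's contents, starting at byte OFFSET, into the
       caller-supplied BUF.  Ideally this maps straight onto a partial fetch
       of the DB3 record, so the whole file never gets pulled into memory. */
    svn_error_t *svn_fs_file_read_range (svn_fs_node_t *node,
                                         apr_off_t offset,
                                         apr_size_t len,
                                         char *buf,
                                         apr_pool_t *pool);

    /* The gigabyte-file case from above: Apache wants two small ranges. */
    char first[21], second[101];
    svn_error_t *err;

    err = svn_fs_file_read_range (node, 100, 21, first, pool);
    if (!err)
      err = svn_fs_file_read_range (node, 2000000, 101, second, pool);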

thx,
-g

-- 
Greg Stein, http://www.lyra.org/