On 1/17/07, David Anderson <firstname.lastname@example.org> wrote:
> On 1/17/07, Ivan Zhakov <email@example.com> wrote:
> > > This behavior is off by default however. The default is to crawl the
> > > subtree rooted at cwd to work out what was edited, and to sanity check
> > > metadata as you go. An option passed to svn checkout makes all WC
> > > files read-only, and relies solely on the metadata to operate on the
> > > wc, unless a particular operation forces a crawl.
> > Hmm. Why do you need this feature? IMHO, crawling the tree spends most
> > of its time reading entries and creating locks, not reading file
> > timestamps.
> > I think most users like the ability to edit anything without running any commands.
> My main motivation for this is seeing NFS perform in an enterprise
> setting. Its performance is not stellar (far, far from it), and
> crawling the working copy requires a lot of readdir and stat calls,
> all of which go out to the network as RPC calls. I would like to have
> some mechanism to avoid those, if the user chooses to.
AFAIR SQLite doesn't support NFS, because no NFS implementation supports
*proper* locking. Does it?
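To make the crawl cost concrete, here is a minimal sketch (not Subversion code; the directory layout and counters are illustrative) of how a status crawl turns into one readdir per directory plus one stat per entry -- each of which becomes a round-trip RPC on NFS:

```python
import os

def crawl(root):
    """Walk a working copy the way a naive status crawl would:
    one readdir per directory, one stat per entry."""
    readdirs = stats = 0
    for dirpath, dirnames, filenames in os.walk(root):
        readdirs += 1                      # one readdir RPC per directory
        for name in dirnames + filenames:
            os.stat(os.path.join(dirpath, name))
            stats += 1                     # one stat (GETATTR) RPC per entry
    return readdirs, stats

# On local disk these calls are cheap; over NFS each is a network
# round trip, which is why a metadata-only mode is attractive.
```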
> However, for users who don't want to adopt this, the default checkout
> behavior would be to crawl. You do lose some of what you gained by
> using a central store, but you still get the speed improvements that
> come from not spreading the metadata all over the wc.
It's OK if it's not the default behavior, but I just don't see many
reasons to spend time on this option. Just my opinion.
> > > Text-bases now. By default, they are stored in the metadata sqlite
> > > database (or maybe in a separate text-base sqlite DB alongside the
> > > regular metadata. Details.). I would however like to have a clear line
> > > drawn in the internals of libsvn_wc_sqlite, where we could add other
> > > behaviors in the future. Say, no text-bases and fail all operations
> > > that require them for ultra lightweight working copies, or no
> > > text-bases but retrieved via the ra api when needed (which opens the
> > > way for webdav caching proxies to work their magic).
> > Btw, are you sure that SQLite is ready to store very big data inside,
> > like a 100 MB field?
> The SQLite faq recommends switching to another storage system at the
> 100GB mark. I think a 100GB working copy has other problems to attend
> to before looking at making sqlite groan :-).
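For reference, SQLite's default maximum size for a single string or blob (SQLITE_MAX_LENGTH) is about 1 GB, so a 100 MB text-base fits within limits, though reading the row materializes the whole blob in memory at once. A minimal sketch of blob storage -- the table and column names are made up, not Subversion's schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE text_base (path TEXT PRIMARY KEY, content BLOB)")

data = b"x" * (1 << 20)   # 1 MiB stand-in for a large text-base
conn.execute("INSERT INTO text_base VALUES (?, ?)",
             ("trunk/foo.c", sqlite3.Binary(data)))
conn.commit()

(blob,) = conn.execute(
    "SELECT content FROM text_base WHERE path = ?", ("trunk/foo.c",)
).fetchone()
# The entire blob comes back as one bytes object -- fine at this scale,
# but worth keeping in mind for very large text-bases.
```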
OK, I didn't know SQLite's limits.
> According to DannyB, the biggest problem of SQLite is concurrent
> access, which is gracefully handled in our case, as touching a working
> copy takes exclusive locks anyway. However, there will be a wc
> somewhere whose text-bases SQLite can't handle properly. That is why I
> want a clean cut API within libsvn_wc_sqlite where we can drop in
> alternative mechanisms to handle text-base storage, or lack of
> storage. My favorite idea is the caching webdav proxy that can blindly
> dole out the resources as fast as the wire can carry them, but there
> would surely be other plausible ways to manage the text-bases.
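The exclusive-lock discipline described above can be sketched with SQLite's own transaction modes: a writer takes BEGIN EXCLUSIVE, so a second process blocks (or times out via the busy timeout) instead of interleaving. This is an illustrative sketch, not the proposed libsvn_wc_sqlite API:

```python
import sqlite3

def with_wc_lock(db_path, work):
    """Run work(conn) while holding an exclusive lock on the metadata DB.
    isolation_level=None lets us issue BEGIN EXCLUSIVE ourselves."""
    conn = sqlite3.connect(db_path, timeout=5.0, isolation_level=None)
    try:
        # Locks the whole database file; a concurrent process waits up
        # to the busy timeout, then gets SQLITE_BUSY.
        conn.execute("BEGIN EXCLUSIVE")
        try:
            work(conn)
        except Exception:
            conn.execute("ROLLBACK")
            raise
        conn.execute("COMMIT")
    finally:
        conn.close()
```

Since working-copy operations already take exclusive locks at the svn level, this maps naturally onto SQLite's single-writer model.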
Agreed. Concurrent access is a rare situation, but it can occur when
plugins such as those for Windows Explorer and Visual Studio update
status in the background. Anyway, I think two applications can still be
counted as non-concurrent access.
Received on Wed Jan 17 13:32:47 2007