
Re: request for new API function

From: Stefan Sperling <stsp_at_elego.de>
Date: Sat, 5 Feb 2011 16:46:30 +0100

On Sat, Feb 05, 2011 at 04:22:29PM +0100, Stefan Küng wrote:
> On 05.02.2011 13:56, Stefan Sperling wrote:
>
> >I think we should go in this direction.
> >In fact, I think we should simply change the existing APIs to use
> >the fastest possible way of getting at information.
>
> Well, currently there is no API that does what I suggested
> (basically return all results of a db query without even touching
> any files in the WC, or having to do this for every file/folder
> separately).

Well, the svn_proplist case I'm looking at is the same thing.
I want to answer the request "Give me all properties within this
working copy" by issuing as few sqlite queries as possible.

You are talking about such requests in general.
I am talking about one specific instance (proplist).
But essentially we want the same thing.
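
Roughly, what I have in mind is something like this. Just a sketch
against the raw sqlite3 API, not the actual libsvn_wc code, and the
NODES table/column names are my assumptions about the wc.db layout
rather than a documented interface:

#include <sqlite3.h>
#include <stdio.h>

/* Sketch only: fetch every node's properties with a single statement
 * instead of issuing one query per node.  WC_DB_PATH would be
 * WCROOT/.svn/wc.db in a 1.7 working copy.  This deliberately ignores
 * op_depth layering and ACTUAL_NODE overrides, which a real
 * implementation has to handle. */
static int
dump_all_props(const char *wc_db_path)
{
  sqlite3 *db;
  sqlite3_stmt *stmt;

  if (sqlite3_open(wc_db_path, &db) != SQLITE_OK)
    return 1;

  if (sqlite3_prepare_v2(db,
                         "SELECT local_relpath, properties FROM NODES"
                         " WHERE properties IS NOT NULL",
                         -1, &stmt, NULL) != SQLITE_OK)
    {
      sqlite3_close(db);
      return 1;
    }

  while (sqlite3_step(stmt) == SQLITE_ROW)
    {
      const char *relpath = (const char *)sqlite3_column_text(stmt, 0);
      int prop_len = sqlite3_column_bytes(stmt, 1);

      /* The properties column holds a serialized property list (a skel,
       * IIRC); a real client would parse it here instead of just
       * reporting its size. */
      printf("%s: %d bytes of properties\n", relpath, prop_len);
    }

  sqlite3_finalize(stmt);
  sqlite3_close(db);
  return 0;
}

One pass over the table, no per-node stat()s, no per-node queries.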

> I've read up on that thread. It seems the problem you're facing
> comes from the fact that you need to stay compatible with pre-1.7
> APIs and clients, and the fact that you can't force clients to
> behave, only ask them to behave and then hope for the best.

Yes.

> However, what I'm asking for here are *new* APIs which do something
> no existing API currently does. So staying compatible wouldn't be a
> problem.

New APIs don't make the problems we have with existing APIs go away.

The backwards compat problem doesn't affect TortoiseSVN, since
you will simply provide builds linked to 1.7 and tell all users to upgrade.

But we need to keep existing clients that were compiled against 1.6.x
and earlier working. So it's not "not a problem", it's just not
TortoiseSVN's problem :)

> And if you're worried about clients not behaving properly,
> why not get rid of the callback completely and just return all
> information at once in one big chunk of memory?
> For UI clients this won't be a problem, because they
> usually have to store all the information they receive in a callback
> anyway so they have it ready to show in the UI. So for them, the
> memory use wouldn't be any bigger at all.

Really? Even with gigantic working copies?
What if the amount of information requested simply doesn't fit
into memory? I'd prefer a system that cannot fail in this way.
I'd prefer passing the information to the client piece by piece,
and letting the client worry about where to store it.
If at all possible, the library shouldn't allocate huge slabs of
memory outside of the client's control.
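
The shape I have in mind is roughly this (hypothetical names, just
mirroring the style of the existing svn_proplist_receiver_t callbacks,
not a committed API):

#include <apr_pools.h>
#include <apr_hash.h>
#include <svn_types.h>
#include <svn_error.h>

/* Hypothetical receiver: called once per node, so the library never
 * needs to hold more than one node's properties in memory at a time.
 * PROPS maps const char * prop names to svn_string_t * values and is
 * only valid for the duration of the call. */
typedef svn_error_t *(*proplist_receiver_t)(void *baton,
                                            const char *local_abspath,
                                            apr_hash_t *props,
                                            apr_pool_t *scratch_pool);

/* Hypothetical driver: walks the working copy below LOCAL_ABSPATH and
 * invokes RECEIVER for each node, allocating per-node data in a
 * short-lived pool that is cleared between invocations. */
svn_error_t *
walk_proplists(const char *local_abspath,
               proplist_receiver_t receiver,
               void *receiver_baton,
               apr_pool_t *scratch_pool);

That way the library's memory use stays bounded no matter how big the
working copy is; whether everything gets accumulated is up to the
client.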
 
> Of course, the APIs I'm asking for might not be very useful to the
> existing APIs or other things done inside the svn library. They
> might only be useful to some svn clients. But I hope that's not a
> blocker for implementing them.

I hope that we'll get a good set of APIs for 1.7 that will
satisfy all clients out there, including TortoiseSVN.
What these APIs will look like isn't set in stone yet.

> I also thought of just querying the SQLite db myself directly, but
> I don't like doing something that's not really allowed.
> However: I did a quick test with the Check-for-modifications dialog
> in TSVN. It has a feature where you can enable showing all
> properties. To do that, a separate thread is started which lists all
> properties of all items in the working copy. On one of my working
> copies, this takes about 50 seconds. Using a simple SQLite query on
> the NODE table took on average 1260ms. Parsing the data and
> preparing it for use in the UI took another 3.5 seconds. Now *that* is
> a speed improvement I really like.

How is your query any different from the new proplist API and
implementation I added in r1039808? I think that provides what you need
(for the proplist case). It opens the db, runs a query on it and streams
the results to the client via a callback. Very low overhead.
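
And a client that really does want everything in one big chunk can
build that itself on top of such a callback. A rough sketch, using the
hypothetical receiver type from above rather than the actual r1039808
interface:

#include <apr_pools.h>
#include <apr_hash.h>
#include <apr_strings.h>
#include <svn_types.h>
#include <svn_error.h>
#include <svn_string.h>

/* The client's own storage: path -> property hash, in a pool the
 * client controls. */
struct collect_baton
{
  apr_hash_t *all_props;   /* const char *path -> apr_hash_t *props */
  apr_pool_t *result_pool; /* long-lived pool owned by the client */
};

/* Receiver that deep-copies each node's properties into the client's
 * pool.  The data handed to the receiver lives in SCRATCH_POOL and
 * must not be kept beyond the call.  A UI that only needs a subset
 * could filter here instead and keep memory use bounded. */
static svn_error_t *
collect_props(void *baton,
              const char *local_abspath,
              apr_hash_t *props,
              apr_pool_t *scratch_pool)
{
  struct collect_baton *cb = baton;
  apr_hash_t *copy = apr_hash_make(cb->result_pool);
  apr_hash_index_t *hi;

  for (hi = apr_hash_first(scratch_pool, props); hi; hi = apr_hash_next(hi))
    {
      const void *name;
      void *value;

      apr_hash_this(hi, &name, NULL, &value);
      apr_hash_set(copy, apr_pstrdup(cb->result_pool, name),
                   APR_HASH_KEY_STRING,
                   svn_string_dup(value, cb->result_pool));
    }

  apr_hash_set(cb->all_props,
               apr_pstrdup(cb->result_pool, local_abspath),
               APR_HASH_KEY_STRING, copy);
  return SVN_NO_ERROR;
}

So keeping it all in memory remains the client's choice, not the
library's.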

Stefan
Received on 2011-02-05 16:47:14 CET

This is an archived mail posted to the Subversion Dev mailing list.
