
Re: functions that would help TSVN

From: Stefan Küng <tortoisesvn_at_gmail.com>
Date: Tue, 1 Mar 2011 13:20:20 +0100

On Tue, Mar 1, 2011 at 12:45, Stefan Sperling <stsp_at_elego.de> wrote:
>> Since most UI clients need all the data in memory anyway, I'd like
>> to have a separate svn_client_proplist() API that does *one* db
>> query and returns all the results in one go.
>> There are several reasons:
>> * as mentioned, most UI clients will need all data in memory anyway.
>> For example in TSVN I just add the data in the callback to one big
>> list/vector/map and start using that data after the function
>> returns.
> I don't think we need a separate function that does the allocations
> on behalf of the callback.
> The callback is free to store the data in any way it wants.

I'm not requesting such a function because I'm lazy and just want svn
to do the work for me. I'm requesting it for performance reasons. Of
course I'm free to store the data any way I want and need in the
callback. Also, I've never said that the current approach doesn't
work, I only mentioned that it's slower than necessary.

Let me illustrate this a little bit:
Assume 1M properties in 100k folders, and a recursive svn_proplist.
The callback is called 100k times, and for every callback:
 - the svn lib allocates memory for the data
 - the svn lib calls the callback function, passing the data
 - the UI client receives the data and copies it into one big memory buffer
 - the svn lib deallocates the memory for the data

Memory allocations/deallocations are slow, especially in
multi-threaded processes (meaning: not a big problem for the
command-line client, but it is one for UI clients).
In this scenario there are 100k allocations and deallocations, which
could be reduced to one big allocation and one deallocation.

>> * it is much faster (and I mean *much* faster here, from several
>> seconds or even minutes down to a few milliseconds or maybe two or
>> three seconds)
>> * in case there's not enough RAM available: I can always tell users
>> to install more RAM to get it working. But there's no way to make it
>> faster with the current callback implementation - there just are no
>> faster harddrives or much faster processors.
> If the callback takes care of allocations, it can fail more gracefully
> than the libraries can. E.g. the callback could decide to cancel the
> operation, or to display data it's already got, free some memory, and
> continue.
>> * the chance that there's not enough RAM available is very small:
>> assuming a million properties, each property using 1kb, will result
>> in 1GB of RAM - most computers today have 3GB, new ones have 4GB and
>> more. So even in such rare situations with *huge* working copies the
>> chance of too little RAM is very small.
> Some operating systems still have resource limits that are lower than that.

Yes. What's your point?

>> So: for UI clients please provide fast APIs that use more RAM - keep
>> the existing APIs that use as little memory as possible for those
>> clients who need those.
> The libraries provide great flexibility with just one API.
> The existing API already gives you the option of using memory the way
> you want. So I don't see a reason to add a special-purpose API that
> does the allocation on behalf of the callback.

Performance. Performance.
That's what my whole post was about.
I never questioned the flexibility, or claimed that I wasn't able to
get the data I want with the existing APIs.
Again: I want those additional APIs for performance reasons.
Not because I can't get what I want with the existing APIs.
Not because I'm too lazy to do the memory allocations myself in the callback.
Not because the APIs aren't flexible enough.
But because of performance.

I hope I made myself clear this time.


  oo  // \\      "De Chelonian Mobile"
 (_,\/ \_/ \     TortoiseSVN
   \ \_/_\_/>    The coolest Interface to (Sub)Version Control
   /_/   \_\     http://tortoisesvn.net
Received on 2011-03-01 13:21:14 CET

This is an archived mail posted to the Subversion Dev mailing list.
