
Re: [Subclipse-dev] Revision graph and cache implementation

From: Alberto Gimeno <gimenete_at_gmail.com>
Date: Thu, 31 Jul 2008 21:49:48 +0200

Hi!

Thank you very much for your comments. I've been too busy today to
read them. I've been collaborating with Google here in Valencia
(Spain): I talked about my experience with Summer of Code in general
terms (timeline, payments, personal experience, ...) and a little bit
about the project. They will put the video on Google Spain's official
YouTube channel. Sorry, it'll be in Spanish :P

Now the sky is dark and I'm tired, so I'll reply to you tomorrow. Thanks again :)

On Thu, Jul 31, 2008 at 5:49 PM, <Stefan.Fuhrmann_at_etas.com> wrote:
> "Alberto Gimeno" <gimenete_at_gmail.com> wrote on 07/23/2008 11:46:53 PM:
>
>> In your conclusions you say that the current implementation is not
>> scalable ("Large histories will take days or even weeks to get
>> cached"). I agree that this is because of the 'exploding' issue.
>
> Only as a side note: If I remember correctly, it took me ~30 hours
> to fetch the log from apache.org. So, waiting days for the initial
> cache fill is already the case for certain large, public repository
> servers.
>
>> About the second approach you say that it "is likely to deliver a worse
>> 'perceived' performance". I think you say that because this approach
>> needs to do more work when the user wants to see a graph. With the
>> current implementation all graphs are calculated at the same time.
>> With the 'alternative' approach this is different: the user needs to
>> wait longer for each individual graph.
>
> Agreed. My point about the "perceived" performance was as follows:
> you get the best user experience when things are fast enough to be
> interactive. So, just loading the results should be considerably
> faster than creating them. On the other hand, this advantage will
> soon be offset by the time necessary to update an "exploded" cache.
>
> Another side note: there is a way to combine both approaches.
> It would calculate file ids and organize changes by file id, but would
> only add a smallish, constant factor to the data size. I made some
> sketches a few months ago and hope to get the design done by
> the end of this year. If all goes well, it will provide interactive
> performance while effectively evaluating all copy and merge graphs
> at once.
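
(Just to check that I understand "organize changes by file id" correctly,
here is a tiny sketch in Java of how I picture it. All class and field
names are made up for illustration; this is not actual Subclipse code.)

// Tiny sketch only: a cache keyed by file id, so building the graph for one
// file becomes an indexed lookup instead of a scan over the whole log.
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class FileIdCache {

    /** One logical change to a file in some revision. */
    static class Change {
        final long revision;
        final String path;
        final char action;         // 'A', 'M', 'D', 'R' as reported by svn log
        final long copyFromFileId; // -1 when the change is not a copy

        Change(long revision, String path, char action, long copyFromFileId) {
            this.revision = revision;
            this.path = path;
            this.action = action;
            this.copyFromFileId = copyFromFileId;
        }
    }

    // file id -> ordered list of changes to that file
    private final Map<Long, List<Change>> changesByFileId =
            new HashMap<Long, List<Change>>();

    // Called once per changed path while walking the log, so the extra cost
    // per log entry stays a small, constant amount of work and storage.
    void record(long fileId, Change change) {
        List<Change> changes = changesByFileId.get(fileId);
        if (changes == null) {
            changes = new ArrayList<Change>();
            changesByFileId.put(fileId, changes);
        }
        changes.add(change);
    }

    // Building one file's graph no longer has to touch unrelated history.
    List<Change> changesFor(long fileId) {
        List<Change> changes = changesByFileId.get(fileId);
        return changes != null ? changes : Collections.<Change>emptyList();
    }
}

With something like this, drawing the graph for one file would only touch
that file's own change list, which I guess is where the small constant
overhead pays off.
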
>
>> However, which implementation do you think is best? I think the
>> second one, because it is scalable.
>
> Agreed. With your "incremental graph update" proposal it should
> scale well for all common uses.
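
(And a rough sketch of how I picture the "incremental graph update", so you
can tell me if I got the idea right. LogFetcher and CacheStore are
placeholders here, not real Subclipse or SVNKit interfaces.)

import java.util.List;

class LogEntry {
    long revision;
    // changed paths, author, date, ... left out of this sketch
}

interface LogFetcher {
    /** Fetch the log entries for the half-open revision range (fromRev, toRev]. */
    List<LogEntry> fetch(long fromRev, long toRev);
}

interface CacheStore {
    /** Highest revision that is already in the cache. */
    long lastCachedRevision();

    /** Persist the new tail of the history. */
    void append(List<LogEntry> entries);
}

class IncrementalUpdater {
    private final LogFetcher fetcher;
    private final CacheStore cache;

    IncrementalUpdater(LogFetcher fetcher, CacheStore cache) {
        this.fetcher = fetcher;
        this.cache = cache;
    }

    // The cost is proportional to the number of new revisions, not to the
    // whole history, which is what should keep the cache update scalable.
    void update(long headRevision) {
        long last = cache.lastCachedRevision();
        if (last >= headRevision) {
            return; // cache is already up to date
        }
        cache.append(fetcher.fetch(last, headRevision));
    }
}
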
>
>
> Regards,
> Stefan^2.

-- 
Alberto Gimeno Brieba
President and founder of
Ribe Software S.L.
http://www.ribesoftware.com
ribe_at_ribesoftware.com
Personal contact
eMail: gimenete_at_gmail.com
GTalk: gimenete_at_gmail.com
msn: gimenete_at_hotmail.com
website: http://gimenete.net
mobile phone: +34 625 24 64 81