On 10/2/06, root <firstname.lastname@example.org> wrote:
> > > svn: REPORT request failed on '/svnroot/magnus/!svn/vcc/default'
> > > svn: REPORT of '/svnroot/magnus/!svn/vcc/default': Could not read response body: Secure connection truncated (https://svn.sourceforge.net)
> > Something is very messed up with sourceforge's svn server. Report
> > this to sourceforge! It's not normal at all.
> I fully agree that sourceforge is broken. My gut feel (and this is
> just a guess) is that SVN relies heavily on tmp space in order to
> do 'transactions'. sourceforge probably randomly runs out of tmp space
> depending on how many SVN sessions are running. Since sourceforge would
> not SEE the errors they can validly claim that it isn't their fault.
The error message above looks like a full-out apache crash, not a
simple 'ran out of tmp space'. The sourceforge guys should see a real
crash in their logs.
Yes, when you run 'svn update', a tiny description of your working
copy is sent to the server (what versions of what files you have), and
it sits in a tmpfile. The server then reads that tmpfile and knows
what to send back. The tmpfile is automatically deleted when the
request finishes.
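The tmpfile dance above can be illustrated with Python's tempfile
module. This is just a sketch of the pattern, not SVN's actual wire
protocol: the report format, the HEAD value, and the file names are
all made up for the example. The client's report is spooled to a
temporary file, read back to decide what to send, and cleaned up
automatically.

```python
import tempfile

# Hypothetical working-copy report: "path revision" per line,
# describing what the client already has.
report = "README 40\nMakefile 42\n"

# Spool the report to a tmpfile; delete=True removes it when the
# 'with' block exits, mirroring the automatic cleanup after a request.
with tempfile.NamedTemporaryFile("w+", delete=True) as tmp:
    tmp.write(report)
    tmp.flush()
    tmp.seek(0)
    # "Server" side: read the report back and decide what is stale
    # (pretend the repository head is r42).
    HEAD = 42
    stale = [line.split()[0] for line in tmp
             if int(line.split()[1]) < HEAD]

print(stale)  # only files older than HEAD need an update
```

The point of the tmpfile is just buffering: the server never has to
hold the whole report in memory, but it does need tmp space for the
duration of each request.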
> My issue isn't with the setup of the system but with the design. It
> is my belief that SVN does not properly recover from these failures in
> any convenient way. It should be possible to automatically retry a
> transaction or, if not that, to restart a transaction. Network and
> storage errors are usually temporary but SVN does not recover well at
> all. Oftentimes my work system gets wedged into a state with 'locks'.
> At work we use 'tortoise' as a Windows front end, and it recommends that we
> run cleanup. (BTW, why have cleanup? Why not just fail to a clean
> state instead?) Cleanup takes a long time and NEVER succeeds. Once a
> transaction fails SVN leaves the world in a broken state and the only
> recovery action available is to redo the work.
I admit, I'm not following your train of thought here. I initially
thought you were talking about retrying commit-attempts in the
repository (which we refer to as 'transactions' before the commit
succeeds.) But I think you're talking about the fragility of the
working copy, rather?
The working copy, unlike CVS, is a completely journaled system. If
you 'kill -9' a cvs update, you're likely to get silently corrupted
data. If you do the same to an svn update, the whole process was
journaled, and that's what 'svn cleanup' is for... to execute the
journals and return the working copy to a consistent state.
Obviously, one cannot 'just fail to a clean state', because
interruptions aren't perfectly predictable. That's why journaling is
a great technique. Granted, journals can't always recover things, but
they usually do.
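The journaling idea described above can be sketched in a few lines of
Python. This is a toy write-ahead log, not SVN's actual on-disk
format: each operation is appended to a journal file before it is
applied, so an interrupted run leaves a replayable record, and a
replay pass is the moral equivalent of 'svn cleanup'.

```python
import json
import os

class Journal:
    """Toy write-ahead log: record each step before applying it."""

    def __init__(self, path):
        self.path = path

    def log(self, op):
        # Append the operation BEFORE performing it, so a crash
        # mid-apply still leaves a replayable record on disk.
        with open(self.path, "a") as f:
            f.write(json.dumps(op) + "\n")

    def replay(self, apply_fn):
        # The 'cleanup' step: re-run every journaled op. Ops must be
        # idempotent, so replaying an already-completed step is harmless.
        if not os.path.exists(self.path):
            return 0
        with open(self.path) as f:
            ops = [json.loads(line) for line in f]
        for op in ops:
            apply_fn(op)
        os.remove(self.path)  # journal done; state is consistent again
        return len(ops)

state = {}
j = Journal("journal.log")
j.log({"set": ["README", "r42"]})
j.log({"set": ["Makefile", "r42"]})
# ...imagine the process was killed right here, before applying...
applied = j.replay(lambda op: state.update({op["set"][0]: op["set"][1]}))
```

Because the journal is written first and the operations are
idempotent, it doesn't matter where the interruption lands: replaying
from the top always converges on the same final state.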
> Claiming I'm an 'outlier' on the usage curve is not a solution.
I didn't claim it was a solution to your problem. I'm just saying
that you're falling into the classic trap of "it doesn't work for me,
therefore it must be a fundamentally broken design, and it must be
lousy for everyone else in the world." That's not a rational position
to take. :-)
The only solution I can recommend is to send detailed transcripts of
your errors to a support list and figure out what's going on in your
*particular* setup. Generalization isn't productive. Specifics are.
> Perhaps I'm using SVN in some stupid way and don't know all of the
> special switches. But really, how hard is this? Checkout, change,
> commit. That's all I ever do. It should 'just work'.
Agreed, it should just work. And it does for 99% of people.
I shouldn't be intruding on your discussions, so I'll bow out. Good
luck to you guys! Tim, if you want to discuss theory/design or just
troubleshoot stuff, I invite you to join email@example.com.
Received on Mon Oct 2 17:37:43 2006