
Re: Core dump checking out large repositories

From: Dominic Mitchell <dom_at_happygiraffe.net>
Date: 2005-01-22 17:15:15 CET

On Fri, Jan 21, 2005 at 07:39:00PM -0700, SmartServ Hosting wrote:
> %uname -rs
> %du -sh /svnroot
> 7.5M /svnroot/
> Subversion: 1.1.1
> %svn co file:///svnroot/trunk
> [some files checkout/not all files]
> Segmentation fault (core dumped)
> %
> backtrace:
> (gdb) bt full
> #0 0x280d97f3 in update_entry (b=Cannot access memory at address 0xbfad1d98
> ) at subversion/libsvn_repos/reporter.c:589
> s_root = (svn_fs_root_t *) Cannot access memory at address 0xbfad1ddc

This is exactly the same problem that I am seeing, on FreeBSD-CURRENT,
although I was accessing it over WebDAV. Now that I try it using a
file:/// URL, it fails too. Which is odd, because it didn't fail the
first time I tried that...

At least this should make the debugging easier, without Apache in the way.


To unsubscribe, e-mail: users-unsubscribe@subversion.tigris.org
For additional commands, e-mail: users-help@subversion.tigris.org
Received on Sat Jan 22 17:17:39 2005

This is an archived mail posted to the Subversion Users mailing list.