Christopher Kreager wrote:
> ...
> db_recover: PANIC: fatal region error detected; run recovery
> db_recover: DB_ENV->open: DB_RUNRECOVERY: Fatal error, run database recovery
Somehow that reminds me of the old "Keyboard not connected, press F11 to continue".
:-)
> From here:
> At this point, I either have to try creating a new repo again, with
> an import (someone's working copy without .svn folders), or migrate
> back to DB 4.2 or even back to SVN
If you can do that, I would suggest the following:
- Stop all access to the repository; also stop BDB.
- Put your old BDB (is it 4.2?) in place.
- Move the repository aside, e.g. with 'mv -iv REPOS REPOS.old'
- To avoid issues with dump-files getting too big, use the attached
'dumprev' script (after modifying it to your needs) to create
incremental dumps from REPOS.old.
If that fails, *shrug* well, stop here and go home.
You might feel the urge to reach for an axe, but please resist.
- If that worked: switch (back) to BDB 4.3
- create a new (empty) repository that has the name of the old one.
- Use the attached 'loadrev' script (after modifying it to your needs)
to load the incremental dumps into the new repository.
- After that has finished successfully, restart BDB and all access methods.
These scripts have only been tested for dumping from an existing backup
repository (created with 'svnadmin hotcopy ...') and loading into a
new repository under Linux, and they work fine. I wrote 'dumprev' some
time ago and 'loadrev' maybe half an hour ago, and did one test with it.
If you have any questions about it, just ask. :-)
But I use FSFS, so I cannot speak for BDB repositories, especially ones
that already seem damaged.
> I've already tried hotbackup and also dump then load. All of these
> make progress up till the end and fail before completion.
Hmm, then maybe you want to add 'sleep 1' into the loop that loads
the incremental dumpfiles, so that svn has 1s after each load
operation to think about what it just did. ;-)
You might also add 'echo -n "."' after the 'svnadmin dump ...' inside the
loop in 'dumprev' so that you can see that it is still running. I didn't
need that because we only had 56 revnums in our repository.
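Those two tweaks, sketched on the shape of the 'loadrev' loop (the ':'
no-op below just stands in for the real 'svnadmin load' call so that
the sketch runs anywhere; substitute the actual command before use):

```shell
#!/bin/bash
# Sketch: loadrev's loop with a progress dot and a 1s pause per load.
# ':' is a placeholder for 'svnadmin load "$MY_NEW_REPOS" < "$INCDUMP"'.
for INCDUMP in one two three   # stands in for incdumps/*.dump
do
    :                          # <- the real 'svnadmin load ...' goes here
    echo -n "."                # progress indicator, one dot per dump
    sleep 1                    # give svn a second between loads
done
echo " done"                   # prints "... done" for three dumps
```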
> Note: I was able to get SVNADMIN verify f:\repos\Art to run just fine.
> At one point, after doing this, we could update and commit (once or
> twice) and then it failed. Trying this again did not help, and verify
> always completes successfully with no errors *strange
>
> Thanks for everyone's help, please keep the suggestions coming. We
> like the product and have 6 of our 7 repos converted and running just
> fine under 1.2.0, just not this 2+ GB one. All of our other upgrades
> from Oct. 04 have been fine.
>
> Cheers,
Good luck
Dirk
#!/bin/bash
# The directory "incdumps" should contain the incremental dumpfiles
#MAXNUM=56
#MINNUM=0
#NULLSTR="00000"
MY_NEW_REPOS="/srv/svn/REPOS/test_AN"
for INCDUMP in incdumps/*.dump
do
    svnadmin load "$MY_NEW_REPOS" < "$INCDUMP"
done
# for (( REVNUM=0 ; REVNUM<MAXNUM ; REVNUM++ ))
#do
# XLEN=${#REVNUM}
# REVNULL=${NULLSTR:0:5-$XLEN}
# svnadmin load YOUR_REPOSITORY < incdumps/AN.backup.$REVNULL$REVNUM.dump
#done
#!/bin/bash
# Switch off all access methods to the repository,
# stop the database, then copy YOUR_REPOSITORY to
# YOUR_REPOSITORY.copy.
#
# The directory "incdumps" should exist before starting this
MAXNUM=56
MINNUM=0
NULLSTR="00000"
for (( REVNUM=MINNUM ; REVNUM<MAXNUM ; REVNUM++ ))
do
    XLEN=${#REVNUM}
    REVNULL=${NULLSTR:0:5-$XLEN}    # leading zeros to pad REVNUM to 5 digits
    svnadmin dump YOUR_REPOSITORY.copy -r $REVNUM --incremental > incdumps/R-$REVNULL$REVNUM.dump
done
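For reference, the XLEN/REVNULL pair above pads the revision number to
five digits so the dump files sort correctly; a 'printf' one-liner (not
in the original script, just an equivalent) gives the same result:

```shell
#!/bin/bash
NULLSTR="00000"
REVNUM=56
XLEN=${#REVNUM}                 # digits in REVNUM (2 here)
REVNULL=${NULLSTR:0:5-$XLEN}    # the missing leading zeros ("000")
echo "$REVNULL$REVNUM"          # -> 00056
printf '%05d\n' "$REVNUM"       # -> 00056, same padding in one step
```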
---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@subversion.tigris.org
For additional commands, e-mail: users-help@subversion.tigris.org
Received on Mon Jul 4 17:44:55 2005