
[PATCH] Implement recovery for FSFS

From: Malcolm Rowe <malcolm-svn-dev_at_farside.org.uk>
Date: 2007-03-05 06:09:11 CET

All,

The attached patch makes "svnadmin recover" do something useful on FSFS
filesystems. Specifically, it will now recreate the db/current file
from scratch.

While this doesn't fix a problem that I'm aware of our users encountering
(db/current going missing or becoming corrupt), it is a feature that's
useful for repository admins, as it allows them much more flexibility
with their backups.

For example, one backup option that's now possible is running asynchronous,
parallel backups of the rev files as they are renamed into revs/ (and
revprops/), presumably from a post-commit script. Previously, all the
backup jobs had to serialise on db/current; now they can ignore db/current
entirely and just recreate it when a restore is required. (It also no
longer matters that, when commits are arriving that quickly, you might
never capture a db/current file that's consistent with any particular
backup.)
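
(For anyone who hasn't looked inside db/current: it's a single line of the
form "<youngest-rev> <next-node-id> <next-copy-id>", which the patch writes
with "%ld %s %s\n"; a freshly created repository starts out with "0 1 1".
The standalone sketch below is illustrative only and not part of the patch;
the example revision number, the alphanumeric keys, and the buffer sizes
are all made up.)

/* Illustrative only: parse a db/current line of the form
   "<youngest-rev> <next-node-id> <next-copy-id>\n". */
#include <stdio.h>

int main(void)
{
  const char *line = "42 5j 3k\n";   /* hypothetical db/current contents */
  long youngest;
  char next_node_id[64], next_copy_id[64];

  if (sscanf(line, "%ld %63s %63s", &youngest, next_node_id, next_copy_id) == 3)
    printf("youngest=%ld, next node-id=%s, next copy-id=%s\n",
           youngest, next_node_id, next_copy_id);

  return 0;
}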

Recovery isn't fast - it needs to scan all the revision files in the
repository - but by hand-coding some of the logic for reading the files,
I've managed to keep it O(N) for N revisions rather than the O(insanity)
that the pre-existing generic read functions would have caused.
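
(The first step of that scan is finding the youngest revision without
relying on a directory listing: double an upper bound until the rev file
stops existing, then bisect. Here's a toy, standalone sketch of just that
search; revision_exists() and find_youngest() are hypothetical stand-ins
for the svn_io_check_path() test that recover_get_largest_revision() in
the patch performs on the revision file.)

/* Toy sketch of the "double, then bisect" search used by
   recover_get_largest_revision().  Not part of the patch. */
#include <stdio.h>

static int revision_exists(long rev)
{
  return rev <= 41;   /* pretend the youngest revision is r41 */
}

static long find_youngest(void)
{
  long left, right = 1;

  /* Keep doubling RIGHT until we hit a revision that doesn't exist.
     Revision 0 always exists, so LEFT = RIGHT/2 is a valid lower bound. */
  while (revision_exists(right))
    right <<= 1;
  left = right >> 1;

  /* LEFT exists, RIGHT doesn't: bisect between them. */
  while (left + 1 < right)
    {
      long probe = left + (right - left) / 2;

      if (revision_exists(probe))
        left = probe;
      else
        right = probe;
    }

  return left;
}

int main(void)
{
  printf("youngest = %ld\n", find_youngest());   /* prints "youngest = 41" */
  return 0;
}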

Comments most welcome. If there are no objections, I'd like to commit
this shortly, so I'd really appreciate any time that anyone can give
towards a review.

Many thanks,
Malcolm

[[[
FSFS: Make 'svnadmin recover' recreate the db/current file.

This makes it possible for repository admins to back up the repository
using either a naive incremental backup strategy or a parallelised
asynchronous strategy, without requiring them to pick a serialisation point
to back up the 'current' file (a point that doesn't exist in the parallelised
case).

* subversion/svnadmin/main.c
  (cmd_table): Adjust the description for 'recover' to differentiate
    the BDB and FSFS cases.

* subversion/include/svn_fs.h
  (svn_fs_berkeley_recover): Add notes to explain functionality for FSFS.

* subversion/libsvn_fs/fs-loader.h
  (struct fs_library_vtable_t): Rename the recover() vtable function from
    bdb_recover(), and move it out of the provider-specific section.

* subversion/libsvn_fs/fs-loader.c
  (svn_fs_berkeley_recover): Adjust for the above rename.

* subversion/libsvn_fs_fs/fs_fs.c
  (write_current): New function to recreate db/current, factored out
    from write_final_current().
  (write_final_current): Use write_current() to write the current file.

  (recover_get_largest_revision): New. Determines the largest
    revision in a filesystem.
  (struct recover_read_from_file_baton, read_handler_recover): New.
    Support for streams wrapping a range of bytes in an APR file.
  (recover_find_max_ids): New. Recursively determines the largest
    node and copy id used in a revision.
  (recover_body): New. Body of the recovery function, called under the
    fs-wide write-lock.
  (svn_fs_fs__recover): New. Calls recover_body() under the write-lock.

* subversion/libsvn_fs_fs/fs_fs.h
  (svn_fs_fs__recover): New.

* subversion/libsvn_fs_fs/fs.c
  (fs_recover): Implement by delegating to svn_fs_fs__recover().

* notes/fsfs
  (Note: Backups): Note that the suggested improvement is now implemented.
]]]
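
(Not part of the patch, but possibly handy for review: a rough driver for
exercising the new code path from C rather than via 'svnadmin recover'.
It assumes a build with this patch applied; note that
svn_fs_berkeley_recover() takes the filesystem path, i.e. normally the
repository's db/ directory rather than the repository root. Error handling
is abbreviated.)

/* Rough, hypothetical driver: run recovery on the filesystem at argv[1]. */
#include <stdio.h>
#include <apr_general.h>
#include <svn_pools.h>
#include <svn_error.h>
#include <svn_fs.h>

int main(int argc, const char **argv)
{
  apr_pool_t *pool;
  svn_error_t *err;
  int exit_code = 0;

  if (argc < 2)
    {
      fprintf(stderr, "usage: %s FS_PATH\n", argv[0]);
      return 1;
    }

  apr_initialize();
  pool = svn_pool_create(NULL);

  /* Despite the name, this now dispatches through the renamed recover()
     vtable entry, so it covers FSFS as well as BDB. */
  err = svn_fs_berkeley_recover(argv[1], pool);
  if (err)
    {
      svn_handle_error2(err, stderr, FALSE, "recover: ");
      svn_error_clear(err);
      exit_code = 1;
    }

  svn_pool_destroy(pool);
  apr_terminate();
  return exit_code;
}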

Index: subversion/svnadmin/main.c
===================================================================
--- subversion/svnadmin/main.c (revision 23493)
+++ subversion/svnadmin/main.c (working copy)
@@ -379,9 +379,9 @@ static const svn_opt_subcommand_desc_t c
 
   {"recover", subcommand_recover, {0}, N_
    ("usage: svnadmin recover REPOS_PATH\n\n"
- "Run the Berkeley DB recovery procedure on a repository. Do\n"
- "this if you've been getting errors indicating that recovery\n"
- "ought to be run. Recovery requires exclusive access and will\n"
+ "Run the recovery procedure on a repository. Do this if you've\n"
+ "been getting errors indicating that recovery ought to be run.\n"
+ "Berkeley DB recovery requires exclusive access and will\n"
     "exit if the repository is in use by another process.\n"),
    {svnadmin__wait} },
 
Index: subversion/include/svn_fs.h
===================================================================
--- subversion/include/svn_fs.h (revision 23493)
+++ subversion/include/svn_fs.h (working copy)
@@ -262,10 +262,16 @@ svn_error_t *svn_fs_set_berkeley_errcall
                                          void (*handler)(const char *errpfx,
                                                          char *msg));
 
-/** Perform any necessary non-catastrophic recovery on a Berkeley
- * DB-based Subversion filesystem, stored in the environment @a path.
+/** Perform any necessary non-catastrophic recovery on the Subversion
+ * filesystem located at @a path. Despite the name, this operates on
+ * FSFS filesystems as well (though prior to 1.5.0, it was a no-op for
+ * FSFS).
+ *
  * Do any necessary allocation within @a pool.
  *
+ * @note For FSFS filesystems, recovery is currently limited to recreating
+ * the db/current file. Recovery for BDB filesystems is described below.
+ *
  * After an unexpected server exit, due to a server crash or a system
  * crash, a Subversion filesystem based on Berkeley DB needs to run
  * recovery procedures to bring the database back into a consistent
Index: subversion/libsvn_fs/fs-loader.h
===================================================================
--- subversion/libsvn_fs/fs-loader.h (revision 23493)
+++ subversion/libsvn_fs/fs-loader.h (working copy)
@@ -72,10 +72,10 @@ typedef struct fs_library_vtable_t
   svn_error_t *(*hotcopy)(const char *src_path, const char *dest_path,
                           svn_boolean_t clean, apr_pool_t *pool);
   const char *(*get_description)(void);
+ svn_error_t *(*recover)(const char *path, apr_pool_t *pool);
 
   /* Provider-specific functions should go here, even if they could go
      in an object vtable, so that they are all kept together. */
- svn_error_t *(*bdb_recover)(const char *path, apr_pool_t *pool);
   svn_error_t *(*bdb_logfiles)(apr_array_header_t **logfiles,
                                const char *path, svn_boolean_t only_unused,
                                apr_pool_t *pool);
Index: subversion/libsvn_fs/fs-loader.c
===================================================================
--- subversion/libsvn_fs/fs-loader.c (revision 23493)
+++ subversion/libsvn_fs/fs-loader.c (working copy)
@@ -452,7 +452,7 @@ svn_fs_berkeley_recover(const char *path
   fs_library_vtable_t *vtable;
 
   SVN_ERR(fs_library_vtable(&vtable, path, pool));
- return vtable->bdb_recover(path, pool);
+ return vtable->recover(path, pool);
 }
 
 svn_error_t *
Index: subversion/libsvn_fs_fs/fs_fs.c
===================================================================
--- subversion/libsvn_fs_fs/fs_fs.c (revision 23493)
+++ subversion/libsvn_fs_fs/fs_fs.c (working copy)
@@ -4078,6 +4078,34 @@ svn_fs_fs__move_into_place(const char *o
   return err;
 }
 
+/* Atomically update the current file to hold the specified REV, NEXT_NODE_ID,
+ and NEXT_COPY_ID. Perform temporary allocations in POOL. */
+static svn_error_t *
+write_current(svn_fs_t *fs, svn_revnum_t rev, const char *next_node_id,
+ const char *next_copy_id, apr_pool_t *pool)
+{
+ char *buf;
+ const char *tmp_name, *name;
+ apr_file_t *file;
+
+ /* Now we can just write out this line. */
+ buf = apr_psprintf(pool, "%ld %s %s\n", rev, next_node_id, next_copy_id);
+
+ name = path_current(fs, pool);
+ SVN_ERR(svn_io_open_unique_file2(&file, &tmp_name, name, ".tmp",
+ svn_io_file_del_none, pool));
+
+ SVN_ERR(svn_io_file_write_full(file, buf, strlen(buf), NULL, pool));
+
+ SVN_ERR(svn_io_file_flush_to_disk(file, pool));
+
+ SVN_ERR(svn_io_file_close(file, pool));
+
+ SVN_ERR(svn_fs_fs__move_into_place(tmp_name, name, name, pool));
+
+ return SVN_NO_ERROR;
+}
+
 /* Update the current file to hold the correct next node and copy_ids
    from transaction TXN_ID in filesystem FS. The current revision is
    set to REV. Perform temporary allocations in POOL. */
@@ -4092,9 +4120,6 @@ write_final_current(svn_fs_t *fs,
   const char *txn_node_id, *txn_copy_id;
   char new_node_id[MAX_KEY_SIZE + 2];
   char new_copy_id[MAX_KEY_SIZE + 2];
- char *buf;
- const char *tmp_name, *name;
- apr_file_t *file;
   
   /* To find the next available ids, we add the id that used to be in
      the current file, to the next ids from the transaction file. */
@@ -4103,23 +4128,7 @@ write_final_current(svn_fs_t *fs,
   svn_fs_fs__add_keys(start_node_id, txn_node_id, new_node_id);
   svn_fs_fs__add_keys(start_copy_id, txn_copy_id, new_copy_id);
 
- /* Now we can just write out this line. */
- buf = apr_psprintf(pool, "%ld %s %s\n", rev, new_node_id,
- new_copy_id);
-
- name = path_current(fs, pool);
- SVN_ERR(svn_io_open_unique_file2(&file, &tmp_name, name, ".tmp",
- svn_io_file_del_none, pool));
-
- SVN_ERR(svn_io_file_write_full(file, buf, strlen(buf), NULL, pool));
-
- SVN_ERR(svn_io_file_flush_to_disk(file, pool));
-
- SVN_ERR(svn_io_file_close(file, pool));
-
- SVN_ERR(svn_fs_fs__move_into_place(tmp_name, name, name, pool));
-
- return SVN_NO_ERROR;
+ return write_current(fs, rev, new_node_id, new_copy_id, pool);
 }
 
 /* Get a write lock in FS, creating it in POOL. */
@@ -4495,6 +4504,315 @@ svn_fs_fs__create(svn_fs_t *fs,
   return SVN_NO_ERROR;
 }
 
+/* Part of the recovery procedure. Set *REV to the largest revision that
+ exists in filesystem FS. Use POOL for temporary allocation. */
+static svn_error_t *
+recover_get_largest_revision(svn_fs_t *fs, svn_revnum_t *rev, apr_pool_t *pool)
+{
+ /* Discovering the largest revision in the filesystem would be an
+ expensive operation if we did a readdir() or searched linearly,
+ so we'll do a form of binary search. left is a revision that we
+ know exists, right a revision that we know does not exist. */
+ apr_pool_t *iterpool;
+ svn_revnum_t left, right = 1;
+
+ iterpool = svn_pool_create(pool);
+ /* Keep doubling right, until we find a revision that doesn't exist. */
+ while (1)
+ {
+ svn_node_kind_t kind;
+ SVN_ERR(svn_io_check_path(svn_fs_fs__path_rev(fs, right, iterpool),
+ &kind, iterpool));
+ svn_pool_clear(iterpool);
+
+ if (kind == svn_node_none)
+ break;
+
+ right <<= 1;
+ }
+
+ left = right >> 1;
+
+ /* We know that left exists and right doesn't. Do a normal bsearch to find
+ the last revision. */
+ while (left + 1 < right)
+ {
+ svn_revnum_t probe = left + ((right - left) / 2);
+ svn_node_kind_t kind;
+
+ SVN_ERR(svn_io_check_path(svn_fs_fs__path_rev(fs, probe, iterpool),
+ &kind, iterpool));
+ svn_pool_clear(iterpool);
+
+ if (kind == svn_node_none)
+ right = probe;
+ else
+ left = probe;
+ }
+
+ svn_pool_destroy(iterpool);
+
+ /* left is now the largest revision that exists. */
+ *rev = left;
+ return SVN_NO_ERROR;
+}
+
+/* A baton for reading a fixed amount from an open file. For
+ recover_find_max_ids() below. */
+struct recover_read_from_file_baton
+{
+ apr_file_t *file;
+ apr_pool_t *pool;
+ apr_size_t remaining;
+};
+
+/* A stream read handler used by recover_find_max_ids() below.
+ Read and return at most BATON->REMAINING bytes from the stream,
+ returning nothing after that to indicate EOF. */
+static svn_error_t *
+read_handler_recover(void *baton, char *buffer, apr_size_t *len)
+{
+ struct recover_read_from_file_baton *b = baton;
+ apr_size_t bytes_to_read = *len;
+
+ if (b->remaining == 0)
+ {
+ /* Return a successful read of zero bytes to signal EOF. */
+ *len = 0;
+ return SVN_NO_ERROR;
+ }
+
+ if (bytes_to_read > b->remaining)
+ bytes_to_read = b->remaining;
+ b->remaining -= bytes_to_read;
+
+ return svn_io_file_read_full(b->file, buffer, bytes_to_read, len, b->pool);
+}
+
+/* Part of the recovery procedure. Read the directory noderev at offset
+ OFFSET of file REV_FILE (the revision file of revision REV of
+ filesystem FS), and update MAX_NODE_ID and MAX_COPY_ID to hold the
+ largest node-id and copy-id seen among the directory's entries that
+ were touched in this revision. Recurse into any child directories
+ that were also modified in this revision.
+
+ MAX_NODE_ID and MAX_COPY_ID must be arrays of at least MAX_KEY_SIZE
+ bytes.
+
+ Perform temporary allocation in POOL. */
+static svn_error_t *
+recover_find_max_ids(svn_fs_t *fs, svn_revnum_t rev,
+ apr_file_t *rev_file, apr_off_t offset,
+ char *max_node_id, char *max_copy_id,
+ apr_pool_t *pool)
+{
+ apr_hash_t *headers;
+ char *value;
+ node_revision_t noderev;
+ struct rep_args *ra;
+ struct recover_read_from_file_baton baton;
+ svn_stream_t *stream;
+ apr_hash_t *entries;
+ apr_hash_index_t *hi;
+ apr_pool_t *iterpool;
+
+ SVN_ERR(svn_io_file_seek(rev_file, APR_SET, &offset, pool));
+ SVN_ERR(read_header_block(&headers, rev_file, pool));
+
+ /* We're going to populate a skeletal noderev - just the id and data_rep. */
+ value = apr_hash_get(headers, HEADER_ID, APR_HASH_KEY_STRING);
+ noderev.id = svn_fs_fs__id_parse(value, strlen(value), pool);
+
+ /* Check that this is a directory. It should be. */
+ value = apr_hash_get(headers, HEADER_TYPE, APR_HASH_KEY_STRING);
+ if (value == NULL || strcmp(value, KIND_DIR) != 0)
+ return svn_error_create(SVN_ERR_FS_CORRUPT, NULL,
+ _("Recovery encountered a non-directory node"));
+
+ /* Get the data location. No data location indicates an empty directory. */
+ value = apr_hash_get(headers, HEADER_TEXT, APR_HASH_KEY_STRING);
+ if (!value)
+ return SVN_NO_ERROR;
+ SVN_ERR(read_rep_offsets(&noderev.data_rep, value, NULL, FALSE, pool));
+
+ /* If the directory's data representation wasn't changed in this revision,
+ we've already scanned the directory's contents for noderevs, so we don't
+ need to again. This will occur if a property is changed on a directory
+ without changing the directory's contents. */
+ if (noderev.data_rep->revision != rev)
+ return SVN_NO_ERROR;
+
+ /* We could use get_dir_contents(), but this is much cheaper. It does
+ rely on directory entries being stored as PLAIN reps, though. */
+ offset = noderev.data_rep->offset;
+ SVN_ERR(svn_io_file_seek(rev_file, APR_SET, &offset, pool));
+ SVN_ERR(read_rep_line(&ra, rev_file, pool));
+ if (ra->is_delta)
+ return svn_error_create(SVN_ERR_FS_CORRUPT, NULL,
+ _("Recovery encountered a deltified directory "
+ "representation"));
+
+ /* Now create a stream that's allowed to read only as much data as is
+ stored in the representation. */
+ baton.file = rev_file;
+ baton.pool = pool;
+ baton.remaining = noderev.data_rep->expanded_size;
+ stream = svn_stream_create(&baton, pool);
+ svn_stream_set_read(stream, read_handler_recover);
+
+ /* Now read the entries from that stream. */
+ entries = apr_hash_make(pool);
+ SVN_ERR(svn_hash_read2(entries, stream, SVN_HASH_TERMINATOR, pool));
+ SVN_ERR(svn_stream_close(stream));
+
+ /* Now check each of the entries in our directory to find new node and
+ copy ids, and recurse into new subdirectories. */
+ iterpool = svn_pool_create(pool);
+ for (hi = apr_hash_first(NULL, entries); hi; hi = apr_hash_next(hi))
+ {
+ void *val;
+ char *str_val;
+ char *str, *last_str;
+ svn_node_kind_t kind;
+ svn_fs_id_t *id;
+ const char *node_id, *copy_id;
+ apr_off_t child_dir_offset;
+
+ svn_pool_clear(iterpool);
+
+ apr_hash_this(hi, NULL, NULL, &val);
+ str_val = apr_pstrdup(iterpool, *((char **)val));
+
+ str = apr_strtok(str_val, " ", &last_str);
+ if (str == NULL)
+ return svn_error_create(SVN_ERR_FS_CORRUPT, NULL,
+ _("Directory entry corrupt"));
+
+ if (strcmp(str, KIND_FILE) == 0)
+ kind = svn_node_file;
+ else if (strcmp(str, KIND_DIR) == 0)
+ kind = svn_node_dir;
+ else
+ {
+ return svn_error_create(SVN_ERR_FS_CORRUPT, NULL,
+ _("Directory entry corrupt"));
+ }
+
+ str = apr_strtok(NULL, " ", &last_str);
+ if (str == NULL)
+ return svn_error_create(SVN_ERR_FS_CORRUPT, NULL,
+ _("Directory entry corrupt"));
+
+ id = svn_fs_fs__id_parse(str, strlen(str), iterpool);
+
+ if (svn_fs_fs__id_rev(id) != rev)
+ {
+ /* If the node wasn't modified in this revision, we've already
+ checked the node and copy id. */
+ continue;
+ }
+
+ node_id = svn_fs_fs__id_node_id(id);
+ copy_id = svn_fs_fs__id_copy_id(id);
+
+ if (svn_fs_fs__key_compare(node_id, max_node_id) > 0)
+ strcpy(max_node_id, node_id);
+ if (svn_fs_fs__key_compare(copy_id, max_copy_id) > 0)
+ strcpy(max_copy_id, copy_id);
+
+ if (kind == svn_node_file)
+ continue;
+
+ child_dir_offset = svn_fs_fs__id_offset(id);
+ SVN_ERR(recover_find_max_ids(fs, rev, rev_file, child_dir_offset,
+ max_node_id, max_copy_id, iterpool));
+ }
+ svn_pool_destroy(iterpool);
+
+ return SVN_NO_ERROR;
+}
+
+/* The work-horse for svn_fs_fs__recover, called with the FS
+ write lock. This implements the svn_fs_fs__with_write_lock()
+ 'body' callback type. BATON is a 'svn_fs_t *' filesystem. */
+static svn_error_t *
+recover_body(void *baton, apr_pool_t *pool)
+{
+ svn_fs_t *fs = baton;
+ svn_revnum_t rev, max_rev;
+ apr_pool_t *iterpool;
+ char max_node_id[MAX_KEY_SIZE] = "0", max_copy_id[MAX_KEY_SIZE] = "0";
+ char next_node_id[MAX_KEY_SIZE], next_copy_id[MAX_KEY_SIZE];
+ apr_size_t len;
+
+ /* First, we need to know the largest revision in the filesystem. */
+ SVN_ERR(recover_get_largest_revision(fs, &max_rev, pool));
+
+ /* Next we need to find the maximum node id and copy id in use across the
+ filesystem. Unfortunately, the only way we can get this information
+ is to scan all the noderevs of all the revisions and keep track as
+ we go along. */
+ iterpool = svn_pool_create(pool);
+ for (rev = 0; rev <= max_rev; rev++)
+ {
+ apr_file_t *rev_file;
+ apr_off_t root_offset;
+
+ svn_pool_clear(iterpool);
+
+ SVN_ERR(svn_io_file_open(&rev_file,
+ svn_fs_fs__path_rev(fs, rev, iterpool),
+ APR_READ | APR_BUFFERED, APR_OS_DEFAULT,
+ iterpool));
+ SVN_ERR(get_root_changes_offset(&root_offset, NULL, rev_file, iterpool));
+ SVN_ERR(recover_find_max_ids(fs, rev, rev_file, root_offset,
+ max_node_id, max_copy_id, iterpool));
+ }
+ svn_pool_destroy(iterpool);
+
+ /* Now that we finally have the maximum revision, node-id and copy-id, we
+ can bump the two ids to get the next of each, and store them all in a
+ new current file. */
+ len = strlen(max_node_id);
+ svn_fs_fs__next_key(max_node_id, &len, next_node_id);
+ len = strlen(max_copy_id);
+ svn_fs_fs__next_key(max_copy_id, &len, next_copy_id);
+
+ SVN_ERR(write_current(fs, max_rev, next_node_id, next_copy_id, pool));
+
+ return SVN_NO_ERROR;
+}
+
+svn_error_t *
+svn_fs_fs__recover(const char *path,
+ apr_pool_t *pool)
+{
+ svn_fs_t *fs;
+ /* Recovery for FSFS is currently limited to recreating the current
+ file from the latest revision. */
+
+ /* Things are much easier if we can just use a regular fs pointer.
+ The only thing we have to watch out for is that the current file
+ might not exist. So we'll try to create it here unconditionally,
+ and just ignore any errors that might indicate that it's already
+ present. (We'll need it to exist later anyway as a source for the
+ new file's permissions). */
+
+ /* Create a dummy fs pointer first to create current. This will fail
+ if it already exists, but we don't care about that. */
+ fs = svn_fs_new(NULL, pool);
+ fs->path = (char *)path;
+ svn_error_clear(svn_io_file_create(path_current(fs, pool), "0 1 1\n", pool));
+
+ /* We should now be able to reopen the filesystem properly. */
+ SVN_ERR(svn_fs_open(&fs, path, NULL, pool));
+
+ /* We have no way to take out an exclusive lock in FSFS, so we're
+ restricted as to the types of recovery we can do. Luckily,
+ we just want to recreate the current file, and we can do that just
+ by blocking other writers. */
+ return svn_fs_fs__with_write_lock(fs, recover_body, fs, pool);
+}
+
 svn_error_t *
 svn_fs_fs__get_uuid(svn_fs_t *fs,
                     const char **uuid_p,
Index: subversion/libsvn_fs_fs/fs_fs.h
===================================================================
--- subversion/libsvn_fs_fs/fs_fs.h (revision 23493)
+++ subversion/libsvn_fs_fs/fs_fs.h (working copy)
@@ -32,6 +32,11 @@ svn_error_t *svn_fs_fs__hotcopy(const ch
                                 const char *dst_path,
                                 apr_pool_t *pool);
 
+/* Recover the fsfs filesystem at PATH.
+ Use POOL for temporary allocations. */
+svn_error_t *svn_fs_fs__recover(const char *path,
+ apr_pool_t *pool);
+
 /* Set *NODEREV_P to the node-revision for the node ID in FS. Do any
    allocations in POOL. */
 svn_error_t *svn_fs_fs__get_node_revision(node_revision_t **noderev_p,
Index: subversion/libsvn_fs_fs/fs.c
===================================================================
--- subversion/libsvn_fs_fs/fs.c (revision 23493)
+++ subversion/libsvn_fs_fs/fs.c (working copy)
@@ -224,14 +224,14 @@ fs_hotcopy(const char *src_path,
 
 
 
-/* This function is included for Subversion 1.0.x compatibility. It has
- no effect for fsfs backed Subversion filesystems. It conforms to
- the fs_library_vtable_t.bdb_recover() API. */
+/* This implements the fs_library_vtable_t.recover() API.
+ Recover the Subversion filesystem at PATH.
+ Perform all temporary allocations in POOL. */
 static svn_error_t *
 fs_recover(const char *path,
            apr_pool_t *pool)
 {
- /* This is a no-op for FSFS. */
+ SVN_ERR(svn_fs_fs__recover(path, pool));
 
   return SVN_NO_ERROR;
 }
Index: notes/fsfs
===================================================================
--- notes/fsfs (revision 23493)
+++ notes/fsfs (working copy)
@@ -237,9 +237,10 @@ populated when it was copied.
 The "svnadmin hotcopy" command avoids this problem by copying the
 "current" file before copying the revision files. But a backup using
 the hotcopy command isn't as efficient as a straight incremental
-backup. FSFS may evolve so that "svnadmin recover" (currently a
-no-op) knows how to recover from the inconsistency which might result
-from a naive backup.
+backup. As of Subversion 1.5.0, "svnadmin recover" is able to recover
+from the inconsistency which might result from a naive backup by
+recreating the "current" file. However, this does require reading
+every revision file in the repository, and so may take some time.
 
 Naively copying an FSFS repository might also copy in-progress
 transactions, which would become stale and take up extra room until
@@ -252,5 +253,4 @@ repository, configure the software to co
 the numbered revision files, if possible, and configure it not to copy
 the "transactions" directory. If you can't do those things, use
 "svnadmin hotcopy", or be prepared to cope with the very occasional
-need for manual repair of the repository upon restoring it from
-backup.
+need for repair of the repository upon restoring it from backup.
