
[PATCH][DRAFT] Hot copy functionality to eliminate race conditions in hot-backup.py

From: Vladimir Berezniker <vmpn_at_tigris.org>
Date: 2003-08-28 20:12:33 CEST

This is the first draft of the patch. It implements an svnadmin hotcopy
command with an optional --archive-logs flag. If the flag is specified, then
after the copy is complete, the unused *copied* log files are deleted from the
source repository.
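
For example, a backup could then be taken with an invocation like this (the
repository paths here are hypothetical, purely for illustration):

   svnadmin hotcopy /path/to/repos /path/to/backup/repos --archive-logs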

There is one thing you should be aware of. The /db and /locks directories are
*not* copied file by file. Only the files that are supposed to be there are
copied. So if users place their own files in those directories, they will *not*
be copied. However, I believe users should not be putting anything in /db and
/locks, since those directories are specific to a single repository. Any objections?

In this implementation you will see some redundant code. I did that for two
reasons. First, it makes the changes I made very clear and does not touch
existing code at all. Second, I wanted to ask before modifying any existing code.

For the next step I want to consolidate the redundant code. I am asking for
permission to modify existing Subversion code as described in the section
following the log message in this email.

I have one more question, regarding the /dav directory. Do I need to copy it,
or can I discard it? As I understand the code, it is only used to keep track of
activity while a client works with the module, so that data is not applicable
to the copy.

-----------------
Log Message:
-----------------
Implemented hot copy functionality for Subversion. This fixes two race
conditions present in hot-backup.py. First, only the logs that have been
successfully copied are archived, so any log modified while the copy is in
progress is not archived. Second, (svn_repos_hotcopy) takes out a shared lock
on the db lock file, eliminating the possibility of corruption if recovery is
run in parallel with an automated backup.
Updated hot-backup.py to wrap around the new hot copy functionality.

* subversion/include/svn_fs.h
     (svn_fs_hotcopy_berkeley): Added prototype for Berkeley hot copy function.

* subversion/include/svn_repos.h
     (svn_repos_hotcopy): Added prototype for subversion repository hot copy
function.

* subversion/libsvn_fs/fs.c:
     (svn_fs__copy_file): Implemented function for copying a file between two
directories.
     (SVN_ERR_POOL): Implemented macro to handle Subversion errors in code that
uses subpools.
     (svn_fs__archive_logs): Implemented function that archives only the unused
Berkeley DB logs that have already been copied.
     (svn_fs_hotcopy_berkeley): Implemented hot copy functionality in accordance
with Berkeley DB documentation.

* subversion/libsvn_repos/repos.h
     (SVN_REPOS__DB_LOGS_LOCKFILE): Added new definition for the BDB log files
lock file.

* subversion/libsvn_repos/repos.c
     (hotcopy_structure): Implemented function for copying the repository
structure, with the exception of the /db and /locks directories. Based on
(copy_structure).
     (svn_repos_db_logs_lockfile): Implemented function to return path to db
logs lock file.
     (create_db_logs_lock): Implemented function for creation of db logs lock file.
     (lockfile_lock): Implemented function for file locking.
     (lock_db_logs_file): Implemented function for locking the db logs lock file.
     (svn_repos_hotcopy): Implemented function to make a hot copy of a repository.

* subversion/svnadmin/main.c
     Added new flag "archive-logs" to specify that logs are to be archived after
the hot copy is complete.
     (subcommand_hotcopy): Implemented new hotcopy subcommand.

* tools/backup/hot-backup.py.in
     Updated hot backup script to utilize the new hot copy functionality.

------------------------------------------------------------------
Here are my proposed code consolidation steps. (A rough sketch of the proposed
svn_io helpers follows the list.)
------------------------------------------------------------------

* subversion/svnadmin/main.c:
     Factor out code used to parse path to local repository into
parse_repos_local_path() function.

* subversion/libsvn_repos/repos.c:
     Create svn_io_file_create(const char *path, const char * contents,
apr_pool_t *pool).
     Create svn_io_file_lock(const char *path, svn_boolean_t exclusive,
apr_pool_t *pool).

     Teach svn_repos_create and svn_repos_hotcopy to use common copying function.

* subversion/libsvn_fs/fs.c:
     SVN_ERR_POOL(): Move SVN_ERR_POOL macro into subversion/include/svn_error.h.
     Refactor svn_fs__copy_file() as svn_io_dir_file_copy()
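
To make the proposed helpers concrete, here is a rough sketch of what I have in
mind. This is *not* part of the patch; the names and signatures may still
change, and it assumes the usual svn_io.h/svn_error.h/svn_path.h includes. It
simply factors out the patterns already used in the patch by
create_db_logs_lock() and svn_fs__copy_file():

svn_error_t *
svn_io_file_create (const char *path,
                    const char *contents,
                    apr_pool_t *pool)
{
  apr_status_t apr_err;
  apr_file_t *f = NULL;
  apr_size_t written;

  /* Create the file, failing if it already exists. */
  SVN_ERR (svn_io_file_open (&f, path,
                             (APR_WRITE | APR_CREATE | APR_EXCL),
                             APR_OS_DEFAULT, pool));

  /* Write the contents and close, translating APR errors. */
  apr_err = apr_file_write_full (f, contents, strlen (contents), &written);
  if (apr_err)
    return svn_error_createf (apr_err, NULL, "writing file `%s'", path);

  apr_err = apr_file_close (f);
  if (apr_err)
    return svn_error_createf (apr_err, NULL, "closing file `%s'", path);

  return SVN_NO_ERROR;
}

svn_error_t *
svn_io_dir_file_copy (const char *src_path,
                      const char *dest_path,
                      const char *file,
                      apr_pool_t *pool)
{
  /* Join each directory with the file name and delegate to
     svn_io_copy_file(), copying permissions as well. */
  const char *file_src_path = svn_path_join (src_path, file, pool);
  const char *file_dest_path = svn_path_join (dest_path, file, pool);

  return svn_io_copy_file (file_src_path, file_dest_path, TRUE, pool);
}

svn_io_file_lock() would essentially be lockfile_lock() from the patch moved
into svn_io, with the exclusive flag translated to APR_FLOCK_EXCLUSIVE or
APR_FLOCK_SHARED.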

Sincerely,
Vladimir Berezniker

The patch:

Index: subversion/svnadmin/main.c
===================================================================
--- subversion/svnadmin/main.c (revision 6903)
+++ subversion/svnadmin/main.c (working copy)
@@ -55,6 +55,7 @@
  /** Subcommands. **/

  static svn_opt_subcommand_t
+ subcommand_hotcopy,
    subcommand_create,
    subcommand_createtxn,
    subcommand_dump,
@@ -80,7 +81,8 @@
      svnadmin__force_uuid,
      svnadmin__parent_dir,
      svnadmin__bdb_txn_nosync,
- svnadmin__config_dir
+ svnadmin__config_dir,
+ svnadmin__archive_logs
    };

  /* Option codes and descriptions.
@@ -134,6 +136,9 @@
      {"config-dir", svnadmin__config_dir, 1,
       "read user configuration files from directory ARG"},

+ {"archive-logs", svnadmin__archive_logs, 0,
+ "delete copied, unused log files from the source repository."},
+
      {NULL}
    };

@@ -169,6 +174,11 @@
       "Display this usage message.\n",
       {svnadmin__version} },

+ {"hotcopy", subcommand_hotcopy, {0},
+ "usage: svnadmin hotcopy REPOS_PATH NEW_REPOS_PATH\n\n"
+ "Makes a hot copy of a repository.\n\n",
+ {svnadmin__archive_logs} },
+
      {"load", subcommand_load, {0},
       "usage: svnadmin load REPOS_PATH\n\n"
       "Read a 'dumpfile'-formatted stream from stdin, committing\n"
@@ -238,6 +248,7 @@
  struct svnadmin_opt_state
  {
    const char *repository_path;
+ const char *new_repository_path;
    svn_opt_revision_t start_revision, end_revision; /* -r X[:Y] */
    svn_boolean_t help; /* --help or -? */
    svn_boolean_t version; /* --version */
@@ -245,6 +256,7 @@
    svn_boolean_t follow_copies; /* --copies */
    svn_boolean_t quiet; /* --quiet */
    svn_boolean_t bdb_txn_nosync; /* --bdb-txn-nosync */
+ svn_boolean_t archive_logs; /* --archive-logs */
    enum svn_repos_load_uuid uuid_action; /* --ignore-uuid,
                                                         --force-uuid */
    const char *on_disk;
@@ -655,6 +667,21 @@
  }

+/* This implements `svn_opt_subcommand_t'. */
+svn_error_t *
+subcommand_hotcopy (apr_getopt_t *os, void *baton, apr_pool_t *pool)
+{
+ struct svnadmin_opt_state *opt_state = baton;
+
+ SVN_ERR (svn_repos_hotcopy(opt_state->repository_path,
+ opt_state->new_repository_path,
+ opt_state->archive_logs,
+ pool));
+
+ return SVN_NO_ERROR;
+}
+
+
  
  /** Main. **/

@@ -820,6 +847,9 @@
          opt_state.config_dir = apr_pstrdup (pool, svn_path_canonicalize(opt_arg,
                                                                         pool));
          break;
+ case svnadmin__archive_logs:
+ opt_state.archive_logs = TRUE;
+ break;
        default:
          {
            subcommand_help (NULL, NULL, pool);
@@ -898,6 +928,45 @@
        opt_state.repository_path = repos_path;
      }

+
+ /* If the command is hotcopy, the third argument will be the new
+ repository path. */
+ if (subcommand->cmd_func == subcommand_hotcopy)
+ {
+ const char *new_repos_path = NULL;
+
+ if (os->ind < os->argc)
+ {
+ opt_state.new_repository_path = os->argv[os->ind++];
+
+ SVN_INT_ERR (
+ svn_utf_cstring_to_utf8 (&(opt_state.new_repository_path),
+ opt_state.new_repository_path,
+ NULL, pool));
+ new_repos_path
+ = svn_path_internal_style (opt_state.new_repository_path, pool);
+ }
+
+ if (new_repos_path == NULL)
+ {
+ fprintf (stderr, "new repository argument required\n");
+ subcommand_help (NULL, NULL, pool);
+ svn_pool_destroy (pool);
+ return EXIT_FAILURE;
+ }
+ else if (svn_path_is_url (new_repos_path))
+ {
+ fprintf (stderr,
+ "'%s' is a url when it should be a path\n",
+ new_repos_path);
+ svn_pool_destroy (pool);
+ return EXIT_FAILURE;
+ }
+
+ /* Copy new repos path into the OPT_STATE structure. */
+ opt_state.new_repository_path = new_repos_path;
+ }
+
    /* Check that the subcommand wasn't passed any inappropriate options. */
    for (i = 0; i < num_opts; i++)
      {
Index: subversion/include/svn_fs.h
===================================================================
--- subversion/include/svn_fs.h (revision 6903)
+++ subversion/include/svn_fs.h (working copy)
@@ -118,6 +118,15 @@
   */
  svn_error_t *svn_fs_create_berkeley (svn_fs_t *fs, const char *path);

+/** Hot copy a Subversion filesystem, stored in a Berkeley DB environment
+ * under @a src_path, to @a dest_path. If @a archive_logs is @c TRUE,
+ * delete the copied, unused log files from the source repository at
+ * @a src_path. Use @a pool for any necessary memory allocations.
+ */
+svn_error_t *svn_fs_hotcopy_berkeley (const char *src_path,
+ const char *dest_path,
+ svn_boolean_t archive_logs,
+ apr_pool_t *pool);

  /** Make @a fs refer to the Berkeley DB-based Subversion filesystem at
   * @a path. @a path is utf8-encoded, and must refer to a file or directory
Index: subversion/include/svn_repos.h
===================================================================
--- subversion/include/svn_repos.h (revision 6903)
+++ subversion/include/svn_repos.h (working copy)
@@ -99,6 +99,16 @@
                                 apr_hash_t *fs_config,
                                 apr_pool_t *pool);

+/** Make a hot copy of the Subversion repository found at @a src_path
+ * to @a dst_path.
+ *
+ * @copydoc svn_fs_hotcopy_berkeley()
+ */
+svn_error_t * svn_repos_hotcopy (const char *src_path,
+ const char *dst_path,
+ svn_boolean_t archive_logs,
+ apr_pool_t *pool);
+
  /** Destroy the Subversion repository found at @a path, using @a pool for any
   * necessary allocations.
   */
Index: subversion/libsvn_fs/fs.c
===================================================================
--- subversion/libsvn_fs/fs.c (revision 6903)
+++ subversion/libsvn_fs/fs.c (working copy)
@@ -531,6 +531,150 @@
    return svn_err;
  }

+svn_error_t *
+svn_fs__copy_file(const char *src_path,
+ const char *dest_path,
+ const char *file,
+ apr_pool_t *pool)
+{
+ const char *file_dest_path = svn_path_join (dest_path, file, pool);
+ const char *file_src_path = svn_path_join (src_path, file, pool);
+
+ SVN_ERR (svn_io_copy_file(file_src_path, file_dest_path, TRUE, pool));
+
+ return SVN_NO_ERROR;
+}
+
+/** A statement macro, very similar to @c SVN_ERR.
+ * This macro will check for an error and, if one exists,
+ * will destroy the specified pool. Useful for functions
+ * that create local subpools which need to be properly disposed
+ * of in case of an error.
+ */
+#define SVN_ERR_POOL(expr, subpool) \
+ do { \
+ svn_error_t *svn_err__temp = (expr); \
+ if (svn_err__temp) { \
+ svn_pool_destroy(subpool); \
+ return svn_err__temp; \
+ } \
+ } while (0)
+
+/**
+ * Archive all unused log files that have been copied to a different location
+ */
+svn_error_t *
+svn_fs__archive_logs(const char *live_path,
+ const char *backup_path,
+ apr_pool_t *pool)
+{
+
+ apr_array_header_t *logfiles;
+
+ SVN_ERR (svn_fs_berkeley_logfiles (&logfiles,
+ live_path,
+ TRUE, /* Only unused logs */
+ pool));
+
+ if (logfiles == NULL) {
+ return SVN_NO_ERROR;
+ }
+
+ { /* Process unused logs from live area */
+ int log;
+ apr_pool_t *sub_pool = svn_pool_create(pool);
+
+ /* Process log files. */
+ for (log = 0; log < logfiles->nelts; svn_pool_clear(sub_pool), log++)
+ {
+ const char * log_file = APR_ARRAY_IDX (logfiles, log, const char *);
+ const char *live_log_path
+ = svn_path_join (live_path, log_file, sub_pool);
+ const char *backup_log_path
+ = svn_path_join (backup_path, log_file, sub_pool);
+
+ { /* Compare files. No point in using MD5 and wasting CPU cycles, since we
+ have full copies of both logs. */
+ svn_boolean_t files_match = FALSE;
+ svn_error_t * err = svn_io_files_contents_same_p(&files_match,
+ live_log_path,
+ backup_log_path,
+ sub_pool);
+ if (err) {
+ svn_error_clear(err);
+ files_match = FALSE;
+ }
+
+
+ if(files_match == FALSE) {
+ continue;
+ }
+ }
+
+ SVN_ERR_POOL(svn_io_remove_file(live_log_path, sub_pool), sub_pool);
+ }
+
+ svn_pool_destroy(sub_pool);
+ }
+
+ return SVN_NO_ERROR;
+}
+
+svn_error_t *
+svn_fs_hotcopy_berkeley (const char *src_path,
+ const char *dest_path,
+ svn_boolean_t archive_logs,
+ apr_pool_t *pool)
+{
+ /* Check BDB version, just in case */
+ SVN_ERR (check_bdb_version (pool));
+
+ /* Copy the DB_CONFIG file. */
+ SVN_ERR(svn_fs__copy_file(src_path, dest_path, "DB_CONFIG", pool));
+
+ /* Copy the databases. */
+ SVN_ERR(svn_fs__copy_file(src_path, dest_path, "nodes", pool));
+ SVN_ERR(svn_fs__copy_file(src_path, dest_path, "revisions", pool));
+ SVN_ERR(svn_fs__copy_file(src_path, dest_path, "transactions", pool));
+ SVN_ERR(svn_fs__copy_file(src_path, dest_path, "copies", pool));
+ SVN_ERR(svn_fs__copy_file(src_path, dest_path, "changes", pool));
+ SVN_ERR(svn_fs__copy_file(src_path, dest_path, "representations", pool));
+ SVN_ERR(svn_fs__copy_file(src_path, dest_path, "strings", pool));
+ SVN_ERR(svn_fs__copy_file(src_path, dest_path, "uuids", pool));
+
+ {
+ apr_array_header_t *logfiles;
+ int log;
+
+ SVN_ERR (svn_fs_berkeley_logfiles (&logfiles,
+ src_path,
+ FALSE, /* All logs */
+ pool));
+
+ if (logfiles == NULL) {
+ return SVN_NO_ERROR;
+ }
+
+ /* Process log files. */
+ for (log = 0; log < logfiles->nelts; log++)
+ {
+ SVN_ERR(svn_fs__copy_file(src_path, dest_path,
+ APR_ARRAY_IDX (logfiles, log, const char *),
+ pool));
+ }
+ }
+
+ /* Since this is a copy we will have exclusive access to the repository. */
+ SVN_ERR(svn_fs_berkeley_recover(dest_path, pool));
+
+ if(archive_logs == TRUE)
+ {
+ SVN_ERR(svn_fs__archive_logs(src_path, dest_path, pool));
+ }
+
+ return SVN_NO_ERROR;
+}
+
  
  /* Gaining access to an existing Berkeley DB-based filesystem. */

Index: subversion/libsvn_repos/repos.c
===================================================================
--- subversion/libsvn_repos/repos.c (revision 6903)
+++ subversion/libsvn_repos/repos.c (working copy)
@@ -1343,3 +1343,251 @@

    return SVN_NO_ERROR;
  }
+
+
+static svn_error_t *hotcopy_structure (void *baton,
+ const char *path,
+ const apr_finfo_t *finfo,
+ apr_pool_t *pool)
+{
+ const struct copy_ctx_t *cc = baton;
+ apr_size_t len = strlen (path);
+ const char *target;
+ svn_boolean_t fs_dir = FALSE;
+
+ if (len == cc->base_len)
+ {
+ /* The walked path is the source repository base. Therefore, target
+ is the destination repository base path. */
+ target = cc->path;
+ }
+ else
+ {
+ /* Take whatever is after the source base path, and append that
+ to the repository base path. Note that we get the right
+ slashes in here, based on how we slice the walked path. */
+ const char * sub_path = &path[cc->base_len+1];
+
+ /* Check if we are inside the db directory and, if so, skip it.
+ TODO: Modify svn_io_walk_func_t to add a new parameter that
+ indicates to svn_io_dir_walk that we do not want the directory
+ to be examined. */
+ if(svn_path_compare_paths(
+ svn_path_get_longest_ancestor(SVN_REPOS__DB_DIR, sub_path, pool),
+ SVN_REPOS__DB_DIR) == 0) {
+ return SVN_NO_ERROR;
+ }
+
+
+ if(svn_path_compare_paths(
+ svn_path_get_longest_ancestor(SVN_REPOS__LOCK_DIR, sub_path, pool),
+ SVN_REPOS__LOCK_DIR) == 0) {
+ return SVN_NO_ERROR;
+ }
+
+ /* Changed apr_pstrcat to svn_path_join, since svn_io_dir_walk uses it. */
+ target = svn_path_join (cc->path, sub_path, pool);
+ }
+
+ if (finfo->filetype == APR_DIR)
+ {
+ SVN_ERR (create_repos_dir (target, pool));
+ }
+ else
+ {
+ apr_status_t apr_err;
+
+ assert (finfo->filetype == APR_REG);
+
+ apr_err = apr_file_copy (path, target, APR_FILE_SOURCE_PERMS, pool);
+ if (apr_err)
+ return svn_error_createf (apr_err, NULL,
+ "could not copy `%s'", path);
+ }
+
+ return SVN_NO_ERROR;
+}
+
+
+const char *
+svn_repos_db_logs_lockfile (svn_repos_t *repos, apr_pool_t *pool)
+{
+ return svn_path_join (repos->lock_path, SVN_REPOS__DB_LOGS_LOCKFILE, pool);
+}
+
+
+/* Create the DB logs lockfile. */
+static svn_error_t *
+create_db_logs_lock (svn_repos_t *repos, apr_pool_t *pool) {
+ apr_status_t apr_err;
+ apr_file_t *f = NULL;
+ apr_size_t written;
+ const char *contents;
+ const char *lockfile_path;
+
+ lockfile_path = svn_repos_db_logs_lockfile (repos, pool);
+ SVN_ERR_W (svn_io_file_open (&f, lockfile_path,
+ (APR_WRITE | APR_CREATE | APR_EXCL),
+ APR_OS_DEFAULT,
+ pool),
+ "creating logs lock file");
+
+ contents =
+ "DB logs lock file, representing locks on the versioned filesystem logs.\n"
+ "\n"
+ "All log manipulators of the repository's\n"
+ "Berkeley DB environment take out exclusive locks on this file\n"
+ "to ensure that only one accessor manipulates the logs at a time.\n"
+ "\n"
+ "You should never have to edit or remove this file.\n";
+
+ apr_err = apr_file_write_full (f, contents, strlen (contents), &written);
+ if (apr_err)
+ return svn_error_createf (apr_err, NULL,
+ "writing log lock file `%s'", lockfile_path);
+
+ apr_err = apr_file_close (f);
+ if (apr_err)
+ return svn_error_createf (apr_err, NULL,
+ "closing log lock file `%s'", lockfile_path);
+ return SVN_NO_ERROR;
+
+}
+
+/* Lock lock file. */
+static svn_error_t *
+lockfile_lock (const char *lockfile_path,
+ svn_boolean_t exclusive,
+ apr_pool_t *pool)
+{
+ int locktype = APR_FLOCK_SHARED;
+ apr_file_t *lockfile_handle;
+ apr_int32_t flags;
+ apr_status_t apr_err;
+
+ if(exclusive == TRUE) {
+ locktype = APR_FLOCK_EXCLUSIVE;
+ }
+
+
+ flags = APR_READ;
+ if (locktype == APR_FLOCK_EXCLUSIVE)
+ flags |= APR_WRITE;
+ SVN_ERR_W (svn_io_file_open (&lockfile_handle, lockfile_path,
+ flags, APR_OS_DEFAULT, pool),
+ "lock_file: error opening lockfile");
+
+ /* Get some kind of lock on the filehandle. */
+ apr_err = apr_file_lock (lockfile_handle, locktype);
+ if (apr_err)
+ {
+ const char *lockname = "unknown";
+ if (locktype == APR_FLOCK_SHARED)
+ lockname = "shared";
+ if (locktype == APR_FLOCK_EXCLUSIVE)
+ lockname = "exclusive";
+
+ return svn_error_createf
+ (apr_err, NULL,
+ "lock_file: %s lock on file `%s' failed",
+ lockname, lockfile_path);
+ }
+
+ /* Register an unlock function for the lock. */
+ apr_pool_cleanup_register (pool, lockfile_handle, clear_and_close,
+ apr_pool_cleanup_null);
+ return SVN_NO_ERROR;
+}
+
+
+static svn_error_t *
+lock_db_logs_file (svn_repos_t *repos,
+ svn_boolean_t exclusive,
+ apr_pool_t *pool)
+{
+ const char * lock_file = svn_repos_db_logs_lockfile(repos, pool);
+ svn_node_kind_t kind;
+ SVN_ERR(svn_io_check_path(lock_file, &kind, pool));
+
+ /* Try to create the lock file if it is missing.
+ Ignore creation errors, in case the file got created by another process
+ while we were checking. */
+ if(kind == svn_node_none) {
+ svn_error_t * err = create_db_logs_lock(repos, pool);
+ if(err) {
+ svn_error_clear(err);
+ }
+ }
+
+ SVN_ERR(lockfile_lock(lock_file, exclusive, pool));
+
+ return SVN_NO_ERROR;
+}
+
+
+/* Make a copy of a repository with hot backup of fs. */
+svn_error_t *
+svn_repos_hotcopy (const char *src_path,
+ const char *dst_path,
+ svn_boolean_t archive_logs,
+ apr_pool_t *pool)
+{
+ svn_repos_t *src_repos = NULL;
+ svn_repos_t *dst_repos = NULL;
+ struct copy_ctx_t cc;
+
+ /* Try to open original repository */
+ SVN_ERR (get_repos (&src_repos, src_path,
+ APR_FLOCK_SHARED,
+ FALSE, /* don't try to open the db yet. */
+ pool));
+
+ /* If we are going to archive logs, then get an exclusive lock on
+ db-logs.lock, to ensure that no one else will work with logs.
+
+ If we are just copying, then get a shared lock to ensure that
+ no one else will archive the logs while we are copying them. */
+
+ SVN_ERR (lock_db_logs_file(src_repos, archive_logs, pool));
+
+ /* Copy the repository to a new path, except the fs files */
+ cc.path = dst_path;
+ cc.base_len = strlen (src_path);
+
+ SVN_ERR (svn_io_dir_walk (src_path,
+ 0,
+ hotcopy_structure,
+ &cc,
+ pool));
+
+ /* Prepare the dst_repos object so that we can create its lock files,
+ which in turn lets us open the new repository. */
+
+ /* Allocate a repository object. */
+ dst_repos = apr_pcalloc (pool, sizeof (*dst_repos));
+
+ /* Initialize the repository paths. */
+ dst_repos->path = apr_pstrdup (pool, dst_path);
+ init_repos_dirs (dst_repos, pool);
+
+ /* Create the lock directory. */
+ SVN_ERR (create_locks (dst_repos, dst_repos->lock_path, pool));
+
+ SVN_ERR (create_repos_dir(dst_repos->db_path, pool));
+ /* Reopen the repository the right way. The above is just a workaround
+ until we can create locks without an open repository. */
+
+ /* Exclusively lock the new repository.
+ No one should be accessing it at the moment. */
+ SVN_ERR (get_repos (&dst_repos, dst_path,
+ APR_FLOCK_EXCLUSIVE,
+ FALSE, /* don't try to open the db yet. */
+ pool));
+
+
+ SVN_ERR ( svn_fs_hotcopy_berkeley(src_repos->db_path, dst_repos->db_path,
+ archive_logs, pool));
+
+ return SVN_NO_ERROR;
+}
+
Index: subversion/libsvn_repos/repos.h
===================================================================
--- subversion/libsvn_repos/repos.h (revision 6903)
+++ subversion/libsvn_repos/repos.h (working copy)
@@ -46,6 +46,7 @@

  /* Things for which we keep lockfiles. */
  #define SVN_REPOS__DB_LOCKFILE "db.lock" /* Our Berkeley lockfile. */
+#define SVN_REPOS__DB_LOGS_LOCKFILE "db-logs.lock" /* BDB logs lockfile. */

  /* In the repository hooks directory, look for these files. */
  #define SVN_REPOS__HOOK_START_COMMIT "start-commit"
Index: tools/backup/hot-backup.py.in
===================================================================
--- tools/backup/hot-backup.py.in (revision 6903)
+++ tools/backup/hot-backup.py.in (working copy)
@@ -30,12 +30,9 @@
  # Path to svnlook utility
  svnlook = "@SVN_BINDIR@/svnlook"

-# Path to db_archive program
-db_archive = "/usr/local/BerkeleyDB.4.0/bin/db_archive"
+# Path to svnadmin program with the hotcopy subcommand
+svnadmin = "@SVN_BINDIR@/svnadmin"

-# Path to db_recover progrem
-db_recover = "/usr/local/BerkeleyDB.4.0/bin/db_recover"
-
  # Number of backups to keep around (0 for "keep them all")
  num_backups = 64

@@ -101,7 +98,7 @@
  print "Youngest revision is", youngest

-### Step 2: copy the whole repository structure.
+### Step 2: Find next available backup path

  backup_subdir = os.path.join(backup_dir, repo + "-" + youngest)

@@ -124,93 +121,20 @@
    else:
      backup_subdir = os.path.join(backup_dir, repo + "-" + youngest + "-1")

-print "Backing up repository to '" + backup_subdir + "'..."
-shutil.copytree(repo_dir, backup_subdir)
-print "Done."
-
-
-### Step 3: re-copy the Berkeley logfiles. They must *always* be
+### Step 3: Ask subversion to make a hot copy of a repository.
  ### copied last.

-infile, outfile, errfile = os.popen3(db_archive + " -l -h "
- + os.path.join(repo_dir, "db"))
-stdout_lines = outfile.readlines()
-stderr_lines = errfile.readlines()
-outfile.close()
-infile.close()
-errfile.close()
+print "Backing up repository to '" + backup_subdir + "'..."
+err_code = os.spawnl(os.P_WAIT, svnadmin, svnadmin, "hotcopy", repo_dir,
+ backup_subdir, "--archive-logs")
+if(err_code != 0):
+ print "Unable to backup the repository."
+ sys.exit(err_code)
+else:
+ print "Done."

-print "Re-copying logfiles:"

-for item in stdout_lines:
- logfile = string.strip(item)
- src = os.path.join(repo_dir, "db", logfile)
- dst = os.path.join(backup_subdir, "db", logfile)
- print " Re-copying logfile '" + logfile + "'..."
- shutil.copy(src, dst)
-
-print "Backup completed."
-
-
-### Step 4: put the archived database in a consistent state and remove
-### the shared-memory environment files.
-
-infile, outfile, errfile = os.popen3(db_recover + " -h "
- + os.path.join(backup_subdir, "db"))
-stdout_lines = outfile.readlines()
-stderr_lines = errfile.readlines()
-outfile.close()
-infile.close()
-errfile.close()
-
-print "Running db_recover on the archived database:"
-map(sys.stdout.write, stdout_lines)
-map(sys.stdout.write, stderr_lines)
-
-print "Done."
-
-
-### Step 5: look for a write `lock' file in the backup area, else make one.
-
-lockpath = os.path.join(backup_dir, repo + 'lock')
-if os.path.exists(lockpath):
- print "Cannot cleanup logs: lockfile already exists in", backup_dir
- sys.exit(0)
-
-print "Writing lock for logfile cleanup..."
-fp = open(lockpath, 'a') # open in (a)ppend mode
-fp.write("cleaning logfiles for repository " + repo_dir)
-fp.close()
-
-
-### Step 6: ask db_archive which of the live logfiles can be
-### expunged, and remove them.
-
-infile, outfile, errfile = os.popen3(db_archive + " -a -h "
- + os.path.join(repo_dir, "db"))
-stdout_lines = outfile.readlines()
-stderr_lines = errfile.readlines()
-outfile.close()
-infile.close()
-errfile.close()
-
-print "Cleaning obsolete logfiles:"
-
-for item in stdout_lines:
- logfile = string.strip(item)
- print " Deleting '", logfile, "'..."
- os.unlink(logfile)
-
-print "Done."
-
-
-### Step 7: remove the write lock.
-
-os.unlink(lockpath)
-print "Lock removed. Cleanup complete."
-
-
-### Step 8: finally, remove all repository backups other than the last
+### Step 4: finally, remove all repository backups other than the last
  ### NUM_BACKUPS.

  if num_backups > 0:
