This is the fourth version of the patch. I am using a different MUA, so
hopefully the whitespace is properly preserved.
There are the following known issues with this patch:
1. It only works if the locations of the BDB data and log files have not been
altered and they remain in the repository's /db directory.
2. I am still not sure about the proper handling of the /dav directory,
i.e. whether it should be copied or discarded. For now it is copied.
Sincerely,
Vladimir Berezniker
-----------------
Log Message:
-----------------
Implemented hot copy functionality for Subversion. This fixes two race
conditions present in hot-backup.py. Only the logs that have been successfully
copied are deleted, so any logs modified while the copy was in progress are
retained. Also, (svn_repos_hotcopy) takes out a shared db lock, eliminating
the possibility of corruption if recovery is run in parallel with an
automated backup.
Updated hot-backup.py to wrap the new hot copy functionality.
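For reviewers, here is a minimal sketch of what a caller of the new API looks
like, based on the svn_repos_hotcopy prototype further down in this patch;
the paths and the wrapper function are made up for illustration only:

    #include "svn_pools.h"
    #include "svn_repos.h"

    /* Hypothetical helper: hot-copy a live repository to a backup
       location, deleting the unused source logs that were copied. */
    static svn_error_t *
    backup_repository (apr_pool_t *pool)
    {
      return svn_repos_hotcopy ("/var/svn/repos",           /* example path */
                                "/var/svn/backups/repos",   /* example path */
                                TRUE,                       /* clean_logs */
                                pool);
    }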
* subversion/include/svn_error.h
(SVN_ERR_POOL): Implemented macro to properly handle subversion errors in
code that uses subpools.
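(The svn_error.h hunk is not reproduced below, so the following is only a
guess at how the macro might be used. It assumes SVN_ERR-like behaviour that
additionally destroys the given subpool before returning the wrapped error;
the argument order is an assumption.)

    #include "svn_pools.h"
    #include "svn_io.h"

    static svn_error_t *
    do_work_in_subpool (apr_pool_t *pool)
    {
      apr_pool_t *subpool = svn_pool_create (pool);

      /* Assumed semantics: if the wrapped call fails, destroy SUBPOOL
         and return the error to the caller. */
      SVN_ERR_POOL (svn_io_file_create ("/tmp/example", "contents\n",
                                        subpool),
                    subpool);

      svn_pool_destroy (subpool);
      return SVN_NO_ERROR;
    }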
* subversion/include/svn_io.h
* subversion/libsvn_subr/io.c
(svn_io_file_create): Factored out function for file creation.
(svn_io__file_clear_and_close): Moved from
subversion/libsvn_repos/repos.c (clear_and_close).
(svn_io_file_lock): Factored out function for file locking.
(svn_io_dir_file_copy): Implemented function for copying a file between
two directories.
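A rough usage sketch of the new io helpers, following the prototypes in the
svn_io.h hunk below; the paths are made up and the error handling is
illustrative only:

    #include "svn_io.h"

    static svn_error_t *
    copy_db_config (apr_pool_t *pool)
    {
      /* Take a shared lock; it is released when POOL is cleaned up. */
      SVN_ERR (svn_io_file_lock ("/var/svn/repos/locks/db.lock",
                                 FALSE /* shared */, pool));

      /* Copy repos/db/DB_CONFIG into the backup's db directory. */
      SVN_ERR (svn_io_dir_file_copy ("/var/svn/repos/db",
                                     "/var/svn/backups/repos/db",
                                     "DB_CONFIG", pool));
      return SVN_NO_ERROR;
    }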
* subversion/include/svn_fs.h
(svn_fs_hotcopy_berkeley): Added prototype for Berkeley hot copy function.
* subversion/include/svn_repos.h
(svn_repos_hotcopy): Added prototype for subversion repository hot copy
function.
* subversion/libsvn_fs/fs.c
(svn_fs__clean_logs): Implemented function that deletes only those unused
Berkeley DB log files that have already been copied.
(svn_fs_hotcopy_berkeley): Implemented hot copy functionality in accordance
with Berkeley DB documentation.
* subversion/libsvn_repos/repos.h
(SVN_REPOS__DB_LOGS_LOCKFILE): Added new definition for the BDB logs
lockfile.
* subversion/libsvn_repos/repos.c
(create_db_lock): Factored out function for creation of db lock file.
(create_locks): Cleanup. Deleted redundant path parameter.
(create_hooks): Cleanup. Deleted redundant path parameter. Updated code to
use (svn_io_file_create).
(hotcopy_ctx_t): New structure for use by (hotcopy_structure).
(hotcopy_structure): Adapted from, and deleted, (copy_structure); copies the
repository structure with the exception of the /db and /locks directories.
(svn_repos_db_logs_lockfile): Implemented function to return path to db
logs lock file.
(create_db_logs_lock): Implemented function for creation of db logs lock
file.
(lock_db_logs_file): Implemented function for locking the db logs lock file.
(svn_repos_hotcopy): Implemented function to make a hot copy of a
repository.
(get_repos): Cleanup. Updated code to use (svn_io_file_lock).
(clear_and_close): Moved to
subversion/libsvn_subr/io.c (svn_io__file_clear_and_close).
(create_repos_structure): Cleanup. Updated code to use (svn_io_file_create).
* subversion/svnadmin/main.c
Added new flag "--clean-logs" to specify that unused copied logs are to be
deleted after the hot copy is complete.
(parse_local_repos_path): Factored out function for parsing and validating
local repository path.
(subcommand_hotcopy): Implemented new hotcopy subcommand.
* tools/backup/hot-backup.py.in
Updated hot backup script to utilize the new hot copy functionality.
-----------------
Patch:
-----------------
Index: subversion/svnadmin/main.c
===================================================================
--- subversion/svnadmin/main.c (revision 7052)
+++ subversion/svnadmin/main.c (working copy)
@@ -50,11 +50,45 @@
return SVN_NO_ERROR;
}
+/** Helper to parse local repository path.
+ * Try parsing the next parameter of @a os as a local path to a repository.
+ * If successful, *@a repos_path will contain an internal-style path to
+ * the repository.
+ */
+static svn_error_t *
+parse_local_repos_path(apr_getopt_t *os, const char ** repos_path,
+ apr_pool_t *pool)
+{
+ *repos_path = NULL;
+ /* Check to see if there is one more parameter. */
+ if (os->ind < os->argc)
+ {
+ const char * path = os->argv[os->ind++];
+ SVN_ERR (svn_utf_cstring_to_utf8 (repos_path, path, NULL, pool));
+ *repos_path = svn_path_internal_style (*repos_path, pool);
+ }
+
+ if (*repos_path == NULL)
+ {
+ return svn_error_create (SVN_ERR_CL_ARG_PARSING_ERROR, NULL,
+ "repository argument required\n");
+ }
+ else if (svn_path_is_url (*repos_path))
+ {
+ return svn_error_createf (SVN_ERR_CL_ARG_PARSING_ERROR, NULL,
+ "'%s' is an url when it should be a path\n",
+ *repos_path);
+ }
+
+ return SVN_NO_ERROR;
+}
+
/** Subcommands. **/
static svn_opt_subcommand_t
+ subcommand_hotcopy,
subcommand_create,
subcommand_createtxn,
subcommand_dump,
@@ -78,7 +112,8 @@
svnadmin__force_uuid,
svnadmin__parent_dir,
svnadmin__bdb_txn_nosync,
- svnadmin__config_dir
+ svnadmin__config_dir,
+ svnadmin__clean_logs
};
/* Option codes and descriptions.
@@ -126,6 +161,9 @@
{"config-dir", svnadmin__config_dir, 1,
"read user configuration files from directory ARG"},
+ {"clean-logs", svnadmin__clean_logs, 0,
+ "delete copied, unused log files from the source repository."},
+
{NULL}
};
@@ -160,6 +198,11 @@
"Display this usage message.\n",
{svnadmin__version} },
+ {"hotcopy", subcommand_hotcopy, {0},
+ "usage: svnadmin hotcopy REPOS_PATH NEW_REPOS_PATH\n\n"
+ "Makes a hot copy of a repository.\n\n",
+ {svnadmin__clean_logs} },
+
{"load", subcommand_load, {0},
"usage: svnadmin load REPOS_PATH\n\n"
"Read a 'dumpfile'-formatted stream from stdin, committing\n"
@@ -229,6 +272,7 @@
struct svnadmin_opt_state
{
const char *repository_path;
+ const char *new_repository_path; /* hotcopy dest. path */
svn_opt_revision_t start_revision, end_revision; /* -r X[:Y] */
svn_boolean_t help; /* --help or -? */
svn_boolean_t version; /* --version */
@@ -236,6 +280,7 @@
svn_boolean_t follow_copies; /* --copies */
svn_boolean_t quiet; /* --quiet */
svn_boolean_t bdb_txn_nosync; /* --bdb-txn-nosync */
+ svn_boolean_t clean_logs; /* --clean-logs */
enum svn_repos_load_uuid uuid_action; /* --ignore-uuid,
--force-uuid */
const char *parent_dir;
@@ -644,6 +689,21 @@
}
+/* This implements `svn_opt_subcommand_t'. */
+svn_error_t *
+subcommand_hotcopy (apr_getopt_t *os, void *baton, apr_pool_t *pool)
+{
+ struct svnadmin_opt_state *opt_state = baton;
+
+ SVN_ERR (svn_repos_hotcopy (opt_state->repository_path,
+ opt_state->new_repository_path,
+ opt_state->clean_logs,
+ pool));
+
+ return SVN_NO_ERROR;
+}
+
+
/** Main. **/
@@ -788,6 +848,9 @@
opt_state.config_dir = apr_pstrdup (pool, svn_path_canonicalize(opt_arg,
pool));
break;
+ case svnadmin__clean_logs:
+ opt_state.clean_logs = TRUE;
+ break;
default:
{
subcommand_help (NULL, NULL, pool);
@@ -834,36 +897,36 @@
here and store it in opt_state. */
if (subcommand->cmd_func != subcommand_help)
{
- const char *repos_path = NULL;
-
- if (os->ind < os->argc)
+ err = parse_local_repos_path (os,
+ &(opt_state.repository_path),
+ pool);
+ if(err)
{
- opt_state.repository_path = os->argv[os->ind++];
- SVN_INT_ERR (svn_utf_cstring_to_utf8 (&(opt_state.repository_path),
- opt_state.repository_path,
- NULL, pool));
- repos_path
- = svn_path_internal_style (opt_state.repository_path, pool);
- }
-
- if (repos_path == NULL)
- {
- fprintf (stderr, "repository argument required\n");
- subcommand_help (NULL, NULL, pool);
+ svn_handle_error (err, stderr, 0);
+ svn_opt_subcommand_help (subcommand->name, cmd_table,
+ options_table, pool);
svn_pool_destroy (pool);
return EXIT_FAILURE;
}
- else if (svn_path_is_url (repos_path))
+
+ }
+
+
+ /* If command is hot copy the third argument will be the new
+ repository path. */
+ if (subcommand->cmd_func == subcommand_hotcopy)
+ {
+ err = parse_local_repos_path (os,
+ &(opt_state.new_repository_path),
+ pool);
+ if(err)
{
- fprintf (stderr,
- "'%s' is a url when it should be a path\n",
- repos_path);
+ svn_handle_error (err, stderr, 0);
+ svn_opt_subcommand_help (subcommand->name, cmd_table,
+ options_table, pool);
svn_pool_destroy (pool);
return EXIT_FAILURE;
}
-
- /* Copy repos path into the OPT_STATE structure. */
- opt_state.repository_path = repos_path;
}
/* Check that the subcommand wasn't passed any inappropriate options. */
Index: subversion/include/svn_fs.h
===================================================================
--- subversion/include/svn_fs.h (revision 7052)
+++ subversion/include/svn_fs.h (working copy)
@@ -118,6 +118,15 @@
*/
svn_error_t *svn_fs_create_berkeley (svn_fs_t *fs, const char *path);
+/** Hot copy Subversion filesystem, stored in a Berkeley DB environment under
+ * @a src_path to @a dest_path. If @a clean_logs is @c TRUE,
+ * delete copied, unused log files from the source repository at @a src_path.
+ * Use @a pool for any necessary memory allocations.
+ */
+svn_error_t *svn_fs_hotcopy_berkeley (const char *src_path,
+ const char *dest_path,
+ svn_boolean_t clean_logs,
+ apr_pool_t *pool);
/** Make @a fs refer to the Berkeley DB-based Subversion filesystem at
* @a path. @a path is utf8-encoded, and must refer to a file or directory
Index: subversion/include/svn_repos.h
===================================================================
--- subversion/include/svn_repos.h (revision 7052)
+++ subversion/include/svn_repos.h (working copy)
@@ -88,6 +88,16 @@
apr_hash_t *fs_config,
apr_pool_t *pool);
+/** Make a hot copy of the Subversion repository found at @a src_path
+ * to @a dst_path.
+ *
+ * @copydoc svn_fs_hotcopy_berkeley()
+ */
+svn_error_t * svn_repos_hotcopy (const char *src_path,
+ const char *dst_path,
+ svn_boolean_t clean_logs,
+ apr_pool_t *pool);
+
/** Destroy the Subversion repository found at @a path, using @a pool for any
* necessary allocations.
*/
Index: subversion/include/svn_io.h
===================================================================
--- subversion/include/svn_io.h (revision 7052)
+++ subversion/include/svn_io.h (working copy)
@@ -292,8 +292,31 @@
const char *file2,
apr_pool_t *pool);
+/** Create a file at path @a file with contents @a contents.
+ * Path @a file is utf8-encoded.
+ * Use @a pool for memory allocations.
+ */
+svn_error_t *svn_io_file_create (const char *file,
+ const char *contents,
+ apr_pool_t *pool);
+/** Lock the file at @a lock_file. If @a exclusive is TRUE,
+ * obtain an exclusive lock, otherwise obtain a shared lock.
+ * The lock will be automatically released when @a pool is cleared or destroyed.
+ * Use @a pool for memory allocations.
+ */
+svn_error_t *svn_io_file_lock (const char *lock_file,
+ svn_boolean_t exclusive,
+ apr_pool_t *pool);
+/** Copy file @a file from directory @a src_path to directory @a dest_path.
+ * Use @a pool for memory allocations.
+ */
+svn_error_t *svn_io_dir_file_copy (const char *src_path,
+ const char *dest_path,
+ const char *file,
+ apr_pool_t *pool);
+
/** Generic byte-streams
*
Index: subversion/libsvn_fs/fs.c
===================================================================
--- subversion/libsvn_fs/fs.c (revision 7052)
+++ subversion/libsvn_fs/fs.c (working copy)
@@ -527,6 +527,124 @@
return svn_err;
}
+/**
+ * Delete all unused log files from the BDB environment at @a live_path that
+ * in @a backup_path.
+ */
+svn_error_t *
+svn_fs__clean_logs(const char *live_path,
+ const char *backup_path,
+ apr_pool_t *pool)
+{
+
+ apr_array_header_t *logfiles;
+
+ SVN_ERR (svn_fs_berkeley_logfiles (&logfiles,
+ live_path,
+ TRUE, /* Only unused logs */
+ pool));
+
+ if (logfiles == NULL)
+ return SVN_NO_ERROR;
+
+ { /* Process unused logs from live area */
+ int log;
+ apr_pool_t *sub_pool = svn_pool_create (pool);
+
+ /* Process log files. */
+ for (log = 0; log < logfiles->nelts; log++)
+ {
+ const char *log_file = APR_ARRAY_IDX (logfiles, log, const char *);
+ const char *live_log_path
+ = svn_path_join (live_path, log_file, sub_pool);
+ const char *backup_log_path
+ = svn_path_join (backup_path, log_file, sub_pool);
+
+ { /* Compare files. No point in using MD5 and wasting CPU cycles as we
+ have full copies of both logs. */
+
+ svn_boolean_t files_match = FALSE;
+ svn_node_kind_t kind;
+
+ /* Check to see if there is a corresponding log file in the backup
+ directory */
+ SVN_ERR (svn_io_check_path (backup_log_path, &kind, pool));
+
+ /* If the copy of the log exists, compare them */
+ if (kind == svn_node_file)
+ SVN_ERR (svn_io_files_contents_same_p (&files_match,
+ live_log_path,
+ backup_log_path,
+ sub_pool));
+
+ /* If the log files do not match, go to the next log file. */
+ if (files_match == FALSE)
+ continue;
+ }
+
+ SVN_ERR (svn_io_remove_file (live_log_path, sub_pool));
+ svn_pool_clear (sub_pool);
+ }
+
+ svn_pool_destroy (sub_pool);
+ }
+
+ return SVN_NO_ERROR;
+}
+
+svn_error_t *
+svn_fs_hotcopy_berkeley (const char *src_path,
+ const char *dest_path,
+ svn_boolean_t clean_logs,
+ apr_pool_t *pool)
+{
+ /* Check the BDB version, just in case. */
+ SVN_ERR (check_bdb_version (pool));
+
+ /* Copy the DB_CONFIG file. */
+ SVN_ERR (svn_io_dir_file_copy (src_path, dest_path, &"DB_CONFIG", pool));
+
+ /* Copy the databases. */
+ SVN_ERR (svn_io_dir_file_copy (src_path, dest_path, &"nodes", pool));
+ SVN_ERR (svn_io_dir_file_copy (src_path, dest_path, &"revisions", pool));
+ SVN_ERR (svn_io_dir_file_copy (src_path, dest_path, &"transactions", pool));
+ SVN_ERR (svn_io_dir_file_copy (src_path, dest_path, &"copies", pool));
+ SVN_ERR (svn_io_dir_file_copy (src_path, dest_path, &"changes", pool));
+ SVN_ERR (svn_io_dir_file_copy (src_path, dest_path, &"representations",
+ pool));
+ SVN_ERR (svn_io_dir_file_copy (src_path, dest_path, &"strings", pool));
+ SVN_ERR (svn_io_dir_file_copy (src_path, dest_path, &"uuids", pool));
+
+ {
+ apr_array_header_t *logfiles;
+ int log;
+
+ SVN_ERR (svn_fs_berkeley_logfiles (&logfiles,
+ src_path,
+ FALSE, /* All logs */
+ pool));
+
+ if (logfiles == NULL)
+ return SVN_NO_ERROR;
+
+ /* Process log files. */
+ for (log = 0; log < logfiles->nelts; log++)
+ {
+ SVN_ERR (svn_io_dir_file_copy (src_path, dest_path,
+ APR_ARRAY_IDX (logfiles, log,
+ const char *),
+ pool));
+ }
+ }
+
+ /* Since this is a copy we will have exclusive access to the repository. */
+ SVN_ERR (svn_fs_berkeley_recover (dest_path, pool));
+
+ if (clean_logs == TRUE)
+ SVN_ERR (svn_fs__clean_logs (src_path, dest_path, pool));
+
+ return SVN_NO_ERROR;
+}
+
/* Gaining access to an existing Berkeley DB-based filesystem. */
Index: subversion/libsvn_subr/io.c
===================================================================
--- subversion/libsvn_subr/io.c (revision 7052)
+++ subversion/libsvn_subr/io.c (working copy)
@@ -182,7 +182,7 @@
-/*** Copying and appending files. ***/
+/*** Creating, copying and appending files. ***/
svn_error_t *
svn_io_copy_file (const char *src,
@@ -463,7 +463,44 @@
#endif
}
+svn_error_t *svn_io_file_create (const char *file,
+ const char *contents,
+ apr_pool_t *pool)
+{
+ apr_status_t apr_err;
+ apr_file_t *f;
+ apr_size_t written;
+ SVN_ERR (svn_io_file_open (&f, file,
+ (APR_WRITE | APR_CREATE | APR_EXCL),
+ APR_OS_DEFAULT,
+ pool));
+
+ apr_err = apr_file_write_full (f, contents, strlen (contents), &written);
+ if (apr_err)
+ return svn_error_createf
+ (apr_err, NULL, "svn_io_file_create: error writing '%s'", file);
+
+ apr_err = apr_file_close (f);
+ if (apr_err)
+ return svn_error_createf (apr_err, NULL,
+ "svn_io_file_create: error closing '%s'", file);
+ return SVN_NO_ERROR;
+}
+
+svn_error_t *svn_io_dir_file_copy (const char *src_path,
+ const char *dest_path,
+ const char *file,
+ apr_pool_t *pool)
+{
+ const char *file_dest_path = svn_path_join (dest_path, file, pool);
+ const char *file_src_path = svn_path_join (src_path, file, pool);
+
+ SVN_ERR (svn_io_copy_file (file_src_path, file_dest_path, TRUE, pool));
+
+ return SVN_NO_ERROR;
+}
+
/*** Modtime checking. ***/
@@ -730,7 +767,72 @@
return SVN_NO_ERROR;
}
+
+/*** File locking. ***/
+/* Clear all outstanding locks on ARG, an open apr_file_t *. */
+static apr_status_t
+svn_io__file_clear_and_close (void *arg)
+{
+ apr_status_t apr_err;
+ apr_file_t *f = arg;
+ /* Remove locks. */
+ apr_err = apr_file_unlock (f);
+ if (apr_err)
+ return apr_err;
+
+ /* Close the file. */
+ apr_err = apr_file_close (f);
+ if (apr_err)
+ return apr_err;
+
+ return 0;
+}
+
+
+svn_error_t *svn_io_file_lock (const char *lock_file,
+ svn_boolean_t exclusive,
+ apr_pool_t *pool)
+{
+ int locktype = APR_FLOCK_SHARED;
+ apr_file_t *lockfile_handle;
+ apr_int32_t flags;
+ apr_status_t apr_err;
+
+ if(exclusive == TRUE)
+ locktype = APR_FLOCK_EXCLUSIVE;
+
+ flags = APR_READ;
+ if (locktype == APR_FLOCK_EXCLUSIVE)
+ flags |= APR_WRITE;
+
+ SVN_ERR (svn_io_file_open (&lockfile_handle, lock_file, flags,
+ APR_OS_DEFAULT,
+ pool));
+
+ /* Get lock on the filehandle. */
+ apr_err = apr_file_lock (lockfile_handle, locktype);
+ if (apr_err)
+ {
+ const char *lockname = "unknown";
+ if (locktype == APR_FLOCK_SHARED)
+ lockname = "shared";
+ if (locktype == APR_FLOCK_EXCLUSIVE)
+ lockname = "exclusive";
+
+ return svn_error_createf
+ (apr_err, NULL, "svn_io_file_lock: %s lock on file `%s' failed",
+ lockname, lock_file);
+ }
+
+ apr_pool_cleanup_register (pool, lockfile_handle,
+ svn_io__file_clear_and_close,
+ apr_pool_cleanup_null);
+
+ return SVN_NO_ERROR;
+}
+
+
/* TODO write test for these two functions, then refactor. */
Index: subversion/libsvn_repos/repos.c
===================================================================
--- subversion/libsvn_repos/repos.c (revision 7052)
+++ subversion/libsvn_repos/repos.c (working copy)
@@ -35,7 +35,6 @@
a builtin template of this name. */
#define DEFAULT_TEMPLATE_NAME "default"
-
/* Path accessor functions. */
@@ -69,6 +68,12 @@
const char *
+svn_repos_db_logs_lockfile (svn_repos_t *repos, apr_pool_t *pool)
+{
+ return svn_path_join (repos->lock_path, SVN_REPOS__DB_LOGS_LOCKFILE, pool);
+}
+
+const char *
svn_repos_hook_dir (svn_repos_t *repos, apr_pool_t *pool)
{
return apr_pstrdup (pool, repos->hook_path);
@@ -137,67 +142,74 @@
return err;
}
+/* Create the DB logs lockfile. */
+static svn_error_t *
+create_db_logs_lock (svn_repos_t *repos, apr_pool_t *pool) {
+ const char *contents;
+ const char *lockfile_path;
+ lockfile_path = svn_repos_db_logs_lockfile (repos, pool);
+ contents =
+ "DB logs lock file, representing locks on the versioned filesystem logs.\n"
+ "\n"
+ "All log manipulators of the repository's\n"
+ "Berkeley DB environment take out exclusive locks on this file\n"
+ "to ensure that only one accessor manupulates the logs at the time.\n"
+ "\n"
+ "You should never have to edit or remove this file.\n";
+
+ SVN_ERR_W (svn_io_file_create (lockfile_path, contents, pool),
+ "creating db logs lock file");
+
+ return SVN_NO_ERROR;
+}
+
+/* Create the DB lockfile. */
static svn_error_t *
-create_locks (svn_repos_t *repos, const char *path, apr_pool_t *pool)
+create_db_lock (svn_repos_t *repos, apr_pool_t *pool) {
+ const char *contents;
+ const char *lockfile_path;
+
+ lockfile_path = svn_repos_db_lockfile (repos, pool);
+ contents =
+ "DB lock file, representing locks on the versioned filesystem.\n"
+ "\n"
+ "All accessors -- both readers and writers -- of the repository's\n"
+ "Berkeley DB environment take out shared locks on this file, and\n"
+ "each accessor removes its lock when done. If and when the DB\n"
+ "recovery procedure is run, the recovery code takes out an\n"
+ "exclusive lock on this file, so we can be sure no one else is\n"
+ "using the DB during the recovery.\n"
+ "\n"
+ "You should never have to edit or remove this file.\n";
+
+ SVN_ERR_W (svn_io_file_create (lockfile_path, contents, pool),
+ "creating db lock file");
+
+ return SVN_NO_ERROR;
+}
+
+static svn_error_t *
+create_locks (svn_repos_t *repos, apr_pool_t *pool)
{
- apr_status_t apr_err;
-
/* Create the locks directory. */
- SVN_ERR_W (create_repos_dir (path, pool),
+ SVN_ERR_W (create_repos_dir (repos->lock_path, pool),
"creating lock dir");
- /* Create the DB lockfile under that directory. */
- {
- apr_file_t *f = NULL;
- apr_size_t written;
- const char *contents;
- const char *lockfile_path;
+ SVN_ERR (create_db_lock (repos, pool));
+ SVN_ERR (create_db_logs_lock (repos, pool));
- lockfile_path = svn_repos_db_lockfile (repos, pool);
- SVN_ERR_W (svn_io_file_open (&f, lockfile_path,
- (APR_WRITE | APR_CREATE | APR_EXCL),
- APR_OS_DEFAULT,
- pool),
- "creating lock file");
-
- contents =
- "DB lock file, representing locks on the versioned filesystem.\n"
- "\n"
- "All accessors -- both readers and writers -- of the repository's\n"
- "Berkeley DB environment take out shared locks on this file, and\n"
- "each accessor removes its lock when done. If and when the DB\n"
- "recovery procedure is run, the recovery code takes out an\n"
- "exclusive lock on this file, so we can be sure no one else is\n"
- "using the DB during the recovery.\n"
- "\n"
- "You should never have to edit or remove this file.\n";
-
- apr_err = apr_file_write_full (f, contents, strlen (contents), &written);
- if (apr_err)
- return svn_error_createf (apr_err, NULL,
- "writing lock file `%s'", lockfile_path);
-
- apr_err = apr_file_close (f);
- if (apr_err)
- return svn_error_createf (apr_err, NULL,
- "closing lock file `%s'", lockfile_path);
- }
-
return SVN_NO_ERROR;
}
static svn_error_t *
-create_hooks (svn_repos_t *repos, const char *path, apr_pool_t *pool)
+create_hooks (svn_repos_t *repos, apr_pool_t *pool)
{
const char *this_path, *contents;
- apr_status_t apr_err;
- apr_file_t *f;
- apr_size_t written;
/* Create the hook directory. */
- SVN_ERR_W (create_repos_dir (path, pool),
+ SVN_ERR_W (create_repos_dir (repos->hook_path, pool),
"creating hook directory");
/*** Write a default template for each standard hook file. */
@@ -208,12 +220,6 @@
svn_repos_start_commit_hook (repos, pool),
SVN_REPOS__HOOK_DESC_EXT);
- SVN_ERR_W (svn_io_file_open (&f, this_path,
- (APR_WRITE | APR_CREATE | APR_EXCL),
- APR_OS_DEFAULT,
- pool),
- "creating hook file");
-
contents =
"#!/bin/sh"
APR_EOL_STR
@@ -298,15 +304,8 @@
"exit 0"
APR_EOL_STR;
- apr_err = apr_file_write_full (f, contents, strlen (contents), &written);
- if (apr_err)
- return svn_error_createf (apr_err, NULL,
- "writing hook file `%s'", this_path);
-
- apr_err = apr_file_close (f);
- if (apr_err)
- return svn_error_createf (apr_err, NULL,
- "closing hook file `%s'", this_path);
+ SVN_ERR_W (svn_io_file_create (this_path, contents, pool),
+ "creating start-commit hook");
} /* end start-commit hook */
/* Pre-commit hook. */
@@ -315,12 +314,6 @@
svn_repos_pre_commit_hook (repos, pool),
SVN_REPOS__HOOK_DESC_EXT);
- SVN_ERR_W (svn_io_file_open (&f, this_path,
- (APR_WRITE | APR_CREATE | APR_EXCL),
- APR_OS_DEFAULT,
- pool),
- "creating hook file");
-
contents =
"#!/bin/sh"
APR_EOL_STR
@@ -433,15 +426,8 @@
"exit 0"
APR_EOL_STR;
- apr_err = apr_file_write_full (f, contents, strlen (contents), &written);
- if (apr_err)
- return svn_error_createf (apr_err, NULL,
- "writing hook file `%s'", this_path);
-
- apr_err = apr_file_close (f);
- if (apr_err)
- return svn_error_createf (apr_err, NULL,
- "closing hook file `%s'", this_path);
+ SVN_ERR_W (svn_io_file_create (this_path, contents, pool),
+ "creating pre-commit hook");
} /* end pre-commit hook */
@@ -451,12 +437,6 @@
svn_repos_pre_revprop_change_hook (repos, pool),
SVN_REPOS__HOOK_DESC_EXT);
- SVN_ERR_W (svn_io_file_open (&f, this_path,
- (APR_WRITE | APR_CREATE | APR_EXCL),
- APR_OS_DEFAULT,
- pool),
- "creating hook file");
-
contents =
"#!/bin/sh"
APR_EOL_STR
@@ -562,15 +542,8 @@
"exit 1"
APR_EOL_STR;
- apr_err = apr_file_write_full (f, contents, strlen (contents), &written);
- if (apr_err)
- return svn_error_createf (apr_err, NULL,
- "writing hook file `%s'", this_path);
-
- apr_err = apr_file_close (f);
- if (apr_err)
- return svn_error_createf (apr_err, NULL,
- "closing hook file `%s'", this_path);
+ SVN_ERR_W (svn_io_file_create (this_path, contents, pool),
+ "creating pre-revprop-change hook");
} /* end pre-revprop-change hook */
@@ -580,12 +553,6 @@
svn_repos_post_commit_hook (repos, pool),
SVN_REPOS__HOOK_DESC_EXT);
- SVN_ERR_W (svn_io_file_open (&f, this_path,
- (APR_WRITE | APR_CREATE | APR_EXCL),
- APR_OS_DEFAULT,
- pool),
- "creating hook file");
-
contents =
"#!/bin/sh"
APR_EOL_STR
@@ -665,15 +632,8 @@
"log-commit.py --repository \"$REPOS\" --revision \"$REV\""
APR_EOL_STR;
- apr_err = apr_file_write_full (f, contents, strlen (contents), &written);
- if (apr_err)
- return svn_error_createf (apr_err, NULL,
- "writing hook file `%s'", this_path);
-
- apr_err = apr_file_close (f);
- if (apr_err)
- return svn_error_createf (apr_err, NULL,
- "closing hook file `%s'", this_path);
+ SVN_ERR_W (svn_io_file_create (this_path, contents, pool),
+ "creating post-commit hook");
} /* end post-commit hook */
@@ -683,12 +643,6 @@
svn_repos_post_revprop_change_hook (repos, pool),
SVN_REPOS__HOOK_DESC_EXT);
- SVN_ERR_W (svn_io_file_open (&f, this_path,
- (APR_WRITE | APR_CREATE | APR_EXCL),
- APR_OS_DEFAULT,
- pool),
- "creating hook file");
-
contents =
"#!/bin/sh"
APR_EOL_STR
@@ -776,61 +730,13 @@
"propchange-email.pl \"$REPOS\" \"$REV\" \"$USER\" \"$PROPNAME\"
watchers@example.org"
APR_EOL_STR;
- apr_err = apr_file_write_full (f, contents, strlen (contents), &written);
- if (apr_err)
- return svn_error_createf (apr_err, NULL,
- "writing hook file `%s'", this_path);
-
- apr_err = apr_file_close (f);
- if (apr_err)
- return svn_error_createf (apr_err, NULL,
- "closing hook file `%s'", this_path);
+ SVN_ERR_W (svn_io_file_create (this_path, contents, pool),
+ "creating post-revprop-change hook");
} /* end post-revprop-change hook */
return SVN_NO_ERROR;
}
-
-/* This code manages repository locking, which is motivated by the
- * need to support DB_RUN_RECOVERY. Here's how it works:
- *
- * Every accessor of a repository's database takes out a shared lock
- * on the repository -- both readers and writers get shared locks, and
- * there can be an unlimited number of shared locks simultaneously.
- *
- * Sometimes, a db access returns the error DB_RUN_RECOVERY. When
- * this happens, we need to run svn_fs_berkeley_recover() on the db
- * with no other accessors present. So we take out an exclusive lock
- * on the repository. From the moment we request the exclusive lock,
- * no more shared locks are granted, and when the last shared lock
- * disappears, the exclusive lock is granted. As soon as we get it,
- * we can run recovery.
- *
- * We assume that once any berkeley call returns DB_RUN_RECOVERY, they
- * all do, until recovery is run.
- */
-
-/* Clear all outstanding locks on ARG, an open apr_file_t *. */
-static apr_status_t
-clear_and_close (void *arg)
-{
- apr_status_t apr_err;
- apr_file_t *f = arg;
-
- /* Remove locks. */
- apr_err = apr_file_unlock (f);
- if (apr_err)
- return apr_err;
-
- /* Close the file. */
- apr_err = apr_file_close (f);
- if (apr_err)
- return apr_err;
-
- return 0;
-}
-
-
static void
init_repos_dirs (svn_repos_t *repos, const char *path, apr_pool_t *pool)
{
@@ -856,14 +762,13 @@
"creating DAV sandbox dir");
/* Create the lock directory. */
- SVN_ERR (create_locks (repos, repos->lock_path, pool));
+ SVN_ERR (create_locks (repos, pool));
/* Create the hooks directory. */
- SVN_ERR (create_hooks (repos, repos->hook_path, pool));
+ SVN_ERR (create_hooks (repos, pool));
/* Write the top-level README file. */
{
- apr_status_t apr_err;
apr_file_t *readme_file = NULL;
const char *readme_file_name
= svn_path_join (path, SVN_REPOS__README, pool);
@@ -889,20 +794,8 @@
"Visit http://subversion.tigris.org/ for more information."
APR_EOL_STR;
- SVN_ERR (svn_io_file_open (&readme_file, readme_file_name,
- APR_WRITE | APR_CREATE, APR_OS_DEFAULT,
- pool));
-
- apr_err = apr_file_write_full (readme_file, readme_contents,
- strlen (readme_contents), NULL);
- if (apr_err)
- return svn_error_createf (apr_err, 0,
- "writing to `%s'", readme_file_name);
-
- apr_err = apr_file_close (readme_file);
- if (apr_err)
- return svn_error_createf (apr_err, 0,
- "closing `%s'", readme_file_name);
+ SVN_ERR_W (svn_io_file_create (readme_file_name, readme_contents, pool),
+ "creating readme file");
}
/* Write the top-level FORMAT file. */
@@ -1024,7 +917,6 @@
svn_boolean_t open_fs,
apr_pool_t *pool)
{
- apr_status_t apr_err;
svn_repos_t *repos;
/* Verify the validity of our repository format. */
@@ -1043,37 +935,15 @@
/* Locking. */
{
const char *lockfile_path;
- apr_file_t *lockfile_handle;
- apr_int32_t flags;
+ svn_boolean_t exclusive = FALSE;
/* Get a filehandle for the repository's db lockfile. */
lockfile_path = svn_repos_db_lockfile (repos, pool);
- flags = APR_READ;
if (locktype == APR_FLOCK_EXCLUSIVE)
- flags |= APR_WRITE;
- SVN_ERR_W (svn_io_file_open (&lockfile_handle, lockfile_path,
- flags, APR_OS_DEFAULT, pool),
+ exclusive = TRUE;
+
+ SVN_ERR_W (svn_io_file_lock (lockfile_path, exclusive, pool),
"get_repos: error opening db lockfile");
-
- /* Get some kind of lock on the filehandle. */
- apr_err = apr_file_lock (lockfile_handle, locktype);
- if (apr_err)
- {
- const char *lockname = "unknown";
- if (locktype == APR_FLOCK_SHARED)
- lockname = "shared";
- if (locktype == APR_FLOCK_EXCLUSIVE)
- lockname = "exclusive";
-
- return svn_error_createf
- (apr_err, NULL,
- "get_repos: %s db lock on repository `%s' failed",
- lockname, path);
- }
-
- /* Register an unlock function for the lock. */
- apr_pool_cleanup_register (pool, lockfile_handle, clear_and_close,
- apr_pool_cleanup_null);
}
/* Open up the Berkeley filesystem only after obtaining the lock. */
@@ -1147,6 +1017,25 @@
}
+/* This code uses repository locking, which is motivated by the
+ * need to support DB_RUN_RECOVERY. Here's how it works:
+ *
+ * Every accessor of a repository's database takes out a shared lock
+ * on the repository -- both readers and writers get shared locks, and
+ * there can be an unlimited number of shared locks simultaneously.
+ *
+ * Sometimes, a db access returns the error DB_RUN_RECOVERY. When
+ * this happens, we need to run svn_fs_berkeley_recover() on the db
+ * with no other accessors present. So we take out an exclusive lock
+ * on the repository. From the moment we request the exclusive lock,
+ * no more shared locks are granted, and when the last shared lock
+ * disappears, the exclusive lock is granted. As soon as we get it,
+ * we can run recovery.
+ *
+ * We assume that once any berkeley call returns DB_RUN_RECOVERY, they
+ * all do, until recovery is run.
+ */
+
svn_error_t *
svn_repos_recover (const char *path,
apr_pool_t *pool)
@@ -1245,3 +1134,150 @@
return SVN_NO_ERROR;
}
+
+/** Hot copy structure copy context.
+ */
+struct hotcopy_ctx_t {
+ const char *dest; /* target location to construct */
+ unsigned int src_len; /* len of the source path*/
+};
+
+/** Called by (svn_io_dir_walk).
+ * Copies the repository structure with the exception of
+ * @c SVN_REPOS__DB_DIR and @c SVN_REPOS__LOCK_DIR.
+ * Those directories are handled separately.
+ * @a baton is a pointer to (struct hotcopy_ctx_t) specifying
+ * destination path to copy to and the length of the source path.
+ *
+ * @copydoc svn_io_dir_walk()
+ */
+static svn_error_t *hotcopy_structure (void *baton,
+ const char *path,
+ const apr_finfo_t *finfo,
+ apr_pool_t *pool)
+{
+ const struct hotcopy_ctx_t *ctx = ((struct hotcopy_ctx_t *) baton);
+ const char *sub_path;
+ const char *target;
+ svn_boolean_t fs_dir = FALSE;
+
+ if (strlen (path) == ctx->src_len)
+ {
+ sub_path = "";
+ }
+ else
+ {
+ sub_path = &path[ctx->src_len+1];
+
+ /* Check if we are inside db directory and if so skip it */
+ if (svn_path_compare_paths(
+ svn_path_get_longest_ancestor (SVN_REPOS__DB_DIR, sub_path, pool),
+ SVN_REPOS__DB_DIR) == 0)
+ return SVN_NO_ERROR;
+
+ if (svn_path_compare_paths(
+ svn_path_get_longest_ancestor (SVN_REPOS__LOCK_DIR, sub_path, pool),
+ SVN_REPOS__LOCK_DIR) == 0)
+ return SVN_NO_ERROR;
+ }
+
+ target = svn_path_join (ctx->dest, sub_path, pool);
+
+ if (finfo->filetype == APR_DIR)
+ {
+ SVN_ERR (create_repos_dir (target, pool));
+ }
+ else if (finfo->filetype == APR_REG)
+ {
+
+ SVN_ERR(svn_io_copy_file(path, target, TRUE, pool));
+ }
+
+ return SVN_NO_ERROR;
+}
+
+
+/** Obtain a lock on db logs lock file. Create one if it does not exist.
+ */
+static svn_error_t *
+lock_db_logs_file (svn_repos_t *repos,
+ svn_boolean_t exclusive,
+ apr_pool_t *pool)
+{
+ const char * lock_file = svn_repos_db_logs_lockfile (repos, pool);
+
+ /* Try to create the lock file, in case it is missing, as for
+ repositories created before the hotcopy functionality existed. */
+ svn_error_clear (create_db_logs_lock (repos, pool));
+
+ SVN_ERR (svn_io_file_lock (lock_file, exclusive, pool));
+
+ return SVN_NO_ERROR;
+}
+
+
+/* Make a copy of a repository with hot backup of fs. */
+svn_error_t *
+svn_repos_hotcopy (const char *src_path,
+ const char *dst_path,
+ svn_boolean_t clean_logs,
+ apr_pool_t *pool)
+{
+ svn_repos_t *src_repos;
+ svn_repos_t *dst_repos;
+ struct hotcopy_ctx_t hotcopy_context;
+
+ /* Try to open original repository */
+ SVN_ERR (get_repos (&src_repos, src_path,
+ APR_FLOCK_SHARED,
+ FALSE, /* don't try to open the db yet. */
+ pool));
+
+ /* If we are going to clean logs, then get an exclusive lock on
+ db-logs.lock, to ensure that no one else will work with logs.
+
+ If we are just copying, then get a shared lock to ensure that
+ no one else will clean the logs while we are copying them. */
+
+ SVN_ERR (lock_db_logs_file (src_repos, clean_logs, pool));
+
+ /* Copy the repository to a new path, with exception of
+ specially handled directories */
+
+ hotcopy_context.dest = dst_path;
+ hotcopy_context.src_len = strlen (src_path);
+ SVN_ERR (svn_io_dir_walk (src_path,
+ 0,
+ hotcopy_structure,
+ &hotcopy_context,
+ pool));
+
+ /* Prepare the dst_repos object so that we can create locks and
+ then open the repository. */
+
+ dst_repos = apr_pcalloc (pool, sizeof (*dst_repos));
+
+ init_repos_dirs (dst_repos, dst_path, pool);
+
+ SVN_ERR (create_locks (dst_repos, pool));
+
+ SVN_ERR (create_repos_dir (dst_repos->db_path, pool));
+
+ /* Open the repository; so far we have only initialized the directories.
+ The code above is a workaround because the lock creation functions
+ expect a pointer to an (svn_repos_t) with initialized paths. */
+
+ /* Exclusively lock the new repository.
+ No one should be accessing it at the moment */
+ SVN_ERR (get_repos (&dst_repos, dst_path,
+ APR_FLOCK_EXCLUSIVE,
+ FALSE, /* don't try to open the db yet. */
+ pool));
+
+
+ SVN_ERR (svn_fs_hotcopy_berkeley (src_repos->db_path, dst_repos->db_path,
+ clean_logs, pool));
+
+ return SVN_NO_ERROR;
+}
+
Index: subversion/libsvn_repos/repos.h
===================================================================
--- subversion/libsvn_repos/repos.h (revision 7052)
+++ subversion/libsvn_repos/repos.h (working copy)
@@ -46,6 +46,7 @@
/* Things for which we keep lockfiles. */
#define SVN_REPOS__DB_LOCKFILE "db.lock" /* Our Berkeley lockfile. */
+#define SVN_REPOS__DB_LOGS_LOCKFILE "db-logs.lock" /* BDB logs lockfile. */
/* In the repository hooks directory, look for these files. */
#define SVN_REPOS__HOOK_START_COMMIT "start-commit"
Index: tools/backup/hot-backup.py.in
===================================================================
--- tools/backup/hot-backup.py.in (revision 7052)
+++ tools/backup/hot-backup.py.in (working copy)
@@ -30,12 +30,9 @@
# Path to svnlook utility
svnlook = "@SVN_BINDIR@/svnlook"
-# Path to db_archive program
-db_archive = "/usr/local/BerkeleyDB.4.0/bin/db_archive"
+# Path to svnadmin program with the hotcopy subcommand
+svnadmin = "@SVN_BINDIR@/svnadmin"
-# Path to db_recover progrem
-db_recover = "/usr/local/BerkeleyDB.4.0/bin/db_recover"
-
# Number of backups to keep around (0 for "keep them all")
num_backups = 64
@@ -101,7 +98,7 @@
print "Youngest revision is", youngest
-### Step 2: copy the whole repository structure.
+### Step 2: Find next available backup path
backup_subdir = os.path.join(backup_dir, repo + "-" + youngest)
@@ -124,93 +121,20 @@
else:
backup_subdir = os.path.join(backup_dir, repo + "-" + youngest + "-1")
-print "Backing up repository to '" + backup_subdir + "'..."
-shutil.copytree(repo_dir, backup_subdir)
-print "Done."
-
-
-### Step 3: re-copy the Berkeley logfiles. They must *always* be
+### Step 3: Ask subversion to make a hot copy of a repository.
### copied last.
-infile, outfile, errfile = os.popen3(db_archive + " -l -h "
- + os.path.join(repo_dir, "db"))
-stdout_lines = outfile.readlines()
-stderr_lines = errfile.readlines()
-outfile.close()
-infile.close()
-errfile.close()
+print "Backing up repository to '" + backup_subdir + "'..."
+err_code = os.spawnl(os.P_WAIT, svnadmin, svnadmin, "hotcopy", repo_dir,
+ backup_subdir, "--clean-logs")
+if(err_code != 0):
+ print "Unable to backup the repository."
+ sys.exit(err_code)
+else:
+ print "Done."
-print "Re-copying logfiles:"
-for item in stdout_lines:
- logfile = string.strip(item)
- src = os.path.join(repo_dir, "db", logfile)
- dst = os.path.join(backup_subdir, "db", logfile)
- print " Re-copying logfile '" + logfile + "'..."
- shutil.copy(src, dst)
-
-print "Backup completed."
-
-
-### Step 4: put the archived database in a consistent state and remove
-### the shared-memory environment files.
-
-infile, outfile, errfile = os.popen3(db_recover + " -h "
- + os.path.join(backup_subdir, "db"))
-stdout_lines = outfile.readlines()
-stderr_lines = errfile.readlines()
-outfile.close()
-infile.close()
-errfile.close()
-
-print "Running db_recover on the archived database:"
-map(sys.stdout.write, stdout_lines)
-map(sys.stdout.write, stderr_lines)
-
-print "Done."
-
-
-### Step 5: look for a write `lock' file in the backup area, else make one.
-
-lockpath = os.path.join(backup_dir, repo + 'lock')
-if os.path.exists(lockpath):
- print "Cannot cleanup logs: lockfile already exists in", backup_dir
- sys.exit(0)
-
-print "Writing lock for logfile cleanup..."
-fp = open(lockpath, 'a') # open in (a)ppend mode
-fp.write("cleaning logfiles for repository " + repo_dir)
-fp.close()
-
-
-### Step 6: ask db_archive which of the live logfiles can be
-### expunged, and remove them.
-
-infile, outfile, errfile = os.popen3(db_archive + " -a -h "
- + os.path.join(repo_dir, "db"))
-stdout_lines = outfile.readlines()
-stderr_lines = errfile.readlines()
-outfile.close()
-infile.close()
-errfile.close()
-
-print "Cleaning obsolete logfiles:"
-
-for item in stdout_lines:
- logfile = string.strip(item)
- print " Deleting '", logfile, "'..."
- os.unlink(logfile)
-
-print "Done."
-
-
-### Step 7: remove the write lock.
-
-os.unlink(lockpath)
-print "Lock removed. Cleanup complete."
-
-
-### Step 8: finally, remove all repository backups other than the last
+### Step 4: finally, remove all repository backups other than the last
### NUM_BACKUPS.
if num_backups > 0: