
Re: [SVNDev] Re: Problems with transaction file "next-ids" on Windows

From: Philip Martin <philip.martin_at_wandisco.com>
Date: Thu, 04 Aug 2011 09:41:14 +0100

Mathias Weinert <wein_at_mccw.de> writes:

> Daniel Shahaf <d.s_at_daniel.shahaf.name> wrote:
>
>> Philip Martin wrote on Wed, Jul 20, 2011 at 15:12:51 +0100:
>>> Daniel Shahaf <d.s_at_daniel.shahaf.name> writes:
>>>
>>> > Mathias Weinert wrote on Wed, Jul 20, 2011 at 14:59:23 +0200:
>>> >>
>>> >> each time when I am loading a certain dump file on Windows which
>>> >> contains one revision with over 100K changed paths I get the error
>>> >> "Can't open file
>>> >> 'c:\Repositories\test\db\transactions\5445-479.txn\next-ids': The
>>> >> requested operation cannot be performed on a file with a user-mapped
>>> >> section open.".

>>> The current implementation writes the file in place:
>>>
>>> static svn_error_t *
>>> write_next_ids(svn_fs_t *fs,
>>>                const char *txn_id,
>>>                const char *node_id,
>>>                const char *copy_id,
>>>                apr_pool_t *pool)
>>> {
>>>   apr_file_t *file;
>>>   svn_stream_t *out_stream;
>>>
>>>   SVN_ERR(svn_io_file_open(&file, path_txn_next_ids(fs, txn_id, pool),
>>>                            APR_WRITE | APR_TRUNCATE,
>>>                            APR_OS_DEFAULT, pool));
>>>
>>>   out_stream = svn_stream_from_aprfile2(file, TRUE, pool);
>>>
>>>   SVN_ERR(svn_stream_printf(out_stream, pool, "%s %s\n", node_id, copy_id));
>>>
>>>   SVN_ERR(svn_stream_close(out_stream));
>>>   return svn_io_file_close(file, pool);
>>> }
>>>
>>> Is there any reason we don't switch to our standard pattern: write to a
>>> temp file and rename? That would give us Subversion's standard retry
>>> loop -- would that fix "requested operation cannot be performed"?

> I replaced svn_io_file_open, svn_stream_printf and svn_stream_close
> with svn_io_write_unique and move_into_place (see attached patch).
> Although this works correctly, it performs badly: loading a dump from
> a small repository whose main commit touches about 2000 files now
> takes about 400s, whereas the original version took about 100s.

svn_io_write_unique flushes to disk (except in 1.7 on Windows), which
probably explains the slowdown.

Looking at this again, the call to svn_io_file_open already has a retry
loop, so the original error seems to imply that either a) the file is in
use for longer than the total retry delay, or b) some error code is
missing from the retry logic.

-- 
uberSVN: Apache Subversion Made Easy
http://www.uberSVN.com
Received on 2011-08-04 10:42:10 CEST

This is an archived mail posted to the Subversion Dev mailing list.