
Re: Problems with transaction file "next-ids" on Windows

From: Mathias Weinert <wein_at_mccw.de>
Date: Thu, 04 Aug 2011 07:37:07 +0200

Mathias Weinert <wein_at_mccw.de> wrote:

> Daniel Shahaf <d.s_at_daniel.shahaf.name> wrote:
>
>> Philip Martin wrote on Wed, Jul 20, 2011 at 15:12:51 +0100:
>>> Daniel Shahaf <d.s_at_daniel.shahaf.name> writes:
>>>
>>>> Mathias Weinert wrote on Wed, Jul 20, 2011 at 14:59:23 +0200:
>>>>>
>>>>> each time I load a certain dump file on Windows that contains
>>>>> one revision with over 100K changed paths, I get the error
>>>>> "Can't open file
>>>>> 'c:\Repositories\test\db\transactions\5445-479.txn\next-ids': The
>>>>> requested operation cannot be performed on a file with a user-mapped
>>>>> section open.". After searching this and other mailing list
>>>>> archives I found that I am not the only one to encounter this
>>>>> problem, and that in most cases a virus scanner was the cause.
>>>>> And indeed, adding next-ids to the scanner's exclusion list
>>>>> solved the problem.
>>>>>
>>>>> But now I wonder whether svnadmin couldn't handle this case a bit
>>>>> more elegantly. IMHO it would make sense not to abort the load
>>>>> immediately when this specific error occurs, but to retry after
>>>>> waiting half a second or so (some other users reported getting the
>>>>> error "The process cannot access the file because it is being used
>>>>> by another process."). If we still can't access next-ids after,
>>>>> say, 5 attempts with a short pause between tries, we can quit the
>>>>> load process then.
>>>>
>>>> It would be good to solve this now, as it is one of the concerns
>>>> with the (partially implemented) design for revprop packing, due
>>>> for release in 1.8.
>>>
>>> The current implementation writes the file in place:
>>>
>>> static svn_error_t *
>>> write_next_ids(svn_fs_t *fs,
>>>                const char *txn_id,
>>>                const char *node_id,
>>>                const char *copy_id,
>>>                apr_pool_t *pool)
>>> {
>>>   apr_file_t *file;
>>>   svn_stream_t *out_stream;
>>>
>>>   SVN_ERR(svn_io_file_open(&file, path_txn_next_ids(fs, txn_id, pool),
>>>                            APR_WRITE | APR_TRUNCATE,
>>>                            APR_OS_DEFAULT, pool));
>>>
>>>   out_stream = svn_stream_from_aprfile2(file, TRUE, pool);
>>>
>>>   SVN_ERR(svn_stream_printf(out_stream, pool, "%s %s\n", node_id, copy_id));
>>>
>>>   SVN_ERR(svn_stream_close(out_stream));
>>>   return svn_io_file_close(file, pool);
>>> }
>>>
>>> Is there any reason we don't switch to our standard pattern: write to a
>>> temp file and rename? That would give us Subversion's standard retry
>>> loop -- would that fix "requested operation cannot be performed"?
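>>>
>>> For concreteness -- a sketch, not a patch, and assuming PATH and
>>> CONTENTS are already set up -- the pattern in public-API terms
>>> would be roughly:
>>>
>>>   const char *tmp_path;
>>>
>>>   /* Write the new contents to a unique temp file in the same
>>>      directory as PATH... */
>>>   SVN_ERR(svn_io_write_unique(&tmp_path,
>>>                               svn_dirent_dirname(path, pool),
>>>                               contents, strlen(contents),
>>>                               svn_io_file_del_none, pool));
>>>
>>>   /* ...then rename it over PATH.  svn_io_file_rename() is where
>>>      our Win32 retry loop lives. */
>>>   SVN_ERR(svn_io_file_rename(tmp_path, path, pool));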
>>
>> fs_fs.c:move_into_file() already does the rename loop, so no objection.
>> (assuming we document that the use of move_into_file() is for
>> performance and virus scanners rather than for concurrent reader
>> correctness)
>>
>
> I replaced svn_io_file_open, svn_stream_printf and svn_stream_close
> with svn_io_write_unique and move_into_place (see attached patch).
> Although this works correctly, the performance is bad: loading a
> dump of a small repo whose main content is a single commit of 2000
> files now takes about 400s, while the original version took only
> about 100s.
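>
> (The patch itself is in the attachment and not reproduced here. As a
> rough guess at its shape only -- move_into_place() being the existing
> static helper in fs_fs.c, assuming its (old, new, perms-reference,
> pool) signature -- write_next_ids() would become something like:)
>
> static svn_error_t *
> write_next_ids(svn_fs_t *fs,
>                const char *txn_id,
>                const char *node_id,
>                const char *copy_id,
>                apr_pool_t *pool)
> {
>   const char *final_path = path_txn_next_ids(fs, txn_id, pool);
>   const char *tmp_path;
>   const char *contents = apr_psprintf(pool, "%s %s\n", node_id, copy_id);
>
>   /* Write a unique temp file next to next-ids, then move it into
>      place; the extra create-plus-rename on every call is the likely
>      source of the slowdown. */
>   SVN_ERR(svn_io_write_unique(&tmp_path,
>                               svn_dirent_dirname(final_path, pool),
>                               contents, strlen(contents),
>                               svn_io_file_del_none, pool));
>
>   return move_into_place(tmp_path, final_path, final_path, pool);
> }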

It was 20,000 files, not 2000.
Received on 2011-08-04 07:37:54 CEST
