On Wed, Sep 10, 2014 at 5:35 PM, Ivan Zhakov <ivan_at_visualsvn.com> wrote:
> I accidentally ran the Subversion test suite under an elevated local
> administrator account on Windows and got failures in the named-atomics
> tests:
> svn_tests: E200006: assertion 'value == 42' failed at
> FAIL: named_atomic-test.exe 1: basic r/w access to a single atomic
> svn_tests: E200006: assertion 'value == 42 * HUGE_VALUE' failed at
> FAIL: named_atomic-test.exe 2: atomics must be 64 bits
> svn_tests: E200006: assertion 'value1 == 46 * HUGE_VALUE' failed at
> FAIL: named_atomic-test.exe 3: basic r/w access to multiple atomics
> The investigation revealed the following:
> 1. The named_atomics tests are skipped unless executed under an elevated
> local administrator account on Windows.
What worries me more is that our Windows build bots don't report
those tests as "SKIP" - although the common init_test_shm() function
in named_atomics-test.c will return SVN_ERR_TEST_SKIPPED.
Are they reported as skipped in your setup when you run the tests
without local admin rights?
> 2. The named atomics never worked properly on Windows.
Yes, that is probably the case. The only good bit about the whole
thing is that the tests themselves were checking for the right kind
of problem.
> A slightly more detailed explanation:
> Originally the "named atomics" framework used shared memory (via the
> apr_shm_* API) as its backing store. Creating shared memory on Windows
> requires administrative permissions, so svn_named_atomic__is_supported()
> had to check whether the application has the necessary permissions to
> create shared memory.
> In r1404112 the "named atomics" framework was rewritten to use apr_mmap
> instead of apr_shm_*, but the code in svn_named_atomic__is_supported() was
> not updated to reflect this change.
> The bug itself was introduced in r1327458, where the call to
> InterlockedAdd64() was replaced with a call to InterlockedExchangeAdd64().
> But the two functions have different return values: InterlockedAdd64()
> returns the result of the operation, while InterlockedExchangeAdd64()
> returns the original value.
That analysis is correct.
> I see two ways to resolve these test failures:
> 1. Fix the code somehow.
Which would not be too hard, with something like this:
--- subversion/libsvn_subr/named_atomic.c (revision 1623003)
+++ subversion/libsvn_subr/named_atomic.c (working copy)
@@ -105,7 +105,7 @@
 #define synched_read(mem) *mem
 #define synched_write(mem, value) InterlockedExchange64(mem, value)
-#define synched_add(mem, delta) InterlockedExchangeAdd64(mem, delta)
+#define synched_add(mem, delta) (InterlockedExchangeAdd64(mem, delta) + (delta))
 #define synched_cmpxchg(mem, value, comperand) \
   InterlockedCompareExchange64(mem, value, comperand)
@@ -330,31 +330,9 @@ svn_named_atomic__is_supported(void)
-  static svn_tristate_t result = svn_tristate_unknown;
-
-  if (result == svn_tristate_unknown)
-    {
-      /* APR SHM implementation requires the creation of global objects */
-      HANDLE handle = CreateFileMappingA(INVALID_HANDLE_VALUE,
-      if (handle != NULL)
-        result = svn_tristate_true;
-      else
-        result = svn_tristate_false;
-    }
-
-  return result == svn_tristate_true;
-#endif /* _WIN32 */
> 2. Remove it, since the "named atomics" framework is only used for the
> currently disabled revprop caching.
... I'm +1 on getting rid of the SHM code altogether (we are using MMAP to
get shared memory now). It turned out to be a poor choice, as all sorts of
platform specifics caused us to deviate further and further from the
initial APR-based design. Some of the Windows-specific issues, e.g. the
file retry races in the init code, should have been reported earlier.
FWIW, that is a good lesson for any future "repo server" design: don't use
any explicit form of shared memory.
> Personally, I don't see a reason to spend resources fixing unused code,
> especially since the code on the 'revprop-caching-ng' branch removes it
> anyway. Any other opinions?
No, I agree.
The revprop-caching-ng branch pretty much exactly removes the SHM
code. The remaining revprop cache update logic is quite small - about
twice the size of the FSFS instance ID patch. And even with that patch
applied to /trunk, if the tests should reveal some serious problem with
the revprop caching, we could still disable the feature in the same way
as we currently do with the old implementation.
Received on 2014-09-11 18:29:10 CEST