Hi,
From: Bert Huijben [mailto:bert_at_qqmail.nl]
> From: Markus Schaber [mailto:m.schaber_at_codesys.com]
> > From: Philip Martin [mailto:philip.martin_at_wandisco.com]
> > > Philip Martin <philip.martin_at_wandisco.com> writes:
> > > > Stefan Fuhrmann <stefan.fuhrmann_at_wandisco.com> writes:
> > > >> But we do write the database strictly in revision order.
> > > >> So we could simply read the latest record once upon opening
> > > >> the DB. If that refers to a future revision, read "current" and compare.
> > > >> If the DB is still "in the future", abort the txn, i.e. prevent any
> > > >> future commit until rep-cache.db gets deleted by the admin.
> > > >
> > > > That might work. The rep-cache for a new revision is not written
> > > > strictly in revision order but is written after updating HEAD. So
> > > > such a check would not be as strong as "highest revision" but
> > > > would be a useful extra check if we can implement it efficiently
> > > > without a table scan. Is sqlite3_last_insert_rowid() the function we want?
> > >
> > > Bert pointed out that is the wrong function and there isn't really a suitable
> > > function. So to do this check we need something like Julian's
> > > suggestion: a one row table containing the most recent revision added
> > > to the rep-cache that gets updated with each write. It doesn't
> > > catch every possible failure but it does catch some.
> >
> > To increase the backwards compatibility: Could this row be updated by
> > a trigger?
>
> Keeping backwards compatibility here would require a time machine :-)
>
> We would have to change the database schema, which requires a format bump...
> (not just for the trigger; also for the new table)
But AFAICS, adding a new table that is maintained by a trigger is backwards compatible: existing (old) code continues to work fine.
If the table is instead maintained explicitly by the SVN code, old SVN code won't keep the value up to date, so it will often be stale, increasing the risk of a "miss".
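To make the point concrete, here is a minimal sketch (the table and column names are illustrative assumptions, not the actual rep-cache.db schema): the trigger keeps the marker row current even when the inserting code knows nothing about it.

```python
import sqlite3

# Illustrative schema -- names are assumptions, not SVN's real layout.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE rep_cache (hash TEXT PRIMARY KEY, revision INTEGER);
CREATE TABLE max_rev (id INTEGER PRIMARY KEY CHECK (id = 1),
                      revision INTEGER NOT NULL);
INSERT INTO max_rev VALUES (1, 0);

-- The trigger updates max_rev on every insert, so code that predates
-- the new table still keeps it accurate.
CREATE TRIGGER track_max_rev AFTER INSERT ON rep_cache
BEGIN
  UPDATE max_rev SET revision = MAX(revision, NEW.revision) WHERE id = 1;
END;
""")

# "Old" client code, unaware of max_rev, only inserts into rep_cache:
conn.execute("INSERT INTO rep_cache VALUES ('abc', 7)")
conn.execute("INSERT INTO rep_cache VALUES ('def', 3)")

(rev,) = conn.execute("SELECT revision FROM max_rev WHERE id = 1").fetchone()
print(rev)  # -> 7

# The open-time check from the thread: if max_rev exceeds HEAD, the
# rep-cache is "from the future" and commits should be refused.
head = 5
print("stale" if rev > head else "ok")  # -> stale (marker r7 > HEAD r5)
```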
> And a trigger would perform this for any file update in any revision; not
> just once per revision.
>
> If we just update it after writing all revisions (and right before the
> existing sqlite transaction is committed) there should only be a single db
> page update, so it should only make sqlite a very tiny bit slower. With a
> trigger the intermediate state in the sqlite log would grow more than a bit
> during the transaction.
I'm not sure whether the performance difference is that big. Benchmark, anyone? :-)
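For what it's worth, a rough micro-benchmark is easy to sketch (again with illustrative schema names; real numbers would have to come from SVN itself). It contrasts the per-row trigger against a single explicit UPDATE right before commit:

```python
import sqlite3
import time

def run(use_trigger, n=20000):
    """Insert n rows in one transaction; maintain max_rev either via a
    trigger (fires per row) or via one explicit UPDATE at the end."""
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE rep_cache (hash TEXT PRIMARY KEY, revision INTEGER);
    CREATE TABLE max_rev (id INTEGER PRIMARY KEY, revision INTEGER NOT NULL);
    INSERT INTO max_rev VALUES (1, 0);
    """)
    if use_trigger:
        conn.execute("""
        CREATE TRIGGER t AFTER INSERT ON rep_cache
        BEGIN
          UPDATE max_rev SET revision = MAX(revision, NEW.revision)
          WHERE id = 1;
        END""")
    start = time.perf_counter()
    with conn:  # one transaction, committed on exit
        for i in range(n):
            conn.execute("INSERT INTO rep_cache VALUES (?, ?)", (str(i), i))
        if not use_trigger:
            conn.execute("UPDATE max_rev SET revision = ? WHERE id = 1",
                         (n - 1,))
    elapsed = time.perf_counter() - start
    (rev,) = conn.execute("SELECT revision FROM max_rev").fetchone()
    return elapsed, rev

t_trig, rev_trig = run(True)
t_expl, rev_expl = run(False)
print(f"trigger:  {t_trig:.3f}s (max_rev = {rev_trig})")
print(f"explicit: {t_expl:.3f}s (max_rev = {rev_expl})")
```

Both variants end with the same marker value; only the amount of intermediate work inside the transaction differs.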
Best regards
Markus Schaber
Received on 2014-01-28 08:23:16 CET