
Re: svn commits: r33357, r33358

From: Daniel Shahaf <d.s_at_daniel.shahaf.name>
Date: Sun, 5 Oct 2008 13:41:22 +0300 (Jerusalem Daylight Time)

Greg Stein wrote on Fri, 3 Oct 2008 at 03:50 -0700:
> On Thu, Oct 2, 2008 at 11:39 PM, Daniel Shahaf <d.s_at_daniel.shahaf.name> wrote:
> >> - place the checksum directly into this table, rather than a "text_id"
> >> field. due to operational issues, the text_id may not be a satisfiable
> >> foreign key (we may not want to insert a row).
> >
> > Can you explain this scenario? When do you see TEXT_ID not a valid
> > foreign key *and* CHECKSUM a valid foreign key (at the same time)?
> > Before this change, the schema was "id PRIMARY KEY, checksum NOT NULL".
>
> CHECKSUM might not be valid, but it is a known quantity when the row
> is inserted into NODE. Say, during a checkout, I get all the metadata
> for the file, but I haven't started to fetch it. So we create a row in
> NODE. Later, as we begin the fetching of the content, then we insert a
> row into TEXT_BASE.
>

IOW, the foreign key won't be enforced by SQLite in this case.
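For illustration, a minimal sketch of the difference (the table and column
names are placeholders lifted from this thread, not the committed schema):

    CREATE TABLE TEXT_BASE (
      checksum     TEXT NOT NULL PRIMARY KEY,  -- hex digest; also names the pristine file
      stored_size  INTEGER                     -- NULL until the fetch completes
    );

    -- NODE carries the checksum directly, with no REFERENCES clause, so the
    -- row can be inserted during checkout before any TEXT_BASE row exists.
    CREATE TABLE NODE (
      local_relpath  TEXT NOT NULL PRIMARY KEY,  -- hypothetical key, for the sketch only
      checksum       TEXT NOT NULL
    );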

> If we kept it as an ID, then we'd have to create a row in TEXT_BASE
> well before any intent of actually fetching that. Now, that said, the
> TEXT_BASE.STORED_SIZE would be NULL, so we already know it is not
> there or is incomplete. So that might actually be fine.

So, the question is whether to create the TEXT_BASE row before or after
fetching the file itself.

Suppose you check out two wc's of the same repos at the same time. If
the TEXT_BASE rows were created immediately, then as soon as the first wc
had fetched its list of checksums of files to download, the second wouldn't
download any of them; whereas, if TEXT_BASE is only updated after
a download completes, it's possible for both wc's to download the
same (identical) file simultaneously -- unnecessarily.

That said, I would probably download one wc, duplicate it, and run
'switch' on the duplicate, so that use-case might be invalid. But
I suspect that recording the intent to create the base file early might
have other benefits (even if I can't think of any right now).

> (and, in which case, we could go back to an ID style keying).
>

We'll still need the checksums to be unique (to store them in the
pristine/ dir). If the ID-keying is faster we can use it, but most
other arguments I can think of are in favour of using the checksums
directly. (Speaking of 'faster', might it be faster to store the
checksum in the db as an integer rather than as TEXT?)
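(On second thought, an MD5 or SHA-1 digest is 128 or 160 bits, so it can't
fit SQLite's 64-bit INTEGER anyway; the nearest compact form would be a BLOB
of the raw digest bytes -- a hypothetical variant of the sketch above:)

    CREATE TABLE TEXT_BASE (                   -- variant of the earlier sketch
      checksum     BLOB NOT NULL PRIMARY KEY,  -- raw digest bytes instead of hex TEXT
      stored_size  INTEGER
    );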

> Thoughts?
>
> >> + /* ### used to verify the pristine file is "proper". NULL if unknown,
> >> + ### and (thus) the pristine copy is incomplete/unusable. */
> >> + stored_size INTEGER,
> >
> > Do we need this?
>
> Yes. Remember that we do not have transactions across the table and
> the filesystem.

Noted.

> My plan is that we insert the row first, *then* we
> fetch the file into the filesystem. At the completion of that event,
> we stash the size back into the table. The presence of the size tells
> us what it *should* be (detect corruption), and it tells us that the
> file should be there [now].
>

I didn't catch that it also functions as a "download complete" flag.
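To restate the lifecycle as I now understand it (a sketch against the
placeholder schema above; :checksum and :actual_size are bind parameters):

    -- 1. Record the intent to fetch; stored_size stays NULL for now.
    INSERT INTO TEXT_BASE (checksum, stored_size) VALUES (:checksum, NULL);

    -- 2. ... fetch the file into the pristine/ directory ...

    -- 3. On completion, stash the size; non-NULL now means "complete".
    UPDATE TEXT_BASE SET stored_size = :actual_size WHERE checksum = :checksum;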

> It seems more problematic to fetch the content first, then insert a
> row. That's just kind of a gut feel... I don't have immediate reasons
> why offhand.
>

(see above: I also prefer inserting the row first, but I don't have good
reasons either)

> > Most of the time the pristine files don't change under our feet. (When
> > they do, they're corrupted.) So we can skip the size check and always
> > do only the more expensive check (calculating the checksum). And if we
> > skip the size check, we won't need stored_size in the database.
>
> We can do a quick apr_stat() to see if the file is present and the
> right size, and have a high confidence that it is usable.
>

I wouldn't trust the size alone: it's too easy to run a 'find | xargs sed'
that corrupts the contents without changing any sizes. That's why
I assumed we would checksum the base file before declaring it usable.
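(A stat-based check would just compare the on-disk size against the recorded
one -- something like the query below, against the sketch schema -- but only
re-checksumming actually proves the contents.)

    -- fetch the expected size, to compare against apr_stat()'s answer
    SELECT stored_size FROM TEXT_BASE WHERE checksum = :checksum;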

> If the stored_size is not even present, then the file is not going to
> be there, or it has only been partially-fetched.
>

I agree that it's useful for the partially-fetched case. But in that case
a boolean would work as well (just store an arbitrary non-NULL value at
the point you would store the size).
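Concretely (same sketch schema; 'fetched' is a hypothetical replacement
column, not anything in the commit):

    -- in place of "stored_size INTEGER":
    --   fetched  INTEGER NOT NULL DEFAULT 0
    UPDATE TEXT_BASE SET fetched = 1 WHERE checksum = :checksum;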

The rest is pure logic: since I don't trust the size as a corruption
indicator, and since I think a boolean would detect the partial-fetch case
just as well as the size does, I am not yet convinced that we need to
store the size at all.

(You're welcome to convince me otherwise, of course :))

> >> - refcount INTEGER NOT NULL
> >> + refcount INTEGER NOT NULL
> >
> > This value can be derived from the other tables, can't it? (See? Who
> > said whitespace-only changes were useless?)
>
> I realized this while driving yesterday, yes.
>
> Right now, I only plan to have a TEXT_BASE table, which means I do
> actually need a refcount. When I later add a NODE and NODE_CHANGES
> table, then I can drop the refcount and just use COUNT(*) on the other
> tables. Which means that I also want to add an index on the CHECKSUM
> column in those tables. (or ID if we decide to switch back)
>
> Alternatively, keep the refcount and use SQLite triggers to maintain
> it. But I think a COUNT(*) should be more than fast enough against an
> indexed column.
>

Agreed.
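For the record, roughly what I picture (a sketch: the NODE table and the
index and trigger names are hypothetical, and the trigger assumes the
refcount column from the commit):

    CREATE INDEX I_NODE_CHECKSUM ON NODE (checksum);

    -- derive the refcount on demand instead of storing it:
    SELECT COUNT(*) FROM NODE WHERE checksum = :checksum;

    -- or, if the refcount column stays, the trigger alternative
    -- (a matching AFTER DELETE trigger would decrement it):
    CREATE TRIGGER T_NODE_CHECKSUM_INC AFTER INSERT ON NODE
    BEGIN
      UPDATE TEXT_BASE SET refcount = refcount + 1
      WHERE checksum = NEW.checksum;
    END;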

> >> svn_error_t *
> >> svn_wc__db_pristine_dirhandle(svn_wc__db_pdh_t **pdh,
> >> svn_wc__db_t *db,
> >> const char *dirpath,
> >> apr_pool_t *result_pool,
> >> apr_pool_t *scratch_pool)
> >> {
> >> /* ### need to fix this up. we'll probably get called with a subdirectory
> >> ### of some dirpath that we opened originally. that means we probably
> >> ### won't have the subdir in the hash table. need to be able to
> >> ### incrementally grow the hash of per-dir structures. */
> >>
> >> *pdh = apr_hash_get(db->dir_data, dirpath, APR_HASH_KEY_STRING);
> >>
> >> return SVN_NO_ERROR;
> >> }
> >
> > The metadata location (per-dir, wc root, etc.) is typically per-wc, not
> > per-versioned-dir. As the code is, we'll just set a key in the hash for
> > every single directory in the wc, recursively. But we could store the
> > information (whether the pristine dir is stored in-place or at the wc
> > root) just for the wc root and let deeper dirs inherit its setting.
>
> Whenever an operation hits a subdirectory, we don't want to scan
> upwards to find the wc-root. We want to just *go* to where the data
> is. Thus, we have pdh.base_dir.
>

I thought we wouldn't need to scan because each dir would know which wc
it is part of. (If we're going to put metadata at the wc-root by default,
having a cheap way to find it seems natural.) And the scan would be in
the hash, not on disk (except for the first scan). (Lastly, isn't
there a recursion a few layers above us that we could take advantage of?)

> I also didn't envision proactively setting subdir values, but lazily
> filling them in. Today, the code will create a new pdh structure since
> we're doing per-versioned-dir base_dir values.

Lazy filling or not, by the end of the checkout you'll have a hash key
for every versioned dir in the wc just checked out... right? Couldn't
memory usage, etc., become significant?

> In the future, yes, they could all share the same pdh.
>

*nod* Good point.

> And note that we can't just have db.wc_root ... any given operation
> could cover *multiple* wc roots (think of svn:externals or switched
> subdirs).
>

open_many() allows that too, no?

> > Also, for the values, why use strings? An enum should cover the 2-3
> > most common cases ("in ./.svn", "in wc root" (we know where it is),
> > "in ~/.subversion").
>
> Given a directory, you don't know where the wc-root is, without
> looking for it. The ~/.subversion/ or /var/svn options are based on
> the config data, so you'd have to crawl through there if your enum
> said HOME or GIVEN_PATH. Rather than crawl each time, again: just
> drop in the result.
>

See above. I assumed there ought to be an easy way to avoid reading the
environment (config data, wc root, etc.) more than once. (I didn't try to
think through how that 'easy way' might work; I simply assumed its existence.)

> Note that if the result is HOME or GIVEN_PATH, then all the hash
> values would point to the same PDH structure. Well... at least for
> a given WC. I envision an eventual possibility of different configs
> for different WCs. For example, a WC on an NFS mount might be WC_ROOT,
> while your personal WCs are all GIVEN_PATH.
>

The part that surprises me is "eventual". Are you saying that, by default,
we won't be able to operate on metadata-at-.svn-dirs wc's and
metadata-at-wc-root wc's in the same operation ('svn up wc1 wc2'), or in
subsequent operations under the same environment ('svn up wc1; svn up wc2')?

> >> svn_error_t *
> >> svn_wc__db_pristine_decref(int *new_refcount,
> >> svn_wc__db_pdh_t *pdh,
> >> svn_checksum_t *checksum,
> >> apr_pool_t *scratch_pool)
> >> {
> >> return svn_error__malfunction(__FILE__, __LINE__, "Not implemented.");
> >> }
> >
> > Interesting spelling of the error handling.
>
> hehe. Figured I'd go there rather than abort() :-)
>
> Note that these functions will probably just go away. It looks like we
> don't have to do separate reference count management.
>

Noted, but irrelevant (the same spelling of abort() is used in other
functions).

> Cheers,
> -g
>

Daniel
(who just learnt that not only Vim, but Ctags as well, must be manually
taught that *.sql3 is an SQL file.)
