On Mon, Dec 10, 2018 at 9:10 PM Tom Browder <tom.browder_at_gmail.com> wrote:
>
> On Mon, Dec 10, 2018 at 19:45 Nico Kadel-Garcia <nkadel_at_gmail.com> wrote:
>>
>> On Mon, Dec 10, 2018 at 5:56 AM Tom Browder <tom.browder_at_gmail.com> wrote:
>> >
>> > On Mon, Dec 10, 2018 at 12:10 AM Nico Kadel-Garcia <nkadel_at_gmail.com> wrote:
>> > > On Sun, Dec 9, 2018 at 6:31 PM Tom Browder <tom.browder_at_gmail.com> wrote:
>> > ...
>> > > > Given that history will be lost, does anyone see any problems with my recovery plan?
>> > ...
>> > > If you have working copies and you don't care about history, why are
>> > > you spending any cycles on doing anything with hotcopy? You've lost
>> > > the history anyway; why keep any of it?
>> >
>> > Cycles aren't important, but the size of the data is. Transferring the
>> > working copy from scratch would take a LONG time, while the bulk of
>> > the data is already there in the hotcopy.
>>
>> Under what possible conditions would importing a single snapshot of
>> the current working copy, without history, take more time than working
>> from a hotcopy to overlay the changes on top of that hotcopy?
>
>
> I don't know, Nico; I'm a real novice at this. Your first answer didn't help because I didn't know the ramifications of what I was trying to do.
>
> The original data, from just six months ago, was about 27 GB, which took a very long time to upload from my home computer to my remote server. Since the only hotcopy, made shortly after the repo was loaded, there has been very little change, so if I could start with the hotcopy and somehow sync my working copy without pushing 27 GB again, life would be better.
??? An import of a copy of the working data has no history. Is the
*data* itself, with no .svn content, really 27 GB? What in the devil
are you putting in source control?
I'm not objecting to your situation, just really confused by the
content you are dealing with.
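
For what it's worth, if the hotcopy verifies cleanly, the recovery you
describe is straightforward: put the hotcopy back in place as the live
repository, check out a fresh working copy from it, copy your current
files over that checkout, and commit the difference. A rough sketch
(every path and the repository URL below are placeholders; adjust them
for your setup):

    # On the server: verify the hotcopy, then restore it as the live repo.
    svnadmin verify /backups/repo-hotcopy
    cp -a /backups/repo-hotcopy /var/svn/repo

    # On your machine: check out a fresh working copy.
    svn checkout svn://server.example.com/repo wc-new

    # Overlay your current files, skipping the old .svn metadata.
    rsync -a --exclude=.svn /path/to/old-working-copy/ wc-new/

    # Commit only the delta. Files marked '?' by 'svn status' are new
    # since the hotcopy and need an 'svn add' first.
    cd wc-new
    svn status
    svn commit -m "changes made since the hotcopy"

Only the changed files cross the wire on that commit, so nothing
remotely close to 27 GB should have to be uploaded again.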