Database size compression

Database size compression

Postby MerleOne on Sun Aug 27, 2017 12:53 pm

Hi,
I have noticed that the folder
C:\Users\All Users\Syncovery\Database
is very big: 9 GB, and three of the databases are over 2 or 3 GB each.
Is there some kind of compression or optimization that could reduce the size?
Thanks
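
For anyone wanting to see which files take up the room, here is a minimal sketch (my own, not a Syncovery tool) that lists the largest files in the database folder. It assumes Python is available and uses C:\ProgramData\Syncovery\Database, which is where C:\Users\All Users usually resolves on recent Windows versions; adjust the path if your installation differs.

```python
# Minimal sketch: list the ten largest files in the Syncovery database folder.
# Path taken from the post above; adjust it if your installation differs.
from pathlib import Path

DB_DIR = Path(r"C:\ProgramData\Syncovery\Database")  # "C:\Users\All Users" usually points here

files = [(f.stat().st_size, f) for f in DB_DIR.rglob("*") if f.is_file()]
for size, f in sorted(files, reverse=True)[:10]:
    print(f"{size / 2**30:7.2f} GB  {f.name}")
```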
MerleOne
 
Posts: 16
Joined: Fri Feb 26, 2010 5:15 am

Re: Database size compression

Postby MerleOne on Mon Aug 28, 2017 7:53 am

Hi again,
I'd like to add one more detail: compressing this folder with the Windows built-in compression tool saves very little space (just a few percent). I was thinking more of some internal cleanup of the database files, if that is possible.
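
As a side note, if you want to check how little NTFS compression actually saves, here is a rough sketch (my own suggestion of a check, Windows-only, not something from this thread) that compares the logical size of the folder with the size it occupies on disk, using the Win32 GetCompressedFileSizeW call:

```python
# Compare logical size vs. on-disk size (after NTFS compression) for the Syncovery
# database folder. Windows-only; path taken from the first post, adjust as needed.
import ctypes
from pathlib import Path

DB_DIR = Path(r"C:\ProgramData\Syncovery\Database")

kernel32 = ctypes.windll.kernel32
kernel32.GetCompressedFileSizeW.restype = ctypes.c_ulong  # returns the low-order DWORD

def on_disk_size(path: Path) -> int:
    """Bytes actually occupied on the volume (reflects NTFS compression)."""
    high = ctypes.c_ulong(0)
    low = kernel32.GetCompressedFileSizeW(str(path), ctypes.byref(high))
    return (high.value << 32) + low

files = [f for f in DB_DIR.rglob("*") if f.is_file()]
logical = sum(f.stat().st_size for f in files)
on_disk = sum(on_disk_size(f) for f in files)
print(f"logical: {logical / 2**30:.2f} GB   on disk: {on_disk / 2**30:.2f} GB")
```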
MerleOne
 
Posts: 16
Joined: Fri Feb 26, 2010 5:15 am

Re: Database size compression

Postby superflexible on Tue Aug 29, 2017 6:18 am

Hello,
The latest Syncovery version uses internal database compression. To benefit from it, you need to delete the databases; Syncovery will recreate them and they will be smaller.

In addition, cloud-related JSON cache files are automatically zipped by the latest Syncovery version, so you do not need to delete those.

In some cases, however, you need to be aware of the possible implications of deleting a database. They depend on which databases are involved and which Syncovery features use them.

Deleting a cloud cache database can cause lengthy scanning of the folders the next time the job is run.

Deleting a SmartTracking database will cause deleted files not to be recognized the next time the job is run.

Deleting a Glacier cache database causes a new inventory to be requested, which takes around 5 hours to be generated by the servers.

Deleting a database used for Partial File Updating / Block Level Copying / Synthetic Backup will cause the files to be copied completely next time, rather than just changed blocks.
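
If you want a safety net when trying this, below is a minimal sketch (my own suggestion, not an official Syncovery tool) that moves the database files into a timestamped backup folder instead of deleting them, so they can be put back if the consequences listed above turn out to be unacceptable. Run it only while Syncovery and its scheduler/service are stopped.

```python
# Move Syncovery database files into a timestamped sibling folder instead of deleting
# them outright; Syncovery will recreate the databases it needs on the next run.
# Run only while Syncovery (and its scheduler/service) is not running.
import shutil
import time
from pathlib import Path

DB_DIR = Path(r"C:\ProgramData\Syncovery\Database")   # adjust if your install differs
BACKUP_DIR = DB_DIR.parent / f"Database-backup-{time.strftime('%Y%m%d-%H%M%S')}"
BACKUP_DIR.mkdir(parents=True, exist_ok=True)

for f in DB_DIR.iterdir():
    if f.is_file():
        print(f"moving {f.name}")
        shutil.move(str(f), str(BACKUP_DIR / f.name))
```

Once the recreated databases have proven themselves (keeping in mind the re-scanning and re-copying caveats above), the backup folder can simply be deleted.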
superflexible
Site Admin
 
Posts: 2478
Joined: Thu Dec 31, 2009 3:08 pm

Re: Database size compression

Postby MerleOne on Tue Aug 29, 2017 3:18 pm

superflexible wrote: The latest Syncovery version uses internal database compression. To benefit from it, you need to delete the databases; Syncovery will recreate them and they will be smaller. [...]


In my case these are SmartTracking databases, so I guess I'll pass for now, or at least not try it without a backup... Many thanks for this detailed reply!
MerleOne
 
Posts: 16
Joined: Fri Feb 26, 2010 5:15 am

Re: Database size compression

Postby superflexible on Wed Aug 30, 2017 1:35 am

Hi,
SmartTracking databases are actually the ones I'd delete first. For just one sync run afterwards, Syncovery can't recognize deleted files and will re-copy them from the other side. So if you pick a moment when deleted files are not so important, it should be fine.
superflexible
Site Admin
 
Posts: 2478
Joined: Thu Dec 31, 2009 3:08 pm

Re: Database size compression

Postby MerleOne on Wed Aug 30, 2017 2:25 am

Actually, I don't think I have ever deleted files from the destination. The only thing I want to avoid is that files deleted on the source also get deleted in the destination after I delete the database. I have 3 jobs with huge databases (over 3 GB); I can always try with one of them, a non-critical one, and see what happens. Thanks!
MerleOne
 
Posts: 16
Joined: Fri Feb 26, 2010 5:15 am

Re: Database size compression

Postby superflexible on Wed Aug 30, 2017 4:31 am

To avoid files being deleted from the destination, you can just use the Standard copying mode, which does not use any database at all.
superflexible
Site Admin
 
Posts: 2478
Joined: Thu Dec 31, 2009 3:08 pm

Re: Database size compression

Postby MerleOne on Wed Aug 30, 2017 5:03 am

superflexible wrote: To avoid files being deleted from the destination, you can just use the Standard copying mode, which does not use any database at all.


I guess it's some kind of reflex I've kept since I used SFFS (the former name)...

Will consider that, thanks.

Regards,

.merle1.
MerleOne
 
Posts: 16
Joined: Fri Feb 26, 2010 5:15 am

Re: Database size compression

Postby MerleOne on Wed Sep 06, 2017 10:18 am

Hi again,
Today I checked the various jobs and saw that one of them was already in standard sync mode, not SmartTracking. So I deleted its database and relaunched the job. Then I saw that CRCs were being recalculated and realized that "Partial Copy" mode was on, which is not really necessary in that situation. I think that is also what caused the databases to be so big. Now all jobs are in standard mode with Partial Copy off, and no database is created. I recovered more than 10% of my system SSD.
Great!
MerleOne
 
Posts: 16
Joined: Fri Feb 26, 2010 5:15 am

