# Synology DiskStation Manager

## Rsync
Requirements:
- the `rsync` service is enabled under Control Panel > File Services > rsync (a quick connectivity check follows this list)
- the user has the right permissions for the shared folder under either:
  - Control Panel > Shared Folders > Shared Folder edit window > Permissions, or
  - Control Panel > User & Group > User or Group edit window > Permissions
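Before a full transfer, it can be worth confirming the service answers and the user can read the shared folder; a minimal check, with `user`, `nas`, and the path as placeholders:

```sh
# List the remote shared folder's contents without transferring anything.
rsync --list-only "user@nas:/volume1/shared_folder/"
```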
Examples:
```sh
# From a shared folder on a NAS to a local one.
# --dry-run only previews the changes; drop it to actually sync.
rsync \
  --archive --hard-links \
  --append-verify --partial \
  --progress --verbose \
  --compress --no-motd \
  --exclude "@*" --exclude "#*" \
  "user@nas:/volume1/shared_folder/" \
  "path/to/local/folder/" \
  --delete --dry-run
```
```sh
# Sync all snapshotted data to a folder.
# xargs -t echoes each rsync invocation before running it.
find /volume1/@sharesnap/shared_folder \
  -maxdepth 1 -mindepth 1 \
  -type d \
| xargs -I {} -n 1 -t \
  rsync \
    -AXahvz --chown=user --info=progress2 \
    --append-verify --partial \
    --no-inc-recursive --no-motd \
    {}/ \
    /volume2/destination/folder/
```
## Snapshots
Use the Snapshot Replication package available in the Package Center for better control and automation.
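On btrfs volumes the snapshots are plain subvolumes, so they can also be listed directly from a shell; a sketch assuming the `@sharesnap` layout shown in the mount output below:

```sh
# List btrfs subvolumes on the volume; snapshots live under @syno/@sharesnap.
sudo btrfs subvolume list /volume1 | grep 'sharesnap'
```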
Gotchas:
- when the Make snapshot visible option in a shared folder's settings in Snapshot Replication is ticked:
  - the `#snapshot` folder is created in the shared folder's root directory
  - the default snapshots directory for that shared folder is mounted on it in read-only mode (it can be browsed as shown after this list):

    ```
    /dev/mapper/cachedev_0 on /volume1/Data/#snapshot type btrfs (ro,nodev,relatime,ssd,synoacl,space_cache=v2,auto_reclaim_space,metadata_ratio=50,block_group_cache_tree,subvolid=266,subvol=/@syno/@sharesnap/Data)
    ```
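With the option ticked, the snapshots can be browsed like any other directory; assuming the `Data` shared folder from the mount line above:

```sh
# Each snapshot appears as its own subdirectory under #snapshot.
ls "/volume1/Data/#snapshot"
```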
## Deduplicate blocks in a volume
Requirements:
- `docker` needs to be installed from the Package Center, as it is simpler to run a container than to install `duperemove` and all its dependencies on the machine (the image used below can be pulled in advance, as shown after this list)
- the container needs to be run in privileged mode (`--privileged`) due to its actions on the disk
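Pulling the image beforehand keeps the download output separate from the first run:

```sh
# Fetch the duperemove container image used in the examples below.
sudo docker pull michelecereda/duperemove:0.11.2
```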
Gotchas:
- the container might fail on very large datasets, usually due to Out Of Memory (OOM) issues; to avoid this:
  - offload the hashes from RAM using a hash file (`--hashfile "/volume1/NetBackup/duperemove.tmp"`)
  - use smaller datasets where possible, like a shared folder and just one of its snapshots instead of all of them
- `duperemove` can dedupe blocks only if acting on folders in a rw mount; when deduplicating snapshots, use their rw mount path `/@syno/@sharesnap/shared_folder` instead of their ro version `/volumeN/shared_folder/#snapshot` (the check after this list shows how to tell the two apart)
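A minimal way to tell the rw and ro mounts apart is to grep the mount table for the snapshot paths and look at the `ro`/`rw` flags:

```sh
# Show the snapshot-related mounts together with their ro/rw mount flags.
mount | grep -e '#snapshot' -e '@sharesnap'
```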
Examples:
```sh
# Small/medium dataset:
# 2 folders in a shared folder.
# -Adhr: open files read-only when deduping (-A), dedupe (-d),
# human-readable sizes (-h), recurse (-r).
sudo docker run --privileged \
  --rm --name duperemove \
  --mount "type=bind,source=/volume1/Data,target=/sharedfolder" \
  michelecereda/duperemove:0.11.2 \
  -Adhr \
  "/sharedfolder/folder1" "/sharedfolder/folder2"
```
```sh
# Large dataset:
# 1 shared folder and all its snapshots.
sudo docker run --privileged \
  --rm --name duperemove \
  --mount "type=bind,source=/volume1,target=/volume1" \
  michelecereda/duperemove:0.11.2 \
  -Adhr \
  --hashfile "/volume1/NetBackup/duperemove.tmp" \
  "/volume1/Data" "/volume1/@sharesnap/Data"
```