Synology DiskStation Manager
Table of contents
- System's shared folders
- Rsync
- Snapshots
- Encrypt data on a USB disk
- Data deduplication
- Use keybase
- Ask for a feature to be implemented
- Further readings
- Sources
System's shared folders
Automatically created by services or packages.
Cannot be changed/removed manually if the package creating them is still active or installed.
/volume1
├── docker # data container for the Docker service, created by it upon installation
├── homes # all users' home directories, created by the SSH service upon activation
├── music # created by the Media Server package upon installation
├── NetBackup # created by the rsync service upon activation
├── photo # created by the Media Server package upon installation
└── video # created by the Media Server package upon installation
USB disks are recognized as shared folders automatically and mounted under /volumeUSBX:
/volumeUSB1
└── whatever
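To confirm which /volumeUSBX path a disk was given and which filesystem it was mounted with, one can check the mount table from an SSH session (a minimal sketch, assuming SSH access to the NAS):
# Show all USB shares currently mounted by DSM.
mount | grep 'volumeUSB'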
Rsync
Requirements:
- the rsync service is enabled under Control Panel > File Services > rsync
- the user has the right permissions for the shared folder under either
  - Control Panel > Shared Folders > Shared Folder edit window > Permissions, or
  - Control Panel > User & Group > User or Group edit window > Permissions
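A quick way to verify both the SSH connection and the folder's permissions before a full sync is to only list the remote contents (a sketch; the hostname, SSH port and shared folder are placeholders matching the examples below):
# List the remote shared folder's contents without transferring anything.
rsync --list-only --no-motd --rsh='ssh -p12345' \
"user@nas:/volume1/shared_folder/"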
Examples:
# From a shared folder on a NAS to a local one.
# Use the SSH port defined in the NAS settings.
rsync \
"user@nas:/volume1/shared_folder/" \
"path/to/local/folder/" \
--archive --copy-links --protect-args \
--acls --xattrs --fake-super \
--partial --append-verify --sparse \
--progress -vv --no-inc-recursive \
--compress --no-motd --rsh='ssh -p12345' \
--exclude "@eaDir" --exclude "#recycle" \
--delete --dry-run
# Sync all snapshotted data to a folder.
find /volume1/@sharesnap/shared_folder \
-maxdepth 1 -mindepth 1 \
-type d \
| xargs -I {} -n 1 -t \
rsync \
-AXahvz --chown=user --info=progress2 \
--append-verify --partial \
--no-inc-recursive --no-motd \
{}/ \
/volume2/destination/folder/
Snapshots
Use the Snapshot Replication package available in the Package Center for better control and automation.
Gotchas:
- when the Make snapshot visible option in a shared folder's settings in Snapshot Replication is ticked:
  - the `#snapshot` folder is created in the shared folder's root directory
  - the default snapshots directory for that shared folder is mounted on it in read-only mode:
    /dev/mapper/cachedev_0 on /volume1/Data/#snapshot type btrfs (ro,nodev,relatime,ssd,synoacl,space_cache=v2,auto_reclaim_space,metadata_ratio=50,block_group_cache_tree,subvolid=266,subvol=/@syno/@sharesnap/Data)
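The snapshots can also be inspected from an SSH session; a minimal sketch, assuming the shared folder is `Data` on `/volume1` and that the `btrfs` CLI shipped with DSM is available:
# List the read-only view exposed inside the shared folder (when visible).
ls "/volume1/Data/#snapshot"

# List the btrfs subvolumes backing the snapshots.
sudo btrfs subvolume list /volume1 | grep '@sharesnap'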
Encrypt data on a USB disk
Synology DSM does not ship utilities like cryptsetup or TrueCrypt, and creating a Docker container just for this is a bit much for me at the moment. It does, however, include ecryptfs.
The solution below, found on Reddit, uses that included ecryptfs. It has downsides (ecryptfs' known vulnerabilities, the fact that terminal commands are logged in /var/log/bash_history.log so the passphrase would be visible there, etc.), but it is the same mechanism DSM uses internally for encrypted shared folders.
Implementation:
- Create a shared folder called "crypt" [on your normal Synology Diskstation volume]
- Plug in the USB drive if you haven't already
- Log into DSM manager
- Go to network services, and select terminal
- Enable Telnet service. (If you have been manually changing the firewall, make sure you've unblocked port 23)
- Telnet into the Synology box - logging in as root
- Type this command to create the directory on your USB drive: "mkdir /volumeUSB1/usbshare/@crypt@"
- Update the blahblahblah password below and type it into your telnet session (note: it should all be on one line): "mount.ecryptfs /volumeUSB1/usbshare/@crypt@ /volume1/crypt -o key=passphrase:passphrase_passwd=blahblahblah,ecryptfs_cipher=aes,ecryptfs_key_bytes=32,ecryptfs_passthrough=n,no_sig_cache,ecryptfs_enable_filename_crypto=y"
- Any data you copy into "crypt" above will now be encrypted and saved in "usbshare1/@crypt@". To check, create a new folder in the folder "crypt" and have a look at how it appears encrypted when you look into "usbshare1/@crypt@" from DSM manager.
- From here - set up any backup jobs you wish to copy into the "crypt" shared folder you created.
- When you are ready to eject the drive make sure you unmount it first by typing into your telnet session "umount /volumeUSB1/usbshare/@crypt@" and then eject it in the normal way from DSM.
- Disable the telnet service if you are no longer using it
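Condensed, the terminal part of the procedure boils down to the following (a sketch only; paths follow the steps above and blahblahblah is a placeholder passphrase):
# Create the encrypted data directory on the USB drive.
mkdir /volumeUSB1/usbshare/@crypt@

# Mount it onto the 'crypt' shared folder through ecryptfs (single line).
mount.ecryptfs /volumeUSB1/usbshare/@crypt@ /volume1/crypt -o key=passphrase:passphrase_passwd=blahblahblah,ecryptfs_cipher=aes,ecryptfs_key_bytes=32,ecryptfs_passthrough=n,no_sig_cache,ecryptfs_enable_filename_crypto=y

# Unmount before ejecting the USB drive from DSM.
umount /volumeUSB1/usbshare/@crypt@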
Data deduplication
Requirements:
- `docker` needs to be installed from the package manager, as it is simpler (and safer?) to run a container than installing `duperemove` or `jdupes` and all their dependencies on the machine
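A quick check that the Docker engine is installed and running before launching the containers below (`sudo` may not be needed, depending on the user's privileges):
# Prints client and server versions; fails if the daemon is not running.
sudo docker version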
Remove duplicated files with jdupes
Examples:
# `sudo` is only needed if the user has no privileges to run `docker` commands.
sudo docker run \
-it --init --rm --name jdupes \
-v "/volume/shared_folder1:/data1" \
-v "/volume/shared_folder2:/data2" \
ghcr.io/jbruchon/jdupes:latest \
-drOZ \
-X 'nostr:@eaDir' -X 'nostr:#recycle' \
"/data1" "/data2"
Deduplicate blocks in a volume with duperemove
Gotchas:
- `duperemove`'s container needs to be run in privileged mode (`--privileged`) due to it taking actions on the disk
- the container might fail on very large datasets, usually due to Out Of Memory (OOM) issues; to avoid this:
  - offload the hashes from RAM using a hash file (`--hashfile "/volume1/NetBackup/duperemove.tmp"`)
  - use smaller datasets where possible, like a shared folder and just one of its snapshots instead of all of them
- `duperemove` can dedupe blocks only if acting on folders in a rw mount; when deduplicating snapshots, use their rw mount path `/@syno/@sharesnap/shared_folder` instead of their ro version `/volumeN/shared_folder/#snapshot`
Examples:
# small/medium dataset
# 2 folders in a shared folder
sudo docker run --privileged \
--rm --name duperemove \
--mount "type=bind,source=/volume1/Data,target=/sharedfolder" \
michelecereda/duperemove:0.11.2 \
-Adhr \
"/sharedfolder/folder1" "/sharedfolder/folder2"
# large dataset
# 1 shared folder and all its snapshots
sudo docker run --privileged \
--rm --name duperemove \
--mount "type=bind,source=/volume1,target=/volume1" \
michelecereda/duperemove:0.11.2 \
-Adhr \
--hashfile "/volume1/NetBackup/duperemove.tmp" \
"/volume1/Data" "/volume1/@sharesnap/Data"
Use keybase
Just use a containerized service and execute commands with it:
# Run the service.
docker run -d --name 'keybase' \
-e KEYBASE_SERVICE='1' \
-e KEYBASE_USERNAME='user' \
-e KEYBASE_PAPERKEY='paper key' \
'keybaseio/client:stable'
# Execute commands using the containerized service.
docker exec \
--user 'keybase' \
keybase \
keybase whoami
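To check that the service came up and logged in correctly with the paper key, its status can be queried the same way (a sketch; container and user names as above):
# Show the login and device status of the containerized service.
docker exec \
--user 'keybase' \
keybase \
keybase status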
Manage git repositories with a containerized keybase instance
See the readme for michelecereda/keybaseio-client.
Ask for a feature to be implemented
Use the online feature request form. Posting a request on the community site will not work.
Further readings
Sources
All the references in the further readings section, plus the following: