mirror of
https://gitea.com/mcereda/oam.git
synced 2026-02-09 05:44:23 +00:00
fix: markdownlint suggestions
This commit is contained in:
.vscode/settings.json (vendored): 1 line changed
@@ -82,6 +82,7 @@
     "macports",
     "makepkg",
     "markdownlint",
     "mdlrc",
     "mktemp",
     "mpiexec",
     "netcat",
@@ -40,7 +40,6 @@ op list items

 - [CLI guide]

 ## Sources

 All the references in the [further readings] section, plus the following:
@@ -3,12 +3,12 @@

 ## Table of contents <!-- omit in toc -->

 1. [Troubleshooting](#troubleshooting)
-1. [While loading, a completion fails with error `No such file or directory`.](#while-loading-a-completion-fails-with-error-no-such-file-or-directory)
+1. [While loading, a completion fails with error `No such file or directory`](#while-loading-a-completion-fails-with-error-no-such-file-or-directory)
 1. [Further readings](#further-readings)

 ## Troubleshooting

-### While loading, a completion fails with error `No such file or directory`.
+### While loading, a completion fails with error `No such file or directory`

 Example:
@@ -347,8 +347,8 @@ az rest -m 'put' \
   --url-parameters 'api-version=7.1-preview.1' \
   --headers Authorization='Bearer ey…pw' Content-Type='application/json' \
   -b '{
-  "authorizationId": "01234567-abcd-0987-fedc-0123456789ab",
-  "validTo": "2021-12-31T23:46:23.319Z"
+  "authorizationId": "01234567-abcd-0987-fedc-0123456789ab",
+  "validTo": "2021-12-31T23:46:23.319Z"
   }'
 az rest … -b @'file.json'
@@ -457,8 +457,8 @@ az rest \
   'Authorization=Bearer ey…pw' \
   'Content-Type=application/json' \
   -b '{
-  "authorizationId": "01234567-abcd-0987-fedc-0123456789ab",
-  "validTo": "2021-12-31T23:46:23.319Z"
+  "authorizationId": "01234567-abcd-0987-fedc-0123456789ab",
+  "validTo": "2021-12-31T23:46:23.319Z"
   }'

 az rest \
@@ -35,6 +35,7 @@ Just follow this procedure:

 1. configure SSH on all the hosts to let the **server** node connect to all the **client** nodes **without** using a password
 1. install [MPICH] on all the hosts, possibly the same version
 1. test the installation:

    ```sh
    # execute `hostname` on all hosts
    mpiexec -f 'machines_file' -n 'number_of_processes' 'hostname'
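A minimal sketch of the test above, assuming three hosts with the hypothetical names `node0`..`node2` are reachable from the server over passwordless SSH:

```sh
# Hypothetical machines file: one MPICH host per line.
cat > 'machines_file' <<'EOF'
node0
node1
node2
EOF

# Run `hostname` once per process; each output line names the host it ran on.
# Requires MPICH installed and passwordless SSH to every listed node.
mpiexec -f 'machines_file' -n 3 hostname
```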
@@ -47,9 +48,11 @@ See the [Vagrant example].

 - [Protogonus: The FINAL Labs™ HPC Cluster]
 - [A simple Beowulf cluster]
 - Building a Beowulf cluster from old MacBooks:
-  - [part 1][building a beowulf cluster from old macbooks - part 1]
-  - [part 2][building a beowulf cluster from old macbooks - part 2]
-- [Parallel computing with custom Beowulf cluster]
+  - [part 1][building a beowulf cluster from old macbooks - part 1]
+  - [part 2][building a beowulf cluster from old macbooks - part 2]
+- [Parallel computing with custom Beowulf cluster]
 - [Engineering a Beowulf-style compute cluster]
 - [Parallel and distributed computing with Raspberry Pi clusters]
 - [Sequence analysis on a 216-processor Beowulf cluster]
@@ -164,7 +164,6 @@ In `cc_config.xml`:

 [radeon™ software for linux® installation]: https://amdgpu-install.readthedocs.io/en/latest/
 [website]: https://boinc.berkeley.edu/

 <!-- In-article sections -->
 [boinccmd]: boinccmd.md
@@ -83,8 +83,8 @@ This is due to the LUKS2 format using by default the Argon2i key derivation function

 The solution is simple; either:

 1. switch to LUKS1, or
-2. use LUKS2, but switch to PBKDF2 (the one used in LUKS1); just add the `--pbkdf pbkdf2` option to luksFormat or to any command that creates keyslots, or
-3. use LUKS2 but limit the memory assigned to Argon2i function; for example, to use up to 256kB just add the `--pbkdf-memory 256` option to the command as follows:
+1. use LUKS2, but switch to PBKDF2 (the one used in LUKS1); just add the `--pbkdf pbkdf2` option to luksFormat or to any command that creates keyslots, or
+1. use LUKS2 but limit the memory assigned to Argon2i function; for example, to use up to 256kB just add the `--pbkdf-memory 256` option to the command as follows:

    ```sh
    $ sudo cryptsetup luksOpen --pbkdf-memory 256 /dev/sdb1 lacie
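The PBKDF2 alternative mentioned above can be sketched as follows; the device path is an example:

```sh
# Format with LUKS2 but use PBKDF2 instead of Argon2i (useful on low-memory systems).
sudo cryptsetup luksFormat --type luks2 --pbkdf pbkdf2 /dev/sdb1

# The same option applies to any command that creates keyslots.
sudo cryptsetup luksAddKey --pbkdf pbkdf2 /dev/sdb1
```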
@@ -102,25 +102,25 @@ The process describes a completely fresh installation with complete repartitioni

 - `sudo dd if=/dev/zero of=/dev/mapper/cryptdrive bs=16M` <-- optional, this is to ensure nothing can be recovered from before this install you're doing. Took 2h on my 652 GiB partition.
 1. Create LVM physical volume, a volume group & logical volumes:
    - Volumes are sized as follows (example, you should create as many partitions as you need):
-     - OS drive: `60GB`
-     - Swap: `16GB`
-     - Home: `rest`
+     - OS drive: `60GB`
+     - Swap: `16GB`
+     - Home: `rest`
    - Commands (add extra lvcreate steps if you have more partitions):
-     - `sudo pvcreate /dev/mapper/cryptdrive`
-     - `sudo vgcreate vglinux /dev/mapper/cryptdrive`
-     - `sudo lvcreate -n root -L 60g vglinux`
-     - `sudo lvcreate -n swap -L 16g vglinux`
-     - `sudo lvcreate -n home -l 100%FREE vglinux`
+     - `sudo pvcreate /dev/mapper/cryptdrive`
+     - `sudo vgcreate vglinux /dev/mapper/cryptdrive`
+     - `sudo lvcreate -n root -L 60g vglinux`
+     - `sudo lvcreate -n swap -L 16g vglinux`
+     - `sudo lvcreate -n home -l 100%FREE vglinux`
 1. Start the installation process using GUI:
    - Connect to WiFi network
    - When asked what to do with the disk, pick the option that allows you to manually repartition stuff (IIRC it was labelled `Something else` on 19.04 installer):
-     - Pick `/dev/mapper/vglinux-root` as `ext4` FS & mount it to `/`
-     - Pick `/dev/mapper/vglinux-home` as `ext4` FS & mount it to `/home`
-     - Pick `/dev/mapper/vglinux-swap` as `swap`
-     - Do the same as above if you have extra partitions
-     - Pick `/dev/nvme0n1p2` (created on step 2.5.1) as `ext4` FS & mount it to `/boot`
-       - Without doing this, installation will fail when configuring GRUB
-     - Pick "boot drive" (the select list at the bottom, this is where GRUB goes) and assign it to `/dev/nvme0n1p2` or `/dev/nvme0n1`
+     - Pick `/dev/mapper/vglinux-root` as `ext4` FS & mount it to `/`
+     - Pick `/dev/mapper/vglinux-home` as `ext4` FS & mount it to `/home`
+     - Pick `/dev/mapper/vglinux-swap` as `swap`
+     - Do the same as above if you have extra partitions
+     - Pick `/dev/nvme0n1p2` (created on step 2.5.1) as `ext4` FS & mount it to `/boot`
+       - Without doing this, installation will fail when configuring GRUB
+     - Pick "boot drive" (the select list at the bottom, this is where GRUB goes) and assign it to `/dev/nvme0n1p2` or `/dev/nvme0n1`
    - Proceed with the installation
 1. After GUI installation completes, stay within the Live USB environment
 1. Check the UUID of the LUKS drive:
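The UUID check in that last step can be done with `blkid`; the partition path is an example:

```sh
# Print only the UUID of the LUKS partition, e.g. for use in /etc/crypttab.
sudo blkid -s UUID -o value /dev/nvme0n1p3
```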
@@ -45,6 +45,7 @@ sudo zpool import -a

 ## Use DNF from behind a proxy

 Either:

 - add the line `sslverify=0` to `/etc/dnf/dnf.conf`; **not suggested**, but a quick fix
 - add the proxy's certificate, in PEM format, to the `/etc/pki/ca-trust/source/anchors/` folder and then run `sudo update-ca-trust`.
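The second option above can be sketched as follows; the certificate file name is an example:

```sh
# Trust the proxy's CA certificate system-wide (must be PEM-encoded).
sudo cp 'proxy-ca.pem' /etc/pki/ca-trust/source/anchors/
sudo update-ca-trust
```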
@@ -3,6 +3,7 @@

 Firewalld is a dynamically managed firewall with support for network/firewall zones that define the trust level of network connections or interfaces. It has support for IPv4, IPv6, firewall settings, ethernet bridges and IP sets. It also offers separation of runtime and permanent configuration options.

 It is the default firewall management tool for:

 - RHEL and CentOS 7 and newer
 - Fedora 18 and newer
 - (Open)SUSE 15 and newer
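For context, firewalld is driven with `firewall-cmd`; a few common invocations follow (zone and service names are examples):

```sh
# Show the active configuration of the default zone.
sudo firewall-cmd --list-all

# Allow a service at runtime, then persist the runtime changes.
sudo firewall-cmd --zone=public --add-service=https
sudo firewall-cmd --runtime-to-permanent
```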
@@ -18,6 +18,7 @@ kubectl exec 'pod-name' -- cat '/proc/1/environ'

 # This only works if the onboard `ps` is **not** the one from Busybox.
 ps e -p "$PID"
 ```
 ## Further readings

 - [Kubernetes]

@@ -15,7 +15,7 @@

 1. [Branches](#branches)
 1. [Checkout an existing remote branch](#checkout-an-existing-remote-branch)
 1. [Delete a branch](#delete-a-branch)
-1. [Delete branches which have been merged or are otherwise absent from a remote.](#delete-branches-which-have-been-merged-or-are-otherwise-absent-from-a-remote)
+1. [Delete branches which have been merged or are otherwise absent from a remote](#delete-branches-which-have-been-merged-or-are-otherwise-absent-from-a-remote)
 1. [Merge the master branch into a feature branch](#merge-the-master-branch-into-a-feature-branch)
 1. [Rebase a branch on top of another](#rebase-a-branch-on-top-of-another)
 1. [Tags](#tags)
@@ -647,7 +647,7 @@ git push 'remote' --delete 'feat-branch'
 git branch --delete --remotes 'feat-branch'
 ```

-### Delete branches which have been merged or are otherwise absent from a remote.
+### Delete branches which have been merged or are otherwise absent from a remote

 Command source [here][prune local branches that do not exist on remote anymore].
@@ -5,4 +5,4 @@

 This helps reloading extensions.

 1. press `Alt + F2`
-2. insert `r` and press `Enter`
+1. insert `r` and press `Enter`
@@ -38,7 +38,7 @@ gpg --expert --full-generate-key

 # The non-interactive (--batch) option requires a settings file.
 gpg --generate-key --batch 'setting.txt'
 gpg --generate-key --batch <<-EOF
-…
+…
 EOF

 # Delete a key from the keyring.
@@ -13,6 +13,7 @@

 1. open the plugins folder in the terminal; get the path in _Preferences_ > _Plugins_ tab > _Reveal Plugins Folder_ button
 1. use `npm` to install the plugin in that folder:

    ```sh
    npm i --prefix ./ insomnia-plugin-date-add
    ```
@@ -56,8 +56,8 @@ Each knock/event begins with a title marker in the form `[name]`, with it being

 ```ini
 [options]
-UseSyslog
-Interface = enp0s2
+UseSyslog
+Interface = enp0s2

 # Different sequences for opening and closing.
 [openSSH]
@@ -75,12 +75,12 @@ Each knock/event begins with a title marker in the form `[name]`, with it being

 # If a sequence setting contains the `cmd_timeout` statement, the `stop_command`
 # will be automatically issued after that amount of seconds.
 [openClose7777]
-sequence = 2222:udp,3333:tcp,4444:udp
-seq_timeout = 15
-tcpflags = syn
-cmd_timeout = 10
-start_command = /usr/bin/firewall-cmd --add-port=7777/tcp --zone=public
-stop_command = /usr/bin/firewall-cmd --remove-port=7777/tcp --zone=public
+sequence = 2222:udp,3333:tcp,4444:udp
+seq_timeout = 15
+tcpflags = syn
+cmd_timeout = 10
+start_command = /usr/bin/firewall-cmd --add-port=7777/tcp --zone=public
+stop_command = /usr/bin/firewall-cmd --remove-port=7777/tcp --zone=public
 ```

 Sequences can also be defined in files.
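A sequence like `openClose7777` above would be triggered from a client with the `knock` utility; the host name is an example:

```sh
# Send the knock sequence; -v prints each knock as it is sent.
knock -v 'server.lan' 2222:udp 3333:tcp 4444:udp
```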
@@ -233,44 +233,44 @@ When a Pod is created, it is also assigned one of the following QoS classes:

 - _Guaranteed_, when **every** Container in the Pod, including init containers, has:

-  - a memory limit **and** a memory request, **and** they are the same
-  - a CPU limit **and** a CPU request, **and** they are the same
+  - a memory limit **and** a memory request, **and** they are the same
+  - a CPU limit **and** a CPU request, **and** they are the same

-  ```yaml
-  spec:
-    containers:
-      …
-      resources:
-        limits:
-          cpu: 700m
-          memory: 200Mi
-        requests:
-          cpu: 700m
-          memory: 200Mi
-  …
-  status:
-    qosClass: Guaranteed
-  ```
+  ```yaml
+  spec:
+    containers:
+      …
+      resources:
+        limits:
+          cpu: 700m
+          memory: 200Mi
+        requests:
+          cpu: 700m
+          memory: 200Mi
+  …
+  status:
+    qosClass: Guaranteed
+  ```

 - _Burstable_, when

-  - the Pod does not meet the criteria for the _Guaranteed_ QoS class
-  - **at least one** Container in the Pod has a memory **or** CPU request spec
+  - the Pod does not meet the criteria for the _Guaranteed_ QoS class
+  - **at least one** Container in the Pod has a memory **or** CPU request spec

-  ```yaml
-  spec:
-    containers:
-    - name: qos-demo
-      …
-      resources:
-        limits:
-          memory: 200Mi
-        requests:
-          memory: 100Mi
-  …
-  status:
-    qosClass: Burstable
-  ```
+  ```yaml
+  spec:
+    containers:
+    - name: qos-demo
+      …
+      resources:
+        limits:
+          memory: 200Mi
+        requests:
+          memory: 100Mi
+  …
+  status:
+    qosClass: Burstable
+  ```

 - _BestEffort_, when the Pod does not meet the criteria for the other QoS classes (its Containers have **no** memory or CPU limits **nor** requests)
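The assigned class can be read back from a running Pod; the pod name is an example:

```sh
# Print the QoS class Kubernetes assigned to the Pod.
kubectl get pod 'qos-demo' -o jsonpath='{.status.qosClass}'
```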
@@ -314,6 +314,7 @@ Some capabilities are assigned to all Containers by default, while others (the o

 If a Container is _privileged_ (see [Privileged container vs privilege escalation](#privileged-container-vs-privilege-escalation)), it will have access to **all** the capabilities, regardless of which ones are explicitly assigned to it.

 Check:

 - [Linux capabilities], to see what capabilities can be assigned to a process **in a Linux system**;
 - [Runtime privilege and Linux capabilities in Docker containers] for the capabilities available **inside Kubernetes**, and
 - [Container capabilities in Kubernetes] for a handy table associating capabilities in Kubernetes to their Linux variant.
@@ -383,8 +384,8 @@ Each node pool should:

 - have a _meaningful_ **name** (like \<prefix..>-\<randomid>) to make it easy to recognize the workloads running on it or the features of the nodes in it;
 - have a _minimum_ set of _meaningful_ **labels**, like:
-  - cloud provider information;
-  - node information and capabilities;
+  - cloud provider information;
+  - node information and capabilities;
 - sparse nodes on multiple **availability zones**.

 ## Edge computing
@@ -517,7 +518,7 @@ Usage:

 - [Configure a Pod to use a ConfigMap]
 - [Distribute credentials securely using Secrets]
 - [Configure a Security Context for a Pod or a Container]
-- [Set capabilities for a Container]
+- [Set capabilities for a Container]
 - [Using `sysctls` in a Kubernetes Cluster][Using sysctls in a Kubernetes Cluster]

 Concepts:
@@ -8,20 +8,27 @@

 ## TL;DR

 1. Get a shell on a test container.

    ```sh
    kubectl run --generator='run-pod/v1' --image 'alpine' -it --rm \
      --limits 'cpu=200m,memory=512Mi' --requests 'cpu=200m,memory=512Mi' \
      ${USER}-mysql-test -- sh
    ```

 1. Install the utility applications needed for the tests.

    ```sh
    apk --no-cache add 'mysql-client' 'netcat-openbsd'
    ```

 1. Test basic connectivity to the external service.

    ```sh
    nc -vz -w3 '10.0.2.15' '3306'
    ```

 1. Test application connectivity.

    ```sh
    mysql --host '10.0.2.15' --port '3306' --user 'root'
    ```
@@ -411,7 +411,7 @@ The configuration files are loaded as follows:

 kubectl config --kubeconfig 'config.local' view
 ```

-2. If the `$KUBECONFIG` environment variable is set, then it is used as a list of paths following the normal path delimiting rules for your system; the files are merged:
+1. If the `$KUBECONFIG` environment variable is set, then it is used as a list of paths following the normal path delimiting rules for your system; the files are merged:

    ```sh
    export KUBECONFIG="/tmp/config.local:.kube/config.prod"

@@ -419,7 +419,7 @@ The configuration files are loaded as follows:

 When a value is modified, it is modified in the file that defines the stanza; when a value is created, it is created in the first existing file; if no file in the chain exists, then the last file in the list is created with the configuration.

-3. If none of the above happens, `~/.kube/config` is used, and no merging takes place.
+1. If none of the above happens, `~/.kube/config` is used, and no merging takes place.

 The configuration file can be edited, or acted upon from the command line:
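The hunk is truncated at that line; typical command-line interactions with the config file look like the following (context and namespace names are examples):

```sh
# List the known contexts and switch the active one.
kubectl config get-contexts
kubectl config use-context 'prod'

# Change the default namespace of the current context.
kubectl config set-context --current --namespace 'team-a'
```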
@@ -587,7 +587,6 @@ Verbosity | Description

 `--v=8` | Display HTTP request contents.
 `--v=9` | Display HTTP request contents without truncation of contents.

 ## Further readings

 - [Kubernetes]
@@ -4,8 +4,8 @@

 1. [TL;DR](#tldr)
 1. [Troubleshooting](#troubleshooting)
-1. [What happens if I use the _LoadBalancer_ type with Services?](#what-happens-if-i-use-the-loadbalancer-type-with-services)
-1. [Can I use custom certificates?](#can-i-use-custom-certificates)
+1. [What happens if one uses the _LoadBalancer_ type with Services](#what-happens-if-one-uses-the-loadbalancer-type-with-services)
+1. [Use custom certificates](#use-custom-certificates)
 1. [Further readings](#further-readings)
 1. [Sources](#sources)
@@ -66,11 +66,11 @@ minikube delete --all --purge

 ## Troubleshooting

-### What happens if I use the _LoadBalancer_ type with Services?
+### What happens if one uses the _LoadBalancer_ type with Services

 On cloud providers that support load balancers, an external IP address would be provisioned to access the Service; on minikube, the _LoadBalancer_ type makes the Service accessible through the `minikube service` command.

-### Can I use custom certificates?
+### Use custom certificates

 Minikube's certificates are available in the `~/.minikube/certs` folder.
@@ -30,6 +30,5 @@ nmap -O 192.168.0.1

 <!-- Upstream -->
 [os detection]: https://nmap.org/book/man-os-detection.html

 <!-- Others -->
 [cheatsheet]: https://hackertarget.com/nmap-cheatsheet-a-quick-reference-guide/
@@ -64,15 +64,20 @@ Only do this **after** you created another user and [made it an admin][make othe

 From the safest to the least safe option:

 1. Lock the account:

    ```sh
    chage -E0 'admin'
    ```

 1. Remove it from the `openmediavault-admin` group:

    ```sh
    gpasswd -d 'admin' 'openmediavault-admin'
    deluser 'admin' 'openmediavault-admin'
    ```

 1. Delete it completely:

    ```sh
    userdel -r 'admin'
    deluser 'admin'
@@ -113,11 +118,14 @@ To experiment with intermediate values:

 - Find the `/storage/hdparm` xpath.
 - Change the values for the disk.
 - Run this command:

   ```sh
   omv-salt deploy run hdparm
   ```

 - Reboot.
 - Check if APM has been set:

   ```sh
   hdparm -I "/dev/sdX"
   ```
@@ -29,12 +29,12 @@ pacman --database --asdeps autoconf

 # install zsh unsupervised (useful in scripts)
 pacman --noconfirm \
-  --sync --needed --noprogressbar --quiet --refresh \
-  fzf zsh-completions
+  --sync --needed --noprogressbar --quiet --refresh \
+  fzf zsh-completions
 # completely remove virtualbox-guest-utils-nox unsupervised (useful in scripts)
 pacman --noconfirm \
-  --remove --nosave --noprogressbar --quiet --recursive --unneeded \
-  virtualbox-guest-utils-nox
+  --remove --nosave --noprogressbar --quiet --recursive --unneeded \
+  virtualbox-guest-utils-nox
 ```

 ## Further readings
@@ -25,7 +25,7 @@ pre-commit install
 pre-commit autoupdate

 # Skip check on commit.
-SKIP="flake8" git commit -m "foo"
+SKIP="check_id" git commit -m "foo"
 ```

 [Config file example].
@@ -78,13 +78,13 @@ rsync -AHPXazv --append-verify --no-motd --rsh ssh --exclude "#*" --exclude "@*"
 rsync -AHPazv --append-verify --no-motd --exclude "#*" --exclude "@*" 'source/dir/' 'user@synology.lan:/shared/folder/' --delete --dry-run
 rsync -AXaz --append-verify --chown='user' --fake-super --info='progress2' --no-i-r --no-motd --partial -e "ssh -i /home/user/.ssh/id_ed25519 -o UserKnownHostsFile=/home/user/.ssh/known_hosts" 'source/dir/' 'user@synology.lan:/shared/folder/' -n
 rsync 'data/' 'synology.lan:/volume1/data/' \
-  -ALSXabhs --no-i-r \
-  --partial --append-verify \
-  --info='progress2' \
-  --delete --backup-dir "changes_$(date +'%F_%H-%M-%S')" --exclude "changes_*" \
-  --no-motd --fake-super --super \
-  --numeric-ids --usermap='1000:1026' --groupmap='1000:100' \
-  --exclude={'@eaDir','#recycle'}
+  -ALSXabhs --no-i-r \
+  --partial --append-verify \
+  --info='progress2' \
+  --delete --backup-dir "changes_$(date +'%F_%H-%M-%S')" --exclude "changes_*" \
+  --no-motd --fake-super --super \
+  --numeric-ids --usermap='1000:1026' --groupmap='1000:100' \
+  --exclude={'@eaDir','#recycle'}

 # Parallel sync.
 # Each thread must use a different directory.
@@ -88,9 +88,9 @@ Gotchas:

 - the `#snapshot` folder is created in the shared folder's root directory
 - the default snapshots directory for that shared folder is mounted on it in **read only** mode:

-  > ```txt
-  > /dev/mapper/cachedev_0 on /volume1/Data/#snapshot type btrfs (ro,nodev,relatime,ssd,synoacl,space_cache=v2,auto_reclaim_space,metadata_ratio=50,block_group_cache_tree,subvolid=266,subvol=/@syno/@sharesnap/Data)
-  > ```
+  ```txt
+  /dev/mapper/cachedev_0 on /volume1/Data/#snapshot type btrfs (ro,nodev,relatime,ssd,synoacl,space_cache=v2,auto_reclaim_space,metadata_ratio=50,block_group_cache_tree,subvolid=266,subvol=/@syno/@sharesnap/Data)
+  ```

 ## Encrypt data on a USB disk
@@ -56,6 +56,7 @@ ${THUNDERBIRD_BIN_DIR}/thunderbird-bin -P

 1. Close Thunderbird if it is open.
 1. Copy the profile folder to another location:

    ```sh
    cp -a "${THUNDERBIRD_PROFILES_DIR}/we12yhij.default" "/Backup/Thunderbird/we12yhij.default"
    ```
@@ -64,11 +65,14 @@ ${THUNDERBIRD_BIN_DIR}/thunderbird-bin -P

 1. Close Thunderbird if it is open.
 1. If the existing profile folder and the profile backup folder have the same name, replace the existing profile folder with the profile backup folder:

    ```sh
    rm -fr "${THUNDERBIRD_PROFILES_DIR}/we12yhij.default"
    cp -a "/Backup/Thunderbird/we12yhij.default" "${THUNDERBIRD_PROFILES_DIR}/we12yhij.default"
    ```

    > Important: The profile folder names must match exactly for this to work, including the random string of 8 characters.

 1. If the profile folder names do not match, or to move or restore a profile to a different location:
    1. Use the Profile Manager to create a new profile in the desired location, then exit the Profile Manager.
    1. Open the profile's backup folder.
@@ -78,9 +82,11 @@ ${THUNDERBIRD_BIN_DIR}/thunderbird-bin -P
    1. Paste the copied contents into the new profile's folder.<br/>
       Overwrite existing files of the same name.
    1. Open up the `profiles.ini` file in the application data folder in a text editor.

       ```sh
       vim "${THUNDERBIRD_DATA_DIR}/profiles.ini"
       ```

    1. Check the `Path=` line for the profile is correct.
 1. Start Thunderbird.
@@ -127,9 +133,11 @@ Steps to rebuild the Global Database:

 1. Quit Thunderbird.
 1. Delete the `global-messages-db.sqlite` file in the Thunderbird Profile you want to rebuild the index for.

    ```sh
    rm "${THUNDERBIRD_PROFILES_DIR}/we12yhij.default/global-messages-db.sqlite"
    ```

 1. Start Thunderbird.

 The re-indexing process will start automatically.<br/>
@@ -1,15 +1,16 @@

-# TL;DR
+# TL;DR pages

 ## Table of contents <!-- omit in toc -->

-1. [TL;DR](#tldr-1)
+1. [TL;DR](#tldr)
 1. [Further readings](#further-readings)

 ## TL;DR

 ```sh
-pip3 install tldr # official python client
-brew install tealdeer # rust client
-sudo port install tldr-cpp-client # c++ client
+pip3 install 'tldr' # official python client
+brew install 'tealdeer' # rust client
+sudo port install 'tldr-cpp-client' # c++ client
 ```
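Once any of the clients above is installed, usage is largely the same; the command name is an example, and the cache-update flag may differ between clients:

```sh
# Show the community-maintained example page for tar.
tldr tar

# Refresh the local page cache (tealdeer requires this before first use).
tldr --update
```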
 ## Further readings
@@ -436,9 +436,9 @@ uci commit 'dhcp' && reload_config && luci-reload

 Suggestions:

 - [SSH]:
-  - Change the SSH port from the default `22` value.
-  - Restrict login to specific IP addresses.
-  - Restrict authentication options to keys.
+  - Change the SSH port from the default `22` value.
+  - Restrict login to specific IP addresses.
+  - Restrict authentication options to keys.

 ## The SFP+ caged module
@@ -22,7 +22,6 @@ To back up data you need an FAT32 or exFAT-formatted USB drive with at least the

 1. Customize the backup file name and select _Back Up_; this will restart the console and start the backup process;
 1. remove the USB drive once the console has been started up normally again.

 ## Upgrade the HDD

 > This procedure has been tested on a PS4 Pro. Other models have different procedures.
@@ -62,10 +61,10 @@ To back up data you need an FAT32 or exFAT-formatted USB drive with at least the

 1. copy the installation file into the _UPDATE_ folder created before; the file **must** be named _PS4UPDATE.PUP_.

-2. Plug the USB drive containing the file into the PS4;
-3. start the console in _Safe Mode_ by pressing and holding the power button, and releasing it after the second beep;
-4. select Safe Mode's option 7: _Initialize PS4 (Reinstall System Software)_;
-5. confirm at the prompts.
+1. Plug the USB drive containing the file into the PS4;
+1. start the console in _Safe Mode_ by pressing and holding the power button, and releasing it after the second beep;
+1. select Safe Mode's option 7: _Initialize PS4 (Reinstall System Software)_;
+1. confirm at the prompts.

 If the PS4 does not recognize the file, check that the folder names and file name are correct. Enter the folder names and file name using uppercase letters.
@@ -191,25 +191,25 @@ brew install --cask 'openzfs'

 Pool options (`-o option`):

-* `ashift=XX`
-  * XX=9 for 512B sectors, XX=12 for 4KB sectors, XX=16 for 8KB sectors
-  * [reference](http://open-zfs.org/wiki/Performance_tuning#Alignment_Shift_.28ashift.29)
-* `version=28`
-  * compatibility with ZFS on Linux
+- `ashift=XX`
+  - XX=9 for 512B sectors, XX=12 for 4KB sectors, XX=16 for 8KB sectors
+  - [reference](http://open-zfs.org/wiki/Performance_tuning#Alignment_Shift_.28ashift.29)
+- `version=28`
+  - compatibility with ZFS on Linux

 Filesystem options (`-O option`):

-* `atime=off`
-* `compression=on`
-  * activates compression with the default algorithm
-  * pool version 28 cannot use lz4
-* `copies=2`
-  * number of copies of data stored for the dataset
-* `dedup=on`
-  * deduplication
-  * halves write speed
-  * [reference](http://open-zfs.org/wiki/Performance_tuning#Deduplication)
-* `xattr=sa`
+- `atime=off`
+- `compression=on`
+  - activates compression with the default algorithm
+  - pool version 28 cannot use lz4
+- `copies=2`
+  - number of copies of data stored for the dataset
+- `dedup=on`
+  - deduplication
+  - halves write speed
+  - [reference](http://open-zfs.org/wiki/Performance_tuning#Deduplication)
+- `xattr=sa`

 ```sh
 sudo zpool \
@@ -378,13 +378,13 @@ local COMMAND
 local FOLDERS=()
 for (( I = $# ; I >= 0 ; I-- ))
 do
-  if [[ -d "${@[$I]}" ]]
-  then
-    FOLDERS+="${@[$I]}"
-  else
-    COMMAND="${@[1,-$((${#FOLDERS}+1))]}"
-    break
-  fi
+  if [[ -d "${@[$I]}" ]]
+  then
+    FOLDERS+="${@[$I]}"
+  else
+    COMMAND="${@[1,-$((${#FOLDERS}+1))]}"
+    break
+  fi
 done

 # Make entries unique in an array.
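In zsh, the comment above is usually satisfied with the `-U` flag of `typeset`, which keeps only the first occurrence of each element; a sketch with example values:

```sh
# zsh: mark the array as unique; duplicate elements are dropped automatically.
typeset -U FOLDERS
FOLDERS+=('/tmp' '/var' '/tmp')   # FOLDERS ends up as (/tmp /var)
```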
@@ -512,11 +512,11 @@ promptinit; prompt theme-name
 : "${ZSH_MODULES_DIR:-$HOME/.zshrc.d}"
 if [[ -d "$ZSH_MODULES_DIR" ]]
 then
-  for ZSH_MODULE in "$ZSH_MODULES_DIR"/*
-  do
-    [[ -r "$ZSH_MODULE" ]] && source "$ZSH_MODULE"
-  done
-  unset ZSH_MODULE
+  for ZSH_MODULE in "$ZSH_MODULES_DIR"/*
+  do
+    [[ -r "$ZSH_MODULE" ]] && source "$ZSH_MODULE"
+  done
+  unset ZSH_MODULE
 fi
 ```