Mirror of https://gitea.com/mcereda/oam.git (synced 2026-02-09 05:44:23 +00:00)

chore: replaced markdown's double spaced line break for html break
@@ -1,6 +1,6 @@
 # Create an admission webhook

-The example below will create a webhook which acts as both a `ValidatingAdmissionWebhook` and a `MutatingAdmissionWebhook`, but a real world one can act as only one of them. Or more. Your choice.
+The example below will create a webhook which acts as both a `ValidatingAdmissionWebhook` and a `MutatingAdmissionWebhook`, but a real world one can act as only one of them. Or more. Your choice.<br/>
 The procedure is executed in a `minikube` cluster, and will use a self signed certificate for the webhook connection.

 > Be aware of the pros and cons of an `AdmissionWebhook` before deploying one:
@@ -29,7 +29,7 @@ Get the admin user's password:
 kubectl get secret --namespace monitoring grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
 ```

-The Grafana server can be accessed via port 80 on `grafana.monitoring.svc.cluster.local` from within the cluster.
+The Grafana server can be accessed via port 80 on `grafana.monitoring.svc.cluster.local` from within the cluster.<br/>
 To get the external URL:

 ```sh
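The `base64 --decode` step in the hunk above can be tried locally without a cluster; the secret value below is made up for illustration:

```sh
# Kubernetes stores Secret data base64-encoded; this mimics the decode step.
encoded=$(printf '%s' 's3cr3t-p4ss' | base64)
echo "$encoded"                                    # the value as stored in the Secret
printf '%s' "$encoded" | base64 --decode ; echo    # → s3cr3t-p4ss
```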
@@ -1,6 +1,6 @@
 # Access a jar file's contents

-A `.jar` file is nothing more than an archive.
+A `.jar` file is nothing more than an archive.<br/>
 You can find all the files it contains just unzipping it:

 ```sh
@@ -12,7 +12,7 @@

 ### Zsh terminal icons are not getting displayed in Atom PlatformIO Ide Terminal

-Change font to `NotoSansMono Nerd Font` in PlatformIO Ide Terminal's settings.
+Change font to `NotoSansMono Nerd Font` in PlatformIO Ide Terminal's settings.<br/>
 See [Why Zsh terminal icons are not getting displayed in Atom PlatformIO Ide Terminal?]

 ## Further readings
@@ -293,11 +293,11 @@ DELIMITER
 - the first line must start with an **optional command** followed by the special redirection operator `<<` and the **delimiting identifier**
 - one can use **any string** as a delimiting identifier, the most commonly used being `EOF` or `END`
 - if the delimiting identifier is **unquoted**, the shell will substitute all variables, commands and special characters before passing the here-document lines to the command
-- appending a **minus sign** to the redirection operator (`<<-`), will cause all leading tab characters to be **ignored**
-  this allows one to use indentation when writing here-documents in shell scripts
+- appending a **minus sign** to the redirection operator (`<<-`), will cause all leading tab characters to be **ignored**<br/>
+  this allows one to use indentation when writing here-documents in shell scripts<br/>
   leading whitespace characters are not allowed, only tabs are
 - the here-document block can contain strings, variables, commands and any other type of input
-- the last line must end with the delimiting identifier
+- the last line must end with the delimiting identifier<br/>
   white space in front of the delimiter is not allowed

 ```sh
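The quoting rule in the hunk above can be sketched in a couple of lines (the variable name is invented for the example):

```sh
name='world'

# Unquoted delimiter: the shell expands $name before passing the lines on.
cat <<EOF
hello $name
EOF
# → hello world

# Quoted delimiter: no expansion happens; the lines are passed verbatim.
cat <<'EOF'
hello $name
EOF
# → hello $name
```

The `<<-` variant behaves the same but additionally strips leading tab characters; real tabs are required, which makes it awkward to reproduce faithfully here.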
@@ -1,10 +1,10 @@
 # Beowulf cluster

-Multi-computer architecture which can be used for parallel computations.
+Multi-computer architecture which can be used for parallel computations.<br/>
 It is usually composed of **commodity**, **non custom** hardware and software components and is trivially reproducible, like any PC capable of running a Unix-like operating system with standard Ethernet adapters and switches.

-The cluster usually consists of one **server** node, and one or more **client** nodes connected via some kind of network.
-The server controls the whole cluster, and provides files to the clients. It is also the cluster's console and gateway to the outside world. Large Beowulf machines might have more than one server node, and possibly other nodes dedicated to particular tasks like consoles or monitoring stations.
+The cluster usually consists of one **server** node, and one or more **client** nodes connected via some kind of network.<br/>
+The server controls the whole cluster, and provides files to the clients. It is also the cluster's console and gateway to the outside world. Large Beowulf machines might have more than one server node, and possibly other nodes dedicated to particular tasks like consoles or monitoring stations.<br/>
 In most cases, client nodes in a Beowulf system are dumb, and the dumber the better. Clients are configured and controlled by the server, and do only what they are told to do.

 Beowulf clusters behave more like a single machine rather than many workstations: nodes can be thought of as a CPU and memory package which is plugged into the cluster, much like a CPU or memory module can be plugged into a motherboard.
@@ -77,7 +77,7 @@ Enter passphrase for /dev/sdb1: ***
 killed
 ```

-it could be the process is using too much memory.
+it could be the process is using too much memory.<br/>
 This is due to the LUKS2 format using by default the Argon2i key derivation function, that is so called _memory-hard function_ - it requires certain amount of physical memory (to make dictionary attacks more costly).

 The solution is simple; either:
@@ -55,7 +55,7 @@ This should only be done in an encrypted root partition that includes `/boot`, s
 /boot/ root:root 700
 ```

-If you have other encrypted partitions (e.g. `/home`, `swap`, etc), you can create additional keys to mount them without entering a passphrase.
+If you have other encrypted partitions (e.g. `/home`, `swap`, etc), you can create additional keys to mount them without entering a passphrase.<br/>
 This works exactly as described above in steps 1-4, except that you don't need to add the key for those partitions to the initrd.

 ## Further readings
@@ -1,6 +1,6 @@
 # Extract attachments from multipart emails

-When saved as plain text, emails may be saved as S/MIME files with attachments.
+When saved as plain text, emails may be saved as S/MIME files with attachments.<br/>
 In such cases the text file itself contains the multipart message body of the email, so the attachments are provided as base64 streams:

 ```txt
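One hedged way to handle such a stream by hand: copy the base64 block between the MIME boundaries into its own file and decode it. The file names here are invented, and the stream is simulated rather than taken from a real message:

```sh
work=$(mktemp -d) && cd "$work"

# Simulate an attachment: a base64 stream as found in the message body.
printf '%s' 'attachment contents' | base64 > 'attachment.b64'

# Decode the extracted stream back into the original file.
base64 --decode 'attachment.b64' > 'attachment.bin'
cat 'attachment.bin'   # → attachment contents
```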
@@ -24,7 +24,7 @@ ffmpeg -y -i 'rec.webm' -vf 'palettegen' 'palette.png'
 ffmpeg -y -i 'rec.webm' -i 'palette.png' -filter_complex 'paletteuse' -r 10 'out.gif'
 ```

-Here `rec.webm` is the recorded video.
+Here `rec.webm` is the recorded video.<br/>
 The first command creates a palette out of the webm file. The second command converts the webm file to gif using the created palette.

 ## Sources
@@ -94,7 +94,7 @@ find -samefile 'path/to/file'

 Primaries used to check the difference between the file last access, creation or modification time and the time `find` was started.

-All time specification primaries take a numeric argument, and allow the number to be preceded by a plus sign (`+`) or a minus sign (`-`).
+All time specification primaries take a numeric argument, and allow the number to be preceded by a plus sign (`+`) or a minus sign (`-`).<br/>
 A preceding plus sign means **more than `n`**, a preceding minus sign means **less than `n`** and neither means **exactly `n`**.

 Accepted time information:
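The `+`/`-` prefixes from the hunk above can be demonstrated with the `-mmin` primary; the file names and timestamp are invented:

```sh
demo=$(mktemp -d) && cd "$demo"
touch 'recent.txt'                  # modified now
touch -t 202001010000 'old.txt'     # modified back in 2020

find . -mmin -5    # less than 5 minutes ago → ./recent.txt (and ./ itself)
find . -mmin +5    # more than 5 minutes ago → ./old.txt
```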
@@ -106,7 +106,7 @@ Accepted time information:

 With the `-Xmin` form, times are rounded up to the next full **minute**. This is the same as using `-Xtime Nm`.

-With the `-Xtime` form, times depend on the given unit; if no unit is given, it defaults to full 24 hours periods (days).
+With the `-Xtime` form, times depend on the given unit; if no unit is given, it defaults to full 24 hours periods (days).<br/>
 Accepted units:

 - `s` for seconds
@@ -117,7 +117,7 @@ Accepted units:

 Any number of units may be combined in one `-Xtime` argument.

-with the `-newerXY file` form, `find` checks if `file` has a more recent last access time (X=a), inode creation time (X=B), change time (X=c), or modification time (X=m) than the last access time (Y=a), inode creation time (Y=B), change time (Y=c), or modification time (Y=m).
+with the `-newerXY file` form, `find` checks if `file` has a more recent last access time (X=a), inode creation time (X=B), change time (X=c), or modification time (X=m) than the last access time (Y=a), inode creation time (Y=B), change time (Y=c), or modification time (Y=m).<br/>
 If Y=t, `file` is interpreted as a direct date specification of the form understood by `cvs`. Also, `-newermm` is the same as `-newer`.

 ```sh
@@ -96,5 +96,5 @@ Terraform will use the provider to connect to the proxy and operate on the SQL i

 ## Gotchas

-- As of 2021-05-18 the `root` user will **not be able** to create other users from the MySQL shell because it will lack `CREATE USER` permissions.
+- As of 2021-05-18 the `root` user will **not be able** to create other users from the MySQL shell because it will lack `CREATE USER` permissions.<br/>
 - The documentation says that SQL users created using `gcloud`, the APIs or the cloud console will have the same permissions of the `root` user; in reality, those administrative entities will be able to create users only from the MySQL shell.
@@ -384,7 +384,7 @@ git config --list --show-scope
 git config --list --global --show-origin
 ```

-The configuration is shown in full for the requested scope (or all if not specified), but it might include the same setting multiple times if it shows up in multiple scopes.
+The configuration is shown in full for the requested scope (or all if not specified), but it might include the same setting multiple times if it shows up in multiple scopes.<br/>
 Render the current value of a setting using the `--get` option:

 ```sh
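`--get` can be tried against a throwaway config file, so nothing touches your real configuration; the file name and value here are arbitrary:

```sh
# Write a setting into a standalone config file instead of a real scope.
git config --file 'demo.gitconfig' user.name 'Example Name'

# Read back the current value of that single setting.
git config --file 'demo.gitconfig' --get user.name   # → Example Name
```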
@@ -557,7 +557,7 @@ git diff > 'file.patch'
 git diff --output 'file.patch' --cached
 ```

-The output from `git diff` just shows changes to **text** files by default, no metadata or other information about commits or branches.
+The output from `git diff` just shows changes to **text** files by default, no metadata or other information about commits or branches.<br/>
 To get a whole commit with all its metadata and binary changes, use `git format-patch`:

 ```sh
@@ -583,7 +583,7 @@ Use `git apply` to apply a patch file to the current index:
 git apply 'file.patch'
 ```

-The changes from the patch are unstaged and no commits are created.
+The changes from the patch are unstaged and no commits are created.<br/>
 To apply all commits from a patch, use `git am` on a patch created with `git format-patch`:

 ```sh
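The diff/apply round trip can be sketched in a throwaway repository; the identity values and file names are placeholders:

```sh
repo=$(mktemp -d) && cd "$repo"
git init -q
echo 'hello' > 'file.txt'
git add 'file.txt'
git -c user.email='demo@example.com' -c user.name='demo' commit -qm 'add file'

echo 'hello world' > 'file.txt'   # make an unstaged change
git diff > 'file.patch'           # capture it as a patch
git checkout -- 'file.txt'        # throw the change away

git apply 'file.patch'            # re-apply it from the patch
git status --short                # the change is back, unstaged, with no new commit
```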
@@ -594,7 +594,7 @@ The commits are applied one after the other and registered in the repository's l

 ## The stash stack

-The _stash_ is a changelist separated from the one in the current working directory.
+The _stash_ is a changelist separated from the one in the current working directory.<br/>
 `git stash` saves the current changes there and cleans the working directory. You can (re-)apply changes from the stash at any time:

 ```sh
@@ -219,19 +219,19 @@ hQIMAwbYc…
 -----END PGP MESSAGE-----
 ```

-OpenPGP defines all text to be in UTF-8, so a comment may be any UTF-8 string.
+OpenPGP defines all text to be in UTF-8, so a comment may be any UTF-8 string.<br/>
 The whole point of armoring, however, is to provide seven-bit-clean data, so if a comment has characters that are outside the US-ASCII range of UTF they may very well not survive transport.

 ## Use a GPG key for SSH authentication

 > Shamelessly copied over from [How to enable SSH access using a GPG key for authentication].

-This exercise will use a GPG subkey with only the authentication capability enabled to complete SSH connections.
+This exercise will use a GPG subkey with only the authentication capability enabled to complete SSH connections.<br/>
 You can create multiple subkeys as you would do for SSH key pairs.

 ### Create an authentication subkey

-You should already have a GPG key. If you don't, read one of the many fine tutorials available on this topic.
+You should already have a GPG key. If you don't, read one of the many fine tutorials available on this topic.<br/>
 You will create the subkey by editing your existing key **in expert mode** to get access to the appropriate options:

 ```sh
@@ -286,12 +286,12 @@ Is this correct? (y/N) y
 Really create? (y/N) y

 sec  rsa2048/8715AF32191DB135
      created: 2019-03-21  expires: 2021-03-20  usage: SC
      trust: ultimate      validity: ultimate
 ssb  rsa2048/150F16909B9AA603
      created: 2019-03-21  expires: 2021-03-20  usage: E
 ssb  rsa2048/17E7403F18CB1123
      created: 2019-03-21  expires: never       usage: A
 [ultimate] (1). Brian Exelbierd

 gpg> quit
@@ -300,15 +300,15 @@ Save changes? (y/N) y

 ### Enable SSH to use the GPG subkey

-When using SSH, `ssh-agent` is used to manage SSH keys. When using a GPG key, `gpg-agent` is used to manage GPG keys.
+When using SSH, `ssh-agent` is used to manage SSH keys. When using a GPG key, `gpg-agent` is used to manage GPG keys.<br/>
 To get `gpg-agent` to handle requests from SSH, you need to enable its SSH support:

 ```sh
 echo "enable-ssh-support" >> ~/.gnupg/gpg-agent.conf
 ```

-You can avoid using `ssh-add` to load the keys pre-specifying which GPG keys to use in the `~/.gnupg/sshcontrol` file.
-The entries in this file are keygrips - internal identifiers that `gpg-agent` uses to refer to the keys. A keygrip refers to both the public and private key.
+You can avoid using `ssh-add` to load the keys pre-specifying which GPG keys to use in the `~/.gnupg/sshcontrol` file.<br/>
+The entries in this file are keygrips - internal identifiers that `gpg-agent` uses to refer to the keys. A keygrip refers to both the public and private key.<br/>
 To find the keygrip use `gpg -K --with-keygrip`, then add that line to the `~/.gnupg/sshcontrol` file:

 ```sh
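The keygrip lines can be picked out of the `gpg -K --with-keygrip` output with standard text tools; the listing below is a fabricated sample, not a real key:

```sh
# Fabricated sample of what `gpg -K --with-keygrip` prints.
sample='sec   rsa2048/8715AF32191DB135
      Keygrip = 00D2F8A2E5D3F70A34E1D5A9C2B41B7E8F6A3C11
ssb   rsa2048/17E7403F18CB1123
      Keygrip = 7A1B2C3D4E5F60718293A4B5C6D7E8F901234567'

# Extract only the keygrip values; each one is a candidate sshcontrol line.
printf '%s\n' "$sample" | awk '/Keygrip/ {print $3}'
```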
@@ -12,7 +12,7 @@ If you want the filtered redirected output on `stderr` again, add the `>&2` redi
 command 2> >(grep something >&2)
 ```

-`2>` redirects `stderr` to a pipe, while `>(command)` reads from it. This is _syntactic sugar_ to create a pipe (not a file) and remove it when the process completes. They are effectively anonymous, because they are not given a name in the filesystem.
+`2>` redirects `stderr` to a pipe, while `>(command)` reads from it. This is _syntactic sugar_ to create a pipe (not a file) and remove it when the process completes. They are effectively anonymous, because they are not given a name in the filesystem.<br/>
 Bash calls this _process substitution_:

 > Process substitution can also be used to capture output that would normally go to a file, and redirect it to the input of a process.
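A minimal sketch of the pattern, assuming `bash` (process substitution is not POSIX); writing the filtered stream to a file makes the asynchronous behaviour easy to observe:

```sh
#!/usr/bin/env bash
filtered=$(mktemp)

# stdout passes through untouched; stderr is filtered by grep into a file.
{ echo 'regular output'; echo 'error: something broke' >&2; } \
  2> >(grep 'error' > "$filtered")

sleep 1             # the >() process runs asynchronously; give it time to finish
cat "$filtered"     # → error: something broke
```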
@@ -23,7 +23,7 @@ You can exclude `stdout` and grep `stderr` redirecting it to `null`:
 command 1>/dev/null 2> >(grep -oP "(.*)(?=pattern)")
 ```

-> Do note that **the target command of process substitution runs asynchronously**.
+> Do note that **the target command of process substitution runs asynchronously**.<br/>
 > As a consequence, `stderr` lines that get through the grep filter may not appear at the place you would expect in the rest of the output, but even on your next command prompt.

 ## Further readings
@@ -49,7 +49,7 @@ grep --color '[[:digit:]]' 'file.txt'

 For simple searches, you might want to use [pdfgrep].

-Should you need more advanced grep capabilities not incorporated by pdfgrep, you might want to convert the file to text and search there.
+Should you need more advanced grep capabilities not incorporated by pdfgrep, you might want to convert the file to text and search there.<br/>
 You can do this using [pdftotext](pdfgrep.md) as shown in this example ([source][stackoverflow answer about how to search contents of multiple pdf files]):

 ```sh
@@ -53,7 +53,7 @@ Short | Long | Description

 Some filters take no value or multiple values.

-Filters that can take a numeric option generally support the size multipliers `K`/`M`/`G`/`T`/`P`/`E`, with or without an added `iB` or `B`.
+Filters that can take a numeric option generally support the size multipliers `K`/`M`/`G`/`T`/`P`/`E`, with or without an added `iB` or `B`.<br/>
 Multipliers are binary-style unless the `-B` suffix is used, which will use decimal multipliers. For example, 16k or 16kib = 16384; 16kb = 16000. Multipliers are case-insensitive.

 Filters have cumulative effects: `jdupes -X size+:99 -X size-:101` will cause only files of exactly 100 bytes in size to be included.
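The binary vs decimal distinction above is plain arithmetic; a quick check in the shell:

```sh
echo $(( 16 * 1024 ))   # 16k / 16kib → 16384 (binary multiplier)
echo $(( 16 * 1000 ))   # 16kb        → 16000 (decimal multiplier)
```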
@@ -66,7 +66,7 @@ podman exec \

 ## Service execution

-`run_keybase` starts the Keybase service, KBFS and the GUI.
+`run_keybase` starts the Keybase service, KBFS and the GUI.<br/>
 If services are already running, they will be restarted.

 Options can also be controlled by setting the related environment variable to 1:
@@ -103,7 +103,7 @@ Use the import form in [Keybase launches encrypted git], or:

 ## Run as root

-Keybase shouldn't be run as the `root` user, and by default it will fail with a message explaining it.
+Keybase shouldn't be run as the `root` user, and by default it will fail with a message explaining it.<br/>
 Under some circumstances (like Docker or other containers) `root` can be the best or only option; run commands in concert with the `KEYBASE_ALLOW_ROOT=1` environment variable to force the execution.

 ## Temporary devices
@@ -123,7 +123,7 @@ keybase oneshot --username user --paperkey 'paper key'
 KEYBASE_PAPERKEY='paper key' KEYBASE_USERNAME='user' keybase oneshot
 ```

-Exploding messages work in oneshot mode with the caveat that you cannot run multiple instances of such with the same paperkey at the same time as each instance will try to create ephemeral keys, but require a distinct paperkey to uniquely identify itself as a separate device.
+Exploding messages work in oneshot mode with the caveat that you cannot run multiple instances of such with the same paperkey at the same time as each instance will try to create ephemeral keys, but require a distinct paperkey to uniquely identify itself as a separate device.<br/>
 In addition, ephemeral keys are **purged entirely** when closing the oneshot session, and you will not be able to access any old ephemeral content when starting keybase up again.

 ## Further readings
@@ -215,7 +215,7 @@ Also see [configuration best practices] and the [production best practices check

 See [Configure Quality of Service for Pods] for more information.

-QoS classes are used to make decisions about scheduling and evicting Pods.
+QoS classes are used to make decisions about scheduling and evicting Pods.<br/>
 When a Pod is created, it is also assigned one of the following QoS classes:

 - _Guaranteed_, when **every** Container in the Pod, including init containers, has:
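For instance, a Pod whose only container sets requests equal to limits would be classed _Guaranteed_; a minimal hypothetical manifest (name and image are just examples):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo            # hypothetical name
spec:
  containers:
    - name: app
      image: nginx          # any image works; nginx is only an example
      resources:
        requests:
          cpu: 500m
          memory: 128Mi
        limits:             # equal to requests → QoS class "Guaranteed"
          cpu: 500m
          memory: 128Mi
```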
@@ -202,7 +202,7 @@ spec:

 `pollingInterval` is the interval in seconds KEDA will check each trigger on.

-`successfulJobsHistoryLimit` and `failedJobsHistoryLimit` specify how many _completed_ and _failed_ jobs should be kept, similarly to Jobs History Limits; it allows to learn what the outcome of the jobs are.
+`successfulJobsHistoryLimit` and `failedJobsHistoryLimit` specify how many _completed_ and _failed_ jobs should be kept, similarly to Jobs History Limits; it allows to learn what the outcome of the jobs are.<br/>
 The actual number of jobs could exceed the limit in a short time, but it is going to resolve in the cleanup period. Currently, the cleanup period is the same as the Polling interval.

 `envSourceContainerName` specifies the name of container in the target Job from which KEDA will retrieve the environment properties holding secrets etc. If not defined, KEDA will try to retrieve the environment properties from the first Container in the target resource's definition.
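A trimmed, hypothetical `ScaledJob` sketch showing where those fields sit (names and image are invented; real trigger definitions are omitted):

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledJob
metadata:
  name: demo-scaledjob              # hypothetical
spec:
  pollingInterval: 30               # seconds between trigger checks
  successfulJobsHistoryLimit: 5     # completed jobs to keep
  failedJobsHistoryLimit: 5         # failed jobs to keep
  envSourceContainerName: worker    # container KEDA reads env properties from
  jobTargetRef:
    template:
      spec:
        containers:
          - name: worker
            image: busybox          # placeholder image
        restartPolicy: Never
  triggers: []                      # real triggers go here
```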
@@ -279,7 +279,7 @@ Include a non-code formatted backtick by escaping it normally (with a `\`).

 Render it in an inline code block using double backticks instead of single backticks.

-Alternatively, use a code block. This will wrap everything in a `<pre>` HTML tag.
+Alternatively, use a code block. This will wrap everything in a `<pre>` HTML tag.<br/>
 To do this, either indent 4 spaces to start a code block, or use fenced code blocks if supported.

 ### VS Code and mermaid graph in Markdown preview
@@ -50,7 +50,7 @@ salt -C 'G@os:Ubuntu and minion* or S@192.168.50.*' network.interfaces

 ## Key management

-Will use `salt-key`.
+Will use `salt-key`.<br/>
 This needs to be done on the **master** host.

 - View all minion connections and whether the connection is accepted, rejected, or pending:
@@ -113,7 +113,7 @@ module "local_vpc_module" {
 }
 ```

-Run `terraform init` or `terraform get` to install the modules.
+Run `terraform init` or `terraform get` to install the modules.<br/>
 Modules are installed in the `.terraform/modules` directory inside the configuration's working directory; local modules are symlinked from there.

 When terraform processes a module block, that block will inherit the provider from the enclosing configuration.
@@ -186,7 +186,7 @@ Terraform will perform the following actions:
 
 ### Conditional creation of a resource
 
-You can conditionally create one or more resources.
+You can conditionally create one or more resources.<br/>
 There are 2 ways to do this:
 
 - with `count`:
@@ -43,7 +43,7 @@
 
 Set different options for a particular file.
 
-> The `modeline` option must be enabled in order to take advantage of this.
+> The `modeline` option must be enabled in order to take advantage of this.<br/>
 > This option is **set** by default for Vim running in nocompatible mode, but some notable distributions of Vim disable it in the system's `vimrc` for security. In addition, the option is **off** by default when editing as `root`.
 
 See `:help modeline` for more information.
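For context on the feature the hunk above documents: a modeline is a specially formatted comment near the top or bottom of a file that Vim parses for per-file settings. A minimal sketch (illustrative only, not part of the diff — the option names are standard Vim options):

```vim
# vim: set tabstop=2 shiftwidth=2 expandtab :
```

With `modeline` enabled, opening a file containing such a line applies those options to that buffer only.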
@@ -116,7 +116,7 @@ When one writes an alias, one can also press `ctrl-x` followed by `a` to see the
 
 ## Parameter expansion
 
-Parameter expansions can involve flags like `${(@kv)aliases}` and other operators such as `${PREFIX:-"/usr/local"}`.
+Parameter expansions can involve flags like `${(@kv)aliases}` and other operators such as `${PREFIX:-"/usr/local"}`.<br/>
 Nested parameters expand from the inside out.
 
 If the parameter is a **scalar** (a number or string) then the value, if any, is substituted:
@@ -153,7 +153,7 @@ hello world
 
 #### Check if a variable is set
 
-Use the form `+parameterName`.
+Use the form `+parameterName`.<br/>
 If _name_ is set, even to an empty string, then its value is substituted by _1_, otherwise by _0_:
 
 ```sh
@@ -172,7 +172,7 @@ $ echo "${+name}"
 
 #### Provide a default value
 
-Use the forms `parameterName-defaultValue` or `parameterName:-defaultValue`.
+Use the forms `parameterName-defaultValue` or `parameterName:-defaultValue`.<br/>
 If _name_ is set then substitute its value, otherwise substitute _word_:
 
 ```sh
@@ -213,7 +213,7 @@ word
 
 #### Just substitute with its value if set
 
-Use the forms `parameterName+defaultValue` or `parameterName:+defaultValue`.
+Use the forms `parameterName+defaultValue` or `parameterName:+defaultValue`.<br/>
 If _name_ is set, then substitute it with its value, otherwise substitute nothing:
 
 ```sh
@@ -248,7 +248,7 @@ $ echo "${name:+word}"
 
 #### Set a default value and substitute
 
-Use the forms `parameterName=defaultValue`, `parameterName:=defaultValue` or `parameterName::=defaultValue`.
+Use the forms `parameterName=defaultValue`, `parameterName:=defaultValue` or `parameterName::=defaultValue`.<br/>
 In the first form, if _name_ is unset then set it to _word_:
 
 ```sh
@@ -317,7 +317,7 @@ word
 
 #### Fail on missing value
 
-Use the forms `parameterName?defaultValue` or `parameterName:?defaultValue`.
+Use the forms `parameterName?defaultValue` or `parameterName:?defaultValue`.<br/>
 In the first form, if _name_ is set then substitute its value, otherwise print _word_ and exit from the shell.
 
 ```sh
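The expansion forms touched by the hunks above are easiest to compare side by side. A small sketch follows; note that `${+name}` is zsh-only, while the `-`, `=`, `+` and `?` operators below also exist in POSIX sh, so this runs in any Bourne-family shell:

```shell
#!/bin/sh
# Default value: name is unset, so the fallback word is substituted.
unset name
echo "${name:-fallback}"

# Set-and-substitute: name is unset, so it is assigned the word, then substituted.
unset name
echo "${name:=assigned}"
echo "$name"    # the assignment persists

# Substitute-only-if-set: name is set, so the alternate word is substituted.
name='value'
echo "${name:+alternate}"

# Fail on missing value: run in a subshell so the current shell survives the exit.
unset name
( echo "${name:?name must be set}" ) 2>/dev/null || echo 'expansion failed'
```

The `:` in each operator additionally treats a set-but-empty variable as unset; without it, an empty string counts as "set".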
@@ -423,14 +423,14 @@ bindkey "^[[3~" delete-char
 
 ### Config files read order
 
-1. `/etc/zshenv`; this cannot be overridden
+1. `/etc/zshenv`; this cannot be overridden<br/>
 subsequent behaviour is modified by the `RCS` and `GLOBAL_RCS` options:
 
 - `RCS` affects all startup files
 - `GLOBAL_RCS` only affects global startup files (those shown here with an path starting with a /)
 
-If one of the options is unset at any point, any subsequent startup file(s) of the corresponding type will not be read.
-It is also possible for a file in `$ZDOTDIR` to re-enable `GLOBAL_RCS`.
+If one of the options is unset at any point, any subsequent startup file(s) of the corresponding type will not be read.<br/>
+It is also possible for a file in `$ZDOTDIR` to re-enable `GLOBAL_RCS`.<br/>
 Both `RCS` and `GLOBAL_RCS` are set by default
 
 1. `$ZDOTDIR/.zshenv`
@@ -454,16 +454,16 @@ bindkey "^[[3~" delete-char
 1. `$ZDOTDIR/.zlogout`
 1. `/etc/zlogout`
 
-This happens with either an explicit exit via the `exit` or `logout` commands, or an implicit exit by reading `end-of-file` from the terminal.
-However, if the shell terminates due to exec'ing another process, the files are not read. These are also affected by the `RCS` and `GLOBAL_RCS` options.
+This happens with either an explicit exit via the `exit` or `logout` commands, or an implicit exit by reading `end-of-file` from the terminal.<br/>
+However, if the shell terminates due to exec'ing another process, the files are not read. These are also affected by the `RCS` and `GLOBAL_RCS` options.<br/>
 The `RCS` option affects the saving of history files, i.e. if `RCS` is unset when the shell exits, no history file will be saved.
 
 If `ZDOTDIR` is unset, `HOME` is used instead. Files listed above as being in `/etc` may be in another directory, depending on the installation.
 
-`/etc/zshenv` is run for **all** instances of zsh.
+`/etc/zshenv` is run for **all** instances of zsh.<br/>
 it is a good idea to put code that does not need to be run for every single shell behind a test of the form `if [[ -o rcs ]]; then ...` so that it will not be executed when zsh is invoked with the `-f` option.
 
-When `/etc/zprofile` is installed it will override `PATH` and possibly other variables that a user may set in `~/.zshenv`. Custom `PATH` settings and similar overridden variables can be moved to `~/.zprofile` or other user startup files that are sourced after the `/etc/zprofile`.
+When `/etc/zprofile` is installed it will override `PATH` and possibly other variables that a user may set in `~/.zshenv`. Custom `PATH` settings and similar overridden variables can be moved to `~/.zprofile` or other user startup files that are sourced after the `/etc/zprofile`.<br/>
 If `PATH` must be set in `~/.zshenv` to affect things like non-login ssh shells, one method is to use a separate path-setting file that is conditionally sourced in `~/.zshenv` and also sourced from `~/.zprofile`.
 
 ### History
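As a concrete sketch of the `[[ -o rcs ]]` guard mentioned in the hunk above, a hypothetical `/etc/zshenv` fragment might look like this (the `PATH` entry is illustrative only, not part of the diff):

```sh
# /etc/zshenv -- sourced by every zsh, including `zsh -f`
if [[ -o rcs ]]; then
  # Optional setup: skipped when zsh is invoked with -f (RCS unset)
  export PATH="$HOME/bin:$PATH"
fi
```

Keeping only the bare essentials outside the guard ensures `zsh -f` still starts with a minimal, predictable environment.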