Improved local shell snippet rendering

Michele Cereda
2022-05-15 00:24:53 +02:00
parent ba75b52244
commit d057f2210d
135 changed files with 313 additions and 298 deletions


@@ -2,7 +2,7 @@
## TL;DR
-```shell
+```sh
# installation
brew cask install 1password-cli
@@ -23,7 +23,7 @@ op list items
- After you have signed in the first time, you can sign in again using your account shorthand, which is your sign-in address subdomain (in this example, _company_); `op signin` will prompt you for your password and output a command that can save your session token to an environment variable:
-```shell
+```sh
op signin company
```

knowledge base/README.md (new file, 15 lines)

@@ -0,0 +1,15 @@
# Knowledge base
This is the collection of all notes, reminders, and whatnot I gathered over the years.
## Conventions
- Use `sh` as document language instead of `shell` when writing shell snippets in code blocks:
```diff
- ```shell
+ ```sh
#!/usr/bin/env zsh
```
The local renderer displays them better this way.
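The convention above can be applied in bulk. A minimal sketch of such a sweep, assuming GNU `sed` (the `sample.md` file and the exact command are illustrative, not necessarily what this commit used):

```sh
# Illustration only: a sample note opening a snippet with the old language tag.
printf '%s\n' '```shell' 'echo hello' '```' > sample.md

# Rewrite every ```shell fence opener to ```sh in the Markdown files here.
# GNU sed's -i flag is assumed; BSD sed needs `-i ''` instead.
find . -maxdepth 1 -type f -name '*.md' -exec sed -i 's/^```shell$/```sh/' {} +

grep -c '^```sh$' sample.md   # prints 1
```

Dropping `-maxdepth 1` would extend the sweep to the whole tree, closer to what this commit did across 135 files.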


@@ -3,7 +3,7 @@
A `.jar` file is nothing more than an archive.
You can find all the files it contains just unzipping it:
-```shell
+```sh
$ unzip file.jar
Archive: file.jar
creating: META-INF/


@@ -2,7 +2,7 @@
## TL;DR
-```shell
+```sh
# show acls of file test/declarations.h
getfacl test/declarations.h


@@ -2,7 +2,7 @@
## TL;DR
-```shell
+```sh
# Install.
pip3 install --user ansible && port install sshpass # darwin
sudo pamac install ansible sshpass # manjaro linux
@@ -161,7 +161,7 @@ ansible-galaxy remove namespace.role
Roles can be either **created**:
-```shell
+```sh
ansible-galaxy init role-name
```
@@ -174,7 +174,7 @@ collections:
- community.docker
```
-```shell
+```sh
ansible-galaxy install mcereda.boinc_client
ansible-galaxy install --roles-path ~/ansible-roles namespace.role
ansible-galaxy install namespace.role,v1.0.0
@@ -203,7 +203,7 @@ dependencies:
Change Ansible's output setting the stdout callback to `json` or `yaml`:
-```shell
+```sh
ANSIBLE_STDOUT_CALLBACK=yaml
```


@@ -6,14 +6,14 @@
Example:
-```shell
+```sh
tee: /Users/user/.antigen/bundles/robbyrussell/oh-my-zsh/cache//completions/_helm: No such file or directory
/Users/user/.antigen/bundles/robbyrussell/oh-my-zsh/plugins/helm/helm.plugin.zsh:source:9: no such file or directory: /Users/user/.antigen/bundles/robbyrussell/oh-my-zsh/cache//completions/_helm
```
The issue is due of the `$ZSH_CACHE_DIR/completions` being missing and `tee` not creating it on Mac OS X. Create the missing `completions` directory and re-run antigen:
-```shell
+```sh
mkdir -p $ZSH_CACHE_DIR/completions
antigen apply
```


@@ -2,7 +2,7 @@
## TL;DR
-```shell
+```sh
# Update the package lists.
apk update


@@ -2,7 +2,7 @@
## TL;DR
-```shell
+```sh
# mark all packages as non-explicitly installed
apt-mark auto $(sudo apt-mark showmanual)


@@ -4,7 +4,7 @@
## TL;DR
-```shell
+```sh
# list installed plugins
asdf plugin list
@@ -46,7 +46,7 @@ asdf current helm
## Installation
-```shell
+```sh
# install the program
brew install asdf
@@ -57,7 +57,7 @@ brew install asdf
## Plugins management
-```shell
+```sh
# list installed plugins
asdf plugin list
asdf plugin list --urls
@@ -80,7 +80,7 @@ asdf plugin remove $PLUGIN_NAME
## Versions management
-```shell
+```sh
# list installed versions for a plugin
# asdf list $PLUGIN_NAME
asdf list elixir


@@ -7,7 +7,7 @@ Requires [polkit] to be:
- installed
- configured to authorize and authenticate the users
-```shell
+```sh
pkexec COMMAND
```


@@ -2,7 +2,7 @@
## TL;DR
-```shell
+```sh
# Install the CLI.
brew install awscli
@@ -16,7 +16,7 @@ export AWS_PROFILE="work"
## Profiles
-```shell
+```sh
# Initialize the default profile.
# Not specifying a profile means to configure the default profile.
$ aws configure


@@ -2,7 +2,7 @@
## TL;DR
-```shell
+```sh
# login
az login


@@ -2,7 +2,7 @@
## TL;DR
-```shell
+```sh
# Run a command or function on exit, kill or error.
trap "rm -f $tempfile" EXIT SIGTERM ERR
trap function-name EXIT SIGTERM ERR


@@ -2,7 +2,7 @@
## TL;DR
-```shell
+```sh
# use a project manager
boinccmd --acct_mgr attach http://bam.boincstats.com myAwesomeUsername myAwesomePassword
boinccmd --acct_mgr info


@@ -2,7 +2,7 @@
## TL;DR
-```shell
+```sh
# create a volume with single metadata and double data blocks (useless but good example)
sudo mkfs.btrfs --metadata single --data dup /dev/sdb
@@ -44,7 +44,7 @@ sudo compsize /mnt/btrfs-volume
See also [snapper].
-```shell
+```sh
sudo btrfs send --no-data -p /old/snapshot /new/snapshot | sudo btrfs receive --dump
# requires you to be using snapper for your snapshots


@@ -2,7 +2,7 @@
## TL;DR
-```shell
+```sh
# using the exact name of the command
curl cheat.sh/tar
curl cht.sh/curl


@@ -2,7 +2,7 @@
## TL;DR
-```shell
+```sh
# access a test container
kubectl run --generator=run-pod/v1 --limits 'cpu=200m,memory=512Mi' --requests 'cpu=200m,memory=512Mi' --image alpine ${USER}-mysql-test -it -- sh


@@ -4,7 +4,7 @@ A multi-machine dotfile manager, written in Go.
## TL;DR
-```shell
+```sh
# initialize chezmoi
chezmoi init
chezmoi init https://github.com/username/dotfiles.git
@@ -50,7 +50,7 @@ chezmoi update
## Save the current data to a remote repository
-```shell
+```sh
$ chezmoi cd
chezmoi $> git remote add origin https://github.com/username/dotfiles.git
chezmoi $> git push -u origin main


@@ -2,7 +2,7 @@
## TL;DR
-```shell
+```sh
# manually update the virus definitions
# do it once **before** starting a scan or the daemon
# the definitions updater deamon must be stopped to avoid complaints from it
@@ -39,7 +39,7 @@ nice -n 15 clamscan && clamscan --bell -i -r /home
- The `--fdpass` option of `clamdscan` (notice the _d_ in the command) sends a file descriptor to clamd rather than a path name, avoiding the need for the `clamav` user to be able to read everyone's files
- `clamscan` is designed to be single-threaded, so when scanning a file or directory from the command line only a single CPU thread is used; use `xargs` or another executor to run a scan in parallel:
-```shell
+```sh
find . -type f -printf "'%p' " | xargs -P $(nproc) -n 1 clamscan
find . -type f | parallel --group --jobs 0 -d '\n' clamscan {}
```


@@ -2,13 +2,13 @@
Install and use `diffpdf` (preferred) or `diff-pdf`:
-```shell
+```sh
sudo zypper install diff-pdf
```
As an alternative:
-```shell
+```sh
# create a pdf with the diff as red pixels
compare -verbose -debug coder $PDF_1 $PDF_2 -compose src /tmp/$OUT_FILE.tmp


@@ -4,7 +4,7 @@ Default governor is _ondemand_ for older CPUs and kernels and _schedutil_ for ne
## TL;DR
-```shell
+```sh
# list the available governors
cpupower frequency-info --governors


@@ -2,7 +2,7 @@
## TL;DR
-```shell
+```sh
# Send a single GET request and show its output on stdout.
curl http://url.of/file


@@ -2,7 +2,7 @@
## TL;DR
-```shell
+```sh
# just check
diff-pdf --verbose file1.pdf file2.pdf


@@ -2,7 +2,7 @@
## TL;DR
-```shell
+```sh
# Show locally available images.
docker images -a
@@ -101,7 +101,7 @@ The docker daemon is configured using the `/etc/docker/daemon.json` file:
Docker mounts specific system files in all containers to forward its settings:
-```shell
+```sh
6a95fabde222$ mount
/dev/disk/by-uuid/1bb…eb5 on /etc/resolv.conf type btrfs (rw,…)
@@ -116,7 +116,7 @@ Those files come from the volume the docker container is using for its root, and
- Containers created with no specified name will be assigned one automatically:
-```shell
+```sh
$ docker create hello-world
8eaaae8c0c720ac220abac763ad4b477d807be4522d58e334337b1b74a14d0bd
@@ -131,7 +131,7 @@ Those files come from the volume the docker container is using for its root, and
- When referring to a container or image using their ID, you just need to use as many characters you need to uniquely specify a single one of them:
-```shell
+```sh
$ docker ps -a
CONTAINER ID   IMAGE    COMMAND     CREATED          STATUS    PORTS   NAMES
63b1a0a3e557   alpine   "/bin/sh"   34 seconds ago   Created           alpine


@@ -2,7 +2,7 @@
## TL;DR
-```shell
+```sh
# add an extra architecture
dpkg --add-architecture i386


@@ -4,12 +4,12 @@ Substitutes environment variables in shell format strings.
## TL;DR
-```shell
+```sh
envsubst < input.file
envsubst < input.file > output.file
```
-```shell
+```sh
$ cat hello.file
hello $NAME


@@ -2,7 +2,7 @@
## TL;DR
-```shell
+```sh
set -o allexport
source envfile
set +o allexport


@@ -18,7 +18,7 @@ In such cases the text file itself contains the multipart message body of the em
You can use `munpack` to easily extract attachments out of such text files and write them into a proper named files.
-```shell
+```sh
$ munpack -f plaintext.eml
myawesomefile.tar.gz (application/x-gzip)
```


@@ -6,7 +6,7 @@ RPM Fusion provides software that the Fedora Project or Red Hat doesn't want to
These repositories are not available by default and need to be installed using a remote package:
-```shell
+```sh
# All flavours but Silverblue-based ones.
sudo dnf install https://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm https://download1.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm
@@ -16,7 +16,7 @@ sudo rpm-ostree install https://download1.rpmfusion.org/free/fedora/rpmfusion-fr
After enabling the repositories, you can add their _tainted_ versions for closed or restricted packages:
-```shell
+```sh
sudo dnf install rpmfusion-{free,nonfree}-release-tainted
sudo rpm-ostree install rpmfusion-{free,nonfree}-release-tainted
```


@@ -4,7 +4,7 @@
Changes to the base layer are executed in a new bootable filesystem root. This means that the system must be rebooted after a package has been layered.
-```shell
+```sh
# Check for available upgrades.
rpm-ostree upgrade --check


@@ -2,7 +2,7 @@
## TL;DR
-```shell
+```sh
# Convert a webm file to GIF.
ffmpeg -y -i rec.webm -vf palettegen palette.png \
&& ffmpeg -y -i rec.webm -i palette.png \
@@ -13,7 +13,7 @@ ffmpeg -y -i rec.webm -vf palettegen palette.png \
### Webm to GIF
-```shell
+```sh
ffmpeg -y -i rec.webm -vf palettegen palette.png
ffmpeg -y -i rec.webm -i palette.png -filter_complex paletteuse -r 10 out.gif
```


@@ -2,7 +2,7 @@
## TL;DR
-```shell
+```sh
# Change the permissions of all files and directories in the current directory,
# recursively.
find . -type d -exec chmod 755 {} +
@@ -101,7 +101,7 @@ Any number of units may be combined in one `-Xtime` argument.
with the `-newerXY file` form, `find` checks if `file` has a more recent last access time (X=a), inode creation time (X=B), change time (X=c), or modification time (X=m) than the last access time (Y=a), inode creation time (Y=B), change time (Y=c), or modification time (Y=m).
If Y=t, `file` is interpreted as a direct date specification of the form understood by `cvs`. Also, `-newermm` is the same as `-newer`.
-```shell
+```sh
# Find files last accessed exactly 5 minutes ago.
find /dir -amin 5
find /dir -atime 300s
@@ -143,14 +143,14 @@ find / -newer file.txt -user wnj -print
- in GNU's `find` the path parameter defaults to the current directory and can be avoided
-```shell
+```sh
# Delete all empty folders in the current directory only.
find -maxdepth 1 -empty -delete
```
- GNU's `find` also understands fractional time specifications:
-```shell
+```sh
# Find files modified in the last 1 hour and 30 minutes.
find -mtime 1.5h
```


@@ -9,7 +9,7 @@ It is the default firewall management tool for:
## TL;DR
-```shell
+```sh
# Show which zone is currently selected as the default.
firewall-cmd --get-default-zone


@@ -2,7 +2,7 @@
## TL;DR
-```shell
+```sh
# List installed applications and runtimes.
flatpak list
flatpak list --app


@@ -2,7 +2,7 @@
## TL;DR
-```shell
+```sh
# Initialize package managers.
portsnap auto
pkg bootstrap


@@ -1,6 +1,6 @@
# Funtoo GNU/Linux
-```shell
+```sh
# Portage update.
sudo ego sync
sudo emerge --sync


@@ -2,7 +2,7 @@
## TL;DR
-```shell
+```sh
# Display all detected devices.
fwupdmgr get-devices


@@ -6,7 +6,7 @@
## TL;DR
-```shell
+```sh
# Login.
gcloud auth login
gcloud auth login account


@@ -2,7 +2,7 @@
## TL;DR
-```shell
+```sh
cat /proc/${PID}/environ
# Container in kubernetes.


@@ -2,7 +2,7 @@
## TL;DR
-```shell
+```sh
# Set your identity.
git config user.name 'User Name'
git config --global user.email user@email.com
@@ -291,7 +291,7 @@ git show :/cool
## Configuration
-```shell
+```sh
# Required to be able to commit changes.
git config --local user.email 'me@me.info'
git config --local user.name 'Me'
@@ -311,7 +311,7 @@ git config --global submodule.recurse true
To show the current configuration use the `--list` option:
-```shell
+```sh
git config --list
git config --list --show-scope
git config --list --global --show-origin
@@ -320,7 +320,7 @@ git config --list --global --show-origin
The configuration is shown in full for the requested scope (or all if not specified), but it might include the same setting multiple times if it shows up in multiple scopes.
Render the current value of a setting using the `--get` option:
-```shell
+```sh
# Get the current user.name value.
git config --get user.name
@@ -333,7 +333,7 @@ git config --list \
## Manage changes
-```shell
+```sh
# Show changes relative to the current index (not yet staged).
git diff
@@ -370,7 +370,7 @@ git diff --no-index path/to/file/A path/to/file/B
Just save the output from `git diff` to get a patch file:
-```shell
+```sh
# Just the current changes.
# No staged nor committed files.
git diff > file.patch
@@ -382,7 +382,7 @@ git diff --output file.patch --cached
The output from `git diff` just shows changes to **text** files by default, no metadata or other information about commits or branches.
To get a whole commit with all its metadata and binary changes, use `git format-patch`:
-```shell
+```sh
# Include 5 commits starting with 'commit' and going backwards.
git format-patch -5 commit
@@ -401,14 +401,14 @@ git add . && git commit -m 'uncommitted' \
Use `git apply` to apply a patch file to the current index:
-```shell
+```sh
git apply file.patch
```
The changes from the patch are unstaged and no commits are created.
To apply all commits from a patch, use `git am` on a patch created with `git format-patch`:
-```shell
+```sh
git am file.patch
```
@@ -419,7 +419,7 @@ The commits are applied one after the other and registered in the repository's l
The _stash_ is a changelist separated from the one in the current working directory.
`git stash` will save the current changes there and cleans the working directory. You can (re-)apply changes from the stash at any time:
-```shell
+```sh
# Stash changes locally.
git stash
@@ -442,7 +442,7 @@ git stash apply stash@{6}
This creates a local branch tracking an existing remote branch.
-```shell
+```sh
$ git checkout -b local-branch remote/existing-branch
Branch 'local-branch' set up to track remote branch 'existing-branch' from 'remote'.
Switched to a new branch 'local-branch'
@@ -450,7 +450,7 @@ Switched to a new branch 'local-branch'
### Delete a branch
-```shell
+```sh
# Delete local branches.
git branch --delete local-branch
git branch -D local-branch
@@ -467,7 +467,7 @@ git branch --delete --remotes feat-branch
Command source [here][prune local branches that do not exist on remote anymore].
-```shell
+```sh
# Branches merged on the remote are tagged as 'gone' in `git branch -vv`'s output.
git fetch -p \
&& awk '/origin/&&/gone/{print $1}' <(git branch -vv) | xargs git branch -d
@@ -478,7 +478,7 @@ git branch --merged | grep -vE '(^\*|master|main|dev)' | xargs git branch -d
### Merge the master branch into a feature branch
-```shell
+```sh
git stash pull
git checkout master
git pull
@@ -488,7 +488,7 @@ git merge --no-ff master
git stash pop
```
-```shell
+```sh
git checkout feature
git pull origin master
```
@@ -498,7 +498,7 @@ git pull origin master
`git rebase` takes the commits in a branch and appends them on top of the commits in a different branch.
The commits to rebase are previously saved into a temporary area and then reapplied to the new branch, one by one, in order.
-```shell
+```sh
# Rebase main on top of the current branch.
git rebase main
@@ -513,7 +513,7 @@ git pull --rebase=interactive origin master
_Annotated_ tags are stored as full objects in git's database:
-```shell
+```sh
# Create annotated tags.
git tag --annotate v0.1.0
@@ -529,7 +529,7 @@ git push --follow-tags
while _lightweight_ tags are stored as a pointer to a specific commit:
-```shell
+```sh
# Create lightweight tags.
git tag v0.1.1-rc0
git tag 1.12.1 HEAD
@@ -537,7 +537,7 @@ git tag 1.12.1 HEAD
Type-generic tag operations:
-```shell
+```sh
# Push specific tags.
git push origin v1.5
@@ -591,7 +591,7 @@ Those commands need to be wrapped into a one-line function definition:
1. Install the extension:
-```shell
+```sh
apt install git-lfs
brew install git-lfs
dnf install git-lfs
@@ -600,7 +600,7 @@ Those commands need to be wrapped into a one-line function definition:
1. If the package manager did not enable it system-wide, enable the extension for your user account:
-```shell
+```sh
git lfs install
```
@@ -608,14 +608,14 @@ Those commands need to be wrapped into a one-line function definition:
1. Configure file tracking from inside the repository:
-```shell
+```sh
git lfs track "*.exe"
git lfs track "enormous_file.*"
```
1. Add the `.gitattributes` file to the traced files:
-```shell
+```sh
git add .gitattributes
git commit -m "lfs configured"
```
@@ -624,7 +624,7 @@ Those commands need to be wrapped into a one-line function definition:
See [Git Submodules: Adding, Using, Removing, Updating] for more information.
-```shell
+```sh
# Add a submodule to an existing repository.
git submodule add https://github.com/ohmyzsh/ohmyzsh lib/ohmyzsh
@@ -640,7 +640,7 @@ To delete a submodule the procedure is more complicated:
1. De-init the submodule:
-```shell
+```sh
git submodule deinit lib/ohmyzsh
```
@@ -648,7 +648,7 @@ To delete a submodule the procedure is more complicated:
1. Remove the submodule from the repository's index:
-```shell
+```sh
git rm -rf lib/ohmyzsh
```
@@ -664,19 +664,19 @@ See [remove files from git commit].
1. **Unstage** the file using `git reset`; specify HEAD as the source:
-```shell
+```sh
git reset HEAD secret-file
```
1. **Remove** the file from the repository's index:
-```shell
+```sh
git rm --cached secret-file
```
1. Check the file is no longer in the index:
-```shell
+```sh
$ git ls-files | grep secret-file
$
```
@@ -684,13 +684,13 @@ See [remove files from git commit].
1. Add the file to `.gitignore` or remove it from the working directory.
1. Amend the most recent commit from your repository:
-```shell
+```sh
git commit --amend
```
## Remotes
-```shell
+```sh
# Add a remote.
git remote add gitlab git@gitlab.com:user/my-awesome-repo.git
@@ -705,7 +705,7 @@ git remote set-url origin git@github.com:user/new-repo-name.git
To always push to `repo1`, `repo2`, and `repo3`, but always pull only from `repo1`, set up the remote 'origin' as follows:
-```shell
+```sh
git remote add origin https://exampleuser@example.com/path/to/repo1
git remote set-url --push --add origin https://exampleuser@example.com/path/to/repo1
git remote set-url --push --add origin https://exampleuser@example.com/path/to/repo2
@@ -748,20 +748,20 @@ See <https://git-scm.com/docs/git-config#git-config-branchltnamegtremote>.
When everything else fails, enable tracing:
-```shell
+```sh
export GIT_TRACE=1
```
### GPG cannot sign a commit
-> ```shell
+> ```sh
> error: gpg failed to sign the data
> fatal: failed to write commit object
> ```
If gnupg2 and gpg-agent 2.x are used, be sure to set the environment variable GPG_TTY, specially zsh users with Powerlevel10k with Instant Prompt enabled.
-```shell
+```sh
export GPG_TTY=$(tty)
```
@@ -769,7 +769,7 @@ export GPG_TTY=$(tty)
Disable certificate verification:
-```shell
+```sh
export GIT_SSL_NO_VERIFY=true
git -c http.sslVerify=false …
```

View File

@@ -2,7 +2,7 @@
## TL;DR
-```shell
+```sh
gopass init
# multistore init
@@ -14,7 +14,7 @@ gopass init --store work --path ~/.password-store.work
### Browserpass
-```shell
+```sh
brew tap amar1729/formulae
brew install browserpass
for b in chromium chrome vivaldi brave firefox; do
View File
@@ -2,7 +2,7 @@
## TL;DR
-```shell
+```sh
# List existing keys.
gpg --list-keys
gpg --list-keys --keyid-format short
@@ -65,7 +65,7 @@ brew install gnupg
## Encryption
-```shell
+```sh
# Single file.
gpg --output $DB.key.gpg --encrypt --recipient $RECIPIENT $DB.key
@@ -76,7 +76,7 @@ find . -type f -name secret.txt \
## Decryption
-```shell
+```sh
# Single file.
gpg --output $DB.key --decrypt $DB.key.gpg
@@ -90,7 +90,7 @@ The second command will create the decrypted version of all files in the same di
As the original user, export all public keys to a base64-encoded text file and create an encrypted version of that file:
-```shell
+```sh
gpg --armor --export > mypubkeys.asc
gpg --armor --export email > mypubkeys-email.asc
gpg --armor --symmetric --output mysecretatedpubkeys.sec.asc mypubkeys.asc
@@ -98,14 +98,14 @@ gpg --armor --symmetric --output mysecretatedpubkeys.sec.asc mypubkeys.asc
Export all encrypted private keys (which will also include corresponding public keys) to a text file and create an encrypted version of that file:
-```shell
+```sh
gpg --armor --export-secret-keys > myprivatekeys.asc
gpg --armor --symmetric --output mysecretatedprivatekeys.sec.asc myprivatekeys.asc
```
Optionally, export gpg's trustdb to a text file:
-```shell
+```sh
gpg --export-ownertrust > otrust.txt
```
@@ -113,7 +113,7 @@ gpg --export-ownertrust > otrust.txt
As the new user, execute `gpg --import` commands against the two `.asc` files, or the decrypted content of those files, and then check for the new keys with `gpg -k` and `gpg -K`, e.g.:
-```shell
+```sh
gpg --output myprivatekeys.asc --decrypt mysecretatedprivatekeys.sec.asc
gpg --import myprivatekeys.asc
gpg --output mypubkeys.asc --decrypt mysecretatedpubkeys.sec.asc
@@ -124,13 +124,13 @@ gpg --list-keys
Optionally import the trustdb file as well:
-```shell
+```sh
gpg --import-ownertrust otrust.txt
```
## Key trust
-```shell
+```sh
$ gpg --edit-key fingerprint
gpg> trust
gpg> quit
@@ -140,7 +140,7 @@ gpg> quit
> The non-interactive (--batch) option requires a settings file.
-```shell
+```sh
# basic key with default values
gpg --batch --generate-key <<EOF
%echo Generating a default key
@@ -159,7 +159,7 @@ EOF
## Change a key's password
-```shell
+```sh
$ gpg --edit-key fingerprint
gpg> passwd
gpg> quit
@@ -193,7 +193,7 @@ You can create multiple subkeys as you would do for SSH keypairs.
You should already have a GPG key. If you don't, read one of the many fine tutorials available on this topic.
You will create the subkey by editing your existing key **in expert mode** to get access to the appropriate options:
-```shell
+```sh
$ gpg2 --expert --edit-key fingerprint
gpg> addkey
Please select what kind of key you want:
@@ -262,7 +262,7 @@ Save changes? (y/N) y
When using SSH, `ssh-agent` is used to manage SSH keys. When using a GPG key, `gpg-agent` is used to manage GPG keys.
To get `gpg-agent` to handle requests from SSH, you need to enable its SSH support:
-```shell
+```sh
echo "enable-ssh-support" >> ~/.gnupg/gpg-agent.conf
```
@@ -270,7 +270,7 @@ You can avoid using `ssh-add` to load the keys pre-specifying which GPG keys to
The entries in this file are keygrips—internal identifiers that `gpg-agent` uses to refer to the keys. A keygrip refers to both the public and private key.
To find the keygrip use `gpg -K --with-keygrip`, then add that line to the `~/.gnupg/sshcontrol` file:
-```shell
+```sh
$ gpg2 -K --with-keygrip
/home/bexelbie/.gnupg/pubring.kbx
------------------------------
@@ -288,7 +288,7 @@ $ echo 7710BA0643CC022B92544181FF2EAC2A290CDC0E >> ~/.gnupg/sshcontrol
Now tell SSH how to access `gpg-agent` by setting the value of the `SSH_AUTH_SOCK` environment variable.
-```shell
+```sh
export SSH_AUTH_SOCK=$(gpgconf --list-dirs agent-ssh-socket)
gpgconf --launch gpg-agent
```
@@ -313,7 +313,7 @@ Run `ssh-add -L` to list your public keys and copy them over manually to the rem
**Solution:** if `gnupg2` and `gpg-agent` 2.x are used, be sure to set the environment variable `GPG_TTY`:
-```shell
+```sh
export GPG_TTY=$(tty)
```
View File
@@ -2,13 +2,13 @@
If you're using `bash` or `zsh` you can employ anonymous pipes:
-```shell
+```sh
ffmpeg -i 01-Daemon.mp3 2> >(grep -i Duration)
```
If you want the filtered redirected output on `stderr` again, add the `>&2` redirection to grep:
-```shell
+```sh
command 2> >(grep something >&2)
```
@@ -19,7 +19,7 @@ Bash calls this _process substitution_:
You can exclude `stdout` and grep `stderr` redirecting it to `null`:
-```shell
+```sh
command 1>/dev/null 2> >(grep -oP "(.*)(?=pattern)")
```
View File
@@ -2,7 +2,7 @@
## TL;DR
-```shell
+```sh
# base search
grep 'pattern' path/to/search
@@ -39,7 +39,7 @@ For simple searches, you might want to use [pdfgrep].
Should you need more advanced grep capabilities not incorporated by pdfgrep, you might want to convert the file to text and search there.
You can do this using [pdftotext](pdfgrep.md) as shown in this example ([source][stackoverflow answer about how to search contents of multiple pdf files]):
-```shell
+```sh
find /path -name '*.pdf' -exec sh -c 'pdftotext "{}" - | grep --with-filename --label="{}" --color "your pattern"' ';'
```
@@ -48,7 +48,7 @@ find /path -name '*.pdf' -exec sh -c 'pdftotext "{}" - | grep --with-filename --
- Standard editions of `grep` run in a single thread; use another executor like
`parallel` or `xargs` to parallelize grepping multiple files:
-```shell
+```sh
find . -type f | parallel -j 100% grep 'pattern'
find . -type f -print0 | xargs -0 -n 1 -P $(nproc) grep 'pattern'
```
View File
@@ -2,7 +2,7 @@
## TL;DR
-```shell
+```sh
# delete a bucket and all its contents
gsutil rm -r gs://${BUCKET_NAME}
View File
@@ -2,7 +2,7 @@
## TL;DR
-```shell
+```sh
# install/uninstall on os x
/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/uninstall)"
@@ -21,7 +21,7 @@ brew bundle dump
## Configuration
-```shell
+```sh
# require SHA check for casks
# change cask installation dir to the Application folder in the user HOME
export HOMEBREW_CASK_OPTS="--require-sha --appdir $HOME/Applications"
View File
@@ -6,7 +6,7 @@ Components:
## TL;DR
-```shell
+```sh
# scale an image to 50% its original size
convert IMG_20200117_135049.jpg -adaptive-resize 50% IMG_20200117_135049_resized.jpg
View File
@@ -1,6 +1,6 @@
# Iperf
-```shell
+```sh
# on the server
iperf3 -s
iperf3 -s -p 7575
View File
@@ -2,7 +2,7 @@
## TL;DR
-```shell
+```sh
# Scan for available wireless networks.
iw dev wlp scan
View File
@@ -4,7 +4,7 @@
## TL;DR
-```shell
+```sh
# Scan for networks.
iwlist wlan0 scan
```
View File
@@ -2,7 +2,7 @@
## TL;DR
-```shell
+```sh
# prompt to delete all duplicate files
jdupes -Zdr directory
View File
@@ -1,6 +1,6 @@
# Jira # Jira
```shell ```sh
# create a ticket # create a ticket
curl https://${COMPANY}.atlassian.net/rest/api/2/issue \ curl https://${COMPANY}.atlassian.net/rest/api/2/issue \
-D - \ -D - \

View File

@@ -2,7 +2,7 @@
## TL;DR ## TL;DR
```shell ```sh
# add a field # add a field
jq --arg REGION ${AWS_REGION} '.spec.template.spec.containers[]?.env? += [{name: "AWS_REGION", value: $REGION}]' /tmp/service.kube.json jq --arg REGION ${AWS_REGION} '.spec.template.spec.containers[]?.env? += [{name: "AWS_REGION", value: $REGION}]' /tmp/service.kube.json

View File

@@ -2,7 +2,7 @@
## TL;DR ## TL;DR
```shell ```sh
# filter elements # filter elements
# only works on arrays, not on maps # only works on arrays, not on maps
kubectl get serviceaccounts \ kubectl get serviceaccounts \

View File

@@ -2,7 +2,7 @@
## TL;DR ## TL;DR
```shell ```sh
# Configurations picked up from a directory # Configurations picked up from a directory
$ kapp deploy -a my-app -f ./examples/simple-app-example/config-1.yml $ kapp deploy -a my-app -f ./examples/simple-app-example/config-1.yml

View File

@@ -2,7 +2,7 @@
## TL;DR ## TL;DR
```shell ```sh
# Get from '~/.config/kinfocenterrc' the current value for the 'MenuBar' key in # Get from '~/.config/kinfocenterrc' the current value for the 'MenuBar' key in
# the 'MainWindow' group. # the 'MainWindow' group.
kreadconfig5 --file kinfocenterrc --group MainWindow --key MenuBar kreadconfig5 --file kinfocenterrc --group MainWindow --key MenuBar

View File

@@ -51,7 +51,7 @@ KEDA offers a wide range of triggers (A.K.A. _scalers_) that can both detect if
### Helm chart
-```shell
+```sh
# Installation.
helm repo add kedacore https://kedacore.github.io/charts \
&& helm repo update kedacore \
@@ -67,7 +67,7 @@ helm uninstall keda --namespace keda \
Use the YAML declaration (which includes the CRDs and all the other resources) available on the GitHub releases page:
-```shell
+```sh
# Installation.
kubectl apply -f https://github.com/kedacore/keda/releases/download/v2.0.0/keda-2.0.0.yaml
@@ -77,7 +77,7 @@ kubectl delete -f https://github.com/kedacore/keda/releases/download/v2.0.0/keda
One can also use the tools in the repository:
-```shell
+```sh
git clone https://github.com/kedacore/keda
cd keda
VERSION=2.0.0 make deploy # installation
@@ -270,7 +270,7 @@ For details and updated information see KEDA's [External Scalers] page.
Use the logs for the keda operator or apiserver:
-```shell
+```sh
kubectl logs --namespace keda keda-operator-8488964969-sqbxq
kubectl logs --namespace keda keda-operator-metrics-apiserver-5b488bc7f6-8vbpl
```
@@ -286,7 +286,7 @@ There is at the moment of writing no way to control which of the replicas get te
Just run the following:
-```shell
+```sh
kubectl delete -f https://raw.githubusercontent.com/kedacore/keda/main/config/crd/bases/keda.sh_scaledobjects.yaml
kubectl delete -f https://raw.githubusercontent.com/kedacore/keda/main/config/crd/bases/keda.sh_scaledjobs.yaml
kubectl delete -f https://raw.githubusercontent.com/kedacore/keda/main/config/crd/bases/keda.sh_triggerauthentications.yaml
View File
@@ -2,7 +2,7 @@
## TL;DR
-```shell
+```sh
# Start the services.
run_keybase
run_keybase -fg
@@ -62,19 +62,19 @@ Use the import form in [Keybase launches encrypted git], or:
1. Create the remote repository:
-```shell
+```sh
keybase git create dotfiles
```
1. Copy the existing repository to a temporary directory:
-```shell
+```sh
git clone --mirror https://github.com/user/dotfiles _tmp.git
```
1. Push the contents of the old repository to the new one:
-```shell
+```sh
git -C _tmp.git push --mirror keybase://private/user/dotfiles
```
@@ -89,7 +89,7 @@ Use `keybase oneshot` to establish a temporary device. The resulting process won
`keybase oneshot` needs a username and a paperkey to work, either passed in via standard input, command-line flags, or environment variables:
-```shell
+```sh
# Provide login information on the standard input.
keybase oneshot
View File
@@ -6,7 +6,7 @@
1. ensure the `hid_apple` module is loaded
-```shell
+```sh
sudo modprobe hid_apple
# load at boot
@@ -15,7 +15,7 @@
1. configure the keyboard's _fn mode_:
-```shell
+```sh
echo 0 | sudo tee /sys/module/hid_apple/parameters/fnmode
# load at boot
View File
@@ -4,7 +4,7 @@ Validates one or more Kubernetes configuration files.
## TL;DR
-```shell
+```sh
$ kubeval my-invalid-rc.yaml || echo "Validation failed" >&2
WARN - my-invalid-rc.yaml contains an invalid ReplicationController - spec.replicas: Invalid type. Expected: integer, given: string
Validation failed
View File
@@ -2,7 +2,7 @@
## TL;DR
-```shell
+```sh
# Search words *forwards* in the current document.
:/keyword <ENTER>
View File
@@ -2,7 +2,7 @@
## TL;DR
-```shell
+```sh
# install lxc
apt-get install lxc
snap install lxd
@@ -44,7 +44,7 @@ man lxc.container.conf(5)
## Create new containers as unprivileged user
-```shell
+```sh
# allow user vagrant to create up to 10 veth devices connected to the lxcbr0 bridge
echo "vagrant veth lxcbr0 10" | sudo tee -a /etc/lxc/lxc-usernet
```
View File
@@ -14,7 +14,7 @@
## TL;DR
-```shell
+```sh
# Install a .pkg file from CLI.
# 'target' needs to be a device, not a path.
installer -pkg /path/to/non-root-package.pkg -target CurrentUserHomeDirectory
@@ -76,7 +76,7 @@ scutil --get ComputerName
## Xcode CLI tools
-```shell
+```sh
xcode-select --install
```
@@ -84,7 +84,7 @@ The tools will be installed into `/Library/Developer/CommandLineTools` by defaul
### Headless installation
-```shell
+```sh
# Force the `softwareupdate` utility to list the Command Line Tools.
touch /tmp/.com.apple.dt.CommandLineTools.installondemand.in-progress
@@ -102,7 +102,7 @@ CLI_TOOLS_LABEL="$(/usr/sbin/softwareupdate -l \
### Removal
-```shell
+```sh
sudo rm -rf $(xcode-select -p)
sudo rm -rf /Library/Developer/CommandLineTools
```
@@ -111,7 +111,7 @@ sudo rm -rf /Library/Developer/CommandLineTools
See [How to update Xcode from command line] for details.
-```shell
+```sh
# Remove and reinstall.
sudo rm -rf $(xcode-select -p)
xcode-select --install
@@ -121,7 +121,7 @@ xcode-select --install
> **Note:** once you set something, you'll probably need to restart the dock with `killall Dock`
-```shell
+```sh
# Show hidden apps indicators in the dock.
defaults write com.apple.dock showhidden -bool TRUE
@@ -148,7 +148,7 @@ Note:
* edits the input image
* `-Z` retains ratio
-```shell
+```sh
sips -Z 1000 Downloads/IMG_20190527_013903.jpg
```
@@ -176,7 +176,7 @@ Combination | Behaviour
## Update the OS from CLI
-```shell
+```sh
# List all available updates.
softwareupdate --list --all
@@ -201,7 +201,7 @@ Save a password with the following settings:
> The password's value needs to be given **last**.
-```shell
+```sh
# Add the password to the default keychain.
security add-generic-password -a johnny -s github -w 'b.good'
# Also give it some optional data.
View File
@@ -6,7 +6,7 @@ Default ports install location is `/opt/local`.
## TL;DR
-```shell
+```sh
# get help on a command
port help install
port help select
View File
@@ -12,14 +12,14 @@
1. select the boot image; the Magisk app will patch the image to `[Internal Storage]/Download/magisk_patched_<random strings>.img`
1. copy the patched image to your computer using the file transfer mode or `adb`:
-```shell
+```sh
adb pull /sdcard/Download/magisk_patched_<random strings>.img
```
1. reboot the device to the bootloader (fastboot)
1. flash the modified boot image:
-```shell
+```sh
sudo fastboot flash boot path/to/modified/boot.img
```
View File
@@ -12,7 +12,7 @@ One can use the [branch comparison] tool to check in what branch a package is av
## Printing
-```shell
+```sh
pamac install manjaro-printer
sudo gpasswd -a ${USER} sys
sudo systemctl enable --now cups.service
View File
@@ -4,7 +4,7 @@ Tool to check markdown files and flag style issues.
## TL;DR
-```shell
+```sh
# Install.
gem install mdl
View File
@@ -4,7 +4,7 @@
Every set of changes to the underlying system is executed on a new inactive snapshot, which will be the one the system will boot into on the next reboot.
-```shell
+```sh
# Upgrade the system.
sudo transactional-update dup
pkcon update
View File
@@ -4,7 +4,7 @@ Creates a unique temporary file or directory and returns the absolute path to it
## TL;DR
-```shell
+```sh
# create an empty temporary file
mktemp
View File
@@ -2,7 +2,7 @@
## TL;DR
-```shell
+```sh
sudo mount -t cifs -o user=my-user //nas.local/shared_folder local_folder
```
View File
@@ -2,7 +2,7 @@
## TL;DR
-```shell
+```sh
# by label
mount -L seagate_2tb_usb /media/usb
View File
@@ -2,7 +2,7 @@
## TL;DR
-```shell
+```sh
# connect with user "root" on the local default socket
# don't ask password and do not select db
mysql
View File
@@ -12,7 +12,7 @@
## TL;DR
-```shell
+```sh
# Check port 22 on hosts.
nc -Nnvz 192.168.0.81 22
parallel -j 0 "nc -Nnvz -w 2 192.168.0.{} 22 2>&1" ::: {2..254} | grep -v "timed out"
View File
@@ -4,7 +4,7 @@
## TL;DR
-```shell
+```sh
# Install.
sudo dnf install dnf-utils
sudo yum install yum-utils
View File
@@ -2,7 +2,7 @@
## TL;DR
-```shell
+```sh
# scan all 65535 ports on a host
nmap -p- 192.168.1.1
View File
@@ -20,7 +20,7 @@ Open port 22 on the firewall:
- using [firewall-cmd][firewalld] on the command line:
-```shell
+```sh
sudo firewall-cmd --add-port=22/tcp --permanent
```
@@ -29,7 +29,7 @@ Start the SSH daemon:
- using Yast: open _Yast2_ > _System services_ and enable _SSHD_
- using [systemctl][systemd] on the command line:
-```shell
+```sh
sudo systemctl enable --now sshd.service
```
@@ -39,7 +39,7 @@ Install the OS from another computer capable of reading and writing SD cards.
Given `/dev/sdb` being a SD card, use the following:
-```shell
+```sh
curl -C - -L -o opensuse.raw.xz http://download.opensuse.org/ports/aarch64/tumbleweed/appliances/openSUSE-Tumbleweed-ARM-JeOS-raspberrypi.aarch64.raw.xz
xzcat opensuse.raw.xz \
| sudo dd bs=4M of=/dev/sdb iflag=fullblock oflag=direct status=progress \
@@ -52,7 +52,7 @@ Connect using SSH and login using `root:linux`.
### Firmware update from a running system
-```shell
+```sh
# Check for an updated firmware.
sudo rpi-eeprom-update
View File
@@ -2,7 +2,7 @@
## TL;DR
-```shell
+```sh
# update the list of available packages
opkg update
View File
@@ -9,7 +9,7 @@ Useful options:
## TL;DR
-```shell
+```sh
# search an installed package
pacman --query --search ddc
View File
@@ -2,7 +2,7 @@
## TL;DR
-```shell
+```sh
# check if updates are available (in aur too)
pamac checkupdates --aur
View File
@@ -2,7 +2,7 @@
## TL;DR
-```shell
+```sh
# group output (--group)
# fill up cpu threads (--jobs 100%)
# use newline as delimiter for the arguments in input
View File
@@ -2,7 +2,7 @@
## TL;DR
-```shell
+```sh
# combine multiple files
pdftk file1.pdf file2.pdf file3.pdf cat output newfile.pdf
View File
@@ -2,7 +2,7 @@
## TL;DR
-```shell
+```sh
# one-step automated install
curl -sSL https://install.pi-hole.net | bash
```
View File
@@ -4,7 +4,7 @@ Allows an _authorized_ user to execute a command as another user. If a username
## TL;DR
-```shell
+```sh
pkexec systemctl hibernate
```
View File
@@ -8,7 +8,7 @@ Options are processed first, and affect the operation of all commands. Multiple
## TL;DR
-```shell
+```sh
# list the package id of all installed packages
pkgutil --pkgs
pkgutil --packages --volume /
View File
@@ -2,7 +2,7 @@
## TL;DR
-```shell
+```sh
# System update.
sudo emerge --sync
sudo emerge --depclean --ask
View File
@@ -2,7 +2,7 @@
## TL;DR
-```shell
+```sh
# connect to a server
psql --host "${HOSTNAME}" --port "${PORT:-5432}" "${DATABASENAME:-root}" "${USERNAME:-root}"
```
View File
@@ -2,7 +2,7 @@
## TL;DR
-```shell
+```sh
# generate a very basic configuration
pre-commit sample-config > .pre-commit-config.yaml
View File
@@ -59,14 +59,14 @@ if __name__ == "__main__":
serve(app, host="0.0.0.0", port=8080)
```
-```shell
+```sh
pip install flask waitress
python hello.py
```
## Maintenance
-```shell
+```sh
# generate a list of all outdated packages
pip list --outdated
View File
@@ -8,7 +8,7 @@ Enable containerization features in the kernel to be able to run containers as i
 Add the following properties at the end of the line in `/boot/cmdline.txt`:
-```shell
+```sh
 cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory
 ```
@@ -16,7 +16,7 @@ cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory
 Switch Debian firewall to legacy config:
-```shell
+```sh
 update-alternatives --set iptables /usr/sbin/iptables-legacy
 update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
 ```

View File

@@ -2,7 +2,7 @@
 ## TL;DR
-```shell
+```sh
 # debug the server
 redis-cli -h "${HOST}" -p "${PORT}" --user "${USERNAME}" --askpass MONITOR

View File

@@ -2,7 +2,7 @@
 ## TL;DR
-```shell
+```sh
 exec su -l $USER
 ```

View File

@@ -2,7 +2,7 @@
 ## TL;DR
-```shell
+```sh
 lshw -class disk
 smartctl -i /dev/sda
 hdparm -i /dev/sda

View File

@@ -2,7 +2,7 @@
 ## TL;DR
-```shell
+```sh
 # list all installed packages
 rpm --query --all

View File

@@ -2,7 +2,7 @@
 ## TL;DR
-```shell
+```sh
 # open a new window for every monitor you have connected and show a preview of the theme
 sddm-greeter --test-mode --theme /usr/share/sddm/themes/breeze
 ```

View File

@@ -2,7 +2,7 @@
 ## TL;DR
-```shell
+```sh
 # Delete lines matching "OAM" from a file.
 sed -e '/OAM/d' -i .bash_history

View File

@@ -2,7 +2,7 @@
 ## TL;DR
-```shell
+```sh
 mail -s "Subject" recipient@mail.server
 echo "" | mail -a attachment.file -s "Subject" recipient@mail.server

View File

@@ -2,7 +2,7 @@
 ## TL:DR
-```shell
+```sh
 sudo cpupower frequency-set --governor ondemand
 echo 1 | sudo tee /sys/devices/system/cpu/cpufreq/ondemand/ignore_nice_load

View File

@@ -4,7 +4,7 @@ Gives warnings and suggestions about `bash`/`sh` shell scripts.
 ## TL;DR
-```shell
+```sh
 shellcheck /path/to/script.sh
 ```

View File

@@ -2,6 +2,6 @@
 ## TL;DR
-```shell
+```sh
 shred --force --remove --verbose --zero file other-file
 ```

View File

@@ -2,7 +2,7 @@
 ## TL;DR
-```shell
+```sh
 # randomize the order of lines in a file and output the result
 shuf filename

Some files were not shown because too many files have changed in this diff.