chore(kb): import notes from an old repository

# Kubernetes

Open source container orchestration engine for containerized applications.<br/>
Hosted by the [Cloud Native Computing Foundation][cncf].
1. [Concepts](#concepts)
   1. [Control plane](#control-plane)
      1. [API server](#api-server)
      1. [`kube-scheduler`](#kube-scheduler)
      1. [`kube-controller-manager`](#kube-controller-manager)
      1. [`cloud-controller-manager`](#cloud-controller-manager)
   1. [Worker nodes](#worker-nodes)
      1. [`kubelet`](#kubelet)
      1. [`kube-proxy`](#kube-proxy)
      1. [Container runtime](#container-runtime)
      1. [Addons](#addons)
   1. [Workloads](#workloads)
      1. [Pods](#pods)
1. [Best practices](#best-practices)
1. [Volumes](#volumes)
   1. [hostPaths](#hostpaths)
   1. [emptyDirs](#emptydirs)
   1. [configMaps](#configmaps)
   1. [secrets](#secrets)
   1. [nfs](#nfs)
   1. [downwardAPI](#downwardapi)
   1. [PersistentVolumes](#persistentvolumes)
      1. [Resize PersistentVolumes](#resize-persistentvolumes)
1. [Autoscaling](#autoscaling)
   1. [Pod scaling](#pod-scaling)
   1. [Node scaling](#node-scaling)
1. [Quality of service](#quality-of-service)
1. [Containers with high privileges](#containers-with-high-privileges)
   1. [Capabilities](#capabilities)
1. [Sysctl settings](#sysctl-settings)
1. [Backup and restore](#backup-and-restore)
1. [Managed Kubernetes Services](#managed-kubernetes-services)
   1. [Best practices in cloud environments](#best-practices-in-cloud-environments)
1. [Edge computing](#edge-computing)
1. [Troubleshooting](#troubleshooting)
   1. [Dedicate Nodes to specific workloads](#dedicate-nodes-to-specific-workloads)
1. [Further readings](#further-readings)
   1. [Sources](#sources)
## Concepts

When using Kubernetes, one is using a cluster.

In production environments, the control plane usually runs across multiple computers, and clusters usually run multiple
nodes, providing fault-tolerance and high availability.


### Control plane

Makes global decisions about the cluster (like scheduling).<br/>
Detects and responds to cluster events (like starting up a new pod when a deployment has fewer replicas than it
requests).

Control plane components run on one or more cluster nodes.<br/>
For ease of use, setup scripts typically start all control plane components on the **same** host and avoid running
other workloads on it.
#### API server

Exposes the Kubernetes API. It is the front end for, and the core of, the Kubernetes control plane.<br/>
`kube-apiserver` is the main implementation of the Kubernetes API server, and is designed to scale horizontally (by
deploying more instances).

The Kubernetes API can be extended:

- using _custom resources_ to declaratively define how the API server should provide your chosen resource API, or
- extending the Kubernetes API by implementing an aggregation layer.
#### `kube-scheduler`

Detects newly created pods with no assigned node, and selects one for them to run on.

Scheduling decisions take into account:

- inter-workload interference;
- deadlines.
#### `kube-controller-manager`

Runs _controller_ processes.<br />
Each controller is a separate process logically speaking; they are all compiled into a single binary and run in a single
process.

Examples of these controllers are:

- the EndpointSlice controller, which populates _EndpointSlice_ objects providing a link between services and pods;
- the ServiceAccount controller, which creates default ServiceAccounts for new namespaces.
#### `cloud-controller-manager`

Embeds cloud-specific control logic, linking clusters to one's cloud provider's API and separating the components that
interact with that cloud platform from the components that only interact with clusters.

The following controllers can have cloud provider dependencies:

- the route controller, which sets up routes in the underlying cloud infrastructure;
- the service controller, which creates, updates and deletes cloud provider load balancers.
### Worker nodes

Every node runs components that provide a runtime environment for the cluster and sync with the control plane
to keep workloads running as requested.
#### `kubelet`

The `kubelet` runs as an agent on every node in the cluster, making sure that containers are run in a Pod.

It takes a set of _PodSpecs_ and ensures that the containers described in them are running and healthy.<br/>
It only manages containers created by Kubernetes.
#### `kube-proxy`

Network proxy running on each node and implementing part of the Kubernetes Service concept.

It maintains network rules on nodes, allowing network communication to Pods from network sessions inside
or outside of one's cluster.

It uses the operating system's packet filtering layer, if there is one and it's available; if not, it forwards the
traffic itself.
#### Container runtime

The software responsible for running containers.

Kubernetes supports container runtimes like `containerd`, `CRI-O`, and any other implementation of the Kubernetes CRI
(Container Runtime Interface).
#### Addons

Addons use Kubernetes resources (_DaemonSet_, _Deployment_, etc.) to implement cluster features.<br/>
As such, namespaced resources for addons belong within the `kube-system` namespace.

See [addons] for an extended list of the available addons.
### Workloads

Workloads consist of groups of containers ([_pods_][pods]) and a specification for how to run them (_manifest_).<br/>
Configuration files are written in YAML (preferred) or JSON format and are composed of:

- resource specifications, with attributes specific to the kind of resource they are describing, and
- status, automatically generated and edited by the control plane.
#### Pods

The smallest deployable unit of computing that one can create and manage in Kubernetes.<br/>
Pods contain one or more relatively tightly coupled application containers; they are always co-located (executed on the
same node) and co-scheduled.

Gotchas:

- If a Container specifies a memory or CPU `limit` but does **not** specify a memory or CPU `request`, Kubernetes
  automatically assigns it a resource `request` spec equal to the given `limit`.
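A minimal sketch of the gotcha above (names and values are illustrative): this container declares only `limits`, so
Kubernetes will also assign it `requests` of `500m` CPU and `256Mi` memory.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: limit-only-pod  # hypothetical name
spec:
  containers:
    - name: app
      image: nginx
      resources:
        limits:
          cpu: 500m
          memory: 256Mi
        # No 'requests' given: Kubernetes assigns requests equal to the limits above.
```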
## Best practices

Also see [configuration best practices] and the [production best practices checklist].

- Protect the cluster's ingress points.<br/>
  Firewalls, web application firewalls, application gateways.
## Volumes

Refer [volumes].

Sources to mount directories from.

They go by the `volumes` key in Pods' `spec`.<br/>
E.g., in a Deployment they are declared in its `spec.template.spec.volumes`:

```yaml
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      volumes:
        - <volume source 1>
        - <volume source N>
```

Mount volumes in containers by using the `volumeMounts` key:

```yaml
apiVersion: v1
kind: Pod
spec:
  containers:
    - name: some-container
      volumeMounts:
        - name: my-volume-source
          mountPath: /path/to/mount
          readOnly: false
          subPath: dir/in/volume
```
### hostPaths

Mount files or directories from the host node's filesystem into Pods.

**Not** something most Pods will need, but a powerful escape hatch for some applications.

Use cases:

- Containers needing access to node-level system components.<br/>
  E.g., containers transferring system logs to a central location and needing access to those logs using a read-only
  mount of `/var/log`.
- Making configuration files stored on the host system available read-only to _static_ Pods.<br/>
  This is because static Pods **cannot** access ConfigMaps.

If mounted files or directories on the host are only accessible to `root`:

- either the process needs to run as `root` in a privileged container,
- or the files' permissions on the host need to be changed to allow the process to read from (or write to) the volume.

```yaml
apiVersion: v1
kind: Pod
spec:
  volumes:
    - name: example-volume
      # Mount '/data/foo' only if that directory already exists
      hostPath:
        path: /data/foo  # location on host
        type: Directory  # optional
```
### emptyDirs

Scratch space on disk for **temporary** Pod data.

**Not** shared between Pods.<br/>
All data is **destroyed** once the Pod is removed, but stays intact across container restarts.

Use cases:

- Provide directories where 3rd-party software can create pid, lock, or other special files when it's inconvenient or
  impossible to disable them.<br/>
  E.g., Java Hazelcast creates lockfiles in the user's home directory and there's no way to disable this behaviour.
- Store intermediate calculations which can be lost.<br/>
  E.g., external sorting, buffering of big responses to save memory.
- Improve startup time after application crashes if the application in question pre-computes something before or during
  startup.<br/>
  E.g., compressed assets in the application's image, decompressed into a temporary directory at startup.

```yaml
apiVersion: v1
kind: Pod
spec:
  volumes:
    - name: my-emptydir
      emptyDir:
        # Omit the 'medium' field to use disk storage.
        # The 'Memory' medium will create a tmpfs to store data.
        medium: Memory
        sizeLimit: 1Gi
```
### configMaps

Inject configuration data into Pods.

When referencing a ConfigMap:

- Provide the name of the ConfigMap in the volume.
- Optionally customize the path to use for a specific entry in the ConfigMap.

```yaml
apiVersion: v1
kind: Pod
spec:
  containers:
    - name: test
      volumeMounts:
        - name: config-vol
          mountPath: /etc/config
  volumes:
    - name: config-vol
      configMap:
        name: log-config
        items:
          - key: log_level
            path: log_level
    - name: my-configmap-volume
      configMap:
        name: my-configmap
        defaultMode: 0644  # POSIX access mode; set it to the most restrictive value possible
        optional: true     # allow Pods to start with this ConfigMap missing, resulting in an empty directory
```

ConfigMaps **must** be created before they can be mounted.

One ConfigMap can be mounted into any number of Pods.

ConfigMaps are always mounted `readOnly`.

Containers using ConfigMaps as `subPath` volume mounts will **not** receive ConfigMap updates.

Text data is exposed as files using the UTF-8 character encoding.<br/>
Use `binaryData` for any other character encoding.
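For reference, a sketch of a ConfigMap like the `log-config` one mounted above (keys and values are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: log-config
data:
  # Exposed as the file 'log_level' by the volume's 'items' selection above.
  log_level: INFO
```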
### secrets

Used to pass sensitive information to Pods.<br/>
E.g., passwords.

They behave like ConfigMaps but are backed by `tmpfs`, so they are never written to non-volatile storage.

Secrets **must** be created before they can be mounted.

Secrets are always mounted `readOnly`.

Containers using Secrets as `subPath` volume mounts will **not** receive Secret updates.

```yaml
apiVersion: v1
kind: Pod
spec:
  volumes:
    - name: my-secret-volume
      secret:
        secretName: my-secret
        defaultMode: 0644
        optional: false
```
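A sketch of a Secret like the `my-secret` one referenced above; `stringData` accepts plain text and is stored
base64-encoded (values are illustrative):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
type: Opaque
stringData:
  # Exposed as the file 'password' when the Secret is mounted as a volume.
  password: s3cr3t
```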
### nfs

Mount **existing** NFS shares into Pods.

The contents of NFS volumes are preserved after Pods are removed, and the volume is merely unmounted.<br/>
This means that NFS volumes can be pre-populated with data, and that data can be shared between Pods.

NFS shares can be mounted by multiple writers simultaneously.

One **cannot** specify NFS mount options in a Pod spec.<br/>
Either set mount options server-side or use `/etc/nfsmount.conf`.<br/>
Alternatively, mount NFS volumes via PersistentVolumes, as those do allow setting mount options.

```yaml
apiVersion: v1
kind: Pod
spec:
  containers:
    - image: registry.k8s.io/test-webserver
      name: test-container
      volumeMounts:
        - mountPath: /my-nfs-data
          name: test-volume
  volumes:
    - name: test-volume
      nfs:
        server: my-nfs-server.example.com
        path: /my-nfs-volume
        readOnly: true
```
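As noted above, mount options can be set when going through a PersistentVolume instead; a sketch (server, path, and
options are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  # Mount options cannot be set in a Pod spec, but can be set here.
  mountOptions:
    - nfsvers=4.1
    - hard
  nfs:
    server: my-nfs-server.example.com
    path: /my-nfs-volume
```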
### downwardAPI

Downward APIs expose Pods' and containers' resource declaration or status field values.<br/>
Refer [Expose Pod information to Containers through files].

Downward API volumes make downward API data available to applications as read-only files in plain text format.

Containers using the downward API as `subPath` volume mounts will **not** receive updates when field values change.

```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    cluster: test-cluster1
    rack: rack-22
    zone: us-east-coast
spec:
  volumes:
    - name: my-downwardapi-volume
      downwardAPI:
        defaultMode: 0644
        items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels

# Mounting this volume results in a 'labels' file with contents similar to the following:
#
#   cluster="test-cluster1"
#   rack="rack-22"
#   zone="us-east-coast"
```
### PersistentVolumes

#### Resize PersistentVolumes

1. Check that the `StorageClass` is set with `allowVolumeExpansion: true`:

   ```sh
   kubectl get storageClass 'storage-class-name' -o jsonpath='{.allowVolumeExpansion}'
   ```

1. Edit the PersistentVolumeClaim's `spec.resources.requests.storage` field.<br/>
   This will take care of the underlying PersistentVolume's size automagically.

   ```sh
   kubectl edit persistentVolumeClaim 'my-pvc'
   ```

1. Verify the change by checking the PVC's `status.capacity` field:

   ```sh
   kubectl get pvc 'my-pvc' -o jsonpath='{.status}'
   ```

Should one see the message

> Waiting for user to (re-)start a pod to finish file system resize of volume on node

under the `status.conditions` field, just wait some time.<br/>
It should **not** be necessary to restart the Pods, and the capacity should soon change to the requested one.
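A sketch of a StorageClass allowing volume expansion, as checked in the first step (the provisioner is illustrative):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: storage-class-name
provisioner: ebs.csi.aws.com
# Required for resizing PVCs that use this class.
allowVolumeExpansion: true
```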
Gotchas:

- It's possible to recreate StatefulSets **without** the need of killing the Pods they control.<br/>
  Reapply the STS' declaration with a new PersistentVolume size, and start new Pods to resize the underlying filesystem.

<details>
  <summary>If deploying the STS via Helm</summary>

1. Change the size of the PersistentVolumeClaims used by the STS:

   ```sh
   kubectl edit persistentVolumeClaims 'my-pvc'
   ```

1. Delete the STS **without killing its Pods**:

   ```sh
   kubectl delete statefulsets.apps 'my-sts' --cascade 'orphan'
   ```

1. Redeploy the STS with the changed size.<br/>
   It will retake ownership of the existing Pods.

1. Delete the STS' Pods one by one.<br/>
   During each Pod's restart, the kubelet will resize the filesystem to match the new block device size.

   ```sh
   kubectl delete pod 'my-sts-pod'
   ```

</details>
<details>
  <summary>If managing the STS manually</summary>

1. Change the size of the PersistentVolumeClaims used by the STS:

   ```sh
   kubectl edit persistentVolumeClaims 'my-pvc'
   ```

1. Note down the names of the PVs backing those PVCs and their sizes:

   ```sh
   kubectl get persistentVolume 'my-pv'
   ```

1. Dump the STS to disk:

   ```sh
   kubectl get sts 'my-sts' -o yaml > 'my-sts.yaml'
   ```

1. Remove any extra field (like `metadata.{selfLink,resourceVersion,creationTimestamp,generation,uid}` and `status`),
   and set the template's PVC size to the value you want.

1. Delete the STS **without killing its Pods**:

   ```sh
   kubectl delete sts 'my-sts' --cascade 'orphan'
   ```

1. Reapply the STS.<br/>
   It will retake ownership of the existing Pods.

   ```sh
   kubectl apply -f 'my-sts.yaml'
   ```

1. Delete the STS' Pods one by one.<br/>
   During each Pod's restart, the kubelet will resize the filesystem to match the new block device size.

   ```sh
   kubectl delete pod 'my-sts-pod'
   ```

</details>
## Autoscaling

Controllers are available to scale Pods or Nodes automatically, both in number and in size.

Automatic scaling of Pods is done in number by the HorizontalPodAutoscaler, and in size by the VerticalPodAutoscaler.<br/>
Automatic scaling of Nodes is done in number by the Cluster Autoscaler, and in size by add-ons like [Karpenter].

> Be wary of mixing and matching autoscalers for the same kind of resource.<br/>
> One can easily undo the work done by the other and make that resource behave unexpectedly.

K8S only comes with the HorizontalPodAutoscaler by default.<br/>
Managed K8S offerings usually also come with the [Cluster Autoscaler] if autoscaling is enabled on the cluster resource.

### Pod scaling

Autoscaling of Pods by number requires the Horizontal Pod Autoscaler.<br/>
Autoscaling of Pods by size requires the Vertical Pod Autoscaler.
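A sketch of a HorizontalPodAutoscaler scaling a Deployment on average CPU utilization (names, target, and bounds are
illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          # Add or remove replicas to keep average CPU usage around 80%.
          averageUtilization: 80
```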
### Node scaling

Autoscaling of Nodes by number requires the [Cluster Autoscaler]:

1. The Cluster Autoscaler routinely checks for pending Pods.
1. Pods fill up the available Nodes.
1. When Pods start to fail for lack of available resources, Nodes are added to the cluster.
1. When Pods are not failing due to lack of available resources and one or more Nodes are underused, the Autoscaler
   tries to fit the existing Pods onto fewer Nodes.
1. If the previous step leaves one or more Nodes unused (DaemonSets are usually not taken into consideration), the
   Autoscaler terminates them.

Autoscaling of Nodes by size requires add-ons like [Karpenter].
## Quality of service

See [Configure Quality of Service for Pods] for more information.
Others:

- [Common labels]
- [What is Kubernetes?]
- [Using RBAC Authorization]
- [Expose Pod information to Containers through files]

<!--
  Reference
-->

<!-- Upstream -->
[container hooks]: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks
[distribute credentials securely using secrets]: https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/
[documentation]: https://kubernetes.io/docs/home/
[expose pod information to containers through files]: https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/
[labels and selectors]: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
[namespaces]: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
[no new privileges design proposal]: https://github.com/kubernetes/design-proposals-archive/blob/main/auth/no-new-privs.md
[using rbac authorization]: https://kubernetes.io/docs/reference/access-authn-authz/rbac/
[using sysctls in a kubernetes cluster]: https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/
[version skew policy]: https://kubernetes.io/releases/version-skew-policy/
[volumes]: https://kubernetes.io/docs/concepts/storage/volumes/

<!-- Others -->
[best practices for pod security in azure kubernetes service (aks)]: https://learn.microsoft.com/en-us/azure/aks/developer-best-practices-pod-security
# Pandoc

Haskell library for converting from one markup format to another.<br/>
The command-line tool uses this library.

Pandoc's enhanced version of Markdown includes syntax for tables, definition lists, metadata blocks, footnotes,
citations, math, and more.

1. [TL;DR](#tldr)
1. [Further readings](#further-readings)
   1. [Sources](#sources)
## TL;DR

Pandoc consists of a set of readers and a set of writers.<br/>
Readers parse text in a given format and produce a native representation of the document (an abstract syntax tree, or
AST).<br/>
Writers convert that native representation into the target format.

Adding an input or output format requires only adding a reader or writer.

Users can run custom pandoc filters to modify the intermediate AST.
> Pandoc's intermediate representation of a document is less expressive than many of the formats it converts
> between.<br/>
> As such, one should **not** expect perfect conversions between every format and every other.
>
> Pandoc attempts to preserve the structural elements of a document, but not formatting details such as margin
> size.<br/>
> Some document elements (e.g., complex tables) may **not** fit into pandoc's simple document model.
If no input files are specified, input is read from `stdin`.<br/>
Output goes to `stdout` by default.

If the input or output format is not specified explicitly, pandoc will attempt to guess it from the extensions of the
filenames.<br/>
If no input file is specified or if the input files' extensions are unknown, the input format is assumed to be
Markdown.<br/>
If no output file is specified or if the output file's extension is unknown, the output format defaults to HTML.

Pandoc uses the UTF-8 character encoding for both input and output.<br/>
If one's local character encoding is **not** UTF-8, pipe input and output through `iconv`:

```sh
iconv -t 'utf-8' 'input.txt' | pandoc | iconv -f 'utf-8'
```

```sh
# Install.
apt install 'pandoc'
dnf install 'pandoc'
yum install 'pandoc'
zypper install 'pandoc-cli'

# Print the lists of supported formats.
pandoc --list-input-formats
pandoc --list-output-formats

# Convert between formats.
# If the format is not specified, pandoc will try to guess it.
pandoc -f 'html' -t 'markdown' 'input.html'
pandoc -r 'html' -w 'markdown' 'https://www.fsf.org'
pandoc --from 'markdown' --write 'docx' 'input.md'
pandoc --read 'markdown' --to 'rtf' 'input.md'
pandoc -o 'output.tex' 'input.txt'

# By default, pandoc produces document fragments.
# Use the '-s' ('--standalone') option to produce a standalone document.
pandoc -s --output 'output.pdf' 'input.html'

# If multiple input files are given at once, pandoc will concatenate them all with blank lines between them before
# parsing.
# Use '--file-scope' to parse files individually.

# Convert to PDF.
# The default way leverages LaTeX, requiring a LaTeX engine to be installed.
# Alternative engines allow 'ConTeXt', 'roff ms' or 'HTML' as intermediate formats.
pandoc … 'input.html'
pandoc … --pdf-engine 'context' 'https://www.fsf.org'
pandoc … --pdf-engine 'html' -c 'style.css' 'input.html'

# Render markdown documents and show them in 'links'.
pandoc --standalone 'docs/pandoc.md' | links
```

## Further readings

- [Website]
- [Manual]

### Sources

- [Creating a PDF]

<!--
  Reference
-->

<!-- Upstream -->
[creating a pdf]: https://pandoc.org/MANUAL.html#creating-a-pdf
[manual]: https://pandoc.org/MANUAL.html
[website]: https://pandoc.org/
# Polkit

Provides an authorization API.<br/>
It is intended to be used by privileged programs (A.K.A. _mechanisms_) that offer services to unprivileged programs
(A.K.A. _subjects_).

Mechanisms typically treat subjects as **untrusted**.<br/>
For every request from subjects, mechanisms need to determine if the request is authorized or if they should refuse
to service the subject; mechanisms can offload this decision to **the polkit authority** using the polkit APIs.

The system architecture of polkit is comprised of the _Authority_ and an _Authentication Agent_ per user session.<br/>
_Actions_ are defined by applications. Vendors, sites and system administrators can control the authorization policy
using _Authorization Rules_.

The Authentication Agent is provided and started by the user's graphical environment.

The Authority is implemented as a system daemon (`polkitd`).<br/>
The daemon itself runs as the `polkitd` system user so as to have as little privilege as possible.

Mechanisms, subjects and authentication agents communicate with the authority using the system message bus.

In addition to acting as an authority, polkit allows users to obtain temporary authorization through authenticating
either an administrative user or the owner of the session the client belongs to.<br/>
This is useful for scenarios where mechanisms need to verify that the operator of the system really is the user or an
administrative user.

## Sources

- Arch Linux's [Wiki page][arch wiki page]
- Polkit's [documentation]
- Polkit's [`man` page][man page]

<!--
  Reference
-->

<!-- Upstream -->
[documentation]: https://www.freedesktop.org/software/polkit/docs/latest/
[man page]: https://www.freedesktop.org/software/polkit/docs/latest/polkit.8.html

<!-- Others -->
[arch wiki page]: https://wiki.archlinux.org/index.php/Polkit
1. [Management API](#management-api)
1. [Take snapshots of the data](#take-snapshots-of-the-data)
1. [Further readings](#further-readings)
   1. [Sources](#sources)

## TL;DR
`irate` calculates the **per-second rate of change** based on the last two data points of the range.

To calculate the overall CPU usage, the `idle` mode of the metric is used. Since a processor's idle percentage is the
opposite of its busy percentage, the `irate` value is subtracted from 1. To make it a percentage, it is multiplied
by 100.
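The calculation described above, sketched as a query (assuming the node exporter's `node_cpu_seconds_total` metric):

```promql
# Percentage of CPU in use per instance: 1 minus the idle rate, times 100.
100 * (1 - avg by (instance) (irate(node_cpu_seconds_total{mode="idle"}[5m])))
```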
|
||||
|
||||
<details>
|
||||
<summary>Examples</summary>
|
||||
|
||||
```promql
|
||||
# Get all allocatable CPU cores where the 'node' attribute matches regex ".*-runners-.*" grouped by node
|
||||
sum(kube_node_status_allocatable_cpu_cores{node=~".*-runners-.*"}) BY (node)
|
||||
|
||||
# FIXME
|
||||
sum(rate(container_cpu_usage_seconds_total{namespace="gitlab-runners",container="build",pod_name=~"runner.*"}[30s])) by (pod_name,container) /
|
||||
sum(container_spec_cpu_quota{namespace="gitlab-runners",pod_name=~"runner.*"}/container_spec_cpu_period{namespace="gitlab-runners",pod_name=~"runner.*"}) by (pod_name,container)
|
||||
```
|
||||
|
||||
</details>
|
||||
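The overall-CPU calculation described in this section can be expressed as a query; a sketch assuming the node exporter's `node_cpu_seconds_total` metric with its `mode` and `instance` labels:

```promql
# Overall CPU usage percentage per instance: one minus the idle rate, as a percentage.
(1 - avg by (instance) (irate(node_cpu_seconds_total{mode="idle"}[5m]))) * 100
```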

## Storage

Refer to [Storage].
@@ -377,9 +391,7 @@ The snapshot now exists at `<data-dir>/snapshots/20171210T211224Z-2be650b6d019eb
- [`ordaa/boinc_exporter`][ordaa/boinc_exporter]
- [Grafana]

### Sources

- [Getting started with Prometheus]
- [Node exporter guide]
@@ -395,15 +407,15 @@ All the references in the [further readings] section, plus the following:
- [How to integrate Prometheus and Grafana on Kubernetes using Helm]
- [node-exporter's helm chart's values]
- [How to set up and experiment with Prometheus remote-write]
- [Install Prometheus and Grafana by Helm]
- [Prometheus and Grafana setup in Minikube]
- [I need to know about the below kube_state_metrics description. Exactly looking is what the particular metrics doing]

<!--
Reference
═╬═Time══
-->

<!-- In-article sections -->
[further readings]: #further-readings

<!-- Knowledge base -->
[grafana]: grafana.md
[node exporter]: node%20exporter.md
@@ -434,8 +446,11 @@ All the references in the [further readings] section, plus the following:
[how relabeling in prometheus works]: https://grafana.com/blog/2022/03/21/how-relabeling-in-prometheus-works/
[how to integrate prometheus and grafana on kubernetes using helm]: https://semaphoreci.com/blog/prometheus-grafana-kubernetes-helm
[how to set up and experiment with prometheus remote-write]: https://developers.redhat.com/articles/2023/11/30/how-set-and-experiment-prometheus-remote-write
[i need to know about the below kube_state_metrics description. exactly looking is what the particular metrics doing]: https://stackoverflow.com/questions/60440847/i-need-to-know-about-the-below-kube-state-metrics-description-exactly-looking-i#60449570
[install prometheus and grafana by helm]: https://medium.com/@at_ishikawa/install-prometheus-and-grafana-by-helm-9784c73a3e97
[install prometheus and grafana with helm 3 on a local machine vm]: https://dev.to/ko_kamlesh/install-prometheus-grafana-with-helm-3-on-local-machine-vm-1kgj
[ordaa/boinc_exporter]: https://gitlab.com/ordaa/boinc_exporter
[prometheus and grafana setup in minikube]: http://blog.marcnuri.com/prometheus-grafana-setup-minikube/
[scrape selective metrics in prometheus]: https://docs.last9.io/docs/how-to-scrape-only-selective-metrics-in-prometheus
[set up prometheus and ingress on kubernetes]: https://blog.gojekengineering.com/diy-how-to-set-up-prometheus-and-ingress-on-kubernetes-d395248e2ba
[snmp monitoring and easing it with prometheus]: https://medium.com/@openmohan/snmp-monitoring-and-easing-it-with-prometheus-b157c0a42c0c

@@ -3,6 +3,8 @@
## Table of contents <!-- omit in toc -->

1. [First boot](#first-boot)
1. [Boot from USB](#boot-from-usb)
1. [Raspberry Pi 4B](#raspberry-pi-4b)
1. [Repositories](#repositories)
1. [Privilege escalation](#privilege-escalation)
1. [Disable WiFi and Bluetooth](#disable-wifi-and-bluetooth)
@@ -30,12 +32,37 @@
1. [LED warning flash codes](#led-warning-flash-codes)
1. [Issues connecting to WiFi network using roaming features or WPA3](#issues-connecting-to-wifi-network-using-roaming-features-or-wpa3)
1. [Further readings](#further-readings)
1. [Sources](#sources)

## First boot

Unless one was configured in Raspberry Pi Imager, the system will ask to create an initial user on first boot.

## Boot from USB

Available on Raspberry Pi 2B v1.2, 3A+, 3B, 3B+, 4B, 400, Compute Module 3, Compute Module 3+ and Compute Module 4 only.

### Raspberry Pi 4B

The bootloader EEPROM may need to be updated to enable booting from USB mass storage devices.

To check this, power the Pi up with no SD card inserted and a display attached to one of the HDMI ports.<br/>
It will display a diagnostic screen which includes the bootloader EEPROM version at the top.

The bootloader must be dated **_Sep 3 2020_** or later to support USB mass storage boot.<br/>
If the diagnostic screen reports a date earlier than _Sep 3 2020_, or there is no diagnostic screen shown, one will need
to update the bootloader EEPROM first to enable USB mass storage boot.

To update it:

1. use the _Misc Utility Images_ option in Raspberry Pi Imager to create an SD card with the latest
   _Raspberry Pi 4 EEPROM boot recovery_ image
1. boot the Pi using this SD card
1. the bootloader EEPROM will be updated to the latest factory version
1. the Pi will flash its green ACT light rapidly and display green on the HDMI outputs to indicate success

USB mass storage boot on the Pi 4B requires Raspberry Pi OS 2020-08-20 or later.
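On a Pi that already boots Raspberry Pi OS from an SD card, the bootloader can also be checked and updated with the `rpi-eeprom-update` tool (part of the `rpi-eeprom` package):

```sh
# Show the currently installed and the latest available bootloader EEPROM versions.
sudo rpi-eeprom-update

# Install the latest bootloader EEPROM and reboot to apply it.
sudo rpi-eeprom-update -a
sudo reboot
```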

## Repositories

See [Repositories] and [Mirrors].
@@ -239,6 +266,11 @@ sudo nano '/etc/init.d/raspi-config'

See [Timely tips for speeding up your Raspberry Pi].

```sh
# Run benchmarks.
curl -L https://raw.githubusercontent.com/aikoncwd/rpi-benchmark/master/rpi-benchmark.sh | sudo bash
```

## Headless boot

Manual procedure:
@@ -296,7 +328,7 @@ network={

Use `wpa_passphrase`:

```plaintext
usage: wpa_passphrase <ssid> [passphrase]
If passphrase is left out, it will be read from stdin
```
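For example, to generate a network block with the passphrase already hashed into a PSK and append it to `wpa_supplicant`'s configuration (SSID, passphrase and path are placeholders):

```sh
wpa_passphrase 'MySSID' 'MyPassphrase' | sudo tee -a '/etc/wpa_supplicant/wpa_supplicant.conf'
```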
@@ -404,10 +436,9 @@ Long term solution: none currently known.
- [Country code search]
- [`k3s`][k3s]
- [Configuration]
- [os documentation]

### Sources

- [Prepare SD card for WiFi on headless Pi]
- [Run Kubernetes on a Raspberry Pi with k3s]
@@ -416,27 +447,43 @@ All the references in the [further readings] section, plus the following:
- [Timely tips for speeding up your Raspberry Pi]
- [Repositories]
- [Mirrors]
- [disabling bluetooth on raspberry pi]
- [ghollingworth/overlayfs]
- [how to disable onboard wifi and bluetooth on raspberry pi 3]
- [how to disable wi-fi on raspberry pi]
- [how to disable your raspberry pi's wi-fi]
- [how to make your raspberry pi 4 faster with a 64 bit kernel]
- [re: raspbian jessie linux 4.4.9 severe performance degradati]
- [rp automatic updates]
- [sd card power failure resilience ideas]
- [alpine linux headless installation]
- [alpine linux]
- [benchmark]
- [preventing filesystem corruption in embedded linux]
- [usb mass storage boot]

<!--
Reference
═╬═Time══
-->

<!-- Upstream -->
[/boot/config.txt]: https://www.raspberrypi.org/documentation/configuration/config-txt/README.md
[configuration]: https://www.raspberrypi.com/documentation/computers/configuration.html
[mirrors]: https://www.raspbian.org/RaspbianMirrors
[os documentation]: https://www.raspberrypi.org/documentation/computers/os.html
[overclocking]: https://www.raspberrypi.org/documentation/configuration/config-txt/overclocking.md
[repositories]: https://www.raspbian.org/RaspbianRepository
[vcgencmd]: https://www.raspberrypi.com/documentation/computers/os.html#vcgencmd

<!-- In-article sections -->
[further readings]: #further-readings

<!-- Knowledge base -->
[k3s]: kubernetes/k3s.md
[rfkill]: rfkill.md

<!-- Others -->
[alpine linux headless installation]: https://wiki.alpinelinux.org/wiki/Raspberry_Pi_-_Headless_Installation
[alpine linux]: https://wiki.alpinelinux.org/wiki/Raspberry_Pi
[benchmark]: https://github.com/aikoncwd/rpi-benchmark
[country code search]: https://www.iso.org/obp/ui/#search/code/
[disabling bluetooth on raspberry pi]: https://di-marco.net/blog/it/2020-04-18-tips-disabling_bluetooth_on_raspberry_pi/
[ghollingworth/overlayfs]: https://github.com/ghollingworth/overlayfs
@@ -445,8 +492,8 @@ All the references in the [further readings] section, plus the following:
[how to disable your raspberry pi's wi-fi]: https://pimylifeup.com/raspberry-pi-disable-wifi/
[how to make your raspberry pi 4 faster with a 64 bit kernel]: https://medium.com/for-linux-users/how-to-make-your-raspberry-pi-4-faster-with-a-64-bit-kernel-77028c47d653
[issue 2067]: https://github.com/k3s-io/k3s/issues/2067#issuecomment-664052806
[os documentation]: https://www.raspberrypi.org/documentation/computers/os.html
[prepare sd card for wifi on headless pi]: https://raspberrypi.stackexchange.com/questions/10251/prepare-sd-card-for-wifi-on-headless-pi
[preventing filesystem corruption in embedded linux]: https://www.embeddedarm.com/assets/preventing-filesystem-corruption-in-embedded-linux
[raspbian bug 1929746]: https://bugs.launchpad.net/raspbian/+bug/1929746
[re: how to make sure the rpi cpu is not throttled down?]: https://www.raspberrypi.org/forums/viewtopic.php?t=152549#p999931
[re: raspbian jessie linux 4.4.9 severe performance degradati]: https://www.raspberrypi.org/forums/viewtopic.php?f=63&t=147781&start=50#p972790
@@ -454,3 +501,4 @@ All the references in the [further readings] section, plus the following:
[run kubernetes on a raspberry pi with k3s]: https://opensource.com/article/20/3/kubernetes-raspberry-pi-k3s
[sd card power failure resilience ideas]: https://www.raspberrypi.org/forums/viewtopic.php?f=63&t=253104&p=1549229#p1549117
[timely tips for speeding up your raspberry pi]: https://www.raspberry-pi-geek.com/Archive/2013/01/Timely-tips-for-speeding-up-your-Raspberry-Pi
[usb mass storage boot]: https://www.raspberrypi.org/documentation/hardware/raspberrypi/bootmodes/msd.md

45
knowledge base/shell.md
Normal file
@@ -0,0 +1,45 @@
# Shell

```shell
$ cat /etc/locale.conf
LANG=en_US.UTF-8
LC_NUMERIC=en_GB.UTF-8
LC_TIME=en_GB.UTF-8
LC_MONETARY=en_GB.UTF-8
LC_PAPER=en_GB.UTF-8
LC_MEASUREMENT=en_GB.UTF-8

$ locale
LANG=en_US.UTF-8
LC_CTYPE="en_US.UTF-8"
LC_NUMERIC=en_GB.UTF-8
LC_TIME=en_GB.UTF-8
LC_COLLATE="en_US.UTF-8"
LC_MONETARY=en_GB.UTF-8
LC_MESSAGES="en_US.UTF-8"
LC_PAPER=en_GB.UTF-8
LC_NAME="en_US.UTF-8"
LC_ADDRESS="en_US.UTF-8"
LC_TELEPHONE="en_US.UTF-8"
LC_MEASUREMENT=en_GB.UTF-8
LC_IDENTIFICATION="en_US.UTF-8"
LC_ALL=
```
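Locale categories can also be overridden per command, with `LC_ALL` trumping both `LANG` and every other `LC_*` variable. A quick check in the `C` locale, where character classes cover plain ASCII:

```shell
# LC_ALL overrides LANG and every LC_* category for this command only.
echo 'hello' | LC_ALL=C tr '[:lower:]' '[:upper:]'   # prints HELLO
```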

## See also

- [Shellcheck]

[shellcheck]: https://www.shellcheck.net/

## Further readings

- [How can I declare and use boolean variables in a shell script]?
- [What does LC_ALL=C do]?
- [Exit Codes With Special Meanings]
- [How to check if running as root in a bash script]

[exit codes with special meanings]: https://tldp.org/LDP/abs/html/exitcodes.html
[how can i declare and use boolean variables in a shell script]: https://stackoverflow.com/questions/2953646/how-can-i-declare-and-use-boolean-variables-in-a-shell-script#21210966
[how to check if running as root in a bash script]: https://stackoverflow.com/questions/18215973/how-to-check-if-running-as-root-in-a-bash-script#21622456
[what does lc_all=c do]: https://unix.stackexchange.com/questions/87745/what-does-lc-all-c-do#87763
@@ -28,7 +28,15 @@ See usage for details.
<summary>Installation and configuration</summary>

```sh
# Install.
brew install 'tmux'

# Get the default settings.
# Might need to be run from inside a session.
# Specify a null configuration file so that tmux ends up printing whatever is hard-coded in its source.
tmux -f '/dev/null' show-options -s
tmux -f '/dev/null' show-options -g
tmux -f '/dev/null' list-keys
```

The configuration file is `$HOME/.tmux.conf` or `$XDG_CONFIG_HOME/tmux/tmux.conf`.
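Sessions can also be driven entirely from scripts without attaching to them; a sketch (the session name is arbitrary):

```sh
# Start a detached session, run a command in it, then tear it down.
tmux new-session -d -s 'scripted'
tmux send-keys -t 'scripted' "echo 'hello from tmux'" Enter
tmux kill-session -t 'scripted'
```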
@@ -120,6 +128,9 @@ tmux kill-session -t 'session-name'
- [Tmux has forever changed the way I write code]
- [Sending simulated keystrokes in Bash]
- [Is it possible to send input to a tmux session without connecting to it?]
- [devhints.io]
- [hamvocke/dotfiles]
- [Default Tmux config]

<!--
Reference
@@ -135,6 +146,9 @@ tmux kill-session -t 'session-name'
[tmux plugin manager]: https://github.com/tmux-plugins/tpm

<!-- Others -->
[default tmux config]: https://unix.stackexchange.com/questions/175421/default-tmux-config#342975
[devhints.io]: https://devhints.io/tmux
[hamvocke/dotfiles]: https://github.com/hamvocke/dotfiles/blob/master/tmux/.tmux.conf
[is it possible to send input to a tmux session without connecting to it?]: https://unix.stackexchange.com/questions/409861/is-it-possible-to-send-input-to-a-tmux-session-without-connecting-to-it#409863
[sending simulated keystrokes in bash]: https://superuser.com/questions/585398/sending-simulated-keystrokes-in-bash#1606615
[tmux cheat sheet & quick reference]: https://tmuxcheatsheet.com/

84
knowledge base/vscodium.md
Normal file
@@ -0,0 +1,84 @@
# VSCodium

## Troubleshooting

### Zsh terminal icons are not getting displayed in the terminal

Change the font to `NotoSansMono Nerd Font` in the _Terminal_ > _Integrated_ > _Font Family_ settings.
See [Why Zsh terminal icons are not getting displayed in Atom Platformio Ide Terminal?].

## Flatpak version

In case you missed it, the README file is at `/app/share/codium/README.md`.

### FAQ

This version is running inside a _container_ and is therefore __not able__
to access SDKs on your host system!

#### To execute commands on the host system, run inside the sandbox

```bash
flatpak-spawn --host <COMMAND>
```

#### Host Shell

To make the Integrated Terminal automatically use the host system's shell,
you can add this to VSCodium's settings:

```json
{
    "terminal.integrated.shell.linux": "/usr/bin/env",
    "terminal.integrated.shellArgs.linux": ["--", "flatpak-spawn", "--host", "bash"]
}
```

#### SDKs

This flatpak provides a standard development environment (gcc, python, etc).
To see what's available:

```bash
flatpak run --command=sh com.vscodium.codium
ls /usr/bin   # shared runtime
ls /app/bin   # bundled with this flatpak
```

To get support for additional languages, you have to install SDK extensions, e.g.:

```bash
flatpak install flathub org.freedesktop.Sdk.Extension.dotnet
flatpak install flathub org.freedesktop.Sdk.Extension.golang
FLATPAK_ENABLE_SDK_EXT=dotnet,golang flatpak run com.vscodium.codium
```

You can use

```bash
flatpak search <TEXT>
```

to find others.

#### Run flatpak codium from host terminal

If you want to run `codium /path/to/file` from the host terminal, just add this to your shell's rc file:

```bash
alias codium="flatpak run com.vscodium.codium"
```

then reload your shell's configuration. Now you can try:

```bash
$ codium /path/to/
# or
$ FLATPAK_ENABLE_SDK_EXT=dotnet,golang codium /path/to/
```

## Sources

- [Why Zsh terminal icons are not getting displayed in Atom Platformio Ide Terminal?]

[why zsh terminal icons are not getting displayed in atom platformio ide terminal?]: https://forum.manjaro.org/t/why-zsh-terminal-icons-are-not-getting-displayed-in-atom-platformio-ide-terminal/64885/2
44
knowledge base/zram.md
Normal file
@@ -0,0 +1,44 @@
# ZRAM

TODO

1. [TL;DR](#tldr)
1. [Further readings](#further-readings)
1. [Sources](#sources)

## TL;DR

```sh
$ grep 'swap' /etc/fstab
/dev/zram0 none swap sw 0 0

$ cat /etc/modules-load.d/zram.conf
zram

# Create a zram block device with a total capacity of 2x the total RAM.
# The size is determined by the 'echo ...' part.
$ cat /etc/udev/rules.d/10-zram.rules
KERNEL=="zram0", \
  SUBSYSTEM=="block", \
  ACTION=="add", \
  ATTR{initstate}=="0", \
  PROGRAM="/bin/sh -c 'echo $(($(LANG=C free --kilo | sed --silent --regexp-extended s/^Mem:\ +([0-9]+)\ +.*$/\1/p)*2))KiB'", \
  ATTR{disksize}="$result", \
  RUN+="/sbin/mkswap $env{DEVNAME}", \
  TAG+="systemd"
```
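The size computation in the `PROGRAM` key can be checked by hand: `MemTotal` in `/proc/meminfo` is already expressed in KiB, so doubling it gives the value the rule assigns to `disksize`:

```shell
# Print 2x the total RAM in KiB, e.g. for use as the zram disksize.
awk '/^MemTotal:/ { print $2 * 2 "KiB" }' /proc/meminfo
```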

## Further readings

### Sources

<!--
Reference
═╬═Time══
-->

<!-- In-article sections -->
<!-- Knowledge base -->
<!-- Files -->
<!-- Upstream -->
<!-- Others -->