From 79f939549936b9da0bdbfeb768580b446de4e902 Mon Sep 17 00:00:00 2001 From: Michele Cereda Date: Thu, 10 Apr 2025 00:14:56 +0200 Subject: [PATCH] feat(kb/best-practices): review and add sources --- knowledge base/best practices.md | 36 ++++++++++++++++++++++++-------- 1 file changed, 27 insertions(+), 9 deletions(-) diff --git a/knowledge base/best practices.md b/knowledge base/best practices.md index d888c16..c13336b 100644 --- a/knowledge base/best practices.md +++ b/knowledge base/best practices.md @@ -40,9 +40,12 @@ What really worked for me personally, or in my experience. simplicity.
Always going for the simple solution makes things complicated at a higher level.
Check out [KISS principle is not that simple]. -- Stop modularizing stuff just to [avoid repetitions][don't repeat yourself(dry) in software development]. -- Stop [abstracting away][we have used too many levels of abstractions and now the future looks bleak] stuff that does - not need to be (`docker-cli`/`kubectl` wrappers mapping features 1-to-1, anyone?). +- Modularize stuff when it makes sense, not just to + [avoid repetitions][don't repeat yourself(dry) in software development]. +- Create abstractions that **do** hide away the complexity behind them.
+ Avoid creating wrappers that would map features 1-to-1 with their + [_not-abstracted-anymore_ target object][we have used too many levels of abstractions and now the future looks bleak], + and just use the original processes and tools when in need of control (see the sketch below). - Beware of complex things that _should be simple_.
E.g., check what the _[SAFe] delusion_ is. - Focus on what matters, but also set time aside to work on the rest.
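To make the wrapper-versus-abstraction point above concrete, here is a minimal, hypothetical sketch (the function names and the Docker example are illustrative, not taken from the knowledge base): the first function re-exposes `docker run` flags 1-to-1, so callers still need to understand Docker while also learning an extra layer; the second hides those flags behind the single intent callers actually have.

```python
import subprocess


def docker_run(image, name=None, env=None, ports=None, detach=False):
    """1-to-1 wrapper: every flag of `docker run` leaks through, so callers
    must still understand Docker *and* learn this extra layer."""
    command = ["docker", "run"]
    if detach:
        command.append("--detach")
    if name:
        command += ["--name", name]
    for key, value in (env or {}).items():
        command += ["--env", f"{key}={value}"]
    for port in ports or []:
        command += ["--publish", port]
    command.append(image)
    return subprocess.run(command, check=True)


def start_throwaway_postgres(version="16"):
    """Intent-level abstraction: hides the flags behind the single operation
    callers actually need (a disposable database for local testing)."""
    return docker_run(
        image=f"postgres:{version}",
        name="throwaway-postgres",
        env={"POSTGRES_PASSWORD": "throwaway"},
        ports=["5432:5432"],
        detach=True,
    )
```

Anyone needing more control than the abstraction offers can drop down to the real `docker` CLI instead of growing the wrapper.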
@@ -63,7 +66,7 @@ What really worked for me personally, or in my experience. Do **not** adapt your work to specific tools. - Keep track of tools' EOL and keep them updated accordingly. Trackers like [endoflife.date] could help in this. -- Backup your data, especially when you are about to update something.
+- Back up your data, especially when you are about to make changes to something that manages or stores it.
[Murphy's law] is lurking. Consider [the 3-2-1 backup strategy]. - [Branch early, branch often]. - [Keep a changelog]. @@ -88,6 +91,9 @@ What really worked for me personally, or in my experience. They still create a lot of discontent even inside Amazon when used _against_ anybody. - Keep Goodhart's law in mind: > When a measure becomes a target, it ceases to be a good measure. +- Always have a plan B. +- When managing permissions, consider [break glass][break glass explained: why you need it for privileged accounts] + procedures and/or tools. ## Teamwork @@ -109,20 +115,26 @@ What really worked for me personally, or in my experience. - Keep _integration_, _delivery_ and _deployment_ **separated**.
They are different concepts, and as such should require different tasks.
This also allows for checkpoints, and to fail fast with little to no unwanted consequences. +- Consider adopting the [_main must be green_ principle][keeping green]. ### Pipelining -- Differentiate what the concept of pipelines really is from the idea of pipelines in approaches like DevOps.
- Pipelines in general are nothing more than _sequences of actions_. Pipelines in DevOps and alike end up most of the - times being _magic tools that take actions away from people_. +- Differentiate what the **concept** of pipelines really is from the **idea** of pipelines in approaches like + DevOps.
+ Pipelines in general should be nothing more than _sequences of actions_. Pipelines in DevOps (and the like) end up most + of the time being _magic tools that take actions away from people_. - Keep in mind [the automation paradox].
Pipelines tend to become complex systems just like Rube Goldberg machines. - Keep tasks as simple, consistent and reproducible as possible.
Avoid, like the plague, relying on programs or scripts written directly in pipelines: pipelines should act as the _glue_ connecting tasks, not replace full-fledged applications. -- Most, if not all, tasks should be able to execute from one's own local machine.
+- Most, if not all, pipeline tasks should be able to execute from one's own local machine.
This allows one to fail fast and avoid wasting time waiting for pipelines to run in a black box somewhere (see the sketch further below). -- DevOps pipelines should be meant to be used as **last mile** steps for specific goals.
+- Pipelines are a good central place from which to make changes to critical resources.
+ Developers should **not** have the access privileges to make such changes _by default_, but selected people **shall** + have ways to obtain those permissions for emergencies + ([break glass][break glass explained: why you need it for privileged accounts]). +- DevOps pipelines should be meant to be used as **last mile** steps for **specific** goals.
There **cannot** be a single pipeline for everything, just as the _one-size-fits-all_ concept never really works. - Try and strike a balance between what **needs** to be done centrally (e.g. from a repository's `origin` remote) and @@ -203,6 +215,9 @@ Listed in order of addition: - [The 10 Commandments of Navigating Code Reviews] - [Less Is More: The Minimum Effective Dose] - [AWS re:Invent 2023 - Platform engineering with Amazon EKS (CON311)] +- [Break Glass Explained: Why You Need It for Privileged Accounts] +- [Keeping green] +- [Why committing straight to main/master must be allowed]
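As a closing illustration of the pipelining advice above (pipelines as glue, tasks runnable locally, last-mile only), here is a hypothetical sketch; the `tasks.py` name and the `ruff`/`pytest`/`docker` commands are arbitrary examples, not tools prescribed by this document. The logic lives in an ordinary script that developers can run on their own machines, and a CI job only strings its tasks together.

```python
#!/usr/bin/env python3
"""Hypothetical tasks.py: every step the pipeline runs is runnable locally too.

A CI job then only contains glue such as `python tasks.py lint test build`,
so failures can be reproduced on a laptop instead of inside a black box.
"""
import subprocess
import sys


def run(command):
    # Echo the command so local runs and pipeline logs look the same.
    print("$ " + " ".join(command))
    subprocess.run(command, check=True)


def lint():
    run(["ruff", "check", "."])


def test():
    run(["pytest", "--quiet"])


def build():
    run(["docker", "build", "--tag", "example/app:local", "."])


TASKS = {"lint": lint, "test": test, "build": build}

if __name__ == "__main__":
    for name in sys.argv[1:] or ["lint", "test"]:
        TASKS[name]()
```

The pipeline definition then stays a thin, last-mile sequence of calls to this script, and any privileged credentials can live only in the pipeline's environment, guarded by the break-glass procedures mentioned above rather than being spread across developer machines.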