diff --git a/knowledge base/ai/agent.md b/knowledge base/ai/agent.md
index c52c7e5..17636cd 100644
--- a/knowledge base/ai/agent.md
+++ b/knowledge base/ai/agent.md
@@ -43,6 +43,8 @@ Prefer **requiring** consent by agents when running them.
Agents created by Anthropic and other companies have a history of not caring about agent abuse, and leave users on
their own while hiding behind a disclaimer.
+Some human workers could be replaced for a fraction of the cost.
+
### How much context is too much?
Integrating agents directly into operating systems and applications transforms them from relatively neutral resource
@@ -91,12 +93,13 @@ It also happened that agents modified each other's settings files, helping one a
- [Agentic ProbLLMs - The Month of AI Bugs]
- [ASCII Smuggler Tool: Crafting Invisible Text and Decoding Hidden Codes]
- [Superpowers: How I'm using coding agents in October 2025], and [obra/superpowers] by extension
-- [Moltbot][moltbot/moltbot] and [How a Single Email Turned My ClawdBot Into a Data Leak]
+- [OpenClaw][openclaw/openclaw], [OpenClaw: Who are you?] and [How a Single Email Turned My ClawdBot Into a Data Leak]
### Sources
- [39C3 - AI Agent, AI Spy]
- [39C3 - Agentic ProbLLMs: Exploiting AI Computer-Use and Coding Agents]
+- [xAI engineer fired for leaking secret "Human Emulator" project]
1. [TL;DR](#tldr)
+1. [Reasoning](#reasoning)
+1. [Concerns](#concerns)
1. [Run LLMs Locally](#run-llms-locally)
1. [Further readings](#further-readings)
1. [Sources](#sources)
@@ -59,6 +61,38 @@ They have superseded recurrent neural network-based models.
-->
+## Reasoning
+
+Standard generation is just autocompletion: models try to infer or recall what the most probable next word would be.
+
+Chain of Thought tells models to _show their work_. It _feels_ like the model is calculating or thinking.
+What it really does is just increase the chances that the answer is correct by breaking the user's question into
+smaller, more manageable steps and solving each of them before returning the final answer.
+The result is more accurate, but it costs more tokens and requires a bigger context window.
+
+At some point we gave models the ability to execute commands. This way the model can use (or even create) them to get
+or check the answer, instead of just inferring or recalling it.
+
+The ReAct loop (reason + act) came next, where the model iterates over the steps above: it breaks the request into
+smaller steps, acts on them using functions if necessary, checks the results, updates the chain of thought, and
+repeats until the request is satisfied.
+
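+A minimal sketch of that loop, in pseudocode (`llm` and `run_tool` stand in for whatever model and tool runner are in
+use):
+
+```text
+thoughts = []
+while request is not satisfied:
+    step = llm(request, thoughts)      # reason: pick the next sub-step
+    if step needs a tool:
+        result = run_tool(step)        # act: execute the command or function
+    else:
+        result = llm(step, thoughts)   # answer the sub-step directly
+    append (step, result) to thoughts  # update the chain of thought
+return llm(request, thoughts)          # compose the final answer
+```
+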
+The next step is [agentic AI][agent].
+
+## Concerns
+
+- Lots of people currently think of LLMs as _real intelligence_, when they are not.
+- People currently give too much credibility to LLM answers, and trust them more than they trust their teachers,
+  accountants, lawyers, or even doctors.
+- AI companies could bias their models to say specific things, subtly promote ideologies, influence elections, or even
+  rewrite history in the minds of those who trust LLMs.
+- Models can be vulnerable to specific attacks (e.g. prompt injection) that change an LLM's behaviour, bias it, or
+  hide malware in its tools.
+- People are using LLMs mindlessly, mostly due to the convenience they offer but also because they don't understand
+  what LLMs are or how they work. This causes a lack of critical thinking and overreliance.
+- Model training and execution require resources that are normally not available to the common person. This encourages
+  people to depend on, and hence give power to, AI companies.
+
## Run LLMs Locally
Use one of the following:
@@ -75,6 +109,7 @@ Use one of the following:
### Sources
- [Run LLMs Locally: 6 Simple Methods]
+- [OpenClaw: Who are you?]
+[Agent]: agent.md
[LMStudio]: lmstudio.md
[Ollama]: ollama.md
[vLLM]: vllm.md
@@ -101,4 +137,5 @@ Use one of the following:
[Llama]: https://www.llama.com/
[Llamafile]: https://github.com/mozilla-ai/llamafile
[Mistral]: https://mistral.ai/
+[OpenClaw: Who are you?]: https://www.youtube.com/watch?v=hoeEclqW8Gs
[Run LLMs Locally: 6 Simple Methods]: https://www.datacamp.com/tutorial/run-llms-locally-tutorial