<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Llm on Eryk Tech</title><link>https://eryk.tech/en/tags/llm/</link><description>Recent content in Llm on Eryk Tech</description><generator>Hugo</generator><language>en</language><lastBuildDate>Thu, 30 Apr 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://eryk.tech/en/tags/llm/index.xml" rel="self" type="application/rss+xml"/><item><title>What Harness Engineering Is and Why It Matters</title><link>https://eryk.tech/en/2026/04/30/what-harness-engineering-is-and-why-it-matters/</link><pubDate>Thu, 30 Apr 2026 00:00:00 +0000</pubDate><guid>https://eryk.tech/en/2026/04/30/what-harness-engineering-is-and-why-it-matters/</guid><description>&lt;p>Human evolution has always been shaped by cognitive externalization. Thoughts
were turned into spoken language, language into writing, writing into print at
massive scale, and all of that, eventually, into digital artifacts stored in
computers.&lt;/p>
&lt;p>These transitions allowed humanity to offload memory onto external media,
freeing limited cognitive capacity for planning, abstraction, and creativity.&lt;/p>
&lt;p>There is an argument that the same logic applies to the design and
development of LLM-based agents: their improvement is driven not mainly by
training ever-larger models, but by reallocating the model&amp;rsquo;s cognitive load
into persistent, inspectable, and reusable structures, organized into three main
pillars under a discipline called Harness Engineering.&lt;/p></description></item></channel></rss>