<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en"><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://blog.andrewlara.com/feed.xml" rel="self" type="application/atom+xml" /><link href="https://blog.andrewlara.com/" rel="alternate" type="text/html" hreflang="en" /><updated>2026-04-15T01:23:04-05:00</updated><id>https://blog.andrewlara.com/feed.xml</id><title type="html">Andrew Lara</title><subtitle>Notes on engineering, AI, and building things.</subtitle><author><name>Andrew Lara</name></author><entry><title type="html">Building with AI Agents: What Actually Works</title><link href="https://blog.andrewlara.com/engineering/ai/2026/04/13/building-with-ai-agents.html" rel="alternate" type="text/html" title="Building with AI Agents: What Actually Works" /><published>2026-04-13T00:00:00-05:00</published><updated>2026-04-13T00:00:00-05:00</updated><id>https://blog.andrewlara.com/engineering/ai/2026/04/13/building-with-ai-agents</id><content type="html" xml:base="https://blog.andrewlara.com/engineering/ai/2026/04/13/building-with-ai-agents.html"><![CDATA[<p>A few things I’ve learned after spending months shipping agent-assisted workflows in production.</p>

<h2 id="the-mental-model-shift">The mental model shift</h2>

<p>The mistake most engineers make is treating AI agents like better autocomplete. They’re not. The right mental model is closer to a junior contractor who reads fast, never gets tired, and occasionally hallucinates API docs.</p>

<p>That mental model change has two practical consequences:</p>

<ol>
  <li><strong>You define the contracts, not the logic.</strong> Give the agent a clear interface — exact inputs, expected outputs, explicit failure modes. Don’t let it infer what you mean.</li>
  <li><strong>Verification is your job.</strong> The agent ships fast. You verify. Every agent-written PR gets the same review as any other PR — probably more, because the agent won’t feel bad about it.</li>
</ol>

<h2 id="what-ive-actually-shipped">What I’ve actually shipped</h2>

<p>A few patterns that held up:</p>

<p><strong>File-based workflows</strong> — anything that’s just read/write on a predictable structure (like this blog). The agent knows the schema, pushes a file, done. Zero surprises.</p>

<p><strong>Scaffolding, not architecture</strong> — agents are excellent at creating the 80% skeleton of a new service or feature. They’re poor at architectural decisions that require context spanning multiple years of system history. Use them for the former, own the latter yourself.</p>

<p><strong>Research synthesis</strong> — give an agent a set of docs, a GitHub issue, and a question. Get back a structured answer. This alone saves an hour a day.</p>

<h2 id="what-doesnt-work">What doesn’t work</h2>

<ul>
  <li>Long-horizon tasks with ambiguous checkpoints</li>
  <li>Anything that requires institutional memory the agent doesn’t have</li>
  <li>Decisions involving trade-offs you haven’t made explicit</li>
</ul>

<h2 id="the-actual-workflow">The actual workflow</h2>

<p>For this blog specifically: I open Claude Code, describe what I want to write, and it scaffolds the post. I edit, push. The whole loop is under five minutes once you have the repo set up right.</p>

<p>That’s the real unlock — not the AI capability, but removing the friction around it.</p>]]></content><author><name>Andrew Lara</name></author><category term="engineering" /><category term="ai" /><category term="automation" /><category term="agents" /><category term="backend" /><summary type="html"><![CDATA[A few things I’ve learned after spending months shipping agent-assisted workflows in production.]]></summary></entry><entry><title type="html">Teaching LLM Agents to Think Before They Spend: Cost-Aware Tool Orchestration via Reinforcement Learning</title><link href="https://blog.andrewlara.com/ai/engineering/projects/2026/04/13/cost-aware-tool-orchestration-via-reinforcement-learning.html" rel="alternate" type="text/html" title="Teaching LLM Agents to Think Before They Spend: Cost-Aware Tool Orchestration via Reinforcement Learning" /><published>2026-04-13T00:00:00-05:00</published><updated>2026-04-13T00:00:00-05:00</updated><id>https://blog.andrewlara.com/ai/engineering/projects/2026/04/13/cost-aware-tool-orchestration-via-reinforcement-learning</id><content type="html" xml:base="https://blog.andrewlara.com/ai/engineering/projects/2026/04/13/cost-aware-tool-orchestration-via-reinforcement-learning.html"><![CDATA[<script>
  window.MathJax = {
    tex: { inlineMath: [['$', '$'], ['\\(', '\\)']], displayMath: [['$$', '$$'], ['\\[', '\\]']] },
    svg: { fontCache: 'global' }
  };
</script>

<script defer="" src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-svg.js"></script>

<p><em>Built for the <a href="https://rdi.berkeley.edu/agentx-agentbeats">Berkeley RDI AgentX-AgentBeats Competition</a> (Research Track). Code and datasets available on <a href="https://huggingface.co/">Hugging Face</a>.</em></p>

<hr />

<h2 id="the-20-million-a-day-problem">The $20-Million-a-Day Problem</h2>

<p>Large language model agents are getting better at using tools. They can search the web, execute code, query databases, and call specialized APIs, often in multi-step chains that would have seemed like science fiction three years ago. But all of this capability carries an underexplored downside: tool use is expensive, and current agents are terrible at managing that expense.</p>

<p>Consider a simple question-answering task. An LLM agent with access to a web search API, a code interpreter, a knowledge graph, a calculator, a retrieval engine, and a document reader could plausibly use <em>any</em> of these tools, or several in sequence, to answer the question. Some of those tools are cheap (a calculator costs essentially nothing). Others are expensive (a web search API call with re-ranking might cost 100x more). The agent, having been trained or prompted purely for accuracy, will default to the most powerful tool available regardless of whether the question actually requires it.</p>

<p>This is not a hypothetical problem. Enterprise LLM deployments are already grappling with API cost management at scale, and the most common optimization strategies (prompt compression, caching, model cascading) operate <em>outside</em> the agent’s decision loop. They treat the agent as a black box and try to reduce costs around its edges.</p>

<p>We asked a different question: <strong>what if the agent itself learned when to spend and when to save?</strong></p>

<h2 id="costawaretoolenv-the-setup">CostAwareToolEnv: The Setup</h2>

<p>We built CostAwareToolEnv as a Gymnasium-compatible RL environment where an LLM agent must solve tasks by selecting from a toolkit of six tools, each with a different cost profile:</p>

<table>
  <thead>
    <tr>
      <th>Tool</th>
      <th>Cost</th>
      <th>Capability</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td><code class="language-plaintext highlighter-rouge">calculator</code></td>
      <td>$0.001</td>
      <td>Arithmetic and symbolic computation</td>
    </tr>
    <tr>
      <td><code class="language-plaintext highlighter-rouge">retriever</code></td>
      <td>$0.01</td>
      <td>Retrieval-augmented generation over a local corpus</td>
    </tr>
    <tr>
      <td><code class="language-plaintext highlighter-rouge">code_interpreter</code></td>
      <td>$0.05</td>
      <td>Python execution sandbox</td>
    </tr>
    <tr>
      <td><code class="language-plaintext highlighter-rouge">knowledge_graph</code></td>
      <td>$0.08</td>
      <td>Structured entity and relation queries</td>
    </tr>
    <tr>
      <td><code class="language-plaintext highlighter-rouge">web_search</code></td>
      <td>$0.10</td>
      <td>Live web search with snippet extraction</td>
    </tr>
    <tr>
      <td><code class="language-plaintext highlighter-rouge">expert_model</code></td>
      <td>$0.50</td>
      <td>Call to a larger, more capable LLM</td>
    </tr>
  </tbody>
</table>

<p>These costs are meant to reflect relative real-world API pricing (not exact dollar amounts). The key design choice is that the cost differences span <em>nearly three orders of magnitude</em>: a ratio of 500:1 between the cheapest and most expensive tool. This forces the agent to develop non-trivial strategies. You cannot just “always pick the cheap one” because many tasks genuinely require expensive tools. But you also cannot mindlessly escalate to <code class="language-plaintext highlighter-rouge">expert_model</code> on every query without blowing your budget.</p>
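<p>The cost table above can be captured as a simple registry. A minimal sketch (the tool identifiers match the table; the dictionary layout itself is our illustration, not necessarily the repo’s):</p>

```python
# Per-invocation tool costs in relative dollars, mirroring the table above.
TOOL_COSTS = {
    "calculator": 0.001,
    "retriever": 0.01,
    "code_interpreter": 0.05,
    "knowledge_graph": 0.08,
    "web_search": 0.10,
    "expert_model": 0.50,
}

# The cheapest-to-most-expensive ratio that forces non-trivial strategies.
cost_ratio = max(TOOL_COSTS.values()) / min(TOOL_COSTS.values())
print(cost_ratio)  # 500.0
```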

<h3 id="the-reward-function">The Reward Function</h3>

<p>The agent’s reward combines correctness and cost in a single scalar:</p>

\[R = \alpha \cdot \mathbb{1}[\text{correct}] - (1 - \alpha) \cdot \frac{C_{\text{used}}}{C_{\text{max}}}\]

<p>where $C_{\text{used}}$ is the total cost of tools invoked during the episode, $C_{\text{max}}$ is the maximum possible cost (using <code class="language-plaintext highlighter-rouge">expert_model</code> for every step), and $\alpha$ is a tunable parameter controlling the accuracy-cost trade-off. In our experiments we primarily use $\alpha = 0.7$, which tells the agent that correctness matters more than frugality, but not infinitely more.</p>

<p>The $\frac{C_{\text{used}}}{C_{\text{max}}}$ normalization is important. Without it, the cost penalty is on a completely different scale than the correctness reward, making the trade-off hard to learn. Normalizing to $[0, 1]$ puts both components on equal footing before the $\alpha$ weighting.</p>
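<p>In code, the reward is a one-liner. A minimal sketch (function and argument names are ours, not the repo’s):</p>

```python
def episode_reward(correct: bool, cost_used: float, cost_max: float,
                   alpha: float = 0.7) -> float:
    """R = alpha * 1[correct] - (1 - alpha) * (C_used / C_max)."""
    return alpha * float(correct) - (1.0 - alpha) * (cost_used / cost_max)

# A correct answer via the $0.01 retriever vs. via the $0.50 expert model
# (single-step episode, so C_max = 0.50):
print(round(episode_reward(True, 0.01, 0.50), 3))  # 0.694
print(round(episode_reward(True, 0.50, 0.50), 3))  # 0.4
```

<p>Note how the normalization makes the frugality bonus for a correct cheap answer (0.694 vs. 0.4) comparable in magnitude to the correctness term, rather than vanishingly small.</p>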

<h3 id="four-benchmarks-four-domains">Four Benchmarks, Four Domains</h3>

<p>We evaluate across four established benchmarks, chosen to require genuinely different tool-use strategies:</p>

<p><strong>HotpotQA</strong>: Multi-hop question answering. Requires chaining evidence across multiple documents. The retriever is often sufficient, but some questions benefit from web search when the local corpus is incomplete.</p>

<p><strong>MATH</strong>: Competition-level mathematics. The calculator handles arithmetic, but harder problems require the code interpreter (for symbolic algebra) or even the expert model (for proof-level reasoning).</p>

<p><strong>GPQA (Graduate-Level Google-Proof Q&amp;A)</strong>: Expert-domain questions in physics, chemistry, and biology. These push the boundaries of what smaller models can handle, making the <code class="language-plaintext highlighter-rouge">expert_model</code> tool a tempting but costly crutch.</p>

<p><strong>HumanEval</strong>: Code generation. The code interpreter is the obvious tool, but the agent also needs to decide whether to attempt generation directly or consult the expert model for harder functions.</p>

<p>Each benchmark is processed into a JSONL format with fields for the question, ground truth answer, a difficulty estimate (used for analysis, not training), and metadata about which tools are <em>plausibly relevant</em> (used to construct the episode, not revealed to the agent).</p>
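<p>An illustrative record in that schema (the exact field names in the released JSONL files may differ; this is just the shape described above):</p>

```python
import json

# One hypothetical processed record: question, ground truth, a difficulty
# estimate (analysis only, never used for training), and plausibly relevant
# tools (used to construct the episode, hidden from the agent).
record = {
    "question": "What is the capital of the country where the Rhine begins?",
    "answer": "Bern",
    "difficulty": 0.3,
    "relevant_tools": ["retriever", "web_search"],
}
line = json.dumps(record)          # one JSONL line
parsed = json.loads(line)
print(parsed["relevant_tools"])    # ['retriever', 'web_search']
```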

<h2 id="why-grpo">Why GRPO?</h2>

<p>We train with Group Relative Policy Optimization (GRPO), the algorithm introduced in the DeepSeekMath paper (Shao et al., 2024) and subsequently used to train DeepSeek-R1.</p>

<p>The case for GRPO over PPO in this setting comes down to three practical considerations:</p>

<p><strong>No critic model needed.</strong> PPO requires a separate value network (the critic) that estimates expected future reward. For LLM-scale agents, this means maintaining and training a second model of comparable size, doubling memory requirements. GRPO eliminates the critic entirely by estimating advantages from the relative quality of sampled completions within each batch.</p>

<p><strong>Natural fit for verifiable rewards.</strong> Our environment produces deterministic rewards: the answer is either correct or it is not, and the cost is exactly calculable. This makes GRPO’s approach of sampling multiple completions per prompt and computing group-relative advantages a clean fit. There is no need for a learned reward model.</p>

<p><strong>Accessible training.</strong> GRPO can be implemented on a single node with significantly less VRAM than PPO, which matters when you are training on research-scale compute.</p>

<p>Concretely, for each prompt $q$ in a training batch, GRPO samples a group of $G$ completions $\{o_1, o_2, \ldots, o_G\}$ from the current policy $\pi_\theta$. Each completion receives a reward $r_i$. The advantage for completion $i$ is computed as:</p>

\[\hat{A}_i = \frac{r_i - \text{mean}(\{r_1, \ldots, r_G\})}{\text{std}(\{r_1, \ldots, r_G\})}\]

<p>This z-score normalization is the core insight: within each group, completions that perform better than average get positive advantage, and worse-than-average completions get negative advantage. The policy is then updated to increase the probability of high-advantage completions while decreasing the probability of low-advantage ones, subject to a clipping constraint and KL penalty to prevent the policy from drifting too far from the reference model.</p>
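<p>The group-relative advantage is easy to sketch in plain Python (we use the population standard deviation and a small epsilon against uniform-reward groups; actual implementations vary in both choices):</p>

```python
import math

def group_advantages(rewards: list[float], eps: float = 1e-8) -> list[float]:
    """Z-score each reward against its own sampling group, GRPO-style."""
    g = len(rewards)
    mean = sum(rewards) / g
    std = math.sqrt(sum((r - mean) ** 2 for r in rewards) / g)
    return [(r - mean) / (std + eps) for r in rewards]

# Four sampled trajectories for one prompt: two correct-and-cheap, one
# correct-but-expensive, one wrong. Better-than-average completions get
# positive advantage; the group sums to (approximately) zero.
print(group_advantages([0.69, 0.69, 0.40, -0.30]))
```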

<p>In our setup, each completion is an entire tool-selection trajectory: the agent’s sequence of decisions about which tools to invoke (and in what order) for a given question. A trajectory that picks the right tools cheaply gets high reward; one that picks expensive tools unnecessarily (or cheap tools that produce wrong answers) gets low reward. Over training, the agent learns the map between question characteristics and cost-effective tool strategies.</p>

<h2 id="what-the-agent-learns">What the Agent Learns</h2>

<p>The interesting results are not just in aggregate accuracy-cost metrics (though those are good). The interesting results are in the <em>strategies</em> the agent discovers.</p>

<h3 id="strategy-1-difficulty-adaptive-escalation">Strategy 1: Difficulty-Adaptive Escalation</h3>

<p>On MATH, the trained agent develops a clear escalation pattern. For problems that the base model can likely solve with basic computation (algebra, arithmetic), it routes to <code class="language-plaintext highlighter-rouge">calculator</code> or <code class="language-plaintext highlighter-rouge">code_interpreter</code>. For problems that involve proof techniques or deeper reasoning, it escalates to <code class="language-plaintext highlighter-rouge">expert_model</code>. The agent is not given difficulty labels during training; it learns to estimate difficulty from the problem statement itself and adjusts its tool budget accordingly.</p>

<p>This mirrors what a cost-conscious human engineer would do: use the cheap tool first, and only reach for the expensive one when the cheap one is not enough. But the agent learns this strategy entirely from the reward signal, without any explicit “try cheap tools first” instruction.</p>

<h3 id="strategy-2-retrieval-gating-on-hotpotqa">Strategy 2: Retrieval Gating on HotpotQA</h3>

<p>For multi-hop QA, the agent learns to distinguish between questions where the local corpus is likely sufficient (using <code class="language-plaintext highlighter-rouge">retriever</code> at $0.01) and questions that reference recent events or obscure entities (escalating to <code class="language-plaintext highlighter-rouge">web_search</code> at $0.10). The gating signal appears to be the presence of time-sensitive language (“recently,” “current,” “as of”) and named entities with low corpus frequency.</p>

<h3 id="strategy-3-tool-composition-vs-tool-substitution">Strategy 3: Tool Composition vs. Tool Substitution</h3>

<p>On HumanEval, the agent learns two distinct modes. For straightforward function implementations, it generates code directly and uses the <code class="language-plaintext highlighter-rouge">code_interpreter</code> to verify. For functions requiring algorithmic insight (dynamic programming, graph algorithms), it queries the <code class="language-plaintext highlighter-rouge">expert_model</code> first for a strategy, <em>then</em> uses <code class="language-plaintext highlighter-rouge">code_interpreter</code> to implement and test. The key insight is that the agent treats <code class="language-plaintext highlighter-rouge">expert_model</code> + <code class="language-plaintext highlighter-rouge">code_interpreter</code> as a composed pipeline for hard problems, but avoids this expensive composition for easy ones.</p>

<h3 id="the-pareto-frontier">The Pareto Frontier</h3>

<p>By sweeping $\alpha$ from 0.5 (equal weight on accuracy and cost) to 1.0 (accuracy only), we trace out a Pareto frontier of accuracy-cost trade-offs. The shape of this frontier is informative:</p>

<ul>
  <li>At $\alpha = 1.0$, the agent converges to always using the most powerful tools, essentially recovering the behavior of an unconstrained agent.</li>
  <li>At $\alpha = 0.5$, the agent becomes aggressively frugal, sometimes sacrificing accuracy by using <code class="language-plaintext highlighter-rouge">calculator</code> on problems that genuinely need <code class="language-plaintext highlighter-rouge">code_interpreter</code>.</li>
  <li>The sweet spot at $\alpha = 0.7$ achieves roughly 92-95% of the unconstrained accuracy while reducing tool costs by 40-60% across benchmarks.</li>
</ul>

<p>This last number is the headline result: <strong>you can recover most of the accuracy of a “use everything” agent at roughly half the cost</strong>, simply by letting the agent learn its own tool-selection policy via RL.</p>
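<p>A toy calculation shows why $\alpha$ shifts the frontier. For a single-step task that a cheap tool solves sometimes and an expensive tool solves almost always, the expected reward flips which tool is preferred as $\alpha$ grows (the success probabilities here are made-up illustrations, not measured numbers):</p>

```python
# Which tool maximizes expected R = alpha*P(correct) - (1-alpha)*C/C_max?
def best_tool(alpha, p_cheap=0.4, p_exp=0.95, c_cheap=0.05, c_exp=0.50):
    c_max = c_exp  # single-step episode: worst case is one expert_model call
    r_cheap = alpha * p_cheap - (1 - alpha) * c_cheap / c_max
    r_exp = alpha * p_exp - (1 - alpha) * c_exp / c_max
    return "expert_model" if r_exp > r_cheap else "code_interpreter"

for a in (0.5, 0.7, 1.0):
    print(a, best_tool(a))
# 0.5 -> code_interpreter (frugal), 0.7 and 1.0 -> expert_model
```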

<h2 id="comparison-why-not-just-prompt">Comparison: Why Not Just Prompt?</h2>

<p>The obvious baseline is prompting. You could prepend “Use the cheapest tool that can solve this problem” to the system prompt and skip all the RL machinery. We tested this. The results are instructive:</p>

<p><strong>Prompting produces bimodal behavior.</strong> The prompted agent either ignores the cost instruction and uses expensive tools anyway (especially on hard benchmarks like GPQA), or over-corrects and uses cheap tools on everything, tanking accuracy. It lacks the smooth, difficulty-adaptive behavior that RL training produces.</p>

<p><strong>Prompting does not generalize across benchmarks.</strong> A prompt tuned for MATH cost optimization does not transfer well to HotpotQA, because the tool-selection strategies are fundamentally different. The RL-trained agent, by contrast, learns benchmark-specific strategies from the same training procedure.</p>

<p><strong>Prompting is fragile.</strong> Minor rephrasing of the cost instruction (“be frugal” vs. “minimize cost” vs. “prefer cheaper tools”) produces surprisingly different behaviors. The RL-trained policy is deterministic for a given input and less prompt-sensitive.</p>

<h2 id="connection-to-the-broader-landscape">Connection to the Broader Landscape</h2>

<p>This work sits at the intersection of several active research areas.</p>

<p><strong>RL for LLM agents.</strong> The post-DeepSeek-R1 era has seen an explosion of work applying GRPO and related algorithms to LLM training. Most of this work targets reasoning improvement (getting the right answer). Our contribution is applying the same machinery to <em>resource management</em>: teaching agents not just <em>what</em> to do but <em>how expensively</em> to do it.</p>

<p><strong>Cost-aware planning.</strong> The CATP-LLM work (Wu et al., ICCV 2025) is the closest prior work we are aware of. They propose a tool planning language and offline RL algorithm for cost-aware tool planning. Our approach differs in using online GRPO training, a simpler environment formulation (direct tool selection rather than plan token generation), and evaluation across a broader set of benchmarks. We also focus on the <em>interpretability</em> of learned strategies, which CATP-LLM does not emphasize.</p>

<p><strong>Agent-R1 and agentic RL.</strong> The Agent-R1 framework (Cheng et al., 2025) demonstrates end-to-end RL training for LLM agents with tool use. Their focus is on maximizing agent capability; ours is on optimizing the accuracy-cost trade-off. These are complementary goals: an agent trained with Agent-R1-style methods could potentially be further fine-tuned with our cost-aware reward to produce a capable <em>and</em> efficient agent.</p>

<p><strong>Model cascading and routing.</strong> Systems like RouteLLM and Hybrid LLM route queries to different models based on difficulty. Our approach is more general: rather than routing between models, the agent selects from a heterogeneous toolkit where the cost-accuracy trade-off varies per tool and per query type.</p>

<h2 id="what-we-did-not-do-yet">What We Did Not Do (Yet)</h2>

<p>Some honest limitations:</p>

<p><strong>Single-step tool selection.</strong> Our current environment models tool selection as a single-step (or short-horizon) decision. Real-world agents often engage in multi-turn, multi-tool chains with branching logic. Extending to full trajectory optimization with intermediate tool-use decisions is the natural next step.</p>

<p><strong>Fixed cost model.</strong> Real API costs are dynamic (rate limits, batching discounts, latency-cost trade-offs). Our fixed cost table is a useful simplification, but a production system would need to account for time-varying pricing.</p>

<p><strong>Scale.</strong> We train on 7B-parameter models. Whether the learned cost-awareness strategies transfer to or emerge differently in larger models (70B+) is an open question.</p>

<p><strong>Reward hacking.</strong> As with any RL system, reward hacking is a concern. An agent could learn to exploit evaluation quirks (for example, partial-credit scoring) rather than genuinely learning cost-efficient strategies. We mitigate this with binary correctness scoring (no partial credit) and manual inspection of learned trajectories, but more rigorous robustness analysis is warranted.</p>

<h2 id="reproducing-this-work">Reproducing This Work</h2>

<p>The full codebase, environment definition, processed datasets (four JSONL files for HotpotQA, MATH, GPQA, and HumanEval), and training configs are available via our Hugging Face organization. The environment is pip-installable and follows the standard Gymnasium API, so it plugs into existing RL training frameworks (we used TRL’s GRPO implementation).</p>

<p>If you are interested in extending this (adding new tools, new benchmarks, or different RL algorithms), the environment is designed to be modular. Defining a new tool means writing a Python class with a <code class="language-plaintext highlighter-rouge">cost</code> attribute, an <code class="language-plaintext highlighter-rouge">execute()</code> method, and a <code class="language-plaintext highlighter-rouge">description</code> string.</p>
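<p>For example, a hypothetical new tool might look like this (the class and its members are our illustration of the three-part interface; the registration mechanism in the actual repo may differ):</p>

```python
class UnitConverter:
    """Hypothetical cheap tool: converts between metric length units."""
    cost = 0.002  # relative per-invocation cost, same scale as the built-ins
    description = "Convert a value between metric length units."

    def execute(self, value: float, src: str, dst: str) -> float:
        # Everything goes through meters as the common base unit.
        meters = {"mm": 1e-3, "cm": 1e-2, "m": 1.0, "km": 1e3}
        return value * meters[src] / meters[dst]

tool = UnitConverter()
print(tool.execute(2.5, "km", "m"))  # 2500.0
```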

<h2 id="conclusion">Conclusion</h2>

<p>The core claim of this work is simple: LLM agents should be trained to consider cost, not just accuracy. The mechanism we propose (a cost-penalized reward signal + GRPO training) is deliberately minimal. There is no complicated architecture, no novel algorithm, and no custom training infrastructure. The novelty is in the <em>framing</em>: treating tool cost as a first-class component of the agent’s reward function and showing that standard RL techniques produce non-trivial, interpretable, and effective cost-optimization strategies.</p>

<p>As LLM agents move from research prototypes to production systems processing millions of queries per day, the difference between “always use the best tool” and “use the right tool for the job” becomes worth millions of dollars. We think this is a direction worth investing in.</p>

<hr />

<h2 id="references">References</h2>

<ol>
  <li>Shao, Z., et al. “DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models.” arXiv:2402.03300, 2024.</li>
  <li>Guo, D., et al. “DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning.” arXiv:2501.12948, 2025.</li>
  <li>Schulman, J., et al. “Proximal Policy Optimization Algorithms.” arXiv:1707.06347, 2017.</li>
  <li>Wu, Y., et al. “CATP-LLM: Empowering Large Language Models for Cost-Aware Tool Planning.” ICCV 2025.</li>
  <li>Cheng, M., et al. “Agent-R1: Training Powerful LLM Agents with End-to-End Reinforcement Learning.” arXiv:2511.14460, 2025.</li>
  <li>Yang, Z., et al. “HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering.” EMNLP 2018.</li>
  <li>Hendrycks, D., et al. “Measuring Mathematical Problem Solving with the MATH Dataset.” NeurIPS 2021.</li>
  <li>Rein, D., et al. “GPQA: A Graduate-Level Google-Proof Q&amp;A Benchmark.” arXiv:2311.12022, 2023.</li>
  <li>Chen, M., et al. “Evaluating Large Language Models Trained on Code.” arXiv:2107.03374, 2021.</li>
  <li>Weng, L. “LLM Powered Autonomous Agents.” Lil’Log, 2023.</li>
  <li>Li, Z., et al. “Encouraging Good Processes Without the Need for Good Answers: Reinforcement Learning for LLM Agent Planning.” arXiv:2508.19598, 2025.</li>
</ol>

<hr />

<p><em>If you found this post useful, feel free to cite it:</em></p>

<div class="language-bibtex highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nc">@article</span><span class="p">{</span><span class="nl">lara2026costaware</span><span class="p">,</span>
  <span class="na">title</span>   <span class="p">=</span> <span class="s">{Teaching LLM Agents to Think Before They Spend: Cost-Aware Tool Orchestration via Reinforcement Learning}</span><span class="p">,</span>
  <span class="na">author</span>  <span class="p">=</span> <span class="s">{Lara, Andrew and Sharma, Yashawasi}</span><span class="p">,</span>
  <span class="na">journal</span> <span class="p">=</span> <span class="s">{blog.andrewlara.com}</span><span class="p">,</span>
  <span class="na">year</span>    <span class="p">=</span> <span class="s">{2026}</span><span class="p">,</span>
  <span class="na">month</span>   <span class="p">=</span> <span class="s">{Apr}</span><span class="p">,</span>
  <span class="na">url</span>     <span class="p">=</span> <span class="s">{https://blog.andrewlara.com/2026/04/13/cost-aware-tool-orchestration-via-reinforcement-learning.html}</span>
<span class="p">}</span>
</code></pre></div></div>]]></content><author><name>Andrew Lara</name></author><category term="ai" /><category term="engineering" /><category term="projects" /><category term="reinforcement-learning" /><category term="llm-agents" /><category term="tool-use" /><category term="grpo" /><category term="cost-optimization" /><summary type="html"><![CDATA[We introduce CostAwareToolEnv, an RL environment where LLM agents learn to balance task accuracy against tool invocation costs. Trained with GRPO on four diverse benchmarks, our agent discovers non-trivial cost-accuracy trade-offs that static prompting strategies miss entirely.]]></summary></entry><entry><title type="html">Welcome</title><link href="https://blog.andrewlara.com/meta/2026/04/13/welcome.html" rel="alternate" type="text/html" title="Welcome" /><published>2026-04-13T00:00:00-05:00</published><updated>2026-04-13T00:00:00-05:00</updated><id>https://blog.andrewlara.com/meta/2026/04/13/welcome</id><content type="html" xml:base="https://blog.andrewlara.com/meta/2026/04/13/welcome.html"><![CDATA[<p>First post. More soon.</p>]]></content><author><name>Andrew Lara</name></author><category term="meta" /><category term="first-post" /><summary type="html"><![CDATA[First post. More soon.]]></summary></entry></feed>