
    FEBRUARY 2026

    Building Relationships, Not Just Prompts

    By Alex Gilbert

    I just finished reading "Agents in the Long Game of AI" by Marjorie McShane, Sergei Nirenburg, and Jesse English, and it offered some useful insights for building a startup in 2026. We've been coasting on the "veneer of intelligence" from massive language models for a few years now, but the book makes a convincing case that this honeymoon phase is ending. Being able to generate a cool response is no longer enough. The authors talk about the "black box" liability: the risk we take when we deploy systems that even their developers can't explain. As LLM use grows, their shortfalls, like making up facts or working from very limited context, are becoming ever more apparent. Because of this, users are starting to trust the "oracle" less and less.

    If we want to build something that people actually rely on, we have to move past the black box and start building systems based on trust and relationships. To get ahead, we need agents that are explainable by design because they use human-inspired computational modeling. The book introduces Language-Endowed Intelligent Agents (LEIAs), which take chatbots to the next level; they are social collaborators that use a "property-rich world model," or an ontology, to truly understand a situation, rather than just matching patterns in a dataset.
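    To make the "property-rich world model" idea concrete, here's a toy sketch of my own (not code from the book, and deliberately oversimplified): pattern matching answers from surface associations, while an ontology lets the agent check typed properties before acting. All the concept names and properties below are invented for illustration.

```python
# Toy contrast: pattern matching vs. a property-rich world model.
# Everything here is a made-up illustration, not the book's system.

# Pattern matching: an answer is just a remembered association.
pattern_memory = {"aspirin": "headache"}

# Ontology: concepts carry typed properties the agent can reason over.
ontology = {
    "aspirin": {
        "is_a": "drug",
        "treats": ["headache", "fever"],
        "contraindicated_for": ["bleeding-disorder"],
    },
    "headache": {"is_a": "symptom", "located_in": "head"},
}

def can_recommend(drug: str, symptom: str, patient_conditions: set) -> bool:
    """Recommend only when the world model supports it AND no
    contraindication applies -- a property-level check, not a lookup."""
    entry = ontology.get(drug, {})
    return (
        symptom in entry.get("treats", [])
        and not patient_conditions & set(entry.get("contraindicated_for", []))
    )

print(can_recommend("aspirin", "headache", set()))                  # True
print(can_recommend("aspirin", "headache", {"bleeding-disorder"}))  # False
```

    The pattern-memory dict would happily link aspirin to headache for every patient; the ontology-backed check can refuse for the patient with a bleeding disorder, because the model knows *why* the recommendation might fail.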

    What really struck me was the difference between how a human expert and a standard 2020s-era AI explain things. Standard AI usually gives you an answer based on word co-occurrence probabilities and population-level statistics. By contrast, a "Long Game" agent explains its reasoning using causal chains. Imagine a medical AI: instead of just saying "there's an 80% chance this procedure works," it explains the mechanics of how the procedure actually fixes the problem. That's the kind of transparency that earns trust and builds a real relationship with the user.
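    The contrast above can be sketched in a few lines. This is my own hypothetical example, not anything from the book; the causal chain and the 80% figure are invented for illustration.

```python
# Hypothetical contrast between a statistical answer and a
# causal-chain explanation. Chain contents are invented.

causal_chain = [
    ("stent insertion", "widens the narrowed artery"),
    ("widened artery", "restores blood flow to the heart muscle"),
    ("restored blood flow", "relieves the chest pain"),
]

def statistical_answer() -> str:
    # The black-box style: a population-level number, no mechanism.
    return "There is an 80% chance this procedure works."

def causal_explanation(chain) -> str:
    # The "Long Game" style: walk the cause-effect links explicitly.
    steps = [f"{cause} -> {effect}" for cause, effect in chain]
    return "Because: " + "; ".join(steps)

print(statistical_answer())
print(causal_explanation(causal_chain))
```

    The second answer is auditable: a doctor can challenge any single link in the chain, which is exactly what you can't do with a bare probability.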

    These agents also independently judge whether they understand a situation well enough to act. A big part of that relationship is knowing when to stop: if an agent's confidence is low, it should have the "social skills" to pause and ask a human for clarification instead of guessing.
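    In code, that "know when to stop" behavior is just a confidence gate. A minimal sketch, assuming a self-assessed confidence score and an arbitrary 0.75 threshold (both my inventions, not the book's design):

```python
# Confidence-gated action: act only when self-assessed understanding
# clears a threshold; otherwise ask the human. Threshold is assumed.

CONFIDENCE_THRESHOLD = 0.75

def decide(interpretation: str, confidence: float) -> str:
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"ACT: {interpretation}"
    return (
        f"ASK: I'm only {confidence:.0%} sure you meant "
        f"'{interpretation}'. Can you clarify?"
    )

print(decide("schedule the follow-up scan", 0.92))  # acts
print(decide("schedule the follow-up scan", 0.40))  # asks for clarification
```

    The hard part in practice is the confidence estimate itself, not the gate; the book's argument is that an agent with an explicit world model can actually compute something meaningful here, rather than a calibrated-sounding guess.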

    The "shiny object" phase of LLMs is giving way to a much more serious era. Building a startup now means focusing on content-centric agents that prioritize meaning over simple scale. We should be building systems that learn through interaction and explain their inner workings in human terms, so that we are actually solving problems rather than just predicting words.