This week I saw AI step from hype to handles: Showboat and Rodney let agents prove what they built; assistants are starting to choose our stacks; and the real moat is semantics—knowledge graphs and shared layers powering search, SQL QA, agents, and marketplaces. For balance, I dug into minds and motives: Bloom on rational compassion and why smart people believe falsehoods, the right to be disliked, plus industry tremors from lean AI-native teams to compact models and Wikipedia’s data deals.
Others
- ‘Introducing Showboat and Rodney, So Agents Can Demo What They’ve Built’: Simon Willison introduces Showboat and Rodney to help AI coding agents prove what they built. Showboat is a Go CLI that assembles Markdown demos via init/note/exec/image subcommands, auto-embedding command output and screenshots; it also offers pop, verify, and extract, and its help output is written so agents can learn the tool unassisted. Rodney uses the Rod Go library to drive Chrome for web UI demos. Paired with TDD, these tools cut manual QA, avoid costly agent swarms, and let humans see real, running results.
- ‘Accept, Accept, Accept: How AI Is Choosing Your Tech Stack’: Developer-tool go-to-market is shifting from sales- and product-led eras to an AI-native one, where coding assistants choose libraries and services with a few accept clicks. To win, be present in model training data and project context (CLAUDE.md), and ship skills/plugins, zero-setup defaults, AI-readable docs and error messages, and strong ecosystem ties. Audit what AIs recommend, cut friction, and invest in open source and content. An AI-mediated flywheel rewards the tools AIs already know and use well.
- ’✨ AI Dev X SF 26’: AI is reshaping jobs: most layoffs aren’t from automation, but AI-savvy workers and smaller AI-native teams are replacing those who don’t adapt; opportunities abound. OpenClaw, a viral agent, proved useful yet insecure and overhyped. Moonshot’s Kimi K2.5 adds vision and subagents, leading many benchmarks. Wikipedia struck enterprise data deals with major AI firms. Mistral released compact Ministral models via pruning and distillation.
- ‘Leveraging Knowledge Graphs in Real Estate Search’: Zillow builds a real estate Knowledge Graph to unify MLS data, listing text/images, POIs, and user queries. With a defined ontology, normalization, and ML (SBERT/BERT) plus human-in-the-loop, it disambiguates concepts and links nodes. The KG powers concept search, autocomplete, natural-language query understanding, and user profiles, improving recall and relevance. A content platform feeds near real-time updates and versioning for reliable evolution.
- ‘Powering Agentic AI With Knowledge Graphs’: Agentic AI plans and acts autonomously but needs enterprise context to be safe and accurate. LLMs lack grounded, business-specific relationships, risking errors. Stardog’s Enterprise Knowledge Graph connects and governs all data as a virtual, real-time semantic layer and single source of truth. By integrating with LLMs via APIs/MCP, it lets agents reason across teams, policies, and systems, enabling explainable, reliable decisions and scalable, future-ready AI.
- ‘Beyond Data Modeling’: Data modeling organizes data and eases querying, yet still makes analysts write SQL and reason about joins. A semantic layer adds a compiler and simple interfaces like REST, turning definitions into consistent SQL, boosting governance, security enforcement, cache hits, and performance. It powers reliable, low-latency data products and can cut total cost. Core themes: consistency, interface, security, performance and cost.
- ‘Increasing Accuracy of LLM-Powered Question Answering on SQL Databases: Knowledge Graphs to the Rescue’: Study on LLM Text-to-SQL shows Knowledge Graphs markedly boost accuracy. Using an insurance-domain benchmark with ontology and mappings, GPT-4 zero-shot on raw SQL scored 16% accuracy; asking over the KG raised it to 54%. Leveraging the ontology to detect and repair queries lifted accuracy to 72.55%, plus 8% “I don’t know” cases, yielding ~20% error. Conclusion: investing in KGs improves enterprise QA.
- ‘The AI Iceberg’: AI’s real leverage lies beneath algorithms: data pipelines and semantics. LLMs thrive on high-quality, well-connected data. Most orgs own rich but fragmented data; empower engineers with a big-picture, semantic view to weave it into coherent graphs. Act: model data as graphs, decentralize publishing, centralize a shared ontology to keep schemas consistent, and pair LLMs with ontologies for a reinforcing loop.
- ‘Introducing the Property Graph Index: A Powerful New Way to Build Knowledge Graphs With LLMs’: LlamaIndex launches the Property Graph Index, a labeled property graph for building richer LLM knowledge graphs. It surpasses triples with types, properties, embeddings, and hybrid search. Build graphs via schema-guided, implicit, or free-form extraction. Nodes are embedded with optional vector stores. Query via keyword/synonym, vector similarity, and Cypher. Powered by a PropertyGraphStore abstraction.
- ‘The Role of Knowledge Graphs in Overcoming LLM Limitations’: LLMs excel at text but struggle with multi-section context, contradictions, timeliness, and real understanding. Knowledge graphs add structured, connected, up-to-date facts so models can link evidence (e.g., GDPR cases), specialize by domain, personalize, and enforce ethics. Integration is complex, but tools are improving; most firms need not build proprietary LLMs—use KGs when they close clear context gaps.
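The ‘Beyond Data Modeling’ piece above describes a semantic layer as a compiler: a metric is defined once, declaratively, and every consumer gets the same generated SQL instead of hand-writing joins and aggregations. A minimal sketch of that idea (the metric name, table, and columns are invented for illustration, not from any product):

```python
# Toy "semantic layer compiler": turn a declarative metric definition
# into a consistent SQL string for any requested grouping.
def compile_metric(metric: dict, group_by: list[str]) -> str:
    dims = ", ".join(group_by)
    select = f"{dims}, " if dims else ""
    sql = (f"SELECT {select}{metric['aggregation']}({metric['column']}) "
           f"AS {metric['name']} FROM {metric['table']}")
    if metric.get("filter"):
        sql += f" WHERE {metric['filter']}"
    if dims:
        sql += f" GROUP BY {dims}"
    return sql

# Hypothetical metric: one definition, reused by every dashboard and query.
revenue = {"name": "net_revenue", "aggregation": "SUM", "column": "amount",
           "table": "orders", "filter": "status = 'completed'"}

print(compile_metric(revenue, ["region"]))
# SELECT region, SUM(amount) AS net_revenue FROM orders WHERE status = 'completed' GROUP BY region
```

Because every consumer compiles from the same definition, the generated SQL is identical across tools, which is what enables the governance, cache-hit, and consistency gains the article claims.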
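The accuracy figures in the Text-to-SQL study above imply its headline error rate. Treating “I don’t know” responses as abstentions rather than errors, the arithmetic works out as:

```python
# Reproducing the error-rate arithmetic from the benchmark figures quoted above.
# "Error" counts wrong answers only; "I don't know" responses are abstentions.
results = {
    "GPT-4 zero-shot over raw SQL": {"accuracy": 16.0, "idk": 0.0},
    "Same questions over the KG":   {"accuracy": 54.0, "idk": 0.0},
    "KG + ontology-based repair":   {"accuracy": 72.55, "idk": 8.0},
}

for setup, r in results.items():
    error = 100.0 - r["accuracy"] - r["idk"]
    print(f"{setup}: {error:.2f}% wrong answers")
# The repaired setup leaves 100 - 72.55 - 8 = 19.45% wrong, i.e. the ~20% error cited.
```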
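The Property Graph Index item above rests on the difference between bare triples and a labeled property graph: nodes and edges carry types and arbitrary key/value properties. A minimal in-memory sketch of that data model (illustrative only, not the LlamaIndex PropertyGraphStore API):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    id: str
    label: str                      # node type, e.g. "Person", "Company"
    properties: dict = field(default_factory=dict)

@dataclass
class Edge:
    source: str
    relation: str                   # edge label, e.g. "WORKS_AT"
    target: str
    properties: dict = field(default_factory=dict)

class PropertyGraph:
    def __init__(self):
        self.nodes: dict[str, Node] = {}
        self.edges: list[Edge] = []

    def add_node(self, node): self.nodes[node.id] = node
    def add_edge(self, edge): self.edges.append(edge)

    def neighbors(self, node_id, relation=None):
        """Nodes one hop away, optionally filtered by edge label."""
        return [self.nodes[e.target] for e in self.edges
                if e.source == node_id
                and (relation is None or e.relation == relation)]

g = PropertyGraph()
g.add_node(Node("ada", "Person", {"name": "Ada"}))
g.add_node(Node("acme", "Company", {"industry": "software"}))
g.add_edge(Edge("ada", "WORKS_AT", "acme", {"since": 2021}))
print([n.id for n in g.neighbors("ada", "WORKS_AT")])  # -> ['acme']
```

A plain triple could only say `(ada, WORKS_AT, acme)`; here the edge also records `since: 2021`, and each node could additionally hold an embedding for the hybrid vector retrieval the article describes.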
Philosophy
- ‘«La gente que se cree las “fake news” no es tonta. Usa la razón»’ (“People who believe fake news aren’t stupid. They use reason”): Interview with psychologist Paul Bloom: political leanings are partly heritable; humans are tribal, favoring kin and community, though morality can check prejudice. He critiques Freud as unfalsifiable and psychoanalysis as weak science. Empathy is biased, so he advocates rational compassion. People use reason to serve goals like group loyalty, which is why they may accept fake news. Unconscious drives still matter. AI is powerful but risky, likely to fuel misinformation and affect elections.
- ‘El Derecho a Caer Mal’ (“The Right to Be Disliked”): We crave approval, but tying self-worth to others’ opinions breeds conflict and erodes identity. Rooted in childhood and reward-based dynamics, this need is unrealistic: we cannot please everyone, and criticism, especially online, is inevitable and shaped by algorithms. Instead of chasing universal approval, aim for authenticity and assertive communication, improving relationships with those who matter.
- ‘Creencias Profundamente Arraigadas De Las Que Sabemos Muy Poco’ (“Deeply Rooted Beliefs We Know Very Little About”): Antonio Ortiz, echoing Andy Masley, argues that many deeply held beliefs are unexamined, often amplified online by moral certainty detached from facts. Citing Henrich, he highlights social learning, conformity, and prestige bias; changing minds requires social fit. He notes a historic fertility collapse, suggests people may simply prefer fewer children, and cites Noah Smith’s call for a well-funded fertility-policy center. He also flags pieces on parenting’s meaning, the right to be disliked, doubts about boredom, and Bloom’s rational compassion.
Data Science
- ‘The Semantic Router’: Tony Seale argues that combining a Semantic Layer (mapping complex data to business terms) with Semantic Web standards enables a Shared Semantic Layer. Using open standards—an internal schema.org, linked data sets, and a connected data catalog—organizations can unify data, enable reasoning across sources, and make it discoverable. This aligns data products with business value, boosts BI/AI, and empowers better, organization-wide decisions.
- ‘Knowledge Graphs: Make This Smart Shift to Your Data & Analytics Approach’: Knowledge graphs capture real-world relationships, turning data into machine-understandable knowledge for smarter search and analytics. Uber Eats used this to expand “udon” to related items, improving discovery without new schemas. They unify siloed, mixed-format data, encode complex logic, adapt quickly, enable secure collaboration and IoT, power semantic search via NLP, and cut time, storage, and IT effort for near-real-time decisions.
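The “udon” example above is graph-based query expansion: instead of matching the literal query string, the search walks the food graph and also retrieves related concepts. A sketch in that spirit (the graph contents are invented for illustration):

```python
# Tiny food-concept graph; in production this would be a knowledge graph,
# not a hand-written dict.
food_graph = {
    "udon":  ["ramen", "soba", "japanese noodles"],
    "ramen": ["udon", "tonkotsu"],
}

def expand_query(term, graph, depth=1):
    """Collect the query term plus all neighbors up to `depth` hops away."""
    expanded, frontier = {term}, {term}
    for _ in range(depth):
        frontier = {n for t in frontier for n in graph.get(t, [])} - expanded
        expanded |= frontier
    return expanded

print(sorted(expand_query("udon", food_graph)))
# ['japanese noodles', 'ramen', 'soba', 'udon']
```

The payoff the article claims is that discovery improves without schema changes: adding one edge to the graph immediately broadens what a query can surface.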
Online Marketplaces
- ‘Contextualizing Airbnb by Building Knowledge Graph’: Airbnb built a knowledge graph linking travel entities and relationships to deliver trip context. Backed by a relational store with node/edge types, GUIDs, provenance, a graph query API, daily dumps, and a Kafka-based mutator, it scales reliably. A hierarchical taxonomy categorizes Homes, Experiences, and more via manual and ML tagging. It powers destination inspiration and amenity/landmark hints on product detail pages (PDPs), while tackling fuzzy queries and data reconciliation.
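The Airbnb design above stores the graph in a relational backend: typed node and edge rows keyed by GUIDs and tagged with provenance, with traversal done via joins. A minimal sketch of that layout using SQLite (the schema and example data are illustrative, not Airbnb’s actual tables):

```python
import sqlite3
import uuid

# Graph-over-relational layout: one table of typed nodes, one of typed edges,
# each row keyed by a GUID and carrying its provenance.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE nodes (
    guid TEXT PRIMARY KEY, node_type TEXT, name TEXT, provenance TEXT)""")
db.execute("""CREATE TABLE edges (
    guid TEXT PRIMARY KEY, edge_type TEXT,
    from_guid TEXT, to_guid TEXT, provenance TEXT)""")

paris = str(uuid.uuid4())
louvre = str(uuid.uuid4())
db.execute("INSERT INTO nodes VALUES (?,?,?,?)",
           (paris, "place", "Paris", "editorial"))
db.execute("INSERT INTO nodes VALUES (?,?,?,?)",
           (louvre, "landmark", "Louvre", "ml_tagging"))
db.execute("INSERT INTO edges VALUES (?,?,?,?,?)",
           (str(uuid.uuid4()), "located_in", louvre, paris, "ml_tagging"))

# One-hop traversal as a join: which landmarks are located in Paris?
rows = db.execute("""SELECT n.name FROM edges e
                     JOIN nodes n ON n.guid = e.from_guid
                     WHERE e.edge_type = 'located_in' AND e.to_guid = ?""",
                  (paris,)).fetchall()
print(rows)  # [('Louvre',)]
```

In the real system a graph query API would sit in front of these tables, with a Kafka-based mutator applying writes and daily dumps feeding downstream consumers.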