‘Redfin Joins Zillow in Banning Off-MLS Listings, Urges ‘Coming-Soon’ Compromise’: Redfin, following Zillow’s recent move, has announced it will no longer display homes publicly marketed before being listed on the MLS, advocating for more transparency in the real estate market. CEO Glenn Kelman suggests a “coming-soon” policy as a compromise, allowing sellers to gauge interest without affecting market data. The decision aligns with Redfin’s principles amid its impending acquisition by Rocket Companies and broader industry debates on transparency, which have drawn both support and opposition from brokerages.
‘CoStar Boss Says Zillow’s MLS “Power Play” Threatens Agents’: CoStar CEO Andy Florance criticized Zillow’s decision to block listings that are not submitted to a Multiple Listing Service (MLS) within 24 hours of being publicly marketed, calling it a bold power play. The move follows the rollback of the National Association of Realtors’ Clear Cooperation Policy and is seen as Zillow asserting control over how listings are marketed, which Florance argues benefits Zillow at agents’ expense. In contrast, Homes.com, operated by CoStar, connects leads directly to agents without competing for commissions.
‘Spanish PropTech Inversiva Closes €1.2 Million Funding Round’: Spanish PropTech Inversiva has closed a €1.2 million funding round, which it will use for national and international growth, developing new business units, and enhancing its technology platform with AI and machine learning. Founded in 2022, Inversiva has established itself as a significant player in Spain with a technology-driven, passive real estate investment model aimed at clients who lack the time or expertise to invest directly. It has managed over €50 million in transactions for more than 400 clients, achieving a 96.2% occupancy rate in major Spanish cities, and with the new funding plans to invest a further €50 million in real estate within the next year.
‘How Airbnb Measures Listing Lifetime Value’: Airbnb utilizes a Listing Lifetime Value (LTV) framework to assess the value of accommodations by estimating baseline, incremental, and marketing-induced LTV. This approach helps identify valuable listings, guide hosts, and develop marketing strategies. Challenges include accounting for incremental value versus cannibalization within the marketplace and adapting to market changes, such as the COVID-19 pandemic, which necessitated updating LTV estimates. Accurate LTV estimation enhances Airbnb’s community engagement by optimizing listings and identifying effective marketing initiatives.
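The three-part decomposition described above can be sketched in a few lines. This is a minimal illustration of the idea, not Airbnb’s internal implementation; the class and field names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class ListingLTV:
    """Illustrative decomposition of a listing's lifetime value.

    Names are assumptions for illustration, not Airbnb's internal API.
    """
    baseline: float           # value expected with no intervention
    incremental: float        # value net of cannibalizing other listings
    marketing_induced: float  # value attributable to marketing spend

    def total(self) -> float:
        # Total LTV is the sum of the three components.
        return self.baseline + self.incremental + self.marketing_induced

# Example: a listing whose raw booking revenue would overstate its value,
# because part of its demand would otherwise have gone to other listings.
ltv = ListingLTV(baseline=1200.0, incremental=300.0, marketing_induced=150.0)
print(ltv.total())  # 1650.0
```

Separating the incremental term from the baseline is what lets a marketplace distinguish a listing that creates new demand from one that merely captures demand other listings would have served.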
AI
‘On Jagged AGI: O3, Gemini 2.5, and Everything After’: In the article “On Jagged AGI: O3, Gemini 2.5, and Everything After,” Ethan Mollick explores the challenges of assessing AI intelligence and the capabilities of the newly released models Gemini 2.5 and OpenAI’s o3. These models demonstrate significant advances, such as carrying out complex tasks from vague instructions, like marketing planning or geo-guessing from photos. Despite these impressive capabilities, AI performance remains uneven, excelling in some areas while faltering at simple tasks—a pattern Mollick calls the “Jagged Frontier.” This unevenness raises the question of whether such systems can be considered true AGI (Artificial General Intelligence). He proposes the label “Jagged AGI”: models with superhuman skill on some tasks but without consistency, so human judgment is still needed to determine where they are reliable. Mollick also addresses adoption speed, noting that AI’s agentic capabilities may let it integrate into human systems more rapidly than past technologies did.
‘🔥 Google Unveils 27B Gemma 3: Quantized for Consumer GPUs’: Google has released Quantization-Aware Trained (QAT) versions of its Gemma 3 models, notably the 27B model, which maintain performance while significantly reducing memory requirements, making them suitable for consumer GPUs. Through int4 quantization, the 27B model’s footprint drops from 54 gigabytes to just 14.1 gigabytes of VRAM, with comparable savings across the other model sizes.
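The reported numbers are easy to sanity-check with back-of-envelope arithmetic: weight memory is roughly parameter count times bits per parameter. The small gap between the ideal int4 figure and the reported 14.1 GB is presumably overhead (e.g. some tensors kept at higher precision); that explanation is an assumption, not something the article states.

```python
def model_weight_gb(n_params: float, bits_per_param: float) -> float:
    """Approximate weight memory in gigabytes (1 GB = 1e9 bytes)."""
    return n_params * bits_per_param / 8 / 1e9

params = 27e9  # Gemma 3 27B

# 16-bit weights: matches the article's 54 GB figure.
print(round(model_weight_gb(params, 16), 1))  # 54.0

# int4 weights: ~13.5 GB ideal, vs. the reported 14.1 GB of VRAM.
print(round(model_weight_gb(params, 4), 1))   # 13.5
```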
‘Classifier Factory’: The “Classifier Factory” post by mistral.ai discusses the critical role classification models play across domains in enhancing efficiency, improving user experience, and ensuring compliance. Applications include moderation, intent detection, sentiment analysis, data clustering, fraud detection, spam filtering, and recommendation systems, enabling personalized experiences and data-driven decision-making. Mistral.ai’s Classifier Factory lets organizations create custom classifiers using the efficient models and methods available on its platform and API.
‘Transformers Backend Integration in vLLM’: The Hugging Face Transformers library provides a versatile interface for model architectures, suitable for everything from research to fine-tuning. Scaling those models for deployment, however, requires efficient inference, which is where vLLM excels. With Transformers now integrated as a backend in vLLM, models gain improved throughput and latency without sacrificing flexibility. vLLM is faster and more resource-efficient under load, supports an OpenAI-compatible API, and simplifies model deployment, serving as a bridge between plug-and-play flexibility and optimized performance.
‘Actors Horrified as They Learn What Selling Their Faces as AI Actually Means’: Actors are discovering the disturbing implications of selling their likenesses for AI use, as many find their images exploited in misleading or unethical ways without recourse due to binding contracts. Examples include South Korean actor Simon Lee’s image promoting dubious health cures and British actor Connor Yeates’ likeness being used in a political video, both without their consent. Such incidents highlight AI’s role in spreading misinformation and deepfakes, prompting public figures and actors to demand better protections and regulations against these invasive practices.
‘OpenAI o3 and o4-mini System Card’: Simon Willison’s blog discusses the OpenAI o3 and o4-mini system card, noting the surprise of both models sharing a single document. These models enhance their reasoning by using tools such as image transformation and Python for data analysis. Willison observes changes in OpenAI’s PersonQA benchmark scores but questions how significant they are. The card also redefines “sandbagging” as a model concealing its full capabilities, links the term to an Anthropic publication, and highlights the broader problem of ambiguous AI terminology.
AI Agents
‘Meet Google A2A: The Protocol That Will Revolutionize Multi-Agent AI Systems’: The article introduces Google’s Agent-to-Agent (A2A) protocol, a standardized communication framework designed to simplify interactions between multiple specialized AI agents, which traditionally required complex, custom coding. By providing a universal language for AI services to communicate, A2A aims to reduce development time and complexity, allowing seamless integration and scalability without the need for custom adapters for each new service. Python A2A, an implementation by the author, supports various frameworks and facilitates easy agent orchestration, enabling AI systems to be more modular and versatile akin to LEGO blocks.
‘The Power Duo: How A2A + MCP Let You Build Practical AI Systems Today’: Manoj Desai’s article discusses the synergy between A2A (Agent-to-Agent) and MCP (Model Context Protocol), explaining how their integration forms a robust architecture for building practical AI systems. A2A facilitates direct communication among AI agents, akin to a social network, while MCP acts as a universal adapter, standardizing access to tools and data sources. This combination minimizes integration effort, simplifies component replacement, delineates error boundaries, supports easy extensibility, and promotes reusable components, significantly streamlining AI development.
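The division of labor described above can be made concrete with two message sketches. Both A2A and MCP are JSON-RPC-based protocols; the method names below (`tasks/send`, `tools/call`) follow the public specs, but the payload fields and the tool name `get_cost_report` are illustrative assumptions, not authoritative examples from either spec.

```python
import json

# A2A: one agent delegates a task to another agent.
a2a_task = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tasks/send",  # A2A task-delegation method
    "params": {
        "id": "task-123",
        "message": {
            "role": "user",
            "parts": [{"type": "text", "text": "Summarize Q1 cloud costs"}],
        },
    },
}

# MCP: an agent invokes a concrete tool on a tool server.
mcp_tool_call = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",  # MCP tool-invocation method
    "params": {
        "name": "get_cost_report",       # hypothetical tool name
        "arguments": {"quarter": "Q1"},
    },
}

for msg in (a2a_task, mcp_tool_call):
    print(json.dumps(msg, indent=2))
```

The point of the pairing is visible in the shapes: A2A carries a conversational task between peers, while MCP carries a typed function call to a tool, which is why swapping a tool server or an agent implementation does not disturb the other side.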
‘Harness the Power of MCP Servers With Amazon Bedrock Agents’: This Amazon Web Services blog post describes how to build Amazon Bedrock agents that use MCP servers to accelerate the development of generative AI applications. By combining Amazon Bedrock agents with MCP-based tools, users can connect to data sources such as AWS Cost Explorer and Perplexity AI. The Inline Agent SDK simplifies workflow orchestration and the execution of tools exposed by MCP servers. The framework allows secure access to financial data and adds contextual intelligence, providing a solid foundation for AI applications. For details and code examples, the post directs readers to additional resources.
‘library-mcp: Working With Markdown Knowledge Bases’: The text discusses the development of “library-mcp,” a tool for working with Markdown knowledge bases via the Model Context Protocol (MCP). Originally created as a learning project to assist with internal workflows in accounting and operations, library-mcp can be used locally with tools like Claude Desktop to explore Markdown content as “datapacks.” These datapacks can enhance creativity and consistency by dynamically compiling relevant content for specific tasks or questions. The author is excited about this approach and anticipates its usefulness in creating organized, contextually relevant data structures.
Tech
‘I’ve Operated Petabyte-Scale ClickHouse® Clusters for 5 Years’: The author shares insights from years of managing petabyte-scale ClickHouse clusters, covering architecture, storage, upgrades, and cost control. Their initial setup used simple replicas with local SSDs, deliberately avoiding complex sharding, with load balancers in front and HTTP favored over the native protocol. They weigh the pros and cons of cloud storage, advocate compute-storage separation, and explain zero-copy replication. They also advise on managing upgrades, testing compression settings, and maintaining cost efficiency. Emphasizing how complex these clusters are to run, they note Tinybird’s role in simplifying analytics without hosting ClickHouse yourself.
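The “test your compression” advice generalizes beyond ClickHouse: measure candidate codecs on your actual data before committing. ClickHouse’s own codecs are LZ4 and ZSTD (plus specialized ones like Delta), which the Python standard library does not ship, so the sketch below uses zlib, lzma, and bz2 purely to illustrate the measure-before-choosing workflow on column-like data.

```python
import bz2
import lzma
import zlib

# Column-like sample: a run of nearly sequential integers, similar in
# spirit to a timestamp or ID column where codecs differ a lot.
column = ",".join(str(1_000_000 + i) for i in range(10_000)).encode()

for name, compress in [("zlib", zlib.compress),
                       ("lzma", lzma.compress),
                       ("bz2", bz2.compress)]:
    compressed = compress(column)
    ratio = len(column) / len(compressed)
    print(f"{name}: {len(compressed)} bytes, {ratio:.1f}x")
```

The same loop pointed at a representative sample of each table’s columns is usually enough to decide whether a heavier codec pays for its CPU cost.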
‘China Fires Up World’s First Thorium-Powered Nuclear Reactor’: China has successfully launched the world’s first thorium-powered nuclear reactor in the Gobi Desert, marking a significant advance in nuclear technology. The reactor, developed by the Chinese Academy of Sciences, is a two-megawatt molten salt reactor, which operates with reduced meltdown risk compared to traditional designs. Thorium is considered safer than uranium and is less suited to weaponization. The breakthrough builds on earlier US research into molten salt reactors, which was abandoned in favor of uranium partly because of uranium’s military potential.