Job seekers in the U.S. and many other nations face a tough environment. At the same time, fears of AI-caused job loss have — so far — been overblown. However, the demand for AI skills is starting to cause shifts in the job market. I’d like to share what I’m seeing on the ground.

First, many tech companies have laid off workers over the past year. While some CEOs cited AI as the reason — that AI is doing the work, so people are no longer needed — the reality is AI just doesn’t work that well yet. Many of the layoffs have been corrections for overhiring during the pandemic or general cost-cutting and reorganization that occasionally happened even before modern AI. Outside of a handful of roles, few layoffs have resulted from jobs being automated by AI.

Granted, this may grow in the future. People who are currently in professions that are highly exposed to AI automation, such as call-center operators, translators, and voice actors, are likely to struggle to find jobs and/or see declining salaries. But widespread job losses have been overhyped.

Instead, a common refrain applies: AI won’t replace workers, but workers who use AI will replace workers who don’t. For instance, because AI coding tools make developers much more efficient, developers who know how to use them are increasingly in demand. (If you want to be one of these people, please take our short courses on Claude Code, Gemini CLI, and Agentic Skills!)

So AI is leading to job losses, but in a subtle way. Some businesses are letting go of employees who are not adapting to AI and replacing them with people who are. This trend is already obvious in software development. Further, in many startups’ hiring patterns, I am seeing early signs of this type of personnel replacement in roles that traditionally are considered non-technical. Marketers, recruiters, and analysts who know how to code with AI are more productive than those who don’t, so some businesses are slowly parting ways with employees who aren’t able to adapt. I expect this will accelerate.

At the same time, when companies build new teams that are AI native, sometimes the new teams are smaller than the ones they replace. AI makes individuals more effective, and this makes it possible to shrink team sizes. For example, as AI has made building software easier, the bottleneck is shifting to deciding what to build — this is the Product Management (PM) bottleneck. A project that used to be assigned to 8 engineers and 1 PM might now be assigned to 2 engineers and 1 PM, or perhaps even to a single person with a mix of engineering and product skills.

The good news for employees is that most businesses have a lot of work to do and not enough people to do it. People with the right AI skills are often given opportunities to step up and do more, and maybe tackle the long backlog of ideas that couldn’t be executed before AI made the work go more quickly. I’m seeing employees in many businesses step up to build new things that help their business. Opportunities abound!

The OpenClaw open-source AI agent became a sudden sensation, inspiring excitement, worry, and hype about the agentic future.
What’s happened: In November, developer Peter Steinberger released OpenClaw — formerly named WhatsApp Relay, Clawdbot, and Moltbot — as a personal AI agent that performs tasks like managing calendars, summarizing emails, and sending reminders. A post on the crowdsourced tech-news site Hacker News noted the project in late January, and it took off, becoming the fastest-growing project by GitHub stars and drawing more Google searches than Claude Code.
Users directed OpenClaw agents to organize schedules, monitor vibe-coding sessions, and post to personal websites and newsletters. One user directed it to build subagents, and within a week was awakened by a phone call from his agent, which, he claimed, had autonomously registered a phone number, connected to a voice API, and waited until morning to ask, “What’s up?”
Meanwhile, the agents’ activities resulted in cost overruns, exposure of private credentials, and security breaches while users raced to close gaps in the system.

Tech entrepreneur Matt Schlicht launched Moltbook, a Reddit-style social discussion network that is designed to be written, read, and organized by OpenClaw agents. By the end of the week, OpenClaw users had directed over a million agents to set up accounts. Moltbook’s agent members, spurred by prompts or simply the descriptions their creators wrote in their default memory files, filled the site with manifestos, stories about their lives, and spam.
How it works: OpenClaw is a configurable agentic framework that runs on a local computer or in a virtual machine in the cloud. Users can build agents to browse and write to their local file systems or operate within predefined sandboxes. They can also give agents permission to use cloud services like email, calendar, productivity applications, speech-to-text and text-to-speech applications, and virtually any service that responds to an API. Agents can use coding tools like Claude Code, interact on social networks, scrape websites, and spend money on users’ behalf.
Architecture: OpenClaw consists of a central gateway server and various client applications (such as chat, browser sessions, cloud services, and so on). It generates a dynamic system prompt at startup and maintains persistent memory across sessions using Markdown files.

Memory: The default memory files include USER.md (information about the user), IDENTITY.md (information about the agent), SOUL.md (rules that govern the agent’s behavior), TOOLS.md (information about tools at the agent’s disposal) and HEARTBEAT.md, which instructs the agent when and how to connect with different applications. The agent and user can edit these files.
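The memory design described above can be sketched in a few lines. This is a minimal illustration, not OpenClaw’s actual code: the file names come from the article, but the assembly logic is an assumption about how Markdown memory files might be stitched into a system prompt at startup.

```python
from pathlib import Path

# Default memory files named in the article. HEARTBEAT.md is assumed to be
# read separately by a scheduler, so it is omitted from the prompt here.
MEMORY_FILES = ["USER.md", "IDENTITY.md", "SOUL.md", "TOOLS.md"]

def build_system_prompt(memory_dir: str) -> str:
    """Concatenate whichever memory files exist into one system prompt."""
    sections = []
    for name in MEMORY_FILES:
        path = Path(memory_dir) / name
        if path.exists():
            # Label each section with its source file so the agent (or the
            # user) can trace where a rule or fact came from when editing.
            sections.append(f"## {name}\n{path.read_text().strip()}")
    return "\n\n".join(sections)
```

Because both the agent and the user can edit these files, anything written into them persists into the next session’s system prompt — which is also why misconfigured deployments can leak whatever secrets end up stored there.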
User interface: Users can communicate with agents and direct them to take actions using chatbots or messaging services including Telegram, WhatsApp, Slack, iMessage, Google Chat, and others.

Skills: The installation includes dozens of skills, from reading and sending emails or calendar invitations to controlling home speakers or lighting. Others can be installed via the command line or ClawHub, a public directory that contains hundreds of extensions contributed by users. Most skills are based on open-source command-line applications that interact with public APIs.

Yes, but: OpenClaw and Moltbook initially launched with many security flaws and other issues, some of which had been fixed as of this writing. The combination of an open-ended system, insecure design, and inexperienced users resulted in a variety of vulnerabilities. Misconfigured OpenClaw deployments exposed API keys, and Moltbook exposed millions more. Skills designed to perform malicious tasks, such as stealing data, have proliferated. Many users have installed the system on dedicated machines to avoid exposing private data to attackers or well-meaning but accident-prone agents.
We’re thinking: For an imaginative, enterprising open-source project, OpenClaw has inspired more than its share of hype. Press reports have likened Moltbook — which holds messages little different from the large language model outputs that have amazed and amused the world since GPT-3 — to the advent of AGI and the Singularity. Let us assure you that agents are not there yet, or anywhere close. Rather, OpenClaw demonstrates that agents can be immensely useful, we are still finding good use cases, and we need to pay careful attention to security. That, and you never know when one of your open-source projects might take off!
What’s new: Moonshot AI released Kimi K2.5, an updated version of its Kimi K2 large language model that adds vision capabilities and the ability to spawn and assign tasks to what the authors call subagents: parallel workflows that control their own separate models to execute tasks such as AI research, fact checking, and web development.
Architecture: MoonViT vision encoder (400 million parameters), mixture-of-experts transformer (1 trillion total parameters, 32 billion active per token)
Availability: Free web user interface; weights free to download for noncommercial and commercial uses with attribution under a modified MIT license; API $0.60/$0.10/$3.00 per million input/cached/output tokens; [coding assistant](https://www.kimi.com/code) $15 to $200 per month
Using reinforcement learning, the team trained Kimi K2.5, given a prompt, to generate subagents that operate in parallel, assign tasks to them, and incorporate their output into its response. Kimi K2.5 received rewards for instantiating subagents and solving problems correctly. For instance, prompted to identify the top three YouTube channels across 100 domains, Kimi K2.5 learned to gather information on each domain, generate 100 domain-specific subagents to search YouTube, and put their findings into a spreadsheet.
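The fan-out pattern described above — a lead agent delegating to parallel subagents and merging their results — can be illustrated with a toy sketch. Kimi K2.5 learns this behavior via reinforcement learning and runs each subagent as a separate model workflow; here `search_domain` is a hypothetical placeholder for a subagent’s work.

```python
from concurrent.futures import ThreadPoolExecutor

def search_domain(domain: str) -> tuple[str, str]:
    # A real subagent would run its own model and tools (e.g., a YouTube
    # search); this placeholder just fabricates a result per domain.
    return domain, f"top channel for {domain}"

def orchestrate(domains: list[str]) -> dict[str, str]:
    """Fan tasks out to parallel subagents and merge their findings."""
    with ThreadPoolExecutor(max_workers=8) as pool:
        return dict(pool.map(search_domain, domains))
```

The speedups reported below follow from this structure: independent lookups that would run sequentially in a single agent loop instead run concurrently, so wall-clock time scales with the slowest subagent rather than the sum of all of them.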
Kimi K2.5 in thinking mode outperformed all open-weights models tested on various measures of reasoning, vision, coding, and agentic behavior. It also outperformed proprietary models including GPT 5.2 set to xhigh, Claude 4.5 Opus set to extended thinking, and Gemini 3 Pro set to high thinking on some vision and agentic benchmarks.

Across 17 benchmarks of image and video performance, Kimi K2.5 achieved the highest score on 9, outperforming GPT 5.2 set to xhigh, Claude 4.5 Opus set to extended thinking, and Gemini 3 Pro set to high thinking.
Subagents enabled Kimi K2.5 to perform between 3 and 4.5 times faster than it did without using subagents. Subagents boosted its performance on the agentic benchmarks BrowseComp and WideSearch by 18.4 percentage points and 6.3 percentage points, respectively.
Why it matters: Building an agentic workflow can improve a model’s performance on a particular task. Unlike predefined agentic workflows, Kimi K2.5 decides when a new subagent is necessary, what it should do, and when to delegate work to it. This automated agentic orchestration improves performance in tasks that are easy to perform in parallel.
What’s new: The Wikimedia Foundation announced partnerships with AI companies including Amazon, Meta, Microsoft, Mistral AI, and Perplexity. The partnership program, known as Wikimedia Enterprise, lets these partners access Wikipedia data at higher speeds and volumes than they could by scraping pages on the web. Financial terms were not disclosed.
On its 25th anniversary, Wikipedia celebrated with high-profile deals that make its data easier for AI companies to use for training models in exchange for financial support.
Wikipedia data is available to all under a Creative Commons license that makes it free to use for commercial and noncommercial purposes. Its free availability and high quality have made it an important data source for training AI models. The foundation also offers an open Kaggle dataset for noncommercial AI training.
Microsoft, Mistral AI, and Perplexity all signed up as enterprise partners within the last year. Wikimedia’s existing partnerships with Amazon and Meta had not previously been announced. Google became a Wikimedia Enterprise partner in 2022.

Wikipedia receives more requests from automated web crawlers than human users. The site’s founder Jimmy Wales said crawlers gathering data to train AI systems had caused the foundation’s hosting, memory, and server costs to skyrocket. The foundation called for AI developers to support it financially, use the API rather than crawl the web, and attribute information derived from Wikipedia articles.
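The “use the API rather than crawl the web” request is concrete: Wikipedia exposes a public REST endpoint that serves structured page summaries as JSON, so a client can fetch data directly instead of scraping rendered HTML. A minimal sketch of building such a request URL (the endpoint is real; the helper function is ours):

```python
from urllib.parse import quote

# Public REST endpoint for page summaries; Wikimedia also asks API clients
# to send a descriptive User-Agent header identifying their project.
API_BASE = "https://en.wikipedia.org/api/rest_v1/page/summary/"

def summary_url(title: str) -> str:
    """Build the REST API URL for a page summary from a page title."""
    # Wikipedia page titles use underscores in place of spaces.
    return API_BASE + quote(title.replace(" ", "_"))
```

For example, `summary_url("Machine learning")` points at the JSON summary of that article, which any HTTP client can fetch — one well-formed request instead of a crawler loading and parsing the full page.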
Behind the news: Other publishers whose content is widely used to train AI systems have sought payment with varied levels of success. In 2023, Reddit and Stack Overflow announced plans to protect their data from AI crawlers while they sought licensing deals. Reddit was able to reach licensing agreements with Google, OpenAI, and others to use its content to train models. Stack Overflow saw traffic and question volume plummet, dropping from 200,000 questions per month in 2014 to 50,000 in late 2025. As its audience turned from discussing technical issues on the site to asking AI models for answers, the company pivoted from advertising as its primary revenue source to repackaging its data for AI training.
Why it matters: AI companies want to train their models on Wikipedia, and gathering data by sending API calls is much faster than crawling the web — never mind the rapid pace of crawling required to keep up with the encyclopedia’s never-ending revisions. At the same time, Wikipedia needs revenue to survive. Selling API access offers a helpful service to developers while giving this crucial data source a stronger financial foundation.

We’re thinking: These deals are win-win. People who choose to read the online encyclopedia the old-fashioned way can keep doing so, and people who build AI models can rest easier knowing they won’t kill a key source of training data.
Mistral compressed Mistral Small 3.1 into much smaller versions, yielding a family of relatively small, open-weights, vision-language models that perform better by some measures than competing models of similar size. The method combines pruning and distillation.

Performance: Ministral 3 14B (version unspecified) ranks ahead of Mistral Small 3.1 and Mistral Small 3.2 on the Artificial Analysis Intelligence Index, a weighted average of 10 benchmarks. Mistral compared Ministral 3 with Mistral Small 3.1 and open-weights competitors of equal size. Ministral 3 14B base outperformed Mistral Small 3.1 by 1 to 12 percentage points on tests of math and multimodal understanding, and tied on Python coding. It also outperformed its parent on GPQA Diamond. Compared to open-weights competitors:

Why it matters: Cascade distillation offers a way to produce a high-performance model family from a single parent at a fraction of the usual cost. Training the Ministral 3 models required 1 trillion to 3 trillion training tokens compared to 15 trillion to 36 trillion tokens for Qwen 3 and Llama 3 models of similar sizes. Their training runs were also shorter, and their training algorithm is relatively simple. This sort of approach could enable developers to build multiple model sizes without proportionately higher training costs.
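The distillation half of the recipe can be illustrated with its core loss: the student is trained to match the teacher’s output distribution, typically by minimizing the KL divergence between their softmax outputs for each token. This is a minimal sketch of that standard loss, not Mistral’s actual cascade training procedure, which involves pruning and further details the company has not fully described.

```python
import math

def softmax(logits):
    # Subtract the max for numerical stability before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distill_loss(teacher_logits, student_logits):
    """KL(teacher || student) over one token's vocabulary distribution."""
    p = softmax(teacher_logits)
    q = softmax(student_logits)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

Because the student learns from the teacher’s full probability distribution rather than one-hot labels, it extracts more signal per token — one intuition for why distilled models can reach strong performance on a fraction of the parent’s training tokens.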
Models: The system authenticates to the AI API of the user’s choice. Anthropic Claude Opus and Meta Llama 3.3 70B are the defaults, but OpenClaw also supports models from Google, OpenAI, Moonshot, Z.ai, MiniMax, and other developers, hosted locally or in the cloud. OpenClaw itself is free, but model hosts may charge per token of input and output.