
Metadata
- Author: Fabien Girardin
- Full Title: Software Gets Personal for Organizations and Teams
- URL: https://medium.com/@girardin/software-gets-personal-for-organizations-and-teams-2706b7f3bd22
Highlights
- Software design is in more hands than ever. For most of its history inside organizations, software was something people used rather than shaped — systems procured, platforms rolled out, tools embedded into daily work. An idea rendered by a small group. Everyone else adapted to it or tolerated the gaps. Most people hacked around the parts that didn’t fit. Since the Generative AI jolt of 2024, that arrangement has begun to loosen in genuinely practical ways. The ability to turn a few sentences into functioning software without writing a single line of code is no longer a developer’s privilege. It belongs to anyone who can describe a problem clearly enough for a Large Language Model (LLM) to help solve it. This shift isn’t arriving at the edge of our communities or economy. It’s happening inside regulated banks, global manufacturers, massive consultancies and government institutions. When a multinational bank can enable thousands of employees to shape their own tools without setting their compliance requirements on fire, the risk calculation changes now and for everyone. The question is no longer, is this possible? The pointed question for any leader to ask is: what’s our excuse? (View Highlight)
- This chapter is about what that shift means for organizations. It explores how personal software — tools built close to the problem, shaped by the people who live inside the work — can leap from being an individual practice to a central organizational capability. It looks at what makes this transition work, what are the obstacles, and what are the essential ingredients embraced by the institutions willing to try. (View Highlight)
- The core frame here is simple: for decades, organizations faced two options when they needed software. Build it or buy it. Both leave a long tail of specific, local, human-scale needs unserved and almost certainly feel outdated faster than we can say ROI. A third option is now available: Enable. (View Highlight)
- In 2024, that balance began to shift (see Software Gets Personal: An Introduction). When given agency and supportive governance frameworks, people who understand their work in detail can translate that understanding into tools that support it, often quickly and without asking for permission. These tools tend to be provisional, and closely tied to a specific context. They’re shaped by proximity to the problem, responsiveness to context, and a sense of completion that comes from temporal fit rather than scalable finish. They get revised as conditions change and shared when they prove useful. Importantly, over time, they mirror how the organization actually functions, rather than how it describes itself in diagrams and airbrushed presentations. (View Highlight)
- For most large organizations, this kind of activity has long been constrained by organizational design and operating models. Enterprise IT functions evolved to protect stability, security, and coherence at scale. Their core work involved setting boundaries: standardizing tools, controlling access, preventing work from drifting outside prescribed systems. That posture has been a site of struggle between teams and IT in almost every organization we’ve ever had the pleasure of knowing. It made sense to restrict access in a world where software was difficult and expensive to build, fragile to maintain, and risky to improvise. As an unintended consequence, it also trained all of us in organizations to treat deviation as something to suppress or tolerate, rather than something to identify, observe and learn from. (View Highlight)
- As they grow, organizations develop an immune system. It exists to protect the organism. The mistake many institutions make is confusing protection of the organism with protection of past processes. An immune system that calcifies around yesterday’s operating model eventually protects the wrong thing. If leaders are committed to the future health of the organization, then the task is not to defend familiar structures, but to understand what thriving requires under new conditions. Personal software surfaces this distinction quickly. It reveals whether the immune system still protects the organism — or has begun protecting its own past. (View Highlight)
- For organizations bold enough to embrace this shift, the opportunity extends far beyond better tools. At its core, it means building resilience in the face of rapid, global change. The same cultural values that welcome learning and experimentation deliver a more connected, engaged, and responsive network of teams and tools. By becoming more adaptive, more expressive, and more awake to change, teams will be enlivened and engaged in ways that are difficult to predict. What are the implications for solopreneurs, small teams and massive organizations? Rather than ask this as a technology question, let’s ask it as a question about culture, about how work gets organized, and about who is permitted to create. (View Highlight)
- As Ethan Mollick, AI researcher and author of Co-Intelligence, puts it:
“The future of work isn’t just about individuals adapting to AI, it’s about organizations reimagining the fundamental nature of teamwork and management structures themselves. And that’s a challenge that will require not just technological solutions, but new organizational thinking.” — Ethan Mollick (View Highlight)
- Support for personal software at organizational scale requires a more porous approach to roles and a different permission structure. This invites IT and governance functions (historically the necessary villain of the organization) to shift from gatekeeping to enablement, from enforcing uniformity to cultivating safe conditions for experimentation, and, perhaps most importantly, to helping the organization as a whole benefit from these micro-creations. This is not some fantasy or abstract ideal. It is already happening inside highly regulated, multinational institutions. As we’ll explore here in our own case study, we’ve seen a global bank create the conditions for thousands of employees to shape their own tools without setting their compliance requirements on fire. Companies and governments often use the fact that they are regulated as a heat shield, falsely protecting them from edgy innovation. This blanket excuse lets them hide from the uncertainty of innovation while, in guarding their historic successes, they risk eroding the very value those successes created. (View Highlight)
- For those tasked with enabling innovation at scale, the ability for someone who has expertise and authority in a process to improve it opens a space that has always existed but has rarely been reachable. Talent remains one of the largest investments organizations make. Strategic dedication of one’s time remains at the core of leading a high-performance team and organization. Through this lens, inviting many to experiment with creating new tools may appear a waste of a precious resource. To be clear, we are not implying a free-for-all. Teams still must choose where to focus and benchmark outcomes. (View Highlight)
- As teams adopt AI across the organization, relevant and timely questions of responsibility, visibility, and coherence surface quickly. These questions are a strong local signal for attention and offer opportunities to lighten permissions and reveal new opportunities for value and risk. (View Highlight)
- As the actors of a team or process are empowered to reimagine it and then to animate that with modified tools, there will be many insights, observations and surprises. Most will be difficult to ignore as they will surface process efficiencies and effective alternatives. They will also lead strong teams to confront bold questions in the context of a more fluid organizational model and toolset. (View Highlight)
- Some organizations will resist, as it can be uncomfortable. Others will recognize it as a chance to broaden who gets to participate and improve how work happens or how it could. By welcoming more people into the design process and granting permission to refine and explore, leaders are also retooling teams and creating a culture where learning is prized and strengthened. (View Highlight)
- Enter a meeting with OIO Studio, a design firm in Barcelona specializing in emerging technologies, and you will notice something new. A set of AI tools has become part of their toolkit. Some are commercial products, like Replit, a coding assistant that turns natural language into code. Others are personal software created by members of the team for their recurring, specific needs, like a custom AI agent that synchronizes their online agendas. Another kind of personal software here is Roby, the team’s AI creative director. These are lightweight, highly iterative tools whose short anticipated lifecycle is outweighed by their near-perfect fit for today’s tasks. (View Highlight)
- This is how the team describes Roby: “Roby is OIO’s first non-human AI creative director. He/It helps us to come up with new ideas and products, moderates the Discord community and runs its own Instagram account.” Roby first lived in a Raspberry Pi and then a Mac Mini under a desk in OIO studio. Some of the code is open source. Roby is a good example of AI software archetypes that are entering our world through our homes and offices. Roby is not a product OIO bought. It is not software commissioned from an agency. It is a tool shaped by one studio’s practice, built by the people who use it, for purposes no vendor anticipated. (View Highlight)
- Most people we speak with — and ourselves, honestly — are constrained first by time. We can imagine long lists of problems worth solving but cannot picture finding the hours to explore, test, and refine solutions. Inside teams and across organizations, the same pattern repeats: ideas that address real inefficiencies exist, but if the company hasn’t budgeted, prioritized, and incentivized people to address them, they never make the list. Or they simply fall off it. (View Highlight)
- Over nearly two decades of digital transformation work, we’ve watched this play out in organization after organization, across industries and time zones. IT departments cannot cover every need. When users each bring their own unique problems, no centralized team can serve them all. IT departments learned to survive demand by learning to say no. They standardize: vendors, tools, processes and expectations. They prioritize. This is rational. But it leaves a restless long tail of niche needs unserved, quietly draining energy and other assets.
The long-tail needs no one built software for. Until now. Image courtesy of Kasey Klimes. (View Highlight)
- When official channels cannot help, people find other ways. Teams adopt unauthorized tools. Workarounds proliferate. Processes migrate into spreadsheets, messaging threads, and personal notebooks. This practice, known as shadow IT, is typically framed as a problem to eliminate: a compliance risk, a security gap, a governance or cultural failure. But shadow IT is an important signal. And it’s one worth reading carefully. (View Highlight)
- When people route around official systems, they’re not being reckless. They’re being resourceful. They’re telling leaders, in the clearest way available to them, that a need exists that the organization has failed to address. The workaround is the message. Shadow IT reveals where the gaps are — which teams are most frustrated, which processes are tearing or already broken, which problems have been waiting for a solution that never came. (View Highlight)
- For middle managers and team leaders, this signal deserves particular attention. When someone builds an unauthorized tool or adopts a platform IT hasn’t approved, the instinct is often to shut it down. The more useful instinct is to ask: what does this tell us? What problem were they trying to solve? What does it say about our systems and our changing requirements? What haven’t we provided? And if curiosity is alive here, with experimentation happening and learning accumulating, should our team and organization be cultivating these instincts rather than suppressing them? (View Highlight)
- Personal software makes these gaps visible in a form the organization can learn from. Like desire paths worn through grass in a public park — more honest and more useful than what the planners originally laid down — they show you where people actually need to go. The question is whether to follow the path or keep forcing the old map to be obeyed. (View Highlight)
- Zooming out: for organizations, personal software represents a wide opportunity to immediately tackle problems that were formerly not important enough to invest time or resources in. The benefits extend well beyond solving the specific issue. As tools spread and more people throughout organizations try their hand at defining, designing and building, talent grows more confident, resilient and creative. (View Highlight)
- There is also an emotional cycle that accompanies this shift. Initial excitement gives way to a sense of invincibility. Shortly after, a wave of doubt often follows: if anything can be built quickly, what is worth building at all? This arc mirrors how people respond to any profound change. Personal software functions as a safe entry point into that cycle. It allows individuals and teams to experience capability without committing to grand reinvention. In that sense, it is less a product strategy and more a learning strategy. (View Highlight)
- Not surprisingly, these broadly distributed capabilities also raise strategic questions. If every employee can build her own tools, what happens to procurement, governance, risk management, human resources, and institutional knowledge? How can an organization support this capacity without losing coherence or its ability to focus performance? How can it capture the experiments, learning and ultimate work that emerges from thousands of small experiments? (View Highlight)
- Consider a large multinational organization. Within it, a legal advisory team of nine lawyers handles 40,000 customer queries per year. They need to consolidate product specifications, internal policies, and external regulations into one searchable system. This is too specific for commercial software: the combination of sources, the particular regulations, the internal policies are unique to this organization. It is too small for the IT department to prioritize: nine people out of 120,000 is not a compelling business case. So the need persists. The lawyers spend hours on lookups that could be faster. The frustration accumulates quietly. We will return to this team later. One of them eventually built a solution. (View Highlight)
- Personal software offers a perch. From it, individuals can glimpse what new capabilities feel like without dismantling the entire operating model. It is a way to touch the future lightly. Like a vaccine, it introduces a controlled exposure rather than a sudden system shock. Teams begin to sense what is possible before being forced to reorganize around it. (View Highlight)
- For decades, organizations faced a binary choice when they needed software. Build: commission custom development from internal teams or external agencies. Buy: purchase commercial software and adapt workflows to fit. Both options have constraints. Building is expensive and slow; it requires technical resources and competes for IT attention. Buying means accepting software designed for generic users, not a specific context. The long tail falls through the gap: too small to build, too specific to buy.
Build and Buy serve the head of the curve. Enable serves the long tail. (View Highlight)
- What we’ve been describing is a third possibility: Enable. Organizations that welcome this capability can now enable their employees to create their own software. Enable does not replace Build or Buy; it addresses different territory. Build and Buy continue to serve the head of the curve: large-scale systems, mission-critical infrastructure, widely shared tools. Enable serves the long tail: specific needs, small teams, local workflows, contextual problems. The question is no longer just “should we build or buy?” but “should we build, buy, or enable?” (View Highlight)
- Enable works for the long tail because the people closest to a problem understand it better than any IT department or vendor could. As Ethan Mollick observes:
“Individual workers, who are keenly aware of their problems and can experiment a lot with alternate ways of solving them, are far more likely to find powerful and targeted uses of AI.” — Ethan Mollick (View Highlight)
- They can iterate quickly, adjusting to fit. They do not need to justify a business case to a committee; they just need to solve their own frustration. There are other consequences here too: inviting a wider group of people to resolve issues and improve workflows builds a greater sense of ownership of the work, the team and overall performance. Not everyone will jump in, but those who have imagined fixing an antiquated process or shifting permissions are likely to be delighted by this new ability to improve operations. These are the cultural shifts and cues that welcome non-engineering teams into the work of making systems more efficient. (View Highlight)
- AI adoption at this stage becomes a portfolio decision. Core systems, data infrastructure, and mission-critical tools warrant centralized control and dedicated teams. But niche tools, team workflows, and contextual problems are different territory: this is exactly where distributed creation thrives. The question isn’t whether to centralize or democratize; it’s knowing which tier you’re operating in. (View Highlight)
- In 2024, BBVA entered into a partnership with OpenAI to provide 3,000 ChatGPT Enterprise licenses to employees. They faced a crucial choice. Elena Alfaro, Head of Global AI Adoption at BBVA, describes it:
“Should we give everything to the data team, to engineering, or do we do something much more democratic and really test whether this could help anyone?” — Elena Alfaro, Head of Global AI Adoption at BBVA (View Highlight)
- They chose to democratize. What followed felt like a fireworks show of beautiful little experiments. After a few months, thousands of employees were creating their own AI tools. By the end of 2024, employees had created 20,000 custom GPTs. Most were experiments. But 1,500 were used weekly across areas like procurement, compliance, and communication. Each serves just a few employees. Together, they form a long tail of software. By mid-2025, BBVA had expanded to 11,000 licenses. In late 2025, they sealed an agreement to provide ChatGPT to all 120,000 employees. (View Highlight)
- Remember the nine lawyers handling 40,000 queries per year? One member of that team built a custom GPT to help the group. The tool consolidates product specifications, internal policies, and external regulations into one searchable system. It drafts answers faster and more thoroughly than manual lookup; all responses undergo human review before reaching branch managers. They call it their “tenth team member.” This is software shaped by one team’s experience and workflow, not by the IT department. (View Highlight)
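The pattern behind the lawyers’ tool can be sketched in miniature: consolidate passages from several document sources into one searchable index, retrieve the most relevant ones, and let a human review the drafted answer. This is an illustrative stand-in only; the real tool is a custom GPT, and the sources, passages, and keyword scoring below are hypothetical.

```python
# Sketch of the "tenth team member" pattern: one searchable index over
# product specs, internal policies, and regulations. Keyword overlap is a
# crude stand-in for what an LLM-backed tool would do; all data is invented.
from collections import Counter

def build_index(sources: dict[str, list[str]]) -> list[tuple[str, str]]:
    """Flatten every source into (source_name, passage) pairs."""
    return [(name, passage) for name, passages in sources.items() for passage in passages]

def search(index: list[tuple[str, str]], query: str, top_k: int = 2) -> list[tuple[str, str]]:
    """Rank passages by how many query terms they contain."""
    q_terms = Counter(query.lower().split())
    def score(item: tuple[str, str]) -> int:
        _, passage = item
        return sum(q_terms[w] for w in passage.lower().split() if w in q_terms)
    return sorted(index, key=score, reverse=True)[:top_k]

sources = {
    "product_specs": ["The savings account accrues monthly interest payments."],
    "internal_policy": ["Branch managers must verify identity before account changes."],
    "regulation": ["Interest disclosures are required under consumer credit rules."],
}
index = build_index(sources)
hits = search(index, "monthly interest on savings account")
# A human reviews any drafted answer before it reaches branch managers.
```

The point is the architecture, not the ranking: one consolidated index replaces hours of manual lookup across three separate source collections.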
- The success of BBVA’s AI adoption is not one application deployed to thousands. It is thousands of applications, each serving a few. The collective impact: BBVA reports that each of the 11,000 employees with licenses saved on average three hours per week. Elena Alfaro summarizes the effect: “The clearest impact is less time. But the next is higher quality in the results, and the third is more innovation.” (View Highlight)
- Here is where it becomes interesting. Personal software has reached organizational scale while maintaining its human-scale characteristics. Each tool is still built for immediacy, still finished when it fits, still human in scale. But multiplied across 11,000 employees, the effect is transformational. This is not commercial software logic, where one product serves many users. It is personal software logic, where many products each serve a few, operating at enterprise scale. The learning here is not only incremental improvement in every corner of the organization, but a call to action for every member of the team to observe, think and contribute in new ways. It also subtly values team and individual performance beyond the scope of historic roles, as the culture of the company expands to invite anyone to shape their work. In theory and in practice, this shift creates a kind of talent magnet: team members acquire essential skills and gain the ability to explore new questions and modify their tools. (View Highlight)
- What BBVA demonstrates is that LLMs have dramatically lowered the software creation barrier. Most of these tools are built through natural language configuration rather than code. The value is created when employees experiment, learn, create, and collaborate. The impact is hard to track with regular indicators and remains largely invisible in many organizations. BBVA chose to embrace these personal software assets and the learning they produce. They offer a model for others facing the same crossroads. (View Highlight)
- What Made It Work at BBVA What is emerging at BBVA is not just a technology rollout. It is a cultural shift. Employees are becoming makers, not just users. The bank’s internal “GPT Store” lets staff share and adapt tools made by peers, spreading human-scale innovation across a vast organization. But tools alone do not create this shift. The conditions do. Three conditions made it possible. (View Highlight)
- BBVA handed out licenses and training with one clear instruction: try things. Put another way: Use it, use it, use it. The response surprised even those who initiated the program. “People were completely on fire,” Elena Alfaro recalls. “They made personal things because we had told them so clearly to experiment.” This “on fire” behavior shows that personal software flourishes when organizations grant cultural permission, not just technical access. It is not enough to provide tools. They must signal that experimentation is welcome. (View Highlight)
- Distributed experimentation requires trust. BBVA gave employees autonomy with responsibility. They did not create a “transformation office” that dictates what employees can and cannot do. Instead, they created space for employees to solve their own problems, with guardrails that emerge over time. Trust is the foundation. Without it, employees wait for instructions rather than act on frustrations. (View Highlight)
- Personal software creators share knowledge, not code. This is different from open source, where the code itself circulates. At BBVA, people share how they solved problems, what prompts worked, what failed. A community of practice formed around this exchange. A GPT created in Mexico may inspire a different solution in Turkey. When asked about duplication, Elena offers a surprising perspective: “How do we prevent people from repeating work? I tell them this is the least of our problems. If two people build two GPTs, they have both learned to create applications, which is far more valuable. And they probably bring complementary perspectives.” Unlike the traditional approach to software, duplication is not a waste. It is learning. (View Highlight)
- Underlying all of this is a demystified view of AI. Elena describes it simply: “It’s a tool. You need to know where it’s useful and where it’s not.” This practical, problem-first approach is the same attitude that defines personal software. No hype, no innovation theater. Just people solving problems with new capabilities. (View Highlight)
- Culture creates the conditions. Governance sustains them. Experimentation without structure eventually produces chaos. The next question: how do you govern thousands of personal tools without killing the energy that created them? Don’t ask how fast we can automate, but how deliberately we can preserve the human creativity that makes adaptation possible in the first place. And, underlying that, how might the culture that makes good work possible be expressed directly in the systems we empower people to build? (View Highlight)
- Not every experiment at BBVA worked. Of 20,000 GPTs created, most faded. Some answered questions incorrectly. Some duplicated tools that already existed. A few were built with enthusiasm and never opened again. This is not a cautionary tale. It is the point. (View Highlight)
- Personal software at organizational scale is, by design, a learning engine. The value is most definitely not that every tool attempted succeeds. It’s in the process of building, testing, failing, and revising and what that teaches the people doing it. Elena Alfaro’s observation about duplication captures this well: two people building two GPTs that solve the same problem haven’t wasted effort — they’ve both learned to build software, and they probably arrived at complementary insights potentially via alternative routes. Learning is the product, not the tool. (View Highlight)
- This requires a particular tolerance — not just institutional, but personal. The emotional arc of building with new tools tends to follow a recognizable shape. Early experiments feel exhilarating. Then something breaks, or doesn’t work as expected, or turns out to be less useful than imagined. A wave of doubt follows: if anything can be built quickly, what is worth building at all? Caution! This is the moment many organizations inadvertently kill — by measuring too early, by pulling governance levers prematurely, by treating the pause as evidence that the whole initiative was misguided. The organizations that navigate this well share a common posture: they treat early failure as information rather than verdict. Some practical patterns we’ve observed: When a tool doesn’t work, ask why before asking who. A failed experiment usually reveals something about the problem’s framing, not the person’s judgment. (View Highlight)
- At present, AI systems assist. They generate drafts, surface options, and suggest structures. Determining whether a problem is truly solved still rests with the human who lives inside it. That judgment cannot be outsourced. Yet the path to that judgment is changing. In the next chapter of this series, where we’ll focus on the individual, we’ll explore the notion of “mechanical sympathy” — the ability we have, which goes beyond curiosity, to develop new capacity as we learn from how things break when we just try. Rather than requiring formal technical training, competence increasingly develops through iteration. A tool is built. It almost works. It reveals its gaps. It improves. No disasters happen. Through that cycle, users acquire an embodied sense of what these systems can and cannot do. Experience becomes literacy. (View Highlight)
- When adoption drops off, treat it as data. A tool that fades after two weeks tells you the problem it addressed was either solved or wasn’t actually as pressing as it seemed. Both outcomes are useful. When duplication happens, celebrate it rather than managing it away. Parallel experiments surface different approaches to the same problem. Convergence happens naturally. Consider: premature standardization prevents discovery. When something goes wrong with real consequences — a tool that surfaces incorrect information, a GPT that miscommunicates policy to a colleague — treat it as a governance signal, not a reason to slam the aperture closed. (View Highlight)
- These moments are precisely when the tiered governance approach matters: not all tools require the same level of oversight. Some need strong guardrails and failure helps teach which ones those are. The deepest failure mode isn’t building something that doesn’t work. It’s building something, watching it fail, and concluding that the whole endeavor was not valuable or altogether too risky. Classically, the signal that ‘this isn’t working’ tends to arrive just before the real learning begins. Hang in! (View Highlight)
- Honda offers a useful counterpoint here, because the story starts with culture rather than tooling. For decades, the company’s core philosophy of Waigaya has served as a mechanism for frank, high-energy debate across domains. It exists to surface assumptions, force contact between different kinds of expertise, and keep ideas moving through friction rather than ceremony. That practice matters in a company whose work requires tradeoffs among safety, performance, manufacturing realities, regulations, and design. (View Highlight)
- In 2025, Honda’s internal AI group tried something unusually direct: they treated that cultural practice as a design pattern for AI agents. By articulating the culture as a template, the move to create agent software became straightforward. A Honda team built a multi-agent system in which different LLM-based agents act with distinct domain expertise and perspectives, then engage and coordinate through discussion, reflecting how the Honda culture works in practice. Central to this is the idea that experts gather and openly discuss and debate their way toward a solution. The team tested multiple agent discussion styles (decentralized, centralized, layered, and shared pool) and after experimentation found the decentralized approach to integrate diverse views most naturally. In other words, they asked their AI agents to argue the way Honda engineers and designers have long engaged: openly and respectfully. That decentralized bias was designed into the system with the Waigaya cultural practice in mind. (View Highlight)
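The decentralized style described above can be sketched as a loop in which every agent sees the full shared transcript and responds to its peers directly, with no central moderator. This is a minimal illustration of the pattern, not Honda’s implementation; the agent names, prompts, and the `call_llm` placeholder are all invented, and a real system would route `call_llm` to an actual LLM API.

```python
# Decentralized multi-agent discussion in the Waigaya spirit: no moderator,
# every agent reads the whole transcript and reacts to the others.
from dataclasses import dataclass

def call_llm(system_prompt: str, transcript: str) -> str:
    # Placeholder: a real implementation would call an LLM client here.
    speaker = system_prompt.split(":")[0]
    return f"[{speaker}] reacting to: {transcript[-60:]!r}"

@dataclass
class Agent:
    name: str
    expertise: str  # e.g. "safety", "manufacturing", "design"

    def speak(self, transcript: list[str]) -> str:
        prompt = (f"{self.name}: you are a {self.expertise} expert. "
                  "Challenge or build on what the others have said.")
        return call_llm(prompt, "\n".join(transcript))

def waigaya_discussion(agents: list[Agent], topic: str, rounds: int = 2) -> list[str]:
    """Each agent speaks every round, appending to a shared transcript."""
    transcript = [f"Topic: {topic}"]
    for _ in range(rounds):
        for agent in agents:
            transcript.append(f"{agent.name}: {agent.speak(transcript)}")
    return transcript

agents = [Agent("Aiko", "safety"), Agent("Ben", "manufacturing"), Agent("Chie", "design")]
log = waigaya_discussion(agents, "trade-offs in a new bumper design")
```

The design choice worth noticing is structural: because every agent conditions on the whole transcript rather than on a moderator’s summary, disagreement surfaces directly between perspectives, which is the property the Honda team reported the decentralized style preserving best.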
- What Honda did was not simply add AI to its culture. It studied its culture and asked how to best represent it computationally. This is a rather significant shift in approach. It begins with a commitment to preserve the culture while modernizing the tools and teams. It suggests a path into AI adoption that begins with cultural intent. (View Highlight)
- Rather than handing out licenses and hoping for good outcomes, an organization benefits from first asking what kinds of engagement it values and wants to protect: how disagreement is handled, how different forms of expertise meet, how decisions converge or diverge, how assumptions are challenged. Answering those questions moves learning from invisible to foreground — it becomes a central design element, not an afterthought. Honda’s experiment treats these as first-order requirements. Their system design assumes that the form of conversation shapes the quality of outcomes, then operationalizes and tests that assumption in practice. (View Highlight)
- BBVA shows what happens when an institution changes permission structures and creates room for distributed making. Honda’s Waigaya work shows another dimension: some of the most important work involves strengthening, protecting, and even encoding the conditions under which good judgment emerges. If we want to move into this world in a culture-first way, this is part of what that can look like: an organization that studies its own generative practices and then builds systems that reinforce them. (View Highlight)
- Governance means more than a one-time imposition of rules. Culture creates energy. Governance must sustain it without extinguishing it, by cordoning off spaces for expanded experimentation. Traditional IT governance assumes centralized control, standardized processes and approval workflows. Personal software at scale breaks these assumptions. You cannot govern 20,000 experiments the same way you govern a traditional large-scale system. A new model is needed. When designed properly, governance can be the means by which to observe and learn, and continuously update models and best practices as they evolve at speed. (View Highlight)
- Elena Alfaro poses the question directly: “How do we govern all of this? Can we govern everything?” Her answer: “No. You have to let people create their software, then govern what is heavily used or addresses a relevant process.” In the first seven months, employees launched more than 3,000 GPTs, of which over 900 were flagged as cases of strategic interest and with upscaling potential. A few months later, 1,500 GPTs were in active weekly use. Of those, only a fraction fall under strict governance. This is a radical departure from traditional governance. Most personal software lives ungoverned, and that is fine. It serves a few people, solves a small problem, and fades when no longer needed. (View Highlight)
- For the 1,500 tools that cross the threshold, governance is lightweight but real. Ownership: who is responsible for this tool? Data curation: what data feeds it, and is that data appropriate? System prompt quality: is the tool well-designed and safe? Impact measurement: is it actually helping? These checks create accountability without bureaucracy. (View Highlight)
- For the tools that touch relevant processes or serve many users, oversight is stricter. Compliance review, security checks, integration standards. The distinction is not arbitrary. It emerges from use patterns. Governance follows adoption, rather than preceding it.
Governance follows adoption, not the other way around. (View Highlight)
- This model requires new roles. At BBVA, creators are called “Wizards”: employees who build personal software. Champions hold an overall vision for a domain and connect efforts. In other organizations, Supervisors review tools that reach the governance threshold. Curators monitor data quality and model drift. These roles do not replace IT. They extend the organization’s capacity to manage distributed creation. (View Highlight)
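The tiered, adoption-driven governance described in the preceding highlights can be sketched as a simple classifier: observe how a tool is actually used, then route it to the appropriate level of oversight. The thresholds, field names, and example tools below are illustrative assumptions, not BBVA’s actual criteria.

```python
# "Governance follows adoption": classify each tool from observed use
# instead of reviewing everything up front. All thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Tool:
    name: str
    weekly_users: int
    touches_regulated_process: bool

def governance_tier(tool: Tool) -> str:
    if tool.touches_regulated_process:
        return "strict"       # compliance review, security checks, integration standards
    if tool.weekly_users >= 10:
        return "lightweight"  # named owner, data curation, prompt quality, impact checks
    return "ungoverned"       # personal experiment; let it live or fade

tools = [
    Tool("meeting-notes-gpt", weekly_users=3, touches_regulated_process=False),
    Tool("legal-query-gpt", weekly_users=9, touches_regulated_process=True),
    Tool("procurement-helper", weekly_users=40, touches_regulated_process=False),
]
tiers = {t.name: governance_tier(t) for t in tools}
```

Note that most tools land in the ungoverned tier by design: that is the radical departure from traditional IT governance the text describes, where review would precede any deployment.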
- The entry barrier is low. This is good news and bad news. When 20–30 percent of employees can create software, up from 2–3 percent, the potential for mistakes grows. Model drift means AI behavior changes over time; tools may degrade without anyone noticing. Dependence on specific LLM providers creates strategic risk. Personal software may access sensitive data without proper controls. The risk is not that employees will create. It is that creation will happen without visibility (Shadow AI). (View Highlight)
- Traditional software metrics do not capture the value of personal software. Uptime, adoption rates, cost per user: these measure products built for scale. BBVA tracks time saved, quality improvement, innovation capacity. These are closer to HR metrics than engineering metrics. The question shifts from “how many people use this tool?” to “did this tool help these specific people?” (View Highlight)
- BBVA’s model is not the only way, but it offers a template. Experiment first, govern later. Classify by impact. Identify and measure what matters. This approach worked, in part, because custom GPTs always keep a human in the loop. As personal agents — tools that act, not just assist — begin to enter the workplace, governance will need to evolve. The question is how to keep control and avoid risks without sacrificing the agility that made the experiment worth running. (View Highlight)
- For teams and organizations, the work starts with culture. Some questions to ask: • How ready is your team to try on new ideas? • What is missing to unleash cheap, short experiments for roles who have only been on the receiving end of new tools and systems? • How can you harness insights and hunches that are floating around without a home? • Whose job is it to design better practices when the work keeps changing? (View Highlight)