I Analyzed 512,000 Lines of Leaked Code. It Shows What's Coming for Your AI Tools.
The Implications of the Claude Code Leak
Overview of the Claude Code Leak
- The leak primarily reveals Anthropic's development of an "always-on" agent named Conway, unintentionally disclosed through a packaging error that exposed half a million lines of code.
- While much attention went to the source code and security flaws, significant aspects such as Conway itself, a standalone agent environment, were largely overlooked.
Features of Conway
- Conway operates as a sidebar within the Claude interface, distinct from traditional chat windows, featuring three main areas: search, chat, and system.
- The system section includes an extensions area for installing add-ons (akin to an app store), allowing for enhanced capabilities and integration with other services.
- Automatic triggers allow external services to wake Conway by calling designated public web addresses, letting it respond to outside events rather than only to direct user prompts.
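The trigger mechanism described above can be sketched in miniature. Everything here is an assumption: the leak does not document an actual API, so the class, the URL scheme, and the handler names below are invented purely to illustrate the "external service calls a public address, which wakes the agent" pattern.

```python
# Hypothetical sketch of an "automatic trigger" registry: an external
# service POSTs to a public URL, and the registry wakes the right
# agent handler. All names and the URL format are illustrative only.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class TriggerRegistry:
    routes: dict = field(default_factory=dict)

    def register(self, path: str, handler: Callable[[dict], str]) -> str:
        """Register a handler; return the public URL an external
        service would call to wake the agent."""
        self.routes[path] = handler
        return f"https://agent.example.com/hooks/{path}"

    def dispatch(self, path: str, payload: dict) -> str:
        """Invoked when the external service calls the public URL."""
        if path not in self.routes:
            raise KeyError(f"no trigger registered for {path!r}")
        return self.routes[path](payload)

registry = TriggerRegistry()
url = registry.register("new-email", lambda p: f"draft reply to {p['sender']}")
result = registry.dispatch("new-email", {"sender": "alice@example.com"})
```

The point of the sketch is the inversion of control: the agent does not poll; outside systems reach in and start it.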
Potential Use Cases for Conway
- After six months of use, Conway could autonomously manage emails by drafting responses based on learned patterns while flagging important messages for user review.
- It can monitor Slack channels and draft replies using context from previous documents reviewed by the user, showcasing its ability to integrate into daily workflows.
Challenges and Realities
- Despite impressive demos showing full functionality, real-world applications often require ongoing oversight due to potential inaccuracies in automated outputs.
- The effectiveness of agents like Conway hinges on their speed and iterative learning rather than achieving perfection immediately.
Strategic Context for Anthropic
- The development of Conway aligns with Anthropic's broader strategy over recent months, including launching Claude Code channels for messaging through platforms like Discord and Telegram.
- Recent initiatives also include targeting non-engineers with tools designed for enterprise employees and establishing partnerships to enhance market presence.
Understanding Anthropic's Enterprise Lock-In Strategy
Overview of Anthropic's Strategy
- Anthropic is pursuing an enterprise lock-in strategy by blocking third-party tools from Claude subscriptions, starting with OpenClaw and expanding to other services.
- Users may face significantly higher pay-per-use rates (10 to 50 times more) if they utilize tools not built by Anthropic, indicating a strong push towards proprietary solutions.
Components of the Strategy
- The strategy involves multiple product decisions across five surfaces, including developer tools, enterprise collaboration tools, and persistent agents like Conway.
- This approach mirrors Microsoft's historical trajectory from operating system vendor to dominant enterprise platform over 15 years; Anthropic aims to achieve this in just 15 months.
The Role of Conway
- Conway acts as a critical component that enhances user stickiness by understanding organizational needs better than any other tool.
- The model context protocol (MCP), an open standard adopted widely, serves as the foundation for interoperability among AI tools but is layered with proprietary elements in Conway.
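To make the MCP foundation concrete, here is a minimal sketch of the JSON-RPC message shape an MCP server uses to advertise its tools. The message structure follows the published MCP specification; the `search_docs` tool itself is a made-up example, not anything from the leak.

```python
# Minimal sketch of the response an MCP server sends for a
# `tools/list` JSON-RPC request. The envelope and field names follow
# the public MCP spec; the tool shown is illustrative only.
def tools_list_response(request_id: int) -> dict:
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "result": {
            "tools": [
                {
                    "name": "search_docs",
                    "description": "Search internal documents.",
                    "inputSchema": {
                        "type": "object",
                        "properties": {"query": {"type": "string"}},
                        "required": ["query"],
                    },
                }
            ]
        },
    }

resp = tools_list_response(1)
```

Because this layer is an open standard, any compliant client can call the same tool; the lock-in described above comes from what gets layered on top of it, not from MCP itself.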
Proprietary Layering and Market Implications
- Extensions within Conway are packaged as CNW.zip files that create a proprietary ecosystem, limiting portability and reinforcing dependency on Anthropic’s environment.
- This model resembles Google Play Services for Android: while based on open-source software, it relies heavily on proprietary layers for commercial viability.
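A rough sketch of what a CNW.zip-style extension package might look like follows. The manifest fields, the entrypoint convention, and the archive layout are all assumptions invented for illustration; the leak is not reported to document the actual format.

```python
# Purely illustrative: a Conway-style extension packaged as a zip
# archive containing a manifest plus code. Every field name here is
# an assumption, not a documented format.
import io
import json
import zipfile

manifest = {
    "name": "slack-summarizer",
    "version": "0.1.0",
    "entrypoint": "main.py",
    "mcp_servers": ["slack"],  # hypothetical: open MCP layer it builds on
}

# Build the package in memory.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("manifest.json", json.dumps(manifest))
    zf.writestr("main.py", "def run(context):\n    return 'summary'\n")

# Read it back, as the host environment presumably would on install.
with zipfile.ZipFile(io.BytesIO(buf.getvalue())) as zf:
    loaded = json.loads(zf.read("manifest.json"))
```

The design tension is visible even in this toy: the `mcp_servers` field points at an open standard, while everything around it (manifest schema, packaging, distribution) is proprietary surface area.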
Developer Choices and Ecosystem Dynamics
- Developers face two paths: building portable MCP-compatible tools without distribution support or creating extensions for Conway that benefit from built-in discoverability within its ecosystem.
- The choice between open web development versus native app development reflects past trends where developers gravitated towards platforms offering better distribution mechanisms.
Historical Context and Future Outlook
- Similar patterns have emerged in tech history where ecosystems organized around proprietary layers overshadowed open alternatives; developers who ignored this often faced challenges.
- As the market evolves, there is potential for similar strategies from competitors like OpenAI and Google, leading to a landscape dominated by specific platforms.
OpenAI's Next Generation of Personal Agents
Overview of OpenAI and Anthropic Developments
- On February 14th, Sam Altman expressed excitement about driving the next generation of personal agents, with OpenClaw transitioning to a foundation supported by OpenAI.
- Anthropic began enforcing a ban on third-party tools using subscription login credentials, quietly blocking access in January and revising its terms of service in February.
The Strategy Behind Conway
- Steinberger observed a pattern where popular features are copied into closed systems while locking out open-source alternatives.
- The strategy involves four steps: creating a first-party version, making it free or subsidized, increasing costs for third-party versions, and shipping proprietary formats that favor the company's ecosystem.
Lock-In Mechanisms Explained
- Traditional tech lock-in involved data like files or customer records; however, Conway introduces a new layer by locking in behavioral models rather than just data.
- Switching from Conway after six months means losing not only an agent but also the accumulated understanding of user behavior that cannot be easily exported or transferred.
The Concept of Intelligence Portability
- This new form of lock-in is about intelligence portability: how well an agent understands user behavior over time, rather than just data portability.
- Questions arise regarding ownership and transferability of the behavioral model built by the agent based on user interactions.
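What "portable behavioral context" might mean in practice can be illustrated with a toy serialization. Every field below is invented; no such export format exists today. The sketch only shows that a behavioral model, unlike raw files, is a set of learned preferences and rules that would need a neutral interchange format to move between providers.

```python
# Conceptual sketch: an agent's learned behavioral context serialized
# to a provider-neutral format. All fields are hypothetical examples
# of what such an export might contain.
import json

behavioral_context = {
    "writing_style": {"tone": "direct", "avg_reply_length_words": 80},
    "triage_rules": [
        {"if": "sender == manager", "then": "flag_for_review"},
        {"if": "thread_age_days > 14", "then": "archive"},
    ],
    "learned_over_months": 6,
}

# Round-trip: export to a neutral format, re-import elsewhere.
export = json.dumps(behavioral_context, indent=2)
restored = json.loads(export)
```

The open question raised above is exactly whether users would ever get access to such an export, and who owns its contents.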
Future Directions for Behavioral Context Portability
- There is a need for industry-wide agreements on making behavioral context portable before products like Conway launch to avoid potential lock-in issues.
- The first era of AI competition focused on foundational models; now we are entering an era centered around interfaces and memory persistence.
Persistence Layer as Key Competitive Advantage
- The future competition will revolve around who owns the persistent layer that accumulates context and acts autonomously based on learned behaviors.
- Companies like Google, Anthropic, and OpenAI aim to build this persistent agent layer as their primary product offering due to its significant customer lock-in potential.
The Future of Agent Memory and Enterprise Architecture
The Impact of Conway on Agent Memory
- The emergence of Conway alters the decision-making process for enterprise architects regarding agent platforms, particularly concerning where agent memory should reside.
- Using a single provider like Anthropic means that all organizational knowledge is stored within their infrastructure, raising concerns about data portability if providers are switched.
Convenience vs. Ownership
- The launch of Conway will test whether companies prioritize convenience over ownership of their agent memory, with many likely opting for ease of use despite potential drawbacks.
- Users may become reluctant to switch from a well-integrated system like Conway due to its immediate effectiveness and user-friendliness, even if it lacks certain features.
Long-term Implications for Enterprises
- A significant trend emerging in 2026 will be the choice between owning a persistent memory layer versus relying on an agentic system that could lead to vendor lock-in.
- Consumer preferences are shifting towards specific plans (e.g., ChatGPT free or Claude plans), which may influence enterprise contracts and decisions around privacy and portability.
Behavioral Evidence Ownership
- There is a growing concern about whether employees' behavioral evidence should remain with them after leaving a company or be retained by the organization for ongoing benefit.
- Employees must navigate choices regarding which agent interfaces to adopt while considering how these decisions impact their career trajectories and team dynamics.
Value Allocation Discussions
- Companies need to engage in conversations about how to fairly allocate value derived from employee skills enhanced by persistent context layers.
- The challenge lies in determining whether employees should retain some form of behavioral fingerprint when they leave or if companies will default to claiming ownership over all work-related behavior.
The Future of Agents in the Workplace
The Effectiveness of Agents
- The speaker asserts that agents can make employees twice as effective, highlighting a shift in employer-employee dynamics through a "carrot and stick" approach.
- There is an emphasis on understanding the broader implications of integrating persistent agents into daily life, suggesting a need for awareness about their influence.
Building Persistent Memory Layers
- The speaker offers insights into creating a persistent memory layer, which could include behavioral audits to enhance personal productivity and effectiveness.
Employer Influence and Employee Lock-in
- Employers are expected to leverage proprietary platforms to enhance team intelligence, potentially leading to stronger employee lock-in by 2026.
- Contrary to fears of mass layoffs due to automation, the speaker believes that human oversight will still be necessary for managing these agents effectively.
Choosing Your Employer Wisely
- The choice of employer will increasingly reflect the type of agent one collaborates with, likening it to historical choices between operating systems (e.g., Windows vs. Mac).
- Employees should consider how familiar they are with their chosen agent and its potential impact on their productivity.