OpenAI’s CPO on how AI changes must-have skills, moats, coding, startup playbooks, more | Kevin Weil

The Future of AI: Insights from Kevin Weil

The Rapid Evolution of AI Models

  • The current AI models are the least advanced they will ever be, highlighting the rapid pace of technological advancement in AI.
  • There is a mindset that every two months, new models will surpass existing capabilities, encouraging developers to innovate continuously.
  • Products that are on the edge of current model capabilities may soon become highly effective as new advancements emerge.

Reflections on Past Projects

  • Kevin Weil reflects on his experience with Libra at Facebook, describing it as a significant disappointment due to its failure to launch.
  • He believes that if Libra had been successfully implemented, it would have positively impacted the world by facilitating easy transactions via platforms like WhatsApp and Messenger.

Current Role and Responsibilities

  • Kevin Weil serves as Chief Product Officer at OpenAI, a leading company in AI and AGI development.
  • His background includes leadership roles at Instagram and Twitter, along with involvement in various boards such as Planet and Strava.

Skills for the Future

  • Discussion includes essential skills for product builders in an AI-driven era, emphasizing the importance of learning how to write evaluations (evals).
  • The conversation touches on what skills will be most valuable moving forward and what he is teaching his children regarding future competencies.

Podcast Promotion and Sponsorship

  • The episode features sponsorship messages promoting Eppo, an A/B testing platform designed for modern growth teams.
  • Persona is also highlighted as an adaptable identity platform aimed at helping businesses combat fraud while ensuring compliance.

Launching New Technologies

AI's Viral Impact and Internal Reactions

Initial Reactions to AI Product Launch

  • The speaker reflects on the unexpected viral reaction to their AI product, likening it to the launch of ChatGPT, indicating a significant impact in the AI space.
  • They share personal experiences from Instagram, where internal excitement about a product often predicts its success upon release.
  • The internal buzz around ImageGen was palpable; employees were actively generating content and sharing it, creating a vibrant atmosphere of engagement.

Confidence in Product Viability

  • The speaker emphasizes that strong internal usage is a good indicator of potential external success, especially for social products within tight-knit company networks.
  • A question arises about the Ghibli style's popularity; it's suggested that this was not an intentional marketing strategy but rather an organic response to user preferences.

Capabilities of the AI Model

  • The model demonstrates advanced capabilities in understanding complex instructions and visual arrangements based on user input, showcasing its versatility.
  • Excitement is expressed about future applications as users discover new ways to utilize the model’s features effectively.

The Future of AGI and Perceptions of AI

Anticipation for AGI Development

  • The speaker acknowledges their role at OpenAI during a pivotal time for AI development, hinting at future advancements towards AGI (Artificial General Intelligence).
  • They humorously mention receiving over 300 questions from the community regarding AGI timelines, highlighting public interest and curiosity.

Evolution of AI Terminology

  • A quote attributed to Larry Tesler is shared: "AI is whatever hasn't been done yet," illustrating how perceptions shift as technology matures into commonplace algorithms.
  • This perspective suggests that what is currently seen as groundbreaking will eventually be normalized as just another algorithm once it becomes widely adopted.

Changing Perceptions Over Time

  • The speaker reflects on how quickly society adapts to new technologies like self-driving cars; initial awe transforms into everyday acceptance.

Joining OpenAI: A Journey Through Recruitment

The Initial Attraction to OpenAI

  • The speaker reflects on the rapid evolution of machine learning and AI, noting how ChatGPT has progressed since its earlier versions.
  • The speaker shares their background, having worked at major tech companies like Twitter, Facebook, and Instagram before being recruited by OpenAI as CPO.

Recruitment Process Insights

  • Initially planning a break after leaving Planet, the speaker was encouraged by Sam Altman to consider joining OpenAI for an exciting opportunity.
  • The recruitment process moved quickly; the speaker met with most of the management team in just a few days and felt a strong connection during discussions about OpenAI's future.

Interview Experience

  • After a positive dinner conversation with Sam Altman discussing visions for OpenAI, the speaker anticipated a successful interview round.
  • Despite feeling confident post-interview, there was an unexpected delay in communication from OpenAI that led to anxiety about their performance.

Overcoming Doubts

  • The speaker experienced nine days of uncertainty regarding their application status, leading them to second-guess their interview responses.
  • Eventually receiving confirmation from OpenAI that they were moving forward with the hiring process alleviated concerns but highlighted the internal complexities within organizations.

Reflections on Company Culture

  • The discussion shifts towards differences in work culture at OpenAI compared to previous roles; notably, the pace of innovation is much faster.

Understanding the Role of Evals in AI Product Development

The Nature of LLMs and Their Outputs

  • Large Language Models (LLMs) excel at interpreting fuzzy, nuanced inputs typical of human communication, producing varied responses that are similar in spirit rather than identical in wording.
  • The accuracy of LLM outputs significantly influences product development; a model performing at 60% accuracy necessitates different strategies compared to one achieving 99.5% accuracy.

Importance of Evals in Product Management

  • Evals are essential assessments for models, akin to quizzes that measure understanding or performance on specific tasks, such as creative writing or coding.
  • Understanding evals is crucial for product managers and developers as they gauge how well a model performs across various domains.

Evaluating Model Performance

  • Evals help identify areas where models perform reliably (e.g., 99.95% accuracy) versus those with lower reliability (e.g., 60%), guiding product design accordingly.
  • Continuous evaluation allows for iterative improvements in model performance based on real-world use cases and feedback.
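The quiz analogy can be made concrete: at its simplest, an eval is a graded set of prompt/expected-answer pairs scored per domain. A minimal sketch in Python, where `ask_model` and the test cases are invented placeholders rather than any real API:

```python
# Minimal eval harness sketch: run a model over graded test cases
# and report accuracy per domain. `ask_model` is a hypothetical
# stand-in for a real model call.

def ask_model(prompt: str) -> str:
    # Placeholder "model": canned answers for demonstration only.
    canned = {
        "What is 2 + 2?": "4",
        "Capital of France?": "Paris",
    }
    return canned.get(prompt, "I don't know")

EVAL_SET = [
    {"domain": "math", "prompt": "What is 2 + 2?", "expected": "4"},
    {"domain": "facts", "prompt": "Capital of France?", "expected": "Paris"},
    {"domain": "facts", "prompt": "Capital of Peru?", "expected": "Lima"},
]

def run_evals(cases):
    results = {}
    for case in cases:
        ok = ask_model(case["prompt"]).strip() == case["expected"]
        passed, total = results.get(case["domain"], (0, 0))
        results[case["domain"]] = (passed + int(ok), total + 1)
    return {d: passed / total for d, (passed, total) in results.items()}

scores = run_evals(EVAL_SET)
print(scores)  # {'math': 1.0, 'facts': 0.5}
```

Per-domain scores like these are what let a team decide whether a feature can lean on the model (near-perfect accuracy) or needs guardrails (the 60% regime described above).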

Case Study: Deep Research Product

  • The deep research feature enables users to pose complex queries that would typically require extensive research, allowing the model to synthesize information efficiently over time.
  • During its development, eval benchmarks were established to assess the model's ability to handle intricate questions effectively.

Future Implications of Evals

  • The effectiveness of AI models is limited by our capacity to create meaningful eval frameworks; better eval systems can enhance overall intelligence and adaptability.

AI Models and Company-Specific Applications

The Importance of Tailoring AI Models

  • AI models need to be fine-tuned with company-specific or use case-specific data to perform effectively in particular contexts.
  • There are numerous use cases not covered in the training set of general models, necessitating additional teaching for optimal performance.

Opportunities Amidst Competition

  • Founders often question whether to build startups in the AI space due to competition from major players like OpenAI.
  • Ev Williams' insight emphasizes that there are more smart people outside a company than within, highlighting the potential for collaboration and innovation through APIs.

Industry-Specific Data Utilization

  • Many opportunities exist across various industries where AI can enhance existing processes, but companies may lack the resources or expertise to develop these solutions independently.
  • The focus should be on empowering developers (3 million currently using their API) to create innovative products tailored to specific needs.

Agility in Product Development

Rapid Shipping of Innovations

  • The ability to ship quickly is attributed to a bottom-up approach rather than a rigid top-down roadmap.
  • While having a directional alignment is important, detailed plans are often subject to change as new insights emerge during development.

Planning vs. Execution

  • Acknowledging that plans may not always align with actual outcomes, the emphasis is placed on learning from past experiences during quarterly roadmapping sessions.
  • Regular reviews help teams assess what worked well and what didn’t, allowing for adjustments based on dependencies and infrastructure needs.

Embracing Mistakes and Learning

  • A culture of agility encourages making mistakes as part of the learning process; leadership supports rapid iteration even if it leads to setbacks.
  • Teams are encouraged to express strong opinions about product capabilities since they are directly involved in building them.

Product Review Processes

Aligning Team Efforts

Iterative Deployment and Model Maximalism in AI

Empowering Teams to Ship Quickly

  • The speaker emphasizes the importance of shipping products quickly, even if key team members are unavailable. They advocate for empowering teams to move forward without delays.

Philosophy of Iterative Deployment

  • The concept of "iterative deployment" is introduced, highlighting the value of learning about models through public iteration rather than waiting for complete knowledge before launching.

Embracing Imperfection in Models

  • Acknowledgment that models are not perfect and will make mistakes. The focus is on minimizing unnecessary scaffolding around limitations since improvements in models are expected soon.

Encouraging Innovation at the Edge of Capabilities

  • Developers are encouraged to continue building products that push the boundaries of current model capabilities, as advancements will enhance their functionality over time.

Real-world Examples of Rapid Evolution

  • A case study from a podcast guest illustrates how a product that struggled for years suddenly succeeded due to advancements in AI models, showcasing rapid evolution in technology.

Competition and Advancements in AI Coding

Recognition of Competitors' Strengths

  • The speaker acknowledges Anthropic's strong coding models while expressing confidence in their own capabilities, indicating a competitive landscape among AI providers.

Multi-dimensional Intelligence Landscape

  • Discussion on how intelligence varies across different model providers. Competition drives innovation and improvement within the industry, benefiting consumers and developers alike.

Historical Context of Performance Breakthroughs

  • An analogy is drawn comparing breakthroughs in athletic performance (e.g., breaking the four-minute mile) to advancements in AI capabilities, emphasizing how competition fosters rapid progress.

Consumer Awareness and Product Utility

Factors Contributing to ChatGPT's Popularity

  • The speaker attributes ChatGPT's success partly to being first-to-market with various features, enhancing its utility as a one-stop-shop for users' needs.

Diverse Functionalities Offered by ChatGPT

  • ChatGPT supports multiple functionalities including real-time video input processing, speech recognition, deep research capabilities, and code writing—positioning it as an all-encompassing tool for users.

Future Developments Enhancing User Experience

  • Upcoming tools like Operator aim to further streamline user interactions with ChatGPT by automating tasks such as web browsing based on user instructions.

Unexpected Insights from Building AI Products

Surprising Challenges Encountered

Reasoning Models and User Interaction in AI

The Evolution of Reasoning Models

  • The speaker discusses the development of a reasoning model that can process complex questions, moving beyond simple answers to engage in deeper reasoning akin to human thought processes.
  • This model requires time to think through problems, presenting a challenge for user experience design as consumers are not accustomed to waiting long for responses.

User Experience Challenges

  • The need for a user interface (UI) that accommodates longer thinking times is highlighted; users typically do not wait around for 25 seconds without engaging in other activities.
  • A balance must be struck between providing updates during the thinking process and avoiding overwhelming users with excessive information.
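One way to structure that balance is to stream terse progress updates while the model works, then deliver the answer. A toy sketch of the pattern; the steps and timing are invented, not how ChatGPT is actually implemented:

```python
# Sketch of a UI pattern for long-running reasoning: stream short
# progress summaries while the model "thinks", then the final answer.
# The thinking steps here are hard-coded stand-ins for real model output.

import time

def reasoning_with_updates(question):
    steps = [
        "Reading the question...",
        "Considering three approaches...",
        "Checking the math on option 2...",
    ]
    for step in steps:
        yield ("progress", step)  # brief update, not the full chain of thought
        time.sleep(0.01)          # stand-in for actual thinking time
    yield ("answer", f"Final answer to: {question}")

events = list(reasoning_with_updates("How many weeks until launch?"))
for kind, text in events:
    print(f"[{kind}] {text}")
```

The UI renders the "progress" events as a compact status line and only the final "answer" event as the response, keeping the user engaged without drowning them in detail.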

Group Reasoning Dynamics

  • The effectiveness of collaborative models is discussed, where multiple models tackle the same problem and integrate their outputs, similar to brainstorming sessions among humans.
  • The speaker reflects on how this group reasoning approach mirrors human collaboration, leading to more effective solutions.

Designing Human-like Interactions

  • There’s an emphasis on creating AI interactions that feel human-like; initial designs included subheadings of thoughts but were later refined based on user feedback regarding verbosity.
  • Users prefer concise summaries over lengthy explanations; thus, the final product provides brief insights into the model's reasoning without overwhelming detail.

Chat as an Interface for AI Interaction

  • The discussion shifts to chat interfaces, which are seen as versatile and universal due to their alignment with natural human communication methods.
  • Despite skepticism about chat being the future interface for AI, it is argued that its flexibility allows it to cater effectively across various intelligence levels.

Advantages of Chat Interfaces

  • Chat interfaces facilitate maximum communication bandwidth by allowing unstructured dialogue, enhancing interaction quality compared to rigid formats.

Communication with Superintelligence

The Role of Chat in AI Interaction

  • The speaker emphasizes the importance of flexible communication mediums, suggesting that chat is particularly effective for interacting with superintelligence.
  • While chat serves as a versatile tool, there are scenarios where more prescribed and task-specific solutions may be preferable, especially in high-volume use cases.

OneSchema's New Product Launch

  • Christina Gilbert introduces OneSchema FileFeeds, designed to simplify integration processes by allowing users to build integrations within 15 minutes using CSV exports.
  • This product aims to alleviate the burden on product teams by enabling thousands of integrations without requiring engineering involvement.

Integration Reliability and Data Validation

  • OneSchema focuses on ensuring integration reliability, addressing common issues like outages caused by bad data records.
  • A built-in validation layer prevents incorrect data from entering systems and provides immediate notifications about any data discrepancies.

Collaboration Between Researchers and Product Teams

Evolution of Idea Generation

  • The discussion highlights the evolving collaboration between researchers and product teams, noting that innovation often stems from both research-led initiatives and input from PMs (Product Managers).
  • Initially, OpenAI operated primarily as a research company; however, with the rise of ChatGPT, it has transitioned towards becoming more product-focused while maintaining its research roots.

Integrating Research with Product Development

  • The speaker argues for a dual focus on being both a world-class research entity and a product company to enhance synergy between model development and practical applications.
  • Successful products require iterative feedback loops involving engineering, design, and research teams working collaboratively rather than sequentially.

Organizational Structure Insights

OpenAI's Product Management Philosophy

Empowerment and Team Dynamics

  • OpenAI boasts a product-focused, high-agency engineering team that feels empowered to move quickly, with PMs guiding without micromanaging.
  • The ideal structure includes fewer but highly skilled PMs who can effectively support the engineering leads and teams, fostering a productive environment.

Qualities of Effective PMs at OpenAI

  • High agency is crucial; PM candidates should proactively identify problems and take initiative rather than waiting for direction.
  • Comfort with ambiguity is essential due to the complex nature of projects, which often lack clear definitions or paths forward.
  • Emotional intelligence (EQ) is vital for building rapport with researchers and engineers who may question the necessity of a PM role.

Decision-Making in Ambiguous Environments

  • A successful PM must earn trust by demonstrating value while also being decisive when necessary, balancing between deferring to team expertise and making critical calls.
  • The ability to navigate ambiguity is key; effective decision-making involves knowing when to lead decisively versus allowing team innovation.

AI's Role in Product Development

  • There’s an ongoing discussion about how AI will impact roles within product teams. Despite predictions of AI taking over coding tasks, hiring continues robustly across all functions.
  • OpenAI utilizes AI tools like ChatGPT for various tasks such as summarizing documents and writing specifications, indicating a strong integration into daily workflows.

Future Aspirations with AI Integration

  • There’s recognition that current workflows still resemble traditional methods too closely; there’s potential for more innovative uses of AI in rapid prototyping and concept exploration.

Vibe Coding and the Future of Product Teams

Understanding Vibe Coding

  • The term "vibe coding" was introduced by Andrej Karpathy, highlighting a new approach to coding where developers interact fluidly with AI tools.
  • Tools like Cursor, Windsurf, and GitHub Copilot assist in code generation by suggesting edits based on prompts provided by users.
  • As users become more comfortable with these models, they can engage in a more relaxed interaction, allowing the model to suggest changes with minimal input.
  • While models may make mistakes or produce non-compiling code, users can iteratively refine their inputs to guide the model towards better outputs.
  • Vibe coding is particularly useful for rapid prototyping and proof-of-concept projects rather than production-level code.

Future Structure of Product Teams

  • There is an expectation that future product teams will integrate researchers into their structure to enhance development processes.
  • The industry has yet to fully embrace fine-tuned models; however, there is potential for significant improvements in performance tailored to specific use cases.
  • Fine-tuning models will likely become standard practice across various industries as AI becomes ubiquitous in product development workflows.
  • Researchers or machine learning engineers will be essential team members as fine-tuning becomes integral to building effective products.
  • Companies are already utilizing ensembles of models internally for diverse tasks rather than relying solely on generic solutions.

Practical Applications of Fine-Tuned Models

  • Founders from Cursor and Windsurf illustrate how custom models complement foundational ones, enhancing user experience beyond basic code generation capabilities.
  • Fine-tuning involves providing numerous examples of desired outcomes to improve model accuracy for specific problems or queries.
  • Organizations often employ multiple model calls tailored for different tasks instead of using a single broad model response.
  • Effective problem-solving requires breaking down complex issues into smaller tasks that can be addressed by specialized models.

Customer Support Automation and AI Integration

The Role of Automation in Customer Support

  • With over 400 million weekly active users, the company receives a high volume of inbound tickets but maintains a small customer support team (30-40 people) due to effective automation of processes.
  • The use of internal resources and knowledge bases allows for automated responses, with guidelines on personality and response style integrated into the model's training.

Model Utilization for Problem Solving

  • Different AI models are employed based on specific needs; for example, o-series reasoning models are used where more reasoning is required, while faster models like GPT-4o mini are utilized for quick checks.
  • This approach mirrors human problem-solving, where individuals possess varied skills that can be combined to achieve better outcomes than any single person could provide.
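This divide-and-route pattern can be sketched as a simple dispatcher that sends each request to a model suited to it. The model names, the routing heuristic, and `call_model` below are illustrative assumptions, not real identifiers:

```python
# Sketch of routing requests to different models by task complexity,
# mirroring the ensemble-of-models idea described above. Model names
# and `call_model` are hypothetical placeholders, not a real API.

def call_model(model: str, prompt: str) -> str:
    # Placeholder for an actual model call.
    return f"[{model}] response to: {prompt}"

def needs_deep_reasoning(prompt: str) -> bool:
    # Toy heuristic: long or explicitly multi-step requests go to the
    # reasoning model; quick checks go to a fast, cheap model.
    return len(prompt.split()) > 20 or "step by step" in prompt

def route(prompt: str) -> str:
    model = "reasoning-model" if needs_deep_reasoning(prompt) else "fast-mini-model"
    return call_model(model, prompt)

print(route("Is this email spam? 'You won a prize!'"))
print(route("Analyze this contract step by step and flag risky clauses"))
```

In practice the router itself can be a small model, but the design choice is the same: match each subtask to the cheapest model that handles it reliably.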

Human-AI Interaction Insights

  • The discussion highlights how different individuals have unique communication styles—some prefer visual aids while others favor verbal explanations—emphasizing the importance of understanding these differences in designing AI interactions.
  • Drawing parallels between human behavior and AI design can enhance the development of effective AI experiences by considering how humans naturally solve problems.

Preparing Future Generations for an AI-Dominated World

Teaching Skills to Children

  • A community member raises concerns about preparing children for future job markets dominated by technology; the speaker emphasizes fostering curiosity, independence, and critical thinking as essential skills.
  • The speaker notes their children’s comfort with technology, highlighting that coding skills will remain relevant but stresses teaching foundational life skills over specific technical abilities.

Potential of Personalized Tutoring through AI

  • Personalized tutoring via AI is identified as one of its most significant potential applications. Despite existing products like Khan Academy, there remains a gap in widespread adoption of effective personalized learning tools.
  • Studies indicate that combining traditional education with personalized tutoring leads to substantial improvements in learning speed. The speaker expresses surprise at the lack of robust solutions available despite free access to tools like ChatGPT.

Addressing Concerns About AI's Impact on Jobs

Optimism in Technology and AI's Future

The Role of Technology in Advancements

  • Concerns about superintelligence harming humanity are prevalent, but the speaker emphasizes a positive outlook on technology's role in societal advancements.
  • Over the past 200 years, technology has significantly contributed to economic growth, geopolitical progress, improved quality of life, and increased longevity.
  • While acknowledging temporary disruptions caused by technological changes, the speaker stresses the importance of supporting individuals affected by these shifts.

Education and Reskilling with AI

  • The speaker highlights ChatGPT as an effective tool for reskilling individuals, capable of teaching various subjects to those eager to learn.
  • Emphasizing societal responsibility, there is a call for collective efforts to ensure smooth transitions during technological advancements.

AI-Assisted Creativity: Current Trends and Future Prospects

  • The discussion shifts towards AI's impact on creative work; the speaker expresses optimism about future developments in AI-assisted creativity.
  • An example is given regarding Sora (an AI model), showcasing how it enables users—regardless of artistic skill—to generate creative outputs that they couldn't achieve alone.

Enhancing Creative Processes with AI

  • A filmmaker shares insights on using Sora for generating multiple variations of a cut scene quickly compared to traditional methods that were time-consuming and costly.
  • This new approach allows for more brainstorming opportunities and better final outcomes while still relying on human creativity for direction.

Iterative Development Philosophy in AI

  • The speaker appreciates an iterative deployment philosophy where innovations are shared early and refined based on public feedback rather than being kept secret until fully developed.

The Rapid Evolution of AI Models

Advancements in AI Model Capabilities

  • The evolution from GPT-3 to current models showcases a significant improvement in capabilities, with iterations occurring every three to four months.
  • Costs associated with these models have drastically decreased, with the cost of newer models being two orders of magnitude lower than earlier versions like GPT-3.5.
  • Each iteration results in smarter, faster, and safer models that hallucinate less frequently, indicating a positive trend in AI reliability.
  • The exponential growth in AI capabilities is likened to Moore's Law but at a steeper rate, suggesting transformative changes ahead.
  • Current AI users are experiencing the least advanced version of these technologies; future iterations will only improve.

Reflections on Model Maximalism

  • The emphasis on "model maximalism" means building for the capabilities models are about to have, trusting that rapid progress will close the gap between today's limitations and tomorrow's products.
  • Users have grown accustomed to rapid advancements and expect immediate results from generative technologies.

The Libra Project: A Disappointment?

Overview of the Libra Initiative

  • The Libra project aimed to revolutionize remittances by allowing instant money transfers via WhatsApp for minimal fees.
  • Despite its potential impact on global financial transactions, the project faced significant regulatory challenges and public skepticism.

Lessons Learned from Libra's Challenges

  • The project's ambitious scope included launching a new blockchain and integrating it into existing platforms like WhatsApp and Messenger simultaneously.
  • Hasty implementation led to overwhelming resistance; lessons indicate that gradual change might have been more effective.

Future Considerations for Digital Currency

  • There remains disappointment over the absence of such a service today; however, there is hope that Meta could revisit this initiative under improved circumstances.

The Future of Money Transfer and Insights on Technology

The Need for Improved Money Transfer Solutions

  • Discussion highlights the ongoing need for seamless money transfer capabilities within platforms like WhatsApp, despite advancements in technology.
  • Kevin shares a personal anecdote about the limitations of current systems, emphasizing that even successful companies can improve their services.

Lightning Round: Book Recommendations

  • Kevin recommends "Co-Intelligence" by Ethan Mollick, which explores practical ways of living and working with AI.
  • He also suggests "The Accidental Superpower" by Peter Zeihan for insights into geopolitics and its impact on global dynamics.
  • Additionally, he mentions "Cable Cowboy," a biography of John Malone, highlighting his influence on the cable industry.

Favorite Movies and Cultural Reflections

  • Kevin expresses a desire to watch Amazon's adaptation of "Wheel of Time," reflecting on nostalgia from his childhood reading.
  • He praises "Top Gun: Maverick" for its portrayal of American pride and patriotism, indicating a cultural shift towards celebrating strength.

Innovative Products and Technologies

  • Kevin discusses his enjoyment of vibe coding with products like Windsurf, noting their fun factor in product development.
  • He shares enthusiasm for Waymo as an innovative transportation solution that feels futuristic and enhances user experience.

Life Philosophy and Work Ethic

  • Kevin cites Mark Zuckerberg’s insight about consistent hard work leading to growth over time as a guiding principle in his life.
  • He emphasizes the importance of persistence in achieving excellence rather than seeking quick fixes or silver bullets.

Prompt Engineering Tips for AI Interaction

  • Discussing prompt engineering, Kevin argues that it should not be overly complex; users shouldn't need deep technical knowledge to interact effectively with AI models.

Fine-Tuning Techniques for AI Models

Importance of Providing Examples in Prompts

  • Fine-tuning can be effectively simulated by including examples in prompts, demonstrating desired outcomes. For instance, providing a format like "Here's an example and here's a good answer" helps guide the model's responses.
  • While this method is not as effective as full fine-tuning, it significantly improves results compared to prompts without examples. Many users overlook this technique.
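A minimal sketch of this example-based (few-shot) prompting, with invented example pairs, might look like:

```python
# Few-shot prompting sketch: prepend worked examples to the prompt so
# the model imitates the demonstrated format. The example pairs are
# invented for illustration.

def build_few_shot_prompt(examples, question):
    parts = []
    for inp, out in examples:
        parts.append(f"Q: {inp}\nA: {out}")
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)

examples = [
    ("Summarize: 'The meeting moved to 3pm.'", "Meeting now at 3pm."),
    ("Summarize: 'Launch delayed one week due to bugs.'", "Launch slips a week (bugs)."),
]

prompt = build_few_shot_prompt(
    examples, "Summarize: 'Budget approved, hiring starts Monday.'"
)
print(prompt)
```

The model sees two demonstrations of the desired format before the real question, which typically steers its answer far more reliably than instructions alone.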

Framing Questions for Better Responses

  • Users can enhance model performance by framing requests with specific roles or identities, such as asking the model to respond as "Einstein" or "the world's greatest marketer." This approach shifts the model's mindset positively.
  • The speaker frequently employs this strategy when preparing interview questions, illustrating how context influences response quality.
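The role-framing trick above amounts to wrapping the request in a persona instruction. The message structure below loosely mirrors common chat APIs but is an assumption for illustration, not a specific SDK:

```python
# Role-framing sketch: prefix the conversation with a persona so the
# model answers "as" that role. The dict-based message format is an
# illustrative convention, not a particular vendor's API.

def with_role(role: str, request: str):
    return [
        {"role": "system", "content": f"You are {role}. Answer in that voice."},
        {"role": "user", "content": request},
    ]

messages = with_role(
    "the world's greatest marketer",
    "Draft three taglines for a note-taking app.",
)
for m in messages:
    print(m["role"], ":", m["content"])
```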

Acknowledgment of Expertise and Future Insights

  • The conversation highlights Kevin Weil's role at the forefront of technological advancements, emphasizing his contributions to shaping future developments in AI.
  • Kevin expresses gratitude for being invited and credits his team for their collaborative efforts in innovation.

Engagement with Audience Feedback

  • Kevin encourages listeners to reach out via social media platforms like Twitter and LinkedIn to share feedback on ChatGPT’s performance—what works well and what needs improvement.
  • He emphasizes the importance of user input in refining AI tools and acknowledges the vast number of active users providing insights.

Closing Remarks

Video description

Kevin Weil is the chief product officer at OpenAI, where he oversees the development of ChatGPT, enterprise products, and the OpenAI API. Prior to OpenAI, Kevin was head of product at Twitter, Instagram, and Planet, and was instrumental in the development of the Libra (later Novi) cryptocurrency project at Facebook.

In this episode, you’ll learn:

1. How OpenAI structures its product teams and maintains agility while developing cutting-edge AI
2. The power of model ensembles—using multiple specialized models together like a company of humans with different skills
3. Why writing effective evals (AI evaluation tests) is becoming a critical skill for product managers
4. The surprisingly enduring value of chat as an interface for AI, despite predictions of its obsolescence
5. How “vibe coding” is changing how companies operate
6. What OpenAI looks for when hiring product managers (hint: high agency and comfort with ambiguity)
7. “Model maximalism” and why today’s AI is the worst you’ll ever use again
8. Practical prompting techniques that improve AI interactions, including example-based prompting

Find the transcript at: https://www.lennysnewsletter.com/p/kevin-weil-open-ai

Brought to you by:

• Eppo—Run reliable, impactful experiments: https://www.geteppo.com/
• Persona—A global leader in digital identity verification: https://withpersona.com/lenny
• OneSchema—Import CSV data 10x faster

Where to find Kevin Weil:

• X: https://x.com/kevinweil
• LinkedIn: https://www.linkedin.com/in/kevinweil/

Where to find Lenny:

• Newsletter: https://www.lennysnewsletter.com
• X: https://twitter.com/lennysan
• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/

In this episode, we cover:

(00:00) Kevin’s background
(05:16) OpenAI’s new image model
(08:13) The role of chief product officer at OpenAI
(11:42) His recruitment story and joining OpenAI
(15:59) Working at OpenAI
(18:44) The importance of evals in AI
(24:40) Opportunities in the space
(26:34) Shipping quickly and consistently
(29:47) Product reviews and iterative deployment
(32:53) Winning consumer awareness
(36:03) Designing thoughtful experiences
(40:56) Chat as an interface for AI
(45:21) Collaboration between researchers and product teams
(48:05) Hiring product managers at OpenAI
(53:06) How OpenAI uses AI: vibe coding, AI prototyping, and more
(01:04:34) Raising kids in an increasingly intelligent AI world
(01:08:07) Why Kevin feels optimistic about our AI future
(01:14:20) The AI model you're using today is the worst AI model you'll ever use
(01:17:58) Reflections on the Libra project
(01:21:51) Lightning round and final thoughts

Referenced:

• OpenAI: https://openai.com/
• The AI-Generated Studio Ghibli Trend, Explained: https://www.forbes.com/sites/danidiplacido/2025/03/27/the-ai-generated-studio-ghibli-trend-explained/
• Introducing 4o Image Generation: https://openai.com/index/introducing-4o-image-generation/
• Waymo: https://waymo.com/
• X: https://x.com
• Facebook: https://www.facebook.com/
• Instagram: https://www.instagram.com/
• Planet: https://www.planet.com/
• Sam Altman on X: https://x.com/sama
• A conversation with OpenAI’s CPO Kevin Weil, Anthropic’s CPO Mike Krieger, and Sarah Guo: https://www.youtube.com/watch?v=IxkvVZua28k
• OpenAI evals: https://github.com/openai/evals
• Deep Research: https://openai.com/index/introducing-deep-research/
• Ev Williams on X: https://x.com/ev
• OpenAI API: https://platform.openai.com/docs/overview
• Dwight Eisenhower quote: https://www.brainyquote.com/quotes/dwight_d_eisenhower_164720
• Inside Bolt: From near-death to ~$40m ARR in 5 months—one of the fastest-growing products in history | Eric Simons (founder & CEO of StackBlitz): https://www.lennysnewsletter.com/p/inside-bolt-eric-simons
• StackBlitz: https://stackblitz.com/
• Claude 3.5 Sonnet: https://www.anthropic.com/news/claude-3-5-sonnet
• Anthropic: https://www.anthropic.com/
• Four-minute mile: https://en.wikipedia.org/wiki/Four-minute_mile
• Chad: https://chatgpt.com/g/g-3F100ZiIe-chad-open-a-i
• Dario Amodei on LinkedIn: https://www.linkedin.com/in/dario-amodei-3934934/
• Figma: https://www.figma.com/
• Julia Villagra on LinkedIn: https://www.linkedin.com/in/juliavillagra/
• Andrej Karpathy on X: https://x.com/karpathy

...References continued at: https://www.lennysnewsletter.com/p/kevin-weil-open-ai

Recommended books:

• Co-Intelligence: Living and Working with AI: https://www.amazon.com/Co-Intelligence-Living-Working-Ethan-Mollick/dp/059371671X
• The Accidental Superpower: Ten Years On: https://www.amazon.com/Accidental-Superpower-Ten-Years/dp/1538767341
• Cable Cowboy: https://www.amazon.com/Cable-Cowboy-Malone-Modern-Business/dp/047170637X

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com. Lenny may be an investor in the companies discussed.