AI and product management | Marily Nika (Meta, Google)
The Shiny Object Trap
In this section, the speaker discusses the importance of not pursuing AI for the sake of it, but rather focusing on identifying real problems and pain points that can be solved using AI.
The Shiny Object Trap
- The speaker advises against doing AI just for the sake of it.
- It is important to identify a problem or pain point that needs to be solved in a smart way.
- Once the problem and high-level solution are identified, one should then work out how to implement the solution effectively.
Introduction to Marily Nika
This section introduces Marily Nika, an expert in AI and product management. Her background includes working at Google on projects related to computer vision, machine learning, and speech recognition.
Marily Nika's Background
- Marily Nika teaches a popular course on Maven about AI and product management.
- She currently works as a product lead at Meta (formerly known as Facebook), focusing on metaverse avatars and identity.
- Prior to Meta, she worked at Google for over eight years on projects such as Google Glass, computer vision, and machine learning around speech recognition.
AI in Product Management
In this section, the conversation touches on what product managers should pay attention to regarding AI. It also discusses resources for getting started with AI and how AI tools can enhance a PM's job performance.
Key Points:
- Product managers should stay informed about developments in AI.
- There are various resources available to help individuals get started with AI.
- Current AI tools can already assist product managers in performing their job more effectively.
- The discussion delves into technical aspects such as understanding what a model is and how models are trained.
Understanding AI and Product Management
In this section, the host welcomes Marily Nika to the podcast and expresses curiosity about AI in product management. They discuss the challenges of staying up to date with AI advancements and seek insights from Marily as a former PM.
Key Points:
- The host acknowledges the fast-paced nature of technology and the difficulty of keeping up with AI advancements.
- The conversation aims to understand the current state of AI in product management.
- Marily Nika shares her experiences as a full-time PM and provides insights into the role of AI in product management.
Media for Staying Updated on AI
This section focuses on the media sources that can help individuals stay informed about new developments and trends in AI and machine learning.
Key Points:
- Subscribing to newsletters is an impactful way to stay updated on AI and machine learning.
- The speaker recommends "The Download" by MIT Technology Review as a valuable resource.
- It is emphasized that even non-AI-centric sources will increasingly cover AI topics due to its pervasive influence across various domains.
Overhyped vs. Underhyped in AI
In this section, Marily Nika discusses what she believes is overhyped and underhyped in the field of AI.
Key Points:
- The speaker mentions that there is both overhyping and underhyping happening simultaneously.
- She disagrees with fears that technology, including AI, will replace human jobs entirely.
- On one hand, certain technologies like GPT (Generative Pre-trained Transformer) are overhyped.
- On the other hand, there are areas where AI can make significant advancements that are currently underhyped and undervalued.
Using ChatGPT in Work Life
The speaker discusses how they use ChatGPT in their work life, specifically for creating mission statements and user segments.
Using ChatGPT for Mission Statements
- The speaker uses ChatGPT to generate mission statements for their work.
- They find that the AI-generated mission statements are often better than what they can come up with themselves.
- This is because the mission statement needs to be understood by various stakeholders, not just product managers.
- By using ChatGPT, they can create inspiring and inclusive mission statements that resonate with a wider audience.
Creating User Segments with ChatGPT
- ChatGPT helps the speaker create user segments more effectively.
- It suggests user motivations and pain points that the speaker wouldn't have considered on their own.
- By building on the AI-generated suggestions, the speaker can come up with unique and innovative user segments.
AI-enhanced Ideas
- The speaker emphasizes that they don't rely on AI to do their job for them.
- Instead, they already have a clear mission in mind before using ChatGPT to enhance their ideas.
- The AI-generated ideas serve as enhancements and inspirations rather than replacements for human creativity.
Using ChatGPT for Personas
The speaker explains how they frame ChatGPT prompts to create personas for specific product areas.
Framing Prompts for Fitness Bands Example
- To create personas for fitness bands without screens, the prompt could be something like "Who would be interested in fitness bands without screens?"
- ChatGPT then provides a bulleted list of potential users, such as young professionals who lack time or people who prefer wearables that don't require frequent charging.
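As a rough illustration of the prompting pattern above, such a persona prompt could be assembled programmatically before pasting it into ChatGPT. The helper below is a hypothetical sketch — the function name and wording are illustrative, not from the talk:

```python
def build_persona_prompt(product: str, constraint: str, n_personas: int = 5) -> str:
    """Assemble a persona-brainstorming prompt in the style described above.

    Illustrative only: the phrasing is a guess at what a PM might type
    into ChatGPT, not an official prompt template.
    """
    return (
        f"Who would be interested in {product} {constraint}? "
        f"List {n_personas} potential user personas as bullet points, "
        "including each persona's motivation and main pain point."
    )

prompt = build_persona_prompt("fitness bands", "without screens")
print(prompt)
```

Parameterizing the product and constraint makes it easy to reuse the same pattern across product areas.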
Future of AI as Default
The speaker discusses their belief that AI will become the default in all products and its impact on user experience.
AI as Default in All Products
- The speaker believes that AI will be integrated into every product, becoming a default feature.
- They mention the importance of recommender systems for personalized experiences, such as Netflix recommendations.
- Automation is another area where AI can drive major improvements.
Every PM as an AIPM
The speaker explains their perspective on every Product Manager (PM) becoming an "AIPM" and working with research scientists.
Partnering with Research Scientists
- The speaker suggests that PMs should get comfortable working with research scientists as partners.
- Research scientists can help build smart models, enable automation, and provide personalization.
- This collaboration may involve more uncertainty compared to traditional PM approaches but can lead to significant improvements.
Building Fantastic Products
The speaker elaborates on how cross-functional teams can benefit from having research scientists and building AI models into their products.
Finding the Intersection for Fantastic Products
- To create fantastic products, it is essential to find the intersection of user desirability, business viability, and technical feasibility.
- Research scientists play a crucial role in producing AI and machine learning models that enhance product capabilities.
The Role of AI in Product Management
In this section, the speaker discusses how AI can be utilized in product management and provides advice on incorporating AI into product development.
Incorporating AI into Product Management
- As a product manager, it is possible to leverage AI technologies even without a technical background.
- AI can be used to enhance various aspects of a product, such as security, personalization, ethics, healthiness, speed, and accuracy.
- Data-driven insights from user behavior can be improved with the help of AI.
- Changing the mindset of product managers is crucial for effectively utilizing AI capabilities.
- Collecting data and having dashboards are important initial steps towards implementing AI.
- Hiring a data science intern or team member can help explore the potential of integrating AI into products.
Starting with AI Integration
- Before starting to build an AI component within a team or company, it is essential to identify a problem or pain point that requires a smart solution.
- The role of an AI-focused PM is to help solve the right problem rather than just building and shipping products.
- It is advisable not to pursue AI for the sake of it but rather focus on addressing specific problems.
Signs for Considering or Avoiding AI Implementation
- Investing significant time and resources into building complex models may not be suitable for an MVP (Minimum Viable Product).
- For an MVP, creating a prototype using tools like Figma and showcasing what the future implementation of AI could look like is more effective than training models from scratch.
- Using existing data or leveraging adjacent product data can be meaningful when considering implementing recommendation systems or recognition features.
Data Requirements for Successful ML/AI Projects
- The amount of data required depends on the type of application. For simple tasks like image classification (e.g., dog vs. cat), even 20 labeled images may suffice. However, more complex applications like voice recognition or NLP may require thousands of labeled examples.
- Building AI systems is not easy, and finding the right amount of data can be challenging. Sometimes, synthesizing fake data may be necessary for training and testing models.
- Startups often lack sufficient data to build their own models, making it more practical to use pre-existing models and frameworks like GPT or TensorFlow.
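To make the "how much data is enough" point concrete, here is a toy sketch (an illustration, not from the talk): with well-separated classes, roughly 20 labeled examples per class can train a simple nearest-centroid classifier to high accuracy. Synthetic feature vectors stand in for dog/cat images here — real image data is far noisier and usually needs much more data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "image features": two well-separated clusters stand in for
# dog vs. cat images, with 20 labeled training examples per class.
dogs_train = rng.normal(loc=0.0, scale=1.0, size=(20, 8))
cats_train = rng.normal(loc=5.0, scale=1.0, size=(20, 8))

# Nearest-centroid classifier: label a point by the closer class mean.
dog_centroid = dogs_train.mean(axis=0)
cat_centroid = cats_train.mean(axis=0)

def classify(x: np.ndarray) -> str:
    d_dog = np.linalg.norm(x - dog_centroid)
    d_cat = np.linalg.norm(x - cat_centroid)
    return "dog" if d_dog < d_cat else "cat"

# Evaluate on fresh synthetic samples drawn from the same distributions.
test_dogs = rng.normal(loc=0.0, scale=1.0, size=(100, 8))
test_cats = rng.normal(loc=5.0, scale=1.0, size=(100, 8))
correct = sum(classify(x) == "dog" for x in test_dogs) + \
          sum(classify(x) == "cat" for x in test_cats)
accuracy = correct / 200
print(f"accuracy: {accuracy:.2f}")
```

The flip side of the same sketch: if the two clusters overlapped heavily (as real-world classes often do), 20 examples per class would no longer be nearly enough.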
Determining the Feasibility of AI Implementation
In this section, the speaker discusses considerations for determining whether AI is a suitable approach for solving a problem and when it makes sense to build custom models versus using existing ones.
Evaluating Feasibility
- When considering AI implementation, it is important to assess the return on investment (ROI) and potential time required before committing significant resources.
- Building custom models from scratch may not be feasible for startups with limited data availability.
- For big tech companies offering services like speech recognition or translation, building proprietary models might be justifiable.
Leveraging Existing Models
- Using pre-trained models like GPT or BERT can save time and effort compared to building custom models.
- It is crucial to evaluate whether existing models align with the specific problem at hand before deciding to use them.
Conclusion
The speaker concludes by emphasizing that incorporating AI into product management requires identifying meaningful problems, having access to relevant data, and making informed decisions about when to build custom models versus leveraging existing ones.
Key Takeaways
- Product managers can leverage AI technologies without technical backgrounds.
- Identifying problems that can benefit from AI solutions is essential before investing in model development.
- Startups should focus on creating MVPs rather than spending excessive time on training complex models.
- The amount of data required depends on the application complexity; simple tasks may require fewer labeled examples compared to more advanced applications like NLP.
- Evaluating the feasibility and ROI of AI implementation is crucial before committing significant resources.
- Leveraging existing models can be a practical approach, especially for startups with limited data availability.
Importance of Diversifying Data Sets
The importance of diversifying data sets and collecting your own data to ensure the quality of the product.
- It is crucial to diversify data sets because if every company uses the exact same data set, the quality of their products will be identical.
- Collecting your own data allows you to have control over the quality of your product.
- As a product manager, it is your responsibility to determine when the quality of your product is good enough to launch.
Understanding Models and Training
Explanation of what a model is and how training works in simple terms.
- A model can be compared to a child's brain that learns through repetition and recognition.
- A model has the ability to take an input (e.g., an image or text) and recognize patterns based on previous training.
- Training a model involves providing large amounts of relevant data for it to process and learn from.
- The output of training is not code but rather the ability for the model to recognize and classify inputs with certain probabilities.
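The idea that training produces learned parameters rather than code, and that the model's output is a probability, can be sketched with a minimal logistic-regression example on toy data (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D data: inputs below 0 belong to class 0, above 0 to class 1.
X = rng.normal(size=200)
y = (X > 0).astype(float)

# "Training" here means adjusting two numbers (weight, bias) by gradient
# descent; the output of training is these parameters, not new code.
w, b = 0.0, 0.0
lr = 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(w * X + b)))   # predicted probabilities
    grad_w = np.mean((p - y) * X)
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

def predict_proba(x: float) -> float:
    """Probability the trained model assigns to class 1 for input x."""
    return 1.0 / (1.0 + np.exp(-(w * x + b)))

print(predict_proba(2.0), predict_proba(-2.0))
```

After training, the model classifies inputs with a probability rather than a hard yes/no: a clearly positive input gets a probability near 1, a clearly negative one near 0.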
Applications of AI and Machine Learning
Examples of impactful applications of AI and machine learning.
- One example is language translation using devices like Google Glass, where speech from one language can be transcribed, translated, and displayed in real-time for another person.
- These technologies have the potential to break down communication barriers between different languages.
- There are numerous real-world applications that are no longer science fiction but require connecting various components together.
Benefits of Training Models
Advantages and outcomes of training models.
- The model learns patterns and features in the data that may not be explicitly understood by humans.
AI's Impact on Communication
The impact of AI on communication, particularly through speech recognition and translation.
- Speech recognition technology can transcribe spoken words into text, enabling easier communication between individuals.
- This technology has the potential to unlock borders of communication by providing real-time translations.
- These advancements are no longer science fiction but are already a reality.
Role of Product Managers with AI
Discussing whether AI will replace product managers or enhance their roles.
- AI technologies like ChatGPT can assist product managers in various tasks, making them more efficient and freeing up time for other activities.
- However, AI is unlikely to replace product managers entirely. Instead, it enhances their capabilities and improves processes.
- Product managers still play a crucial role in decision-making, strategy development, and understanding user needs.
Focus Areas for AI Product Managers
In this section, the speaker discusses areas of product management that should be focused on and areas that may require less attention in the context of AI product management.
Areas to Invest More Skill Wise
- The speaker suggests that product managers (PMs) should invest more in learning how to code and train machine learning models.
- Learning these skills can help PMs gain confidence, understand how things work, and have a different mindset when approaching AI products.
- By understanding the fundamentals of coding and training models, PMs can better comprehend how tools are created and perform their day-to-day tasks with more knowledge.
Areas to Focus Less On
- The speaker encourages PMs not to rely solely on existing tools or applications for AI tasks but to take an active role in learning and experimenting with coding and model training.
- While off-the-shelf applications may assist with certain tasks, hands-on experience with coding and training models provides a deeper understanding of the underlying processes.
- PMs should not blindly trust tools but instead develop the skill set to understand how those tools were created.
Learning to Code
In this section, the speaker provides resources for individuals interested in learning to code and offers advice on getting started.
Resources for Learning to Code
- Coursera offers various courses for learners who prefer self-paced online education.
- Stanford University's "Introduction to AI" course on Coursera is highly recommended by the speaker.
- For those who enjoy collaborative learning experiences, CareerFoundry, General Assembly, and Coding Dojo are suggested as excellent online platforms.
Becoming a Strong AI PM
This section focuses on what steps a PM can take early in their career to become a strong AI PM.
Steps for Becoming a Strong AI PM
- The speaker mentions having a course on Maven for current and aspiring product managers interested in building AI products.
- Understanding the unique aspects of managing AI products compared to general product management is crucial.
- PMs should familiarize themselves with the stages of AI product development and learn how to manage the problem rather than just the product.
- If working at a company with AI researchers and scientists, PMs are encouraged to reach out, shadow them, and gain firsthand experience in their work.
Challenges in ML Research
The speaker discusses the challenges faced in machine learning (ML) research.
Uncertainty in Model Training
- Results obtained from training ML models may not be optimal or aligned with the initial hypotheses and expectations.
- Encouragement and support are crucial to keep the team motivated throughout the process.
Emotional Management as a Leader
- Managing emotions as a leader in ML research can be challenging.
- Adapting to different emotional states and effectively leading the team is important.
Difficulty in Obtaining Good Data
- Collecting high-quality data for ML research can be challenging.
- Creativity is required to explore unconventional methods of data collection, such as approaching people on the street for contributions.
Career Trajectory Considerations
- In traditional product management roles, launching new products leads to career advancement.
- However, in ML research, it is essential to clarify with hiring managers how progress will be assessed differently from traditional product management roles.
Encouraging the Team
The speaker emphasizes the importance of encouraging teams in ML research.
Role of Team Encouragement
- As a leader, it is crucial to act as a cheerleader for the team, providing motivation and support throughout the ML research process.
Flexibility in ML Research
The speaker discusses the adaptability required in ML research.
Flexibility in Approaches
- ML researchers need to be open to trying different approaches and techniques.
- Being willing to explore new methods and strategies is essential for overcoming challenges.
Flexing Different Muscles and Buy-in for ML Investment
The speaker discusses the importance of flexing different skills in ML research and obtaining buy-in for investment.
Flexing Different Muscles
- Engaging in ML research allows individuals to develop new skills and capabilities.
- It is encouraged to embrace the challenges associated with transitioning from traditional product management roles to research-focused positions.
Obtaining Buy-in for ML Investment
- To gain support for investing in ML projects, it is beneficial to provide examples of successful AI-driven products within the company.
- Demonstrating similarities between proposed projects and previous successful endeavors helps build a case for investment.
Getting Buy-in for ML Investment
The speaker provides advice on obtaining buy-in for investing in machine learning (ML) projects.
Initial Buy-in
- When proposing an ML project, emphasize the potential value it can bring and the need for experimentation.
- Communicate that it may take time to determine if the project is worth the effort, but there is potential for significant impact.
Sustaining Buy-in
- After initial success, maintaining buy-in for ongoing ML projects can be challenging.
- Continuously demonstrate the value and impact of the project to stakeholders to ensure continued support.
Maintaining Buy-in and Leveraging Adjacent Products
The speaker discusses strategies for maintaining buy-in and leveraging adjacent products in ML research.
Maintaining Buy-in
- To sustain support for ongoing ML projects, highlight the positive outcomes achieved so far, such as improved search rankings or other measurable successes.
- Continuously communicate the importance of further refining and optimizing models to maintain stakeholder engagement.
Leveraging Adjacent Products
- Showcasing successful AI-driven products within the company can inspire confidence in investing in ML research.
- Drawing parallels between proposed projects and existing successful endeavors helps build a case for continued investment.
Inspiration from Adjacent Products
The speaker emphasizes drawing inspiration from adjacent products in ML research.
Using Examples as Inspiration
- When seeking buy-in for new ML initiatives, provide examples of previously successful AI-driven products within or outside the company.
- Highlight similarities between proposed projects and past successes to demonstrate feasibility and potential impact.
Contingency Planning
- In addition to presenting new ideas, have contingency plans ready in case initial approaches do not yield desired results.
- Demonstrating preparedness reassures stakeholders about potential risks and mitigations.
Academia as a Source of New Tech Insights
The speaker highlights academia as a valuable source of new technology insights in ML research.
Importance of Academia and Research Blogs
- Academia and research blogs provide valuable information on emerging ML technologies.
- Platforms like arXiv showcase new papers and insights from research scientists, bridging the gap between academia and industry.
Collaboration Between Production and Research
- Investing in staffing roles that bridge the gap between product management and academic research can lead to innovative product development.
- Product managers play a crucial role in monetizing ideas generated by research scientists.
Monetization and PM's Role
The speaker discusses the importance of monetization in ML research and the role of product managers (PMs).
Monetizing ML Research
- PMs need to develop strategies for effectively monetizing ML research outcomes.
- Identifying ways to generate revenue from ML-driven products is essential for long-term success.
Leveraging User Feedback
- Gathering user feedback through surveys or sign-up forms helps understand users' willingness to pay for ML-driven products.
- Incorporating user preferences into monetization strategies enhances product viability.
Trust, Failure, and Investment in ML
The speaker explores trust, failure, and investment considerations in machine learning (ML) projects.
Building Trust through Experience
- As a company invests more in ML initiatives, trust in the technology grows.
- Cultivating a culture that welcomes experimentation allows for greater acceptance of potential failures.
Success Rate of ML Investments
- Many investments in ML may not yield successful outcomes or prove to be efficient uses of time initially.
- However, with improved tooling and access to public models, there is potential for shorter development cycles leading to more useful results.
Academic Insights on Niche Tech
The speaker emphasizes the value of academic insights in niche technologies for ML research.
Academic and Research Organizations
- Academic institutions and research organizations provide valuable insights into niche technologies.
- Collaboration between academia and industry can lead to the development of innovative ML products.
Expanding PM Roles
- As more PMs are involved in bridging the gap between academia and industry, the potential for creating impactful ML-driven products increases.
- PMs play a crucial role in taking research ideas and finding ways to monetize them effectively.
Tooling and Public Models
The speaker discusses how tooling and public models can impact the success rate of ML investments.
Impact of Tooling
- Improved tooling in ML development can reduce time investment required for building models from scratch.
- Accessible public models allow for faster prototyping, potentially increasing the efficiency of ML projects.
Shorter Development Cycles
- With readily available tools and public models, shorter development cycles become feasible.
- This may encourage companies to invest in ML projects with shorter timeframes for achieving useful outcomes.
Recommended Blog: Marginal Revolution
In this section, the speaker mentions Tyler Cowen's blog, Marginal Revolution, as a great resource for insights from research papers. The speaker also highlights Tyler Cowen's expertise and enthusiasm for AI.
Tyler Cowen's Blog: Marginal Revolution
- Tyler Cowen's blog, Marginal Revolution, is recommended as a valuable source for insights from research papers.
- Tyler Cowen is known for his intelligence and passion for AI.
Course Overview
In this section, the interviewer asks about the framework of the course being discussed. The speaker provides an overview of the course structure and its target audience.
Course Framework
- The course is three weeks long and designed for aspiring or current PMs who want to incorporate AI solutions into their work.
- Week one focuses on introducing the product development lifecycle in general and how it differs when working with AI.
- Idea creation is also covered in week one, emphasizing the importance of thinking beyond what users currently know they want.
- Subsequent weeks delve deeper into topics such as productionizing AI solutions, collaborating with research scientists, building trust with partners, and paving a path towards becoming an AI PM.
Collaboration and Influence in the Course
In this section, the speaker discusses specific topics covered in the course related to product development with AI. They highlight collaboration with research scientists and emphasize the importance of influencing them.
Course Topics: Collaboration and Influence
- Collaboration with research scientists is explored in detail during the course.
- Students learn how to partner effectively with researchers and gain their trust to convert precious research into practical products.
- The importance of influence skills in convincing researchers to align their work with product goals is emphasized.
Course Workshops
In this section, the interviewer asks about the most exciting workshop in the course. The speaker explains that throughout the workshops, participants work on their own AI product and present it at the end.
Exciting Workshop: Presenting AI Products
- Participants have homework throughout the course where they develop their own AI product from start to finish.
- At the end of the course, participants present their work and receive feedback.
- The speaker finds this workshop particularly exciting as it showcases participants' achievements and enthusiasm.
Example Projects
In this section, the interviewer asks for examples of projects built during or after the course. The speaker mentions a team that created a model to analyze x-rays and identify potential issues.
Example Project: X-ray Analysis
- One team built a machine learning model capable of analyzing x-rays obtained online.
- The model could identify potential issues with patients based on x-ray images.
- While not intended to replace medical professionals, this project demonstrates the possibilities of using AI in healthcare.
Recommended Tools
In this section, the interviewer asks about tools recommended for building AI solutions. The speaker mentions AutoML offered by Google Cloud as a tool for training custom machine learning models with minimal coding knowledge required.
Recommended Tool: AutoML
- AutoML by Google Cloud is recommended as a tool for training high-quality custom machine learning models without extensive coding knowledge.
- Users need to provide a large dataset of photos or images relevant to their application.
- AutoML simplifies model training and can be used for various applications, as demonstrated by a company using it to maintain wind turbines through drone imagery analysis.
Building a Course
In this section, the speaker discusses the process of building a course and shares insights on audience research, course duration, and personal engagement.
Building a Course
- The speaker treated creating their course like developing a product.
- They conducted audience research by reaching out to potential learners and gathering information on what they wanted to learn.
- There were several iterations in defining the target audience for the course, initially focusing on software engineers wanting to become AI product managers but later expanding to include product managers wanting to specialize in AI.
- It is important to find the right audience and understand their specific needs.
- The duration of the course should be carefully considered. One week may be too short, two weeks rushed, while three weeks allows for more comprehensive learning and community building.
- Building personal relationships with each learner is crucial. The speaker messaged everyone individually, answered questions, and addressed concerns to build trust.
Evolving the Course
In this section, the speaker discusses making changes and additions to the course based on feedback and evolving trends.
Course Evolution
- Bonus sections were added based on feedback from learners. For example, a bonus section was added about GPT-3 training after receiving questions from new cohorts.
- The speaker emphasizes that courses need to adapt and evolve as technology advances rapidly.
Encouragement for Creating Courses
In this section, the speaker encourages others to create their own courses and highlights the value of teaching.
Creating Courses
- Initially skeptical about people wanting to learn from them, the speaker realized that sharing knowledge can be game-changing for others.
- Creating courses allows individuals to share their expertise and make a positive impact on others' lives.
- Teaching and explaining concepts helps solidify one's own understanding and learn even more in the process.
Lightning Round
In this section, the speaker answers rapid-fire questions.
Lightning Round
- Recommended books include "Inspired" by Marty Cagan, which focuses on creating products that people love, and the workbook "The Adventures of Women in Tech" by Alana Karen, aimed at helping women navigate working in tech.
- A favorite podcast is "Boz to the Future," hosted by Andrew "Boz" Bosworth, CTO of Meta.
- The speaker enjoyed watching the TV show "The White Lotus" and recommends it.