Agentic Thinkers Podcast | SurrealDB | Ep.4

The Rise of Personal AI and Agentic Systems

Introduction to Agentic AI

  • Discussion on the increasing interest in personal agent memory, personal AI, and LLM compute.
  • Emphasis on the trend away from extremely large models (e.g., 100 billion parameters), suggesting size may not correlate with effectiveness.

Overview of SurrealDB

  • Introduction to SurrealDB as a scalable multi-model database developed in Rust, aiming to unify various database types.
  • Mention of the podcast's host, Matt Elie, and his co-host Rhys discussing their roles on the show.

Origin Story of SurrealDB

  • Tobie Morgan Hitchcock shares that SurrealDB originated from frustrations with managing multiple databases for different applications.
  • Initial development was inspired by a golf analytics application that highlighted challenges across various data types.

Challenges Addressed by SurrealDB

  • Explanation of difficulties faced when using multiple database platforms (e.g., time series, document stores).
  • Complexity arose from needing to manage different data consistency models while scaling applications effectively.

Unique Features of SurrealDB

  • Description of how the golf analytics app tracked player movements and shots on a course, requiring diverse data management.
  • Introduction to temporal augmented radix trees as a unique feature addressing gaps in existing databases related to time-series data.

Technical Insights into Data Management

  • Discussion of the embedded storage engine, SurrealKV, used within SurrealDB for graph-like data storage with historical tracking capabilities.
  • Explanation of how versioning is implemented through timestamps, allowing efficient navigation between versions without extensive traversal (see the toy sketch below).
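To make the timestamp-based versioning concrete, here is a minimal toy sketch in Rust (illustrative only, not SurrealKV's actual implementation): keying an ordered map by (key, timestamp) makes a point-in-time read a single range lookup rather than a walk over version chains.

```rust
use std::collections::BTreeMap;

/// Toy versioned store: every write is stamped, and reads can ask for the
/// state of a key "as of" any timestamp. Illustrative only.
struct VersionedStore {
    data: BTreeMap<(String, u64), String>, // ordered by (key, timestamp)
}

impl VersionedStore {
    fn new() -> Self {
        Self { data: BTreeMap::new() }
    }

    /// Write a new version of `key`, stamped with `ts`.
    fn put(&mut self, key: &str, ts: u64, value: &str) {
        self.data.insert((key.to_string(), ts), value.to_string());
    }

    /// Read `key` as of `ts`: the entry with the greatest timestamp <= ts,
    /// found with one backwards step over an ordered range -- no traversal
    /// of every intermediate version.
    fn get_at(&self, key: &str, ts: u64) -> Option<&str> {
        self.data
            .range((key.to_string(), 0)..=(key.to_string(), ts))
            .next_back()
            .map(|(_, v)| v.as_str())
    }
}

fn main() {
    let mut store = VersionedStore::new();
    store.put("player:1/shot", 100, "hole 3, fairway");
    store.put("player:1/shot", 200, "hole 3, green");

    // Time-travel reads:
    assert_eq!(store.get_at("player:1/shot", 150), Some("hole 3, fairway"));
    assert_eq!(store.get_at("player:1/shot", 250), Some("hole 3, green"));
    println!("historical reads ok");
}
```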

Database Development Journey

Initial Challenges and Learning Curve

  • The speaker discusses the transition from an initial database design to a combination of B+ tree and LSM tree for improved performance and scalability, highlighting the learning process involved.
  • Reflecting on the daunting task of writing a database from scratch, the speaker references Bob Miner’s work on Oracle in the 70s as a significant historical undertaking.
  • The speaker admits that if they had known the challenges ahead, they might not have started building the database, emphasizing that prior experience with various databases did not prepare them for internal workings.
  • Conceptualization began in 2015-2016, with open-sourcing occurring in 2022; this period included extensive internal use before public release.

Market Positioning and Comparison

  • The discussion shifts to market competition among database providers like MongoDB and PostgreSQL, questioning how their product compares to these established companies.
  • The speaker explains that their database handles multi-model workloads (graph, document) in a single engine, which other databases typically do not support.

Unique Features of SurrealDB

  • SurrealDB allows users to query JSON-like data using an SQL-like language while supporting both schemaless and schemafull operation, similar to PostgreSQL or MySQL.
  • It is designed from the ground up for multi-model data handling (time series, graph, document), enabling operation at scale across large clusters storing petabytes of data (see the sketch below).
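A minimal sketch of what this looks like from the Rust SDK (the server address, credentials, and `test` namespace are placeholders; the calls follow the SDK's commonly documented pattern, so treat exact signatures as assumptions):

```rust
use surrealdb::engine::remote::ws::Ws;
use surrealdb::opt::auth::Root;
use surrealdb::Surreal;

#[tokio::main]
async fn main() -> surrealdb::Result<()> {
    // Connect to a locally running SurrealDB server over WebSocket.
    let db = Surreal::new::<Ws>("127.0.0.1:8000").await?;
    db.signin(Root { username: "root", password: "root" }).await?;
    db.use_ns("test").use_db("test").await?;

    // Schemaless by default: create a document with arbitrary JSON-like fields.
    db.query(r#"CREATE person:tobie SET name = 'Tobie', tags = ['founder', 'golf']"#)
        .await?;

    // ...then query the same data with SQL-like syntax.
    let mut response = db
        .query("SELECT name, tags FROM person WHERE name = 'Tobie'")
        .await?;
    let rows: Vec<serde_json::Value> = response.take(0)?;
    println!("{rows:?}");
    Ok(())
}
```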

Multimodal vs. Multimodel Data Handling

  • The distinction between multimodal (different ways to query data) and multimodel (supporting multiple types of data structures) is clarified; both terms are often confused in industry discussions.
  • While files such as audio or video can be stored within the system, querying these file types directly is not supported.

Integration and Transaction Management

  • Unlike competitors who may integrate different systems through acquisitions, SurrealDB runs all functionality within a single engine, ensuring consistency across transactions.
  • All queries run under either a read or a write transaction that maintains ACID properties; once a write is confirmed, the inserted data is immediately visible to subsequent transactions (see the sketch below).
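A sketch of that behaviour, using the same placeholder connection as above (the SurrealQL `BEGIN`/`COMMIT` syntax is documented; the account records are invented for illustration):

```rust
use surrealdb::engine::remote::ws::Ws;
use surrealdb::opt::auth::Root;
use surrealdb::Surreal;

#[tokio::main]
async fn main() -> surrealdb::Result<()> {
    let db = Surreal::new::<Ws>("127.0.0.1:8000").await?;
    db.signin(Root { username: "root", password: "root" }).await?;
    db.use_ns("test").use_db("test").await?;

    // Everything between BEGIN and COMMIT applies atomically: either both
    // balance updates happen, or neither does.
    db.query(
        r#"
        BEGIN TRANSACTION;
        CREATE account:one SET balance = 1000;
        CREATE account:two SET balance = 0;
        UPDATE account:one SET balance -= 100;
        UPDATE account:two SET balance += 100;
        COMMIT TRANSACTION;
        "#,
    )
    .await?;

    // Once committed, the write is immediately visible to later transactions.
    let mut response = db.query("SELECT balance FROM account:two").await?;
    let rows: Vec<serde_json::Value> = response.take(0)?;
    println!("{rows:?}");
    Ok(())
}
```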

Understanding Data Consistency in AI and Database Management

The Importance of Data Consistency

  • In the context of AI, vector search databases are fast but may lack consistency, which is crucial for applications requiring transaction integrity.
  • Ensuring data consistency across various types of databases (vector, graph) is essential for understanding relationships between accounts and transactions.
  • Unlike systems that relax consistency in exchange for speed, certain databases maintain transactional integrity regardless of scale.

Transaction Management Challenges

  • Managing transactions across different database models (SQL vs. graph vs. AI vector embeddings) presents complexity in ensuring ACID compliance.
  • Some vector stores sacrifice consistency for performance; this trade-off can affect how recent data is accessed during operations.

Query Optimization Across Different Data Types

  • Optimizing queries across diverse data types without offloading to separate engines enhances performance and simplifies management.
  • Providers that integrate multiple services face scaling challenges due to differing operational characteristics among those services.

Scalability and System Design

  • A unified system capable of scaling from a single node to many nodes is advantageous for handling large datasets in organizations.
  • Balancing specialization versus generalization in database design poses challenges; covering multiple models can dilute depth in any one area.

Trade-offs in Database Functionality

  • Databases inherently involve trade-offs; prioritizing concurrency may lead to sacrificing other features like consistency or background processing capabilities.
  • Building a database from the ground up allows more control over these trade-offs compared to using pre-built solutions with fixed structures.

Performance Considerations with Joins

  • In-memory graph structures typically offer better performance but come with limitations on scalability and transaction support.
  • The discussion raises the question of how efficient joins really are within databases, highlighting many developers' preference to avoid them.

Understanding Data Modeling and Its Evolution

The Role of Joins in Data Management

  • Joins are efficient for single table lookups, especially when written correctly with appropriate indexes.
  • The advent of document databases like MongoDB required data duplication or denormalization to address scalability challenges.
  • Traditional data modeling often does not reflect how humans think about relationships; we conceptualize multi-level deep relationships rather than simple tables.
  • While joins have been extensively researched for performance and efficiency, they can become cumbersome as the complexity of data increases beyond two tables.
  • Advocating for graph queries allows a more natural traversal of data that aligns with human thought processes (see the sketch below).
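For instance, a relationship that would need a join table and explicit join conditions in SQL reads as a direct multi-hop traversal in SurrealQL (same placeholder connection as earlier sketches; the `knows` edge and record names are invented for illustration):

```rust
use surrealdb::engine::remote::ws::Ws;
use surrealdb::opt::auth::Root;
use surrealdb::Surreal;

#[tokio::main]
async fn main() -> surrealdb::Result<()> {
    let db = Surreal::new::<Ws>("127.0.0.1:8000").await?;
    db.signin(Root { username: "root", password: "root" }).await?;
    db.use_ns("test").use_db("test").await?;

    // Create two people and a typed edge between them, instead of a join table.
    db.query(
        r#"
        CREATE person:alice SET name = 'Alice';
        CREATE person:bob SET name = 'Bob';
        RELATE person:alice->knows->person:bob SET since = time::now();
        "#,
    )
    .await?;

    // The multi-hop traversal reads like the relationship itself: who do
    // the people Alice knows, know? No explicit join conditions required.
    let mut response = db
        .query("SELECT ->knows->person->knows->person AS friends_of_friends FROM person:alice")
        .await?;
    let rows: Vec<serde_json::Value> = response.take(0)?;
    println!("{rows:?}");
    Ok(())
}
```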

Shifts in Data Structuring Approaches

  • Normalization has historically been crucial, but the rise of unstructured data due to AI and big data is changing this perspective.
  • There is a concern that people may become complacent with structuring their data, relying on unstructured formats without proper organization.
  • The evolution towards various database types (e.g., graph databases, time series databases) reflects the need for diverse approaches in modern applications.

Importance of Time in Data Context

  • AI's influence necessitates understanding different modalities such as vector search combined with traditional full-text search methods.
  • Time has emerged as a critical factor in contextualizing user queries, shifting from a secondary consideration to a fundamental aspect of data relevance.
  • Understanding temporal context is essential for agents responding to inquiries based on historical email interactions or ticketing systems.

Challenges and Opportunities in Rapid Development

  • Rapid development facilitated by AI tools can lead to improperly structured data storage if not carefully managed.
  • Developers must maintain an understanding of best practices despite the speed at which applications are built using AI-generated code.

Balancing Flexibility and Structure

  • Utilizing graph databases allows for flexible document storage while still accommodating schema requirements where necessary.
  • A balance between schemaless designs and structured schemas is vital for application functionality, ensuring clarity even within complex JSON-like formats (see the sketch below).
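A sketch of mixing the two modes, with a schemafull table enforcing types and constraints next to a schemaless one (same placeholder connection; `ticket` and `note` are invented names):

```rust
use surrealdb::engine::remote::ws::Ws;
use surrealdb::opt::auth::Root;
use surrealdb::Surreal;

#[tokio::main]
async fn main() -> surrealdb::Result<()> {
    let db = Surreal::new::<Ws>("127.0.0.1:8000").await?;
    db.signin(Root { username: "root", password: "root" }).await?;
    db.use_ns("test").use_db("test").await?;

    // Enforce structure only where it matters: `ticket` is schemafull with
    // typed, asserted fields, while `note` stays schemaless for free-form data.
    db.query(
        r#"
        DEFINE TABLE ticket SCHEMAFULL;
        DEFINE FIELD subject ON TABLE ticket TYPE string;
        DEFINE FIELD priority ON TABLE ticket TYPE int ASSERT $value >= 1 AND $value <= 5;
        DEFINE TABLE note SCHEMALESS;
        "#,
    )
    .await?;

    // An out-of-range priority violates the field assertion and is rejected.
    let rejected = db
        .query("CREATE ticket SET subject = 'Login fails', priority = 9")
        .await?
        .check()
        .is_err();
    println!("schemafull rejection: {rejected}");
    Ok(())
}
```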

Understanding the Importance of Temporal Data in AI Applications

The Role of Time in Data Management

  • The schema behind data is crucial for making applications production-ready, especially as various database types become more centralized and integral to the context layer for agents and AI applications.
  • Users expect timely responses when querying data; outdated information can lead to negative experiences, highlighting the necessity for real-time data updates.
  • Data relevance diminishes over time, necessitating mechanisms to track when data becomes obsolete or less relevant, which is critical for accurate AI outputs.

Different Aspects of Time in Data

  • Various forms of time are significant in data management: determining relevance, tracking historical states at specific times, and ensuring consistent input for AI models despite changing datasets.
  • Understanding how organizational changes affect data over time is essential; this includes auditing practices and compliance with data protection regulations.

Evolving Practices in Data Storage

  • Many organizations overlook the importance of temporal aspects in their data strategies; however, this focus is becoming increasingly vital as agents require nuanced understanding of past and present events.
  • Historical methods like journaling have been inefficient due to excessive storage needs; modern approaches aim to optimize these processes by capturing only necessary changes rather than entire records.

Efficient Data Handling Techniques

  • SurrealDB offers a more optimal way to manage historical data by storing differences (diffs), akin to version control systems like Git, rather than duplicating entire datasets.
  • Traditional methods often resulted in inefficiencies; newer techniques emphasize capturing only what has changed over time while maintaining an audit trail (see the toy sketch below).
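A toy Rust sketch of the idea (illustrative only, not SurrealDB's storage format): keep the current document plus an ordered log of per-field diffs, so history costs only what changed and the log doubles as an audit trail.

```rust
use std::collections::BTreeMap;

type Doc = BTreeMap<String, String>;

/// One recorded change: which field changed, when, and its old/new values.
struct Diff {
    ts: u64,
    field: String,
    old: Option<String>,
    new: Option<String>,
}

struct History {
    current: Doc,     // latest state, kept hot for normal reads
    diffs: Vec<Diff>, // append-only audit trail of changes
}

impl History {
    fn new() -> Self {
        Self { current: Doc::new(), diffs: Vec::new() }
    }

    /// Record only the delta, not a full copy of the document.
    fn set(&mut self, ts: u64, field: &str, value: &str) {
        let old = self.current.insert(field.to_string(), value.to_string());
        self.diffs.push(Diff {
            ts,
            field: field.to_string(),
            old,
            new: Some(value.to_string()),
        });
    }

    /// Reconstruct the document as of `ts` by undoing diffs newer than `ts`.
    fn as_of(&self, ts: u64) -> Doc {
        let mut doc = self.current.clone();
        for d in self.diffs.iter().rev().filter(|d| d.ts > ts) {
            match &d.old {
                Some(v) => doc.insert(d.field.clone(), v.clone()),
                None => doc.remove(&d.field),
            };
        }
        doc
    }
}

fn main() {
    let mut h = History::new();
    h.set(100, "status", "open");
    h.set(200, "status", "closed");

    assert_eq!(h.as_of(150).get("status").map(String::as_str), Some("open"));
    assert_eq!(h.as_of(250).get("status").map(String::as_str), Some("closed"));

    // The diff log is the audit trail.
    for d in &h.diffs {
        println!("ts={} {}: {:?} -> {:?}", d.ts, d.field, d.old, d.new);
    }
}
```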

Balancing Performance and Cost

  • Querying capabilities are increasingly important as organizations seek efficient ways to store large volumes of historical data without incurring high costs associated with active storage solutions.
  • A new distributed storage engine allows for cost-effective management of vast amounts of inactive historical data by utilizing object storage instead of expensive SSD or hard drive space.

Exploring Agentic AI and Database Functionality

Understanding Agentic AI in Databases

  • The speaker discusses the evolving terminology around agentic AI, emphasizing the need for clarity as concepts rapidly change.
  • They introduce the concept of a "data agent," which allows adding functionality directly to the database without needing to scale out to serverless platforms, thus reducing latency.
  • The ability to run business logic immediately within the database is highlighted, enabling real-time responses to data changes (see the sketch after this list).
  • Developers can build and test functions in their CI pipeline, outside of a running database, before integrating and versioning them.
  • This approach enhances performance and capabilities by integrating external API calls (e.g., Anthropic, OpenAI) directly into database operations.
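A sketch of logic living next to the data: a named SurrealQL function plus a table event that fires on record creation (same placeholder connection; the function and event bodies are invented examples of the documented DEFINE FUNCTION / DEFINE EVENT syntax):

```rust
use surrealdb::engine::remote::ws::Ws;
use surrealdb::opt::auth::Root;
use surrealdb::Surreal;

#[tokio::main]
async fn main() -> surrealdb::Result<()> {
    let db = Surreal::new::<Ws>("127.0.0.1:8000").await?;
    db.signin(Root { username: "root", password: "root" }).await?;
    db.use_ns("test").use_db("test").await?;

    // Business logic lives next to the data: a named function, plus an
    // event that runs inside the database whenever a user record is created.
    db.query(
        r#"
        DEFINE FUNCTION fn::grade($score: number) {
            RETURN IF $score >= 0.8 { 'high' } ELSE { 'low' };
        };

        DEFINE EVENT on_signup ON TABLE user WHEN $event = 'CREATE' THEN {
            CREATE audit SET user = $after.id, at = time::now();
        };
        "#,
    )
    .await?;

    // The function runs in the database, with no round trip to an app server.
    let mut response = db.query("RETURN fn::grade(0.93)").await?;
    let grade: Option<String> = response.take(0)?;
    println!("{grade:?}");
    Ok(())
}
```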

Non-deterministic Behavior and Local Language Models

  • The introduction of locally run large language models (LLMs) raises questions about non-deterministic behavior in databases, which could complicate traditional transactional processes.
  • The speaker confirms that while LLM outputs may vary each time due to their nature, developers retain control over how they handle these outputs within applications.
  • They assert that whether LLM calls are made through databases or other programming languages, validation of results remains crucial for data integrity.
  • Developers have autonomy in deciding how to manage non-deterministic outputs from LLM integrations within transactions while ensuring adherence to schemas before data insertion.
  • The discussion emphasizes that while generalized models tend toward non-determinism, developers must navigate this complexity responsibly.

Future Directions with Local Models

  • There is an acknowledgment of a trend towards local model implementations as users seek more control over their data processing environments.
  • Smaller domain-specific models are noted for their constrained outputs compared to broader generalist models like OpenAI's offerings, potentially leading to more predictable behaviors in transactions.
  • However, integrating various model calls into transactions could introduce delays; thus, careful consideration is needed regarding transaction speed and efficiency.

Understanding Local Models and Their Importance

The Role of Local Models in Data Management

  • Local models can run for extended periods without affecting data integrity, as they do not alter data modified by concurrent transactions.
  • The architecture allows for concurrent background processing of indexes, enabling efficient handling of large tables without locking them.

Privacy and Data Storage Concerns

  • Running models locally enhances privacy compared to cloud-based solutions, where extensive personal data is often stored.
  • While cloud services provide benefits, local execution on devices reduces costs associated with data transfer and storage.

Cost Efficiency in Application Development

  • Developing applications locally can be significantly cheaper than relying on cloud resources, especially when considering time efficiency versus cost.
  • The ability to utilize local GPUs or models can lead to substantial savings while maintaining performance.

The Future of AI: Compact Models and Personalization

Trends in AI Model Development

  • There is a growing trend towards building personal AI agents that operate locally rather than relying solely on expansive cloud-based systems.
  • Advances in hardware, such as Apple's neural chips, are making it feasible to run powerful models directly on personal devices.

Model Size vs. Quality

  • Increasing model size does not necessarily correlate with improved output quality; context and accompanying data play a crucial role.
  • More compact models are becoming viable for local deployment, which will significantly influence the future landscape of AI technology.

Exploring Surrealism: A New Paradigm for Developers

What is Surrealism?

  • Surrealism enables developers to write functions in languages like Rust or JavaScript that compile into WebAssembly modules for database integration.

Benefits of Using WebAssembly Modules

  • These modules allow for event-driven actions within the database without tightly coupling logic to the database itself, enhancing modularity and version control (a hypothetical sketch follows below).
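A minimal, hypothetical sketch of the kind of Rust function that could be compiled to such a module. The export name, signature, and how Surrealism actually binds modules are not covered in the episode, so everything here is illustrative:

```rust
// lib.rs -- build with: cargo build --target wasm32-unknown-unknown --release
// (with crate-type = ["cdylib"] in Cargo.toml)

/// A hypothetical plugin function: clamp a model-produced relevance score
/// into [0.0, 1.0] before it is stored, so queries can rely on the range.
/// The `extern "C"` export is generic WebAssembly practice, not
/// Surrealism's documented ABI.
#[no_mangle]
pub extern "C" fn normalize_score(raw: f64) -> f64 {
    raw.clamp(0.0, 1.0)
}
```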

Integration with Machine Learning

  • Surrealism facilitates running machine learning models directly within the database via ONNX support, alongside API calls to external services like Claude or ChatGPT.

Community Engagement in Open Source Development

Embracing Community Contributions

  • Engaging with open-source communities is essential; many organizations prefer transparent development processes that allow them to understand how databases function.

Open Source Development and Database Architecture

The Importance of Open Source in Development

  • The database industry has shifted towards open-source systems, which are essential for competition against proprietary closed-source systems.
  • Building an open-source platform allows developers to provide feedback and contribute enhancements, fostering a collaborative environment.
  • Despite the complexity of contributing to large codebases (e.g., 500,000 lines of Rust), modern tools like Claude and ChatGPT simplify coding for contributors.

Plugin Architecture and Surreal MCP

  • Companies often build plugin ecosystems around their open-source products; HashiCorp's Terraform is cited as a successful example, and Surrealism has strong roots in this space.
  • SurrealDB is built from the ground up with its own binary protocol and query language, necessitating custom SDKs for various programming languages.
  • Unlike PostgreSQL, which benefits from a vast existing ecosystem, SurrealDB must develop its own infrastructure components like SDKs and servers.

Managing Databases with MCP

  • MCP (Model Context Protocol) support enables users and agents to manage databases effectively, whether starting new instances or communicating with existing ones.
  • The approach to building applications varies significantly based on their complexity—ranging from simple chat apps to comprehensive organizational knowledge management systems.

Virtual File Systems and Data Management

  • SurrealDB includes a virtual file system that allows agents to store various document types efficiently while leveraging familiar Unix-like operations.
  • This system supports extensive data storage capabilities, scaling up to terabytes across multiple compute nodes.

Enhancements through Plugins and Markdown Usage

  • Surrealism facilitates running plugins directly within the database, including LLM calls and local model execution without compromising data consistency.
  • The ecosystem around the database includes necessary add-ons that enhance application development while maintaining core functionalities expected by enterprises.

Concerns about Logic Storage in Markdown

  • There is growing interest in using markdown for consistent interactions with models; however, concerns arise regarding logic being stored inefficiently across disparate folders rather than centralized repositories.

Understanding the Role of File Systems and Databases in AI

The Limitations of Flat Files

  • The speaker discusses the organization of data into folders and files, highlighting that while this method is effective, it may not be sufficient for complex systems.
  • A mention of using markdown files to store memory for projects indicates a growing trend towards simpler data management solutions, while acknowledging inherent limitations.
  • For basic applications like listing capital cities, flat files or spreadsheets can suffice; however, scalability becomes an issue as complexity increases.

Beyond Simple Structures

  • Emphasizes the need for developers to understand how entities within their organizations relate to each other, suggesting that simple file systems may not meet all needs.
  • Introduces vector search as a method for finding semantically similar items rather than relying solely on keyword matching, indicating a shift in data retrieval methods.

The Evolution of Data Management

  • Discusses applications that visualize markdown documents as graphs, illustrating innovative ways to manage and interact with information.
  • Raises questions about whether understanding schemas is still necessary in an era where AI can determine data structures autonomously.

Trusting AI with Code and Data

  • Questions the future role of developers who might rely entirely on AI-generated code without understanding its workings or structure.
  • Explores whether it's feasible to trust agents with governance over data structures instead of requiring human oversight.

Balancing Data Sovereignty and Code Flexibility

  • Highlights concerns regarding the emotional attachment people have towards their data compared to their code, especially in light of regulations like GDPR.
  • Discusses schema's role in ensuring compatibility between applications but questions if humans will continue to prioritize schema understanding as reliance on AI grows.

Understanding the Role of LLMs and AI Native Concepts

The Importance of Schema in LLMs

  • Discussion on how the schema is managed by LLMs, emphasizing that understanding code may not be necessary for effective data management.
  • Platforms utilizing data will increasingly rely on AI agents rather than direct human interaction, highlighting a shift in data handling dynamics.
  • Importance of clearly labeling data attributes (e.g., expiry date vs. creation date) to ensure accurate interpretation by different LLMs.
  • Acknowledgment that while schema provides comfort regarding data structure, it may not always reflect the true nature of interactions with AI systems.

Evolving Definitions of AI Native

  • Introduction to the concept of "AI native," where individuals and organizations adapt to using AI tools without needing deep technical knowledge.
  • Definition of an "AI native" individual as someone who effectively utilizes AI tools like agents for various tasks, indicating a cultural shift towards embracing these technologies.
  • Emphasis on the need for flexibility in how developers interact with AI applications across different platforms and programming languages (e.g., Markdown documents, Python SDK).

The Spectrum of Being AI Native

  • Recognition that being "AI native" is a spectrum; everyone engages with AI differently based on their needs and capabilities.
  • Discussion about how organizations must consider their specific problems when integrating AI solutions, especially in sensitive areas like finance or defense.

Balancing Control and Autonomy in Data Management

  • Exploration of ethical considerations surrounding autonomous decision-making by AIs, particularly in high-stakes environments such as military applications.
  • Reflection on how trust in generated data impacts its use; concerns about accuracy and reliability are paramount when dealing with critical information.

The Pendulum Effect of Abstraction

  • Analysis of historical trends showing that society often swings between detailed understanding and abstraction; this affects knowledge retention within organizations as experienced personnel retire.
  • Call for balance between control over technology and allowing autonomy to foster innovation while ensuring accountability.

Rapid Advancements in Technology

  • Observations on how advancements in LLM capabilities can drastically reduce project timelines from months to mere minutes, underscoring the transformative potential of current technologies.

Understanding the Evolution of Database Architecture

The Need for Continuous Adaptation in Database Systems

  • The speaker emphasizes the necessity of understanding database outputs, noting that the landscape is rapidly changing and will continue to evolve significantly over time.
  • A discussion on rearchitecting a database solution highlights fundamental changes made to improve its inner workings, indicating a significant decision-making process behind these modifications.

Key Components of Database Architecture

  • The architecture is divided into three main components (see the skeleton sketch below):
      • Storage engine: manages writing, storing, and retrieving data from the key-value store.
      • Parser: processes incoming queries before passing them to the next component.
      • Query executor and planner: understands queries, plans their execution, and interacts with the storage engine.
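A toy skeleton of that three-way split (illustrative only, not SurrealDB's internals): text is parsed into an AST, the AST is planned into an executable plan, and the plan runs against a key-value storage engine.

```rust
use std::collections::BTreeMap;

// 1. Storage engine: a trivial key-value store.
type Storage = BTreeMap<String, String>;

// 2. Parser output: a minimal AST for `GET <key>` queries.
enum Ast {
    Get(String),
}

fn parse(query: &str) -> Result<Ast, String> {
    match query.split_whitespace().collect::<Vec<_>>().as_slice() {
        ["GET", key] => Ok(Ast::Get(key.to_string())),
        _ => Err(format!("unsupported query: {query}")),
    }
}

// 3. Planner + executor: turn the AST into a concrete plan, then run it.
enum Plan {
    PointLookup(String),
}

fn plan(ast: Ast) -> Plan {
    match ast {
        // A key access becomes a direct point lookup rather than a scan.
        Ast::Get(key) => Plan::PointLookup(key),
    }
}

fn execute(plan: Plan, storage: &Storage) -> Option<&String> {
    match plan {
        Plan::PointLookup(key) => storage.get(&key),
    }
}

fn main() {
    let mut storage = Storage::new();
    storage.insert("person:tobie".into(), r#"{"name":"Tobie"}"#.into());

    let ast = parse("GET person:tobie").expect("parse error");
    let result = execute(plan(ast), &storage);
    println!("{result:?}");
}
```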

Enhancements in Version 3.0

  • Version 3.0 includes optimizations in transaction handling within the storage engine while maintaining similar operational methods across the system.
  • The query parser has evolved from a basic syntax execution model (version one) to a more advanced recursive descent parser (version two), which significantly speeds up complex query processing.

Complexity of Query Language

  • The database's query language resembles a programming language rather than traditional SQL, allowing for complex operations but also introducing challenges in optimization due to its expressiveness.
  • Accessing data can involve multiple interpretations (e.g., array values or field names), complicating how queries are optimized and executed.

Innovations in Query Execution

  • Major changes have been made to the internal query engine; it now converts text queries into executable plans that can run operations in parallel when possible.
  • Instead of processing documents sequentially, improvements allow for batch processing of thousands of documents simultaneously, enhancing throughput across various types of queries.

Decision-Making Behind Architectural Changes

  • The transition towards rearchitecting was driven by reaching functional limits with version two; performance enhancements were necessary as previous versions lacked effective query planning capabilities.

Performance Improvements in Database Architecture

Enhancements in Query Performance

  • Rearchitecting the database has led to significant performance improvements, with some graph traversal queries up to 27 times faster.
  • SurrealQL allows for efficient query writing, reducing the need for the complex optimizations typically required in relational databases.

Advantages of Graph-like Structure

  • The graph-like nature of SurrealDB enables starting from a specific record (e.g., person:tobie) and traversing directly to related records without full table scans (see the sketch below).
  • This optimization enhances overall query performance, bringing previously slower queries up to par with faster ones.
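A sketch of why this avoids table scans: the record ID is addressed directly, and the traversal fans out from that one key (same placeholder connection as earlier sketches; the `wrote` edge and records are invented):

```rust
use surrealdb::engine::remote::ws::Ws;
use surrealdb::opt::auth::Root;
use surrealdb::Surreal;

#[tokio::main]
async fn main() -> surrealdb::Result<()> {
    let db = Surreal::new::<Ws>("127.0.0.1:8000").await?;
    db.signin(Root { username: "root", password: "root" }).await?;
    db.use_ns("test").use_db("test").await?;

    db.query(
        r#"
        CREATE person:tobie SET name = 'Tobie';
        CREATE article:surreal SET title = 'SurrealDB 3.0';
        RELATE person:tobie->wrote->article:surreal;
        "#,
    )
    .await?;

    // `person:tobie` is a direct key lookup -- no scan over the person
    // table -- and the traversal starts from that single record.
    let mut response = db
        .query("SELECT name, ->wrote->article.title AS titles FROM person:tobie")
        .await?;
    let rows: Vec<serde_json::Value> = response.take(0)?;
    println!("{rows:?}");
    Ok(())
}
```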

Challenges with Query Optimization

  • Poorly written queries can compound issues under load, often revealing inefficiencies only during production use.
  • The existence of a slow query log highlights ongoing challenges in maintaining optimal performance.

Transitioning from Legacy Databases

Compelling Reasons for Migration

  • Companies are motivated to move from traditional legacy databases due to factors like cost reduction and data consolidation.
  • Data sprawl complicates systems, leading to increased latency and reduced accuracy as multiple databases must be queried simultaneously.

Impact on Performance and Accuracy

  • Managing various database platforms increases complexity; querying across different types (graph, vector, document databases) can lead to inefficiencies.
  • Improved accuracy is crucial for AI agents that require precise responses; inefficient querying can hinder this goal.

Benefits for Large Enterprises

  • For Fortune 500 companies, reducing infrastructure complexity while improving application layer efficiency is vital.
  • Simplifying data management across diverse modalities (text, document, graph data types) enhances scalability and availability.

Crazy Database Architectures and Legacy Setups

Observations on Database Practices

  • The speaker reflects on the absurdities of certain database architectures, noting that while they may seem crazy, they are often the norm in many organizations.
  • Organizations frequently adopt systems that meet immediate needs but later find themselves bolting on additional providers or platforms, complicating their architecture over time.

Common Issues with Database Implementations

  • Many deployments vary significantly; for instance, relational databases are misused for tasks they're not suited for, leading to inefficiencies.
  • The complexity of ETL (Extract, Transform, Load) processes adds to the chaos as organizations struggle to maintain data consistency across various platforms.

Real-world Examples of Inefficient Systems

  • A notable case involved a utility client exporting an entire asset management SAP database into a text file nightly. This inefficient process took six hours and was prone to failure due to time constraints.
  • The rationale behind this setup was that the two systems did not communicate effectively; thus, this workaround had persisted for ten years despite its flaws.

Challenges with Legacy Code and Procedures

  • The discussion highlights issues like overly complex stored procedures—some reaching millions of lines—that become untouchable due to lack of understanding among current staff.
  • Such legacy code often starts small but grows unwieldy over time. Developers avoid touching it out of fear or uncertainty about its functionality.

Evolution of Database Management Roles

  • There is a shift in how developers interact with databases; previously, DBAs were seen as specialists who could handle everything in SQL without needing external development support.
  • As technology evolves, reliance on outdated practices creates "legacy debt," which continues to challenge modern development efforts and necessitates adaptation.

Future Implications for Development Practices

  • Concerns arise regarding job security as AI technologies advance; traditional roles may change significantly as automation takes over some database management functions.

Exploring the Future of AI and Database Design

The Role of Rust in Team Collaboration

  • Writing code in Rust can enhance team collaboration by breaking down silos between database specialists and software developers, fostering a more integrated approach to development.

AI's Impact on Design Optimization

  • Utilizing AI for database design and architecture can lead to optimized solutions, as it transcends traditional boundaries of expertise, allowing for a more holistic approach to system design.

Understanding Code with AI Assistance

  • The ability to quickly comprehend what a function does without delving into the code is revolutionary; this capability significantly reduces time spent on debugging and understanding complex systems.

Evolution of Legacy Systems

  • AI's capacity to write or modify legacy code (e.g., COBOL) indicates a shift in industry practices, potentially replacing specialized roles that were once essential for maintaining older systems.

Predictions for Data Layer Development

  • In the next few years, there will be an emphasis on privacy-focused computing and improved data platforms that can handle large-scale operations efficiently while ensuring data accuracy.

Privacy and Cost Efficiency in Computing

  • A growing focus on local computation for privacy reasons will drive innovations aimed at reducing costs associated with data processing while enhancing user trust.

The Importance of Data Quality

  • Organizations must prioritize high-quality data management as success increasingly hinges on accurate outputs from language models, particularly in sensitive contexts like defense.

Accuracy as a Business Imperative

  • Achieving over 98% accuracy in outputs is crucial; businesses relying on customer support agents must ensure reliability to avoid dissatisfaction and churn among users.

Future Trends in Database Technology

  • SurrealDB is positioned as a key player in the evolving landscape of databases designed for agentic applications, highlighting adaptability as essential for future success.

Understanding the Impact of LLM Decisions

The Importance of Trust in LLM Outputs

  • Decisions based on outputs from large language models (LLMs) can significantly impact lives, necessitating a strong understanding of their reliability.
  • LLMs are inherently nondeterministic; thus, it is crucial to provide them with high-quality and comprehensive data for better decision-making.
  • Just like humans perform better with adequate preparation, LLM performance improves with well-curated input data.

Reliability and Variability of Systems

  • While guaranteeing absolute accuracy in nondeterministic systems is impossible, a high level of trust can be established if the system performs reliably most of the time.
  • Different systems have varying levels of importance in decision-making processes, which will influence future developments.

Future Directions for SurrealDB

Recent Funding and Growth Prospects

  • SurrealDB recently secured Series A funding, indicating growth potential over the next 12 to 18 months as they adapt to rapid industry changes.

Product Development and Market Strategy

  • Upcoming features include Postgres wire compatibility, allowing users to query using familiar SQL syntax while leveraging graph-like data storage behind the scenes.
  • The focus is on building a go-to-market team to expand outreach beyond inbound inquiries from developers and organizations.

Enhancing Support for Large Organizations

  • As SurrealDB collaborates with larger organizations, ensuring database uptime and accuracy becomes essential for supporting AI applications effectively.

Cloud vs. On-Prem Solutions

Flexibility in Deployment Options

  • Organizations require flexibility regarding cloud or on-premises solutions due to varying control needs over their data infrastructure.

Historical Reflections on Early Days

  • Reflecting on early experiences can yield valuable insights into what could have been done differently during initial development phases.

What Would Founders Do Differently?

Insights on Startup Challenges and Best Practices

  • The speaker reflects on the numerous challenges faced by startup founders, emphasizing the rapid changes in the industry that require adaptability and foresight.
  • Unlike a decade ago when database choices were simpler, today's landscape offers various architectures for building applications, necessitating a more nuanced approach.
  • The importance of integrations is highlighted, as they are crucial for startups building from scratch to connect with existing ecosystems effectively.
  • Building every SDK and platform connector is resource-intensive, indicating that early investment in these areas could have been beneficial.
  • Understanding developers and users through active communication is essential; engaging with communities can provide valuable insights into user needs and preferences.

Best Practices for Building Agents

  • The speaker shares their interest in best practices for building agents, particularly from a non-technical perspective, highlighting the need for accessible resources.
  • They emphasize that best practices depend on one's coding expertise; platforms exist where users can create agents without deep technical knowledge.
  • Frameworks like Agno, Pydantic AI, and CrewAI facilitate agent development, while SurrealDB serves as an essential data layer beneath them.
  • Scaling agents poses challenges; organizations often struggle with accuracy and deployment at scale. Solutions are being sought to address these issues effectively.
  • The availability of Python and JavaScript frameworks allows users to build sophisticated applications quickly, showcasing how technology has evolved to simplify development processes.

Building a Business with Family: Insights and Experiences

The Journey of Creating an Agent

  • The speaker expresses excitement about learning and building their own agent for talent, highlighting the unexpected success of the project.
  • They relate their experience to that of a family business, referencing a small shoe cleaning venture they had with their brother during childhood.

Working Dynamics in Family Businesses

  • A question is posed about the challenges and experiences of building a business with a sibling, emphasizing personal anecdotes from working together.
  • The response reflects on the amazement felt when projects succeed quickly, underscoring the thrill of creating something functional.

Complementary Skill Sets

  • The speaker describes how they and their brother have distinct roles within their company—one focusing on branding and team management while the other drives growth and product vision.
  • They emphasize that having different skill sets prevents overlap, which can lead to conflicts in decision-making.

Importance of Trust in Partnerships

  • Trust is identified as crucial for successful partnerships; many startups fail due to founders having conflicting visions or falling out.
  • Personal experiences are shared regarding trust dynamics in previous companies, reinforcing its significance in maintaining healthy working relationships.

Navigating Conflicts While Maintaining Respect

  • The speaker acknowledges that disagreements are natural among siblings but emphasizes that mutual respect and shared vision facilitate smoother operations.
  • Despite occasional conflicts, they express satisfaction with their collaborative efforts over time.

Video description

In episode 4 we welcomed Tobie Morgan Hitchcock, co-founder and CEO of SurrealDB. Fresh off securing Series A funding, we spoke with Tobie about the business, AI agents, growth plans, and product shifts. SurrealDB is an AI-native, multi-model database built in Rust, designed to unify multiple data models into a single, powerful engine. It serves as a single data and logic layer for AI agents, knowledge graphs, real-time applications (e.g., recommendation engines, fraud detection systems), and OLTP applications requiring multiple data types, and is solidifying itself as one of the go-to database platforms for AI agents. We also welcomed Rhys Davies to the panel, moving from behind the camera to the hot seat and co-hosting the podcast going forward.