AutoGen Studio Tutorial - NO CODE AI Agent Builder (100% Local)
Introduction to Autogen Studio
In this section, the speaker introduces Autogen Studio, a tool developed by the Microsoft research team behind Autogen, an AI agent project. Autogen Studio allows users to create AI agent teams easily and is fully open source.
Installing and Setting Up Autogen Studio
- Autogen Studio can be installed locally and powered by ChatGPT (OpenAI models) or local models.
- The setup uses the conda package manager; install it if it is not already installed.
- Run `conda create -n AG python=3.11` to create a new conda environment named "AG" for Autogen.
- Activate the new environment with `conda activate AG`.
- Install Autogen Studio with `pip install autogenstudio`.
- Obtain an API key from OpenAI by creating a new key in the API keys section of your OpenAI account.
- Export the API key in your terminal: `export OPENAI_API_KEY=<your_api_key>`.
- Spin up Autogen Studio with `autogenstudio ui --port 8081`.
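The export-and-launch steps above can also be sketched from Python; a minimal sketch, where the API key value is a placeholder and the actual launch line is left commented out:

```python
import os

# Set the OpenAI API key for this process and its children
# (the Python equivalent of `export OPENAI_API_KEY=...` in the shell).
os.environ["OPENAI_API_KEY"] = "sk-your-key-here"  # placeholder, not a real key

# The command the tutorial uses to start the Autogen Studio UI on port 8081.
cmd = ["autogenstudio", "ui", "--port", "8081"]

# import subprocess; subprocess.run(cmd)  # uncomment to actually launch
print(" ".join(cmd))
```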
Exploring Autogen Studio
- Access Autogen Studio through the provided URL (localhost).
- Autogen Studio provides a user-friendly interface for creating AI agents and workflows.
- Skills are tools that can be given to AI agents and teams, usually written in code.
- Default skills include generating images and finding papers on arXiv.
- Agents are individual AI entities with roles, tools, and tasks.
- Default agents include Primary Assistant and User Proxy.
- Workflows combine agents and tasks to accomplish specific goals.
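Since a skill is just a Python function the agents can call, here is a minimal illustrative example of the shape a skill takes; the name and behavior are hypothetical, not one of the bundled default skills:

```python
# A minimal example of what an Autogen Studio "skill" looks like: a plain
# Python function with a docstring describing what it does, so agents can
# decide when to call it. This particular skill is illustrative only.

def word_count(text: str) -> int:
    """Count the number of whitespace-separated words in `text`."""
    return len(text.split())

print(word_count("Ethiopian coffee freshly brewed"))  # → 4
```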
Autogen Studio Settings and Options
In this section, the speaker walks through the various settings and options available in Autogen Studio.
Description and Auto Replies
- Autogen Studio allows users to create AI agents for various tasks.
- The speaker suggests watching their video for a detailed breakdown of Autogen's features.
- Users can define the maximum consecutive auto replies for their agents.
Modes and System Messages
- Autogen offers three human input modes: never, only on terminate, and always (on every step).
- Users can define a system message to control agent behavior.
Models and Skills
- Users can add multiple models to an agent, which are daisy-chained together.
- The first model in the list is the default model for the agent.
- Skills are pieces of code that agents can run.
- Users can add skills to an agent or replace them with existing ones.
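The settings above map closely onto AutoGen's agent configuration. A hedged sketch of such a configuration as a plain dict; the field names follow AutoGen's ConversableAgent parameters, and the exact format Autogen Studio stores may differ:

```python
# Sketch of the agent settings described above: auto-reply cap, human input
# mode, system message, and an ordered model list (the first is the default).
agent_config = {
    "name": "primary_assistant",
    "human_input_mode": "NEVER",      # NEVER / TERMINATE / ALWAYS
    "max_consecutive_auto_reply": 8,  # cap on back-to-back auto replies
    "system_message": "You are a helpful assistant. Reply TERMINATE when done.",
    "llm_config": {
        # Models are daisy-chained: tried in order, first is the default.
        "config_list": [
            {"model": "gpt-4"},
            {"model": "gpt-3.5-turbo"},
        ]
    },
}
```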
Playground and Sessions
- The playground is where users test agent workflows.
- Sessions represent fixed amounts of time for an agent team to accomplish a task asynchronously.
- Users can create sessions and choose specific workflows for testing.
Publishing Results
- Agents' results can be published on the web from the playground.
- Users have options to delete sessions or save results to files.
Testing Agent Workflows
This section focuses on testing agent workflows using the playground feature in Autogen Studio. The speaker demonstrates how to create sessions, choose workflows, and complete tasks.
Visualization Agent Workflow
- A session represents a fixed amount of time for an agent team to accomplish a task.
- The speaker creates a new session using the visualization agent workflow as an example.
Task: Stock Price Plot
- User input: "Plot the chart of Nvidia and Tesla stock price for 2023. Save the result to a file named nvidia_tesla.png."
- Autogen Studio pings GPT-4 to complete the task.
- The waiting icon indicates that the task is in progress.
- The speaker suggests streaming results in real-time instead of waiting for completion.
Result and Agent Messages
- The speaker shows the agent messages exchanged during the task.
- User proxy agent represents user input.
- Visualization assistant creates a plan, writes code, fetches data, and runs visualization.
- Results include stock_data.csv (the fetched data), nvidia_tesla.png (the visualization), and the generated code in plot_stock_chart.py and fetch_stock_data.py.
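As a self-contained stand-in for the artifacts the agent produced, the sketch below writes a stock_data.csv of the same shape. The prices are made-up placeholders; a real run would fetch quotes with a market-data library such as yfinance and plot them with matplotlib:

```python
import csv

# Illustrative stand-in for fetch_stock_data.py writing stock_data.csv.
rows = [
    ("date", "NVDA", "TSLA"),
    ("2023-01-03", 143.15, 108.10),  # placeholder values, not real quotes
    ("2023-12-29", 495.22, 248.48),  # placeholder values, not real quotes
]

with open("stock_data.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)
```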
Creating Custom Agent Workflows
This section explains how to create custom agent workflows using Autogen Studio. The speaker demonstrates creating a new workflow and troubleshooting issues with missing skills.
Travel Agent Group Workflow
- The speaker creates a new travel agent group workflow as an example.
Task: Paint a Picture
- User input: "Paint a picture of Ethiopian coffee freshly brewed in a tall glass cup."
- Autogen Studio attempts the task but fails because the workflow's agents lack an image-generation skill.
- To fix this, the speaker switches to the general agent workflow that includes the "generate images" skill.
Task Retry with General Agent Workflow
- User input: "Paint a picture of Ethiopian coffee freshly brewed in a tall glass cup."
- Autogen Studio successfully generates the image using the DALL·E model.
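A hedged sketch of what a "generate images" skill might do under the hood: call OpenAI's image API (DALL·E). The function below only builds the request payload so the example stays offline; a real skill would send it with the openai client and save the returned image:

```python
# Build an OpenAI image-generation request payload. The field names follow
# OpenAI's images API; actually sending it requires the openai client and
# a valid API key.
def build_image_request(prompt: str, size: str = "1024x1024") -> dict:
    return {"model": "dall-e-3", "prompt": prompt, "size": size, "n": 1}

req = build_image_request("Ethiopian coffee freshly brewed in a tall glass cup")
print(req["model"])
```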
Conclusion
Autogen Studio provides users with powerful tools for creating AI agents and testing workflows. Users can define models, skills, and system messages for their agents, as well as publish results on the web. The playground feature allows for easy testing of different workflows and tasks. Troubleshooting missing skills can be resolved by selecting appropriate agent teams or workflows that include required skills.
Using Ollama and LiteLLM Locally
In this section, the speaker demonstrates how to use Ollama and LiteLLM locally. The necessary steps and tools required for local usage are explained.
Installing Ollama and Downloading a Model
- To run models locally, two tools are needed: Ollama and LiteLLM.
- Install Ollama by downloading it from the official website and following the installation process.
- After installation, a llama icon should appear in the system tray.
- Download a model by running `ollama run mistral`. This downloads the Mistral model (approximately 4 GB in size).
Installing LiteLLM and Fixing a Gunicorn Issue
- Install LiteLLM with `pip install litellm --upgrade`.
- If an error occurs about a missing module called gunicorn, fix it by running `pip install gunicorn`.
Setting Up a Server with the Mistral Model
- Start a server running Mistral, powered by Ollama, with `litellm --model ollama/mistral`.
- The server spins up at localhost:8000.
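Once LiteLLM is serving Mistral at localhost:8000, any OpenAI-compatible client can be pointed at it by overriding the base URL. A sketch of the kind of model entry an agent would use; the field names follow the common OpenAI-compatible convention and are an assumption about the exact format:

```python
# Model configuration pointing an OpenAI-style client at the local
# LiteLLM proxy instead of api.openai.com.
local_config = {
    "model": "ollama/mistral",
    "base_url": "http://localhost:8000",
    "api_key": "not-needed",  # the local proxy does not check the key
}
print(local_config["base_url"])
```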
Creating Agents and Workflows
- Create a new agent named "Mistral Assistant" powered by the local Mistral model.
- Create a workflow for Mistral, using "gpt4" as the receiver agent name.
- Customize agent descriptions and system messages as desired.
Testing with Playground
- Use the playground to test the Mistral workflow.
- Verify that interactions such as asking for jokes or writing code work correctly.
Running Multiple Models Locally
This section explains how to run multiple models locally using Ollama.
Running Multiple Models with Ollama
- Run `ollama run llama2` to download another model with Ollama.
- Once the download is complete, open a new terminal tab, activate the environment with `conda activate AG`, and start LiteLLM with `litellm --model ollama/llama2`.
- Follow the same steps as before to set up a server with the newly downloaded model.
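To serve both models at once, each LiteLLM instance needs its own port (via LiteLLM's `--port` flag). The tutorial does not name a port for the second model, so 8001 below is an assumption; a sketch of the two launch commands:

```python
# Map each locally served model to its own port. 8000 matches the Mistral
# server from earlier; 8001 is an assumed choice for the second server.
servers = {
    "ollama/mistral": 8000,
    "ollama/llama2": 8001,  # assumed port, not stated in the tutorial
}
for model, port in servers.items():
    print(f"litellm --model {model} --port {port}")
```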
Fine-Tuned Models and Sign-Out
In this section, the speaker discusses finding the right fine-tuned model for specific tasks and mentions the sign out functionality in Autogen Studio.
Finding the Right Fine-Tuned Model
- Autogen Studio allows users to find the appropriate fine-tuned model for their specific task.
- The speaker highlights that Autogen Studio has a sign out functionality, but when clicked, it prompts users to implement their own logout logic.
- Users can set up their own authentication within Autogen Studio, enabling them to share projects with their team.
Closing Thoughts
The speaker expresses their admiration for Autogen Studio and invites viewers to provide feedback and suggestions for future content.
Impressions of Autogen Studio
- The speaker is highly impressed by Autogen Studio.
- Viewers are encouraged to leave comments if they would like a follow-up or deeper dive into Autogen Studio.
- Feedback and suggestions on what viewers would like to see in future videos are welcomed.
- If viewers enjoyed the video, they are encouraged to consider giving it a like.