How To Use AutoGen With ANY Open-Source LLM FREE (Under 5 min!)

Getting Open-Source Models Working with Autogen

In this video, the speaker demonstrates how to use open-source models with Autogen. They introduce LM Studio as a tool for downloading and running open-source models on Windows or Mac. The process involves selecting a model, setting up a server, and connecting Autogen to the server.

LM Studio: Downloading Open-Source Models

  • LM Studio is a powerful tool that allows users to download and run any open-source model on Windows or Mac.
  • Visit lmstudio.ai to download the version for your operating system.
  • Search for the desired model within LM Studio's interface.
  • Choose the appropriate version of the model based on its compatibility and performance.
  • The speaker recommends using Mistral 7B Instruct version 0.1 with Autogen.
  • Download the selected model from LM Studio.

Setting Up a Local Server

  • Install LM Studio and launch it.
  • Click on the "Local Server" button in LM Studio's interface.
  • Select the desired model from the dropdown menu.
  • Start the server by clicking "Start server."

Connecting Autogen to the Local Server

  • Import Autogen into your code environment.
  • Modify the configuration settings in Autogen:
      • Keep the API type as "open_ai" (the local server mimics the OpenAI API).
      • Set the API base to "http://localhost:1234/v1" (the local server's address).
      • Set the API key to a placeholder such as "NULL," since no real key is needed for a local server.
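The configuration steps above can be sketched as a `config_list` for the classic `pyautogen` package. This is a minimal sketch: the port `1234` is LM Studio's default, and the `"NULL"` key is just a placeholder the OpenAI-compatible client accepts.

```python
# AutoGen config pointing at LM Studio's local OpenAI-compatible server.
config_list = [
    {
        "api_type": "open_ai",                    # keep the OpenAI API type
        "api_base": "http://localhost:1234/v1",   # LM Studio's default local address
        "api_key": "NULL",                        # placeholder; no real key needed locally
    }
]
```

This list is then passed to an agent via `llm_config={"config_list": config_list}`.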

Running Autogen with Open-Source Models

  • Define an assistant agent and user proxy in your code, specifying their roles and settings.
  • Execute your code, triggering interactions between the assistant agent and user proxy.
  • Use a system prompt like "You are a coder specializing in Python" for the assistant agent, and a task like "Write a Python method to output numbers 1 to 100" for the user proxy.
  • Monitor the inference happening in LM Studio's interface, observing the logs and outputs.

Challenges and Future Improvements

  • The speaker acknowledges that there are still challenges in making Autogen work as well with open-source models as it does with GPT-4.
  • Some issues may arise with prompt templates, causing incomplete or incorrect responses.
  • Ongoing efforts are being made to improve Autogen's performance and address these challenges.

Conclusion

The speaker concludes by highlighting the ease of using open-source models with Autogen through LM Studio. They encourage viewers to explore this approach further and share their experiences.

Video description

A short video on how to use any open-source model with AutoGen easily using LMStudio. I wanted to get this video out so you all can start playing with it, but I'm still figuring out how to get the best results using a non-GPT4 model. Enjoy :)

Join My Newsletter for Regular AI Updates πŸ‘‡πŸΌ
https://forwardfuture.ai/

My Links πŸ”—
πŸ‘‰πŸ» Subscribe: https://www.youtube.com/@matthew_berman
πŸ‘‰πŸ» Twitter: https://twitter.com/matthewberman
πŸ‘‰πŸ» Discord: https://discord.gg/xxysSXBxFW
πŸ‘‰πŸ» Patreon: https://patreon.com/MatthewBerman

Media/Sponsorship Inquiries πŸ“ˆ
https://bit.ly/44TC45V

Links:
AutoGen Beginner Tutorial - https://www.youtube.com/watch?v=vU2S6dVf79M
AutoGen Intermediate Tutorial - https://www.youtube.com/watch?v=V2qZ_lgxTzg
AutoGen - https://microsoft.github.io/autogen
LMStudio - https://lmstudio.ai/