ChatGPT Plugins: Build Your Own in Python!

OpenAI Custom Plugins for ChatGPT

In this video, the speaker introduces the release of custom plugins for ChatGPT by OpenAI. The speaker explains what these plugins are and how they work.

Introduction to Plugins

  • OpenAI has released custom plugins for ChatGPT.
  • Plugins can be accessed through the "Plugins" dropdown menu in ChatGPT.
  • Plugins are similar to agents, which give large language models access to tools that perform specific functions.

LangChain Docs Plugin

  • The LangChain docs plugin is demonstrated as an example.
  • Without the plugin enabled, a question about "LLMChain in LangChain" returns no results.
  • With the plugin enabled, a response is given that describes LangChain as a Python library that facilitates development using large language models, and provides links to documentation.

Building a Plugin

  • To build a plugin, start with the template application from OpenAI's chatgpt-retrieval-plugin repository.

Understanding the ChatGPT Retrieval Plugin

In this section, we will explore the directory of the ChatGPT Retrieval Plugin and understand its components.

Components of the Plugin

  • The plugin is a Docker container that deploys an API for ChatGPT to interact with.
  • The server directory contains two endpoints: /upsert and /query.
  • The API interacts with a Pinecone vector database and embeds the LangChain docs using an OpenAI embedding model.

How Components Work Together

  • The LangChain docs are downloaded via a Python notebook and sent to the /upsert endpoint, which converts the text into meaningful numerical representations (embeddings) using an OpenAI embedding model.
  • These embeddings are stored in a Pinecone vector database, which serves as a source of information for ChatGPT.
  • When ChatGPT receives a query, it sends it to the /query endpoint, which passes it through the same OpenAI embedding model to get a query vector.
  • Pinecone then returns documents from the embedded LangChain docs that are similar to the query vector; five documents are returned at a time through the API back to ChatGPT.
  • ChatGPT now has both the query and the retrieved context from Pinecone and the LangChain docs, allowing it to answer questions using new information received through the plugin's API. This API deployment is what enables all interaction between ChatGPT and the outside world via the plugin (see the sketch below).
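To make the flow concrete, here is a minimal sketch of the kind of request ChatGPT effectively makes against the plugin's /query endpoint. The URL and bearer token are placeholders for your own deployment, and the request body follows the retrieval plugin's query schema.

```python
import requests

# Placeholders — replace with your own deployment's URL and bearer token.
PLUGIN_URL = "https://your-app.ondigitalocean.app"
BEARER_TOKEN = "your-secret-token"

headers = {"Authorization": f"Bearer {BEARER_TOKEN}"}

# ChatGPT sends a natural-language query; the plugin embeds it, searches
# Pinecone, and returns the most similar document chunks.
res = requests.post(
    f"{PLUGIN_URL}/query",
    headers=headers,
    json={"queries": [{"query": "What is LangChain?", "top_k": 5}]},
)
print(res.json())
```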

Deploying the API

In this section, we will learn how to deploy the Chat GPT Retrieval Plugin's API.

Steps to Deploy the API

  • The first step is to make the API accessible by deploying it.

Creating a Web App

In this section, the speaker explains how to create a web app using GitHub and DigitalOcean. They also discuss the environment variables required for the app.

Creating a Resource from Source Code

  • To create a web app, select "create" and then "resource from source code."
  • Authenticate with GitHub to allow DigitalOcean access to your repo.
  • Select the repo you want to use.

Environment Variables

  • Two specific environment variables are required: the bearer token and the OpenAI API key.
  • Other environment variables specify Pinecone as the retrieval component: the Pinecone API key, Pinecone environment, and Pinecone index.
  • The bearer token is used by our plugin to authenticate incoming requests.
  • The OpenAI API key is used to encode everything with an OpenAI embedding model.
  • To get these keys, go to platform.openai.com for the OpenAI API key and app.pinecone.io for the Pinecone API key (see the sketch below).
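For reference, the variables might look like the following once set in DigitalOcean. The variable names follow the chatgpt-retrieval-plugin README; all values shown are placeholders.

```
DATASTORE=pinecone
BEARER_TOKEN=<your-secret-token>
OPENAI_API_KEY=<your-openai-api-key>
PINECONE_API_KEY=<your-pinecone-api-key>
PINECONE_ENVIRONMENT=<your-pinecone-environment, e.g. us-east1-gcp>
PINECONE_INDEX=<your-index-name>
```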

Setting Up Environment Variables

In this section, we learn how to set up environment variables in DigitalOcean.

Bearer Token

  • Use a JSON Web Token (JWT) or any string you want in this field.
  • It can be something as simple as your name or another random string (see the optional sketch below).
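If you do want a proper JWT rather than a plain string, here is a minimal sketch using the PyJWT package; the payload and secret are arbitrary examples, since the plugin simply compares incoming bearer tokens against the BEARER_TOKEN value.

```python
# pip install pyjwt
import jwt

# Any payload and secret will do for this purpose — the token just needs to be
# a hard-to-guess string that you also set as BEARER_TOKEN on the server.
token = jwt.encode({"user": "your-name"}, "your-secret-key", algorithm="HS256")
print(token)  # use this value as the BEARER_TOKEN environment variable
```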

OpenAI API Key

  • Go to platform.openai.com and click on "view API keys" under your account in the top right corner.
  • Copy your API key.

Pinecone Environment and Index

  • Go to app.pinecone.io and copy your Pinecone API key, environment, and index name.
  • Add all of these keys into DigitalOcean's environment variable settings.

Understanding Pinecone Indexing

In this section, we learn about indexing with Pinecone.

Retrieval Component

  • Pinecone is used as our retrieval component.
  • We need a Pinecone API key, Pinecone environment, and Pinecone index.

Pinecone Index

  • The Pinecone index can be named anything you want.
  • The Pinecone environment must match the environment shown alongside your API key in the Pinecone console.

Finalizing App Creation

In this section, we finalize app creation by selecting a region and saving our settings.

Region Selection

  • Select a region depending on where you are located or what you want to do.

Saving Settings

  • Once all settings have been added, press save.
  • Check that all environment variables have been added correctly.

Deploying the API

In this section, we learn how to deploy the API and add data to it.

Adding Data to Pinecone

  • Head over to Colab and install prerequisite libraries.
  • Skip the "preparing data" stage and directly load the dataset from Hugging Face today assets.
  • Reformat the dataset into the format required by the API component. The format includes three fields: ID, text, and metadata.
  • Create a metadata dictionary for each item in the dataset that contains additional information such as URLs.
  • Include a bearer token in your request headers to authenticate with the API.
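A minimal sketch of that document format and the authorization header, assuming the id/text/metadata fields described above and the placeholder BEARER_TOKEN from the earlier sketch (the values shown are illustrative, not from the real dataset):

```python
# Each record becomes a dict with an id, the raw text, and a metadata dict
# carrying extra information such as the source URL.
documents = [
    {
        "id": "langchain-docs-0",
        "text": "LangChain is a Python library for building applications with LLMs...",
        "metadata": {"url": "https://python.langchain.com/en/latest/"},
    },
    # ...one entry per record in the dataset
]

# The bearer token authenticates every request to the plugin API.
headers = {"Authorization": f"Bearer {BEARER_TOKEN}"}
```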

Indexing Part

  • Add an Authorization header to your requests so the API knows you are authorized to use it.
  • Update the endpoint URL to point at your newly deployed app.
  • Send documents to the /upsert endpoint with requests.post() in batches of 100 documents. Batching means that an occasional failed request does not force you to cancel and re-send everything (see the sketch below).
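A sketch of that batching loop, reusing PLUGIN_URL, headers, and documents from the earlier sketches; the /upsert request body of the form {"documents": [...]} follows the retrieval plugin's upsert schema.

```python
import requests

batch_size = 100  # send 100 documents per request

for i in range(0, len(documents), batch_size):
    batch = documents[i : i + batch_size]
    res = requests.post(
        f"{PLUGIN_URL}/upsert",        # /upsert endpoint on the deployed app
        headers=headers,
        json={"documents": batch},
    )
    if res.status_code != 200:
        # Don't abort the whole upload because one batch failed.
        print(f"Batch starting at {i} returned status {res.status_code}")
```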

Retry Strategy for Internal Server Errors

In this section, the speaker discusses setting up a retry strategy for internal server errors in post requests.

Setting Up Retry Strategy

  • The speaker explains that they will wait a fifth of a second, then a second, and then a few seconds before retrying.
  • This retry strategy is set up for all internal server errors.
  • The code added to implement this retry strategy is explained (a sketch follows below).
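A sketch of such a retry strategy using requests' built-in support for urllib3's Retry, with exponential backoff on 5xx responses; the exact retry count and backoff factor here are assumptions rather than the speaker's exact values.

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

retries = Retry(
    total=5,                                 # retry up to five times
    backoff_factor=0.2,                      # exponential backoff between attempts
    status_forcelist=[500, 502, 503, 504],   # retry on internal server errors
    allowed_methods=["POST"],                # POST is not retried by default
)

session = requests.Session()
session.mount("https://", HTTPAdapter(max_retries=retries))
# session.post(...) now automatically retries transient server errors
```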

Sending Documents to API

In this section, the speaker discusses sending documents to an API using tqdm.

Sending Documents

  • tqdm.auto is imported to show progress while sending all documents to the API.
  • The documents are sent through the OpenAI embedding model (text-embedding-ada-002).
  • The embeddings are stored in the Pinecone vector database for later use by the ChatGPT plugin (see the sketch below).
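Putting the pieces together, a sketch of the upload loop with a progress bar, reusing the retry-enabled session, batching, documents, and headers from the earlier sketches:

```python
from tqdm.auto import tqdm

for i in tqdm(range(0, len(documents), batch_size)):
    batch = documents[i : i + batch_size]
    # The plugin embeds each batch with text-embedding-ada-002 and stores the
    # resulting vectors in the Pinecone index.
    session.post(f"{PLUGIN_URL}/upsert", headers=headers, json={"documents": batch})
```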

Querying Examples

In this section, the speaker provides examples of queries that will be implemented in ChatGPT.

Query Examples

  • Examples of queries include "What is the LLMChain in LangChain?", "How do I use Pinecone in LangChain?", and "What is the difference between knowledge graph memory and buffer memory for conversational memory?"
  • A 200 response is expected when running these queries.
  • The documents returned for each query, along with their source web pages, are printed out (see the sketch below).
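A sketch of running those queries against the /query endpoint and printing the returned documents; the response structure follows the retrieval plugin's query schema, though field names may differ slightly between versions.

```python
import requests

queries = [
    "What is the LLMChain in LangChain?",
    "How do I use Pinecone in LangChain?",
    "What is the difference between knowledge graph memory and buffer memory "
    "for conversational memory?",
]

res = requests.post(
    f"{PLUGIN_URL}/query",
    headers=headers,
    json={"queries": [{"query": q, "top_k": 3} for q in queries]},
)
print(res.status_code)  # expect 200

# Each query gets its own list of matching document chunks.
for query_result in res.json()["results"]:
    print(query_result["query"])
    for doc in query_result["results"]:
        print(" -", doc["metadata"].get("url"), "|", doc["text"][:80])
```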

Implementing Queries in ChatGPT

In this section, the speaker discusses integrating querying into ChatGPT without modification.

Integrating Queries

  • The speaker uninstalls an existing plugin and creates a new plugin in development mode.
  • They encounter errors because the manifest file still contains the default web address.
  • The ai-plugin.json file is updated with the correct web address.

Model and Human Description

In this section, the speaker discusses the description for the model and human in prompt engineering.

Description for Model

  • The description should specify when to use the tool.
  • Use the tool to get up-to-date information about the LangChain Python library.
  • Do not use this tool if the user did not ask about LangChain.

Description for Human

  • The description should be shorter and nicer than that of the model.
  • "Up-to-date information about the LangChain Python library" is sufficient (see the manifest sketch below).
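Both descriptions live in the ai-plugin.json manifest. A partial sketch of the relevant fields is shown here; the field names follow OpenAI's plugin manifest format, while the plugin name is illustrative.

```json
{
  "name_for_model": "langchain_docs",
  "name_for_human": "LangChain Docs",
  "description_for_model": "Use this tool to get up-to-date information about the LangChain Python library. Do not use this tool if the user did not ask about LangChain.",
  "description_for_human": "Up-to-date information about the LangChain Python library."
}
```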

Installing Plugin

This section covers installing an unverified plugin from a URL.

Install Plugin

  • Click on "Plug install" and select "Install an unverified plugin."
  • Paste in the URL provided by chat gbt.
  • Enter HTTP access token (Bearer token).
  • Install plugin.

Fixing OpenAPI Spec Error

This section covers fixing an error with the OpenAPI spec.

Fixing OpenAPI Spec Error

  • Copy and paste the new openapi.yaml file from the link provided in the video description.
  • Change the API name to "Retrieval Plugin."
  • Update the URL to match the one used before.
  • ChatGPT reads the openapi.yaml file to determine how it creates queries.

Using Plugins in ChatGPT

In this section, the speaker discusses how to use plugins in ChatGPT and demonstrates querying with natural language plus an optional metadata filter.

Adding Filters

  • To add filters, include an optional metadata filter alongside the natural-language query.
  • The speaker describes how to use the filters (see the sketch below).
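A sketch of a query that combines the natural-language question with an optional metadata filter, reusing PLUGIN_URL and headers from the earlier sketches; the filter fields the plugin accepts depend on its query schema, so the field used here is illustrative.

```python
import requests

payload = {
    "queries": [
        {
            "query": "How do I use Pinecone in LangChain?",
            "filter": {"source": "file"},   # illustrative metadata filter
            "top_k": 3,
        }
    ]
}
res = requests.post(f"{PLUGIN_URL}/query", headers=headers, json=payload)
print(res.json())
```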

Updating Open API YAML

  • Update the openapi.yaml file after adding filters.

Checking for Updates

  • Wait for the deployment process to complete before checking for updates.
  • Refresh the page once it is deployed.

Using Pinecone in LangChain

In this section, the speaker explains how to use Pinecone in LangChain and provides instructions on installing the Pinecone Python SDK.

Installing Pinecone Python SDK

  • To use Pinecone in LangChain, install the Pinecone Python SDK by running pip install pinecone.
  • Then import it with import pinecone (a fuller sketch follows below).
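As a rough sketch of what that looks like in code, using the pinecone-client and LangChain interfaces as they were around the time of the video (both libraries have since changed their APIs, and the index name and keys are placeholders):

```python
import pinecone
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Pinecone

# Initialize the Pinecone client (older pinecone-client style).
pinecone.init(api_key="<pinecone-api-key>", environment="<pinecone-environment>")

# Wrap an existing index as a LangChain vector store.
embeddings = OpenAIEmbeddings(openai_api_key="<openai-api-key>")
vectorstore = Pinecone.from_existing_index("langchain-docs", embeddings)

docs = vectorstore.similarity_search("What is the LLMChain in LangChain?", k=3)
```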

Accessing Python Documentation

  • The response links to Python documentation that provides more information on using Pinecone in LangChain.

Understanding Conversational Memory in LangChain

In this section, the speaker explains the conversational memory types - knowledge graph memory and buffer memory - used by LangChain.

Types of Memory Used by LangChain

  • Two types of memory are used by LangChain: conversational knowledge graph memory and conversation buffer memory.
  • Conversational knowledge graph memory stores structured, organized information extracted from conversations, while conversation buffer memory maintains a sequence of chat messages as short-term memory (sketched below).
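A minimal sketch of the two memory types as described in the response, using class names from the LangChain version current at the time of the video; the exact API may have changed since, and the key is a placeholder.

```python
from langchain.llms import OpenAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory, ConversationKGMemory

llm = OpenAI(openai_api_key="<openai-api-key>", temperature=0)

# Buffer memory: keeps the raw sequence of chat messages as short-term memory.
buffer_chain = ConversationChain(llm=llm, memory=ConversationBufferMemory())

# Knowledge graph memory: extracts structured facts from the conversation.
kg_chain = ConversationChain(llm=llm, memory=ConversationKGMemory(llm=llm))

buffer_chain.predict(input="Hi, I'm building a ChatGPT plugin with Pinecone.")
```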

Providing Context-Aware Responses

  • Both types of memory are used to provide context-aware responses based on previous interactions with users.

Video description

OpenAI's ChatGPT now has plugins! Creatively named "ChatGPT Plugins", this new feature means plugins can be built by anyone, and in this video, we will see how to build one using the chatgpt-retrieval-plugin template from OpenAI.

šŸ”— openai/chatgpt-retrieval-plugin Repo: https://github.com/openai/chatgpt-retrieval-plugin
šŸ”— OpenAI platform: https://platform.openai.com/
šŸ”— Pinecone console: https://app.pinecone.io/
šŸ”— Find the code here: https://github.com/pinecone-io/examples/blob/master/learn/generation/openai/chatgpt/plugins/langchain-docs-plugin.ipynb
ā–¶ļø Langchain data prep video: https://youtu.be/eqOfr4AGLk8
šŸ”— openapi.yaml I used: https://gist.github.com/jamescalam/709f83e4515975df832bf06c8a33ff26

šŸ‘‹šŸ¼ NLP + LLM Consulting: https://aurelio.ai
šŸŽ™ļø Support me on Patreon: https://patreon.com/JamesBriggs
šŸ‘¾ Discord: https://discord.gg/c5QtDB9RAP

šŸ‘‹šŸ¼ Socials:
Twitter: https://twitter.com/jamescalam
LinkedIn: https://www.linkedin.com/in/jamescalam/
Instagram: https://www.instagram.com/jamescalam/

00:00 ChatGPT Plugins
00:53 Plugins or LLM agents?
02:46 First look at ChatGPT plugins
05:02 Using the chatgpt-retrieval-plugin
06:45 How the plugin works
12:06 Deploying the plugin with digital ocean
15:15 ChatGPT retrieval plugin environment variables
18:42 Adding langchain docs to chatgpt plugin
26:02 Querying the chatgpt retrieval plugin
27:46 Adding the plugin to ChatGPT
28:52 Setup for ChatGPT plugin manifest file
32:47 Install unverified plugins on ChatGPT
33:41 Handling OpenAPI spec error
37:04 Asking ChatGPT plugin questions
39:45 Final thoughts on ChatGPT plugins

#chatgpt #artificialintelligence #nlp