Mistral Dolphin 2.2 - Small, Uncensored, INCREDIBLE!
Introduction and Dolphin 2.2 Release
The transcript begins with an introduction to Eric Hartford's work on the release of Dolphin 2.2, a fine-tuned version of Mistral 7B. The speaker mentions testing previous versions and expresses excitement about trying out Dolphin 2.2.
- Eric Hartford has released Dolphin 2.2, a fine-tuned version of Mistral 7B.
- Previous versions of Dolphin have performed well in tests.
- The speaker plans to put Dolphin 2.2 through its paces.
Opportunity for Fine-Tuning Models
The speaker discusses an opportunity to fine-tune models specifically for the AutoGen and MemGPT projects, but emphasizes that data is needed to train such models effectively.
- There is an opportunity to fine-tune models for the AutoGen and MemGPT projects.
- Data is needed to train these fine-tuned models.
- The speaker encourages contributing API-call data gathered through the AutoGen or MemGPT projects.
Features of Dolphin 2.2
The features of Dolphin 2.2 are highlighted, including conversation and empathy capabilities derived from a curated Samantha DNA dataset.
- Dolphin 2.2 is based on Mistral AI's Mistral 7B, released under an Apache 2.0 license.
- New features in Dolphin 2.2 include conversation and empathy capabilities.
- A curated Samantha DNA dataset enhances the model's ability to hold back-and-forth conversations and handle roleplay scenarios.
Model Compliance and Prompt Format
Model compliance is discussed, along with the prompt format used for text generation.
- Dataset filtering removes alignment and bias, making the model more compliant.
- Implementing an alignment layer before exposing the model as a service is advised.
- The prompt format for text generation is introduced.
Setting Up and Loading Dolphin 2.2 Model
The process of setting up and loading the Dolphin 2.2 model using the Transformers library is explained.
- The model is downloaded and loaded via the Transformers library, following the instructions on the model card.
- Parameters such as max_new_tokens are set accordingly.
- The prompt template for generating text is shown.
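The summary mentions the prompt template without spelling it out. Dolphin 2.2 uses the ChatML format; a minimal sketch of assembling such a prompt in Python (the system message text here is illustrative, not quoted from the video):

```python
def build_chatml_prompt(system: str, user: str) -> str:
    """Assemble a ChatML-style prompt of the kind Dolphin 2.2 expects."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    "You are Dolphin, a helpful AI assistant.",  # illustrative system message
    "Write a Python script that outputs numbers 1 to 100.",
)
print(prompt)
```

The resulting string would then be passed to the model's generate call, with the model completing the text after the final assistant marker.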
Testing Python Script Generation
The speaker tests the Dolphin 2.2 model by generating a Python script to output numbers from 1 to 100.
- A simple Python script to output numbers from 1 to 100 is generated successfully.
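The generated script itself is not shown in the summary; a task this simple reduces to a one-line loop, for example:

```python
# Print the numbers 1 through 100, one per line.
for i in range(1, 101):
    print(i)
```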
Attempting to Write a Snake Game in Python
The speaker attempts to generate code for a snake game in Python but encounters issues with missing functions and undefined variables.
- While attempting the snake game, issues arise with a missing clear-console function and an undefined variable (windows).
- Attempts are made to fix these issues, but the script never runs successfully.
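The broken snake-game code is not shown in the summary, but a missing clear-console helper is a common failure in generated terminal games. A hedged sketch of a cross-platform version (the function names are assumptions, not taken from the video):

```python
import os

def clear_command(platform: str = os.name) -> str:
    """Return the shell command that clears the screen on the given platform."""
    return "cls" if platform == "nt" else "clear"

def clear_console() -> None:
    """Clear the terminal: 'cls' on Windows, 'clear' on POSIX systems."""
    os.system(clear_command())
```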
Writing a Poem about AI
The speaker asks the model for a poem about AI that is exactly 50 words long; the result comes very close to that target.
- The generated poem is impressively close to the requested 50-word length.
Writing an Email Resignation
The speaker tests creative writing capabilities by generating an email resignation message, a task that every model tested so far has handled.
- An email resignation message is generated successfully, demonstrating the model's ability on creative writing tasks.
Providing Facts about US Presidents
The speaker tests the model's knowledge by asking about the president of the United States in 1996, which is answered correctly.
- The model correctly identifies Bill Clinton as the president of the United States in 1996.
Testing Uncensored Responses
The speaker tests whether Dolphin 2.2 provides uncensored responses and confirms that it does.
- Dolphin 2.2 is confirmed to provide uncensored responses, similar to previous versions of Dolphin.
Logic and Reasoning Test - Shirt Drying Problem
The speaker presents a logic and reasoning problem related to shirt drying and expects a solution from the model.
- A logic and reasoning problem related to shirt drying is presented.
- No specific response or solution from the model is mentioned in this part of the transcript.
Proportional Drying Time
This section discusses the concept of proportionality in drying time calculations.
Calculating Drying Time for Different Quantities of Shirts
- Laying 5 shirts out in the sun, they take 4 hours to dry.
- The model reasons about the drying time for 20 shirts using the ratio between 5 and 20 shirts.
- The intended answer, that the shirts dry simultaneously and 20 shirts therefore still take 4 hours, is not stated in this part of the transcript.
Incorrect Calculation of Drying Time
This section highlights an incorrect calculation of drying time.
Incorrect Calculation Assumption
- Multiplying a per-shirt drying time of 5 hours by the total of 20 shirts yields an incorrect answer of 100 hours.
- This calculation implicitly assumes that only one shirt can dry at a time; in reality, shirts laid out together dry in parallel.
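The two lines of reasoning can be contrasted directly. Shirts dry in parallel, so the elapsed time does not scale with the count (numbers taken from the transcript):

```python
hours_for_batch = 4  # 5 shirts laid out together take 4 hours

# Correct reasoning: drying happens in parallel, so any number of
# shirts laid out at once still takes the same elapsed time.
parallel_time = hours_for_batch

# Flawed reasoning from the transcript: multiply a per-shirt time
# (5 hours) by the shirt count, as if shirts dried one at a time.
serial_time = 5 * 20

print(parallel_time)  # 4
print(serial_time)    # 100
```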
Comparing Speeds: Jane, Joe, and Sam
This section explores a logic and reasoning question about comparing speeds.
Analyzing Relative Speeds
- Jane is faster than Joe.
- Joe is faster than Sam.
- By transitivity, Jane is faster than Sam; in particular, Sam is not faster than Jane.
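The conclusion follows from the transitivity of "faster than"; a minimal sketch with made-up speed values that satisfy the two given statements:

```python
# Hypothetical speeds consistent with the statements in the transcript.
speeds = {"Jane": 12, "Joe": 10, "Sam": 8}

assert speeds["Jane"] > speeds["Joe"]  # Jane is faster than Joe
assert speeds["Joe"] > speeds["Sam"]   # Joe is faster than Sam

# Transitivity: Jane is faster than Sam, so Sam is not faster than Jane.
sam_faster_than_jane = speeds["Sam"] > speeds["Jane"]
print(sam_faster_than_jane)  # False
```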
Simple Math Problems
This section presents simple math problems to solve.
Addition and Multiplication Problems
- The sum of four and four equals eight (4 + 4 = 8).
- Four multiplied by two equals eight (4 * 2 = 8).
- Seventeen plus three equals twenty (17 + 3 = 20).
Creating a Healthy Meal Plan
This section involves creating a healthy meal plan.
Designing a Meal Plan
- A full day's meal plan is provided, covering breakfast, a mid-morning snack, lunch, a second snack, and dinner.
- The importance of staying hydrated is mentioned.
Incorrect Answer for Word Count
This section highlights an incorrect response regarding word count.
Word Count Calculation
- The model claims that its response to the prompt is 105 words long, but the actual word count differs.
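Word counts are easy to check mechanically; a simple whitespace-split count is a reasonable proxy for what the model should have reported:

```python
def word_count(text: str) -> int:
    """Count whitespace-separated words in a string."""
    return len(text.split())

sample = "The quick brown fox jumps over the lazy dog"
print(word_count(sample))  # 9
```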
Killer Problem
This section presents a logic problem about killers in a room.
Analyzing the Situation
- Initially, there are three killers in the room.
- Someone enters the room and kills one of them.
- According to the information given, nobody leaves the room after the killing takes place.
- Therefore, two of the original killers remain, along with the person who entered; having just killed someone, that person is also a killer, so there are three killers in the room.
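The bookkeeping behind the answer can be made explicit with a short sketch:

```python
killers = 3   # killers in the room initially
killers -= 1  # one is killed, so he no longer counts as a living killer
killers += 1  # the newcomer, having just killed, is now a killer too
print(killers)  # 3 living killers remain in the room
```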
Summarization Test
This section evaluates summarization skills.
Bullet Point Summary
- The model successfully provides a bullet point summary using dashes for bullet points and includes all main talking points from a text about nuclear fusion.
Creating JSON from Natural Language
This section involves creating JSON from natural language descriptions.
JSON Creation
- The model successfully creates JSON based on given attributes (three people: Mark and Joe as males, Sam as a woman; ages: 30 for Sam, 19 for both men).
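The target structure described in the transcript can be sketched with Python's json module; the field names here are assumptions, since the summary does not show the model's exact output:

```python
import json

# Attributes from the transcript: Mark and Joe are male (age 19 each),
# Sam is female (age 30). The key names below are assumed.
people = [
    {"name": "Mark", "gender": "male", "age": 19},
    {"name": "Joe", "gender": "male", "age": 19},
    {"name": "Sam", "gender": "female", "age": 30},
]

print(json.dumps({"people": people}, indent=2))
```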
Fighting Duck-Sized Horses or Horse-Sized Duck?
This section explores reasoning behind choosing between fighting duck-sized horses or a horse-sized duck.
Comparing Scenarios
- Scenario 1: Fighting 100 duck-sized horses. There would be many individual attacks, but each horse's small size and limited strength mean no single attack poses an insurmountable challenge.
- Scenario 2: Fighting one horse-sized duck. In this case, you face one larger and stronger opponent that can inflict more damage.
- The decision depends on skills, experience, and abilities.
Laws of Physics: Marble in a Cup
This section discusses the placement of a marble in a cup according to the laws of physics.
Analyzing the Situation
- Initially, the marble is placed inside a normal cup.
- When the cup is turned upside down on a table, the cup no longer holds the marble; the marble rests on the table, covered by the cup.
- The model claims that when the upside-down cup is moved into a microwave, the marble is still contained within it.
The model's response is incorrect: lifting the cup leaves the marble behind on the table.