Clawdbot without paying for an API, see how NOW! (Moltbot)


How to Use Clawdbot (Moltbot) for Free

Introduction to Clawdbot

  • The video demonstrates how to use Clawdbot completely free of charge, with no need for a VPS or paid API tokens.
  • The project previously known as Clawdbot has been renamed to Moltbot at the request of Anthropic, the creator of the LLM Claude.
  • The name change was necessary because the similarity between "Clawdbot" and "Claude" was causing brand confusion.

Project Updates and Context

  • Matheus Battisti introduces himself and shares his decade of programming experience, emphasizing his goal of providing useful content.
  • He encourages viewers to engage with the channel by liking and subscribing, highlighting the importance of community feedback.
  • A campaign named "Faça um Matheus Feliz" ("Make a Matheus Happy") is mentioned, which has received positive engagement from viewers.

Installation Overview

  • The video focuses on installing Moltbot locally for free, and is likely one of the last to reference the project's former name.
  • Anthropic's request led to an amicable renaming process; viewers are reassured that installation remains straightforward despite the change.

Software Requirements

  • A previous video covering Clawdbot (now Moltbot), including installation and use cases, is recommended for better context.
  • To run Moltbot locally without VPS hosting costs, users can use an old computer or their current PC.

Key Components for Setup

  • Users will need Ollama, software that downloads and runs open-source LLM models on their own computers.
  • By integrating Ollama with Moltbot, users can manage and run LLM models entirely on their local machines.
  • Although it sounds complex at first, the provided steps make setup manageable even for beginners.

Additional Resources

  • A free guide on vibe coding is offered in the description; it includes over 50 pages aimed at enhancing user skills in prompt crafting and project development.

Installation and Setup of Clawdbot (Moltbot)

Initial Setup Instructions

  • The speaker uses Windows and installs via the command line (cmd), noting that the commands should be adjusted for other operating systems.
  • Users should run the moltbot command instead of the old clawdbot command during installation, as shown in the demonstration.
  • An acceptance prompt appears during installation; caution is advised against installing on a personal PC, since the bot is granted broad access permissions.
  • Installing on an older PC or a VPS is recommended for safety. The speaker will also provide instructions for complete removal after testing.
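A minimal sketch of the steps above, assuming the bot is distributed as a global npm package (the video's later cleanup step uses `npm uninstall -g`, which is what this assumes; exact package and subcommand names may differ between releases):

```shell
# Install the bot globally via npm (package name assumed; depending on
# the release it may still be published as "clawdbot")
npm install -g moltbot

# Run the interactive onboarding; use the new `moltbot` command rather
# than the old `clawdbot` one
moltbot onboard
```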

Configuration Steps

  • Users must select a model during setup; options are available but may require internet connection. Default settings are suggested for optimal performance.
  • Integration channels for messaging with the bot can be skipped initially, as detailed instructions are provided in a referenced video tutorial.
  • Memory-saving configurations can also be skipped at this stage, focusing instead on completing the installation process.

Finalizing Installation

  • After completing onboarding, users install Ollama from its website (ollama.com). The installer is straightforward: just follow the "next" prompts.
  • Once installed, users should verify it works by running ollama -v in their terminal.
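In a terminal, the verification step looks like this (assuming the installer from ollama.com completed successfully):

```shell
# Print the installed Ollama version; an error here means the install
# failed or the PATH was not updated
ollama -v
```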

Integrating Moltbot with a Local LLM

Documentation and Integration Process

  • The integration begins with the documentation on Ollama's site, which outlines the steps for connecting Moltbot to a local LLM.
  • The initial setup steps it lists can be skipped, since they were already completed during installation.

Understanding Model Requirements

  • Large Language Models (LLMs) all behave like chatbots, but they vary significantly in parameter count, which affects response quality.
  • More parameters mean heavier models: large models need a capable GPU (and enough VRAM) rather than relying solely on system RAM for acceptable speed.
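One way to observe this trade-off directly on your own machine (assuming Ollama is installed and a model has been loaded) is Ollama's `ps` command:

```shell
# Show models currently loaded in memory; the PROCESSOR column reports
# how much of each model runs on the GPU versus spilling over to CPU/RAM
ollama ps
```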

Recommendations for Model Selection

  • Users are encouraged to ask an AI assistant (such as Claude or ChatGPT) about their PC's specifications to find models that balance quality and hardware capability.
  • The documentation lists recommended models; some models will not work without specific tool-calling support or integrations.

How to Install and Use Moltbot with the gpt-oss Model

Installing the gpt-oss Model

  • The speaker uses a specific model, gpt-oss, an open-weight model that he describes as derived from one of the best models available, to run with Moltbot.
  • To download the model, the command ollama pull gpt-oss:20b is used. The speaker mentions that his download completed quickly.
  • The command ollama list displays all installed models; the speaker notes also having a smaller Qwen model, but says it does not perform as well as gpt-oss.
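The two commands from this step, spelled the way the Ollama CLI expects them (the `gpt-oss:20b` tag matches the 20B variant the speaker selects):

```shell
# Download the 20B-parameter gpt-oss model (several GB; requires internet)
ollama pull gpt-oss:20b

# List all locally installed models, with their tags and sizes
ollama list
```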

Configuring and Running Moltbot

  • After confirming the installation, users select the model in their terminal. The speaker opts for gpt-oss:20b due to its balance of performance and speed.
  • To open Moltbot, users copy a provided URL into their browser. This starts a conversation with Moltbot using the selected local model, without incurring any API costs.
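Independently of the bot, you can confirm the local model answers on its own: Ollama exposes an HTTP API on localhost port 11434 by default, so a plain curl call exercises the model with no paid API involved. (The bot's integration itself is configured through its own setup flow, not through this call.)

```shell
# Ask the local gpt-oss model a question through Ollama's default
# HTTP endpoint; "stream": false returns one complete JSON response
curl http://localhost:11434/api/generate -d '{
  "model": "gpt-oss:20b",
  "prompt": "Answer in one short sentence: what model are you?",
  "stream": false
}'
```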

Understanding Model Performance

  • The bot identifies which LLM (Large Language Model) it is running on and responds accordingly, indicating that it has full access to the machine's resources to answer accurately.

Additional Resources and Training Opportunities

  • The speaker introduces an automation training program called "Agentes de IA," which teaches automation from beginner to advanced level, including automations related to Moltbot.
  • Viewers are encouraged to reach out via WhatsApp with questions or for further information about the training program.

VPS Recommendations for Optimal Performance

  • Users can run Moltbot on a VPS (Virtual Private Server) exactly as demonstrated on a local machine.
  • The speaker recommends Hostinger for VPS hosting due to its cost-effectiveness and the performance needed to run top-tier LLMs privately.

Choosing Your VPS Plan

  • For adequate processing power when running Moltbot, the KVM 2 plan is recommended, as it offers better performance than the entry-level options.
  • Users should select Linux as the operating system during setup, since it is the ideal environment for installing Moltbot.

Cost-Saving Tips

  • A promotion offers discounts when committing to longer-term plans (at least 12 months), making it financially advantageous for users looking for dedicated servers.

Uninstalling Instructions

  • Finally, instructions will be provided on how to uninstall if users decide not to keep the software running locally after testing.

How to Uninstall Clawdbot (Moltbot) Properly

Steps for Uninstallation

  • The speaker stresses the importance of uninstalling software like Moltbot when it is no longer used, since it can access the entire computer and may pose risks if left unmaintained.
  • The uninstall command is clawdbot uninstall (or moltbot uninstall after the rename). Options can be selected with the Tab key and confirmed with Enter.
  • After the initial command, some remnants of the bot remain. To remove it completely, users must also run npm uninstall -g clawdbot, deleting the globally installed package from their system.
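Sketched as terminal commands (package and command names assumed from the video; depending on the release the npm package may be named `clawdbot` or `moltbot`):

```shell
# Interactive uninstall: select options with Tab, confirm with Enter
moltbot uninstall

# Remove the globally installed npm package itself to clear remnants
npm uninstall -g clawdbot
```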

Configuration Files

  • The speaker notes that a hidden configuration folder named ".clawdbot" is created in the user's home directory, containing JSON files. These files preserve settings in case the bot is reinstalled.
  • Users who want a complete removal should delete this configuration folder as well; it contains only configuration data, no executable programs.
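The cleanup can be done from the terminal (the `.clawdbot` folder name is assumed from the video; list its contents first to confirm it only holds configuration JSON):

```shell
# Inspect the leftover configuration folder before deleting it
ls -la ~/.clawdbot

# Remove it entirely for a clean uninstall (settings only, no executables)
rm -rf ~/.clawdbot
```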

Final Thoughts on Installation

  • The installation process is highlighted as being successful without reliance on APIs or servers, making it a free solution for personal computers or idle machines.
  • Viewers are reminded to choose models compatible with their hardware specifications (size and performance), stressing that larger models require better GPUs and RAM for effective execution.

Engagement Encouragement

  • The speaker invites viewers to like the video, subscribe for more content about Clawdbot/Moltbot, and leave comments if they run into issues during installation.
Video description

In this video I show how to run Moltbot (formerly Clawdbot) 100% free using Ollama and local AI models.

🔥 JUST LAUNCHED: Check out the Formação Agentes de IA: https://app.horadecodar.com.br/lp/formacao-agentes-de-ia-n8n?utm_source=yt_clawdbot_free
✅ Hosting for n8n that I recommend: https://hostinger.com.br/matheusbattisti (use the coupon HORADECODAR for an extra 10% off)
📘 Prompt Engineering Guide: https://app.horadecodar.com.br/ebookpages/guia-engenharia-de-prompt
📕 Vibe Coding Guide: https://app.horadecodar.com.br/ebookpages/guia-completo-vibe-coding

Moltbot is the open-source personal AI assistant that recently went viral. It runs on your machine, works through WhatsApp and Telegram, has persistent memory, and executes tasks for you. The problem is that most people configure it with paid APIs such as Claude or GPT, which can get expensive depending on usage.

The solution is to run it with local models through Ollama. You install Ollama on your machine, download a language model such as Llama, Mistral, or Qwen, and configure Moltbot to use that local model. Zero API costs, zero dependence on external services.

In the video I go through the entire process. First I install Moltbot from scratch following the standard setup. Then I install Ollama, the runtime that makes running local LLMs simple. I download a model suited to the assistant and configure the integration between the two tools.

The result is a complete AI assistant running 100% locally on your machine. You chat through WhatsApp or Telegram, it replies, remembers conversations, executes tasks, and you pay nothing for APIs. Depending on the model you choose, the quality of the responses is surprisingly good.

If you wanted to try Moltbot but didn't want to spend on APIs, this video solves your problem.

Note: The project was recently renamed. Clawdbot is now called Moltbot for trademark reasons, but it is exactly the same project, same team, same features.
Note 2: The project changed names once again; it is now called OpenClaw!

Join our Discord server and follow me on social media:
🟣 Discord Hora de Codar: https://discord.gg/Veq4mvsWwk
🔴 Instagram: https://www.instagram.com/horadecodar/
🔷 Telegram: https://t.me/horadecodar

TIMESTAMPS
00:00 Introduction to free Moltbot (formerly Clawdbot)
00:40 Clawdbot became Moltbot
02:31 How to run Clawdbot/Moltbot for free
06:10 Installing Clawdbot on our machine
09:36 Installing Ollama
12:45 Installing a local LLM to run Moltbot without paying for an API (formerly Clawdbot)
13:42 How to run Moltbot with a local LLM
14:25 Testing Clawdbot
19:15 How to uninstall Clawdbot (Moltbot)
21:20 Conclusion and the future of Moltbot (formerly Clawdbot)