
How to Use Bolt.new for FREE with Local LLMs (And NO Rate Limits)



Over the last month, together as a community we started oTToDev, a fork of Bolt.new that aims to add a bunch of much-needed functionality, like being able to use any LLM you want, including local ones with Ollama.

In this video I give some super important tips and tricks for using local LLMs with oTToDev, some of which can really apply to using any LLM with any AI coding assistant! I also cover my favorite open source LLM to use for coding my AI apps.
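If you want to follow along and spin up oTToDev yourself, here's a minimal sketch of the local setup (the authoritative steps are in the repo's README linked below, and I'm assuming the standard Bolt.new toolchain of Node.js + pnpm here):

    # Clone the repo (grab the URL from the GitHub link below; the folder is bolt.new-any-llm)
    git clone <oTToDev-repo-URL>
    cd bolt.new-any-llm

    # Install dependencies and start the dev server
    pnpm install
    pnpm run dev

From there you pick your provider and model right in the UI, including any model you've pulled with Ollama.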

I'll be going into more detail on everything I cover in this video during my YouTube livestream this Sunday, November 10th, at 7:00 PM CDT! See you there! Here is the link so you can enable notifications for when I go live:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

00:00 – Intro
02:12 – Getting Started with oTToDev (Bolt.new Fork)
03:01 – Ollama’s Big Problem
04:26 – Fixing Ollama for oTToDev
06:55 – FlexiSpot Segment (My New Favorite Chair)
08:02 – My Favorite Open Source Coding LLM
10:39 – Using oTToDev to Build an App with a Small Local LLM
15:26 – Outro
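(Quick reference for the Ollama fix at 03:01–04:26: Ollama's default context window is only 2048 tokens, which is way too small for what oTToDev sends it, so you build a custom model with an extended context. A minimal sketch, assuming qwen2.5-coder as the base model; the tag, name, and context size here are illustrative, so match them to what you actually run:)

    # Modelfile - same base model, bigger context window
    FROM qwen2.5-coder:7b
    PARAMETER num_ctx 32768

Then build the new model and select it in oTToDev:

    ollama create qwen2.5-coder-ottodev:7b -f Modelfile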

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

As I mentioned in the video, I collaborated with FlexiSpot for a small segment because having an ergonomically correct chair is super important to me; I've dealt with sciatica in the past (seriously not fun). I wanted to share this with all of you, especially if you're a developer who sits a lot like me!

Use my exclusive code ‘C750’. If you purchase the C7/C7Max now, you’ll get a $50 discount. It’s the best time to buy the C7 (or C7Max)!

US:
CA:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

I’m building something BIG behind the scenes:

Here is the link to oTToDev on GitHub (repo will be renamed soon), ready for you to run locally and use LLMs from Ollama, Anthropic, OpenAI, OpenRouter, Gemini, Groq, and more! Check out the README to see future improvements planned!
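One common gotcha when wiring up providers: the API keys and base URLs come from environment variables. As a minimal sketch, assuming the variable names from the repo's .env.example (check that file for the current names), copy it to .env.local and fill in only the providers you use:

    # .env.local - set only the providers you actually use
    GROQ_API_KEY=...
    ANTHROPIC_API_KEY=...
    OPENAI_API_KEY=...

    # Local models: point at your Ollama server instead (no API key needed)
    OLLAMA_API_BASE_URL=http://localhost:11434

The "..." are your own keys. If you run Ollama on another machine or a server, set OLLAMA_API_BASE_URL to that host's full URL, including the http:// scheme and the port.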

Link to the prompts I used in the video to create the app:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Artificial Intelligence is no doubt the future of not just software development but the whole world. And I’m on a mission to master it – focusing first on mastering AI Agents.

Join me as I push the limits of what is possible with AI. I'll be uploading videos at least twice a week – Sundays and Wednesdays at 7:00 PM CDT – covering everything AI with a focus on insane, practical educational value. I'll also sometimes post on Fridays at 7:00 PM CDT, specifically for platform showcases: sometimes sponsored, always creative in approach!


30 Comments

  1. Hey Cole, your GitHub and the video show different instructions for getting qwen2.5-coder set up. In the video it looks like you made a folder and put a file named qwen2.5-coder in there, but the GitHub says to make a text file called Modelfile. I'm pretty lost and can't proceed. What should I do?

  2. It would be very nice if you could show how to use it installed on a server instead of locally. I think this isn't working at the moment and a lot of people have problems with it because of cross-origin (CORS) errors. Other than that, thanks for your videos and your work on it 🙂

  3. I set this up with Ollama locally and added the Ollama URL, but it kept saying it needs an API key. Groq worked OK, but I was hoping to use a local LLM. I currently use Bolt and would like to develop Flutter apps. Any ideas how to fix the API key error?

  4. @ColeMedin Great video and cool project! I added Haiku 3.5 to your list of LLMs, but no previews are being generated. Any idea why? Am I supposed to install any pip libraries ahead of time?

    Also, other than React/Tailwind CSS, do you know of any other tech stacks that bolt.new/oTToDev can generate previews for?

    Keep up the good work!

  5. I run it locally with llama3.1:8b and it works well. I don't know why, but the preview function doesn't seem to work reliably – first it worked, and then it's just a blank white screen.

    My wishlist:
    1. Editing previous messages.
    2. Attaching files (e.g. images) for context – it would be nice to have the new Llama vision model extract all the info, then switch to a code model to implement the analysis.

    I've only tested it for an hour.
    Thanks for this!

  6. I followed the instructions in the video to expand the context, but I'm getting this error:
    C:\Users\marco\bolt.new-any-llm\modelfiles>ollama create -f Qwen2.5Coder qwen2.5-coder-ottodev:3b
    transferring model data
    pulling manifest
    Error: pull model manifest: file does not exist
    Note that inside the modelfiles folder I don't have all the files shown in your video, only the one I created as you described – the video isn't very clear on this.

  7. First of all, thanks for your work and the community. I use a proxy server and tried to configure it, but I get the error: code: 'ERR_INVALID_URL', input: '172.16.1.254:3128'. Please create a way for people like me who use proxies to install without any issues.
