How to Use Bolt.new for FREE with Local LLMs (And NO Rate Limits)
Over the last month, we as a community started oTToDev, a fork of Bolt.new that aims to add a bunch of much-needed functionality, like the ability to use any LLM you want, including local ones with Ollama.
In this video I share some super important tips and tricks for using local LLMs with oTToDev, many of which apply to using any LLM with any AI coding assistant! I also cover my favorite open source LLM for coding my AI apps.
I’ll go into more detail on everything covered in this video in my YouTube livestream this Sunday, November 10th, at 7:00 PM CDT! See you there! Here is the link so you can enable notifications for when I go live:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
00:00 – Intro
02:12 – Getting Started with oTToDev (Bolt.new Fork)
03:01 – Ollama’s Big Problem
04:26 – Fixing Ollama for oTToDev
06:55 – FlexiSpot Segment (My New Favorite Chair)
08:02 – My Favorite Open Source Coding LLM
10:39 – Using oTToDev to Build an App with a Small Local LLM
15:26 – Outro
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
As I mentioned in the video, I collabed with FlexiSpot for a small segment of this video because having an ergonomically correct chair is super important to me; I’ve dealt with sciatica in the past (seriously not fun). I wanted to share this with all of you, especially if you’re a developer who sits a lot like me!
Use my exclusive code ‘C750’. If you purchase the C7/C7Max now, you’ll get a $50 discount. It’s the best time to buy the C7 (or C7Max)!
US:
CA:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
I’m building something BIG behind the scenes:
https://ottomator.ai/
Here is the link to oTToDev on GitHub (repo will be renamed soon), ready for you to run locally and use LLMs from Ollama, Anthropic, OpenAI, OpenRouter, Gemini, Groq, and more! Check out the README to see future improvements planned!
Link to the prompts I used in the video to create the app:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Artificial Intelligence is no doubt the future of not just software development but the whole world. And I’m on a mission to master it – focusing first on mastering AI Agents.
Join me as I push the limits of what is possible with AI. I’ll be uploading videos at least two times a week – Sundays and Wednesdays at 7:00 PM CDT! Sundays and Wednesdays are for everything AI, focusing on providing insane and practical educational value. I will also post sometimes on Fridays at 7:00 PM CDT – specifically for platform showcases – sometimes sponsored, always creative in approach!
Check out the FlexiSpot C7 ergonomic chair at https://bit.ly/4fBz8R3 and use the code "24BFC7" to get $50 off the C7 ergonomic chair! Like I said in the video, investing in a good chair is crucial for any developer, and FlexiSpot is certainly a good option.
insane.
Congrats man, but running bolt.new locally has to be painfully slow.
Hi Cole, I went through all the settings, but Ollama isn't showing up in the chat panel for me to use!
What did you put in this field:
# You only need this environment variable set if you want to use Ollama models
# EXAMPLE http://localhost:11434
OLLAMA_API_BASE_URL=
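For a default local Ollama install, the value is just the example URL from the comment above (assuming Ollama is running on its standard port):
OLLAMA_API_BASE_URL=http://localhost:11434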
Why do I need login authorization with Cloudflare when it starts?
Can you add images? I have this installed but can't find how to add them to the prompt.
I am struggling to get this up and running on my Mac. Could anyone help?
I'm a fan, I'm subbed, but I'm so tired of the hyper-simplified 'coding project' examples. They're meaningless to me at this point.
Hey Cole, your GitHub and the video show different instructions for getting qwen2.5-coder set up. In your video it looks like you made a folder and put a file named qwen2.5-coder in there, but on GitHub it says to make a text file called Modelfile. I'm pretty lost and can't proceed. What should I do?
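For anyone else stuck here: the file name itself shouldn't matter, only its contents and what you pass to ollama create. A minimal sketch of such a Modelfile, where the base tag and context size are assumptions you should adjust to the model you pulled:
# Modelfile: extend a pulled model with a larger context window
FROM qwen2.5-coder:7b
PARAMETER num_ctx 32768
Then build it with: ollama create qwen2.5-coder-ottodev -f Modelfile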
Love your work, Cole! Thanks a lot. I have oTToDev running locally now. All is good, but quite often the preview is just blank. Is that a known issue, or is it more likely related to my Mac?
If we use a local LLM from Ollama, does the model need a 128k context length?
Btw, great project from an individual YouTuber 🤟
It would be very nice if you could show how to use it installed on a server instead of locally. I think this isn't working at the moment, and a lot of people have problems with it because of cross-origin and similar errors. Other than that, thanks for your videos and your work on it 🙂
Hi. I am interested in learning how to build n8n workflows. What's the best channel for learning n8n from scratch? Can someone please advise?
I'm glad the only people who can use this are we programmers ourselves, so we're not cooked yet. Please don't build a SaaS around this for others.
Why can't we select the free Llama 405B from OpenRouter? It's free, but I can't see it in the OpenRouter dropdown list. Is there a way to modify this?
Well, it would be awesome if minimum system requirements were listed in the README.
Hi Cole! Please add support for the Microsoft Azure API, as well as the new GitHub Marketplace Models free-tier API for GPT-4o, to oTToDev. Thank you for your hard work!
If anyone still has trouble running Bolt or Ollama locally, just use Pinokio Computer (google it); you can install and run it in one click!
I set this up with Ollama locally and added the Ollama URL, but it kept saying it needs an API key. Groq worked OK, but I was hoping to use a local LLM. I currently use Bolt and would like to develop Flutter apps. Any ideas how to fix the API key error?
@ColeMedin Great video and cool project! I added Haiku 3.5 to your list of LLMs, but no previews are being generated. Any ideas why? Am I supposed to pre-install any pip libraries ahead of time?
Also, other than React/Tailwind CSS, do you know of any other tech stacks that bolt.new/oTToDev can generate previews for?
Keep up the good work!
Hi @Cole, thanks for the valuable share. What if I need to use a custom API with my own LLM model?
Can you recommend someone to help me (a newbie) set this up on my home PC for a project I am working on?
I run it locally with llama3.1:8b and it works well. I don't know why, but the preview function doesn't seem to work well. First it worked, and then it was just fully white.
I wish for:
1. Editing previous messages.
2. Attaching files (images for context, etc.). It would be nice for the new Llama vision model to gather all the info, then switch to a code model that implements the analysis.
Just tested it for an hour.
Thanks for this!
Hi Cole, what Mac do you recommend for development with bolt.new with local models running? It would be nice if you could say which is best on a budget and which is best overall.
I followed the instructions in the video to expand the context, but I'm getting this error:
C:\Users\marco\bolt.new-any-llm\modelfiles>ollama create -f Qwen2.5Coder qwen2.5-coder-ottodev:3b
transferring model data
pulling manifest
Error: pull model manifest: file does not exist
To be clear, inside the modelfiles folder I don't have all the files shown in your video, only the one I created as you instructed. The video is not very clear on this point.
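That error usually means Ollama couldn't resolve the model named on the Modelfile's FROM line, either because of a typo in the tag or because the base model hasn't been pulled yet. A hedged guess at the fix, assuming the Modelfile starts with FROM qwen2.5-coder:3b:
ollama pull qwen2.5-coder:3b
ollama create -f Qwen2.5Coder qwen2.5-coder-ottodev:3b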
First of all, thanks for your work and for the community. I use a proxy server and tried to configure it, but I get the error: code: 'ERR_INVALID_URL', input: '172.16.1.254:3128'. Please create a way for people like me who use proxies to install without any issues.
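A hedged guess: Node throws ERR_INVALID_URL for values without a scheme, so the proxy address probably needs an http:// prefix. For example (the variable name here is an assumption, not confirmed from the repo):
HTTP_PROXY=http://172.16.1.254:3128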
How can we make the Google AI Studio API work? Simply adding it to the .env file doesn't work.
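If it helps: the key needs to go into the .env file under the exact variable name the app reads. A sketch assuming the Google provider variable used by the Vercel AI SDK (check .env.example in the repo, since the name here is an assumption):
GOOGLE_GENERATIVE_AI_API_KEY=your-api-key-here
Restarting the dev server after editing .env may also be required.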
How do you load from a GitHub repo or StackBlitz project? Since it's available in bolt.new, I assume it should be possible?
When I use Ollama it gives me "x-api-key is required". How can I solve that?
Hi Cole, can you fork Cofounder? It could be useful; Cofounder is better than Bolt.