TL;DR
- OpenClaw is promising but not ready for mainstream use yet. It requires technical know-how, has security considerations, and needs constant babysitting: the agents hallucinate, break configs, and don’t always report back.
- The real value was learning, not automating. I didn’t find killer use cases, but I gained hands-on experience with LLMs, agents, and the current limitations of the tech.
- We’re in the early days. Expensive, clunky, unclear what it’s good for, but the direction is interesting. Local models might make this practical eventually. For now, it’s a fascinating experiment for tinkerers, not a must-have tool.
Introduction
After returning from FOSDEM, I was bombarded with hype around OpenClaw, an open-source, self-hosted AI assistant that promises to glue large language models (LLMs) to your personal tools. As an embedded-software engineer who lives on Linux, loves FOSS, and maintains FOSS projects, I was both skeptical and curious. Skeptical because, honestly, I’m not a pro AI guy: in my opinion, most AI tools are garbage, and LLMs like ChatGPT or Claude are environmental and ethical disasters. But also curious, because AI is here to stay, and as a software engineer, I need to stay relevant. And since I’m all-in on self-hosting, I had to try this allegedly revolutionary tool. Here’s what a week of hands-on testing taught me.
What is OpenClaw, anyway?
OpenClaw is an open-source, self-hosted AI assistant that connects to LLMs and your personal tools.
What does that actually mean? Before jumping into this, I barely knew what an AI agent was, but I understood that OpenClaw lets you run an AI-thing on any computer or SBC, talk to it via Telegram, and give it access to your files, network, and applications to automate anything you want.
I was not very interested at first; I guess my brain has learnt to ignore most of the AI news. But the more I read and talked about it with friends, the more curious I became. And since it’s open-source, easily self-hostable, and “free”, I decided to install it and check it out!
The setup
I first installed it in a VM on my laptop, but quickly moved it to an SBC that I could leave on 24/7. I used my OrangePi5+, since it’s the only unused SBC I had lying around. The installation on Fedora for ARM64 went well: just copy/paste the curl & bash command from the documentation and follow the instructions. I won’t detail the whole installation process since it’s quite straightforward, and there are tons of tutorials available on the internet.

The OrangePi5+ is over-powered for OpenClaw, and I’ll probably move it to a Quartz64 Model A 8GB from Pine64 in the near future. The Quartz64 is an amazing board that sips very little power and doesn’t heat up at all, even without any heat spreader!
It’s time for a small disclaimer: OpenClaw is known to have many security issues, which could be problematic if you give your agent free rein over your local machine, network, online accounts, and so on. Please make sure you know what you’re doing before setting up your instance!
In my case, OpenClaw is running on a dedicated machine, and, except for the API keys for the LLMs, I won’t provide it any of my credentials, nor access to any other machine on my network!
First Attempt: The Free Tier Struggle
At first, I didn’t want to pay for an LLM subscription, since I just wanted to test the waters, see what the hype was about, and do some experiments.
I first tried the free tier of Qwen. It worked… at the beginning, but I quickly hit the limits of the free tier.
I also tried using free models on OpenRouter, but that didn’t work at all. I guess the free tier is way too limited for agent use.
Let’s spend a few bucks for GLM-4.7
You get what you pay for, as they say, so I decided to spend a few €€€ on a subscription. I opted for the GLM Coding Lite plan from z.ai, since GLM-4.7 was often mentioned as a cheaper alternative to other big LLMs (like ChatGPT or Claude, for example).
All of a sudden, things went much more smoothly: I could talk with the agent, test a few prompts, and experiment with AI!
The first thing I did was to ask the agent to set up the Telegram channel, which went very well.
Then I asked it to do whatever was necessary so that I could access the OpenClaw web UI from my local network, since by default it’s only available on the machine it runs on. Aaaand… I watched the agent edit OpenClaw’s configuration, restart it, claim it was done (it wasn’t working), try again, and finally break the configuration so that OpenClaw would not start at all. It was a bit frustrating, but also quite funny: it kept trying, but had no idea what it was doing! I eventually stopped it and reverted the configuration since, according to the documentation, OpenClaw is not designed to work that way; you should use an SSH or Tailscale tunnel to access your OpenClaw instance from other machines.
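For the record, the documented approach needs no agent at all: a plain SSH local port forward does the job. A minimal `~/.ssh/config` sketch; the host name, user, and port numbers are placeholders for my setup, adjust them to yours:

```
# ~/.ssh/config (host name, user, and ports are assumptions; adjust to your setup)
Host openclaw
    HostName orangepi.local
    User opi
    LocalForward 8080 localhost:8080
```

With that in place, `ssh -N openclaw` keeps the tunnel open and the web UI becomes reachable at http://localhost:8080 on the laptop, without touching OpenClaw’s configuration at all.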
So far, not very impressive: I had just watched the agent configure itself, with a very low success rate.
But then I asked it to connect to my Home Assistant instance. It told me how to create the long-lived access token it needed to access the instance, and then managed to answer requests like “what’s the outdoor temperature?” and “switch that light on/off”:

And it did exactly that:

I didn’t have to do any coding, provide any additional info, or anything else; it figured everything out by itself, which is quite amazing to me!
I played with it for three days, quite intensively in the evenings. Then, on the third day, I hit a new rate limit: the weekly quota. With Z.ai, you pay for a monthly subscription (and probably a limited number of tokens per month), but you are also rate-limited per 5-hour window and per week! And when you reach the weekly quota… you have to wait for the end of the week before you can use your subscription again. A shame, but… you get what you pay for, right?
I want more! The OpenRouter experiment
Since the GLM subscription was cheap, I decided to burn an additional €10 on OpenRouter. OpenRouter is a service that lets you use many models from many providers through a single subscription (in exchange for a small fee).
When you top up at least €10, you get access to all their paid models, obviously, but also to the free ones (which are most likely rate-limited as well). And yes, it worked.
First I tried their openrouter/auto “model”. This is not an actual model: it automatically analyzes the prompt and chooses the best model to execute it.
Once again, that was not a good idea! It routed too many requests to the new Claude Opus 4.6, which costs a lot: a few requests burnt more than €3 out of my €10 bucket!
So I quickly decided to try other models, and mostly used MiniMax M2.5, which gets a lot of good feedback on Reddit. I also eventually tried Kimi K2.5, Trinity Large Preview (free), and Step 3.5 (free).
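Pinning a specific model instead of letting openrouter/auto choose is just one field in the request: OpenRouter exposes an OpenAI-compatible chat-completions endpoint. A sketch of such a request; the API key is a placeholder and the exact model id is an assumption, check the model list on OpenRouter for the current ids:

```python
import json
import urllib.request

# Placeholders: the API key and model id are assumptions, adjust to your account.
API_KEY = "YOUR_OPENROUTER_API_KEY"
MODEL = "minimax/minimax-m2.5"

def chat_request(prompt: str, model: str = MODEL) -> urllib.request.Request:
    """Build an OpenAI-style chat-completion request against OpenRouter.

    Pinning `model` explicitly keeps the auto-router from quietly
    picking an expensive model behind your back.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    req = urllib.request.Request(
        "https://openrouter.ai/api/v1/chat/completions",
        data=json.dumps(payload).encode(),
    )
    req.add_header("Authorization", f"Bearer {API_KEY}")
    req.add_header("Content-Type", "application/json")
    return req
```

In OpenClaw itself this amounts to writing the explicit model id in the configuration rather than `openrouter/auto`.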
And I did more experiments: making the agent use my SearXNG instance for web search, setting up cron jobs for small automations, reading my Logseq notes, installing new skills, writing some code, and even generating a whole blog for the agent.
While all of this worked, it was not as easy as many Reddit posts and YouTube videos would have you believe: the agent hallucinates fixes for errors it created itself, restarts the gateway without reporting back once the gateway is running again, loops on parts of the discussion even after I asked it to stop, and failed to build a small test application (although my prompt was probably too short). It basically needs a lot of hand-holding and monitoring.
What about local models?
I do not have a huge AI rig at home. And even though my desktop can run some models on its 3060 Ti and 64 GB of RAM, I do not want to leave it running 24/7.
But the OrangePi5+ it’s running on has 32 GB of RAM, sooo… why not try running llama.cpp with a model that fits into memory?
I think that would be nice for simple automations, cron jobs running during the night, when the speed is not an issue.
I tried Qwen3-30B and GPT-OSS-20B. They run very slowly (1-3 tokens/second), but they run, and they fit in memory!
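For reference, llama.cpp ships a small OpenAI-compatible HTTP server, which is the natural way to expose a local model to a tool like OpenClaw. A sketch of the launch command; the model filename is a placeholder for whatever quantized GGUF you downloaded, and the context size is just what I used:

```
# Serve a quantized GGUF over an OpenAI-compatible API on port 8080.
# The model path is a placeholder; -c sets the context size in tokens.
llama-server -m qwen3-30b-q4_k_m.gguf -c 4096 --host 0.0.0.0 --port 8080
```

Any OpenAI-compatible client can then be pointed at `http://localhost:8080/v1`; whether a given tool honors that endpoint reliably is another matter, as I found out next.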
Buuut that didn’t integrate well with OpenClaw: it would automatically fall back to the paid subscriptions or just silently time out. Requests would stay unanswered, with no error or status whatsoever.
Ok, but what is it for?
I did all of these experiments over the course of a week, mainly during the evenings. I had a lot of fun watching the agent do stuff by itself, sometimes failing miserably, sometimes doing things I did not expect.
All in all, it’s not ready to be used by the general public: you need to know what you are doing, be aware of the security risks, keep an eye on subscription costs, and know that the agents hallucinate, among other things.
And I failed to find genuinely useful use cases that would make my life simpler.
Yes, I can use it as an AI chat, but I can use any other platform for that, and I already pay for Proton Lumo for that use-case.
I could also use it to help with coding, but I’m not sure it would be as good as Codex or JetBrains AI, for example. And, to be honest, I barely use those.
It turns out that I cannot think of any workflow or process that I would like to automate and that I cannot automate with Home Assistant or a simple script. Maybe I’ll find some in the future, as I learn how to use the technology and as this technology evolves day by day.
But does that mean that this experiment is a failure? Absolutely not! I learnt a lot about AI, models, providers, and agents these last few days: a lot more than I learnt over the last few months using GitHub Copilot at work, for example.
And even if I do not particularly like the AI hype, I have to keep up with new technology to stay relevant in my job, and I have to learn to use these tools before they eventually replace me (if that ever happens). For that, this experiment was a big win!
What’s next?
I feel like I’ve just scratched the surface here. The more I play with it, the more I want to try new things:
- Try other models; maybe try ChatGPT and see whether a bigger, well-known model performs better/faster
- Try to better integrate my Logseq notes with QMD
- Try multiple agents talking to each other, and sub-agents
- Try more skills and plugins
- Try alternatives like PicoClaw, which is much more lightweight
I’ll keep my instance running, continue playing with it from time to time, especially when I’m too tired to do anything else meaningful, and I’ll stay up to date on the evolution of this new technology and see how it goes!
And beyond?
During this experiment, I felt like a pioneer testing brand-new technology, along with everyone else using OpenClaw right now. I don’t know much about it, and I don’t really know how to use it yet, but I feel like it’ll be part of our lives in the future.
I think of it like the early days of the internet, when access was expensive (you paid by the minute), slow, unreliable, and insecure, and most people didn’t know what to do online. Then came ADSL connections: less and less expensive, but with limited quotas. And now the internet is a commodity; most people subscribe to their internet provider just as they would to their electricity company.
The same might happen with this AI technology. We are just discovering what it can do, but we still have to find the use cases that will actually make our lives easier. It costs a lot of money, and most people do not want to spend money on it.
But the technology progresses rapidly. Might it become cheaper in the future? Might we be able to run the LLMs locally, on a simple computer or SBC? Who knows?
I know the current state of the technology is not bright. But that doesn’t mean that it cannot improve in the future, right?
