
We ❤️ Open Source

A community education resource


Getting started with OpenClaw: Complex tasks from a simple chat

How an open source autonomous agent framework is changing the way developers interact with AI systems.

Over the past year, autonomous, task-executing assistants have made clear headway from concept to serious infrastructure. Tools like Cursor and Codex have shown developers what AI can do inside an IDE, but today I think we are seeing something bigger emerge: AI agents that can work and execute in real time in almost any environment you choose. 

This is a fast-moving industry, and if I’m completely honest, no one really knows in which direction it is heading. OpenClaw is a project that demonstrates this pace of change in plain sight. In just a few short weeks in early 2026, the project has cycled through multiple names (Clawdbot, Moltbot, and now OpenClaw), refactored repeatedly, and grown as community interest surged. 

What began as a niche, self-hosted experiment has become one of the most widely discussed open source autonomous agent frameworks this year. For developers paying attention to the next phase of AI tooling, OpenClaw offers a revealing look at what happens when natural language reasoning meets direct system execution. 

What is OpenClaw? 

OpenClaw is an open source autonomous agent framework designed to execute advanced tasks through simple chat interfaces such as WhatsApp or Telegram. Unlike traditional AI chatbots that stop at text generation, OpenClaw takes action. When configured with appropriate permissions, it can: 

  • File system interaction: read and write files in your environment 
  • Command execution: run shell commands 
  • Code modification: modify and generate code 
  • API connection: connect to external APIs 
  • Workflow automation: automate workflows across applications 

You describe your objective in natural language. The agent interprets your request, plans the necessary steps, and executes those steps within your environment. 

Under the hood, OpenClaw wraps a large language model (LLM) with task orchestration, tool invocation, and execution logic. The language model handles reasoning; the framework handles action. Practically, it helps to think of OpenClaw as three pieces working together: (1) the model for “thinking,” (2) a tool/skills layer for “doing,” and (3) a chat channel (like Telegram) that acts as the front door. 

One of its most accessible features is that it does not require a local GPU when paired with a remote AI provider's API for heavy reasoning tasks. Developers can therefore run OpenClaw on modest hardware, from a laptop to a small server, without specialized compute resources. If you choose to host models locally, hardware requirements will depend on the model you select. 

This flexibility lowers the barrier to entry and makes experimentation far more approachable. 

Read more: Why open source is critical for the continued advancement of new tech

Developer advantages and benefits

In practice, OpenClaw is compelling for developers for a few reasons: 

  • Reduced context switching: Instead of bouncing between terminal, editor, browser tabs, and docs, you can describe a goal and let the agent carry out the "busy work" in the background. 
  • Reusable workflows: Once you've proven a process (upgrade steps, refactor rules, validation commands), it's easy to run it again consistently. 
  • Total control: It stays under your control. You choose where it runs, what it can access, and which model providers it can call. 
  • Persistent assistance: It's "always-on" if you want it to be. Run it on a server, and it becomes a persistent helper rather than a one-off local experiment. 

The key shift is simple: instead of an assistant that suggests what you should do next, OpenClaw can become an assistant that performs the steps, within guardrails you define. 

What can it do? 

In reality, the possibilities are endless. If you install OpenClaw on a Linux server, you can configure it to do anything you can do on Linux. The scope really is that wide! Instead of manually updating hundreds of files during a framework upgrade, instruct OpenClaw to scan repositories, refactor syntax, and validate results through automated test execution. 
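To make the framework-upgrade example concrete, here is a rough sketch of the manual loop you would hand off to the agent: find every file using a deprecated call, rewrite it, and confirm the change. Everything in it is hypothetical — the `oldApiCall`/`newApiCall` names are invented, and a throwaway demo repo is created so the sketch runs end to end.

```shell
#!/usr/bin/env sh
# Hypothetical bulk refactor: the repetitive loop an agent would automate.
set -eu

REPO_DIR=$(mktemp -d)                        # stand-in for a real checkout
echo 'oldApiCall(42);' > "$REPO_DIR/app.js"  # file still using the old API

# 1) List every file that still uses the deprecated call
grep -rl 'oldApiCall(' "$REPO_DIR"

# 2) Rewrite the call in place across the tree (GNU sed)
find "$REPO_DIR" -name '*.js' -exec sed -i 's/oldApiCall(/newApiCall(/g' {} +

# 3) Confirm the rewrite landed
grep -c 'newApiCall(' "$REPO_DIR/app.js"     # prints: 1

# 4) In a real repo, you would now run the project's test suite, e.g.:
# (cd "$REPO_DIR" && npm test)
```

The agent's value is not that any one of these commands is hard, but that it can repeat this loop across hundreds of files and report back only the results.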

Get it to execute infrastructure commands and interact with APIs, enabling automation for environment configuration, scripting, system health checks, and log analysis. It’s not a replacement for engineers, but it’s increasingly becoming a capable assistant. 
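As a rough illustration, the "system health check and log analysis" category often boils down to commands like the following. The log format is invented for the example, and a fake log file is generated so the snippet is self-contained.

```shell
#!/usr/bin/env sh
# Illustrative health-check snippets an agent might run and summarize.
set -eu

# Disk usage for the root filesystem (one summary line)
df -h / | tail -n 1

# A fake application log so the analysis step is self-contained
LOG=$(mktemp)
printf 'INFO boot ok\nERROR disk full\nINFO job done\nERROR timeout\n' > "$LOG"

# Count and surface error lines; an agent would report these back in chat
ERRORS=$(grep -c '^ERROR' "$LOG")
echo "errors found: $ERRORS"   # prints: errors found: 2
grep '^ERROR' "$LOG"
```

In practice, the agent strings commands like these together, interprets the output, and replies with a summary instead of raw terminal noise.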

Read more: The AI slop problem threatening open source maintainers

Getting started 

For developers interested in experimenting with OpenClaw, the process is relatively straightforward. Initial experimentation works well on a local machine, and OpenClaw can also be installed on a cloud hosting platform such as Atlantic.Net's. But if you want a persistent, always-on assistant, infrastructure choice matters. 

OpenClaw install (Ubuntu + Telegram) 

Prereqs: Ubuntu server, Node, an AI provider API key, and a Telegram bot token from @BotFather

1) Create a non-root user

ssh root@<SERVER_IP>
adduser openclaw
usermod -aG sudo openclaw
su - openclaw

2) Install Node (via NVM)

curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.3/install.sh | bash
source ~/.bashrc
nvm install 24   # Node 22+ required; 24 is fine

3) Install OpenClaw + run the setup wizard (installs the daemon)

npm install -g openclaw@latest
openclaw onboard --install-daemon

In the wizard: pick your model provider, choose Telegram, and paste your BotFather token. 

4) Pair your Telegram DM (first-time access)

  1. Message your bot in Telegram once. 
  2. Approve the pending pairing: 
openclaw pairing list telegram
openclaw pairing approve telegram <CODE>

5) Open the dashboard (securely, via SSH tunnel)

ssh -N -L 18789:127.0.0.1:18789 openclaw@<SERVER_IP>

Then browse: http://localhost:18789/

For ongoing use, deploy OpenClaw on a dedicated cloud server; any small VM with stable uptime and network access works well. For example, an Atlantic.Net Cloud instance with a few CPU cores and 4GB of RAM is typically enough for the Node.js runtime and API calls, without turning this into a heavyweight deployment. Local testing is fine; persistent agents benefit from stable infrastructure. 

Safety first 

As OpenClaw can execute commands, it’s worth treating it like any other automation system that can touch your environment: run it as a non-root user, apply least privilege to everything it can access (directories, repos, and APIs), and keep pairing/approvals tightly controlled so only accounts you own can drive it. 
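A minimal sketch of that least-privilege idea: give the agent one owner-only working directory rather than broad access to the machine. The paths here are illustrative (a temporary directory stands in for whatever fixed directory you would dedicate to the `openclaw` user), and the credentials path is an example, not from the docs.

```shell
#!/usr/bin/env sh
# Least-privilege sketch: one owner-only working directory for the agent.
set -eu

WORKDIR=$(mktemp -d)      # stand-in for e.g. /srv/openclaw-work
chmod 700 "$WORKDIR"      # owner can read/write/execute; nobody else can

# Verify the restriction took effect (GNU stat)
stat -c '%a' "$WORKDIR"   # prints: 700

# Keep any credential files out of world-readable locations too, e.g.:
# chmod 600 ~/.openclaw/credentials   # example path, not from the docs
```

Scoping the agent's file access this way means a bad or misinterpreted instruction can only touch what you explicitly handed over.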

Looking ahead 

OpenClaw represents a shift in how developers interact with AI systems. For years, AI tools functioned primarily as advisors. OpenClaw demonstrates how AI is becoming an operator as well. 

Whether autonomous agents become a foundational layer of software engineering or settle into something more constrained remains to be seen. What's clear is that open source frameworks allow developers to experiment directly with that future on their own terms and infrastructure. 


About the Author

Marty Puranik, Founder and CEO of Atlantic.Net, is a visionary leader in the cloud and data center industry. As a seasoned executive, he champions innovation in Infrastructure as a Service, focusing on leveraging cutting-edge technologies like AI to enhance performance, scalability, and customer value, driving the next generation of cloud solutions.

Read Marty Puranik's Full Bio

The opinions expressed on this website are those of each author, not of the author's employer or All Things Open/We Love Open Source.
