I run an AI agent on a server in my house. It monitors my CRM, generates email drafts, does research overnight, sends me reports on Telegram before I wake up. It has access to my client data, my pipeline, my business files. And until last week, the honest answer to "is this secure?" was "probably, if you're careful." That just changed.
The problem nobody wanted to talk about
AI agents like OpenClaw are genuinely useful. I use one every day. It handles tasks that used to take me hours. But there was always this uncomfortable truth underneath it all: the agent has access to everything on the machine it runs on. Your files. Your passwords. Your client data. Your network. If it can read it, the agent can read it. And if it can reach the internet, it can (in theory) send that data anywhere.
For personal use, that's a risk you manage yourself. Think before you prompt. Don't give it access to things you wouldn't want leaked. But for business? For client data? For anything regulated? "Be careful" is not a security policy.
What NVIDIA just did
NVIDIA released something called NemoClaw. Open source. Free. And it solves the biggest problem AI agents had. Here's what it does in plain English: it puts your AI agent in a locked room. The agent can still do everything it could before. Research, write, code, automate. But it can only access the files you specifically allow. It can only reach the websites you specifically whitelist. Everything else is blocked by default.
Jensen Huang (NVIDIA's CEO) put it bluntly at their conference last week: "Agentic systems can access sensitive information, execute code, and communicate externally. This can't possibly be allowed without controls." He's right. And now those controls exist.
How it actually works (no jargon)
Think of it like this. Before NemoClaw, your AI agent was an employee with a master key to every room in your building. Useful, but terrifying if something goes wrong. After NemoClaw, the agent works in one specific room. You decide what's in that room. You decide which doors are unlocked. The agent can't wander off and access things you didn't approve.
The technical bit (for those who care): NemoClaw wraps OpenClaw in something called OpenShell, which is a sandboxed container. All network requests, file access, and API calls go through a policy engine. The policies are written in simple YAML files. Deny by default, allow only what you specify. If someone asks "can you prove your agent won't send data externally?" you show them the policy file and say "it's deny by default. Only these endpoints are allowed."
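To make "deny by default, allow only what you specify" concrete, here's roughly what such a policy file could look like. The field names below are illustrative guesses for this article, not NemoClaw's actual schema — treat it as a sketch of the shape, not a copy-paste config:

```yaml
# Hypothetical deny-by-default policy. Field names are illustrative,
# not NemoClaw's real schema.
default: deny

network:
  allow:
    - host: api.example-llm.com      # the one model endpoint the agent needs
    - host: crm.example.co.uk        # your CRM, nothing else

filesystem:
  allow:
    - path: /workspace/clients/      # only the approved client folder
      mode: read
    - path: /workspace/output/       # where the agent writes its reports
      mode: readwrite
```

Anything not listed (your home directory, your SSH keys, every other website on the internet) is blocked without needing to be mentioned. That's what makes the file auditable: a compliance officer only has to read the allow list.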
For everyone else: it's a locked room with a guest list. Nothing gets in or out without your permission.
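If you want to see the "guest list" logic in code, the core idea fits in a few lines. This is a toy illustration of deny-by-default matching, not NemoClaw's implementation — the rule names and patterns are invented for this sketch:

```python
import fnmatch

# Invented example policy: everything is denied unless it matches a rule here.
POLICY = {
    "allow_hosts": ["api.example-llm.com", "*.googleapis.com"],
    "allow_paths": ["/workspace/*", "/tmp/agent/*"],
}

def host_allowed(host: str) -> bool:
    """A host is reachable only if it matches an allow pattern."""
    return any(fnmatch.fnmatch(host, p) for p in POLICY["allow_hosts"])

def path_allowed(path: str) -> bool:
    """A file is accessible only if its path matches an allow pattern."""
    return any(fnmatch.fnmatch(path, p) for p in POLICY["allow_paths"])

# The agent's approved endpoints and folders work...
assert host_allowed("api.example-llm.com")
assert path_allowed("/workspace/report.md")

# ...and anything not on the guest list is blocked, including things
# nobody thought to forbid explicitly.
assert not host_allowed("data-exfil.example")
assert not path_allowed("/home/user/.ssh/id_rsa")
```

The important design choice is the direction of the default: you never have to enumerate what's forbidden, only what's permitted, so a new attack surface is blocked until someone consciously opens it.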
Why this matters for UK service businesses
I build AI systems for accountancy firms, recruitment agencies, law firms. Every single one of them asks the same question in our first call: "What about our client data?"
Up until now, my answer was honest but complicated. We use enterprise API access (not consumer chatbots). Your data doesn't train any models. We use UK data residency with Google Vertex AI in London. We sign a DPA before every engagement. All true. All solid. But for the agent itself, running on a machine? Its security was always a matter of trust and good practice, not enforced controls.
That's now different. With NemoClaw, I can show a policy file that proves exactly what the agent can and can't access. That's a different conversation with a compliance officer.
For regulated industries this is particularly big. Accountancy firms dealing with HMRC data. Law firms handling case files under SRA rules. Recruitment agencies processing candidate information under GDPR. "Trust me" becomes "here's the policy file, it's auditable, and it's deny by default."
What Jensen said that caught my attention
At NVIDIA's GTC conference, Jensen compared OpenClaw to Linux and NemoClaw to Red Hat Enterprise Linux. He said every company needs an "OpenClaw strategy" the same way they needed a Linux strategy, an HTTP strategy, a Kubernetes strategy. Bold claim. But he's not wrong about the direction.
AI agents are becoming infrastructure. Not a nice to have. Not a productivity hack. Actual infrastructure that businesses will rely on daily. And like any infrastructure, it needs security controls.
The numbers backing this up are hard to ignore. NVIDIA sees $1 trillion in orders through 2027. OpenClaw is the fastest growing open source project in history (13,000+ GitHub stars in weeks). And now the Nemotron coalition includes Cursor, LangChain, Mistral, Perplexity, and Black Forest Labs. This is not a side project. This is the entire industry moving in one direction.
The honest limitations
NemoClaw is alpha software. I tested it. It works, but it's rough around the edges. Docker is required and the setup is not trivial: I hit port conflicts and permission issues, and the blueprint system still has bugs. One reviewer put it well: "Ready for engineers and coders who understand Docker. For the general user, wait."
I agree. If you're a business owner, you're not installing this yourself. But here's the thing. You don't need to. The person building your AI system (someone like me) handles this layer. You just need to know it exists and what it means for your data.
The other limitation: NemoClaw adds security by restricting what the agent can do. That's the whole point. But the things that make AI agents dangerous (accessing everything, reaching everywhere) are also what make them useful. There's always a tradeoff between security and capability. NemoClaw lets you dial that tradeoff precisely rather than leaving it at "everything or nothing."
What this means for you right now
If you're running a service business and thinking about AI agents, this changes the conversation in three ways:
- Data privacy just got a real answer. Not "we're careful" but "here's the policy file proving the agent can only access what we approved." If you're in a regulated industry, this matters.
- The security objection just got weaker. "AI agents aren't secure enough for business use" was a legitimate concern until last week. Now there's an open source, NVIDIA-backed security layer with deny-by-default network policies. The goalpost moved.
- The gap between personal AI and business AI is closing. This used to be two different worlds. Consumer tools that were powerful but risky, and enterprise tools that were locked down but useless. NemoClaw bridges that gap. Same agent, same capabilities, but now with controls that would satisfy a compliance audit.
I'm already integrating this into how I build AI systems for clients. If you're interested in what a secured AI agent could do for your business, that's exactly what our free audit covers.
Find out how a secured AI agent could work for your business
Fortnight & Co builds working AI systems for UK service businesses in 14 days. If we can't find a use case that saves your team 5+ hours per week, you don't pay.
Get your free automation audit
Free - 20 minutes - No obligation - Custom report
