
NVIDIA NemoClaw Evaluation

Date: March 17, 2026 (GTC Day 2)
Status: NemoClaw announced at the GTC 2026 keynote, March 16. Alpha release.


TL;DR

NemoClaw is not an AI agent. It is a security/sandboxing wrapper for OpenClaw, routing inference through NVIDIA's OpenShell runtime. It does not replace, threaten, or directly improve nanobot. The concepts are worth stealing; the code is not worth integrating today.

Score: 18/50 as a nanobot replacement. Interesting as a market signal.


What NemoClaw Actually Is

[ OpenClaw Agent ]           <-- the actual agent (not NVIDIA's)
        |
[ NemoClaw Plugin ]          <-- NVIDIA's TypeScript CLI + Python blueprint
        |
[ OpenShell Runtime ]        <-- NVIDIA's sandboxing layer (Landlock/seccomp/netns)
        |
[ NVIDIA Cloud Inference ]   <-- Nemotron 3 Super 120B via build.nvidia.com

Repo snapshot:

  • Language: TypeScript 36.7%, Shell 29.9%, JavaScript 27.5%, Python 4.8%
  • GitHub: 4,000 stars (day-old repo, riding NVIDIA brand + GTC hype)
  • License: Apache 2.0
  • Contributors: 19
  • Maturity: Alpha -- "early-stage, rough edges expected"

NemoClaw has zero agent logic. No memory, no skills, no tool framework, no task planning, no channels. It is purely a deployment and policy enforcement wrapper for OpenClaw.


Feature Comparison: NemoClaw vs Nanobot

| Capability | NemoClaw | Nanobot |
|---|---|---|
| Agent loop | No (uses OpenClaw's) | Yes (custom asyncio) |
| Multi-channel (Discord/Telegram/Slack/WhatsApp) | No | Yes (8+ channels) |
| MCP support | No | Yes (stdio + HTTP) |
| RAG / semantic memory | No | Yes (ChromaDB) |
| Multi-LLM routing | No (Nemotron-first) | Yes (LiteLLM + custom + Codex) |
| Scheduled tasks | No | Yes (cron service) |
| Local LLM (llama.cpp) | Undocumented workaround | Yes (primary mode) |
| Network egress control | Yes (kernel-level allowlists) | No |
| Filesystem sandboxing | Yes (Landlock) | No (restrict_to_workspace flag) |
| Inference interception | Yes (OpenShell gateway) | No |
| Audit trail | Network-level only | Tool-level (audit.jsonl) |
| Dashboard | No | Yes (port 18791) |
| Production ready | No (alpha, broken on Spark) | Yes (running daily) |

DGX Spark Compatibility

Broken on launch day. The NVIDIA Developer Forums thread "NemoClaw on Spark" has users hitting `Error: status: NotFound, message: 'sandbox not found'` during onboarding. The default Nemotron 3 Super model is not yet in OpenClaw's model catalog, and vLLM on SM121/aarch64 has its own separate known issues.

Your llama.cpp server on port 8001 could theoretically work via the vllm inference profile (which accepts any OpenAI-compatible endpoint), but:

  • Use `host.openshell.internal:8001`, not `localhost` (the hostname resolves from inside the container)
  • The path is completely undocumented
  • It requires running the full OpenShell container stack
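Under those caveats, pointing the vllm profile at the local server might look roughly like the fragment below. This is a guess at the shape of the config, not documented syntax: every key name here is an assumption, since NVIDIA has not documented this path.

```yaml
# Hypothetical NemoClaw inference-profile fragment (key names are guesses).
inference:
  profile: vllm
  # llama.cpp exposes an OpenAI-compatible API under /v1; use the
  # container-internal hostname, not localhost.
  endpoint: http://host.openshell.internal:8001/v1
  model: local-llama          # whatever name the llama.cpp server reports
  api_key: none               # llama.cpp server typically ignores auth
```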

Security Model -- Worth Understanding

NemoClaw's security is its entire value proposition. Four layers:

| Layer | Mechanism | Hot-reloadable |
|---|---|---|
| Network egress | Operator-approved allowlist | Yes |
| Filesystem | Agent limited to /sandbox + /tmp | No (locked at creation) |
| Process isolation | No privilege escalation | No (locked at creation) |
| Inference routing | API calls intercepted by OpenShell | Yes |

Kernel mechanisms: Landlock, seccomp, network namespaces -- legitimate Linux security primitives.

What it doesn't cover:

  • Decision-level audit trails (what the agent decided, not just what network calls it made)
  • Business-rule enforcement ("agent may not recommend a transaction > $X without approval")
  • Human-in-the-loop workflow gates
  • Regulatory framework alignment (PCI-DSS, FinCEN, SOC 2)
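For contrast, a decision-level guardrail operates on what the agent intends to do, not on syscalls or sockets. A minimal sketch of the missing layer, where `ProposedAction`, `requires_human_approval`, and the $10k limit are all invented for illustration:

```python
from dataclasses import dataclass


@dataclass
class ProposedAction:
    """Hypothetical representation of an action an agent wants to take."""
    kind: str          # e.g. "transaction", "web_fetch"
    amount_usd: float


def requires_human_approval(action: ProposedAction,
                            limit_usd: float = 10_000.0) -> bool:
    # Business rule: transactions above the limit are gated on human sign-off.
    # NemoClaw's kernel-level layers cannot express a rule like this.
    return action.kind == "transaction" and action.amount_usd > limit_usd
```

A real compliance engine would evaluate many such rules, log each decision with the policy version that produced it, and route gated actions to a reviewer queue.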


Relevance to Our Startup Ideas

Compliance AI Agent (46/50)

Validates the market. Does not provide the differentiation.

NemoClaw + Adobe + Salesforce + SAP all buying "sandboxed agents with guardrails" at GTC confirms the enterprise demand. But NemoClaw's guardrails are infrastructure-level (network, filesystem), not decision-level (compliance rules, audit trails, human-in-the-loop).

Your moat is the decision-level rules engine, a gap NemoClaw cannot close quickly because it requires compliance domain expertise (Cash App, $100B+ txn volume), not just engineering.

Sovereign AI Platform (44/50)

NemoClaw's local inference story on DGX Spark validates the "sovereign compute" narrative. Use it as a reference architecture in pitch decks. Do not use it as infrastructure you ship.


What to Steal (Concepts, Not Code)

Three patterns from NemoClaw worth building natively in nanobot:

  1. Declarative network allowlist -- nanobot's web_fetch tool currently has no egress control. A YAML-based allowlist with hot-reload would close the SSRF vulnerability from the security audit AND add a compliance-relevant feature. (~50 lines Python)

  2. Blueprint pattern -- versioned, signed policy artifacts. Useful for compliance ("here's the exact policy the agent was running when it made this decision"). (~30 lines for config versioning)

  3. Inference interception at gateway level -- useful for model A/B testing and routing audit without modifying the agent loop. Nanobot's ModelRouter is the right place. (~40 lines)

Estimated total: ~120 lines of Python, 1-2 days of work.
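The first pattern could be sketched as follows. This is illustrative only: the class name, the one-glob-per-line file format, and the mtime-based reload are assumptions, not nanobot or NemoClaw APIs.

```python
import fnmatch
import pathlib


class EgressAllowlist:
    """Hot-reloadable network egress allowlist (sketch).

    Hypothetical design: the policy file holds one host glob per line;
    edits take effect on the next check without a restart, which is the
    "hot-reload" property borrowed from NemoClaw's network layer.
    """

    def __init__(self, path: str):
        self.path = pathlib.Path(path)
        self._mtime = None
        self._patterns: list[str] = []

    def _reload_if_changed(self) -> None:
        # Re-read the policy file whenever its mtime changes.
        mtime = self.path.stat().st_mtime
        if mtime != self._mtime:
            self._patterns = [
                line.strip()
                for line in self.path.read_text().splitlines()
                if line.strip() and not line.startswith("#")
            ]
            self._mtime = mtime

    def allows(self, host: str) -> bool:
        self._reload_if_changed()
        return any(fnmatch.fnmatch(host, pat) for pat in self._patterns)
```

In nanobot, web_fetch would call `allows(host)` before opening a connection and refuse when it returns False, which also addresses the SSRF finding from the security audit.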


GTC Action Items

  • Hit the "build-a-claw" booth in GTC Park (running through March 19, 8am-5pm)
  • Ask NVIDIA engineers directly: What is the Spark timeline? What is the compliance audit log story?
  • Do NOT attempt to install/integrate NemoClaw this week -- it's broken on Spark

Verdict

| Question | Answer |
|---|---|
| Replace nanobot? | No -- NemoClaw is not an agent |
| Complement nanobot? | Concepts yes, code no |
| Integrate NemoClaw? | Not now -- alpha, broken on Spark |
| Competitive threat? | Low today, medium in 12-18 months |
| Market signal? | Strong -- validates secure sovereign agent demand |
| Action now? | Build Python-native equivalents of the 3 patterns above |

Re-evaluate in 4-6 weeks when the Spark onboarding bug is patched and the alpha stabilizes.


Sources