Case Study · Work in Progress

The control layer for autonomous AI systems

FOFOCA is ThinkNEO's first physical-world case study — an open project built on a 100% NVIDIA stack. Every AI decision the robot makes flows through the ThinkNEO Enterprise AI Control Plane. Not a product for sale. A proof that ThinkNEO can govern a robot you build — and give you full safety and observability over it.

Open Project · 100% NVIDIA Stack · NVIDIA Inception Program · Nemotron Ultra / Nano · Governed by ThinkNEO

// What it is

An open robot — governed by ThinkNEO, powered by NVIDIA.

FOFOCA (Fully Operational Feline-free Omniscient Companion Assistant) is an open, non-commercial household robot that operates 24/7 in a real residential environment. It monitors pets, receives deliveries, detects emergencies, and — when needed — calls SAMU, fire brigade, or police autonomously.

It is not a product we sell. It's a proof — a public, open case study — that ThinkNEO can take a robot you built yourself and give it the same enterprise-grade governance a Fortune-500 agent pipeline gets: runtime guardrails, full observability, cost attribution, and an immutable audit trail for every physical decision.

What makes it different isn't the hardware. It's that every single AI decision the robot makes — every reflex, every reasoning step, every action taken in the physical world — passes through a single enforcement layer before reaching any model or actuator. That layer is ThinkNEO. The entire AI stack runs on NVIDIA: Nemotron Ultra, Nemotron Nano, Jetson-class compute, and NVIDIA NIM microservices — end to end.

// Core hardware

Built on Raspberry Pi 5 — 8 GB.

The brain of FOFOCA is a Raspberry Pi 5 with 8 GB of RAM — a credit-card-sized computer powerful enough to run real-time computer vision, speech processing, and autonomous navigation. Every AI call is orchestrated locally before being routed through the ThinkNEO Control Plane.


Quad-core Arm Cortex-A76 running at 2.4 GHz. Handles OpenCV, YOLOv8 inference, ROS2 navigation, and Piper TTS — all simultaneously. Connected to Insta360 for 360° vision, a 6-DOF robotic arm, and tracked locomotion via ESP32.

CPU: 4x Cortex-A76 @ 2.4 GHz
RAM: 8 GB LPDDR4X
GPU: VideoCore VII
I/O: 2x USB 3.0 · 2x USB 2.0
Net: Gigabit · Wi-Fi 5 · BT 5.0
Vision: Insta360 via USB 3.0
Arm: 6x MG90S via RP2040 PWM
Tracks: ESP32 + HW130
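The local-first split — cheap perception on the Pi, escalation to the control plane only when reasoning is needed — can be sketched as a tiny triage step over detector output. A minimal sketch with illustrative class lists and thresholds; the real routing policy lives in ThinkNEO, not on the robot:

```python
# Sketch of the local-first loop: on-device perception produces (label, confidence)
# pairs, and only events that need reasoning are escalated via the control plane.
# Class names and the threshold are illustrative, not the actual FOFOCA policy.

LOCAL_REFLEXES = {"dog", "person"}      # handled on-device (Nano-routed reflexes)
ESCALATE = {"smoke", "fire", "fall"}    # forwarded for reasoning (Ultra-routed)

def triage(detections: list[tuple[str, float]], min_conf: float = 0.5) -> dict:
    """Split detector output into on-device reflexes vs escalated events."""
    local, escalate = [], []
    for label, conf in detections:
        if conf < min_conf:
            continue  # drop low-confidence detections entirely
        (escalate if label in ESCALATE else local).append(label)
    return {"local": local, "escalate": escalate}

result = triage([("dog", 0.91), ("smoke", 0.77), ("cat", 0.30)])
```

Here the low-confidence "cat" is dropped, the "dog" stays a local reflex, and "smoke" is the only event that triggers a control-plane round trip.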

// Governance architecture

Every AI call routes through the control plane.

FOFOCA holds exactly one API key: the ThinkNEO key. All model traffic — Nemotron Ultra for complex reasoning, Nemotron Nano for real-time reflexes, any other provider — goes through the control plane. No direct provider access. No lock-in. Zero code changes when routing shifts.
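A minimal sketch of the single-key pattern, assuming a hypothetical control-plane URL and request shape (the real ThinkNEO API may differ). The point is that the robot holds one credential and names a task, not a provider, so routing can change without touching robot code:

```python
# One key, one endpoint (names are illustrative, not the real ThinkNEO API).
# The robot never talks to a model provider directly: every call targets the
# control plane, which resolves the task to a model, applies guardrails, routes.

import json

CONTROL_PLANE_URL = "https://api.thinkneo.example/v1/decisions"  # hypothetical

class ControlPlaneClient:
    def __init__(self, api_key: str):
        self.api_key = api_key  # the ONLY credential stored on the robot

    def build_request(self, task: str, prompt: str) -> dict:
        # The payload names a task ("reflex", "reasoning"); the Ultra/Nano
        # choice and any provider fallback happen server-side.
        return {
            "url": CONTROL_PLANE_URL,
            "headers": {"Authorization": f"Bearer {self.api_key}"},
            "body": json.dumps({"task": task, "prompt": prompt}),
        }

client = ControlPlaneClient(api_key="tn-...")
req = client.build_request(task="reflex", prompt="dog barking detected")
```

Swapping Nano for another reflex model is then a control-plane policy change; the request the robot builds is byte-for-byte identical.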

🤖 FOFOCA (Raspberry Pi 5 · camera, arm, tracks, sensors)
↓
🛡 ThinkNEO Control Plane (Guardrails · Observability · FinOps · Audit Trail · Routing)
↓
🧠 Nemotron Ultra / Nano (reasoning + real-time reflexes · any provider, swappable)
Runtime Guardrails
Hard limits on what the robot can do — enforced before a prompt ever reaches a model. Emergency-only tool calls, action cooldowns, safe-zone constraints.

Policy Enforcement
Which model handles which decision. Reasoning goes to Ultra. Reflexes go to Nano. Fallback rules, context-aware routing, provider-agnostic by design.

Immutable Audit Trail
Every decision the robot makes, every tool call, every emergency trigger — logged, timestamped, tamper-proof. When the robot calls SAMU, there's a full trace.

Real-Time FinOps
Per-task cost attribution. How much did it cost the robot to resolve a barking episode? To receive a delivery? The control plane knows, in real time.

// Case study metrics

Live data from 24/7 production.

FOFOCA is generating real production telemetry — not a sandbox, not a demo. The dashboard below will stream live as the case study moves through deployment phases.

| Metric | Source | What it proves |
| --- | --- | --- |
| Requests / day | ThinkNEO Dashboard | Real AI volume in live home deployment |
| Cost per task | AI FinOps | Cost per robot action (dog monitoring, delivery intake…) |
| Guardrails fired | Runtime | Real-world safety interventions in physical AI |
| Latency by model | Observability | Nemotron Ultra vs Nano — fresh production numbers |
| Governed uptime | Monitoring | 24/7 operation with zero governance downtime |
| Model swap impact | Routing | Zero code changes to the robot when routing shifts |
| Emergencies handled | Audit Trail | Full traceability on SAMU / fire / police triggers |

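The "Cost per task" metric above reduces to tagging each model call with the robot task it served and rolling up spend. A sketch with placeholder prices (not real Nemotron rates):

```python
# Sketch of per-task cost attribution: every model call carries a task tag,
# so spend rolls up per episode. Prices are hypothetical placeholders.

from collections import defaultdict

PRICE_PER_1K_TOKENS = {"nemotron-ultra": 0.005, "nemotron-nano": 0.0005}

def cost_per_task(calls: list[dict]) -> dict[str, float]:
    """Aggregate token spend into a cost per robot task."""
    totals: dict[str, float] = defaultdict(float)
    for c in calls:
        totals[c["task"]] += c["tokens"] / 1000 * PRICE_PER_1K_TOKENS[c["model"]]
    return dict(totals)

calls = [
    {"task": "barking_episode", "model": "nemotron-nano", "tokens": 2000},
    {"task": "barking_episode", "model": "nemotron-ultra", "tokens": 1000},
    {"task": "delivery_intake", "model": "nemotron-nano", "tokens": 4000},
]
costs = cost_per_task(calls)
```

With these placeholder rates, the barking episode costs $0.006 (mostly the single Ultra escalation) and the delivery intake $0.002 — the kind of per-action number the dashboard reports live.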
Controlling what an autonomous robot is allowed to decide, what it is allowed to do — and proving it afterwards — is one of the biggest problems in AI right now.

We're building the layer that solves it.
— ThinkNEO · Physical-world governance

// Development roadmap

13 phases. Public from day one.

Mechanical base → perception → locomotion → ThinkNEO integration → audio pipeline → local server → dog module → delivery module → personal assistant → smart-home mesh → SLAM → emergency module → public dashboard. Each phase ships a concrete capability and the governance evidence behind it.

AI Models: NVIDIA Nemotron Ultra · Nemotron Nano
Inference: NVIDIA NIM microservices
Compute: Jetson-class + Raspberry Pi 5 edge
Vision: Insta360 + YOLOv8 + InsightFace
Audio: YAMNet + faster-whisper + Piper TTS
Memory: ChromaDB + PostgreSQL + MinIO
Navigation: ROS2 + SLAM
Governance: ThinkNEO Control Plane

100% NVIDIA AI stack. FOFOCA is a member of the NVIDIA Inception Program and uses Nemotron models end-to-end. The entire reasoning and reflex pipeline runs on NVIDIA infrastructure, governed by ThinkNEO.