FOFOCA is ThinkNEO's first physical-world study case — an open project built on a 100% NVIDIA stack. Every AI decision the robot makes flows through the ThinkNEO Enterprise AI Control Plane. Not a product for sale. A proof that ThinkNEO can govern a robot you build — and give you full safety and observability over it.
// What it is
FOFOCA (Fully Operational Feline-free Omniscient Companion Assistant) is an open, non-commercial household robot that operates 24/7 in a real residential environment. It monitors pets, receives deliveries, detects emergencies, and — when needed — calls SAMU, fire brigade, or police autonomously.
It is not a product we sell. It's a proof — a public, open study case — that ThinkNEO can take a robot you built yourself and give it the same enterprise-grade governance a Fortune-500 agent pipeline gets: runtime guardrails, full observability, cost attribution, and an immutable audit trail for every physical decision.
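The "immutable audit trail for every physical decision" mentioned above is typically built as an append-only, hash-chained log, where each entry commits to the hash of the previous one so any retroactive edit is detectable. The sketch below illustrates that pattern in plain Python; it is an assumption-laden illustration, not ThinkNEO's actual implementation, and every name in it is hypothetical.

```python
import hashlib
import json
import time

class AuditTrail:
    """Append-only, hash-chained log: each record commits to the previous
    record's hash, so any retroactive edit breaks the chain."""

    def __init__(self):
        self.entries = []

    def record(self, decision: dict) -> dict:
        # Each entry stores the previous entry's hash ("genesis" for the first).
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        body = {"ts": time.time(), "decision": decision, "prev": prev_hash}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        entry = {**body, "hash": digest}
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        # Recompute every hash; any tampered entry or broken link fails.
        prev = "genesis"
        for e in self.entries:
            body = {k: e[k] for k in ("ts", "decision", "prev")}
            if e["prev"] != prev:
                return False
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

Verifying the chain after the fact is what turns "trust us" into "prove it": a regulator or operator can replay the log and detect any modified decision.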
What makes it different isn't the hardware. It's that every single AI decision the robot makes — every reflex, every reasoning step, every action taken in the physical world — passes through a single enforcement layer before reaching any model or actuator. That layer is ThinkNEO. The entire AI stack runs on NVIDIA: Nemotron Ultra, Nemotron Nano, Jetson-class compute, and NVIDIA NIM microservices — end to end.
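A single enforcement layer in front of a fast reflex model and a heavier reasoning model can be sketched as a routing function: policy checks run first, then latency budget decides the tier. This is a minimal illustration of the idea, assuming hypothetical action names, thresholds, and policy rules; it is not ThinkNEO's real API.

```python
from dataclasses import dataclass

# Hypothetical policy: requests blocked outright by the enforcement
# layer before any model or actuator is reached.
BLOCKED_ACTIONS = {"unlock_door_for_stranger"}

@dataclass
class Decision:
    model: str      # which model tier handles the request
    allowed: bool   # verdict from the enforcement layer

def route(request: dict) -> Decision:
    """Single choke point: every request passes policy checks first,
    then is routed to a reflex tier or a reasoning tier."""
    if request.get("action") in BLOCKED_ACTIONS:
        return Decision(model="none", allowed=False)
    # Latency-sensitive reflexes (obstacle avoidance, pet tracking)
    # go to the small model; everything else to the large one.
    if request.get("latency_budget_ms", 1000) < 100:
        return Decision(model="nemotron-nano", allowed=True)
    return Decision(model="nemotron-ultra", allowed=True)
```

The key property is that there is exactly one `route` function: no code path can reach a model or actuator without passing through it.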
// Core hardware
The brain of FOFOCA is a Raspberry Pi 5 with 8 GB of RAM — a credit-card-sized computer powerful enough to run real-time computer vision, speech processing, and autonomous navigation. Every AI call is orchestrated locally before being routed through the ThinkNEO Control Plane.
Raspberry Pi 5
8 GB RAM
Quad-core Arm Cortex-A76 running at 2.4 GHz. Handles OpenCV, YOLOv8 inference, ROS2 navigation, and Piper TTS simultaneously. Connected to an Insta360 camera for 360° vision, a 6-DOF robotic arm, and tracked locomotion driven by an ESP32.
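Running vision, navigation, and speech concurrently on a quad-core board is commonly structured as one worker per subsystem publishing to a shared event queue. The sketch below shows that pattern in plain Python; the subsystem names and payloads are placeholders, not FOFOCA's actual code, and a real version would wrap YOLOv8, ROS2, and Piper behind each worker.

```python
import queue
import threading

def subsystem(name: str, events: "queue.Queue", samples):
    """Placeholder worker: a real version would run YOLOv8 inference,
    ROS2 navigation, or Piper TTS and publish results as events."""
    for payload in samples:
        events.put((name, payload))

def run_pipeline():
    events: "queue.Queue" = queue.Queue()
    workers = [
        threading.Thread(target=subsystem, args=("vision", events, ["person", "dog"])),
        threading.Thread(target=subsystem, args=("nav", events, ["waypoint_1"])),
        threading.Thread(target=subsystem, args=("tts", events, ["hello"])),
    ]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    # Drain the queue into a list of (subsystem, payload) events.
    out = []
    while not events.empty():
        out.append(events.get())
    return out
```

A queue-based design keeps each subsystem independent: a slow TTS call never blocks the vision loop, which matters when reflexes have tight latency budgets.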
// Governance architecture
FOFOCA holds exactly one API key: the ThinkNEO key. All model traffic — Nemotron Ultra for complex reasoning, Nemotron Nano for real-time reflexes, any other provider — goes through the control plane. No direct provider access. No lock-in. Zero code changes when routing shifts.
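The single-key pattern described above can be sketched as a thin client that holds one credential and treats the model as just a string the control plane resolves. The endpoint, header, and model aliases below are illustrative assumptions, not ThinkNEO's real API.

```python
from dataclasses import dataclass

@dataclass
class ControlPlaneClient:
    """One key, one endpoint: the robot never holds a provider key,
    and rerouting providers needs no code change on the robot side.
    Endpoint and header names are hypothetical, for illustration."""
    api_key: str
    base_url: str = "https://controlplane.example/v1"

    def build_request(self, model: str, prompt: str) -> dict:
        # The control plane maps the model alias to whatever provider
        # the current routing policy selects.
        return {
            "url": f"{self.base_url}/chat/completions",
            "headers": {"Authorization": f"Bearer {self.api_key}"},
            "json": {"model": model, "messages": [{"role": "user", "content": prompt}]},
        }
```

Swapping "nemotron-nano" for "nemotron-ultra", or for any other provider alias, changes only the `model` string; that is what "zero code changes when routing shifts" means in practice.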
// Study case metrics
FOFOCA is generating real production telemetry — not a sandbox, not a demo. The dashboard below will stream live as the study case moves through deployment phases.
Controlling what an autonomous robot is allowed to decide, what it is allowed to do, and proving both afterwards is one of the hardest open problems in AI today.
We're building the layer that solves it.
// Development roadmap
Mechanical base → perception → locomotion → ThinkNEO integration → audio pipeline → local server → dog module → delivery module → personal assistant → smart-home mesh → SLAM → emergency module → public dashboard. Each phase ships a concrete capability and the governance evidence behind it.
100% NVIDIA AI stack. FOFOCA is a member of the NVIDIA Inception Program and uses Nemotron models end-to-end. The entire reasoning and reflex pipeline runs on NVIDIA infrastructure, governed by ThinkNEO.