Karatage Investment Committee — 8 April 2026

The State of Robotics

A field scan from our first weeks swimming in the space — what's real, what's hype, and where the opportunity gaps are.


01 — The Big Picture

A market on the verge of an inflection

The global robotics market isn't just growing — it's accelerating. Capital is flooding in, major OEMs are running real factory pilots, and the convergence of AI + physical hardware is creating entirely new categories. Here's the scale we're talking about:

$90B
2024 Market Size
$206B
2030 Forecast
15%
CAGR
$38B
Humanoid TAM 2035
542K
Industrial Bots Installed (2024)
4.7M
Robots in Operation Worldwide

Goldman Sachs increased its humanoid TAM projection 6x from $6B to $38B in a single revision. Morgan Stanley's bull case envisions 1 billion+ humanoid robots by 2050. This is the kind of exponential trajectory we look for.

Key Signal

Capital is concentrating hard. In early 2026 alone: Figure AI raised $1B at $39B valuation. Apptronik's Series A hit $935M at $5.3B valuation. Mind Robotics (Rivian spinout) raised $615M. Robotics and embodied AI have become one of the most heavily financed tech categories.


02 — The Real Bottleneck

It's the hands, not the humanoid

The headline may be humanoid robots walking around — but the actual innovation frontier is manipulation and dexterity. How a robot picks up an object, senses its weight, adjusts its grip, and places it precisely is where the hard problem lives.

"The dexterity gap severely limits robots' role in our daily lives. Most current robots still rely on simple grippers capable only of repetitive tasks in structured environments." — RobustDexGrasp, CoRL 2025

What we're observing

Performance Gap

Robots are ~10x slower than humans at pick-and-pack style tasks. Sorting a basket of objects, closing lids, or folding items — tasks a human does in seconds — take robots minutes. Speed + precision in unstructured environments is the unlock.

Demos make this concrete: a robot attempting to plug an Ethernet cable into a socket (high dexterity, painfully slow), or GEN-1 placing paper bills into a wallet (impressive precision, obvious speed gap).

Gripper Design is Still in Flux

The industry hasn't converged on a standard hand architecture. Options range from simple two-finger pincers (typically a single gripper DoF at the end of a 7-DoF arm) to full five-fingered dexterous hands with 20+ DoF. Whether the optimal design looks like a caliper, a human hand, or something entirely new is still open. Touch sensors, tactile feedback, and compliance are all active research areas.

Breakthrough: F-TAC Hand

Published in Nature Machine Intelligence — a robotic hand covering 70% of its surface with tactile sensors at 0.1mm resolution. Outperformed non-tactile systems with p<0.0001 in 600 real-world trials. Tactile feedback is proving essential for real-world manipulation.

The "clean room" problem

Nearly every robot demo we're seeing operates in a highly controlled environment — one basket on a table, few objects, no clutter. The real world has stacks of paper, tangled cables, wet surfaces, variable lighting. Bridging from lab to real-world is where companies stall. Computer vision in cluttered, high-entropy environments remains unsolved at production quality.

A key open question: what foundation model works best for robot vision — video, language, or grounding models? Meanwhile, open-source robot simulators are making it easier to experiment with manipulation in virtual environments.

🤖

Tesollo DG-5F-S

South Korea. 20 DoF five-fingered hand, 880g. Backdrivable joints. Debuted at CES 2026.

🧤

Sharpa Tactile Hands

Demonstrated tool manipulation with tactile AI at GTC 2026. Solving "data drought" with sim.

🖐️

RUKA (NYU)

3D-printed, tendon-driven, 15 underactuated DoF. Affordable open-source dexterous hand.

🔬

GR-Dexter

21 DoF per hand + VLA model. Teleoperated via Meta Quest + Manus gloves for data collection.


03 — The Real Moat

Training data is the new gold

Hardware is being commoditized. The real value — and the real competitive advantage — lives in the quality, diversity, and scale of training data used to teach robots how to operate in the physical world.

"Language models trained on trillions of words. Image models trained on billions of photos. Robots are starting from scratch." — Asimov (YC), robotics training data startup

How robots learn today

Teleoperation (Most Common)

Human-in-the-loop demonstration

Human operators wear VR headsets & glove harnesses (e.g. Meta Quest + Manus Gloves) to physically guide robot arms through tasks — folding shirts, loading dishes, plugging cables. The robot records every movement as training data.
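As a rough sketch of what teleoperation logging looks like under the hood, the loop below samples a (fake) leader arm at a fixed rate and stores each pose as an imitation-learning frame. The `Frame` fields, the sampling rate, and the `fake_leader` stub are illustrative assumptions, not any vendor's actual schema:

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class Frame:
    """One timestep of a teleoperated demonstration."""
    t: float                    # seconds since episode start
    joint_angles: list[float]   # follower-arm joint positions (rad)
    gripper: float              # gripper opening, 0 (closed) to 1 (open)
    task: str

def record_episode(read_leader_pose, horizon=1.0, hz=5):
    """Sample the human-driven leader arm at a fixed rate and log
    each pose as a training frame for imitation learning."""
    start, frames = time.monotonic(), []
    while (now := time.monotonic()) - start < horizon:
        pose = read_leader_pose()
        frames.append(Frame(now - start, pose["joints"], pose["gripper"], "fold_shirt"))
        time.sleep(1 / hz)
    return frames

# Stand-in for real hardware: a fake leader arm that sweeps its first joint.
state = {"angle": 0.0}
def fake_leader():
    state["angle"] += 0.05
    return {"joints": [state["angle"], 0.3, -0.2], "gripper": 0.8}

episode = record_episode(fake_leader)
print(json.dumps(asdict(episode[0]), indent=2))
```

A real pipeline would stream frames to disk and pair them with camera images, but the core artifact is the same: timestamped (observation, action) pairs.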

Simulation (Sim-to-Real)

NVIDIA Isaac Sim, Omniverse

Training in high-fidelity virtual environments before deploying to physical hardware. NVIDIA is pushing this hard — but the "reality gap" between sim and real remains a challenge.
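One standard attack on the reality gap is domain randomization: vary the simulator's physical parameters every episode so a policy cannot overfit one (inevitably wrong) physics estimate. A minimal toy version, with made-up dynamics and parameter ranges chosen purely for illustration:

```python
import random

def simulate_push(force, mass, friction, dt=0.01, steps=100):
    """Toy 1-D physics: push a block with constant force; return distance traveled."""
    v, x = 0.0, 0.0
    for _ in range(steps):
        a = (force - friction * v) / mass   # drag-limited acceleration
        v = max(v + a * dt, 0.0)
        x += v * dt
    return x

def randomized_rollouts(n=1000, seed=0):
    """Resample physical parameters each episode so training data covers
    the uncertainty band around the real robot's (unknown) physics."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        mass = rng.uniform(0.5, 2.0)       # kg: uncertainty about the real block
        friction = rng.uniform(0.1, 1.0)   # viscous drag coefficient
        force = rng.uniform(1.0, 5.0)      # N: the policy's action
        data.append(((force, mass, friction), simulate_push(force, mass, friction)))
    return data

data = randomized_rollouts()
print(f"{len(data)} rollouts, example distance: {data[0][1]:.3f} m")
```

A policy trained across this spread of physics tends to transfer better to hardware than one trained on a single calibrated simulator, which is the bet NVIDIA's tooling is built around.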

Reinforcement Learning

Trial and error with reward signals

Robots learn by repeatedly attempting tasks and receiving feedback. NVIDIA has shown simulated robots learning to walk this way — but in practical production, this is still rare. Most real deployments rely on imitation learning from human demos.
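In miniature, that trial-and-error loop is tabular Q-learning. The toy below (a gripper on a 5-cell line, invented purely for illustration) explores with a uniform-random behavior policy (valid because Q-learning is off-policy) and recovers the optimal "move right" policy from reward signals alone:

```python
import random

# Toy task: a gripper on a 5-cell line starts at cell 0; the goal is cell 4.
# Actions: 0 = move left, 1 = move right. Reward is +1 on reaching the goal.
N_STATES, GOAL, ACTIONS = 5, 4, (0, 1)

def step(s, a):
    s2 = max(0, min(N_STATES - 1, s + (1 if a == 1 else -1)))
    return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

def q_learn(episodes=500, alpha=0.5, gamma=0.9, seed=0):
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s = 0
        for _ in range(200):              # cap episode length
            a = rng.choice(ACTIONS)       # uniform-random exploration (off-policy)
            s2, r, done = step(s, a)
            # Temporal-difference update toward reward + discounted future value
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
            if done:
                break
    return Q

Q = q_learn()
policy = [max(ACTIONS, key=lambda a: Q[s][a]) for s in range(N_STATES)]
print("greedy policy (1 = move right):", policy)
```

Real manipulation has continuous states and actions, so production systems swap the table for a neural network, but the sample-inefficiency visible even here is why imitation learning dominates deployments today.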

Video Pre-training (Emerging)

Rhoda AI — "Direct Video Action"

New approach: pre-train on hundreds of millions of internet videos to understand motion and physics, then fine-tune on as little as 10 hours of robot data. Rhoda raised $450M Series A for this approach.

Here's teleoperation in action — a human operator driving a robot through laundry folding via LeRobot.

Who's building the data infrastructure?

📊

Sensei (YC)

Scale AI for robotics. Outsourced data collection at 1/10th the cost of teleoperation.

🏠

Asimov (YC)

Marketplace for human movement data. People record daily tasks to train humanoids. Note: different from Asimov the humanoid robot company.

🎓

Tutor Intelligence

$42M raised. Remote human "tutors" control robots live when they hit unfamiliar situations. Fleet learns from every intervention.

🚗

Mind Robotics

Rivian spinout. $615M raised. Uses factory floor data from EV production to train industrial robots. Valued at ~$2B.

Waymo Parallel

Just as Waymo pays drivers to operate its cars on UK routes to train self-driving models, robotics companies are hiring people to sit in teleoperation harnesses and teach robots domestic tasks. The best training data requires real human demonstration — there's no shortcut.


04 — Proof in Production

BMW: The clearest signal yet

BMW has completed the world's first sustained humanoid robot deployment in automotive manufacturing — and the results provide a reality check on both the potential and the limitations.

30K+
BMW X3s Produced
90K+
Parts Handled
1,250
Operating Hours
10hr
Daily Shifts (M-F)

Figure 02 worked in BMW's Spartanburg body shop for 10 months, placing sheet metal parts with 5mm tolerance in 2 seconds. The task was deliberately chosen: repetitive, ergonomically difficult, in an already highly automated area.

Key Learnings

Lab-to-production was faster than expected. Motion sequences trained in the lab transferred to shift work reliably. But hardware reliability — especially the forearm/wrist assembly — was the #1 failure point. This directly informed the Figure 03 redesign.

Now expanding to Europe. BMW Plant Leipzig began testing Hexagon's AEON humanoid in Dec 2025. Full pilot launches summer 2026 for high-voltage battery assembly and component manufacturing.

Reality Check

Nobody should expect humanoids to take over human factory tasks anytime soon. The conditions must be right: dangerous work, highly specialized processes, well-understood and repeatable tasks, and already-automated environments. We're not at general-purpose human replacement, and that gap is where the opportunity lies.


05 — Beyond Humanoids

Purpose-built robots: the quiet winner

While humanoids grab headlines, specialized robots designed for specific jobs at scale may be the more investable near-term category. The economics are clearer, the technical bar is lower, and the customer pull is real.

☀️

Maximo (AES Corp)

Solar panel installation robot fleet. Completed a 100MW install using NVIDIA Isaac Sim. Solar farms have 10,000s of panels — perfect scale for automation.

🌾

Aigen

Autonomous weeding rovers using regenerative farming practices. Caterpillar-belt mobility for rough terrain.

📦

Gather AI

$40M Series B. Autonomous drones inside warehouses for inventory tracking. Case-level accuracy.

🏭

Workr (Fireclay Tile)

Automating repetitive saw work at a ceramics manufacturer. Demoed at GTC 2026.

🚜

Upside Robotics

$7.5M seed. Autonomous fertilizer robots for Canadian agriculture.

🔍

Boost Robotics

Mobile manipulation robots for data center inspection & maintenance. Addressing staffing shortages in AI infrastructure.

NVIDIA Robotics shared a video of a specialised solar panel installation robot in action — exactly the kind of purpose-built automation that's already delivering ROI.

Investment Thesis

Purpose-built robots targeting dangerous, dull, or dirty jobs in industries with clear financial justification (farming, solar, warehousing, mining) don't need to solve general-purpose manipulation. They need to do one thing really well, at scale. This is where ROI is proven fastest.


06 — The Platform Play

NVIDIA: owning the robotics stack

NVIDIA is aggressively positioning itself as the infrastructure layer for physical AI — just as it did for cloud AI with GPUs. This week is National Robotics Week, and NVIDIA is using it to showcase a growing ecosystem of robotics startups built on its stack.

GTC 2026 — San Jose (March 2026)

30,000+ attendees. 110+ robots on the floor. Jensen Huang's keynote highlighted partnerships with ABB, FANUC, Agility, Figure AI, Boston Dynamics. The vibe shift: robots went from "science projects" to commercial product demos.

Key NVIDIA platforms: Isaac Sim (simulation), Omniverse (digital twins), Nemotron (foundation models), Isaac Perceptor (perception). The full training-to-deployment pipeline.

NVIDIA's robotics flywheel

Simulation
Isaac Sim
Training
Nemotron
Perception
Isaac Perceptor
Hardware
Jetson → Blackwell
Digital Twin
Omniverse

Just like AWS for cloud — if NVIDIA owns the simulation, training, and deployment toolchain, every robotics company built on its platform becomes a distribution channel for NVIDIA hardware. This is one to watch closely.


07 — The Enabler

Local inference changes everything

One of the biggest constraints in AI-driven robotics has been latency, cost, and dependency on cloud-based frontier models. The release of Google's Gemma 4 — and the broader trend of powerful small models — is about to blow this wide open.

4
Model Sizes (E2B to 31B)
256K
Max Context Window
140+
Languages Supported
400M+
Gemma Downloads

Gemma 4 is Apache 2.0 licensed, runs on everything from phones to Mac Studios, and includes vision + audio understanding. The E2B model fits in 5GB RAM. The 26B MoE activates only 4B parameters during inference — running nearly as fast as a tiny model with the intelligence of a much larger one.

Why This Matters for Robotics

Cost drops to hardware only. No per-token fees, no API latency. Engineers can iterate thousands of times locally without cloud costs.

Edge deployment unlocked. Gemma 4 runs on NVIDIA Jetson, Raspberry Pi, and even mobile phones. A robot can make real-time decisions without phoning home.

Multimodal natively. Vision, audio, and text in one model means a robot can see its environment, hear instructions, and reason about actions — all on-device.
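A back-of-envelope comparison makes the cost point concrete. All numbers below (token price, decision rate, hardware cost, power draw) are illustrative assumptions, not quotes:

```python
def cloud_cost(decisions_per_hour, tokens_per_decision, usd_per_mtok, hours):
    """Ongoing API spend: every decision pays per token."""
    tokens = decisions_per_hour * tokens_per_decision * hours
    return tokens / 1e6 * usd_per_mtok

def local_cost(hardware_usd, watts, usd_per_kwh, hours):
    """One-off hardware plus electricity; no per-token fees."""
    return hardware_usd + watts / 1000 * usd_per_kwh * hours

# Hypothetical numbers for one robot over a year of single-shift operation.
hours = 8 * 250                        # ~2,000 operating hours/year
cloud = cloud_cost(decisions_per_hour=3600, tokens_per_decision=500,
                   usd_per_mtok=1.0, hours=hours)
local = local_cost(hardware_usd=2000, watts=60, usd_per_kwh=0.15, hours=hours)
print(f"cloud: ${cloud:,.0f}/yr  local: ${local:,.0f} first year")
```

Under these assumptions the cloud bill recurs every year while the edge device is mostly a one-off purchase, and the local path also removes network round-trip latency from the control loop.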

People are already demonstrating this in the wild: real-time camera scene description running on a mobile device, object identification and scene understanding running entirely in the browser via WebGPU, and on-device audio transcription. These are exactly the capabilities a robot needs — vision, scene understanding, and audio — all running locally.

Practical Example

Hugging Face's LeRobot was trained in the CARLA driving simulator using Gemma 4; after training, the model had learned to change lanes to avoid pedestrians. The same approach works for any robotics task where the model needs to see and act.


08 — The Grassroots

Hobbyist builders are accelerating the field

A growing open-source community is building, sharing, and iterating on robot kits at a pace that mirrors the early days of 3D printing and Arduino. This is where the next wave of robotics founders will come from.

🦾

Seeed SO-ARM101

~$220-$300. Leader/follower arm kit for imitation learning. Works with LeRobot + Hugging Face. 3D printable parts. The most popular entry point.

🤖

Asimov "Here Be Dragons"

$15,000–$20,000. Open-source humanoid. 25+2 DoF, 35kg. DIY unassembled kit. $499 deposit waitlist. Built by Menlo Research (Singapore).

🏗️

Tom Dorr Dual-Arm Mobile

$650. Dual-arm mobile robot with 4-hour assembly time. Growing community of builders and contributors.

🐕

Unitree (Robot Dogs → Humanoids)

Series C at $1.4B+ valuation. Robot dogs from $16K. Humanoid G1 becoming the dev platform of choice — until open-source catches up.

3D printing is the key enabler — it allows rapid iteration on robot parts without expensive CNC machining. Combined with open-source software (LeRobot, ROS) and cheap compute (Jetson, Mac Mini), the barrier to entry has never been lower.


09 — Follow the Money

Major funding rounds (2025-2026)

Figure AI
$1B
$39B valuation
Apptronik
$935M
$5.3B valuation
Mind Robotics
$615M
~$2B valuation
Rhoda AI
$450M
Series A
Physical Intel.
$400M

Also notable: UBTech secured a $1B credit line (debt, not equity). Unitree closed a Series C at $1.4B+ valuation (amount undisclosed).

Investor Signals

Strategic OEMs writing checks: Mercedes-Benz, John Deere, NVIDIA, Uber, Volvo all participated in robotics rounds. This isn't just VC money — the companies that will buy these robots are investing in them. ABB's robotics division sold to SoftBank for $5.37B — a clear signal that robotics platforms are attracting control premiums tied to AI and autonomy strategies.

Specialist robotics VCs to watch: Zetta Ventures, Two Sigma Ventures, Em Collective, and Eclipse Ventures (which just announced a new $1.3B fund).


10 — Where We Go From Here

Opportunity map for Karatage

Watch List — Near Term

1. Training data infrastructure — The "Scale AI for robotics" play. Companies collecting and curating real-world demonstration data at scale. This is the moat.

2. Purpose-built robots in energy + mining — Solar installation, inspection, mining automation. Directly relevant to our existing portfolio.

3. NVIDIA's robotics ecosystem — Monitor GTC and their startup partnerships as a leading indicator of where the industry is heading.

4. Local inference + edge AI — The commoditization of powerful on-device models (Gemma 4, etc.) removes a massive cost barrier for robotics startups.

Key Risks

Hardware reliability — BMW's pilot showed the forearm/wrist as the #1 failure point. Mechanical durability in real-world conditions is underestimated.

Speed gap — 10x slower than humans for most manipulation tasks. Until this closes, ROI for general pick-and-pack is limited.

Hype cycle — Valuations are stretched ($39B for Figure AI with minimal revenue). The capital is flowing but product-market fit is still being proven.

What We're Doing

Hands-on learning: Building with LeRobot SO-101 kit + Bambu Lab 3D printer. Engaging the open-source community via X/Twitter. Getting plugged into founder networks.

Next steps: Deep-dive into training data companies. Map the NVIDIA robotics ecosystem startups. Identify crossover opportunities with our existing energy/mining operations.

