Physical AI Unit · Spritle Software

Robots that can see and understand.

Plug in real-time Vision AI and your robot arm grasps correctly on the first try: no pre-sorting, no fixed jigs, no cloud dependency, no months-long integration.

97.5%+ accuracy  ·  30ms latency  ·  30fps live

01 · Object Detection
Every object.
Identified.
02 · Depth Intelligence
Every distance.
Measured.
03 · Edge Inference
All onboard.
Zero cloud.
30ms
Latency
97.5%+
Accuracy
30fps
Live feed
Object Intelligence

No more failed grasps
on the first pick.

Most arms fail when parts arrive out-of-position or mixed with others. Ours runs real-time object intelligence across cluttered, unstructured environments, classifying, locating, and orienting every object before the arm moves. The sketch after the list below shows what that looks like in code.

  • Real-time bounding box detection at 30fps
  • Multi-class object classification
  • 6DoF pose estimation & tracking
  • Works with partial occlusion
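
For a concrete feel, here is a minimal Python sketch of a detection loop. The objectmind_sdk module, ArmClient class, and method names are illustrative assumptions (loosely mirroring the terminal session further down the page), not the published API.

# Illustrative sketch only: objectmind_sdk, ArmClient, and these
# method names are assumptions, not the shipped API.
from objectmind_sdk import ArmClient  # hypothetical client class

client = ArmClient(arm="robot_01", stream="live")

# One inference pass over the current camera frame
for det in client.detect():
    # Each detection carries a class label, a confidence score,
    # a bounding box, and a 6DoF pose, even under partial occlusion.
    print(det.label, round(det.confidence, 2), det.pose)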
Precision Depth

Millimetre-perfect
placement. Zero trial runs.

Sub-millimetre stereo depth means your arm computes the exact grasp on the first attempt: no calibration jigs, no manual positioning, no wasted cycle time. A short code sketch follows the list below.

  • Sub-millimetre spatial resolution
  • Stereo + structured light fusion
  • Occlusion-aware reconstruction
  • Continuous live depth maps
Depth range · 0.2m – 2.0m
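
As a rough illustration, sampling the live depth map from Python could look like this; the depth_map() call and array layout are assumptions, not documented behaviour.

# Hypothetical depth access; method name and array layout are assumed.
from objectmind_sdk import ArmClient  # hypothetical client class

client = ArmClient(arm="robot_01", stream="live")
depth = client.depth_map()   # assumed: H x W float32 array, metres
u, v = 640, 360              # pixel of interest, e.g. a detection centre
print(f"Surface at ({u}, {v}) is {depth[v, u]:.3f} m away")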
Object Detection
Depth Estimation
6DoF Pose Tracking
Edge Inference
ROS2 Integration
Manufacturing Vision AI
Real-time Segmentation
Grasp Planning
Multi-Camera Fusion
Anomaly Detection
NVIDIA Jetson / Hailo
Object Intelligence in Action

Demos that speak
louder than slides.

Six deployments across manufacturing, logistics, and assembly, all captured on production hardware, not a demo rig.

Demo 01
Grasp Intelligence

Bin-picking with zero pre-sorting

A Flexiv arm identifies, pose-estimates, and picks randomly oriented gear components from a mixed bin at production speed.

Demo 02
Quality Control

Real-time defect detection

Sub-millimetre anomaly detection on PCBs and machined parts. Flags defects before they reach assembly.

Demo 03
Assembly Assist

Guided assembly with pose feedback

6DoF pose loop guides the arm through multi-step assemblies with sub-mm placement accuracy.

Demo 04
Multi-object Tracking

Simultaneous tracking of moving parts

Real-time tracking of multiple parts on a moving conveyor, with ID, class, and 3D position maintained continuously.

Demo 05
Human-Robot Safety

Shared workspace collision avoidance

Detects human presence and intent in real time. Robot speed and trajectory adapt automatically within 8ms.

Demo 06
Warehouse Logistics

Depalletising unstructured stacks

Vision AI identifies package types, orientations, and stacking order in real time, with no fixed pallet format required.

What We Deliver

Everything your robot needs
to see, decide, and act.

Six capabilities. One unit. Camera feed to grasp command, entirely onboard, no cloud round-trip.

01

Object Detection

Real-time multi-class detection at 30fps. Identifies and classifies every object in the workspace with bounding boxes and confidence scores.

02

Depth Perception

Sub-millimetre stereo depth maps in real time. Enables grasps and placements that work in production — not just in the lab.

03

6DoF Pose Estimation

Full 6-degree-of-freedom tracking of objects and end-effectors. Tells the robot not just where an object is — but exactly how it's oriented.

04

Edge Inference

All models run onboard. No cloud, no latency penalty, no data egress. Optimised for NVIDIA Jetson and Hailo. Works fully air-gapped.

05

ROS2 Integration

Native ROS2 nodes out of the box. Publishes object poses, depth clouds, and detection streams directly to your robot's topic graph.
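
As a sketch of what subscribing to that stream might look like: the /objectmind/detections topic name is an assumption, while Detection3DArray is the standard type from the ROS2 vision_msgs package (Humble-era layout).

# Minimal ROS2 subscriber sketch. The topic name is an assumption;
# Detection3DArray is the standard vision_msgs type.
import rclpy
from rclpy.node import Node
from vision_msgs.msg import Detection3DArray

class DetectionListener(Node):
    def __init__(self):
        super().__init__('detection_listener')
        self.create_subscription(
            Detection3DArray, '/objectmind/detections',
            self.on_detections, 10)

    def on_detections(self, msg):
        for det in msg.detections:
            # Highest-confidence class hypothesis for each object
            best = max(det.results, key=lambda r: r.hypothesis.score)
            self.get_logger().info(
                f'{best.hypothesis.class_id}: {best.hypothesis.score:.2f}')

def main():
    rclpy.init()
    rclpy.spin(DetectionListener())

if __name__ == '__main__':
    main()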

06

Last-Mile Integration

Mounts on Flexiv, UR, Fanuc, KUKA and more. Full SDK. From unboxing to running inference on your arm: measured in hours, not months.
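
To make "hours, not months" concrete, an end-to-end pick in the Python SDK might look like the sketch below. The names mirror the terminal session in the next section (connect / detect / pick) but are illustrative, not the published API.

# Illustrative only: names mirror the CLI session shown later
# (connect / detect / pick), not the published API.
from objectmind_sdk import ArmClient  # hypothetical client class

client = ArmClient(arm="robot_01", stream="live")  # connect to the controller
detections = client.detect()                       # inference on current frame

gear = next((d for d in detections if d.label == "gear"), None)
if gear and gear.confidence >= 0.97:
    grasp = client.plan_grasp(gear)   # grasp vector: x, y, z, approach angle
    client.execute_pick(grasp)        # arm runs the pick sequence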

How It Works

From camera to command
in four steps.

Most robots are blind. They follow fixed coordinates regardless of what's in front of them. We close the loop: camera feed in, pick command out, at 30ms latency.

01

Perceive

Stereo cameras and structured light build a continuous 3D scene model — capturing geometry, depth, texture, and reflectance in real time. Not just a flat colour image. Full spatial understanding.

Vision Layer
02

Understand

Object Intelligence classifies every item — known or unknown — without prior training. The system analyses an object's shape, orientation, and grasp affordances within milliseconds, adapting on the fly.

Intelligence Layer
03

Decide

Edge inference computes grasp vectors, pick coordinates, and approach angles — all onboard, no cloud round-trip. Structured outputs ready for your robot's control system in under 5ms.

Decision Layer
04
objectmind_sdk — robot_arm_01
> connect --arm robot_01 --stream live
Connecting to arm controller...
✓ Connected. Stream active @ 60Hz
> detect --frame current
Running inference...
✓ 3 objects detected [gear, bolt, casing]
> pick --object "gear" --confidence 0.97
Computing grasp vector...
✓ Grasp: [x:214.3, y:88.1, z:42.7] θ:23°
✓ Arm executing pick sequence...
>
STEP 04 / ACTION INTELLIGENCE
Decisions,
Executed.

Structured outputs feed directly into robot control systems via our low-latency API. Pick coordinates, grasp vectors, and object metadata: streamed at 60Hz, no middleware, no mapping layer.
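
Consuming that stream could be as simple as iterating frames; the sketch below is illustrative, and the iterator and field names are assumptions rather than the documented interface.

# Hypothetical streaming interface; field names are illustrative.
from objectmind_sdk import ArmClient  # hypothetical client class

client = ArmClient(arm="robot_01", stream="live")
for frame in client.stream():    # assumed: structured outputs at 60Hz
    for obj in frame.objects:
        # Pick coordinates, grasp vector, and metadata per object,
        # ready to hand straight to the controller.
        print(obj.label, obj.pick_xyz, obj.grasp_theta)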

Meet the Team

The Humans
Behind the Robots.

Balaji
Founder
Knows what to build and moves fast to make it happen.
Product Direction · Decision Making · Execution
Surendran
Co-founder & CTO
Builds systems that work smoothly even as things grow.
Backend Systems · APIs · Cloud Setup
Prabakaran
VP of Engineering
Keeps the team moving and makes sure things get shipped right.
Team Management · Delivery · Processes
Mohankumar
Principal Software Engineer
Handles tough technical problems and keeps systems running well.
Architecture · Debugging · Performance
Visnupriya
Tech Lead
Keeps the code clean and helps the team stay on track.
Code Reviews · System Design · Mentoring
Aadhish Kumar S
Intern
Picks things up quickly and contributes where it matters.
Scripting · Testing · Fixing Issues
Pooja
Growth Marketing Associate
Figures out what brings users in and doubles down on it.
GTM · Acquisition · Partnerships
Common Questions

Frequently Asked Questions

Everything you need to know about Yours, Physically — Spritle's Vision AI for industrial robots.

What makes Yours, Physically different?

Yours, Physically by Spritle is a Vision AI system designed for real-time robotic perception in industrial environments. Unlike cloud-based solutions, it runs fully onboard, eliminating network latency and keeping data within your facility. It combines object detection, 6DoF pose estimation, and depth sensing into a single integrated system, enabling robots to operate in unstructured environments with minimal setup.

How can robots pick unsorted parts without trial-and-error?

Yours, Physically functions as a bin picking vision system, enabling robots to understand mixed, unstructured scenes. It performs robot arm object detection and estimates object position and orientation in real time, allowing reliable grasping without pre-sorted inputs or fixed jigs. This improves efficiency and reduces manual intervention in industrial workflows.

Why does 6DoF pose estimation matter?

6DoF pose estimation provides both the position and orientation of an object in 3D space. This allows robotic systems to determine the correct approach angle and grasp position, improving accuracy and reducing failed picks in real-world environments. It is a core component of any effective vision AI for robot arms.

How is Yours, Physically deployed?

Yours, Physically supports ROS2 vision AI integration, publishing detection, pose, and depth data directly into existing robotic workflows. For non-ROS environments, a Python SDK (via gRPC) provides structured outputs for seamless integration across different systems.
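
For the non-ROS path, a first call through that gRPC SDK might look like the sketch below, assuming stubs generated by protoc from a hypothetical objectmind.proto; the service, RPC, and field names are illustrative.

# Sketch assuming protoc-generated stubs from a hypothetical
# objectmind.proto; service, RPC, and field names are illustrative.
import grpc
import objectmind_pb2
import objectmind_pb2_grpc

channel = grpc.insecure_channel("vision-unit.local:50051")  # assumed endpoint
stub = objectmind_pb2_grpc.PerceptionStub(channel)

# Assumed server-streaming RPC: one structured message per frame
request = objectmind_pb2.StreamRequest(arm_id="robot_01")
for frame in stub.StreamDetections(request):
    for det in frame.detections:
        print(det.class_id, det.confidence,
              (det.pose.x, det.pose.y, det.pose.z))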

Why run vision AI on-device instead of the cloud?

Running onboard inference for robot arms eliminates dependency on internet connectivity and reduces latency. It also ensures that sensitive production data remains within the facility. This makes it more reliable for industrial environments requiring consistent, real-time performance.

Does it work with different robot arms?

Yes. Yours, Physically is built as a vision AI for robot arms and supports major platforms including Flexiv, Universal Robots (UR), Fanuc, and KUKA. It provides structured outputs that integrate with standard robot control systems.

What kind of performance can be expected?

The system delivers real-time object detection for robotics with above-97.5% accuracy and 30ms inference latency. Performance may vary depending on hardware configuration and deployment conditions, but it is optimised for industrial-grade, real-time applications.

What production problems does it solve?

Yours, Physically enables applications such as bin picking, sorting, assembly, and conveyor tracking using industrial robot vision AI. It reduces reliance on manual handling, pre-sorting, and fixed setups, enabling more flexible and scalable automation.

See your part get picked,
live.

We run the demo on a real robot arm. Bring a part number or just curiosity.
No deck. No SDRs. You talk directly with the engineers.

Typically responds within one business day

No spam, ever. We typically respond within 8 hours on business days.