RoboSense AC1 Review: Hardware-Fused LiDAR + Camera + IMU for Robotics Perception

The RoboSense AC1 is the first product in RoboSense's Active Camera series — and it's a genuine departure from how robotics perception has been approached for the past decade. Instead of stacking a LiDAR, an RGB camera, and an IMU separately, then spending days on calibration and timestamp synchronization, the AC1 bakes all three into a single all-solid-state module with hardware-level spatiotemporal fusion. The payoff: synchronized depth, image, and motion posture data from one compact unit, ready to drop into a ROS2 pipeline or a custom inference stack.

RoboSense AC1 LiDAR Depth Camera - front and side view
RoboSense AC1 — all-solid-state hardware fusion of LiDAR, RGB camera, and IMU in a 135 x 80 x 40 mm, 230 g form factor.

What Is the AC1?

RoboSense classifies the AC1 as an "Active Camera" — a term that initially sounds like marketing, but makes sense once you look inside. It's not a depth camera with an RGB sensor bolted on. The AC1 uses a VCSEL + SPAD + CMOS architecture: a Vertical Cavity Surface Emitting Laser array for pulsed light emission, Single-Photon Avalanche Diodes for time-of-flight detection, and a CMOS image sensor for full-HD color capture. A TDK IIM-42652 6-DoF IMU handles inertial motion data. All three are fused at the hardware level — no software timestamp juggling, no calibration drift from sensor desynchronization.

That fusion happens before data ever reaches your computing platform. Most perception stacks that combine a LiDAR point cloud with camera frames do it in software, which introduces latency, calibration drift, and potential desyncs during fast maneuvers. The AC1 sidesteps all of that by design. It's the right architectural choice, and it's the kind of decision that saves weeks of debugging in a real robot integration.

Core Technical Specifications

Specs verified directly from the RoboSense AC1 official product page and datasheet. The depth FOV and maximum range figures are substantially better than typical depth cameras in this weight class:

| Parameter | Specification |
| --- | --- |
| Sensor Technology | VCSEL + SPAD + CMOS (All-Solid-State) |
| Maximum Detection Range | Up to 70 m |
| Low-Reflectivity Detection | 20 m @ 10% reflectivity, 100 kLux ambient |
| Ranging Accuracy | 3 cm @ 1σ (distance-independent) |
| Depth Blind Zone | ≤0.2 m |
| Depth FOV (H x V) | 120 deg x 60 deg |
| Depth Angular Resolution | ~0.625 deg x 0.625 deg |
| Point Cloud Density | ~45,000 points/frame |
| Depth Frame Rate | 10 Hz |
| RGB Resolution | 1920 x 1080 @ 30 Hz (Rolling Shutter) |
| RGB FOV (H x V) | 144 deg x 78 deg |
| IMU | TDK IIM-42652 (6-DoF, 200 Hz) |
| Sunlight Resistance | 100 kLux (full performance maintained) |
| Data Interface | USB 3.0 (USB-C connector) / GMSL2 |
| Power Supply | 12 V DC |
| Typical Power Consumption | 11 W |
| Dimensions (L x W x H) | 135 x 80 x 40 mm |
| Weight | ~230 g |
| Operating Temperature | -20 C to +60 C |
| OS / Middleware | ROS, ROS2, Linux (x86_64 / ARM), Windows |
RoboSense AC1 depth point cloud and RGB fusion output
AC1 depth point cloud output - 120 x 60 deg FOV with 3 cm accuracy maintained across the full 70 m detection range.

Hardware Architecture: Why VCSEL + SPAD Matters

Most consumer depth cameras rely on active IR stereo (Intel RealSense), structured light (Orbbec), or passive stereo (OAK-D). All of these approaches struggle outdoors and against highly reflective or absorptive surfaces. The AC1 uses direct Time-of-Flight (dToF) with VCSEL emitters and SPAD detectors - the same fundamental physics behind automotive-grade LiDARs, packaged into a camera-like form factor at ~230 g.

VCSEL arrays pulse at high rates with tight spectral bandwidth, making them straightforward to filter from broadband ambient sunlight. SPAD detectors are single-photon sensitive and extremely fast - they resolve individual photons and timestamp them with sub-nanosecond precision. That combination is exactly why the AC1 maintains 3 cm accuracy at 70 m under 100 kLux direct sunlight, conditions where structured light cameras simply give up. It also explains the 20 m detection range for black objects at 10% reflectivity - not something traditional ToF cameras can claim.
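The dToF principle behind those numbers is simple enough to sketch in a few lines. This toy calculation (not the AC1's actual on-sensor processing) converts a SPAD round-trip timestamp into range; note that 3 cm ranging accuracy implies resolving the return to within roughly 0.2 ns, which is why sub-nanosecond SPAD timing matters:

```python
# Direct time-of-flight: a SPAD timestamps the return of a VCSEL pulse,
# and range follows from the round-trip time. Illustrative sketch only --
# the numbers below are hypothetical, not measured AC1 data.

C = 299_792_458.0  # speed of light, m/s

def tof_to_range_m(round_trip_s: float) -> float:
    """Convert a round-trip photon time to one-way range in meters."""
    return C * round_trip_s / 2.0

# A target at the 70 m maximum range returns the pulse after ~467 ns:
rt = 2 * 70.0 / C
print(f"{rt * 1e9:.0f} ns round trip -> {tof_to_range_m(rt):.1f} m")

# 3 cm one-way error corresponds to a 0.2 ns round-trip timing error:
print(f"{2 * 0.03 / C * 1e9:.1f} ns timing budget for 3 cm accuracy")
```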

The all-solid-state design means no moving parts. No rotating mirror assemblies, no mechanical failure modes. For mobile robots that experience vibration, shock, and uneven terrain regularly, that's a real reliability advantage over spinning LiDARs.

Use Cases

SLAM and Autonomous Navigation

The AC1's 120x60 deg depth FOV paired with hardware-synchronized IMU data makes it well-suited for LiDAR-inertial SLAM. The IMU's 6-DoF output at 200 Hz feeds cleanly into LIO-SAM-style odometry pipelines. For mobile robots navigating both indoor corridors and outdoor spaces, the 100 kLux sunlight immunity removes the outdoor performance cliff that kills structured light SLAM systems the moment they exit a warehouse door. RoboSense ships open-source SLAM algorithms directly in the AC1 SDK.
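Motion de-skew is where a hardware-synchronized IMU earns its keep in a LIO pipeline: points captured across a 100 ms depth frame are rotated back to a common reference time before scan matching. A minimal numpy sketch of the idea, assuming constant angular velocity within the frame - the function and all values are illustrative, not part of the RoboSense SDK:

```python
import numpy as np

def deskew_points(points, timestamps, omega, t_ref):
    """Rotate each point back to reference time t_ref, assuming a constant
    body angular velocity omega (rad/s) over the frame.
    points: (N, 3) array, timestamps: (N,) seconds, omega: (3,) rad/s."""
    corrected = np.empty_like(points)
    for i, (p, t) in enumerate(zip(points, timestamps)):
        theta = omega * (t_ref - t)          # rotation to undo, axis-angle
        angle = np.linalg.norm(theta)
        if angle < 1e-12:
            corrected[i] = p
            continue
        axis = theta / angle
        # Rodrigues' rotation formula
        corrected[i] = (p * np.cos(angle)
                        + np.cross(axis, p) * np.sin(angle)
                        + axis * np.dot(axis, p) * (1 - np.cos(angle)))
    return corrected

# Robot yawing at 1 rad/s; a point measured 50 ms before the frame reference
pts = np.array([[10.0, 0.0, 0.0]])
out = deskew_points(pts, np.array([0.05]), np.array([0.0, 0.0, 1.0]), 0.1)
print(out.round(3))  # point rotated ~2.86 deg about z
```

Real pipelines integrate the full 200 Hz gyro history rather than assuming one constant rate, but the correction step has this shape.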

AMR and Warehouse Robotics

Warehouse environments are harsh for depth cameras: dark packaging, shiny pallet wrap, retroreflective safety strips, narrow aisles. The AC1 actively suppresses crosstalk and overexposure from high-reflectivity materials like stainless steel and warning tape. The 120 deg horizontal FOV means a single AC1 unit can cover navigation-critical forward zones that would otherwise require two or three sensors - a direct BOM reduction.

Humanoid Robots

Humanoid platforms have tight payload and power budgets. The AC1 checks both: ~230 g and 11 W typical. Getting RGB + depth + IMU from a single sensor head reduces wiring complexity significantly - something that matters a lot when you're routing cables around servo clusters and articulated joints. Hand-eye coordination tasks benefit directly from hardware-synchronized color and depth data, where pixel-accurate depth overlay is required for reliable grasping.

Industrial Robotics and 3D Inspection

For pick-and-place and bin-picking, the 3 cm accuracy and the 0.2 m blind zone work together effectively. Arms operating at close range need reliable depth data starting from near-zero distances - the 0.2 m blind zone is well below typical arm-to-workpiece operating clearances. The open-source SDK includes semantic segmentation and 3D Gaussian splatting for scene reconstruction workflows, which is increasingly relevant for digital twin and quality inspection pipelines.

Drones and UAVs

Weight and power matter most on UAVs. At 230 g and 11 W, the AC1 is usable on mid-size platforms. The 70 m detection range opens up obstacle avoidance at altitudes and speeds where 10 m-range depth cameras are essentially useless. IMU integration also simplifies flight controller coupling since motion pose data arrives pre-fused with depth frame timestamps.
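A back-of-envelope check shows why the range figure translates into flight speed. Assuming a hypothetical 6 m/s² braking deceleration and 0.3 s of perception-plus-reaction latency (both illustrative values, not AC1 specs), the maximum closing speed a sensor range supports comes from solving a simple stopping-distance equation:

```python
import math

# Solve v*latency + v^2 / (2*decel) = range for v (positive root).
# Deceleration and latency below are assumed example values.
def max_closing_speed(range_m: float, decel: float, latency_s: float) -> float:
    a = 1.0 / (2.0 * decel)
    b = latency_s
    c = -range_m
    return (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

for rng in (10.0, 70.0):
    v = max_closing_speed(rng, decel=6.0, latency_s=0.3)
    print(f"{rng:>4.0f} m range -> stop from up to {v:.1f} m/s")
```

Under these assumptions a 10 m sensor caps safe closing speed near 9 m/s, while 70 m supports roughly 27 m/s - the difference between hover-speed inspection and useful forward flight.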

RoboSense AC1 application scenarios - humanoid robot, AMR, drone, industrial
AC1 application range: humanoid robots, AMRs, UAVs, and industrial inspection - one sensor platform covering all of them.

Comparison Context

The AC1 occupies a category that barely existed two years ago: hardware-fused LiDAR + camera + IMU at sub-kg form factors with serious outdoor range. Here's how it stacks up against common alternatives engineers reach for:

| Sensor | Technology | Max Range | Depth FOV (HxV) | Outdoor Capable | HW-Fused IMU |
| --- | --- | --- | --- | --- | --- |
| RoboSense AC1 | dToF (VCSEL+SPAD) | 70 m | 120 x 60 deg | Yes (100 kLux) | Yes (Hardware) |
| Intel RealSense D455 | Active Stereo IR | ~6 m | 87 x 58 deg | Limited | IMU only |
| Microsoft Azure Kinect DK | iToF | ~5.5 m | 120 x 35 deg | Indoor only | IMU only |
| Livox MID-360 + Camera | LiDAR + Camera (separate) | 40 m | 360 x 59 deg | Yes | Software sync only |
| Orbbec Gemini 335L | Structured Light | ~10 m | 95 x 73 deg | Indoor only | IMU only |

Honest take: if your application is purely indoor, well-lit, and short-range, a RealSense D455 or Orbbec Gemini costs less and works fine. The AC1 is built for harder problems - outdoor environments, mixed-reflectivity scenes, and platforms where sensor BOM reduction actually matters. It's also the only option in this weight class where LiDAR depth, RGB, and IMU share a hardware synchronization clock. That's the differentiator worth paying attention to.
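As a rough coverage comparison - planar H x V in square degrees, not a true solid-angle calculation - the FOV gap against the table above can be quantified in a few lines. The figures are taken from that table; note the proxy is crude and the gap depends heavily on which competitor you pick:

```python
# Planar FOV "area" proxy in square degrees. Rough comparison only --
# this is not a solid-angle computation.
sensors = {
    "RoboSense AC1":             (120, 60),
    "Intel RealSense D455":      (87, 58),
    "Microsoft Azure Kinect DK": (120, 35),
    "Orbbec Gemini 335L":        (95, 73),
}

base = sensors["RoboSense AC1"][0] * sensors["RoboSense AC1"][1]
for name, (h, v) in sensors.items():
    area = h * v
    print(f"{name:<26} {area:>5} deg^2  (AC1/other = {base / area:.2f}x)")
```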

AI-Ready Ecosystem and Software

RoboSense ships the AC1 with AC Studio, a cross-platform desktop application covering data fusion visualization, SLAM replay, and object detection inference. For embedded and ROS development, the open-source SDK covers:

  • Drivers: Native Linux, ROS, and ROS2 drivers
  • Data acquisition nodes: Ready-to-use ROS2 nodes publishing point cloud, RGB, and IMU on standard topics
  • Calibration tools: Depth-to-RGB extrinsic calibration utilities
  • Fusion algorithms: SLAM, localization, semantic segmentation, point cloud-vision fusion
  • 3D Gaussian splatting: Scene reconstruction for digital twin and inspection workflows
  • Platform support: Pre-built binaries and cross-compilation scripts for NVIDIA Orin N, Rockchip RK3588, and Horizon Sunrise X5
  • AC Viewer: Dedicated cross-platform viewer for depth and RGB data
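As a sketch of what the depth-to-RGB calibration output is used for, here is a minimal point-to-pixel projection with numpy. The intrinsics and extrinsics below are made-up placeholder values (not the AC1's real calibration), and a real ~144 deg lens would additionally need a distortion model:

```python
import numpy as np

# Pinhole intrinsics for a 1920x1080 RGB camera -- illustrative values only.
K = np.array([[900.0,   0.0, 960.0],
              [  0.0, 900.0, 540.0],
              [  0.0,   0.0,   1.0]])

# Hypothetical depth-to-RGB extrinsics: identity rotation, 3 cm baseline.
R = np.eye(3)
t = np.array([0.03, 0.0, 0.0])  # meters

def project_to_rgb(points_lidar):
    """Project (N, 3) LiDAR-frame points into RGB pixel coordinates."""
    cam = points_lidar @ R.T + t     # transform into the camera frame
    uvw = cam @ K.T                  # apply intrinsics
    return uvw[:, :2] / uvw[:, 2:3]  # perspective divide

pts = np.array([[0.0, 0.0, 5.0]])   # point 5 m straight ahead
print(project_to_rgb(pts))          # lands near the principal point
```

This per-pixel depth-to-color association is exactly what hardware synchronization protects: with software sync, any timestamp offset during motion shifts the projected points off their true pixels.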

The 3D Gaussian splatting inclusion is worth flagging specifically. It's increasingly used for high-fidelity 3D scene reconstruction in robotics - particularly for digital twin workflows - and having it as a first-class SDK component signals clearly who RoboSense is targeting. This is a professional robotics tool with a real developer ecosystem, not a hobbyist depth camera with a ROS wrapper tacked on.

Integration Notes for Engineers

  • Power rail: The 12 V input isn't available on every compute board. If you're running off a 5 V rail, you'll need a DC-DC step-up converter. It adds a few grams, and the 11 W draw needs a clean supply.
  • Interface: USB 3.0 via USB-C connector for standard PC and SBC integration. GMSL2 is available for automotive and industrial setups needing longer cable runs without signal loss - useful when the sensor is mounted far from the compute unit on a robot arm.
  • Rolling shutter on RGB: The color camera uses a rolling shutter. On platforms with aggressive angular velocity (fast-turning UAVs, quick-turning AMRs), this can introduce skew on the color channel. The LiDAR depth channel is unaffected.
  • SDK maturity: AC1 launched March 2025. The SDK is actively maintained - check the RoboSense GitHub and official WIKI for the latest driver versions before starting a new project integration.
  • Supported compute platforms: Orin N (Jetson), RK3588, and Horizon Sunrise X5 have optimized builds. General Linux (x86_64, aarch64) and Windows work for development and prototyping.
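To put the rolling-shutter caveat in numbers: assuming a hypothetical 20 ms top-to-bottom readout time (RoboSense does not publish this figure), the horizontal skew between the first and last image rows scales linearly with yaw rate:

```python
# Estimate rolling-shutter skew on the RGB channel during a fast turn.
# READOUT_S is an assumed value -- the AC1's actual readout time is
# not published in the materials reviewed here.
DEG_PER_PX = 144.0 / 1920.0  # 144 deg horizontal FOV over 1920 px
READOUT_S = 0.020            # assumed top-to-bottom readout, seconds

def skew_px(yaw_rate_dps: float) -> float:
    """Horizontal pixel offset between first and last image row."""
    return (yaw_rate_dps * READOUT_S) / DEG_PER_PX

for rate in (30.0, 120.0, 360.0):
    print(f"{rate:>5.0f} deg/s yaw -> ~{skew_px(rate):.0f} px skew")
```

Under these assumptions a gentle 30 deg/s turn skews the image by only a handful of pixels, while aggressive 360 deg/s maneuvers push it toward triple digits - worth knowing before you feed the color channel to a visual odometry front end.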

Frequently Asked Questions

What is the RoboSense AC1's maximum detection range?

The AC1 achieves a maximum detection range of 70 meters under standard conditions - over 6x the range of typical 3D cameras in this form factor. For dark objects with 10% reflectivity under 100 kLux sunlight, effective detection range is up to 20 meters.

How does the AC1 handle direct sunlight?

The VCSEL + SPAD dToF architecture maintains full performance at up to 100 kLux ambient illumination - equivalent to direct outdoor sunlight on a clear day. Ranging accuracy stays at 3 cm at the full range under these conditions. Structured light cameras typically fail above 15-30 kLux.

Is the AC1 compatible with ROS2?

Yes. RoboSense provides native ROS and ROS2 drivers as part of the open-source SDK. Point cloud, RGB image, and IMU data are published on standard ROS2 topics. Pre-built packages for common SLAM frameworks are included.

What computing platforms does the AC1 support?

The AC1 SDK includes optimized builds and cross-compilation support for NVIDIA Orin N (Jetson Orin), Rockchip RK3588, and Horizon Sunrise X5. General Linux (x86_64 and ARM) and Windows are supported for development.

Can the AC1 replace multiple sensors on a robot?

In many configurations, yes. The 120x60 deg depth FOV covers over 70% more area than conventional 3D cameras. The hardware-fused IMU eliminates a separate IMU module. For applications requiring 360 deg coverage, additional sensors will still be needed - but for forward-hemisphere perception tasks, a single AC1 can consolidate what previously required multiple separate units.

What is the depth resolution of the AC1?

Angular resolution is approximately 0.625 deg x 0.625 deg for the depth channel at 10 Hz, yielding approximately 45,000 points per frame across the 120x60 deg FOV.

What is the AC1's power consumption?

Typical power consumption is 11 W from a 12 V DC supply. USB 3.0 handles data. Plan accordingly when integrating with battery-powered mobile platforms - factor in a step-up converter if your power bus is 5 V.
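Sizing that step-up converter is a one-line calculation; the 90% efficiency below is an assumed figure for a typical boost module, not a measured value:

```python
# Input-current budget for a 5 V -> 12 V boost converter feeding the AC1.
P_LOAD = 11.0  # W, typical draw (datasheet)
V_IN = 5.0     # V, supply bus
EFF = 0.90     # assumed converter efficiency

i_in = P_LOAD / (V_IN * EFF)
print(f"Input current from the 5 V bus: {i_in:.2f} A "
      f"(budget {i_in * 1.25:.2f} A with 25% headroom)")
```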

Does the AC1 work with reflective or dark surfaces?

Yes. The dToF architecture actively suppresses crosstalk and overexposure from highly reflective surfaces like stainless steel and retroreflective warning strips. For low-reflectivity objects (10% reflectance), detection is reliable up to 20 m even under 100 kLux sunlight - a performance envelope that structured light and passive stereo cameras can't match.

Where can I get the RoboSense AC1?

The AC1 is available through OpenELAB. Full product details, availability, and ordering information are on the RoboSense AC1 product page at OpenELAB.

Ready to add hardware-fused 3D perception to your next robot?

The RoboSense AC1 combines a long-range dToF LiDAR, a full-HD RGB camera, and a hardware-synchronized 6-DoF IMU in a single 230 g module - with a complete open-source ROS2 SDK ecosystem. View the RoboSense AC1 at OpenELAB →
