RoboSense AC1: Integrating LiDAR, Camera, and IMU with Open-Source Algorithms

A new "Active Camera" category aims to solve the hardware calibration and software fragmentation issues in robot perception.

The robotics industry is moving fast, from humanoid prototypes to mass-produced quadrupedal robots, but the demands on perception systems are outpacing current hardware. Traditional cameras, which rely on passive visible light, struggle with changing ambient lighting. Binocular (stereo) cameras offer ranging but suffer from poor accuracy, light interference, and limited robustness. Similarly, structured-light and indirect time-of-flight (iToF) solutions often fail to meet long-range requirements or are washed out by outdoor sunlight.


Historically, the engineering workaround has been to stack a camera with an external LiDAR (dToF). While this adds depth information, it creates a bulky system that is inefficient to develop and hard to mass-produce, and it typically forces developers to handle tedious extrinsic calibration and time synchronization manually.
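To make that concrete, here is a minimal sketch (in Python, with placeholder values) of the kind of glue code a stacked camera-plus-LiDAR rig typically needs just to overlay LiDAR depth onto an image. The rotation, translation, and intrinsics below are illustrative assumptions that must be re-estimated for every physical rig, and the sketch omits the equally tedious per-frame time synchronization.

```python
import numpy as np

# Extrinsics (LiDAR frame -> camera frame), normally estimated with a
# checkerboard or target-based calibration. Placeholder values only:
# they must be re-measured whenever the mounting shifts even slightly.
R = np.eye(3)                      # 3x3 rotation (assumes aligned axes)
t = np.array([0.05, 0.0, -0.02])   # translation in meters (placeholder)

# Camera intrinsics from a separate camera calibration (placeholder).
K = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project_lidar_to_image(points_lidar: np.ndarray) -> np.ndarray:
    """Project Nx3 LiDAR points to (u, v, depth) pixel coordinates."""
    pts_cam = points_lidar @ R.T + t        # transform into the camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0]    # keep points in front of the lens
    uvw = pts_cam @ K.T                     # apply the pinhole model
    uv = uvw[:, :2] / uvw[:, 2:3]           # perspective divide
    return np.hstack([uv, pts_cam[:, 2:3]])

# Example: a point 10 m ahead along the optical axis, 0.5 m to the right.
print(project_lidar_to_image(np.array([[0.5, 0.0, 10.0]])))
```

All of this bookkeeping has to be maintained per robot and per rig revision, which is exactly the burden an integrated sensor is meant to remove.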


Engineers have been waiting for an integrated perception solution, complete with supporting software, to eliminate the "reinventing the wheel" phase of basic sensor fusion.


The AC1: Hardware-Level Sensor Fusion

Following its breakthrough in mass-producing full-stack LiDAR chips, RoboSense has leveraged its proprietary digital SPAD-SoC chip technology to create a new category of perception hardware: the Active Camera (AC1).

The AC1 is designed to be a "one-stop" solution for robot perception. Unlike the traditional method of stacking distinct sensors, the AC1 provides hardware-level fusion of depth, color, and motion posture (IMU). This integration allows perception systems to evolve from complex, custom-built rigs into a standardized, commercially viable component.
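As a hypothetical illustration of what hardware-level fusion means for application code (the type and field names below are assumptions made for this sketch, not the AC1 SDK's actual types), the developer consumes a single pre-aligned frame rather than matching three independent streams:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ImuSample:
    # Inertial reading captured on the same clock as the imaging data.
    angular_velocity: np.ndarray      # rad/s, shape (3,)
    linear_acceleration: np.ndarray   # m/s^2, shape (3,)

@dataclass
class FusedFrame:
    # One hardware-synchronized capture: depth, color, and motion
    # posture share a single timestamp and a known spatial alignment.
    timestamp_ns: int
    depth: np.ndarray   # HxW depth map, meters
    color: np.ndarray   # HxWx3 RGB image
    imu: ImuSample
```

Because alignment happens in hardware, the per-stream timestamp matching and extrinsic bookkeeping from the earlier sketch disappear from application code.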

RoboSense supports this hardware with an "AI-Ready" ecosystem, offering basic software tools and open-source algorithms to significantly shorten development cycles.

RoboSense CEO Qiu Chunchao describes the Active Camera as "the true eye of the robot," inviting developers globally to utilize the ecosystem to push the boundaries of robotic perception.


Technical Specs: Beyond Traditional 3D Cameras

The AC1 moves from "passive reception" to "active collection," achieving spatiotemporal synchronization of its data streams. Its performance metrics address several key pain points of traditional 3D cameras:

  • Field of View (FOV): 120°x60° depth FOV and 144°x78° color FOV (Fused: 120°x60°). This covers roughly 170% of the area of standard 3D cameras.
  • Range: Maximum ranging distance of 70m (over 600% that of traditional 3D cameras).
  • Environmental Robustness: Handles strong outdoor light up to 100kLux.
  • Accuracy: Maintains 3cm@1σ accuracy even on objects with 10% reflectivity at 20m. Crucially, accuracy does not degrade significantly with distance, allowing for precise shape restoration from near to far.
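To illustrate why that last point matters, here is a rough back-of-the-envelope comparison between a fixed absolute error bound (the dToF behavior described above) and the quadratically growing error typical of stereo depth, where sigma_z ≈ z² · sigma_d / (f · b). The baseline, focal length, and disparity-noise figures are generic illustrative assumptions, not the specs of the AC1 or any particular stereo camera.

```python
# Stereo depth error grows roughly quadratically with range:
#   sigma_z ≈ z^2 * sigma_d / (f * b)
# The dToF figure is the ~3 cm @ 1-sigma bound quoted in the spec list;
# the stereo parameters below are assumed for illustration only.
f_px = 600.0         # focal length in pixels (assumed)
b_m = 0.05           # stereo baseline in meters (assumed)
sd_px = 0.2          # 1-sigma disparity noise in pixels (assumed)
dtof_sigma_m = 0.03  # fixed ~3 cm bound, per the spec list above

for z in (1.0, 5.0, 10.0, 20.0, 40.0, 70.0):
    stereo_sigma_m = z**2 * sd_px / (f_px * b_m)
    print(f"{z:5.0f} m   stereo ≈ {stereo_sigma_m*100:7.1f} cm"
          f"   dToF ≈ {dtof_sigma_m*100:.0f} cm")
```

Under these assumptions, stereo error at 20 m is already in the meter range, while a fixed 3 cm bound allows shape restoration at any distance within range.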


Field Reports: What Engineers Are Saying

The AC1 is already being tested by major players, including the National and Local Joint Engineering Research Center for Humanoid Robots and CASBOT.

Li Yuexuan, an Algorithm Engineer at the Joint Center, highlighted the form-factor benefits:

"The available space for deploying sensors on existing humanoid robots is limited. Calibrating and aligning different sensors separately is troublesome, and there is a lack of easy-to-use multi-sensor fusion algorithms. AC1 saves deployment space and can directly fuse algorithms for images, point clouds, and IMU, achieving excellent SLAM, perception, and localization effects."

Yang Guodong, Co-founder and Head of Motion Intelligence at CASBOT, echoed this sentiment regarding system integration:

"Our perception and hardware teams are very satisfied with AC1. It not only eliminates the tedious work of calibrating and aligning different sensors separately but also reduces the number of hardware components, saving internal space, which is very friendly for the design of compact humanoid robots."

The AI-Ready Ecosystem: Open Source SDK & Algorithms

Hardware is only half the solution. The core of the AC1 value proposition is the AI-Ready ecosystem, designed to reshape development workflows. This ecosystem is divided into three parts: AC Studio, WIKI, and Datasets.

AC Studio serves as a comprehensive tool suite providing an open-source SDK. It handles the low-level infrastructure so developers don't have to:

  • Drivers & Tools: Node data collection, data calibration, fusion, and cross-compilation.
  • Open-Source Algorithms: Includes Localization, SLAM, 3D Gaussian Splatting, Object Detection/Recognition, Semantic Segmentation, and Point Cloud/Vision Fusion.

By providing these core algorithms out of the box, developers can bypass basic infrastructure coding and focus immediately on scenario-based application development. RoboSense has committed to continuously iterating the SDK and releasing free training datasets to drive further innovation.
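As a sketch of the resulting workflow, application code can be written against a single fused stream. The interface below is hypothetical and invented purely for illustration (it is not the actual AC Studio API), and it reuses the FusedFrame dataclass from the sketch earlier in this article:

```python
from typing import Iterator, Protocol

class FusedFrameSource(Protocol):
    # Hypothetical interface shape, NOT the real AC Studio API: a single
    # stream of frames that arrive already calibrated and time-aligned.
    def stream(self) -> Iterator[FusedFrame]: ...

def process_slam(frame: FusedFrame) -> None: ...      # stand-in for a SLAM pipeline
def process_detection(frame: FusedFrame) -> None: ... # stand-in for detection/segmentation

def run_perception(source: FusedFrameSource) -> None:
    for frame in source.stream():
        # Calibration and synchronization are done upstream, so the
        # application starts directly at the algorithm layer:
        process_slam(frame)
        process_detection(frame)
```

The design point is that the sensor and SDK absorb the infrastructure layer, leaving only scenario-specific algorithm work on the developer's side.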

Future Applications: Spatial Intelligence

The AC1 represents a new sensor category, providing a universal data paradigm (synchronized depth, color, and posture) for spatial intelligence.

The fused depth, color, and posture data is applicable far beyond humanoid robots and drones. The AC1 is suitable for building digital twin environments (3D scanning and modeling), environmental monitoring, autonomous driving, and industrial automation.

Currently, RoboSense is running a developer recruitment program. Early partners include Singapore's ARC Lab, Huazhong University of Science and Technology, and the Beijing Institute of Technology. Through this "open hardware and open-source base model" approach, RoboSense aims to accelerate the era of human-robot collaboration.

