RoboSense AC1: The Mobile Eye Driving Humanoid Robot Industrialization with Advanced AI Perception

In the dynamic realm of robotic perception, the fundamental challenge has always been enabling machines to "see the world clearly" and comprehend it with human-like, or even superhuman, precision. Just as biological vision accounts for over 80% of environmental information acquisition for humans, a robot's perceptual ability directly dictates the scope and sophistication of its applications.

However, for decades, mainstream depth-measurement technologies (monocular, binocular, structured light, and iToF) have faced inherent bottlenecks. These solutions rely on inferential algorithms that compromise reliability, lack robustness against ambient light and material variations, or cannot scale because of prohibitive size and cost constraints.

This long-standing dilemma spurred RoboSense to develop a revolutionary approach rooted in digital dToF (direct Time-of-Flight) LiDAR technology, transforming the ambitious concept of a "robot's eye" into a tangible, industrialized reality.

A significant milestone arrived on December 2nd, when Z-MOV Robotics unveiled its T800 humanoid robot, a full-size, ultra-efficient general-purpose platform. Crucially, the T800 is equipped with RoboSense's AC1 (Active Camera) as its core perception component, and sales have already begun at a starting price of 180,000 yuan. Standing 1.73 m tall and weighing 75 kg, the T800 features advanced configurations such as a joint module with 450 N·m of peak torque, dexterous hands, and a solid-state battery. Having completed its technology closed-loop verification, it adapts to scenarios ranging from factory collaboration to hotel services and is now poised for mass industrial application.

As the cornerstone of the T800's omnidirectional perception system, the RoboSense AC1 integrates a LiDAR, an RGB camera, and an IMU. This innovative design achieves hardware-level fusion and spatiotemporal alignment of depth, image, and motion posture information. This integrated data stream is vital for the T800's millisecond-level environmental modeling and AI-powered intelligent path planning, enabling it to accurately avoid obstacles and navigate flexibly within complex, dynamic environments.
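
To make "spatiotemporal alignment" concrete, the following is a minimal sketch of the temporal half of the problem: interpolating an IMU orientation stream to the exact timestamp of a depth frame. The sample values are invented for illustration, and none of this is taken from the AC1's firmware or SDK.

```python
# Illustrative only: temporal alignment between an IMU stream and a depth
# frame. All names and values here are hypothetical, not RoboSense code.
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between two unit quaternions (w, x, y, z)."""
    q0, q1 = np.asarray(q0, float), np.asarray(q1, float)
    dot = np.dot(q0, q1)
    if dot < 0.0:                      # take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:                   # nearly parallel: fall back to lerp
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(dot)
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

def pose_at(frame_ts, imu_ts, imu_quat):
    """Interpolate the IMU orientation at the exact depth-frame timestamp."""
    i = np.searchsorted(imu_ts, frame_ts)          # bracketing IMU samples
    t = (frame_ts - imu_ts[i - 1]) / (imu_ts[i] - imu_ts[i - 1])
    return slerp(imu_quat[i - 1], imu_quat[i], t)

# Example: IMU sampled every 5 ms, depth frame timestamped between two samples.
imu_ts   = np.array([0.000, 0.005, 0.010])
imu_quat = np.array([[1.0, 0.0, 0.0, 0.0],
                     [0.9997, 0.0262, 0.0, 0.0],   # ~3 deg roll
                     [0.9986, 0.0523, 0.0, 0.0]])  # ~6 deg roll
print(pose_at(0.0075, imu_ts, imu_quat))           # orientation at frame time
```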

The AC1 is already gaining widespread recognition and mass adoption from leading companies like Z-MOV Robotics, firmly establishing itself as a crucial "robot mobile eye" and a key driver in accelerating the industrialization of humanoid robots.

 

01. The Digital Revolution: Why RoboSense's Digital LiDAR is Game-Changing

The technical challenge of depth measurement has always been a delicate balancing act between precision, reliability, and cost. Each traditional solution falls short in at least one of these critical aspects:

  • Monocular/Binocular Vision: Infers depth indirectly, highly dependent on scene textures and susceptible to lighting variations.
  • Structured Light: Offers decent precision but is severely affected by ambient light, limiting outdoor and large-space adaptability.
  • iToF: Compact, yet often lacks robust performance in general-purpose, complex environments.

dToF LiDAR, which directly measures the round-trip time of flight of emitted light, has a natural advantage in depth-measurement reliability. Historically, however, it was constrained by low resolution, large form factors, and exorbitant costs, confining its application to niche, high-end automotive sectors.
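
The underlying arithmetic is simple enough to show directly. The figures below are generic dToF numbers, not AC1 internals:

```python
# Generic dToF arithmetic: range follows directly from the measured round-trip
# time of a laser pulse, so accuracy depends on timing, not on scene texture.
C = 299_792_458.0                     # speed of light, m/s

def range_from_round_trip(t_seconds):
    return C * t_seconds / 2.0        # the pulse travels out and back

print(range_from_round_trip(100e-9))  # a 100 ns round trip -> ~15.0 m

# Timing precision needed for centimeter-level ranging:
delta_d = 0.03                        # 3 cm
print(2 * delta_d / C)                # ~2e-10 s, i.e. roughly 200 ps of timing resolution
```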

RoboSense's digital transformation has fundamentally broken this stalemate through a chip-based reconstruction approach. The proprietary area-array SPAD-SoC receiver chip and the addressable 2D scanning VCSEL transmitter chip integrate the complex optical system of traditional LiDAR onto a miniature chip. This innovation achieves a "camera-like," all-solid-state form factor.
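
To illustrate roughly how a SPAD receiver turns individual photon detections into a range estimate, the sketch below builds a time-of-arrival histogram and reads off the peak bin. The pulse shape, noise levels, and bin width are invented for the demo and do not describe the SPAD-SoC's actual processing:

```python
# Illustrative time-correlated photon counting: photon arrival times are binned
# into a histogram and the peak bin gives the round-trip time. All parameters
# below are made up for the demo.
import numpy as np

rng = np.random.default_rng(0)
BIN_W = 250e-12                        # 250 ps histogram bins
N_BINS = 400                           # covers ~15 m of range

true_range_m = 7.5
true_tof = 2 * true_range_m / 299_792_458.0

# Simulated arrivals: background photons uniform in time plus signal photons
# clustered around the true round-trip time.
background = rng.uniform(0, N_BINS * BIN_W, size=2000)
signal = rng.normal(true_tof, 150e-12, size=300)
arrivals = np.concatenate([background, signal])

hist, edges = np.histogram(arrivals, bins=N_BINS, range=(0, N_BINS * BIN_W))
peak_bin = np.argmax(hist)
tof_est = (edges[peak_bin] + edges[peak_bin + 1]) / 2
print(f"estimated range: {299_792_458.0 * tof_est / 2:.2f} m")   # ~7.5 m
```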

This digital architecture ushers in two profound revolutions:

  1. Resolution Leap: The resolution has dramatically increased to an equivalent of 144 lines, significantly surpassing the 128-line LiDAR found on advanced L4 autonomous vehicles like Baidu's Apollo Go. This provides a point cloud density sufficient for robust and dense environmental perception.
  2. Drastic Cost and Size Reduction: LiDAR units that once cost hundreds of thousands of yuan can now be mass-produced at a price point comparable to an RGBD camera. The proliferation of LiDAR in consumer vehicles priced around 100,000 yuan directly testifies to this transformative change.

The true value of any technology is ultimately validated by the market. In 2025, RoboSense anticipates the delivery volume of its digital all-solid-state LiDAR to exceed 200,000 units. The widespread adoption of digital LiDAR is now driving a technological convergence across the entire RGBD camera industry. The historical debate between traditional vision and LiDAR proponents is gradually subsiding as dToF technology achieves a comprehensive victory in terms of precision, cost, and reliability.

RoboSense's core iteration strategy centers on chip upgrades, aiming for a one-to-two-year product iteration cycle. This mirrors the evolution of cameras from VGA to 4K and 8K, driving a qualitative leap in solid-state LiDAR with continuous performance improvements and significant cost reductions.

This strategy is underpinned by SPAD chip technology, which shares commonalities with current CMOS technology at the chip level. This allows RoboSense to leverage the mature CMOS industry chain, raw materials, and process systems, ensuring a robust foundation for large-scale production. Moving forward, RoboSense will continue to propel the convergence of RGBD camera technology towards dToF solutions, accelerating chip iteration by expanding market scale and production capacity.

It is projected that within the next five to ten years, digital solid-state LiDAR will likely follow Moore's Law, achieving rapid advancements in resolution and cost control. Embracing this outlook, RoboSense has fully committed to a digital roadmap this year and predicts that the entire LiDAR industry will progressively shift in this direction. RoboSense views digital LiDAR as a critical growth driver for achieving profitability and ensuring long-term, stable growth.

Qiu Chunchao, CEO of RoboSense, emphasized that while Q3 performance reflected the conclusion of past product cycles, the robust promotion of mass production for digital products and the accumulation of new orders are pivotal to the company's future trajectory.

 

02. RoboSense AC1: The Pioneering Active Camera for Intelligent Mobile Perception

Focusing on the core proposition of the "robot's eye," RoboSense has created a new product category, Active Camera, and introduced the groundbreaking AC1 as its first integrated perception solution. The AC1, designed as the "mobile eye," expertly solves critical robot challenges in localization, obstacle avoidance, and robust environmental mapping. Through its innovative hardware integration and comprehensive ecosystem empowerment, AC1 provides the industry with a powerful solution, bridging advanced technology with practical implementation.

Launched in March 2025, the RoboSense AC1 marked a paradigm shift. It achieved, for the first time, hardware-level spatiotemporal synchronous fusion of three core sensors: LiDAR, RGB camera, and IMU. This innovation moved robot perception beyond merely stacking multiple devices to a new stage of single-device, all-scenario coverage.
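
Once the three sensors share a clock and a calibration, associating depth with color comes down to a rigid transform followed by a pinhole projection. The sketch below shows that generic geometry with placeholder values (K, R, and t are invented for illustration, not AC1 calibration data), so each LiDAR point can be paired with an RGB pixel:

```python
# Illustrative depth/RGB fusion geometry: transform LiDAR points into the
# camera frame with an extrinsic calibration, then project them through a
# pinhole intrinsic matrix. Matrices below are placeholders.
import numpy as np

K = np.array([[600.0, 0.0, 320.0],     # fx, 0, cx  (hypothetical intrinsics)
              [0.0, 600.0, 240.0],     # 0, fy, cy
              [0.0,   0.0,   1.0]])
R = np.eye(3)                          # LiDAR-to-camera rotation (placeholder)
t = np.array([0.05, 0.0, 0.0])         # 5 cm lateral offset (placeholder)

def project(points_lidar):
    """Project Nx3 LiDAR points into pixel coordinates (u, v) with depth z."""
    p_cam = points_lidar @ R.T + t             # rigid transform into camera frame
    z = p_cam[:, 2]
    uv = (p_cam @ K.T)[:, :2] / z[:, None]     # perspective division
    return uv, z

pts = np.array([[0.0, 0.0, 2.0],       # 2 m straight ahead
                [0.5, 0.2, 4.0]])      # off to the side, 4 m away
uv, depth = project(pts)
print(uv)     # pixel locations where these points land in the RGB image
print(depth)  # their depths, usable for a colored point cloud or RGB-D frame
```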

Key Technical Specifications & Advantages of AC1:

  • Balanced Range & Precision: The AC1 delivers a stable ranging precision of 3cm (1σ), crucial for reliable path planning and environmental mapping, with accuracy maintained consistently across distances.
  • Expansive Field of View (FoV): Its fused perception FoV reaches 120° × 60°, a 70% improvement over traditional 3D cameras. This wide FoV reduces the need for frequent viewing angle adjustments, enhancing robot efficiency.
  • Ultra-Long-Range Measurement: Offers a 70-meter maximum detection range, far exceeding traditional 3D cameras. Even for low-reflectivity objects (10% reflectivity), it achieves precise detection at 20 meters, accurately restoring the shape and size of both near and far objects (a rough point-spacing estimate for these ranges follows this list).
  • Environmental Adaptability: The AC1 resists strong ambient light (up to 100 klux) and can operate continuously across indoor and outdoor environments, breaking the "usable indoors, limited outdoors" constraint common to many traditional sensors.
  • Compact & Robust Design: Engineered as a lightweight, solid-state module, the AC1 is significantly smaller than traditional multi-sensor setups, making it easy to integrate into various robot platforms. It is also designed for industrial-grade performance, capable of withstanding harsh operating conditions.
  • Cost-Effective Solution: By integrating multiple sensors into one hardware-fused unit, the AC1 offers a more economical solution compared to assembling separate LiDAR and camera systems, facilitating broader commercial adoption.
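
As referenced in the range bullet above, here is a back-of-envelope estimate of point spacing at different distances, assuming the 144 equivalent scan lines mentioned in section 01 are spread roughly evenly across the 60° vertical FoV (the actual scan pattern may differ):

```python
# Rough, assumption-laden estimate: vertical angular step from FoV and line
# count, then the spacing between adjacent lines at a given range.
import math

v_fov_deg, lines = 60.0, 144
step_deg = v_fov_deg / lines                       # ~0.42 deg between lines
for r in (5.0, 20.0, 70.0):                        # ranges in meters
    spacing = 2 * r * math.tan(math.radians(step_deg) / 2)
    print(f"{r:>5.1f} m -> ~{spacing*100:.1f} cm between adjacent lines")
```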

The AI-Ready Ecosystem: Empowering Developers for Rapid Deployment

To lower development barriers and accelerate innovation, the AC1 is fully supported by the AI-Ready ecosystem. This comprehensive suite offers:

  • AC Studio: A one-stop tool suite providing a full-chain open-source SDK, including drivers, data collection tools, advanced data calibration and fusion functionalities, and cross-compilation environments. This allows developers to transition from spending months "reinventing the wheel" on basic software tasks to rapidly "building with blocks," significantly shortening deployment times (an illustrative workflow sketch follows this list).
  • Open-Source Algorithms: The accompanying algorithm library covers cutting-edge technologies like SLAM (Simultaneous Localization and Mapping), semantic segmentation, and 3D Gaussian Splatting, and is compatible with mainstream AI computing platforms. This empowers developers to bypass foundational algorithm development and directly engage in scenario-based and function-oriented secondary development.
  • WIKI & Datasets: Comprehensive WIKI documentation serves as a developer hub for the Active Camera. Additionally, curated multi-scenario datasets (to be gradually released) provide valuable training data for AI models, further streamlining development.
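
The AC Studio SDK's actual interfaces are documented in the WIKI above. Purely to illustrate the kind of workflow a hardware-fused sensor plus SDK enables, here is a hypothetical sketch: every class and method name below is invented, and the "device" is a stub so the example runs on its own.

```python
# HYPOTHETICAL workflow sketch. None of these names come from AC Studio; the
# stub device fabricates data so the loop below is self-contained and runnable.
from dataclasses import dataclass
import numpy as np

@dataclass
class FusedFrame:
    timestamp: float          # one shared clock for all three modalities
    points: np.ndarray        # Nx3 LiDAR points, meters
    image: np.ndarray         # HxWx3 RGB image
    imu_quat: np.ndarray      # orientation at the same timestamp

class StubActiveCamera:
    """Stand-in device: a real integration would use the vendor driver instead."""
    def frames(self, n):
        for i in range(n):
            yield FusedFrame(
                timestamp=i / 30.0,                       # 30 Hz frames
                points=np.random.rand(1024, 3) * 20.0,
                image=np.zeros((480, 640, 3), dtype=np.uint8),
                imu_quat=np.array([1.0, 0.0, 0.0, 0.0]),
            )

cam = StubActiveCamera()
for frame in cam.frames(3):
    near = frame.points[frame.points[:, 2] < 1.0]         # crude obstacle gate
    print(f"t={frame.timestamp:.3f}s  points={len(frame.points)}  near={len(near)}")
```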

This integrated "hardware + software + data" solution enables the AC1 to be widely adapted across various scenarios, including humanoid robots, drones, autonomous vehicles, industrial robots, and home service robots. Its capabilities also extend to digital twin environment construction, 3D scanning and modeling, environmental monitoring, and stockpile monitoring.

 

Conclusion: RoboSense AC1 – Paving the Way for Truly Intelligent Robots

In summary, the RoboSense AC1 Active Camera represents a fundamental breakthrough against the limitations of traditional 3D cameras. With its outstanding performance, it effectively meets the demanding perception needs of intelligent robots for safe obstacle avoidance, precise mapping, and robust mobile coordination, both indoors and outdoors. This not only significantly improves operational efficiency and safety but also empowers a new generation of autonomous machines.

RoboSense, through its innovative digital LiDAR technology and the powerful AC1 Active Camera, is not merely supplying components; it is defining the perception standard for the intelligent robotics industry. By offering an integrated, hardware-level fused solution complemented by a robust AI-Ready ecosystem, RoboSense dramatically lowers the R&D barriers for robot developers. The cost-effectiveness and ease of use democratize access to advanced perception, enabling businesses of all sizes to rapidly develop cutting-edge robotic products.

The AC1 signifies a pivotal shift in robot vision – moving from "passive imaging" to truly "active, intelligent perception." It offers a robust solution to the limitations and integration complexities of traditional multi-vision sensor stacks. This embodies a new technological philosophy: deep integration of hardware and algorithms to empower robot perception that genuinely transcends human sensory limitations, propelling us towards a future of widespread spatial intelligence.

For international engineers and developers, the AC1, supported by its AI-Ready ecosystem, transforms development from a struggle with foundational tools into a streamlined process that fosters true innovation. For the robotics industry, this represents an efficiency revolution and, more importantly, the beginning of the next evolution of intelligent robots. As the Active Camera product line expands with AC1 leading the charge, RoboSense will continue to drive the evolution of robotic perception technology, paving the way for a world where robots equipped with vision systems that "surpass the human eye" make the digitalization and intelligent transformation of the physical world a ubiquitous reality.

 

References

  1. https://www.robosense.ai/en/rslidar/AC1
  2. https://mp.weixin.qq.com/s/sV_BWw5S4twSHM0PhTyxWg?scene=1

