Orbbec Femto Bolt Development Guide: Complete SDK Setup and Programming Tutorial

Last Updated: March 2026

Getting started with the Orbbec Femto Bolt is straightforward. This comprehensive development guide covers SDK setup, programming tutorials, and practical code examples to help you build powerful 3D vision applications.

What is the Orbbec Femto Bolt?

The Orbbec Femto Bolt is a high-performance Time of Flight (ToF) depth camera developed through a collaboration between Orbbec and Microsoft, based on the proven Azure Kinect depth technology. This compact yet powerful device delivers 4K RGB imagery paired with precise depth sensing capabilities, making it an ideal choice for developers working on AI, robotics, augmented reality, and computer vision applications.

What sets the Femto Bolt apart is its API compatibility with the Microsoft Azure Kinect SDK through the Orbbec SDK K4A Wrapper, allowing developers to leverage existing Azure Kinect codebases with minimal modifications. The camera supports dual depth modes: Wide Field of View (WFOV) at 120 degrees for large-area scanning, and Narrow Field of View (NFOV) at 75 degrees for precision measurements. In both modes, depth accuracy stays within 11mm plus 0.1% of the measured distance.

Whether you're building autonomous robots, developing volumetric video capture systems, or creating immersive AR experiences, the Femto Bolt provides the sensor capabilities and software flexibility needed to bring your vision to life.
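That accuracy figure is easy to turn into a quick sanity check. The sketch below simply applies the quoted error budget (11mm plus 0.1% of distance) at a few distances; it illustrates the spec, not an actual measurement model:

```python
def max_depth_error_mm(distance_m: float) -> float:
    """Upper bound on systematic depth error per the quoted spec."""
    return 11.0 + 0.001 * (distance_m * 1000.0)

for d in (0.5, 1.0, 2.0, 3.86):
    print(f"{d:>5.2f} m -> <= {max_depth_error_mm(d):.1f} mm")
```

At one meter, for example, the bound works out to 12mm.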

Understanding the Femto Bolt's Technical Architecture

Time of Flight Depth Sensing Technology

The Femto Bolt employs advanced Time of Flight (ToF) technology operating at 850nm wavelength, a carefully chosen spectrum that balances eye safety requirements with optimal sensor sensitivity. Unlike traditional stereo vision systems that rely on feature matching between two cameras, ToF cameras measure depth directly by calculating the time it takes for modulated infrared light to travel from the sensor to objects in the scene and reflect back. This active illumination approach provides several significant advantages: consistent depth accuracy regardless of scene texture, faster processing since no feature extraction is required, and reliable performance on flat or textureless surfaces that challenge stereo matching algorithms.
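The underlying measurement reduces to a few lines of arithmetic. In a continuous-wave ToF system, distance is recovered from the phase shift of the modulated light relative to the emitted signal; the 100 MHz modulation frequency below is a made-up illustrative value, not the Femto Bolt's actual figure:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def distance_from_phase(phase_rad: float, f_mod_hz: float) -> float:
    """Continuous-wave ToF: distance from measured phase shift.
    Light travels out and back, hence the extra factor of 2 in 4*pi."""
    return C * phase_rad / (4.0 * math.pi * f_mod_hz)

# With a (hypothetical) 100 MHz modulation, a pi/2 phase shift maps to:
d = distance_from_phase(math.pi / 2, 100e6)
print(f"{d:.3f} m")
```

Higher modulation frequencies improve resolution but shorten the unambiguous range, which is one reason real ToF sensors combine multiple frequencies.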

The camera's 1-megapixel ToF sensor captures depth data across the entire field of view simultaneously, enabling real-time depth mapping at up to 30 frames per second depending on the selected mode. The 850nm infrared wavelength is particularly well-suited for indoor applications because it remains invisible to humans while being strongly reflected by most common materials. The sensor's dynamic range allows it to handle varying light conditions within its operating environment, though direct sunlight should be avoided as it can saturate the detector elements and degrade measurement accuracy.

Dual Depth Mode Operation

The Femto Bolt's versatility stems from its support for two distinct depth operating modes, each optimized for different application scenarios. The WFOV (Wide Field of View) mode captures depth data at 1024x1024 pixel resolution across a 120-degree horizontal and vertical field of view, making it perfect for environmental mapping, robotics navigation in cluttered spaces, and large-area scanning projects. This mode operates at 15 frames per second with an unbinned working range of roughly 0.25 to 2.21 meters, providing comprehensive coverage of large environments with minimal camera repositioning.

The NFOV (Narrow Field of View) mode sacrifices breadth for precision, operating at 640x576 resolution with a 75-degree horizontal and 65-degree vertical field of view while doubling the frame rate to 30 FPS. This mode excels at applications requiring detailed measurements: quality control inspection, object recognition tasks, biometric scanning, and precision manipulation in robotics. The higher frame rate also benefits dynamic scenes where the camera or objects are in motion, reducing motion blur in the depth data and ensuring more accurate tracking.
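The trade-off between the two modes can be summarized in a small lookup table built from the numbers above; the selector function is only a toy illustration of the decision:

```python
# Unbinned mode parameters as described in the text above
DEPTH_MODES = {
    "WFOV": {"resolution": (1024, 1024), "fov_deg": (120, 120), "fps": 15},
    "NFOV": {"resolution": (640, 576), "fov_deg": (75, 65), "fps": 30},
}

def pick_mode(need_wide_coverage: bool) -> str:
    """Toy selector: wide-area mapping -> WFOV, precision/motion -> NFOV."""
    return "WFOV" if need_wide_coverage else "NFOV"

mode = pick_mode(need_wide_coverage=True)
print(mode, DEPTH_MODES[mode]["fps"], "FPS")
```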

Integrated 4K RGB Camera System

Beyond its depth sensing capabilities, the Femto Bolt incorporates a high-resolution 4K RGB camera capable of capturing color imagery at up to 3840x2160 pixels. This camera features an 80-degree horizontal and 51-degree vertical field of view, providing slightly narrower coverage than the depth sensor's WFOV mode but offering significantly more detail for visual analysis. The camera supports High Dynamic Range (HDR) imaging, enabling it to handle challenging lighting conditions where bright and dark areas coexist in the same scene.

Perhaps most importantly, the RGB camera is hardware-synchronized with the depth sensor, and the factory calibration allows the SDK to register the two streams so that every pixel in the color image can be assigned a corresponding depth value. This tight integration keeps color and depth information matched in time even during rapid camera movement or in dynamic scenes. The automatic exposure and white balance systems work continuously to maintain optimal image quality across varying lighting conditions, though manual controls are also available for applications requiring precise color accuracy.

Inertial Measurement Unit Integration

The integrated 6-axis Inertial Measurement Unit (IMU) combines a 3-axis accelerometer with a 3-axis gyroscope, providing essential motion sensing capabilities that enhance many depth camera applications. In robotics contexts, IMU data can be fused with visual odometry algorithms to improve localization accuracy and maintain tracking continuity during brief camera occlusions. For augmented reality applications, the IMU helps track device orientation and movement, contributing to stable overlay registration in the user's view.

The IMU data streams at approximately 1.6 kHz, providing high-frequency motion updates that complement the 15-30 FPS depth and color frames. Timestamps from the IMU are synchronized with camera frames, enabling precise data fusion in your applications. This combination of visual depth sensing with inertial measurements creates a robust sensing foundation for SLAM (Simultaneous Localization and Mapping), gesture recognition, vibration detection, and any application requiring awareness of both visual environment and device motion.
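Because IMU and camera timestamps share a common clock, pairing each frame with its nearest IMU sample is a simple search problem. A minimal sketch with invented timestamps (real values come from the SDK's capture and IMU sample structures):

```python
import numpy as np

# Hypothetical timestamps in microseconds: IMU at ~1.6 kHz, depth at ~30 FPS
imu_ts = np.arange(0, 1_000_000, 625)       # one IMU sample every ~625 us
frame_ts = np.arange(0, 1_000_000, 33_333)  # one frame every ~33.3 ms

def nearest_imu_index(frame_t: int) -> int:
    """Index of the IMU sample closest in time to a camera frame."""
    i = np.searchsorted(imu_ts, frame_t)
    if i == 0:
        return 0
    if i == len(imu_ts):
        return len(imu_ts) - 1
    # pick whichever neighbour is closer in time
    return i if imu_ts[i] - frame_t < frame_t - imu_ts[i - 1] else i - 1

idx = nearest_imu_index(frame_ts[3])  # frame at ~100 ms
print(idx, imu_ts[idx])
```

For fusion filters you would typically interpolate between the two bracketing samples rather than snapping to the nearest one, but the lookup pattern is the same.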

Setting Up Your Development Environment

Windows Development Environment Configuration

Developing applications for the Femto Bolt on Windows requires setting up Visual Studio 2019 or later with C++ development tools, along with the Orbbec SDK K4A Wrapper that provides API compatibility with Azure Kinect applications. Begin by installing Visual Studio with the "Desktop development with C++" workload, ensuring you include the Windows 10 SDK components. Next, download the latest Orbbec SDK K4A Wrapper release from the official GitHub repository, selecting the pre-built binaries for Windows to avoid compilation time.

Before using the camera, you must execute the UVC metadata registration script that enables proper communication between Windows and the USB-connected depth camera. Open PowerShell as Administrator and navigate to the scripts directory within the SDK, then run the installation command:

cd src/orbbec/OrbbecSDK/misc/scripts
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
.\obsensor_metadata_win10.ps1 -op install_all

This script registers the camera's USB Video Class metadata capabilities with Windows, enabling access to timestamp and other metadata essential for proper depth camera operation. Without this step, you may encounter issues where the device is recognized but depth frames are not available or timestamps are incorrect.

Linux Development Environment Configuration

Linux development requires installing numerous build dependencies before compiling or using the SDK. The following command installs all required packages on Ubuntu-based distributions:

sudo apt update
sudo apt install -y \
    pkg-config \
    ninja-build \
    doxygen \
    clang \
    gcc-multilib \
    g++-multilib \
    python3 \
    nasm \
    libgl1-mesa-dev \
    libsoundio-dev \
    libvulkan-dev \
    libx11-dev \
    libxcursor-dev \
    libxinerama-dev \
    libxrandr-dev \
    libusb-1.0-0-dev \
    libssl-dev \
    libudev-dev \
    mesa-common-dev \
    uuid-dev

After installing dependencies, you must configure udev rules to allow non-root access to the camera device. The SDK includes an installation script that creates the necessary device permissions:

cd src/orbbec/OrbbecSDK/misc/scripts
sudo chmod +x ./install_udev_rules.sh
sudo ./install_udev_rules.sh

This udev configuration ensures that any user in the video group can access the Femto Bolt without requiring root privileges. After adding yourself to the video group with sudo usermod -a -G video $USER and logging out and back in, you can access the camera as a regular user.

To build the SDK from source, clone the repository and its submodules, then configure and compile:

git clone https://github.com/orbbec/OrbbecSDK-K4A-Wrapper.git
cd OrbbecSDK-K4A-Wrapper
git submodule update --init --recursive
mkdir build && cd build
cmake .. -G Ninja
ninja
sudo ninja install

NVIDIA Jetson Embedded Platform Setup

The Femto Bolt is fully compatible with NVIDIA Jetson platforms including the AGX Orin, Orin NX, Orin Nano, AGX Xavier, and Xavier NX, enabling edge AI applications that require local processing without cloud connectivity. Begin by flashing your Jetson device with the latest JetPack SDK from NVIDIA, ensuring you select the appropriate JetPack version for your hardware.

The Linux installation procedure described above applies to Jetson devices running Ubuntu-based L4T (Linux for Tegra) distributions. The CMake build system automatically detects the ARM64 architecture and compiles appropriate binaries. For optimal performance, ensure your Jetson is running in maximum performance mode:

sudo nvpmodel -m 0
sudo jetson_clocks

These commands enable all CPU cores at maximum frequency and configure the GPU for sustained performance. When integrating with CUDA applications, remember that the Jetson's GPU can be used for accelerated depth processing, point cloud generation, and AI inference tasks.

Software Development Kit Deep Dive

Understanding the K4A Wrapper Architecture

The Orbbec SDK K4A Wrapper represents an elegant solution for developers who want to leverage existing Azure Kinect ecosystems while benefiting from Orbbec's cost-effective hardware. This wrapper translates every call in the Azure Kinect Sensor SDK API to corresponding Orbbec SDK functions internally, creating a seamless compatibility layer that typically requires no code changes beyond library file substitution. The wrapper is built upon Azure Kinect Sensor SDK v1.4.1, ensuring API stability and comprehensive feature support.

When your application calls functions like k4a_device_open() or k4a_device_start_cameras(), the wrapper intercepts these calls and routes them to the appropriate Orbbec SDK implementation. This translation happens transparently, allowing existing Azure Kinect codebases to run on Femto Bolt hardware with minimal friction. The wrapper supports both the main branch (v1) and v2-main branch (v2) of the Orbbec SDK, giving you flexibility in choosing stable or cutting-edge implementations.

Device support varies by firmware version, and the SDK enforces minimum requirements to ensure proper operation. The Femto Bolt requires firmware version 1.1.2 or later for optimal compatibility. You can check your device's firmware version using the SDK's device enumeration functions or the included viewer utilities.

API Compatibility and Differences

While the K4A Wrapper provides extensive API compatibility, developers should be aware of several differences from the native Azure Kinect SDK. The most notable is the recording timestamp offset field type: the Orbbec SDK uses uint64_t start_timestamp_offset_usec while Azure Kinect uses uint32_t start_timestamp_offset_usec. Applications that explicitly use this field will require header file replacement or conditional compilation.

Several Azure Kinect interfaces are not implemented in the K4A Wrapper, though they are rarely needed in most applications:

  • Custom memory allocator functions (k4a_set_allocator())
  • Temperature reading and setting functions
  • Manual exposure, white balance, and ISO speed controls at the image level
  • Sync cable jack status detection

For the vast majority of depth camera applications, these unimplemented functions won't affect functionality. However, if you're migrating specialized Azure Kinect code that relies on these features, you'll need to implement alternative approaches or accept reduced functionality.

Python Development with pyk4a

Python developers can leverage the Femto Bolt through the pyk4a library, which provides a Pythonic interface to the underlying Azure Kinect-compatible API. Install the package using pip:

pip install pyk4a

The pyk4a library simplifies camera initialization and frame capture into intuitive method calls. Here's a basic example that captures synchronized depth and color frames:

from pyk4a import PyK4A, Config, ColorResolution, DepthMode, FPS

# Configure camera settings (camera_fps takes the FPS enum, not a bare int)
config = Config(
    color_resolution=ColorResolution.RES_720P,
    depth_mode=DepthMode.NFOV_UNBINNED,
    camera_fps=FPS.FPS_30,
    synchronized_images_only=True
)

# Initialize and start the camera
k4a = PyK4A(config=config)
k4a.start()

try:
    # Capture 100 frames
    for frame_count in range(100):
        capture = k4a.get_capture()

        # Access depth image as numpy array
        depth = capture.depth
        print(f"Depth frame {frame_count}: {depth.shape}")

        # Access color image
        color = capture.color
        print(f"Color frame {frame_count}: {color.shape}")

        # Access IR image if available
        if capture.ir is not None:
            print(f"IR frame: {capture.ir.shape}")

finally:
    k4a.stop()

The pyk4a library returns depth and color images as numpy arrays, making it trivial to integrate with OpenCV, NumPy-based processing pipelines, and machine learning frameworks. The library also supports configuration of hardware synchronization for multi-camera setups and provides convenient access to IMU data.
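As a quick illustration of that numpy integration, the sketch below scales a 16-bit depth frame into an 8-bit grayscale image suitable for display with OpenCV or similar tools. The 4-meter clipping range and the tiny fake frame are arbitrary choices for the example:

```python
import numpy as np

def depth_to_gray(depth_mm: np.ndarray, max_range_mm: int = 4000) -> np.ndarray:
    """Scale 16-bit depth (millimeters) into an 8-bit image for display.
    Invalid pixels (value 0) stay black."""
    clipped = np.clip(depth_mm, 0, max_range_mm).astype(np.float32)
    return (clipped / max_range_mm * 255).astype(np.uint8)

# Fake 2x2 depth frame standing in for capture.depth
fake_depth = np.array([[0, 1000], [2000, 4000]], dtype=np.uint16)
print(depth_to_gray(fake_depth))
```

In a live loop you would pass `capture.depth` straight into a function like this and hand the result to `cv2.imshow` or an equivalent display call.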

Alternative Python Wrappers

Beyond pyk4a, Python developers have additional options for accessing Femto Bolt data. The azure-kinect-python package provides direct bindings to the Azure Kinect SDK, offering feature parity with the C++ API for applications requiring complete control:

pip install azure-kinect-python

This package follows the C++ API structure more closely than pyk4a, which may be preferable for developers transitioning from C++ development or requiring specific SDK features not exposed through pyk4a's convenience interface.

For 3D visualization and processing, Open3D provides powerful tools that integrate well with depth camera data:

pip install open3d

Open3D can ingest depth images from the Femto Bolt and automatically convert them to colored point clouds, perform registration between multiple depth captures, and provide interactive visualization utilities. The library's data structures are optimized for efficient manipulation of millions of points, making it suitable for 3D reconstruction and mapping applications.
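The unprojection Open3D performs internally reduces to the pinhole camera model. A minimal numpy sketch, with made-up intrinsics standing in for the Femto Bolt's factory calibration and lens distortion ignored:

```python
import numpy as np

# Hypothetical pinhole intrinsics (focal lengths and principal point in
# pixels); real values come from the device calibration
fx, fy, cx, cy = 504.0, 504.0, 320.0, 288.0

def unproject(depth_mm: np.ndarray) -> np.ndarray:
    """Turn an HxW depth image (mm) into an Nx3 point array (meters)."""
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_mm.astype(np.float32) / 1000.0
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop invalid (zero-depth) pixels

depth = np.full((576, 640), 1000, dtype=np.uint16)  # flat wall at 1 m
print(unproject(depth).shape)
```

Libraries like Open3D apply the same math vectorized over the full frame, plus distortion correction from the calibration.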

Core Programming Concepts and Examples

Device Initialization and Configuration

The foundation of any Femto Bolt application is proper device initialization and camera configuration. The SDK uses a configuration structure that controls all camera parameters before starting the capture pipeline. Understanding each configuration option allows you to optimize the camera for your specific application requirements.

The configuration structure includes settings for color resolution, depth mode, frames per second, synchronized image capture, depth delay, color delay, wired synchronization mode, and subordinate delay offset. Here's how to configure the camera for a typical application:

#include <k4a/k4a.h>
#include <stdio.h>
#include <stdlib.h>

int main() {
    // Get the number of connected devices
    uint32_t device_count = k4a_device_get_installed_count();
    if (device_count == 0) {
        printf("No Azure Kinect devices found!\n");
        return 1;
    }

    // Open the default device
    k4a_device_t device = NULL;
    if (k4a_device_open(K4A_DEVICE_DEFAULT, &device) != K4A_RESULT_SUCCEEDED) {
        printf("Failed to open device\n");
        return 1;
    }

    // Configure camera parameters
    k4a_device_configuration_t config = K4A_DEVICE_CONFIG_INIT_DISABLE_ALL;

    // Enable depth camera
    config.depth_mode = K4A_DEPTH_MODE_NFOV_UNBINNED;
    config.camera_fps = K4A_FRAMES_PER_SECOND_30;

    // Enable color camera with 720p resolution
    config.color_resolution = K4A_COLOR_RESOLUTION_720P;
    config.color_format = K4A_IMAGE_FORMAT_COLOR_BGRA32;

    // Enable synchronized capture
    config.synchronized_images_only = true;

    // Set depth-to-color delay for alignment
    config.depth_delay_off_color_usec = 0;

    // Configure wired synchronization (standalone mode)
    config.wired_sync_mode = K4A_WIRED_SYNC_MODE_STANDALONE;

    // Start the cameras
    if (k4a_device_start_cameras(device, &config) != K4A_RESULT_SUCCEEDED) {
        printf("Failed to start cameras\n");
        k4a_device_close(device);
        return 1;
    }

    printf("Camera started successfully\n");
    printf("Depth mode: %d\n", config.depth_mode);
    printf("Color resolution: %d\n", config.color_resolution);
    printf("Frame rate: %d FPS\n", config.camera_fps);

    // ... frame capture code would go here ...

    // Clean up
    k4a_device_stop_cameras(device);
    k4a_device_close(device);

    return 0;
}

The K4A_DEVICE_CONFIG_INIT_DISABLE_ALL macro initializes the configuration structure with all cameras disabled, allowing you to selectively enable only what your application needs. This explicit approach prevents unexpected defaults from affecting your application's behavior.

Frame Capture and Processing

Capturing frames from the Femto Bolt involves requesting captures from the device and extracting the desired image types. The SDK uses a wait-based approach where your application blocks until a complete frame is available, ensuring synchronized data delivery. Here's a comprehensive example showing proper frame handling:

#include <k4a/k4a.h>
#include <stdio.h>
#include <stdlib.h>

void process_depth_image(k4a_image_t depth_image) {
    if (depth_image == NULL) {
        return;
    }

    // Get image properties
    int width = k4a_image_get_width_pixels(depth_image);
    int height = k4a_image_get_height_pixels(depth_image);
    int stride = k4a_image_get_stride_bytes(depth_image);
    uint8_t* buffer = k4a_image_get_buffer(depth_image);

    // Get timestamp
    uint64_t timestamp = k4a_image_get_timestamp_usec(depth_image);
    printf("Depth frame at %llu us: %dx%d (stride=%d)\n", 
           (unsigned long long)timestamp, width, height, stride);

    // Access raw depth values (16-bit unsigned integers)
    uint16_t* depth_data = (uint16_t*)buffer;

    // Example: Find the minimum depth value in the frame
    uint16_t min_depth = UINT16_MAX;
    for (int i = 0; i < width * height; i++) {
        uint16_t depth_value = depth_data[i];
        if (depth_value != 0 && depth_value < min_depth) {
            min_depth = depth_value;
        }
    }

    // Depth values are reported in millimeters; convert to meters for display
    printf("Minimum depth in frame: %.2f meters\n", min_depth / 1000.0f);
}

void process_color_image(k4a_image_t color_image) {
    if (color_image == NULL) {
        return;
    }

    int width = k4a_image_get_width_pixels(color_image);
    int height = k4a_image_get_height_pixels(color_image);
    int stride = k4a_image_get_stride_bytes(color_image);
    k4a_image_format_t format = k4a_image_get_format(color_image);

    printf("Color frame: %dx%d (format=%d, stride=%d)\n", 
           width, height, format, stride);
}

int main() {
    k4a_device_t device = NULL;
    k4a_device_open(K4A_DEVICE_DEFAULT, &device);

    k4a_device_configuration_t config = K4A_DEVICE_CONFIG_INIT_DISABLE_ALL;
    config.depth_mode = K4A_DEPTH_MODE_WFOV_UNBINNED;
    config.color_resolution = K4A_COLOR_RESOLUTION_1080P;
    config.camera_fps = K4A_FRAMES_PER_SECOND_15;
    config.synchronized_images_only = true;

    k4a_device_start_cameras(device, &config);

    // Capture 100 frames
    for (int frame_index = 0; frame_index < 100; frame_index++) {
        // Wait for a capture (timeout after 1000ms)
        k4a_capture_t capture = NULL;
        k4a_wait_result_t result = k4a_device_get_capture(device, &capture, 1000);

        if (result == K4A_WAIT_RESULT_SUCCEEDED) {
            // Extract and process depth image
            k4a_image_t depth_image = k4a_capture_get_depth_image(capture);
            process_depth_image(depth_image);
            if (depth_image != NULL) {
                k4a_image_release(depth_image);
            }

            // Extract and process color image
            k4a_image_t color_image = k4a_capture_get_color_image(capture);
            process_color_image(color_image);
            if (color_image != NULL) {
                k4a_image_release(color_image);
            }

            // Release the capture when done
            k4a_capture_release(capture);
        }
        else if (result == K4A_WAIT_RESULT_TIMEOUT) {
            printf("Timeout waiting for capture\n");
        }
        else {
            printf("Failed to get capture\n");
            break;
        }
    }

    k4a_device_stop_cameras(device);
    k4a_device_close(device);

    return 0;
}

The capture object holds references to all synchronized images (depth, color, IR, and optionally transformed versions). Always release captures and images when you're finished to prevent memory leaks. The SDK manages internal buffers and will reuse them across frames for efficiency.

Point Cloud Generation

One of the most powerful capabilities of the Femto Bolt is generating 3D point clouds by transforming depth data into spatial coordinates. The SDK includes built-in functions for this transformation that handle the camera's intrinsic calibration automatically:

#include <k4a/k4a.h>
#include <stdio.h>
#include <stdlib.h>

// Structure to hold a 3D point
typedef struct {
    float x, y, z;
    uint8_t r, g, b;
} PointXYZRGB;

// Generates a colored point cloud in the color camera's geometry. The
// caller supplies the device calibration (from k4a_device_get_calibration)
// and owns the returned array.
void generate_point_cloud(const k4a_calibration_t* calibration,
                          k4a_image_t depth_image,
                          k4a_image_t color_image,
                          PointXYZRGB** points_out,
                          int* point_count_out) {
    int color_width = k4a_image_get_width_pixels(color_image);
    int color_height = k4a_image_get_height_pixels(color_image);
    uint8_t* color_buffer = k4a_image_get_buffer(color_image);

    // Create a transformation handle from the calibration
    k4a_transformation_t transformation = k4a_transformation_create(calibration);

    // Re-project the depth map into the color camera's viewpoint
    k4a_image_t transformed_depth = NULL;
    k4a_image_create(K4A_IMAGE_FORMAT_DEPTH16,
                     color_width, color_height,
                     color_width * (int)sizeof(uint16_t),
                     &transformed_depth);
    k4a_transformation_depth_image_to_color_camera(transformation,
                                                   depth_image,
                                                   transformed_depth);

    // Convert the aligned depth map to XYZ triplets (int16_t, millimeters)
    k4a_image_t xyz_image = NULL;
    k4a_image_create(K4A_IMAGE_FORMAT_CUSTOM,
                     color_width, color_height,
                     color_width * 3 * (int)sizeof(int16_t),
                     &xyz_image);
    k4a_transformation_depth_image_to_point_cloud(transformation,
                                                  transformed_depth,
                                                  K4A_CALIBRATION_TYPE_COLOR,
                                                  xyz_image);
    int16_t* xyz_data = (int16_t*)k4a_image_get_buffer(xyz_image);

    // Copy valid points (and their BGRA colors) into the output array
    int max_points = color_width * color_height;
    PointXYZRGB* points = (PointXYZRGB*)malloc(sizeof(PointXYZRGB) * max_points);
    int point_count = 0;

    for (int i = 0; i < max_points; i++) {
        int16_t z = xyz_data[i * 3 + 2];
        if (z > 0) {  // z == 0 marks an invalid depth pixel
            points[point_count].x = xyz_data[i * 3] / 1000.0f;      // mm to meters
            points[point_count].y = xyz_data[i * 3 + 1] / 1000.0f;
            points[point_count].z = z / 1000.0f;

            // Color image is BGRA32
            points[point_count].b = color_buffer[i * 4];
            points[point_count].g = color_buffer[i * 4 + 1];
            points[point_count].r = color_buffer[i * 4 + 2];
            point_count++;
        }
    }

    *points_out = points;
    *point_count_out = point_count;

    // Clean up intermediate images and the transformation handle
    k4a_image_release(xyz_image);
    k4a_image_release(transformed_depth);
    k4a_transformation_destroy(transformation);
}

This example demonstrates the core concept of point cloud generation: transforming 2D depth pixels into 3D coordinates using the camera's intrinsic calibration parameters. The transformation also aligns depth data to the color camera's perspective, enabling colored point clouds where each 3D point has an associated RGB value.

Accessing IMU Data

The IMU provides high-frequency motion data that can enhance tracking accuracy and enable gesture recognition applications. Accessing IMU data requires a separate streaming mechanism from camera frames:

#include <k4a/k4a.h>
#include <stdio.h>
#include <stdlib.h>

void process_imu_sample(k4a_imu_sample_t* sample) {
    printf("IMU Sample:\n");
    printf("  Accel: x=%.4f, y=%.4f, z=%.4f (m/s²)\n",
           sample->acc_sample.x, sample->acc_sample.y, sample->acc_sample.z);
    printf("  Gyro:  x=%.4f, y=%.4f, z=%.4f (rad/s)\n",
           sample->gyro_sample.x, sample->gyro_sample.y, sample->gyro_sample.z);
    printf("  Timestamp: %.3f s\n", sample->acc_timestamp_usec / 1000000.0);
    printf("  Temperature: %.2f °C\n", sample->temperature);
}

int main() {
    k4a_device_t device = NULL;
    k4a_device_open(K4A_DEVICE_DEFAULT, &device);

    k4a_device_configuration_t config = K4A_DEVICE_CONFIG_INIT_DISABLE_ALL;
    config.depth_mode = K4A_DEPTH_MODE_NFOV_UNBINNED;
    k4a_device_start_cameras(device, &config);

    // Start IMU streaming (the cameras must already be running)
    if (k4a_device_start_imu(device) != K4A_RESULT_SUCCEEDED) {
        printf("Failed to start IMU\n");
        k4a_device_stop_cameras(device);
        k4a_device_close(device);
        return 1;
    }

    // Collect 1000 IMU samples (well under a second at ~1.6 kHz)
    for (int i = 0; i < 1000; i++) {
        k4a_imu_sample_t sample;
        k4a_wait_result_t result = k4a_device_get_imu_sample(device, &sample, 100);

        if (result == K4A_WAIT_RESULT_SUCCEEDED) {
            process_imu_sample(&sample);
        }
    }

    k4a_device_stop_imu(device);
    k4a_device_stop_cameras(device);
    k4a_device_close(device);

    return 0;
}

The IMU streams data at approximately 1.6 kHz, providing motion updates far more frequently than the camera frames. This high-frequency data is valuable for filtering camera poses between frames, detecting rapid movements, and integrating with extended Kalman filters for robust tracking.
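As a sketch of how that high-rate data can be fused, the toy complementary filter below blends integrated gyro rate with an accelerometer-derived tilt angle. All sample values are invented, and a production system would use a proper Kalman filter over real k4a_imu_sample_t data:

```python
# Toy sketch: blend high-rate gyro integration with the accelerometer's
# low-drift tilt estimate (a complementary filter)

def complementary_update(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One filter step: trust the gyro short-term, the accel long-term."""
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

angle = 0.0
dt = 1.0 / 1600.0  # ~1.6 kHz IMU sample interval
for _ in range(1600):  # about one second of samples
    # constant 0.1 rad/s rotation; accel agrees the tilt is 0.1 rad
    angle = complementary_update(angle, 0.1, 0.1, dt)
print(f"{angle:.3f} rad")
```

The gyro term tracks fast motion between camera frames while the accelerometer term pulls the estimate back toward gravity, suppressing integration drift.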

Multi-Camera Synchronization

Wired Synchronization Configuration

For applications requiring multiple Femto Bolt cameras—such as volumetric capture systems, 360-degree scanning, or extended field-of-view imaging—the SDK supports wired synchronization through the camera's sync connector. This allows multiple cameras to capture frames simultaneously with sub-millisecond precision.

One camera operates as the master while the others operate as subordinates. The master camera generates the synchronization signal that triggers all subordinate cameras to capture simultaneously. Connect the sync ports on each device using the appropriate synchronization cables (check the Femto Bolt hardware documentation for the correct cable and adapter for its sync connector):

#include <k4a/k4a.h>
#include <stdio.h>

void configure_master(k4a_device_t device) {
    k4a_device_configuration_t config = K4A_DEVICE_CONFIG_INIT_DISABLE_ALL;
    config.depth_mode = K4A_DEPTH_MODE_WFOV_UNBINNED;
    config.camera_fps = K4A_FRAMES_PER_SECOND_15;
    config.wired_sync_mode = K4A_WIRED_SYNC_MODE_MASTER;
    config.subordinate_delay_off_master_usec = 0;

    k4a_device_start_cameras(device, &config);
    printf("Camera configured as MASTER\n");
}

void configure_subordinate(k4a_device_t device) {
    k4a_device_configuration_t config = K4A_DEVICE_CONFIG_INIT_DISABLE_ALL;
    config.depth_mode = K4A_DEPTH_MODE_WFOV_UNBINNED;
    config.camera_fps = K4A_FRAMES_PER_SECOND_15;
    config.wired_sync_mode = K4A_WIRED_SYNC_MODE_SUBORDINATE;

    // Delay after master trigger before capturing
    // Adjust based on cable length and setup
    config.subordinate_delay_off_master_usec = 0;

    k4a_device_start_cameras(device, &config);
    printf("Camera configured as SUBORDINATE\n");
}

int main() {
    // Open first camera (master)
    k4a_device_t master = NULL;
    k4a_device_open(0, &master);

    // Open second camera (subordinate)
    k4a_device_t subordinate = NULL;
    k4a_device_open(1, &subordinate);

    configure_master(master);
    configure_subordinate(subordinate);

    // Now both cameras will capture synchronized frames
    // Master triggers capture, subordinate captures after delay

    // Clean up
    k4a_device_stop_cameras(master);
    k4a_device_stop_cameras(subordinate);
    k4a_device_close(master);
    k4a_device_close(subordinate);

    return 0;
}

The subordinate delay parameter compensates for signal propagation delays in the synchronization cable. For most installations with short cables, zero delay works well, but you may need to tune this value for precise synchronization across larger setups.
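Beyond cable compensation, a common practice with Azure Kinect-class devices is to stagger each subordinate's depth capture by at least 160 microseconds so the cameras' IR illuminators don't interfere with each other; verify the exact minimum against the Femto Bolt documentation. A trivial helper for generating those offsets:

```python
# Stagger depth captures so overlapping IR illuminators don't interfere.
# The 160 us spacing is the figure commonly used with Azure Kinect-class
# devices; confirm the minimum for your hardware.
EXPOSURE_GAP_USEC = 160

def subordinate_delays(num_subordinates: int) -> list:
    """Per-camera subordinate_delay_off_master_usec values."""
    return [(i + 1) * EXPOSURE_GAP_USEC for i in range(num_subordinates)]

print(subordinate_delays(3))
```

Each returned value would be assigned to one subordinate's subordinate_delay_off_master_usec field before starting its cameras.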

External Trigger Synchronization

The Femto Bolt also supports external trigger mode, where an external signal initiates frame capture. This is essential for industrial applications requiring synchronization with other equipment, strobe lighting, or precisely timed capture sequences:

#include <k4a/k4a.h>
#include <stdio.h>

void configure_external_trigger(k4a_device_t device) {
    k4a_device_configuration_t config = K4A_DEVICE_CONFIG_INIT_DISABLE_ALL;
    config.depth_mode = K4A_DEPTH_MODE_NFOV_UNBINNED;
    config.camera_fps = K4A_FRAMES_PER_SECOND_30;

    // Subordinate mode makes the camera wait for a trigger pulse on the
    // sync-in line before capturing; the API has no separate trigger flag
    config.wired_sync_mode = K4A_WIRED_SYNC_MODE_SUBORDINATE;
    config.subordinate_delay_off_master_usec = 0;

    k4a_device_start_cameras(device, &config);
    printf("Camera configured for EXTERNAL TRIGGER mode\n");
    printf("Waiting for external trigger signal...\n");
}

int main() {
    k4a_device_t device = NULL;
    k4a_device_open(K4A_DEVICE_DEFAULT, &device);

    configure_external_trigger(device);

    // Capture frames when triggered
    for (int i = 0; i < 10; i++) {
        k4a_capture_t capture = NULL;

        // Wait up to 2 seconds for trigger
        k4a_wait_result_t result = k4a_device_get_capture(device, &capture, 2000);

        if (result == K4A_WAIT_RESULT_SUCCEEDED) {
            printf("Captured frame %d\n", i);
            k4a_capture_release(capture);
        }
    }

    k4a_device_stop_cameras(device);
    k4a_device_close(device);

    return 0;
}

External trigger mode requires a properly formatted trigger signal on the sync connector. The camera expects a 3.3V logic signal meeting a minimum pulse width; consult the hardware documentation for the exact electrical requirements.
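Because a miswired or unseated sync cable is the most common failure here, it can help to confirm that the connector is actually detected before waiting on triggers. A minimal sketch using `k4a_device_get_sync_jack`:

```c
#include <k4a/k4a.h>
#include <stdbool.h>
#include <stdio.h>

// Sketch: report whether the sync-in and sync-out jacks have a cable
// detected, which is worth checking before configuring trigger modes.
void check_sync_jack(k4a_device_t device) {
    bool sync_in_connected = false;
    bool sync_out_connected = false;

    if (k4a_device_get_sync_jack(device, &sync_in_connected,
                                 &sync_out_connected) == K4A_RESULT_SUCCEEDED) {
        printf("Sync in: %s, sync out: %s\n",
               sync_in_connected ? "connected" : "not connected",
               sync_out_connected ? "connected" : "not connected");
    }
}
```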

Recording and Playback

MKV Recording Functionality

The SDK supports recording camera data to MKV (Matroska Video) files for later analysis, processing, or playback. Recordings capture all synchronized streams (depth, color, IR, and IMU) with full metadata, allowing complete reconstruction of the capture session:

#include <k4a/k4a.h>
#include <k4arecord/record.h>
#include <stdio.h>

int main() {
    k4a_device_t device = NULL;
    k4a_device_open(K4A_DEVICE_DEFAULT, &device);

    k4a_device_configuration_t config = K4A_DEVICE_CONFIG_INIT_DISABLE_ALL;
    config.depth_mode = K4A_DEPTH_MODE_NFOV_UNBINNED;
    config.color_resolution = K4A_COLOR_RESOLUTION_720P;
    config.color_format = K4A_IMAGE_FORMAT_COLOR_MJPG;  // recorder expects a compressed color format
    config.camera_fps = K4A_FRAMES_PER_SECOND_30;
    config.synchronized_images_only = true;

    k4a_device_start_cameras(device, &config);

    // Create the MKV recording; the device configuration is stored in the file
    const char* filename = "capture_session_001.mkv";
    k4a_record_t recording = NULL;
    if (k4a_record_create(filename, device, config, &recording) != K4A_RESULT_SUCCEEDED) {
        printf("Failed to create recording\n");
        k4a_device_close(device);
        return 1;
    }
    k4a_record_write_header(recording);

    printf("Recording to %s\n", filename);

    // Record for 60 seconds (30 fps * 60 sec = 1800 frames)
    for (int frame = 0; frame < 1800; frame++) {
        k4a_capture_t capture = NULL;
        k4a_wait_result_t result = k4a_device_get_capture(device, &capture, K4A_WAIT_INFINITE);

        if (result == K4A_WAIT_RESULT_SUCCEEDED) {
            // Each capture must be written to the recording explicitly
            k4a_record_write_capture(recording, capture);
            k4a_capture_release(capture);

            if (frame % 30 == 0) {
                printf("Recorded %d frames\n", frame);
            }
        }
    }

    // Flush buffered data and close the file
    k4a_record_flush(recording);
    k4a_record_close(recording);
    printf("Recording complete. File saved.\n");

    k4a_device_stop_cameras(device);
    k4a_device_close(device);

    return 0;
}

Recordings include all calibration data automatically, ensuring that playback maintains full spatial accuracy. The MKV format is widely supported and can be processed with standard video tools or the SDK's playback functions.
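If you need to distinguish capture sessions later, the record API also supports attaching custom metadata. A minimal sketch, assuming tags are added between `k4a_record_create()` and `k4a_record_write_header()` (the tag names and values here are illustrative, not required fields):

```c
#include <k4a/k4a.h>
#include <k4arecord/record.h>

// Sketch: attach custom metadata tags to a recording. Tags must be added
// after k4a_record_create() and before k4a_record_write_header().
void tag_recording(k4a_record_t recording) {
    k4a_record_add_tag(recording, "OPERATOR", "jane.doe");
    k4a_record_add_tag(recording, "STATION_ID", "cell-07");
}
```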

Playback and Data Extraction

Playing back recorded files is straightforward with the SDK's playback API:

#include <k4a/k4a.h>
#include <k4arecord/playback.h>
#include <stdio.h>

int main() {
    k4a_playback_t playback = NULL;

    // Open recording file
    if (k4a_playback_open("capture_session_001.mkv", &playback) != K4A_RESULT_SUCCEEDED) {
        printf("Failed to open recording\n");
        return 1;
    }

    // Get recording properties
    k4a_record_configuration_t record_config;
    k4a_playback_get_record_configuration(playback, &record_config);

    printf("Recording properties:\n");
    printf("  Duration: %.2f seconds\n",
           k4a_playback_get_recording_length_usec(playback) / 1000000.0);
    printf("  Depth mode: %d\n", record_config.depth_mode);
    printf("  Color resolution: %d\n", record_config.color_resolution);

    // Get calibration from recording
    k4a_calibration_t calibration;
    k4a_playback_get_calibration(playback, &calibration);

    // Iterate through all frames
    k4a_capture_t capture = NULL;
    while (k4a_playback_get_next_capture(playback, &capture) == K4A_STREAM_RESULT_SUCCEEDED) {
        k4a_image_t depth = k4a_capture_get_depth_image(capture);
        if (depth != NULL) {
            uint64_t timestamp = k4a_image_get_device_timestamp_usec(depth);
            printf("Frame at %.3f seconds\n", timestamp / 1000000.0);
            k4a_image_release(depth);
        }
        k4a_capture_release(capture);
    }

    k4a_playback_close(playback);

    return 0;
}
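For long recordings, reading every frame from the start is wasteful; the playback API also supports seeking. A minimal sketch that jumps 10 seconds into the file before resuming frame-by-frame reads:

```c
#include <k4a/k4a.h>
#include <k4arecord/playback.h>

// Sketch: seek to 10 seconds from the beginning of the recording.
// Subsequent k4a_playback_get_next_capture() calls continue from there.
void seek_to_ten_seconds(k4a_playback_t playback) {
    k4a_playback_seek_timestamp(playback, 10 * 1000000,
                                K4A_PLAYBACK_SEEK_BEGIN);
}
```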

Frequently Asked Questions

What is the maximum depth range of the Orbbec Femto Bolt?

The Femto Bolt achieves effective depth sensing from 0.25 meters to 5.46 meters, depending on the selected depth mode and environmental conditions. This range covers most human-scale interaction scenarios, from close-up robotic manipulation tasks to mid-range monitoring applications. WFOV (Wide Field of View) mode optimizes for larger coverage areas at the cost of some precision, while NFOV (Narrow Field of View) mode provides enhanced accuracy for more focused inspection tasks.

Can the Femto Bolt be used outdoors?

The Femto Bolt is optimized for indoor environments but can accommodate semi-outdoor use with appropriate lighting conditions. Strong direct sunlight can overwhelm the infrared illumination the camera uses for depth sensing, limiting performance in sunny outdoor settings. However, covered outdoor areas, shaded locations, or cloudy conditions typically permit acceptable operation. For permanent outdoor deployments, consider protective housing and environmental controls.

Is the Femto Bolt compatible with ROS (Robot Operating System)?

Yes, the Femto Bolt is compatible with ROS through the available SDK and community-developed ROS wrappers. The Azure Kinect SDK that the Femto Bolt supports has ROS integration options, and the Orbbec community has developed additional ROS packages. This compatibility enables straightforward integration with existing robotics development workflows.

How accurate is the depth measurement?

The Femto Bolt achieves depth accuracy of less than 11mm systematic error plus 0.1% of the measured distance, with random error standard deviation maintained at or below 17mm. This performance meets the requirements of demanding industrial and research applications. Actual accuracy depends on operating mode, range, surface characteristics, and environmental conditions.
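As a worked example of that error budget, the worst-case systematic error at a given range can be computed directly (the helper below is ours, not part of the SDK):

```c
// Sketch: worst-case systematic depth error from the specification quoted
// above, i.e. 11 mm plus 0.1% of the measured distance, in millimetres.
double max_systematic_error_mm(double distance_mm) {
    return 11.0 + 0.001 * distance_mm;
}
```

At 2 m, for example, the bound works out to 11 + 0.001 × 2000 = 13 mm.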

Can multiple Femto Bolt cameras be synchronized?

Yes, the Femto Bolt supports external trigger control that enables precise synchronization between multiple cameras. This capability supports cascaded deployments for extended coverage, multiview capture for 3D reconstruction, and simultaneous recording from multiple viewpoints for volumetric video applications.

What is the difference between WFOV and NFOV modes?

WFOV (Wide Field of View) mode captures depth at 1024x1024 resolution and 15 frames per second, providing 120° horizontal and vertical coverage ideal for large-area applications. NFOV (Narrow Field of View) mode delivers 640x576 resolution at 30 frames per second with 75° horizontal and 65° vertical coverage, suited for applications requiring focused examination with higher frame rates.
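One practical consequence of these figures is the frame-rate cap: requesting 30 fps in full-resolution WFOV mode will fail at configuration time. A small sketch encoding the limits quoted above (the helper function and mode strings are ours, not SDK identifiers):

```c
#include <string.h>

// Sketch: maximum depth frame rate per mode, following the figures quoted
// above (WFOV unbinned 1024x1024 is capped at 15 fps; the other depth
// modes run up to 30 fps). Verify against your SDK version.
int max_depth_fps(const char *mode) {
    if (strcmp(mode, "WFOV_UNBINNED") == 0) {
        return 15;
    }
    return 30;  // NFOV modes and binned WFOV
}
```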

Does the Femto Bolt require special lighting?

The Femto Bolt uses its own infrared illumination for depth sensing, so it doesn't require specific external lighting for depth measurements. However, the RGB camera functions like a conventional camera and benefits from appropriate lighting for color imaging. For depth accuracy, avoid highly reflective surfaces and extremely dark materials that might affect infrared signal return.

What programming languages are supported for Femto Bolt development?

The Femto Bolt supports development in multiple languages through various SDKs. The primary C/C++ SDK provides the most complete functionality via the Azure Kinect API and K4A Wrapper. Python developers can use pyk4a or azure-kinect-python packages. Additional community bindings exist for C#, Unity, and other languages.

Conclusion

The Orbbec Femto Bolt turns advanced 3D vision from expensive specialty equipment into an accessible capability for developers and integrators across a wide range of industries. Its combination of high-precision ToF depth sensing, 4K RGB imaging, compact form factor, and Azure Kinect compatibility creates a versatile platform suitable for applications from robotic bin picking and volumetric video capture to inspection systems and immersive AR experiences.

The breadth of development possibilities, spanning robotics, computer vision, AR/VR, logistics, healthcare, and research, demonstrates the technology's versatility. Nearly every industry can benefit from the camera's ability to capture accurate three-dimensional information, enabling automation, insight, and interaction that were previously impractical because of cost or complexity.

Successful implementation requires a clear understanding of environmental requirements, integration considerations, and performance limitations. For applications aligned with its capabilities, however, the Femto Bolt provides an excellent foundation for innovative 3D vision solutions.


This comprehensive development guide covers Orbbec Femto Bolt SDK setup, programming tutorials, and code examples. For detailed product specifications and purchasing information, visit the Orbbec Femto Bolt product page on OpenELAB.

Interested in learning more about ToF camera technology? Explore OpenELAB's extensive collection of TOF cameras and depth cameras for additional options.

Looking for application examples? Check out our Orbbec Femto Bolt Applications Guide to discover industry-specific use cases and implementation insights.

Ready to start developing? Explore OpenELAB's development resources for SDK documentation, practical code examples, and detailed integration tutorials to accelerate your 3D vision project development.

Learn more about Time of Flight technology on Wikipedia and join discussions on Reddit about robotics and depth camera applications.
