Last Updated: March 2026
Building with the Femto Mega is surprisingly straightforward—it's basically a Jetson Nano with a camera attached. If you've developed for Jetson before, you'll feel right at home. Here's how to get started.

Setup Options
Option 1: Use the Jetson Directly
The Femto Mega runs a full Jetson Nano OS. Connect a monitor and keyboard, and you have a complete development environment.
Option 2: Development Machine
Develop on your computer, deploy to the Femto Mega. The camera appears as a standard USB device when connected to a host.
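On a Linux host you can sanity-check that the camera enumerated before touching any SDK. The vendor ID 2bc5 in the filter is an assumption (it is commonly reported for Orbbec devices); verify it on your unit with a plain lsusb first.

```shell
# List USB devices and filter for the camera
lsusb | grep -iE "2bc5|orbbec"
```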
SDK Options
Orbbec SDK
Native SDK with full feature access. Works on the Jetson or host PC.
Azure Kinect SDK
If you're coming from the Azure Kinect, use the K4A compatibility wrapper. It exposes the same API, so existing code runs with minimal changes.
Python Example
import pyorbbec as ob  # Orbbec publishes its Python wrapper as pyorbbecsdk; adjust the import to your install

pipeline = ob.Pipeline()
config = ob.Config()
config.enable_stream(ob.StreamType.DEPTH, 1024, 1024, ob.FPS.FPS_15)
pipeline.start(config)
try:
    while True:
        frames = pipeline.wait_for_frames(1000)  # timeout in milliseconds
        if frames is None:
            continue
        depth = frames.get_depth_frame()
        # Process with your AI model here:
        # TensorFlow/PyTorch run directly on the Jetson
finally:
    pipeline.stop()  # reached on exit (e.g. Ctrl+C)
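Depth frames from sensors like this are typically 16-bit values in millimeters, with 0 meaning "no reading"; confirm the exact scale against the SDK's depth-scale query. A minimal NumPy sketch (the helper name and the 1.0 mm-per-unit scale are illustrative assumptions):

```python
import numpy as np

def depth_to_meters(raw, scale_mm=1.0):
    """Convert a raw 16-bit depth frame (millimeters) to float32 meters.

    scale_mm is millimeters per raw unit; 1.0 is an assumption --
    query the SDK's depth scale for the real value.
    """
    depth_m = raw.astype(np.float32) * scale_mm / 1000.0
    depth_m[raw == 0] = np.nan  # 0 means "no reading" on most depth sensors
    return depth_m

frame = np.array([[0, 1500], [2750, 4000]], dtype=np.uint16)
meters = depth_to_meters(frame)
```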
AI on the Jetson
Here's what makes the Femto Mega special:
# Run a TensorFlow model on the camera's onboard Jetson
import tensorflow as tf

# Load your saved model
model = tf.saved_model.load('/path/to/model')

def process_frame(depth_frame):
    # preprocess() is your own conversion routine
    input_data = preprocess(depth_frame)
    # Inference happens here, on the Jetson
    results = model(input_data)
    return results
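The `preprocess()` helper is left to you; for a depth model it typically clips and normalizes the frame, then adds batch and channel axes. A sketch in plain NumPy (the 224x224 input size and 0-4 m clipping range are assumptions — match them to your model):

```python
import numpy as np

def preprocess(depth_frame, size=(224, 224), max_mm=4000):
    """Normalize a uint16 depth frame to [0, 1] and shape it (1, H, W, 1)."""
    depth = np.asarray(depth_frame, dtype=np.float32)
    depth = np.clip(depth, 0, max_mm) / max_mm
    # Nearest-neighbor resize via index sampling (avoids an OpenCV dependency)
    rows = np.linspace(0, depth.shape[0] - 1, size[0]).astype(int)
    cols = np.linspace(0, depth.shape[1] - 1, size[1]).astype(int)
    depth = depth[np.ix_(rows, cols)]
    return depth[np.newaxis, ..., np.newaxis]

x = preprocess(np.full((480, 640), 2000, dtype=np.uint16))
```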
ROS Integration
The Femto Mega works with ROS through Orbbec's ROS 1 and ROS 2 driver packages, which publish the standard image and point-cloud topics, so it drops into a familiar workflow.
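As a sketch of that workflow with the ROS 2 wrapper installed (the package and launch-file names below follow Orbbec's OrbbecSDK_ROS2 repository, but verify them against your version):

```shell
# Launch the driver; it publishes depth and color image topics
ros2 launch orbbec_camera femto_mega.launch.py

# In another terminal, inspect what the camera is publishing
ros2 topic list | grep camera
```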
Conclusion
The Femto Mega gives you full Jetson development capability. If you need onboard AI, this is the depth camera to choose.
Shop: OpenELAB
