Working with Measurement Results#
Overview#
The Basler blaze camera measures the distance traveled by light for each sensor pixel.
Using these distances, the camera calculates the x, y, and z coordinates of each sensor pixel in a right-handed coordinate system. The origin of the coordinate system is located in the camera's optical center, which is inside the camera's housing. The y axis points downward, and the z axis points away from the camera.
Depending on the pixel format, the camera provides the 3D information as depth maps or as point clouds. In a depth map, the z coordinates are encoded as 16-bit gray values. As shown below, all 3D coordinate values can be calculated from these gray values. A point cloud contains the x, y, and z 3D coordinates of each sensor pixel as floating-point numbers. The unit is mm.
If there is no valid depth information for a sensor pixel (e.g., due to outlier removal or insufficient light, i.e., light that is not strong enough to pass the confidence threshold), the corresponding values in a depth map or a point cloud are set to the value defined by the Scan3dInvalidDataValue parameter (default setting: 0).
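As a minimal sketch of how such invalid pixels could be skipped during processing, the following hypothetical helper filters a row of depth-map gray values against the invalid-data marker (assumed here to have been queried from the Scan3dInvalidDataValue parameter beforehand; the function name and buffer layout are illustrative, not part of the camera API):

```cpp
#include <cstdint>
#include <vector>

// Sketch: collect only the valid gray values from a depth map row.
// invalidValue is the configured Scan3dInvalidDataValue (default: 0).
std::vector<uint16_t> validDepths(const std::vector<uint16_t>& row,
                                  uint16_t invalidValue)
{
    std::vector<uint16_t> result;
    for (uint16_t g : row)
    {
        if (g != invalidValue) // skip pixels without valid depth data
        {
            result.push_back(g);
        }
    }
    return result;
}
```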
Working with Depth Maps#
A depth map consists of 16-bit gray values. For each sensor pixel, the camera converts the z coordinate value to a gray value and stores it in the depth map.
In combination with the camera's calibration data provided by the Scan3dCoordinateScale, Scan3dPrincipalPointU, Scan3dPrincipalPointV, and Scan3dFocalLength parameters, complete 3D information can be retrieved from a depth map.
Info
The Scan3dCoordinateScale parameter value varies depending on the pixel format selected. When working with the camera in the blaze Viewer, the pixel format is always set to the Coord3D_ABC32f pixel format. You can't change this setting. The depth maps provided by the blaze Viewer are created based on the point clouds. In this case, the Scan3dCoordinateScale parameter value is 1. When you're working with the blaze camera outside the blaze Viewer and set the pixel format to Coord3D_C16, the Scan3dCoordinateScale parameter value is different. The Scan3dCoordinateScale parameter values for the different pixel formats are listed in the following table.
Pixel Format | Scan3dCoordinateScale[C] Parameter Value |
---|---|
Coord3D_ABC32f | 1 |
Coord3D_C16 | 0.152588 |
Mono16 | 0.152588 |
Refer to the GrabDepthMap C++ sample to learn how to configure the camera for sending depth maps and how to access the depth map data.
Calculating 3D Coordinates from a 2D Depth Map#
To convert the 16-bit gray values of a depth map to z coordinates in mm, use the following formula:
z [mm] = gray2mm * g
Where:
g = gray value from the depth map
gray2mm = value of the Scan3dCoordinateScale parameter
To calculate the x and y coordinates, use the following formulas:
x [mm] = (u-cx) * z / f
y [mm] = (v-cy) * z / f
Where:
(u,v) = column and row in the depth map
f = value of the Scan3dFocalLength parameter, i.e., the focal length of the camera's lens
(cx,cy) = values of the Scan3dPrincipalPointU and Scan3dPrincipalPointV parameters, i.e., the principal point
C++ Sample Code#
// Enable depth maps by enabling the Range component and setting the appropriate pixel format.
camera.ComponentSelector.SetValue(ComponentSelector_Range);
camera.ComponentEnable.SetValue(true);
camera.PixelFormat.SetValue(PixelFormat_Coord3D_C16);
// Query the conversion factor required to convert gray values to distances:
// Choose the z axis first...
camera.Scan3dCoordinateSelector.SetValue(Scan3dCoordinateSelector_CoordinateC);
// ... then retrieve the conversion factor.
const auto gray2mm = camera.Scan3dCoordinateScale.GetValue();
// Configure the gray value used for indicating missing depth data.
// Note: Before setting the value, the Scan3dCoordinateSelector parameter must be set to the axis the
// value is to be configured for, in this case the z axis. This means that Scan3dCoordinateSelector must be set
// to "CoordinateC". This has already been done a few lines above.
camera.Scan3dInvalidDataValue.SetValue((double)missingDepth);
// Retrieve calibration data from the camera.
const auto cx = camera.Scan3dPrincipalPointU.GetValue();
const auto cy = camera.Scan3dPrincipalPointV.GetValue();
const auto f = camera.Scan3dFocalLength.GetValue();
// ....
// Access the data.
const auto container = ptrGrabResult->GetDataContainer();
const auto rangeComponent = container.GetDataComponent(0);
const auto width = rangeComponent.GetWidth();
const auto height = rangeComponent.GetHeight();
// Calculate coordinates for pixel (u,v).
const uint16_t g = ((uint16_t*)rangeComponent.GetData())[u + v * width];
const double z = g * gray2mm;
const double x = (u - cx) * z / f;
const double y = (v - cy) * z / f;
Working with Saved Depth Maps#
For depth maps acquired using the blaze legacy SDK, the pylon SDK, or the blaze ROS driver, you must use the following formula:
Distance Measured [mm] = Pixel_Value x Scan3dCoordinateScale[C]
For depth maps saved using the blaze Viewer, use the following formula:
Distance Measured [mm] = DepthMin_parameter + (Pixel_Value x (DepthMax_Parameter - DepthMin_parameter)) / 65535
Working with Point Clouds#
Because a point cloud consists of x, y, and z coordinate triplets in the camera's coordinate system, 3D information can be extracted from it without further processing.
For information about how to configure the camera for sending point clouds and how to access the data, refer to the FirstSample C++ sample.
If you need a depth map in addition to the point cloud, refer to the ConvertPointCloud2DepthMap C++ sample. It illustrates how to compute gray value and RGB depth maps from a point cloud.
Moving the Origin of the Coordinate System to the Front of the Camera Housing#
The origin of the camera's coordinate system is located in the camera's optical center, which is inside the camera's housing. If you prefer coordinates in a coordinate system whose origin is located at the front of the camera's housing, i.e., one that is translated along the z axis, a constant, device-specific offset has to be subtracted from the z coordinates. The required offset can be retrieved from the camera by getting the value of the ZOffsetOriginToCameraFront parameter.
If (x,y,z) are the coordinates of a point in the camera's coordinate system, the corresponding coordinates (x',y',z') in the coordinate system translated along the z axis to the front of the camera's housing can be determined using the following formulas:
x' = x
y' = y
z' = z - offset
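As a minimal sketch, the translation can be applied per point like this (the struct and function names are illustrative; offset is the value of the ZOffsetOriginToCameraFront parameter):

```cpp
struct Point3d { double x, y, z; };

// Sketch: translate a point from the optical-center coordinate system
// to the coordinate system at the front of the camera housing.
// offset = value of the ZOffsetOriginToCameraFront parameter, in mm.
Point3d toHousingFront(const Point3d& p, double offset)
{
    return { p.x, p.y, p.z - offset }; // x' = x, y' = y, z' = z - offset
}
```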
Calculating Distances#
Given a point's coordinates (x,y,z) in mm, the distance of that point to the camera's optical center can be calculated using the following formula:
d = sqrt( x*x + y*y + z*z )
The distance d' to the front of the camera's housing can be calculated as follows:
z' = z - offset
d' = sqrt( x*x + y*y + z'*z' )
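The two distance formulas can be sketched as small helpers (the function names are illustrative; offset is the ZOffsetOriginToCameraFront parameter value):

```cpp
#include <cmath>

// Sketch: distance from a point (x,y,z) in mm to the optical center.
double distanceToOpticalCenter(double x, double y, double z)
{
    return std::sqrt(x * x + y * y + z * z);
}

// Sketch: distance to the front of the camera housing.
// offset = value of the ZOffsetOriginToCameraFront parameter, in mm.
double distanceToHousingFront(double x, double y, double z, double offset)
{
    const double zf = z - offset; // z' = z - offset
    return std::sqrt(x * x + y * y + zf * zf);
}
```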
Visualizing Depth Information as an RGB Image#
For better visualization of the data, you can use the following scheme to compute RGB values for a rainbow color mapping from the z coordinates or distance values.
First, a depth value from the [minDepth..maxDepth] value range is converted into a 10-bit value. This 10-bit depth value is mapped to 4 color ranges where each range has a resolution of 8 bits.
minDepth and maxDepth = values of the DepthMin and DepthMax parameters, i.e., the camera's current depth ROI
Depth Value | Mapped to Color Range |
---|---|
0..255 | Red to yellow (255,0,0) -> (255,255,0) |
256..511 | Yellow to green (255,255,0) -> (0,255,0) |
512..767 | Green to aqua (0,255,0) -> (0,255,255) |
768..1023 | Aqua to blue (0,255,255) -> (0,0,255) |
In the following code snippet, depth is either a z value or a distance value in mm.
const int minDepth = (int)m_camera.DepthMin.GetValue();
const int maxDepth = (int)m_camera.DepthMax.GetValue();
// Use 65535.0 so that depth == maxDepth maps to the largest 16-bit value
// instead of overflowing the uint16_t cast below.
const double scale = 65535.0 / (maxDepth - minDepth);
for each pixel {
// Set depth either to the corresponding z value or
// a distance value calculated from the z value.
// Clip depth if required.
if (depth < minDepth)
depth = minDepth;
else if (depth > maxDepth)
depth = maxDepth;
// Compute RGB values.
const uint16_t g16 = (uint16_t)((depth - minDepth) * scale);
const uint16_t val = g16 >> 6 & 0xff; // 8-bit position within the color range
const uint16_t sel = g16 >> 14;       // selects one of the 4 color ranges
uint32_t res = val << 8 | 0xff;
if (sel & 0x01)
{
res = (~res) >> 8 & 0xffff;
}
if (sel & 0x02)
{
res = res << 8;
}
const uint8_t r = res & 0xff;
res = res >> 8;
const uint8_t g = res & 0xff;
res = res >> 8;
const uint8_t b = res & 0xff;
}
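The pseudocode loop above can also be packaged as a self-contained function, for example like the following sketch (the struct and function names are illustrative; minDepth and maxDepth are the DepthMin and DepthMax parameter values):

```cpp
#include <algorithm>
#include <cstdint>

struct Rgb { uint8_t r, g, b; };

// Sketch: map a depth value in mm to a rainbow color, as in the
// snippet above. depth is a z or distance value; values outside
// [minDepth..maxDepth] are clipped.
Rgb depthToRainbow(double depth, int minDepth, int maxDepth)
{
    const double scale = 65535.0 / (maxDepth - minDepth);
    const double clipped =
        std::min(std::max(depth, (double)minDepth), (double)maxDepth);
    const uint16_t g16 = (uint16_t)((clipped - minDepth) * scale);
    const uint16_t val = g16 >> 6 & 0xff; // position within the color range
    const uint16_t sel = g16 >> 14;       // one of the 4 color ranges
    uint32_t res = val << 8 | 0xff;
    if (sel & 0x01)
    {
        res = (~res) >> 8 & 0xffff;
    }
    if (sel & 0x02)
    {
        res = res << 8;
    }
    Rgb rgb;
    rgb.r = res & 0xff;
    rgb.g = (res >> 8) & 0xff;
    rgb.b = (res >> 16) & 0xff;
    return rgb;
}
```

For example, minDepth maps to pure red and maxDepth maps to pure blue.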