# Python Programmer's Guide
As an alternative to pypylon, the official Python wrapper for the pylon API, you can operate the cameras via the Harvester Python library. Harvester is a Python library designed to simplify the image acquisition process in computer vision applications. It enables image acquisition through GenTL producers.
## pypylon

### Installing pypylon

- Install pylon (version 7.5 or above) and the pylon Supplementary Package for blaze (version 1.7 or above).
- Install pypylon:

  ```shell
  pip3 install pypylon
  ```
### Python Programming Samples
The pylon Supplementary Package for blaze includes some Python programming samples that illustrate how to access a blaze camera using Python and pypylon.
Windows: To open the folder containing the programming samples for blaze cameras, press the Win key, go to the Basler folder, and choose blaze Samples. A File Explorer window opens. The Python programming samples are located in the Python folder.

Linux: The Python samples are located in the `/opt/pylon/share/pylon/Samples/blaze/Python/pypylon` folder.
Info

Before running the samples, copy the folder containing the samples to a location of your choice where you have write access.
For each sample, there is a README.md file providing information about what the sample does and about the requirements to run the sample.
### Accessing a blaze Camera Using pypylon
Look at `SimpleGrab/simple_grab.py` or use the following snippet:

```python
from pypylon import pylon

# Select the GenTL producer for blaze cameras.
dc = pylon.DeviceInfo()
dc.SetDeviceClass("BaslerGTC/Basler/GenTL_Producer_for_Basler_blaze_101_cameras")

# Create and open the first blaze camera found.
camera = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice(dc))
camera.Open()

camera.StartGrabbing(pylon.GrabStrategy_LatestImageOnly)
while camera.IsGrabbing():
    grabResult = camera.RetrieveResult(1000, pylon.TimeoutHandling_ThrowException)
    if grabResult.GrabSucceeded():
        pylonDataContainer = grabResult.GetDataContainer()
        for componentIndex in range(pylonDataContainer.DataComponentCount):
            pylonDataComponent = pylonDataContainer.GetDataComponent(componentIndex)
            if pylonDataComponent.ComponentType == pylon.ComponentType_Intensity:
                # The component data is a 1D array; reshape it into a 2D image.
                intensity = pylonDataComponent.Array
                intensity_2d = intensity.reshape(pylonDataComponent.Height, pylonDataComponent.Width)
            pylonDataComponent.Release()
    grabResult.Release()
camera.StopGrabbing()
```
## Harvester

### Installing Harvester

On a command line, issue the following command:
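Harvester is distributed on PyPI as the `harvesters` package:

```shell
pip3 install harvesters
```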
### Python Harvester Programming Samples
The pylon Supplementary Package for blaze includes some Python programming samples that illustrate how to access a blaze camera using Python and Harvester.
Windows: To open the folder containing the programming samples for blaze cameras, press the Win key, go to the Basler folder, and choose blaze Samples. A File Explorer window opens. The Python programming samples are located in the Python folder.

Linux: The Python Harvester samples are located in the `/opt/pylon/share/pylon/Samples/blaze/Python/Harvester` folder.
Info

Before running the samples, copy the folder containing the samples to a location of your choice where you have write access.
For each sample, there is a README.md file providing information about what the sample does and about the requirements to run the sample.
Linux: If you have installed the pylon Software Suite and the pylon Supplementary Package for blaze to a location other than `/opt/pylon`, you have to adapt the path from which Harvester loads the GenTL producer for blaze cameras. In the sample code, search for occurrences of `/opt/pylon/lib/gentlproducer/gtl` and adjust the path.
### Accessing a blaze Camera Using Harvester
Perform the following steps to find all available blaze cameras with Harvester:

- Import the Harvester module.
- Instantiate a Harvester object.
- Load the GenTL producer for blaze cameras.
- Update the list of available devices.
- Get information about available devices.
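The steps above can be sketched as follows. The `.cti` file name is an assumption; check the contents of the producer directory on your system:

```python
from harvesters.core import Harvester

# Instantiate a Harvester object.
h = Harvester()

# Load the GenTL producer for blaze cameras. The exact .cti file name
# may differ; look inside /opt/pylon/lib/gentlproducer/gtl.
h.add_file('/opt/pylon/lib/gentlproducer/gtl/ProducerBaslerBlaze.cti')

# Update the list of available devices.
h.update()

# Get information about available devices.
for device_info in h.device_info_list:
    print(device_info)
```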
To connect to a camera, use the `h.create()` function. The following statement connects to the first available blaze camera:
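For example (assuming a Harvester object `h` with an updated device list and at least one connected camera):

```python
# Create an image acquirer for the first device in the device list.
ia = h.create(0)
```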
If you want to open a specific camera, you can specify it using properties that are provided through the `device_info_list` property of the Harvester class object.
Example:
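One possible form, assuming a Harvester version whose `create()` accepts a dictionary of `device_info_list` properties (the serial number shown is illustrative):

```python
# Open the camera with a specific serial number.
ia = h.create({'serial_number': '0815-0000'})
```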
Note that the specifiers must uniquely identify the device you want to open. No device can be opened if more than one camera matches the criteria.
To disconnect from the camera, issue the following command:
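A sketch of the teardown, assuming the standard Harvester object names used above:

```python
# Destroy the image acquirer to disconnect from the camera.
ia.destroy()

# Optionally, release all loaded producers when done with Harvester.
h.reset()
```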
### Accessing Camera Parameters

Camera parameters are accessible via the image acquirer's `remote_device` property. The remote device's `node_map` property provides access to all camera parameters.
Example:

```python
# Get a parameter.
operatingMode = ia.remote_device.node_map.OperatingMode.value

# Set a parameter.
ia.remote_device.node_map.OperatingMode.value = 'ShortRange'
```
To get a list of all available camera parameters, use the following command:
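The node map supports Python attribute access, so one way to list the available parameter names (a sketch, assuming an open image acquirer `ia`) is:

```python
# Print the names of all nodes exposed by the camera's node map.
print(dir(ia.remote_device.node_map))
```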
Refer to the SimpleGrab sample for more examples of how to get and set blaze camera parameters.
### Acquiring Data

Info

Since Harvester doesn't support the GenICam GenDC streaming format, you always have to disable GenDC streaming before acquiring data. To disable it, use the following command:
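A sketch, assuming the camera exposes the GenDC setting under the parameter name `GenDCStreamingMode`:

```python
# Disable GenDC streaming (assumed parameter name) before acquisition.
ia.remote_device.node_map.GenDCStreamingMode.value = 'Off'
```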
Buffers grabbed from blaze cameras can contain multiple components. There is a Range, an Intensity, and a Confidence component.
The following code snippet enables all available components and configures the pixel format of the Range component so that depth data will be provided as point clouds. In a point cloud, an x, y, z coordinate triple is assigned to each sensor pixel.
```python
# Enable the Range component and request point cloud data.
ia.remote_device.node_map.ComponentSelector.value = "Range"
ia.remote_device.node_map.ComponentEnable.value = True
ia.remote_device.node_map.PixelFormat.value = "Coord3D_ABC32f"

# Enable the Intensity component.
ia.remote_device.node_map.ComponentSelector.value = "Intensity"
ia.remote_device.node_map.ComponentEnable.value = True
ia.remote_device.node_map.PixelFormat.value = "Mono16"

# Enable the Confidence component.
ia.remote_device.node_map.ComponentSelector.value = "Confidence"
ia.remote_device.node_map.ComponentEnable.value = True
ia.remote_device.node_map.PixelFormat.value = "Confidence16"
```
If you want to use depth maps instead of point clouds, configure the Range component as follows:
```python
# Enable the Range component and request a depth map.
ia.remote_device.node_map.ComponentSelector.value = "Range"
ia.remote_device.node_map.ComponentEnable.value = True
ia.remote_device.node_map.PixelFormat.value = "Coord3D_C16"
```
To start acquisition, call the image acquirer's `start()` function:
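```python
ia.start()
```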
A typical grab loop looks like this:
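A minimal sketch of such a loop, assuming an already started image acquirer `ia` (the iteration count is arbitrary; the timeout is in seconds):

```python
for _ in range(10):
    # fetch() blocks until a buffer arrives or the timeout expires.
    with ia.fetch(timeout=3) as buffer:
        # Process the buffer here. It is only valid inside this block.
        print(buffer.payload.components)
```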
Info

A buffer is only valid inside the `with` statement and will be destroyed when you leave the scope. If you want to use buffers outside of the `with` scope, you have to use `np.copy()` to make deep copies.
The following code snippet illustrates how to access a buffer's components:
```python
with ia.fetch() as buffer:
    depth = buffer.payload.components[0]
    intensity = buffer.payload.components[1]
    confidence = buffer.payload.components[2]
```
Harvester returns the components as one-dimensional NumPy arrays. For further processing, it may be convenient to reshape the 1D arrays into 2D or 3D arrays:
```python
depth_arr = depth.data.reshape(depth.height, depth.width,
                               int(depth.num_components_per_pixel))
intensity_arr = intensity.data.reshape(intensity.height, intensity.width)
confidence_arr = confidence.data.reshape(confidence.height, confidence.width)
```
`num_components_per_pixel` depends on the pixel format of the Range component. If the pixel format equals `Coord3D_ABC32f`, the Range component is represented as a point cloud, i.e., there are 3 float values for each pixel. The result is a three-dimensional array storing the x, y, and z coordinates for each pixel. If the pixel format equals `Coord3D_C16`, the Range component is represented as a 16-bit gray-value image, i.e., there is one 16-bit integer value for each pixel.
Refer to the Processing Measurement Results topic for more details about how to convert the gray values of depth maps into mm.
When done with grabbing data, call the `stop()` function:
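```python
ia.stop()
```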
As an alternative to implementing a grab loop as shown above, Harvester can provide a background thread implementing the grab loop and informing the application by issuing callbacks. This approach is illustrated in the SimpleGrab_Callback sample.
## Debugging Applications and Controlling the GigE Vision Heartbeat
GigE Vision cameras like the blaze cameras require the application to periodically access the camera by regularly sending special network packets to the camera. If the camera doesn't receive these heartbeats, it will consider the connection to be broken and won't accept any commands from the application anymore.
When you run your application, the GenTL producer for blaze cameras will normally generate these heartbeats. If you set a breakpoint in your application and the breakpoint is hit, the debugger will suspend all threads including the one sending the heartbeats. Therefore, when you're debugging your application and single-stepping through your code, no heartbeats are sent to the camera. As a result, the camera notices a heartbeat timeout and closes the connection. The default value for the heartbeat timeout is 3000 ms.
To work around this, you have to increase the heartbeat timeout during development. You can do this by setting an environment variable named `GEV_HEARTBEAT_TIMEOUT` and specifying the desired timeout in milliseconds.
Alternatively, you can set the heartbeat timeout in your application like this:
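One way to do this is to set the environment variable from within the application; this is a sketch and assumes it runs before the GenTL producer is loaded (e.g., before `h.add_file()` is called):

```python
import os

# Set the heartbeat timeout to 60 seconds (value in milliseconds).
os.environ['GEV_HEARTBEAT_TIMEOUT'] = '60000'
```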
Info
If you increase the heartbeat timeout to a high value and stop your application without closing the device properly, you won't be able to open the camera again and will receive an error stating that the device is currently in use. This can happen if you stop your application using the debugger or if your application terminates unexpectedly. To open the camera again, you must either wait until the timeout has elapsed or disconnect the network cable from the camera.