Programming with pylon C
The Architecture of the pylon C API
pylon C and GenApi
The pylon C API builds upon GenApi, a software framework that provides a high-level API for generic access to all compliant digital cameras, hiding the peculiarities of the particular interface technology used. Accordingly, the application developer can focus on the functional aspects of the program to be developed. Due to the abstraction provided by GenApi, programs need not be adjusted to work with different types of camera interfaces. Even applications that use different camera interfaces at the same time can be written this way.
The dependency of pylon C upon GenApi shows in some places, mostly where names of functions or other entities start with a GenApi prefix. Wherever this is the case, an element of the underlying GenApi layer is directly exposed to the pylon C user.
Objects and Handles
The pylon C API defines several data entities termed 'objects'. These are used to expose certain aspects of the pylon C functionality to the user. For example, there is a stream grabber object that serves the purpose of receiving image data streamed by a camera (which, in turn, is also represented by an object called a camera object).
Inside a program, every object is represented and uniquely identified by a handle. A function performing an action that involves an object is passed a handle for that object. Handles are type-safe, which is to say that handles representing different kinds of objects are of different types. Accordingly, the C language type system is able to detect errors such as passing a wrong kind of object to a function call. Furthermore, handles are unique in the sense that no two handles representing two different objects will ever be equal when they are compared. This is even true if the comparison is made between two handles of different types after they were forcefully cast to a common type.
Camera Objects
In pylon C, physical camera devices are represented by camera objects (sometimes also referred to as device objects). A camera object handle is of the PYLON_DEVICE_HANDLE type.
A camera object is used for:
- Establishing communication with a camera (i.e., opening the camera object)
- Accessing the camera's parameters (to query or change its configuration, see Parameters below)
- Obtaining a stream grabber object used for grabbing images (see Stream Grabbers below)
- Obtaining an event grabber object used for retrieving event messages (see Event Grabbers below)
- Obtaining a chunk parser object used for analyzing a self-descriptive, structured data stream (see Chunk Parsers below)
Transport Layers
The term 'transport layer' is used as an abstraction for a physical interface such as Gigabit Ethernet (GigE), USB, or CoaXPress (CXP). For each of these interfaces, there are drivers that provide access to camera devices. As an abstraction of these drivers, a transport layer provides the following functionality:
- Device discovery (also called device enumeration)
- Reading and writing camera registers
- Grabbing images
- Retrieving event messages
- Configuring the transport layer itself (e.g. timeouts)
- Creating camera objects
- Deleting camera objects
pylon C includes multiple transport layers, e.g.:
- PylonGigE for Gigabit Ethernet cameras using the GigE Vision protocol
- PylonUsb for USB3 Vision compliant cameras
- PylonCLSer for Camera Link cameras using the CL serial interface (limited to camera configuration only)
- PylonGtc for CoaXPress compliant cameras using GenTL (generic transport layer)
A transport layer is strictly an internal concept of the pylon C API that application writers need not be concerned with, as there is no user-visible entity related to it. This means there is no 'transport layer object' in pylon C. As every camera has exactly one transport layer, it is, for all practical purposes, considered an integral part of the camera object. However, being aware of the transport layer concept may be useful for properly understanding device enumeration and communication.
Waiting
Typically, pylon C applications are event-driven. This means that such applications, or the threads running within them, will often wait for some condition to become true, for example, a buffer with image data to become available.
pylon C provides a generalized mechanism for applications to wait for externally generated events, based on the concepts of wait objects and wait object containers. Wait objects provide an abstraction layer for operating system-specific synchronization mechanisms. Events in pylon C include image data that become available at a stream grabber (see Retrieving Grabbed Images), or event data that become available at an event grabber. With wait objects, pylon C provides a mechanism for applications to wait for these events. Moreover, applications can create wait objects of their own that can be explicitly signaled.
Wait objects can be grouped into wait object containers, and wait functions are provided by pylon C for the application to wait until either any one or all wait objects in a container are signaled. This way, events originating from multiple sources can be processed by a single thread.
Wait objects are represented by handles of the PYLON_WAITOBJECT_HANDLE type, while handles of the PYLON_WAITOBJECTS_HANDLE type represent wait object containers.
Stream Grabbers
A camera object, as defined by the pylon C architecture, is capable of delivering one or more streams of image data (see below for an exception). To grab images from a stream, a stream grabber object is required. Stream grabber objects cannot be created directly by an application. They are managed by camera objects, which create and pass out stream grabbers. All stream grabbers expose the very same interface, regardless of the transport mechanism they use for data transfer. This means that for all transport layers, images are grabbed from streams in exactly the same way. The details of grabbing images are described in the Grabbing Images section below.
NOTE
There may be cameras for which image data support is not implemented in pylon C. Device objects for cameras of this kind will have no stream grabber at all. Such device objects can still be used to access device parameters; grabbing, however, will not be possible. Throughout the remainder of this document, it is assumed that there is at least one image data channel available for every device object.
If a camera is capable of delivering multiple data streams, its device object will provide a stream grabber for each data stream. A device object can report the number of provided stream grabbers. Stream grabber objects are represented by handles of the PYLON_STREAMGRABBER_HANDLE type. The Grabbing Images section describes their use in detail.
Event Grabbers
In addition to sending image data streams, some cameras are capable of sending event messages to inform the application about certain conditions that arise. For example, a camera may send an event message when the image acquisition process is complete within the camera, but before the image data are actually transferred out of the camera. The application might need this information to know when it is safe to start a handling system that moves the next part into position for a subsequent acquisition, without having to wait for the image data to arrive.
Event grabber objects are used to receive event messages. Retrieving and processing event messages is described below in the Handling Camera Events section.
Event grabber objects are represented by handles of the PYLON_EVENTGRABBER_HANDLE type.
Chunk Parsers
If the so-called chunk mode is activated, Basler cameras can send additional information appended to the image data. When chunk mode is enabled, the camera sends an extended data stream consisting of the image data combined with additional information, such as a frame number or a time stamp. The extended data stream is self-descriptive. pylon C chunk parser objects are used for parsing the extended data stream and for providing access to the additional information. Use of chunk parser objects is explained in the Chunk Parser: Accessing Chunk Features section.
Chunk parser objects are represented by handles of the PYLON_CHUNKPARSER_HANDLE type.
Parameters
The behavior of some kinds of objects (camera objects in particular) can be controlled by the application through a set of related parameters (sometimes also called features). Parameters are named entities having a value that may or may not be readable or writable by the application. Writing a new value to an object's parameter will generally modify the behavior of that object.
Every parameter has an associated type. There are currently six different types defined:
- Integer - An integer parameter represents a feature that can be set to an integer number, such as a camera's image width or height in pixels. The current value of an integer parameter is augmented by a minimum and a maximum value, defining a range of allowed values for the parameter, and an increment that acts as a 'step width' for changes to the parameter's value. The set of all allowed values for an integer parameter can hence be expressed as x = minimum + N * increment, with N = 0, 1, 2, …, subject to x <= maximum. The current value, minimum, maximum, and increment can all be accessed as 64-bit values via functions provided by pylon C for this purpose.
- Float - A float parameter represents a feature that can be set by a floating-point value, such as a camera's exposure time expressed in seconds. It resembles the integer parameter with two exceptions: all values are of the 'double' type (double precision floating point numbers as defined by the IEEE 754 standard), and there is no increment value. Hence, a float parameter is allowed to take any value from the interval {minimum}<=x<={maximum}.
- Boolean - A boolean parameter represents a binary-valued feature that can be enabled or disabled. pylon C provides functions for checking the current state and setting the parameter. An example of a boolean parameter would be a 'switch' to enable or disable a particular feature, such as a camera's external trigger input.
- String - The parameter's value is a text string like, for example, a camera's type designator or its serial number.
- Enumeration - The parameter can take any value from a predefined set. The set of possible values is organized as an ordered list, such that any value can be identified both by a name (text string) and by an index (integer number).
- Command - A command parameter represents an executable feature; it provides a way of telling an object to perform a certain action. The operation thus triggered may need some time to execute, and will eventually terminate. An example is the 'perform white balance auto calibration' command that may be available for a color camera. A command parameter can be queried for its execution state (whether the command is still executing or has already terminated).
Every parameter also has an associated access mode that determines the kind of access allowed. There are currently four access modes defined:
- Implemented - A parameter with the given name actually exists, because the object does implement the related feature. Some features are only available for certain devices and not for others. For example, a monochrome camera will not have a white balance feature, and consequently will not have a parameter named "WhiteBalance".
- Available - Depending on the object's state, a parameter may be temporarily unavailable. For example, a camera parameter related to external triggering may not be available while the camera is in free run mode. 'Available' implies 'implemented'.
- Readable - A parameter's value can be read. 'Readable' implies 'available'.
- Writable - A parameter's value can be changed (set). 'Writable' implies 'available'.
Parameters can be both readable and writable at the same time.
Image Terminology Issues
Throughout this document, a distinction is made between image acquisition, image data transfer, and image grabbing. It is essential to understand the exact meaning of these terms.
The operations performed internally by the camera to produce a single image are collectively termed image acquisition. This includes, among other things, controlling exposure of the image sensor and sensor read-out. This process eventually results in the camera being ready to transfer image data out of the camera to the computer. Image data transfer designates the transfer of the acquired data from the camera's memory to the computer via the camera's interface, e.g., USB or Gigabit Ethernet. The process of writing the image data to the computer's main memory is referred to as image grabbing.
Programming the pylon C API
Debugging pylon Applications Using GigE Cameras
When debugging a pylon application that uses GigE cameras, you may encounter heartbeat timeouts. The application must send special network packets to the camera at defined intervals. If the camera doesn't receive these heartbeats, it considers the connection broken and won't accept any commands from the application. Therefore, the heartbeat timeout of a camera must be set to a higher value when debugging. If you're using the PylonGigEConnectionGuard, this tool can automatically set the heartbeat timeout to the default of 60 minutes or to a value defined by you. For more information, see Debugging pylon Applications Using GigE Cameras.
Initialization/Uninitialization of the pylon C Runtime Library
The pylon C runtime system must be initialized before use. A pylon based application must call the PylonInitialize() function before using any other functions of the pylon C runtime system.
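As in the pylon C samples:

```c
/* Initialize the pylon runtime system before calling any other pylon function. */
PylonInitialize();
```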
Before an application exits, it must call the PylonTerminate() function to free resources allocated by the pylon C runtime system.
/* ... Shut down the pylon runtime system. Don't call any pylon method after
calling PylonTerminate(). */
PylonTerminate();
Error Handling
All pylon C API functions return a value of the GENAPIC_RESULT type (see Error Codes), defined in GenApiCError.h. The return value is GENAPI_E_OK if the function completed normally without detecting any errors. Otherwise, an error code is returned: either one of the GENAPI_E_XXX error codes defined in GenApiCError.h, if the error was detected in the GenApi layer that forms the basis of pylon C, or one of the PYLON_E_XXX error codes defined in PylonCError.h.
In addition to returning an error code, pylon C functions set up a textual error description that applications can retrieve. It consists of two parts that can be accessed via GenApiGetLastErrorMessage() and GenApiGetLastErrorDetail(). The string returned by GenApiGetLastErrorMessage() contains a concise description of the most recent error, suitable to be displayed to the user as part of an error message. Additional error information is returned by GenApiGetLastErrorDetail(); this error information is intended to aid in identifying the conditions that caused the error.
This is what a typical error handler might look like:
/* This function demonstrates how to retrieve the error message for the last failed
function call. */
void printErrorAndExit( GENAPIC_RESULT errc )
{
char* errMsg;
size_t length;
/* Retrieve the error message.
... First find out how big the buffer must be, */
GenApiGetLastErrorMessage( NULL, &length );
errMsg = (char*) malloc( length );
/* ... and retrieve the message. */
GenApiGetLastErrorMessage( errMsg, &length );
fprintf( stderr, "%s (%#08x).\n", errMsg, (unsigned int) errc );
free( errMsg );
/* Retrieve more details about the error.
... First find out how big the buffer must be, */
GenApiGetLastErrorDetail( NULL, &length );
errMsg = (char*) malloc( length );
/* ... and retrieve the message. */
GenApiGetLastErrorDetail( errMsg, &length );
fprintf( stderr, "%s\n", errMsg );
free( errMsg );
PylonTerminate(); /* Releases all pylon resources */
pressEnterToExit();
exit( EXIT_FAILURE );
}
All programming examples use a macro to check for error conditions and conditionally invoke the above error handler:
Enumerating and Creating Camera Objects
In pylon C, camera devices are managed by means of 'camera objects'. A camera object is a software abstraction represented by a handle of the PYLON_DEVICE_HANDLE type. Available devices are discovered dynamically, using facilities provided by the transport layer.
Device discovery (also known as enumeration) is a two-step process. In the first step, the PylonEnumerateDevices() function returns, in its numDevices argument, the total number of camera devices detected across all interfaces. Assuming this value is N, every camera can then be accessed using a numeric index from the range [0 … N-1]. In the second step, PylonGetDeviceInfo() is called for every index value in turn. By looking at the fields of the PylonDeviceInfo_t struct, every individual camera can be identified. A call to PylonGetDeviceInfoHandle() translates the device index into a PYLON_DEVICE_INFO_HANDLE that can be used to query device properties. Finally, a device object (represented by a PYLON_DEVICE_HANDLE) is created by calling PylonCreateDeviceByIndex(). A PYLON_DEVICE_HANDLE is required for all operations involving a device.
The code snippet below illustrates device enumeration and creation:
/* Enumerate all camera devices. You must call
PylonEnumerateDevices() before creating a device. */
res = PylonEnumerateDevices( &numDevices );
CHECK( res );
if (0 == numDevices)
{
fprintf( stderr, "No devices found.\n" );
PylonTerminate();
pressEnterToExit();
exit( EXIT_FAILURE );
}
/* Get a handle for the first device found. */
res = PylonCreateDeviceByIndex( 0, &hDev );
CHECK( res );
If an application is done using a device, the device handle must be destroyed:
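Following the pattern of the pylon C samples, this is done with PylonDestroyDevice() (sketch):

```c
/* ... Release the pylon device. The handle must not be used afterwards. */
res = PylonDestroyDevice( hDev );
CHECK( res );
```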
Opening and Closing a Camera
Before access to camera parameters is possible, the transport layer must be initialized and a connection to the physical camera device must be established. This is achieved by calling the PylonDeviceOpen() function.
/* Before using the device, it must be opened. Open it for configuring
parameters and for grabbing images. */
res = PylonDeviceOpen( hDev, PYLONC_ACCESS_MODE_CONTROL | PYLONC_ACCESS_MODE_STREAM );
CHECK( res );
To release the connection to a device, and to free all related resources, call the PylonDeviceClose() function.
/* ... Close and release the pylon device. The stream grabber becomes invalid
after closing the pylon device. Don't call stream grabber related methods after
closing or releasing the device. */
res = PylonDeviceClose( hDev );
CHECK( res );
Camera Configuration
This section describes how a camera object is used to configure the camera device's parameters. For a discussion of all relevant concepts, see Parameters. Parameters are identified by their names. Using the pylon Viewer, you can easily browse through all parameters that are available for a particular type of camera. This is described in more detail under Browsing Parameters.
All functions that work on parameters respect accessibility. If the desired kind of access is not (currently) possible, error messages are returned accordingly. It is also possible to check for sufficient accessibility beforehand, using one of the following functions: PylonDeviceFeatureIsImplemented(), PylonDeviceFeatureIsAvailable(), PylonDeviceFeatureIsReadable(), or PylonDeviceFeatureIsWritable().
/* This function demonstrates how to check the presence, readability, and writability
of a feature. */
void demonstrateAccessibilityCheck( PYLON_DEVICE_HANDLE hDev )
{
_Bool val; /* Output of the check functions */
/* Check to see if a feature is implemented at all. */
val = PylonDeviceFeatureIsImplemented( hDev, "Width" );
printf( "The 'Width' feature %s implemented\n", val ? "is" : "isn't" );
val = PylonDeviceFeatureIsImplemented( hDev, "MyCustomFeature" );
printf( "The 'MyCustomFeature' feature %s implemented\n", val ? "is" : "isn't" );
/* Although a feature is implemented by the device, it might not be available
with the device in its current state. Check to see if the feature is currently
available. The PylonDeviceFeatureIsAvailable sets val to 0 if either the feature
is not implemented or if the feature is not currently available. */
val = PylonDeviceFeatureIsAvailable( hDev, "BinningVertical" );
printf( "The 'BinningVertical' feature %s available\n", val ? "is" : "isn't" );
/* If a feature is available, it could be read-only, write-only, or both
readable and writable. Use the PylonDeviceFeatureIsReadable() and the
PylonDeviceFeatureIsWritable() functions. It is safe to call these functions
for features that are currently not available or not implemented by the device.
A feature that is not available or not implemented is neither readable nor writable.
The readability and writability of a feature can change depending on the current
state of the device. For example, the Width parameter might not be writable when
the camera is acquiring images. */
val = PylonDeviceFeatureIsReadable( hDev, "Width" );
printf( "The 'Width' feature %s readable\n", val ? "is" : "isn't" );
val = PylonDeviceFeatureIsReadable( hDev, "MyCustomFeature" );
printf( "The 'MyCustomFeature' feature %s readable\n", val ? "is" : "isn't" );
val = PylonDeviceFeatureIsWritable( hDev, "Width" );
printf( "The 'Width' feature %s writable\n", val ? "is" : "isn't" );
printf( "\n" );
}
The next code snippet demonstrates how to read and set an integer parameter:
/* This function demonstrates how to handle integer camera parameters. */
void demonstrateIntFeature( PYLON_DEVICE_HANDLE hDev )
{
static const char featureName[] = "Width"; /* Name of the feature used in this sample: AOI Width */
int64_t val, min, max, incr; /* Properties of the feature */
GENAPIC_RESULT res; /* Return value */
if (PylonDeviceFeatureIsReadable( hDev, featureName ))
{
/*
Query the current value, the allowed value range, and the increment of the feature.
For some integer features, you are not allowed to set every value within the
value range. For example, for some cameras the Width parameter must be a multiple
of 2. These constraints are expressed by the increment value. Valid values
follow the rule: val >= min && val <= max && val == min + n * inc. */
res = PylonDeviceGetIntegerFeatureMin( hDev, featureName, &min ); /* Get the minimum value. */
CHECK( res );
res = PylonDeviceGetIntegerFeatureMax( hDev, featureName, &max ); /* Get the maximum value. */
CHECK( res );
res = PylonDeviceGetIntegerFeatureInc( hDev, featureName, &incr ); /* Get the increment value. */
CHECK( res );
res = PylonDeviceGetIntegerFeature( hDev, featureName, &val ); /* Get the current value. */
CHECK( res );
#if __STDC_VERSION__ >= 199901L || defined(__GNUC__)
printf( "%s: min= %lld max= %lld incr=%lld Value=%lld\n", featureName, (long long) min, (long long) max, (long long) incr, (long long) val );
#else
printf( "%s: min= %I64d max= %I64d incr=%I64d Value=%I64d\n", featureName, min, max, incr, val );
#endif
if (PylonDeviceFeatureIsWritable( hDev, featureName ))
{
/* Set the Width half-way between minimum and maximum. */
res = PylonDeviceSetIntegerFeature( hDev, featureName, min + (max - min) / incr / 2 * incr );
CHECK( res );
}
else
fprintf( stderr, "The %s feature is not writable.\n", featureName );
}
else
fprintf( stderr, "The %s feature is not readable.\n", featureName );
}
For convenience, 32-bit variants of the integer access functions are also provided. These make it easier to handle the common case where all values are known to fit into 32-bit entities:
/* The integer functions illustrated above take 64 bit integers as output parameters. There are variants
of the integer functions that accept 32 bit integers instead. The Get.... functions return
an error when the value returned by the device doesn't fit into a 32 bit integer. */
void demonstrateInt32Feature( PYLON_DEVICE_HANDLE hDev )
{
static const char featureName[] = "Height"; /* Name of the feature used in this sample: AOI height */
int32_t val, min, max, incr; /* Properties of the feature */
GENAPIC_RESULT res; /* Return value */
if (PylonDeviceFeatureIsReadable( hDev, featureName ))
{
/*
Query the current value, the allowed value range, and the increment of the feature.
For some integer features, you are not allowed to set every value within the
value range. For example, for some cameras the Width parameter must be a multiple
of 2. These constraints are expressed by the increment value. Valid values
follow the rule: val >= min && val <= max && val == min + n * inc. */
res = PylonDeviceGetIntegerFeatureMinInt32( hDev, featureName, &min ); /* Get the minimum value. */
CHECK( res );
res = PylonDeviceGetIntegerFeatureMaxInt32( hDev, featureName, &max ); /* Get the maximum value. */
CHECK( res );
res = PylonDeviceGetIntegerFeatureIncInt32( hDev, featureName, &incr ); /* Get the increment value. */
CHECK( res );
res = PylonDeviceGetIntegerFeatureInt32( hDev, featureName, &val ); /* Get the current value. */
CHECK( res );
printf( "%s: min= %d max= %d incr=%d Value=%d\n", featureName, min, max, incr, val );
if (PylonDeviceFeatureIsWritable( hDev, featureName ))
{
/* Set the value half-way between minimum and maximum. */
res = PylonDeviceSetIntegerFeatureInt32( hDev, featureName, min + (max - min) / incr / 2 * incr );
CHECK( res );
}
else
fprintf( stderr, "The %s feature is not writable.\n", featureName );
}
else
fprintf( stderr, "The %s feature is not readable.\n", featureName );
}
Setting float parameters is similar, but there is no increment:
/* Some features are floating point features. This function illustrates how to set and get floating
point parameters. */
void demonstrateFloatFeature( PYLON_DEVICE_HANDLE hDev )
{
static const char featureName[] = "Gamma"; /* The name of the feature used */
_Bool isWritable; /* Is the feature writable? */
double min, max, value; /* Value range and current value */
GENAPIC_RESULT res; /* Return value */
if (PylonDeviceFeatureIsReadable( hDev, featureName ))
{
/* Query the value range and the current value. */
res = PylonDeviceGetFloatFeatureMin( hDev, featureName, &min );
CHECK( res );
res = PylonDeviceGetFloatFeatureMax( hDev, featureName, &max );
CHECK( res );
res = PylonDeviceGetFloatFeature( hDev, featureName, &value );
CHECK( res );
printf( "%s: min = %4.2f, max = %4.2f, value = %4.2f\n", featureName, min, max, value );
/* Set the value to the midpoint of the value range. */
isWritable = PylonDeviceFeatureIsWritable( hDev, featureName );
if (isWritable)
{
value = 0.5 * (min + max);
printf( "Setting %s to %4.2f\n", featureName, value );
res = PylonDeviceSetFloatFeature( hDev, featureName, value );
CHECK( res );
}
else
fprintf( stderr, "The %s feature is not writable.\n", featureName );
}
else
fprintf( stderr, "The %s feature is not readable.\n", featureName );
}
Setting boolean parameters is even simpler:
/* Some features are boolean features that can be switched on and off.
This function illustrates how to access boolean features. */
void demonstrateBooleanFeature( PYLON_DEVICE_HANDLE hDev )
{
static const char featureName[] = "GammaEnable"; /* The name of the feature */
_Bool isWritable; /* Is the feature writable? */
_Bool value; /* The value of the feature */
GENAPIC_RESULT res; /* Return value */
/* Check to see if the feature is writable. */
isWritable = PylonDeviceFeatureIsWritable( hDev, featureName );
if (isWritable)
{
/* Retrieve the current state of the feature. */
res = PylonDeviceGetBooleanFeature( hDev, featureName, &value );
CHECK( res );
printf( "The %s feature is %s\n", featureName, value ? "on" : "off" );
/* Set a new value. */
value = (_Bool) !value; /* New value */
printf( "Switching the %s feature %s\n", featureName, value ? "on" : "off" );
res = PylonDeviceSetBooleanFeature( hDev, featureName, value );
CHECK( res );
}
else
printf( "The %s feature isn't writable\n", featureName );
}
An enumeration parameter can only be set to one of the members of a predefined set:
/* There are camera features that behave like enumerations. These features can take a value from a fixed
set of possible values. One example is the pixel format feature. This function illustrates how to deal with
enumeration features.
*/
void demonstrateEnumFeature( PYLON_DEVICE_HANDLE hDev )
{
char value[64]; /* The current value of the feature */
size_t len; /* The length of the string */
GENAPIC_RESULT res; /* Return value */
_Bool isWritable;
_Bool supportsMono8;
_Bool supportsYUV422Packed;
_Bool supportsMono16;
/* The allowed values for an enumeration feature are represented as strings. Use the
PylonDeviceFeatureFromString() and PylonDeviceFeatureToString() methods for setting and getting
the value of an enumeration feature. */
/* Get the current value of the enumeration feature. */
len = sizeof( value );
res = PylonDeviceFeatureToString( hDev, "PixelFormat", value, &len );
CHECK( res );
printf( "PixelFormat: %s\n", value );
/*
For an enumeration feature, the pylon Viewer's "Feature Documentation" window lists the
names of the possible values. Some of the values might not be supported by the device.
To check if a certain "SomeValue" value for a "SomeFeature" feature can be set, call the
PylonDeviceFeatureIsAvailable() function with "EnumEntry_SomeFeature_SomeValue" as an argument.
*/
/* Check to see if the Mono8 pixel format can be set. */
supportsMono8 = PylonDeviceFeatureIsAvailable( hDev, "EnumEntry_PixelFormat_Mono8" );
printf( "Mono8 %s a supported value for the PixelFormat feature\n", supportsMono8 ? "is" : "isn't" );
/* Check to see if the YUV422Packed pixel format can be set. */
supportsYUV422Packed = PylonDeviceFeatureIsAvailable( hDev, "EnumEntry_PixelFormat_YUV422Packed" );
printf( "YUV422Packed %s a supported value for the PixelFormat feature\n", supportsYUV422Packed ? "is" : "isn't" );
/* Check to see if the Mono16 pixel format can be set. */
supportsMono16 = PylonDeviceFeatureIsAvailable( hDev, "EnumEntry_PixelFormat_Mono16" );
printf( "Mono16 %s a supported value for the PixelFormat feature\n", supportsMono16 ? "is" : "isn't" );
/* Before writing a value, we recommend checking to see if the enumeration feature is
currently writable. */
isWritable = PylonDeviceFeatureIsWritable( hDev, "PixelFormat" );
if (isWritable)
{
/* The PixelFormat feature is writable, set it to one of the supported values. */
if (supportsMono16)
{
printf( "Setting PixelFormat to Mono16\n" );
res = PylonDeviceFeatureFromString( hDev, "PixelFormat", "Mono16" );
CHECK( res );
}
else if (supportsYUV422Packed)
{
printf( "Setting PixelFormat to YUV422Packed\n" );
res = PylonDeviceFeatureFromString( hDev, "PixelFormat", "YUV422Packed" );
CHECK( res );
}
else if (supportsMono8)
{
printf( "Setting PixelFormat to Mono8\n" );
res = PylonDeviceFeatureFromString( hDev, "PixelFormat", "Mono8" );
CHECK( res );
}
/* Reset the PixelFormat feature to its previous value. */
PylonDeviceFeatureFromString( hDev, "PixelFormat", value );
}
}
The next code snippet demonstrates use of a command parameter:
/* There are camera features, such as starting image acquisition, that represent a command.
This function, which loads the factory settings, illustrates how to execute a command feature. */
void demonstrateCommandFeature( PYLON_DEVICE_HANDLE hDev )
{
GENAPIC_RESULT res; /* Return value. */
/* Before executing the user set load command, the user set selector must be
set to the default set. Since we are focusing on the command feature,
we skip the recommended steps for checking the availability of the user set
related features and values. */
/* Choose the default configuration set (with one of the factory setups chosen). */
res = PylonDeviceFeatureFromString( hDev, "UserSetSelector", "Default" );
CHECK( res );
/* Execute the user set load command. */
printf( "Loading the default set.\n" );
res = PylonDeviceExecuteCommandFeature( hDev, "UserSetLoad" );
CHECK( res );
}
All kinds of parameters can be accessed as strings, as demonstrated by the following code snippet:
/*
Regardless of the parameter's type, any parameter value can be retrieved as a string. Correspondingly, each
parameter can be set by passing in a string. This function illustrates how to get and set the
Width parameter as a string. As demonstrated above, the Width parameter is of the integer type.
*/
void demonstrateFromStringToString( PYLON_DEVICE_HANDLE hDev )
{
static const char featureName[] = "Width"; /* The name of the feature */
size_t len;
char* buf;
char smallBuf[1];
char properBuf[32];
GENAPIC_RESULT res; /* Return value */
/* Get the value of a feature as a string. Normally, getting the value consists of three steps:
1.) Determine the required buffer size.
2.) Allocate the buffer.
3.) Retrieve the value. */
/* ... Get the required buffer size. The size is queried by
passing a NULL pointer as a pointer to the buffer. */
res = PylonDeviceFeatureToString( hDev, featureName, NULL, &len );
CHECK( res );
/* ... Len is set to the required buffer size (terminating zero included).
Allocate the memory and retrieve the string. */
buf = (char*) alloca( len );
res = PylonDeviceFeatureToString( hDev, featureName, buf, &len );
CHECK( res );
printf( "%s: %s\n", featureName, buf );
/* You are not necessarily required to query the buffer size in advance. If the buffer is
big enough, passing in a buffer and a pointer to its length will work.
When the buffer is too small, an error is returned. */
/* Passing in a buffer that is too small */
len = sizeof( smallBuf );
res = PylonDeviceFeatureToString( hDev, featureName, smallBuf, &len );
if (res == GENAPI_E_INSUFFICIENT_BUFFER)
{
/* The buffer was too small. The required size is indicated by len. */
printf( "Buffer is too small for the value of '%s'. The required buffer size is %d\n", featureName, (int) len );
}
else
CHECK( res ); /* Unexpected return value */
/* Passing in a buffer with sufficient size. */
len = sizeof( properBuf );
res = PylonDeviceFeatureToString( hDev, featureName, properBuf, &len );
CHECK( res );
/* A feature can be set as a string using the PylonDeviceFeatureFromString() function.
If the content of the string cannot be converted to the type of the feature, an
error is returned. */
res = PylonDeviceFeatureFromString( hDev, featureName, "fourty-two" ); /* Cannot be converted to an integer */
if (res != GENAPI_E_OK)
{
/* Print out an error message. */
size_t l;
char* msg;
GenApiGetLastErrorMessage( NULL, &l ); /* Retrieve buffer size for the error message */
msg = (char*) malloc( l ); /* Provide memory */
GenApiGetLastErrorMessage( msg, &l ); /* Retrieve the message */
printf( "%s\n", msg );
free( msg );
}
}
Grabbing Images#
Grabbing Using the PylonDeviceGrabSingleFrame() Function#
The easiest way to grab an image using the pylon C API is to call the PylonDeviceGrabSingleFrame() function. First, set up the camera using the methods described in Camera Configuration. Then call PylonDeviceGrabSingleFrame() to grab the image. It will adjust all necessary parameters and grab an image into the buffer passed. This is shown in the following code snippet:
PylonGrabResult_t grabResult;
_Bool bufferReady;
/* Grab one single frame from stream channel 0. The
camera is set to single frame acquisition mode.
Wait up to 500 ms for the image to be grabbed. */
res = PylonDeviceGrabSingleFrame( hDev, 0, imgBuf, payloadSize,
&grabResult, &bufferReady, 500 );
if (GENAPI_E_OK == res && !bufferReady)
{
/* Timeout occurred. */
printf( "Frame %d: timeout\n", i + 1 );
}
CHECK( res );
/* Check to see if the image was grabbed successfully. */
if (grabResult.Status == Grabbed)
{
/* Success. Perform image processing. */
getMinMax( imgBuf, grabResult.SizeX, grabResult.SizeY, &min, &max );
printf( "Grabbed frame #%2d. Min. gray value = %3u, Max. gray value = %3u\n", i + 1, min, max );
}
else if (grabResult.Status == Failed)
{
fprintf( stderr, "Frame %d wasn't grabbed successfully. Error code = 0x%08X\n",
i + 1, grabResult.ErrorCode );
}
Note
Even though using PylonDeviceGrabSingleFrame() is quite easy, it has some limitations. For instance, it performs significant setup and shutdown work on each invocation, causing considerable overhead and execution time.
If you need more flexibility or want to achieve the maximum possible frame rate you'll need to grab using stream grabber objects (see Grabbing Using Stream Grabber Objects). This will allow maximum control over the grab process.
Grabbing Using Stream Grabber Objects#
The following sections describe the use of stream grabber objects. The order of the sections reflects the sequence in which a typical grab application will use a stream grabber object.
Getting a Stream Grabber#
Stream grabber objects are managed by camera objects. The number of stream grabbers provided by a camera can be determined using the PylonDeviceGetNumStreamGrabberChannels() function. The PylonDeviceGetStreamGrabber() function returns a PYLON_STREAMGRABBER_HANDLE
. Prior to retrieving a stream grabber handle, the camera device must have been opened. Note that the value returned by PylonDeviceGetNumStreamGrabberChannels() may be 0, as some camera devices, e.g. Camera Link cameras, have no stream grabber. These cameras can still be parameterized as described, but grabbing is not supported for them. Before use, stream grabbers must be opened by a call to PylonStreamGrabberOpen(). When image acquisition is finished, the stream grabber must be closed by a call to PylonStreamGrabberClose().
A stream grabber also provides a wait object for the application to be notified whenever a buffer containing new image data becomes available.
Example:
/* Image grabbing is done using a stream grabber.
A device may be able to provide different streams. A separate stream grabber must
be used for each stream. In this sample, we create a stream grabber for the default
stream, i.e., the first stream ( index == 0 ).
*/
/* Get the number of streams supported by the device and the transport layer. */
res = PylonDeviceGetNumStreamGrabberChannels( hDev, &nStreams );
CHECK( res );
if (nStreams < 1)
{
fprintf( stderr, "The transport layer doesn't support image streams\n" );
PylonTerminate();
pressEnterToExit();
exit( EXIT_FAILURE );
}
/* Create and open a stream grabber for the first channel. */
res = PylonDeviceGetStreamGrabber( hDev, 0, &hGrabber );
CHECK( res );
res = PylonStreamGrabberOpen( hGrabber );
CHECK( res );
/* Get a handle for the stream grabber's wait object. The wait object
allows waiting for buffers to be filled with grabbed data. */
res = PylonStreamGrabberGetWaitObject( hGrabber, &hWait );
CHECK( res );
Note
The lifetime of a stream grabber is managed by the camera owning it. There is no need (and no facility) to dispose of a
PYLON_STREAMGRABBER_HANDLE
. This also means that, if the camera object owning the stream grabber is deleted by calling PylonDestroyDevice() on it, the related stream grabber handle will become invalid.
Configuring a Stream Grabber#
Independent of the physical camera interface used, every stream grabber provides two mandatory parameters:
- MaxBufferSize - Maximum size in bytes of a buffer used for grabbing images
- MaxNumBuffer - Maximum number of buffers used for grabbing images
A grab application must set the above two parameters before grabbing begins. pylon C provides a set of convenience functions for easily accessing these parameters: PylonStreamGrabberSetMaxNumBuffer(), PylonStreamGrabberGetMaxNumBuffer(), PylonStreamGrabberSetMaxBufferSize(), PylonStreamGrabberGetMaxBufferSize().
The payload size is determined by the configuration of the camera object and the stream grabber object. Both objects may provide a PayloadSize parameter. The PylonStreamGrabberGetPayloadSize() function returns the minimum size required for the buffer.
Depending on the transport technology, a stream grabber can provide further parameters such as streaming-related timeouts. All these parameters are initially set to reasonable default values, so that grabbing works without having to adjust them. An application can gain access to these parameters using the method described in Generic Parameter Access.
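As a hedged sketch of such generic access (assuming an open `hGrabber` handle and the `CHECK()` macro from the surrounding examples; the parameter name used here is an example from the GigE transport layer and may not exist on other transports), a transport-specific stream grabber parameter could be read through the grabber's node map:

```c
/* Sketch: generic access to a transport-layer stream grabber parameter.
   hGrabber is assumed to be an open PYLON_STREAMGRABBER_HANDLE; CHECK()
   is the error-checking macro used throughout this document. The node
   name "Statistic_Total_Buffer_Count" is an illustrative assumption. */
NODEMAP_HANDLE hNodeMap;
NODE_HANDLE hNode;
int64_t value;
GENAPIC_RESULT res;

/* Get the GenApi node map exposing the grabber's parameters. */
res = PylonStreamGrabberGetNodeMap( hGrabber, &hNodeMap );
CHECK( res );
/* Look up the parameter node by name. */
res = GenApiNodeMapGetNode( hNodeMap, "Statistic_Total_Buffer_Count", &hNode );
CHECK( res );
/* Read the integer value. */
res = GenApiIntegerGetValue( hNode, &value );
CHECK( res );
printf( "Total buffer count: %lld\n", (long long) value );
```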
Preparing a Stream Grabber for Grabbing#
Depending on the transport layer used for grabbing images, a number of system resources may be required, for example:
- DMA resources
- Memory for the driver's data structures
A call to PylonStreamGrabberPrepareGrab() allocates all required resources and causes the camera object to change its state. For a typical camera, any parameters affecting resource requirements (AOI, pixel format, binning, etc.) will be read-only after the call to PylonStreamGrabberPrepareGrab(). These parameters must be set up beforehand and cannot be changed while the camera object is in this state.
Providing Memory for Grabbing#
All pylon C transport layers utilize user-provided buffer memory for grabbing image and chunk data. An application is required to register the data buffers it intends to use with the stream grabber by calling PylonStreamGrabberRegisterBuffer() for each data buffer. This is necessary for performance reasons, allowing the stream grabber to prepare and cache internal data structures used to deal with user-provided memory. The call to PylonStreamGrabberRegisterBuffer() returns a handle for the buffer, which is used during later steps.
Example:
/* Determine the minimum size of the grab buffer.
The size is determined by the configuration of the camera
and the stream grabber. Be aware that this may change
by changing critical parameters after this call.*/
res = PylonStreamGrabberGetPayloadSize( hDev, hGrabber, &payloadSize );
CHECK( res );
/* Allocate memory for grabbing. */
for (i = 0; i < NUM_BUFFERS; ++i)
{
buffers[i] = (unsigned char*) malloc( payloadSize );
if (NULL == buffers[i])
{
fprintf( stderr, "Out of memory!\n" );
PylonTerminate();
pressEnterToExit();
exit( EXIT_FAILURE );
}
}
/* We must tell the stream grabber the number and size of the buffers
we are using. */
/* .. We will not use more than NUM_BUFFERS for grabbing. */
res = PylonStreamGrabberSetMaxNumBuffer( hGrabber, NUM_BUFFERS );
CHECK( res );
/* .. We will not use buffers bigger than payloadSize bytes. */
res = PylonStreamGrabberSetMaxBufferSize( hGrabber, payloadSize );
CHECK( res );
/* Allocate the resources required for grabbing. After this, critical parameters
that impact the payload size must not be changed until FinishGrab() is called. */
res = PylonStreamGrabberPrepareGrab( hGrabber );
CHECK( res );
/* Before using the buffers for grabbing, they must be registered at
the stream grabber. For each registered buffer, a buffer handle
is returned. After registering, these handles are used instead of the
raw pointers. */
for (i = 0; i < NUM_BUFFERS; ++i)
{
res = PylonStreamGrabberRegisterBuffer( hGrabber, buffers[i], payloadSize, &bufHandles[i] );
CHECK( res );
}
The buffer registration mechanism transfers ownership of the buffers to the stream grabber. An application must never deallocate the memory belonging to buffers that are still registered. Freeing the memory is not allowed unless the buffers are deregistered by calling PylonStreamGrabberDeregisterBuffer() first.
for (i = 0; i < NUM_BUFFERS; ++i)
{
res = PylonStreamGrabberDeregisterBuffer( hGrabber, bufHandles[i] );
CHECK( res );
free( buffers[i] );
}
Feeding the Stream Grabber's Input Queue#
Every stream grabber maintains two different buffer queues, an input queue and an output queue. The buffers to be used for grabbing must be fed to the grabber's input queue. After grabbing, buffers containing image data can be retrieved from the grabber's output queue.
The PylonStreamGrabberQueueBuffer() function is used to append a buffer to the end of the grabber's input queue. It takes two parameters, a buffer handle and an optional pointer to application-specific context information. Along with the data buffer, the context pointer is passed back to the user when retrieving the buffer from the grabber's output queue. The stream grabber does not access the memory to which the context pointer points in any way.
Example:
/* Feed the buffers into the stream grabber's input queue. For each buffer, the API
allows passing in a pointer to additional context information. This pointer
will be returned unchanged when the grab is finished. In our example, we use the index of the
buffer as context information. */
for (i = 0; i < NUM_BUFFERS; ++i)
{
res = PylonStreamGrabberQueueBuffer( hGrabber, bufHandles[i], (void*) i );
CHECK( res );
}
Note
Queuing buffers to a stream grabber's input queue does not start image acquisition. For this to happen, the camera must be programmed as described in Starting and Stopping Image Acquisition.
After buffers have been queued, the stream grabber is ready to grab image data into them, but acquisition must be started explicitly.
Starting and Stopping Image Acquisition#
Some stream grabbers, e.g., stream grabbers based on GenTL, have limitations regarding when buffers can be registered. For these stream grabbers it is mandatory to register all buffers first and call PylonStreamGrabberStartStreamingIfMandatory() afterwards. Between the PylonStreamGrabberStartStreamingIfMandatory() and PylonStreamGrabberStopStreamingIfMandatory() calls no buffers can be registered or deregistered if such a limitation exists.
Note
This functionality was added in pylon 6.0 in order to support CoaXPress. Prior implementations of pylon stream grabbers did not require calling start and stop streaming. The PylonStreamGrabberIsStartAndStopStreamingMandatory(), PylonStreamGrabberStartStreamingIfMandatory(), and PylonStreamGrabberStopStreamingIfMandatory() functions allow backward-compatible operation.
To start image acquisition, use the camera's AcquisitionStart
parameter. AcquisitionStart
is a command parameter, which means that calling PylonDeviceExecuteCommandFeature() for the AcquisitionStart
parameter sends an 'acquisition start' command to the camera.
A camera device typically provides two acquisition modes:
- Single Frame mode where the camera acquires one image and then stops.
- Continuous mode where the camera continuously acquires and transfers images until acquisition is stopped explicitly.
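Before sending the start command, the desired mode can be selected through the camera's AcquisitionMode enumeration. A minimal sketch, reusing the `hDev` handle and `CHECK()` macro from the examples above (feature availability should be verified as described in Camera Configuration):

```c
/* Select continuous acquisition mode. hDev is assumed to be an open
   PYLON_DEVICE_HANDLE; CHECK() is the error-checking macro used
   throughout this document. */
res = PylonDeviceFeatureFromString( hDev, "AcquisitionMode", "Continuous" );
CHECK( res );
```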
To be precise, the acquisition start command does not necessarily start acquisition in the camera immediately. If either external triggering or software triggering is enabled, the acquisition start command prepares the camera for image acquisition. Actual acquisition starts when the camera senses an external trigger signal or receives a software trigger command.
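A hedged sketch of software triggering, assuming a camera that follows the SFNC feature naming used elsewhere in this document (the availability and writability of the trigger features should be checked first, as described in Camera Configuration):

```c
/* Sketch: configure and fire a software trigger. hDev is assumed to be
   an open PYLON_DEVICE_HANDLE; CHECK() is the error-checking macro used
   throughout this document. */
res = PylonDeviceFeatureFromString( hDev, "TriggerSelector", "FrameStart" );
CHECK( res );
res = PylonDeviceFeatureFromString( hDev, "TriggerMode", "On" );
CHECK( res );
res = PylonDeviceFeatureFromString( hDev, "TriggerSource", "Software" );
CHECK( res );
/* Prepare the camera for acquisition ... */
res = PylonDeviceExecuteCommandFeature( hDev, "AcquisitionStart" );
CHECK( res );
/* ... then fire one software trigger per image. */
res = PylonDeviceExecuteCommandFeature( hDev, "TriggerSoftware" );
CHECK( res );
```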
When the camera's continuous acquisition mode is enabled, the AcquisitionStop
parameter must be used to stop image acquisition.
Normally, a camera starts to transfer image data as soon as possible after acquisition. There is no specific command to start the image transfer.
Example:
/* Start the image acquisition engine. */
res = PylonStreamGrabberStartStreamingIfMandatory( hGrabber );
CHECK( res );
/* Let the camera acquire images. */
res = PylonDeviceExecuteCommandFeature( hDev, "AcquisitionStart" );
CHECK( res );
Retrieving Grabbed Images#
Image data is written to the buffer(s) in the stream grabber's input queue. When a buffer is filled with data, the stream grabber places it on its output queue, from which it can then be retrieved by the user application.
There is a wait object associated with every stream grabber's output queue. This wait object allows the application to wait until either a grabbed image arrives at the output queue or a timeout expires.
When the wait operation returns successfully, the grabbed buffer can be retrieved using the PylonStreamGrabberRetrieveResult() function. It uses a PylonGrabResult_t struct to return information about the grab operation:
- Status of the grab (succeeded, canceled, failed)
- The buffer's handle
- The pointer to the buffer
- The user-provided context pointer
- AOI and image format
- Error number and error description if the grab failed
This also removes the buffer from the output queue. Ownership of the buffer is returned to the application. A buffer retrieved from the output queue will not be overwritten with new image data until it is placed on the grabber's input queue again.
Remember, a buffer retrieved from the output queue must be deregistered before its memory can be freed.
Use the buffer handle from the PylonGrabResult_t struct to requeue a buffer to the grabber's input queue.
When the camera ceases to send data, any buffers that have not yet been processed remain in the input queue until the PylonStreamGrabberFlushBuffersToOutput() function is called. PylonStreamGrabberFlushBuffersToOutput() moves all buffers from the input queue to the output queue, including any buffer currently being filled. Checking the status of the PylonGrabResult_t struct returned by PylonStreamGrabberRetrieveResult() allows you to determine whether a buffer has been canceled.
The following example shows a typical grab loop:
/* Grab NUM_GRABS images */
nGrabs = 0; /* Counts the number of images grabbed */
while (nGrabs < NUM_GRABS)
{
size_t bufferIndex; /* Index of the buffer */
unsigned char min, max;
/* Wait for the next buffer to be filled. Wait up to 1000 ms. */
res = PylonWaitObjectWait( hWait, 1000, &isReady );
CHECK( res );
if (!isReady)
{
/* Timeout occurred. */
fprintf( stderr, "Grab timeout occurred\n" );
break; /* Stop grabbing. */
}
/* Since the wait operation was successful, the result of at least one grab
operation is available. Retrieve it. */
res = PylonStreamGrabberRetrieveResult( hGrabber, &grabResult, &isReady );
CHECK( res );
if (!isReady)
{
/* Oops. No grab result available? We should never have reached this point.
Since the wait operation above returned without a timeout, a grab result
should be available. */
fprintf( stderr, "Failed to retrieve a grab result\n" );
break;
}
nGrabs++;
/* Get the buffer index from the context information. */
bufferIndex = (size_t) grabResult.Context;
/* Check to see if the image was grabbed successfully. */
if (grabResult.Status == Grabbed)
{
/* Success. Perform image processing. Since we passed more than one buffer
to the stream grabber, the remaining buffers are filled while
we do the image processing. The processed buffer won't be touched by
the stream grabber until we pass it back to the stream grabber. */
unsigned char* buffer; /* Pointer to the buffer attached to the grab result. */
/* Get the buffer pointer from the result structure. Since we also got the buffer index,
we could alternatively use buffers[bufferIndex]. */
buffer = (unsigned char*) grabResult.pBuffer;
/* Perform processing. */
getMinMax( buffer, grabResult.SizeX, grabResult.SizeY, &min, &max );
printf( "Grabbed frame %2d into buffer %2d. Min. gray value = %3u, Max. gray value = %3u\n",
nGrabs, (int) bufferIndex, min, max );
#ifdef GENAPIC_WIN_BUILD
/* Display image */
res = PylonImageWindowDisplayImageGrabResult( 0, &grabResult );
CHECK( res );
#endif
}
else if (grabResult.Status == Failed)
{
fprintf( stderr, "Frame %d wasn't grabbed successfully. Error code = 0x%08X\n",
nGrabs, grabResult.ErrorCode );
}
/* Once finished with the processing, requeue the buffer to be filled again. */
res = PylonStreamGrabberQueueBuffer( hGrabber, grabResult.hBuffer, (void*) bufferIndex );
CHECK( res );
}
Finish Grabbing#
If the camera is set for continuous acquisition mode, acquisition should first be stopped:
/* ... Stop the camera. */
res = PylonDeviceExecuteCommandFeature( hDev, "AcquisitionStop" );
CHECK( res );
/* ... Stop the image acquisition engine. */
res = PylonStreamGrabberStopStreamingIfMandatory( hGrabber );
CHECK( res );
After stopping the camera, you must ensure that all buffers waiting in the input queue are moved to the output queue. You do this by calling the PylonStreamGrabberFlushBuffersToOutput() function. It moves all pending buffers from the input queue to the output queue and marks them as canceled.
An application should retrieve all buffers from the grabber's output queue before closing a stream grabber. Prior to deallocating their memory, deregister the buffers. After all buffers have been deregistered, call the PylonStreamGrabberFinishGrab() function to release all resources allocated for grabbing. PylonStreamGrabberFinishGrab() must not be called when there are still buffers in the grabber's input queue.
The last step is to close the stream grabber by calling PylonStreamGrabberClose().
Example:
/* ... We must issue a flush call to ensure that all pending buffers are put into the
stream grabber's output queue. */
res = PylonStreamGrabberFlushBuffersToOutput( hGrabber );
CHECK( res );
/* ... The buffers can now be retrieved from the stream grabber. */
do
{
res = PylonStreamGrabberRetrieveResult( hGrabber, &grabResult, &isReady );
CHECK( res );
} while (isReady);
/* ... When all buffers have been retrieved from the stream grabber, they can be deregistered.
After that, it is safe to free the memory. */
for (i = 0; i < NUM_BUFFERS; ++i)
{
res = PylonStreamGrabberDeregisterBuffer( hGrabber, bufHandles[i] );
CHECK( res );
free( buffers[i] );
}
/* ... Release grabbing related resources. */
res = PylonStreamGrabberFinishGrab( hGrabber );
CHECK( res );
/* After calling PylonStreamGrabberFinishGrab(), parameters that impact the payload size (e.g.,
the AOI width and height parameters) are unlocked and can be modified again. */
/* ... Close the stream grabber. */
res = PylonStreamGrabberClose( hGrabber );
CHECK( res );
Sample Program#
A complete sample program for acquiring images with a GigE camera in continuous mode can be found here: OverlappedGrab Sample. The sample program is installed as part of the pylon C SDK in < SDK ROOT >/Development/Samples/C/OverlappedGrab.
Using Wait Objects#
Using the PylonWaitObjectWait() and PylonWaitObjectWaitEx() functions, an application can wait for a single wait object to become signaled. This has already been demonstrated as part of the grab loop example presented in Retrieving Grabbed Images. However, it is much more common for an application to wait for events from multiple sources. For this purpose, pylon C defines a wait object container, represented by a PYLON_WAITOBJECTS_HANDLE
handle. Wait objects can be added to a container by calling PylonWaitObjectsAdd() or PylonWaitObjectsAddMany(). Once the wait objects are added to a container, an application can wait for the wait objects to become signaled:
- PylonWaitObjectsWaitForAny() and PylonWaitObjectsWaitForAnyEx() block until any single wait object in a container is signaled, while
- PylonWaitObjectsWaitForAll() and PylonWaitObjectsWaitForAllEx() block until all objects in the container are signaled.
Sample Program#
The following code snippets illustrate how a grab thread uses the PylonWaitObjectsWaitForAny() function to simultaneously wait for buffers and a termination request. The snippets are taken from the GrabTwoCameras sample program installed as part of the pylon C SDK.
The program grabs images for 5 seconds and then exits. First, the program creates a wait object container to hold all its wait objects. It then creates a system-dependent timer, which is transformed into a pylon C wait object. The wait object is then added to the container.
Note that PylonWaitObjectFromW32() is invoked with the duplicate argument set to 0, which means that ownership of the timer handle is transferred to the wait object, which is now responsible for deleting the handle during cleanup.
/* Create wait objects (must be done outside of the loop). */
res = PylonWaitObjectsCreate( &wos );
CHECK( res );
/* In this sample, we want to grab for a given amount of time, then stop. */
/* Create a Windows timer, wrap it in a pylon C wait object, and add it to
the wait object set. */
hTimer = CreateWaitableTimer( NULL, TRUE, NULL );
if (hTimer == NULL)
{
fprintf( stderr, "CreateWaitableTimer() failed.\n" );
PylonTerminate();
pressEnterToExit();
exit( EXIT_FAILURE );
}
res = PylonWaitObjectFromW32( hTimer, 0, &woTimer );
CHECK( res );
res = PylonWaitObjectsAdd( wos, woTimer, NULL );
CHECK( res );
In this code snippet, multiple cameras are used for simultaneous grabbing. Every one of these cameras has a stream grabber, which in turn has a wait object. All these wait objects are added to the container, too. This is achieved by executing the following statements in a loop, once for every camera:
/* Get a handle for the stream grabber's wait object. The wait object
allows waiting for buffers to be filled with grabbed data. */
res = PylonStreamGrabberGetWaitObject( hGrabber[deviceIndex], &hWait );
CHECK( res );
/* Add the stream grabber's wait object to our wait objects.
This is needed to be able to wait until at least one camera has
grabbed an image in the grab loop below. */
res = PylonWaitObjectsAdd( wos, hWait, NULL );
CHECK( res );
At the beginning of the grab loop, PylonWaitObjectsWaitForAny() is called. The index value returned is used to determine whether a buffer has been grabbed or the timer has expired; if the timer has expired, the program stops grabbing and exits:
/* Grab until the timer expires. */
for (;;)
{
_Bool isReady;
size_t woidx;
unsigned char min, max;
PylonGrabResult_t grabResult;
/* Wait for the next buffer to be filled. Wait up to 1000 ms. */
res = PylonWaitObjectsWaitForAny( wos, 1000, &woidx, &isReady );
CHECK( res );
if (!isReady)
{
/* Timeout occurred. */
fputs( "Grab timeout occurred.\n", stderr );
break; /* Stop grabbing. */
}
/* If the timer has expired, exit the grab loop */
if (woidx == 0)
{
fputs( "Grabbing completed successfully.\n", stderr );
break; /* timer expired */
}
/* Account for the timer. */
--woidx;
/* Retrieve the grab result. */
res = PylonStreamGrabberRetrieveResult( hGrabber[woidx], &grabResult, &isReady );
CHECK( res );
if (!isReady)
{
/* Oops. No grab result available? We should never have reached this point.
Since the wait operation above returned without a timeout, a grab result
should be available. */
fprintf( stderr, "Failed to retrieve a grab result\n" );
break;
}
/* Check to see if the image was grabbed successfully. */
if (grabResult.Status == Grabbed)
{
/* Success. Perform image processing. Since we passed more than one buffer
to the stream grabber, the remaining buffers are filled while
we do the image processing. The processed buffer won't be touched by
the stream grabber until we pass it back to the stream grabber. */
/* Pointer to the buffer attached to the grab result
Get the buffer pointer from the result structure. Since we also got the buffer index,
we could alternatively use buffers[bufferIndex]. */
unsigned char* buffer = (unsigned char*) grabResult.pBuffer;
/* Perform processing. */
getMinMax( buffer, grabResult.SizeX, grabResult.SizeY, &min, &max );
printf( "Grabbed frame #%2u from camera %2u into buffer %2p. Min. val=%3u, Max. val=%3u\n",
nGrabs, (unsigned int) woidx, grabResult.Context, min, max );
#ifdef GENAPIC_WIN_BUILD
/* Display image */
res = PylonImageWindowDisplayImageGrabResult( woidx, &grabResult );
CHECK( res );
#endif
}
else if (grabResult.Status == Failed)
{
fprintf( stderr, "Frame %u wasn't grabbed successfully. Error code = 0x%08X\n",
nGrabs, grabResult.ErrorCode );
}
/* Once finished with the processing, requeue the buffer to be filled again. */
res = PylonStreamGrabberQueueBuffer( hGrabber[woidx], grabResult.hBuffer, grabResult.Context );
CHECK( res );
nGrabs++;
}
Finally, during cleanup the timer wait object is destroyed. This frees the timer handle included within it.
/* Remove all wait objects from waitobjects. */
res = PylonWaitObjectsRemoveAll( wos );
CHECK( res );
res = PylonWaitObjectDestroy( woTimer );
CHECK( res );
res = PylonWaitObjectsDestroy( wos );
CHECK( res );
Interruptible Wait Operation#
The PylonWaitObjectsWaitForAnyEx() and PylonWaitObjectsWaitForAllEx() functions, as well as PylonWaitObjectWaitEx(), take an additional boolean argument, Alertable, which allows the caller to specify whether the wait operation should be interruptible. An interruptible wait is terminated prematurely whenever a certain asynchronous system event (a user APC on Windows, or a signal on Unix) occurs. This rarely needed feature has special uses that are beyond the scope of this document.
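A minimal sketch of an alertable single-object wait, assuming the `hWait` handle and `CHECK()` macro from the grab examples above. The result type and enumerator names here mirror the pylon C++ EWaitExResult and are assumptions; consult the pylon C reference for the exact declarations:

```c
/* Sketch: an alertable wait on a single wait object. hWait is assumed to
   be a PYLON_WAITOBJECT_HANDLE obtained from a stream grabber; CHECK()
   is the error-checking macro used throughout this document. */
EPylonWaitExResult waitResult;

/* Wait up to 1000 ms; the alertable flag lets a queued user APC
   (Windows) or a signal (Unix) terminate the wait prematurely. */
res = PylonWaitObjectWaitEx( hWait, 1000, 1 /* alertable */, &waitResult );
CHECK( res );
if (waitResult == waitex_alerted)
{
    /* The wait was interrupted by an asynchronous system event. */
}
else if (waitResult == waitex_timeout)
{
    /* No buffer became available within 1000 ms. */
}
```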
Handling Camera Events#
Basler GigE Vision and USB3 Vision cameras used with Basler pylon software can send event messages. For example, when a sensor exposure has finished, the camera can send an end-of-exposure event to the computer. The event can be received by the computer before the image data for the finished exposure has been completely transferred. Retrieval and processing of event messages is described in this section.
Event Grabbers#
Receiving event data sent by a camera is accomplished in much the same way as receiving image data. While the latter involves use of a stream grabber, an event grabber is used for obtaining events.
Getting and Preparing Event Grabbers#
Event grabbers can be obtained by PylonDeviceGetEventGrabber().
/* Create and prepare an event grabber. */
/* ... Get a handle for the event grabber. */
res = PylonDeviceGetEventGrabber( hDev, &hEventGrabber );
CHECK( res );
if (hEventGrabber == PYLONC_INVALID_HANDLE)
{
/* The transport layer doesn't support event grabbers. */
fprintf( stderr, "No event grabber supported.\n" );
PylonTerminate();
pressEnterToExit();
return EXIT_FAILURE;
}
The camera object owns event grabbers created this way and manages their lifetime.
Unlike stream grabbers, event grabbers use internal memory buffers for receiving event messages. The number of buffers can be parameterized through the PylonEventGrabberSetNumBuffers() function:
/* ... Tell the grabber how many buffers to use. */
res = PylonEventGrabberSetNumBuffers( hEventGrabber, NUM_EVENT_BUFFERS );
CHECK( res );
Note
The number of buffers must be set before calling PylonEventGrabberOpen().
A connection to the device and all resources required for receiving events are allocated by calling PylonEventGrabberOpen(). After that, a wait object handle can be obtained for the application to be notified of any occurring events.
/* ... Open the event grabber. */
res = PylonEventGrabberOpen( hEventGrabber ); /* The event grabber is now ready
for receiving events. */
CHECK( res );
/* Retrieve the wait object that is associated with the event grabber. The event
will be signaled when an event message has been received. */
res = PylonEventGrabberGetWaitObject( hEventGrabber, &hWaitEvent );
CHECK( res );
Enabling Events#
Sending of event messages must be explicitly enabled on the camera by setting its EventSelector
parameter to the type of the desired event. In the following example the selector is set to the end-of-exposure event. After this, sending events of the desired type is enabled through the EventNotification
parameter:
/* Enable camera event reporting. */
/* Select the end-of-exposure event reporting. */
res = PylonDeviceFeatureFromString( hDev, "EventSelector", "ExposureEnd" );
CHECK( res );
/* Enable the event reporting.
Select the enumeration entry name depending on the SFNC version used by the camera device.
*/
if (sfncVersionMajor >= 2)
res = PylonDeviceFeatureFromString( hDev, "EventNotification", "On" );
else
res = PylonDeviceFeatureFromString( hDev, "EventNotification", "GenICamEvent" );
CHECK( res );
To be sure that no events are missed, the event grabber should be prepared before event messages are enabled (see the Getting and Preparing Event Grabbers section above).
The following code snippet illustrates how to disable the sending of end-of-exposure events:
/* ... Switch-off the events. */
res = PylonDeviceFeatureFromString( hDev, "EventSelector", "ExposureEnd" );
CHECK( res );
res = PylonDeviceFeatureFromString( hDev, "EventNotification", "Off" );
CHECK( res );
Receiving Event Messages#
Receiving event messages is very similar to grabbing images. The event grabber provides a wait object that is signaled whenever an event message becomes available. When an event message is available, it can be retrieved by calling PylonEventGrabberRetrieveEvent().
In typical applications, waiting for grabbed images and event messages is done in one common loop. This is demonstrated in the following code snippet:
/* Put the wait objects into a container. */
/* ... Create the container. */
res = PylonWaitObjectsCreate( &hWaitObjects );
CHECK( res );
/* ... Add the wait objects' handles. */
res = PylonWaitObjectsAddMany( hWaitObjects, 2, hWaitEvent, hWaitStream );
CHECK( res );
/* Start the image acquisition engine. */
res = PylonStreamGrabberStartStreamingIfMandatory( hStreamGrabber );
CHECK( res );
/* Let the camera acquire images. */
res = PylonDeviceExecuteCommandFeature( hDev, "AcquisitionStart" );
CHECK( res );
/* Grab NUM_GRABS images. */
nGrabs = 0; /* Counts the number of images grabbed. */
while (nGrabs < NUM_GRABS)
{
size_t bufferIndex; /* Index of the buffer. */
size_t waitObjectIndex; /* Index of the wait object that is signalled.*/
unsigned char min, max;
/* Wait for either an image buffer grabbed or an event received. Wait up to 1000 ms. */
res = PylonWaitObjectsWaitForAny( hWaitObjects, 1000, &waitObjectIndex, &isReady );
CHECK( res );
if (!isReady)
{
/* Timeout occurred. */
fprintf( stderr, "Timeout. Neither grabbed an image nor received an event.\n" );
break; /* Stop grabbing. */
}
if (0 == waitObjectIndex)
{
PylonEventResult_t eventMsg;
/* hWaitEvent has been signalled. At least one event message is available. Retrieve it. */
res = PylonEventGrabberRetrieveEvent( hEventGrabber, &eventMsg, &isReady );
CHECK( res );
if (!isReady)
{
/* Oops. No event message available? We should never have reached this point.
Since the wait operation above returned without a timeout, an event message
should be available. */
fprintf( stderr, "Failed to retrieve an event\n" );
break;
}
/* Check to see if the event was successfully received. */
if (0 == eventMsg.ErrorCode)
{
/* Successfully received an event message. */
/* Pass the event message to the event adapter. The event adapter will
update the parameters related to events and will fire the callbacks
registered to event related parameters. */
res = PylonEventAdapterDeliverMessage( hEventAdapter, &eventMsg );
CHECK( res );
}
else
{
fprintf( stderr, "Error when receiving an event: 0x%08x\n", eventMsg.ErrorCode );
}
}
else if (1 == waitObjectIndex)
{
/* hWaitStream has been signalled. The result of at least one grab
operation is available. Retrieve it. */
res = PylonStreamGrabberRetrieveResult( hStreamGrabber, &grabResult, &isReady );
CHECK( res );
if (!isReady)
{
/* Oops. No grab result available? We should never have reached this point.
Since the wait operation above returned without a timeout, a grab result
should be available. */
fprintf( stderr, "Failed to retrieve a grab result\n" );
break;
}
nGrabs++;
/* Get the buffer index from the context information. */
bufferIndex = (size_t) grabResult.Context;
/* Check to see if the image was grabbed successfully. */
if (grabResult.Status == Grabbed)
{
/* Success. Perform image processing. Since we passed more than one buffer
to the stream grabber, the remaining buffers are filled while
we do the image processing. The processed buffer won't be touched by
the stream grabber until we pass it back to the stream grabber. */
unsigned char* buffer; /* Pointer to the buffer attached to the grab result. */
/* Get the buffer pointer from the result structure. Since we also got the buffer index,
we could alternatively use buffers[bufferIndex]. */
buffer = (unsigned char*) grabResult.pBuffer;
getMinMax( buffer, grabResult.SizeX, grabResult.SizeY, &min, &max );
printf( "Grabbed frame #%2d into buffer %2d. Min. gray value = %3u, Max. gray value = %3u\n",
nGrabs, (int) bufferIndex, min, max );
}
else if (grabResult.Status == Failed)
{
fprintf( stderr, "Frame %d wasn't grabbed successfully. Error code = 0x%08X\n",
nGrabs, grabResult.ErrorCode );
}
/* Once finished with the processing, requeue the buffer to be filled again. */
res = PylonStreamGrabberQueueBuffer( hStreamGrabber, grabResult.hBuffer, (void*) bufferIndex );
CHECK( res );
}
}
Parsing and Dispatching Event Messages#
While the previous section explained how to receive event messages, this section describes how to interpret them.
The specific layout of event messages depends on the event type and the camera type. The pylon C API uses support from GenICam for parsing event messages. This means that the message layout is described in the camera's XML description file.
As described in the Generic Parameter Access section, a GenApi node map is created from the camera's XML description file. This node map contains node objects representing the elements of the XML file. Since the layout of event messages is also described in the camera description file, the information carried by the event messages is exposed as nodes in the node map. These can be accessed just like any other node.
For example, an end-of-exposure event carries the following information:
- ExposureEndEventFrameID: holds an identification number for the image frame that the event is related to
- ExposureEndEventTimestamp: creation time of the event
- ExposureEndEventStreamChannelIndex: the number of the image data stream used to transfer the image that the event is related to
An event adapter is used to update the event-related nodes of the camera's node map. Updating the nodes is done by passing the event message to an event adapter.
Event adapters are created by camera objects:
/* For extracting the event data from an event message, an event adapter is used. */
res = PylonDeviceCreateEventAdapter( hDev, &hEventAdapter );
CHECK( res );
if (hEventAdapter == PYLONC_INVALID_HANDLE)
{
/* The transport layer doesn't support event adapters. */
fprintf( stderr, "No event adapter supported.\n" );
PylonTerminate();
pressEnterToExit();
return EXIT_FAILURE;
}
To update any event-related nodes, call PylonEventAdapterDeliverMessage() for every event message received:
PylonEventResult_t eventMsg;
/* hWaitEvent has been signalled. At least one event message is available. Retrieve it. */
res = PylonEventGrabberRetrieveEvent( hEventGrabber, &eventMsg, &isReady );
CHECK( res );
if (!isReady)
{
/* Oops. No event message available? We should never have reached this point.
Since the wait operation above returned without a timeout, an event message
should be available. */
fprintf( stderr, "Failed to retrieve an event\n" );
break;
}
/* Check to see if the event was successfully received. */
if (0 == eventMsg.ErrorCode)
{
/* Successfully received an event message. */
/* Pass the event message to the event adapter. The event adapter will
update the parameters related to events and will fire the callbacks
registered to event related parameters. */
res = PylonEventAdapterDeliverMessage( hEventAdapter, &eventMsg );
CHECK( res );
}
else
{
fprintf( stderr, "Error when receiving an event: 0x%08x\n", eventMsg.ErrorCode );
}
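Once PylonEventAdapterDeliverMessage() has updated the node map, the event data can also be read directly, like any other integer parameter. The following is a minimal sketch, not taken from the samples; it assumes an open device handle hDev and SFNC 1.x node names (SFNC 2.x cameras use the EventExposureEnd... names instead):

```c
/* Sketch: read the end-of-exposure event data after the event adapter has
   delivered the message. Node names follow SFNC 1.x; error handling is
   reduced to the CHECK macro used throughout the samples. */
int64_t frameID, timestamp, channelIndex;
res = PylonDeviceGetIntegerFeature( hDev, "ExposureEndEventFrameID", &frameID );
CHECK( res );
res = PylonDeviceGetIntegerFeature( hDev, "ExposureEndEventTimestamp", &timestamp );
CHECK( res );
res = PylonDeviceGetIntegerFeature( hDev, "ExposureEndEventStreamChannelIndex", &channelIndex );
CHECK( res );
```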
Event Callbacks#
The previous section described how event adapters are used to push the contents of event messages into a camera object's node map. The PylonEventAdapterDeliverMessage() function updates all nodes related to events contained in the message passed in.
As described in the Getting Notified About Parameter Changes section, it is possible to register callback functions that are called when nodes may have changed. These callbacks can be used to determine whether an event message contains a particular kind of event. For example, to be informed about end-of-exposure events, a callback must be installed for one of the end-of-exposure event-related nodes. The following code snippet illustrates how to install a callback function for the ExposureEndEventFrameID node (named EventExposureEndFrameID by SFNC 2.x cameras):
/* Register the callback function for ExposureEndEventFrameID parameter. */
/*... Get the node map containing all parameters. */
res = PylonDeviceGetNodeMap( hDev, &hNodeMap );
CHECK( res );
/* Get the ExposureEndEventFrameID parameter.
Select the parameter name depending on the SFNC version used by the camera device.
*/
if (sfncVersionMajor >= 2)
res = GenApiNodeMapGetNode( hNodeMap, "EventExposureEndFrameID", &hNode );
else
res = GenApiNodeMapGetNode( hNodeMap, "ExposureEndEventFrameID", &hNode );
CHECK( res );
if (GENAPIC_INVALID_HANDLE == hNode)
{
/* There is no ExposureEndEventFrameID parameter. */
fprintf( stderr, "There is no ExposureEndEventFrameID or EventExposureEndFrameID parameter.\n" );
PylonTerminate();
pressEnterToExit();
return EXIT_FAILURE;
}
/* ... Register the callback function. */
res = GenApiNodeRegisterCallback( hNode, endOfExposureCallback, &hCallback );
CHECK( res );
The registered callback will be called by pylon C from the context of the PylonEventAdapterDeliverMessage() function.
Note
Since one event message can aggregate multiple events, PylonEventAdapterDeliverMessage() can issue multiple calls to a callback function when multiple events of the same type are contained in the message.
/* Callback will be fired when an event message contains an end-of-exposure event. */
void GENAPIC_CC endOfExposureCallback( NODE_HANDLE hNode )
{
int64_t frame;
GENAPIC_RESULT res;
res = GenApiIntegerGetValue( hNode, &frame );
CHECK( res );
#if __STDC_VERSION__ >= 199901L || defined(__GNUC__)
printf( "Got end-of-exposure event. Frame number: %lld\n", (long long) frame );
#else
printf( "Got end-of-exposure event. Frame number: %I64d\n", frame );
#endif
}
Cleanup#
Before closing and destroying the camera object, the event-related objects must be closed as illustrated in the following code snippet:
/* ... Deregister the callback. */
res = GenApiNodeDeregisterCallback( hNode, hCallback );
CHECK( res );
/* ... Close the event grabber.*/
res = PylonEventGrabberClose( hEventGrabber );
CHECK( res );
/* ... Release the event adapter. */
res = PylonDeviceDestroyEventAdapter( hDev, hEventAdapter );
CHECK( res );
Sample Program#
The code snippets in this chapter are taken from the 'Events' sample program (see Events Sample) installed as part of the pylon C SDK in < SDK ROOT >/Development/Samples/C/Events.
Chunk Parser: Accessing Chunk Features#
Basler cameras are capable of sending additional information appended to the image data as chunks of data, such as frame counters, time stamps, and CRC checksums. The information included in the chunk data is presented to an application in the form of parameters that receive their values from the chunk parsing mechanism. This section explains how to enable the chunk features and how to access the chunk data.
Enabling Chunks#
Before a feature producing chunk data can be enabled, the camera's chunk mode must be enabled:
/* Before enabling individual chunks, the chunk mode in general must be activated. */
isAvail = PylonDeviceFeatureIsWritable( hDev, "ChunkModeActive" );
if (!isAvail)
{
fprintf( stderr, "The device doesn't support the chunk mode.\n" );
PylonTerminate();
pressEnterToExit();
exit( EXIT_FAILURE );
}
/* Activate the chunk mode. */
res = PylonDeviceSetBooleanFeature( hDev, "ChunkModeActive", 1 );
CHECK( res );
When chunk mode is enabled, the camera transfers data blocks that are partitioned into a sequence of chunks. The first chunk is always the image data. When chunk features are enabled, the image data chunk is followed by chunks containing the information generated by the chunk features.
Once chunk mode is enabled, chunk features can be enabled:
/* Enable some individual chunks... */
/* ... The frame counter chunk feature. */
/* Is the chunk available? */
isAvail = PylonDeviceFeatureIsAvailable( hDev, "EnumEntry_ChunkSelector_Framecounter" );
if (isAvail)
{
/* Select the frame counter chunk feature. */
res = PylonDeviceFeatureFromString( hDev, "ChunkSelector", "Framecounter" );
CHECK( res );
/* Can the chunk feature be activated? */
isAvail = PylonDeviceFeatureIsWritable( hDev, "ChunkEnable" );
if (isAvail)
{
/* Activate the chunk feature. */
res = PylonDeviceSetBooleanFeature( hDev, "ChunkEnable", 1 );
CHECK( res );
}
}
else
{
/* If not available, try the Standard Feature Naming Convention (SFNC) name "FrameID". */
isAvail = PylonDeviceFeatureIsAvailable( hDev, "EnumEntry_ChunkSelector_FrameID" );
if (isAvail)
{
/* Select the frame id chunk feature. */
res = PylonDeviceFeatureFromString( hDev, "ChunkSelector", "FrameID" );
CHECK( res );
/* Can the chunk feature be activated? */
isAvail = PylonDeviceFeatureIsWritable( hDev, "ChunkEnable" );
if (isAvail)
{
/* Activate the chunk feature. */
res = PylonDeviceSetBooleanFeature( hDev, "ChunkEnable", 1 );
CHECK( res );
}
}
}
/* ... The CRC checksum chunk feature. */
/* Note: Enabling the CRC chunk feature is not a prerequisite for using
chunks. Chunks can also be handled when the CRC feature is disabled. */
isAvail = PylonDeviceFeatureIsAvailable( hDev, "EnumEntry_ChunkSelector_PayloadCRC16" );
if (isAvail)
{
/* Select the CRC chunk feature. */
res = PylonDeviceFeatureFromString( hDev, "ChunkSelector", "PayloadCRC16" );
CHECK( res );
/* Can the chunk feature be activated? */
isAvail = PylonDeviceFeatureIsWritable( hDev, "ChunkEnable" );
if (isAvail)
{
/* Activate the chunk feature. */
res = PylonDeviceSetBooleanFeature( hDev, "ChunkEnable", 1 );
CHECK( res );
}
}
/* ... The Timestamp chunk feature. */
isAvail = PylonDeviceFeatureIsAvailable( hDev, "EnumEntry_ChunkSelector_Timestamp" );
if (isAvail)
{
/* Select the Timestamp chunk feature. */
res = PylonDeviceFeatureFromString( hDev, "ChunkSelector", "Timestamp" );
CHECK( res );
/* Can the chunk feature be activated? */
isAvail = PylonDeviceFeatureIsWritable( hDev, "ChunkEnable" );
if (isAvail)
{
/* Activate the chunk feature. */
res = PylonDeviceSetBooleanFeature( hDev, "ChunkEnable", 1 );
CHECK( res );
}
}
Grabbing Buffers#
Grabbing from an image stream with chunks is very similar to grabbing from an image stream without chunks. Memory buffers must be provided that are large enough to store both the image data and the added chunk data.
The camera's PayloadSize
parameter reports the necessary buffer size (in bytes):
/* Determine the required size of the grab buffer. Since activating chunks will increase the
payload size and thus the required buffer size, do this after enabling the chunks. */
res = PylonStreamGrabberGetPayloadSize( hDev, hGrabber, &payloadSize );
CHECK( res );
/* Allocate memory for grabbing. */
for (i = 0; i < NUM_BUFFERS; ++i)
{
buffers[i] = (unsigned char*) malloc( payloadSize );
if (NULL == buffers[i])
{
fprintf( stderr, "Out of memory.\n" );
PylonTerminate();
pressEnterToExit();
exit( EXIT_FAILURE );
}
}
/* We must tell the stream grabber the number and size of the buffers
we are using. */
/* .. We will not use more than NUM_BUFFERS for grabbing. */
res = PylonStreamGrabberSetMaxNumBuffer( hGrabber, NUM_BUFFERS );
CHECK( res );
/* .. We will not use buffers bigger than payloadSize bytes. */
res = PylonStreamGrabberSetMaxBufferSize( hGrabber, payloadSize );
CHECK( res );
Once the camera has been set to produce chunk data, and data buffers have been set up taking into account the additional buffer space required to hold the chunk data, grabbing works exactly the same as in the 'no chunks' case.
Accessing the Chunk Data#
The data block containing the image chunk and the other chunks has a self-descriptive layout. Before accessing the data contained in the appended chunks, the data block must be parsed by a chunk parser.
The camera object is responsible for creating a chunk parser:
/* The data block containing the image chunk and the other chunks has a self-descriptive layout.
A chunk parser is used to extract the appended chunk data from the grabbed image frame.
Create a chunk parser. */
res = PylonDeviceCreateChunkParser( hDev, &hChunkParser );
CHECK( res );
if (hChunkParser == PYLONC_INVALID_HANDLE)
{
/* The transport layer doesn't provide a chunk parser. */
fprintf( stderr, "No chunk parser available.\n" );
goto exit;
}
Once a chunk parser is created, grabbed buffers can be attached to it. When a buffer is attached to a chunk parser, it is parsed and access to its data is provided through camera parameters.
/* Check to see if we really got image data plus chunk data. */
if (grabResult.PayloadType != PayloadType_ChunkData)
{
fprintf( stderr, "Received a buffer not containing chunk data?\n" );
}
else
{
/* Process the chunk data. This is done by passing the grabbed image buffer
to the chunk parser. When the chunk parser has processed the buffer, the chunk
data can be accessed in the same manner as "normal" camera parameters.
The only exception is the CRC feature. There are dedicated functions for
checking the CRC checksum. */
_Bool hasCRC;
/* Let the parser extract the data. */
res = PylonChunkParserAttachBuffer( hChunkParser, grabResult.pBuffer, (size_t) grabResult.PayloadSize );
CHECK( res );
/* Check the CRC. */
res = PylonChunkParserHasCRC( hChunkParser, &hasCRC );
CHECK( res );
if (hasCRC)
{
_Bool isOk;
res = PylonChunkParserCheckCRC( hChunkParser, &isOk );
CHECK( res );
printf( "Frame %d contains a CRC checksum. The checksum %s ok.\n", nGrabs, isOk ? "is" : "is not" );
}
{
const char *featureName = "ChunkFramecounter";
/* Retrieve the frame counter value. */
/* ... Check the availability. */
isAvail = PylonDeviceFeatureIsAvailable( hDev, featureName );
if (!isAvail)
{
/* If not available, try the SFNC feature name "ChunkFrameID". */
featureName = "ChunkFrameID";
isAvail = PylonDeviceFeatureIsAvailable( hDev, featureName );
}
printf( "Frame %d %s a frame counter chunk.\n", nGrabs, isAvail ? "contains" : "doesn't contain" );
if (isAvail)
{
/* ... Get the value. */
int64_t counter;
res = PylonDeviceGetIntegerFeature( hDev, featureName, &counter );
CHECK( res );
#if __STDC_VERSION__ >= 199901L || defined(__GNUC__)
printf( "Frame counter of frame %d: %lld.\n", nGrabs, (long long) counter );
#else
printf( "Frame counter of frame %d: %I64d.\n", nGrabs, counter );
#endif
}
}
/* Retrieve the frame width value. */
/* ... Check the availability. */
isAvail = PylonDeviceFeatureIsAvailable( hDev, "ChunkWidth" );
printf( "Frame %d %s a width chunk.\n", nGrabs, isAvail ? "contains" : "doesn't contain" );
if (isAvail)
{
/* ... Get the value. */
res = PylonDeviceGetIntegerFeatureInt32( hDev, "ChunkWidth", &chunkWidth );
CHECK( res );
printf( "Width of frame %d: %d.\n", nGrabs, chunkWidth );
}
/* Retrieve the frame height value. */
/* ... Check the availability. */
isAvail = PylonDeviceFeatureIsAvailable( hDev, "ChunkHeight" );
printf( "Frame %d %s a height chunk.\n", nGrabs, isAvail ? "contains" : "doesn't contain" );
if (isAvail)
{
/* ... Get the value. */
res = PylonDeviceGetIntegerFeatureInt32( hDev, "ChunkHeight", &chunkHeight );
CHECK( res );
printf( "Height of frame %d: %d.\n", nGrabs, chunkHeight );
}
/* Retrieve the frame timestamp value. */
/* ... Check the availability. */
isAvail = PylonDeviceFeatureIsAvailable( hDev, "ChunkTimestamp" );
printf( "Frame %d %s a timestamp chunk.\n", nGrabs, isAvail ? "contains" : "doesn't contain" );
if (isAvail)
{
/* ... Get the value. */
int64_t timestamp;
res = PylonDeviceGetIntegerFeature( hDev, "ChunkTimestamp", &timestamp );
CHECK( res );
#if __STDC_VERSION__ >= 199901L || defined(__GNUC__)
printf( "Frame timestamp of frame %d: %lld.\n", nGrabs, (long long)timestamp );
#else
printf( "Frame timestamp of frame %d: %I64d.\n", nGrabs, timestamp );
#endif
}
}
Chunk data integrity may be protected by an optional checksum. To check for its presence, use PylonChunkParserHasCRC().
/* Check the CRC. */
res = PylonChunkParserHasCRC( hChunkParser, &hasCRC );
CHECK( res );
if (hasCRC)
{
_Bool isOk;
res = PylonChunkParserCheckCRC( hChunkParser, &isOk );
CHECK( res );
printf( "Frame %d contains a CRC checksum. The checksum %s ok.\n", nGrabs, isOk ? "is" : "is not" );
}
Before re-using a buffer for grabbing, the buffer must be detached from the chunk parser.
/* Before requeueing the buffer, you should detach it from the chunk parser. */
res = PylonChunkParserDetachBuffer( hChunkParser ); /* The chunk data in the buffer is now no longer accessible. */
CHECK( res );
After detaching a buffer, the next grabbed buffer can be attached and the included chunk data can be read.
After grabbing is finished, the chunk parser must be deleted:
/* ... Release the chunk parser. */
res = PylonDeviceDestroyChunkParser( hDev, hChunkParser );
CHECK( res );
Sample Program#
The code snippets in this chapter are taken from the 'Chunks' sample program (see Chunks Sample) installed as part of the pylon C SDK in < SDK ROOT >/Development/Samples/C/Chunks.
Getting Informed About Device Removal#
Callback functions can be installed that are called whenever a camera device is removed. As soon as the PylonDeviceOpen() function has been called, callback functions of the PylonDeviceRemCb_t
type can be installed for it.
Installing a callback function:
/* Register the callback function. */
res = PylonDeviceRegisterRemovalCallback( hDev, removalCallbackFunction, &hCb );
CHECK( res );
All registered callbacks must be deregistered before calling PylonDeviceClose().
/* ... Deregister the removal callback. */
res = PylonDeviceDeregisterRemovalCallback( hDev, hCb );
CHECK( res );
This is the actual callback function. It prints information about the removed device and increments a counter.
/* The function to be called when the removal of an opened device is detected. */
void GENAPIC_CC removalCallbackFunction( PYLON_DEVICE_HANDLE hDevice )
{
PylonDeviceInfo_t di;
GENAPIC_RESULT res;
/* Print out the name of the device. It is not possible to read the name
from the camera since it has been removed. Use the device's device
information instead. For accessing the device information, no reading from
the device is required. */
/* Retrieve the device information for the removed device. */
res = PylonDeviceGetDeviceInfo( hDevice, &di );
CHECK( res );
/* Print out the name. */
printf( "\nCallback function for removal of device %s (%s).\n", di.FriendlyName, di.FullName );
/* Increment the counter to indicate that the callback has been fired. */
callbackCounter++;
}
The code snippets in this section are taken from the 'SurpriseRemoval' sample program (see SurpriseRemoval Sample) installed as part of the pylon C SDK in < SDK ROOT >/Development/Samples/C/SurpriseRemoval.
Advanced Topics#
Generic Parameter Access#
For camera configuration and for accessing other parameters, the pylon API uses the technologies defined by the GenICam standard hosted by the European Machine Vision Association (EMVA). The GenICam specification (http://www.GenICam.org) defines a format for camera description files. These files describe the configuration interface of GenICam compliant cameras. The description files are written in XML (eXtensible Markup Language) and describe camera registers, their interdependencies, and all other information needed to access high-level features such as Gain, ExposureTime, or ImageFormat by means of low-level register read and write operations.
The elements of a camera description file are represented as software objects called nodes. For example, a node can represent a single camera register, a camera parameter such as Gain, a set of available parameter values, etc. Nodes are represented as handles of the NODE_HANDLE
type.
Nodes are linked together by different relationships as explained in the GenICam standard document available at www.GenICam.org. The complete set of nodes is stored in a data structure called a node map. At runtime, a node map is instantiated from an XML description, which may exist as a disk file on the computer connected to a camera, or may be read from the camera itself. Node map objects are represented by handles of the NODEMAP_HANDLE
type.
Every node has a name, which is a text string. Node names are unique within a node map, and any node can be looked up by its name. All parameter access functions presented so far are actually shortcuts that get a node map handle from an object, look up the node that implements the named parameter, and finally perform the desired action on the node, such as assigning a new value. The sample code below demonstrates how to look up a parameter node by a known name. If no such node exists, GenApiNodeMapGetNode() returns an invalid handle. The sample simply reports this case; a real application may want to handle it differently.
/* Look up the feature node */
res = GenApiNodeMapGetNode( hNodeMap, featureName, &hNode );
CHECK( res );
if (GENAPIC_INVALID_HANDLE == hNode)
{
fprintf( stderr, "There is no feature named '%s'\n", featureName );
return;
}
Nodes are generally grouped into categories, which themselves are represented as nodes of the Category type. A category node is an abstraction for a certain functional aspect of a camera, and all parameter nodes grouped under it are related to this aspect. For example, the 'AOI Controls' category might contain an 'X Offset', a 'Y Offset', a 'Width', and a 'Height' parameter node. The topological structure of a node map is that of a tree, with parameter nodes as leaves and category nodes as junctions. The sample code below traverses the tree, displaying every node found:
/* Traverse the feature tree, displaying all categories and all features. */
static void handleCategory( NODE_HANDLE hRoot, char* buf, unsigned int depth )
{
GENAPIC_RESULT res;
size_t bufsiz, siz, numfeat, i;
/* Write out node name. */
siz = bufsiz = STRING_BUFFER_SIZE - depth * 2;
res = GenApiNodeGetName( hRoot, buf, &siz );
CHECK( res );
/* Get the number of feature nodes in this category. */
res = GenApiCategoryGetNumFeatures( hRoot, &numfeat );
CHECK( res );
printf( "%s category has %u children\n", buf - depth * 2, (unsigned int) numfeat );
/* Increase indentation. */
*buf++ = ' ';
*buf++ = ' ';
bufsiz -= 2;
++depth;
/* Now loop over all feature nodes. */
for (i = 0; i < numfeat; ++i)
{
NODE_HANDLE hNode;
EGenApiNodeType nodeType;
/* Get next feature node and check its type. */
res = GenApiCategoryGetFeatureByIndex( hRoot, i, &hNode );
CHECK( res );
res = GenApiNodeGetType( hNode, &nodeType );
CHECK( res );
if (Category != nodeType)
{
/* A regular feature. */
EGenApiAccessMode am;
const char* amode;
siz = bufsiz;
res = GenApiNodeGetName( hNode, buf, &siz );
CHECK( res );
res = GenApiNodeGetAccessMode( hNode, &am );
CHECK( res );
switch (am)
{
case NI:
amode = "not implemented";
break;
case NA:
amode = "not available";
break;
case WO:
amode = "write only";
break;
case RO:
amode = "read only";
break;
case RW:
amode = "read and write";
break;
default:
amode = "undefined";
break;
}
printf( "%s feature - access: %s\n", buf - depth * 2, amode );
}
else
/* Another category node. */
handleCategory( hNode, buf, depth );
}
}
static void demonstrateCategory( PYLON_DEVICE_HANDLE hDev )
{
NODEMAP_HANDLE hNodeMap;
NODE_HANDLE hNode;
char buf[512];
GENAPIC_RESULT res;
/* Get a handle for the device's node map. */
res = PylonDeviceGetNodeMap( hDev, &hNodeMap );
CHECK( res );
/* Look up the root node. */
res = GenApiNodeMapGetNode( hNodeMap, "Root", &hNode );
CHECK( res );
handleCategory( hNode, buf, 0 );
}
To access a parameter's value, a handle for the corresponding parameter node must be obtained first, as demonstrated in the example below for an integer feature:
/* This function demonstrates how to handle integer camera parameters. */
static void demonstrateIntFeature( PYLON_DEVICE_HANDLE hDev )
{
NODEMAP_HANDLE hNodeMap;
NODE_HANDLE hNode;
static const char featureName[] = "Width"; /* Name of the feature used in this sample: AOI Width. */
int64_t val, min, max, incr; /* Properties of the feature. */
GENAPIC_RESULT res; /* Return value. */
EGenApiNodeType nodeType;
_Bool bval;
/* Get a handle for the device's node map. */
res = PylonDeviceGetNodeMap( hDev, &hNodeMap );
CHECK( res );
/* Look up the feature node */
res = GenApiNodeMapGetNode( hNodeMap, featureName, &hNode );
CHECK( res );
if (GENAPIC_INVALID_HANDLE == hNode)
{
fprintf( stderr, "There is no feature named '%s'\n", featureName );
return;
}
/* We want an integer feature node. */
res = GenApiNodeGetType( hNode, &nodeType );
CHECK( res );
if (IntegerNode != nodeType)
{
fprintf( stderr, "'%s' is not an integer feature\n", featureName );
return;
}
/*
Query the current value, the range of allowed values, and the increment of the feature.
For some integer features, you are not allowed to set every value within the
value range. For example, for some cameras the Width parameter must be a multiple
of 2. These constraints are expressed by the increment value. Valid values
follow the rule: val >= min && val <= max && val == min + n * inc.
*/
res = GenApiNodeIsReadable( hNode, &bval );
CHECK( res );
if (bval)
{
res = GenApiIntegerGetMin( hNode, &min ); /* Get the minimum value. */
CHECK( res );
res = GenApiIntegerGetMax( hNode, &max ); /* Get the maximum value. */
CHECK( res );
res = GenApiIntegerGetInc( hNode, &incr ); /* Get the increment value. */
CHECK( res );
res = GenApiIntegerGetValue( hNode, &val ); /* Get the current value. */
CHECK( res );
#if __STDC_VERSION__ >= 199901L || defined(__GNUC__)
printf( "%s: min= %lld max= %lld incr=%lld Value=%lld\n", featureName, (long long) min, (long long) max, (long long) incr, (long long) val );
#else
printf( "%s: min= %I64d max= %I64d incr=%I64d Value=%I64d\n", featureName, min, max, incr, val );
#endif
res = GenApiNodeIsWritable( hNode, &bval );
CHECK( res );
if (bval)
{
/* Set the Width half-way between minimum and maximum. */
res = GenApiIntegerSetValue( hNode, min + (max - min) / incr / 2 * incr );
CHECK( res );
}
else
fprintf( stderr, "Cannot set value for feature '%s' - node not writable\n", featureName );
}
else
fprintf( stderr, "Cannot read feature '%s' - node not readable\n", featureName );
}
So far, only camera node maps have been considered. However, there are more objects that expose parameters through node maps:
- The PylonDeviceGetTLNodeMap() function returns the node map for a device's transport layer.
- The PylonStreamGrabberGetNodeMap() function is used to access a stream grabber's parameters.
- The PylonEventGrabberGetNodeMap() function is used to access an event grabber's parameters.

Parameter access works identically for all types of node maps, and the same set of functions is used as for camera node maps. Note, however, that the objects listed above, transport layers in particular, may not have any parameters at all. In this case, a call to the corresponding function returns GENAPIC_INVALID_HANDLE.
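As a sketch of this pattern (not taken from the samples), the following reads an integer parameter from a stream grabber's node map. The node name "MaxNumBuffer" is an assumption and may differ or be absent depending on the transport layer:

```c
/* Sketch: access a stream grabber parameter through its node map.
   Assumes a valid hStreamGrabber; the node name "MaxNumBuffer" is an
   assumption, so the invalid-handle cases must be checked. */
NODEMAP_HANDLE hGrabberNodeMap;
NODE_HANDLE hNode;
int64_t value;
res = PylonStreamGrabberGetNodeMap( hStreamGrabber, &hGrabberNodeMap );
CHECK( res );
if (GENAPIC_INVALID_HANDLE == hGrabberNodeMap)
{
    /* This transport layer exposes no stream grabber parameters. */
}
else
{
    res = GenApiNodeMapGetNode( hGrabberNodeMap, "MaxNumBuffer", &hNode );
    CHECK( res );
    if (GENAPIC_INVALID_HANDLE != hNode)
    {
        res = GenApiIntegerGetValue( hNode, &value ); /* Read the current value. */
        CHECK( res );
    }
}
```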
Browsing Parameters#
The pylon Viewer tool provides an easy way of browsing camera parameters, their names, values, and ranges. Besides grabbing images (not available for Camera Link cameras), it can display all node maps for a camera device and all parameter nodes contained therein. The pylon Viewer has a Features window that displays a tree view of node maps, categories, and parameter nodes. Selecting a node in this view opens a dialog that displays the node's current value (if applicable) and may also allow changing it, depending on accessibility. There is also a Feature Documentation window, located at the very bottom of the display unless the standard layout was changed. It displays detailed information about the currently selected node.
Getting Notified About Parameter Changes#
The pylon C API provides functionality for installing callback functions that are called when a parameter's value or state (e.g. its access mode or value range) has changed.
Every callback is installed for a specific parameter. The callback is invoked when the parameter itself has been touched, or when another parameter that could possibly influence the state of the parameter has been changed.
The example below illustrates how to find a parameter node and register a callback:
/* Register the callback function for the ExposureEndEventFrameID parameter. */
/* ... Get the node map containing all parameters. */
res = PylonDeviceGetNodeMap( hDev, &hNodeMap );
CHECK( res );
/* Get the ExposureEndEventFrameID parameter.
   Select the parameter name depending on the SFNC version used by the camera device. */
if (sfncVersionMajor >= 2)
    res = GenApiNodeMapGetNode( hNodeMap, "EventExposureEndFrameID", &hNode );
else
    res = GenApiNodeMapGetNode( hNodeMap, "ExposureEndEventFrameID", &hNode );
CHECK( res );
if (GENAPIC_INVALID_HANDLE == hNode)
{
    /* There is no ExposureEndEventFrameID parameter. */
    fprintf( stderr, "There is no ExposureEndEventFrameID or EventExposureEndFrameID parameter.\n" );
    PylonTerminate();
    pressEnterToExit();
    return EXIT_FAILURE;
}
/* ... Register the callback function. */
res = GenApiNodeRegisterCallback( hNode, endOfExposureCallback, &hCallback );
CHECK( res );
As an optimization, nodes that can only change their values as a direct result of some user action (an application writing a new value) can have their values cached on the computer to speed up read access. Other nodes can change their values asynchronously, e.g., as a result of some operation performed internally by the camera. These nodes obviously cannot be cached. An application should call the GenApiNodeMapPoll() function at regular intervals. This updates the values of non-cacheable nodes in the node map, which in turn may cause callbacks to be executed as explained above.
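A polling loop along the following lines could keep non-cacheable nodes up to date. This is a sketch only: it assumes hNodeMap was obtained via PylonDeviceGetNodeMap(), and the elapsed-time argument of GenApiNodeMapPoll() is an assumption based on GenApi's Poll() semantics that should be verified against the pylon C API reference.

```c
/* Sketch: update non-cacheable nodes at a regular interval so that
   pending callbacks fire. The second argument (elapsed time in ms)
   is an assumption; check the pylon C API reference. */
const int64_t pollIntervalMs = 500;
for (;;)
{
    res = GenApiNodeMapPoll( hNodeMap, pollIntervalMs );
    CHECK( res );
    /* ... sleep for pollIntervalMs and do other application work ... */
}
```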
GigE Multicast/Broadcast: Grab Images of One Camera on Multiple PCs#
Basler GigE cameras can be set to send the image data stream to multiple destinations. More information on this subject can be found in the pylon C++ Programmer's Guide.
GigE Action Commands#
The action command feature lets you trigger actions in multiple GigE devices (e.g., cameras) at roughly the same time or at a defined point in time (scheduled action command) by using a single broadcast protocol message, without extra cabling. Action commands can be used in cameras in the same way as, for example, the digital input lines.
After setting up the camera parameters required for action commands, the PylonGigEIssueActionCommand() or PylonGigEIssueScheduledActionCommand() functions can be used to trigger action commands.
This is shown in the ActionCommands sample.
Consult the camera User's Manual for more detailed information on action commands.
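As a sketch, issuing a broadcast action command could look like the following. The key values are placeholders that must match the cameras' ActionDeviceKey/ActionGroupKey/ActionGroupMask configuration, and the exact parameter list should be verified against the ActionCommands sample.

```c
/* Sketch: issue a broadcast action command to all cameras whose
   configured action keys match. Key values are placeholders. */
uint32_t deviceKey = 0x12345678;   /* placeholder, must match the cameras */
uint32_t groupKey  = 0x1;          /* placeholder */
uint32_t groupMask = 0xFFFFFFFF;   /* address all members of the group */

res = PylonGigEIssueActionCommand( deviceKey, groupKey, groupMask,
                                   "255.255.255.255" /* broadcast address */,
                                   0, NULL, NULL );
CHECK( res );
```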
Handling Compressed Image Data#
Selected ace models can be configured to send compressed image data. When you receive compressed image data, it needs to be decompressed before you can access it. This can be done using an image decompressor. You can use PylonImageDecompressorCreate() to create an image decompressor.
Then you have to set the compression descriptor. The descriptor contains the metadata needed to decompress image data. It can be retrieved from the camera by reading the BslImageCompressionBCBDescriptor node. You can set the descriptor using the PylonImageDecompressorSetCompressionDescriptor() function.
After setting the compression descriptor, you can call PylonImageDecompressorDecompressImage() to decompress the image data. When you are finished, you must free the resources used by the image decompressor by calling PylonImageDecompressorDestroy().
This is shown in the ImageDecompressor sample.
More information on this subject can be found in the pylon C++ Programmer's Guide.
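The sequence described above could look like the following sketch. The handle type name and the parameter lists are simplified assumptions and should be verified against the ImageDecompressor sample and the pylon C API reference; pDescriptor, descriptorSize, and the buffers are assumed to have been prepared beforehand.

```c
/* Sketch of the decompression sequence. Parameter lists are simplified
   assumptions; see the ImageDecompressor sample for authoritative usage. */
PYLON_IMAGE_DECOMPRESSOR_HANDLE hDecompressor;

res = PylonImageDecompressorCreate( &hDecompressor );
CHECK( res );

/* pDescriptor / descriptorSize: previously read from the camera's
   BslImageCompressionBCBDescriptor node. */
res = PylonImageDecompressorSetCompressionDescriptor( hDecompressor, pDescriptor, descriptorSize );
CHECK( res );

/* Decompress one grabbed buffer into a user-allocated output buffer. */
res = PylonImageDecompressorDecompressImage( hDecompressor, pOutputBuffer, &outputSize,
                                             pCompressedData, compressedSize );
CHECK( res );

/* Free the resources used by the decompressor. */
res = PylonImageDecompressorDestroy( hDecompressor );
CHECK( res );
```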
Migrating Existing Code for Using SFNC 2.X-Based Camera Devices#
Changes of Parameter Names and Behavior#
Most features, e.g., Gain, are named according to the GenICam Standard Feature Naming Convention (SFNC). The SFNC defines a common set of features, their behavior, and the related parameter names. All Basler USB 3.0 and CoaXPress cameras, as well as most GigE cameras (e.g., ace 2 GigE), are based on SFNC version 2.0 or later. Older Basler GigE camera models, however, are based on previous SFNC versions. Accordingly, the behavior of these cameras and some parameter names will be different.
Additionally, parameters that are not covered by the SFNC have been prefixed with Bsl for some camera models, e.g., for all ace 2 camera models. This makes it possible to clearly distinguish these parameter names from SFNC parameter names.
Code working with multiple camera device types that are compatible with different SFNC versions can read the SFNC version from a camera device to select the correct parameter names. The SFNC version can be read from the camera node map using the integer nodes DeviceSFNCVersionMajor, DeviceSFNCVersionMinor, and DeviceSFNCVersionSubMinor.
Example for selecting the parameter name depending on the SFNC version:
/* Determine the major number of the SFNC version used by the camera device. */
if (PylonDeviceGetIntegerFeatureInt32( hDev, "DeviceSFNCVersionMajor", &sfncVersionMajor ) != GENAPI_E_OK)
{
    /* No SFNC version information is provided by the camera device. */
    sfncVersionMajor = 0;
}
/* Enable camera event reporting. */
/* Select the end-of-exposure event reporting. */
res = PylonDeviceFeatureFromString( hDev, "EventSelector", "ExposureEnd" );
CHECK( res );
/* Enable the event reporting.
   Select the enumeration entry name depending on the SFNC version used by the camera device. */
if (sfncVersionMajor >= 2)
    res = PylonDeviceFeatureFromString( hDev, "EventNotification", "On" );
else
    res = PylonDeviceFeatureFromString( hDev, "EventNotification", "GenICamEvent" );
CHECK( res );
Note
The above sample code snippet uses convenience functions for accessing camera device parameters. After getting the node map handle from the device using PylonDeviceGetNodeMap(), the camera parameters can also be accessed using the GenApi functions, e.g., GenApiNodeMapGetNode(), GenApiIntegerGetValue(), and others.
The following tables show how to map previous parameter names to their equivalents defined in SFNC 2.x. Some previous parameters have no direct equivalents. Other previous parameters, however, can still be accessed using the so-called alias that is provided via the GenApiNodeGetAlias() function. The alias is another representation of the original parameter. Usually, the alias provides an Integer representation of a Float parameter.
Depending on the camera device model, the alias may not provide a proper name, display name, tool tip, or description. The value range of an alias node can change when updating the camera firmware.
The following table shows how to map changes for parameters:
The actual changes between previous cameras and SFNC 2.x compliant cameras depend on the models and camera firmware versions used. It is possible that some changes are not listed in the tables below. Other sources of information regarding changes between camera models are the camera User's Manuals and the information shown in the pylon Viewer tool.
Previous Parameter Name | SFNC 2.x or Equivalent with Bsl Prefix | Parameter Type | Comments |
---|---|---|---|
AcquisitionFrameCount | AcquisitionBurstFrameCount | Integer | |
AcquisitionFrameRateAbs | AcquisitionFrameRate | Float | |
AcquisitionStartEventFrameID | EventFrameBurstStartFrameID | Integer | |
AcquisitionStartEventTimestamp | EventFrameBurstStartTimestamp | Integer | |
AcquisitionStartOvertriggerEventFrameID | EventFrameBurstStartOvertriggerFrameID | Integer | |
AcquisitionStartOvertriggerEventTimestamp | EventFrameBurstStartOvertriggerTimestamp | Integer | |
AutoExposureTimeAbsLowerLimit | AutoExposureTimeLowerLimit | Float | |
AutoExposureTimeAbsUpperLimit | AutoExposureTimeUpperLimit | Float | |
AutoFunctionAOIUsageIntensity | AutoFunctionAOIUseBrightness | Boolean | |
AutoFunctionAOIUsageWhiteBalance | AutoFunctionAOIUseWhiteBalance | Boolean | |
AutoGainRawLowerLimit | Alias of AutoGainLowerLimit | Integer | |
AutoGainRawUpperLimit | Alias of AutoGainUpperLimit | Integer | |
AutoTargetValue | Alias of AutoTargetBrightness | Integer | |
BalanceRatioAbs | BalanceRatio | Float | |
BalanceRatioRaw | Alias of BalanceRatio | Integer | |
BlackLevelAbs | BlackLevel | Float | |
BlackLevelRaw | Alias of BlackLevel | Integer | |
ChunkExposureTimeRaw | | Integer | ChunkExposureTimeRaw has been replaced with ChunkExposureTime. ChunkExposureTime is of type Float. |
ChunkFrameCounter | | Integer | ChunkFrameCounter has been replaced with ChunkCounterSelector and ChunkCounterValue. |
ChunkGainAll | | Integer | ChunkGainAll has been replaced with ChunkGain. ChunkGain is of type Float. |
ColorAdjustmentEnable | | Boolean | ColorAdjustmentEnable has been removed. The color adjustment is always enabled. |
ColorAdjustmentEnable | BslColorAdjustmentEnable | Boolean | |
ColorAdjustmentHue | BslColorAdjustmentHue | Float | |
ColorAdjustmentHueRaw | Alias of ColorAdjustmentHue or BslColorAdjustmentHue | Integer | |
ColorAdjustmentReset | | Command | ColorAdjustmentReset has been removed. |
ColorAdjustmentSaturation | BslColorAdjustmentSaturation | Float | |
ColorAdjustmentSaturationRaw | Alias of ColorAdjustmentSaturation or BslColorAdjustmentSaturation | Integer | |
ColorAdjustmentSelector | BslColorAdjustmentSelector | Enumeration | |
ColorSpace | BslColorSpace | Enumeration | |
ColorTransformationValueRaw | Alias of ColorTransformationValue | Integer | |
ContrastMode | BslContrastMode | Enumeration | |
DefaultSetSelector | | Enumeration | See additional entries in UserSetSelector. |
ExposureEndEventFrameID | EventExposureEndFrameID | Integer | |
ExposureEndEventTimestamp | EventExposureEndTimestamp | Integer | |
ExposureTimeAbs | ExposureTime | Float | |
ExposureTimeMode | BslExposureTimeMode | Enumeration | |
ExposureTimeRaw | Alias of ExposureTime | Integer | |
FrameStartEventFrameID | EventFrameStartFrameID | Integer | |
FrameStartEventTimestamp | EventFrameStartTimestamp | Integer | |
FrameStartOvertriggerEventFrameID | EventFrameStartOvertriggerFrameID | Integer | |
FrameStartOvertriggerEventTimestamp | EventFrameStartOvertriggerTimestamp | Integer | |
GainAbs | Gain | Float | |
GainRaw | Alias of Gain | Integer | |
GammaEnable | | Boolean | GammaEnable has been removed. Gamma is always enabled. |
GammaSelector | | Enumeration | The sRGB setting is automatically applied when LightSourcePreset is set to any value other than Off. |
GevIEEE1588 | PtpEnable | Boolean | |
GevIEEE1588ClockId | PtpClockID | Integer | |
GevIEEE1588DataSetLatch | PtpDataSetLatch | Command | |
GevIEEE1588OffsetFromMaster | PtpOffsetFromMaster | Integer | |
GevIEEE1588ParentClockId | PtpParentClockID | Integer | |
GevIEEE1588Status | | Enumeration | GevIEEE1588Status has been removed. Use PtpDataSetLatch and then PtpStatus instead. |
GevIEEE1588StatusLatched | PtpStatus | Enumeration | |
GevTimestampControlLatch | TimestampLatch | Command | |
GevTimestampControlLatchReset | | Command | |
GevTimestampControlReset | TimestampReset | Command | |
GevTimestampValue | TimestampLatchValue | Integer | |
GlobalResetReleaseModeEnable | | Boolean | GlobalResetReleaseModeEnable has been replaced with the enumeration ShutterMode. |
LightSourcePreset | BslLightSourcePreset | Enumeration | |
LightSourceSelector | LightSourcePreset | Enumeration | |
LineDebouncerTimeAbs | LineDebouncerTime | Float | |
LineOverloadStatus | BslLineOverloadStatus | Boolean | |
MinOutPulseWidthAbs | LineMinimumOutputPulseWidth | Float | |
MinOutPulseWidthRaw | Alias of LineMinimumOutputPulseWidth | Integer | |
ParameterSelector | RemoveParameterLimitSelector | Enumeration | |
ProcessedRawEnable | | Boolean | ProcessedRawEnable has been removed because it is no longer needed. The camera now uses nondestructive Bayer demosaicing. |
ReadoutTimeAbs | SensorReadoutTime | Float | |
ResultingFrameRateAbs | ResultingFrameRate | Float | |
SensorBitDepth | BslSensorBitDepth | Enumeration | |
SequenceAddressBitSelector | | Enumeration | |
SequenceAdvanceMode | | Enumeration | |
SequenceAsyncAdvance | | Command | Configure an asynchronous signal as the trigger source of path 1. |
SequenceAsyncRestart | | Command | Configure an asynchronous signal as the trigger source of path 0. |
SequenceBitSource | | Enumeration | |
SequenceControlConfig | | Category | |
SequenceControlSelector | | Enumeration | |
SequenceControlSource | | Enumeration | |
SequenceCurrentSet | SequencerSetActive | Integer | |
SequenceEnable | | Boolean | Replaced by SequencerConfigurationMode and SequencerMode. |
SequenceSetExecutions | | Integer | |
SequenceSetIndex | SequencerSetSelector | Integer | |
SequenceSetLoad | SequencerSetLoad | Command | |
SequenceSetStore | SequencerSetSave | Command | |
SequenceSetTotalNumber | | Integer | Use the range of the SequencerSetSelector. |
TemperatureState | BslTemperatureStatus | Enumeration | |
TestImageSelector | TestPattern | Enumeration | TestPattern instead of TestImageSelector is used for dart and pulse camera models. |
TimerDelayAbs | TimerDelay | Float | |
TimerDelayRaw | Alias of TimerDelay | Integer | |
TimerDelayTimebaseAbs | | Float | The time base is always 1 µs. |
TimerDurationAbs | TimerDuration | Float | |
TimerDurationRaw | Alias of TimerDuration | Integer | |
TimerDurationTimebaseAbs | | Float | The time base is always 1 µs. |
TriggerDelayAbs | TriggerDelay | Float | |
UserSetDefaultSelector | UserSetDefault | Enumeration | |
VignettingCorrectionLoad | BslVignettingCorrectionLoad | Command | |
VignettingCorrectionMode | BslVignettingCorrectionMode | Enumeration |
The following table shows how to map changes for enumeration values:
Previous Enumeration Name | Previous Enumeration Value Name | Value Name SFNC 2.X | Comments |
---|---|---|---|
AcquisitionStatusSelector | AcquisitionTriggerWait | FrameBurstTriggerWait | |
AutoFunctionProfile | ExposureMinimum | MinimizeExposureTime | |
AutoFunctionProfile | GainMinimum | MinimizeGain | |
ChunkSelector | GainAll | Gain | The gain value is reported via the ChunkGain node as float. |
ChunkSelector | Height | | Height is part of the image information regardless of the chunk mode setting. |
ChunkSelector | OffsetX | | OffsetX is part of the image information regardless of the chunk mode setting. |
ChunkSelector | OffsetY | | OffsetY is part of the image information regardless of the chunk mode setting. |
ChunkSelector | PixelFormat | | PixelFormat is part of the image information regardless of the chunk mode setting. |
ChunkSelector | Stride | | Stride is part of the image information regardless of the chunk mode setting. |
ChunkSelector | Width | | Width is part of the image information regardless of the chunk mode setting. |
EventNotification | GenICamEvent | On | |
EventSelector | AcquisitionStartOvertrigger | FrameBurstStartOvertrigger | |
EventSelector | AcquisitionStart | FrameBurstStart | |
LightSourceSelector | Daylight | Daylight5000K | |
LightSourceSelector | Tungsten | Tungsten2800K | |
LineSelector | Out1 | | The operation mode of an I/O pin is chosen using the LineMode selector. |
LineSelector | Out2 | | The operation mode of an I/O pin is chosen using the LineMode selector. |
LineSelector | Out3 | | The operation mode of an I/O pin is chosen using the LineMode selector. |
LineSelector | Out4 | | The operation mode of an I/O pin is chosen using the LineMode selector. |
LineSource | AcquisitionTriggerWait | FrameBurstTriggerWait | |
LineSource | UserOutput | | Use UserOutput1, UserOutput2, UserOutput3, etc. instead. |
PixelFormat | BayerBG12Packed | | The pixel format BayerBG12p is provided by USB camera devices. The memory layout of pixel format BayerBG12Packed and pixel format BayerBG12p is different. See the camera User's Manuals for more information on pixel formats. |
PixelFormat | BayerGB12Packed | | The pixel format BayerGB12p is provided by USB camera devices. The memory layout of pixel format BayerGB12Packed and pixel format BayerGB12p is different. See the camera User's Manuals for more information on pixel formats. |
PixelFormat | BayerGR12Packed | | The pixel format BayerGR12p is provided by USB camera devices. The memory layout of pixel format BayerGR12Packed and pixel format BayerGR12p is different. See the camera User's Manuals for more information on pixel formats. |
PixelFormat | BayerRG12Packed | | The pixel format BayerRG12p is provided by USB camera devices. The memory layout of pixel format BayerRG12Packed and pixel format BayerRG12p is different. See the camera User's Manuals for more information on pixel formats. |
PixelFormat | BGR10Packed | BGR10 | |
PixelFormat | BGR12Packed | BGR12 | |
PixelFormat | BGR8Packed | BGR8 | |
PixelFormat | BGRA8Packed | BGRa8 | |
PixelFormat | Mono10Packed | | The pixel format Mono10p is provided by USB camera devices. The memory layout of pixel format Mono10Packed and pixel format Mono10p is different. See the camera User's Manuals for more information on pixel formats. |
PixelFormat | Mono12Packed | | The pixel format Mono12p is provided by USB camera devices. The memory layout of pixel format Mono12Packed and pixel format Mono12p is different. See the camera User's Manuals for more information on pixel formats. |
PixelFormat | Mono1Packed | Mono1p | |
PixelFormat | Mono2Packed | Mono2p | |
PixelFormat | Mono4Packed | Mono4p | |
PixelFormat | RGB10Packed | RGB10 | |
PixelFormat | RGB12Packed | RGB12 | |
PixelFormat | RGB16Packed | RGB16 | |
PixelFormat | RGB8Packed | RGB8 | |
PixelFormat | RGBA8Packed | RGBa8 | |
PixelFormat | YUV411Packed | YCbCr411_8 | |
PixelFormat | YUV422_YUYV_Packed | YCbCr422_8 | |
PixelFormat | YUV444Packed | YCbCr8 | |
TestImageSelector | Testimage1 | GreyDiagonalSawtooth8 | GreyDiagonalSawtooth8 instead of Testimage1 is used for dart and pulse camera models. |
TriggerSelector | AcquisitionStart | FrameBurstStart |
Migration Mode#
pylon USB and GigE devices offer a convenient migration mode that allows you to work with camera devices that are based on different SFNC versions without rewriting your application code. If the migration mode is enabled, the parameter name changes shown in the tables above are mapped automatically where appropriate. If you are only working with SFNC 2.x-based cameras, however, Basler strongly recommends rewriting existing code to be SFNC 2.x-compliant instead of using the migration mode. How to use the migration mode is also shown in the GenApiParam sample.
Existing applications may use features that can't be mapped automatically. In this case, the application code needs to be adapted before it can be used with SFNC 2.x-based cameras. The behavior of a parameter may have changed as well, e.g., its value range; check this carefully. Furthermore, automatically mapped alias nodes don't provide a proper name, display name, tooltip, or description. The value range of an alias node can change when updating the camera firmware.
The following simple code snippet demonstrates how to turn the migration mode on and set the exposure time node to the value 100 using two names mapped to the same node. See also the tables above.
GENAPIC_RESULT res;
PYLON_DEVICE_HANDLE hDev;
NODE_HANDLE hNode;
NODEMAP_HANDLE hTlNodemap;
_Bool isWritable = 0;
// Create the first camera device found.
res = PylonCreateDeviceByIndex( 0, &hDev );
CHECK( res );
// Open the camera.
res = PylonDeviceOpen( hDev, PYLONC_ACCESS_MODE_CONTROL );
CHECK( res );
// Retrieve the transport layer node map.
res = PylonDeviceGetTLNodeMap( hDev, &hTlNodemap );
CHECK( res );
// Find the migration mode enable node. It is not present on all devices.
res = GenApiNodeMapGetNode( hTlNodemap, "MigrationModeEnable", &hNode );
CHECK( res );
if (hNode != GENAPIC_INVALID_HANDLE)
{
    res = GenApiNodeIsWritable( hNode, &isWritable );
    CHECK( res );
}
if (isWritable)
{
    // Enable the migration mode if available and writable.
    res = GenApiBooleanSetValue( hNode, 1 );
    CHECK( res );
    // For demonstration purposes only, access ExposureTime using its previous name ExposureTimeAbs.
    res = PylonDeviceSetFloatFeature( hDev, "ExposureTimeAbs", 100.0 );
    CHECK( res );
    // ExposureTime can still be accessed. The same node is used.
    res = PylonDeviceSetFloatFeature( hDev, "ExposureTime", 100.0 );
    CHECK( res );
}
The migration mode is implemented using proxy objects. If the migration mode is enabled, the proxy calls the alias node. If the migration mode is not enabled or not available for a node, the call is forwarded to the original node. This mapping also applies to calls of GenApiEnumerationGetEntryByName(), GenApiNodeToString(), GenApiNodeToStringEx(), and GenApiEnumerationEntryGetSymbolic().
Migrating to Using USB Camera Devices#
Differences in Image Transport#
The image transport on USB camera devices differs from the image transport on GigE camera devices. GigE camera devices automatically send image data to the PC when it is available. If the PC is not ready to receive the image data because no grab buffer is available, the image data sent by the camera device is dropped. For USB camera devices, the PC has to actively request the image data. Grabbed images are stored in the frame buffer of the USB camera device until the PC requests the image data. If the frame buffer of the USB camera device is full, newly acquired frames are dropped. The next time the PC requests image data, old images in the frame buffer of the USB camera device are grabbed first; after that, newly acquired images are grabbed.
USB Camera Devices and Block ID#
Image data is transferred between a PC and a USB camera device using a certain sequence of data packets. In the rare case of an error during the image transport, e.g., if the image packet sequence is out of sync, the image data stream between PC and USB camera device is reset automatically. The image data stream reset causes the Block ID delivered by the USB camera device to start again at zero. pylon indicates this error condition by setting the Block ID of the grab result (PylonGrabResult_t::BlockID) to its highest possible value (UINT64_MAX) for all subsequent grab results. A Block ID of UINT64_MAX is invalid and cannot be used in any further operations. The image data and other grab result data are not affected by the Block ID being invalid. If the application uses the Block ID, the grabbing needs to be stopped and restarted to recover from this error condition. The Block ID starts at zero again when the grabbing is restarted.
Calling the PylonStreamGrabberFlushBuffersToOutput() function also resets the image stream between PC and USB camera device. Therefore, the value of the Block ID is set to UINT64_MAX for all subsequent grab results after calling PylonStreamGrabberFlushBuffersToOutput().
Camera Emulator#
pylon offers a camera emulation transport layer, which is similar to other transport layers. The camera emulator transport layer can create simple camera emulator devices that allow you to develop applications without having a physical camera device attached to your computer. The emulator's functionality is limited, but it is able to create test images for different bit depths.
The number of available emulator devices can be controlled by setting the PYLON_CAMEMU environment variable.
Example:
PYLON_CAMEMU=2
This will provide two emulator devices. These devices can be accessed using the pylon API and the pylon Viewer program.
When PYLON_CAMEMU is not set, no emulator devices are provided.
Note
A maximum of 256 emulator devices are supported.
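The environment variable can also be set programmatically before pylon is initialized, as in the following sketch. setenv() is POSIX (on Windows, _putenv_s() would be used instead), and the variable must be set before PylonInitialize() is called for it to take effect.

```c
/* Sketch: request two emulator devices, then count the devices
   pylon can enumerate. */
#include <stdlib.h>   /* setenv */

GENAPIC_RESULT res;
size_t numDevices = 0;

setenv( "PYLON_CAMEMU", "2", 1 );  /* must happen before PylonInitialize() */
PylonInitialize();

res = PylonEnumerateDevices( &numDevices );
CHECK( res );
/* numDevices now includes the two emulator devices. */

PylonTerminate();
```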