TwinCAT Vision: constantly evolving functionality


From configuration to programming and real-time operation, TwinCAT Vision fully incorporates image processing into automation technology. Built seamlessly into the Beckhoff PC-based control system from the very start, TwinCAT Vision continues to evolve today, above all with new user requirements in mind. Beckhoff UK explains how its focus is on enhancing functionality and usability, as well as on integration with other TwinCAT products and features.

TwinCAT Vision unites classic automation technology with image processing, smoothly and conveniently. It enables users to configure cameras, conduct geometric camera calibration directly in TwinCAT Engineering, and program image processing in IEC 61131-3 rather than learn a special programming language. In addition, it allows the PLC to respond directly to the results obtained through image processing – virtually with the next line of control code.

Running image processing algorithms within the TwinCAT real-time environment has a crucial advantage in that the vision algorithms, as well as PLC, motion control and measuring components, all run at the same cycle times – in other words, in tight sync with one another. This means there is no need to manage communication between a non-real-time application and a real-time PLC, motion control or measurement application, thus avoiding the typical delays caused by communication overhead and jitter.

Integrating image processing into the PLC has another advantage: PLC programmers can directly process the results returned by an image processing algorithm, as if from an analogue sensor. For example, they can program instructions like ‘If the object detected in the image is round, set this digital output to TRUE’. In addition, programmers can use all the familiar PLC debugging functions. Thus, they can display an image at any time in a processing flow just as if they were monitoring a variable. If an image is processed in multiple steps, the resulting image can be displayed directly in Visual Studio at every stage. This makes testing algorithms and settings exceptionally quick and easy.

Programmers can change parameters online – for example, to adjust the region of interest or threshold values – and observe the effects directly. With online change – a common practice among PLC programmers – it is even possible to exchange entire functions and test routines on running PLCs, which helps to get image processing rolled out and optimised exceptionally quickly. In addition, images can be saved using function blocks in the PLC or via a camera assistant, worked on offline to develop or optimise analytics, and the results then loaded back into the machine.
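The ‘is it round?’ check mentioned above typically comes down to a simple circularity test on a detected object’s area and perimeter. The following Python sketch illustrates the idea; in a real machine this logic would be written in IEC 61131-3 Structured Text on the PLC, and the function name and threshold here are illustrative assumptions:

```python
import math

def is_round(area: float, perimeter: float, threshold: float = 0.85) -> bool:
    """Circularity = 4*pi*A / P^2; exactly 1.0 for a perfect circle,
    smaller for elongated or ragged shapes."""
    if perimeter <= 0:
        return False
    circularity = 4 * math.pi * area / (perimeter ** 2)
    return circularity >= threshold

# A circle of radius 10: area = pi*r^2, perimeter = 2*pi*r -> circularity = 1.0
digital_output = is_round(math.pi * 100.0, 2 * math.pi * 10.0)
print(digital_output)  # True
```

The returned Boolean can then drive a digital output directly, exactly as the article’s example describes.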

With the distributed clocks in EtherCAT, the external devices used by machine vision applications can also be synchronised with exceptional precision. Most cameras are equipped with a digital trigger input. If this is controlled through a digital output on an EtherCAT terminal – say, an EL2596 EtherCAT LED strobe control terminal – image capture can be triggered to coordinate exactly with a particular conveyor belt position, for example. At the same time, the EL2596 can precisely control the lighting in terms of timing and current.
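The timing behind such a position-based trigger is simple arithmetic: the delay until the part reaches the camera is the remaining distance divided by the belt speed, and the timestamped output is scheduled accordingly. A small Python sketch (the figures and the function name are hypothetical):

```python
def trigger_delay_us(distance_mm: float, belt_speed_mm_per_s: float) -> float:
    """Time in microseconds until the part travels the given distance,
    used to schedule the camera trigger output."""
    return distance_mm / belt_speed_mm_per_s * 1_000_000

# Part detected 50 mm before the camera position, belt running at 500 mm/s:
print(trigger_delay_us(50.0, 500.0))  # 100000.0 us = 100 ms
```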

Init commands are an important basic feature of TwinCAT Vision. Much like the startup list used with EtherCAT modules, they serve to store camera configuration settings, providing a solution that is separate from and independent of camera user sets. As such, they offer an easy means of ensuring that parameters are always assigned consistently to any given camera. What is new, though, is that the commands can now be viewed and edited easily in the graphical Init Command Editor, which greatly improves usability when working with init commands.

The new Init Command Editor visualises camera initialisation parameters and provides a range of editing options – for selection and deselection, sequence changing, selecting alternative user sets, and forcing IP settings, for example. It also clearly indicates changes to and differences in register values.


Additional functions, drivers and TwinCAT connectivity

TwinCAT Vision’s capabilities also expand significantly with each new release. Following are several examples:

  • CLAHE (Contrast Limited Adaptive Histogram Equalization): This function, which adaptively increases an image’s contrast, now supports parameter-driven partitioning of images into smaller regions. This produces better results, particularly in images containing both very light and very dark areas, because the equalisation is limited to smaller, localised zones.
  • Matching: A new function is available to filter key-point results and to compute the homography matrix directly. It increases the precision with which rotated objects are detected and visualised.
  • Connected Components: This function is used to locate contiguous regions in binary images. It also directly identifies the centre of mass and can compute the number of pixels and an enclosing rectangle, which means it offers an alternative to the blob function, based on a different computational algorithm.
  • GeneralizedHoughBallard: This alternative matching function is based on the Hough transform, a highly robust method of detecting shapes such as straight lines and circles in a binary gradient image; the Ballard generalisation extends the technique to arbitrary shapes.
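The clip-limit idea behind CLAHE can be illustrated on a single greyscale tile: each histogram bin is capped and the excess redistributed before equalisation, which limits how strongly contrast is amplified within that region. A pure-Python sketch of the principle (not the TwinCAT implementation):

```python
def clipped_equalize(tile, clip_limit=4):
    """Histogram-equalize one greyscale tile (values 0..255),
    clipping each histogram bin and redistributing the excess,
    as CLAHE does per region to limit contrast amplification."""
    hist = [0] * 256
    for v in tile:
        hist[v] += 1
    # Clip bins at the limit and collect the excess counts
    excess = 0
    for i in range(256):
        if hist[i] > clip_limit:
            excess += hist[i] - clip_limit
            hist[i] = clip_limit
    # Redistribute the excess uniformly across all bins
    bonus = excess // 256
    hist = [h + bonus for h in hist]
    # Build the cumulative mapping and remap the tile
    total = sum(hist)
    cdf, running = [0] * 256, 0
    for i in range(256):
        running += hist[i]
        cdf[i] = round(255 * running / total)
    return [cdf[v] for v in tile]

dark_tile = [10, 10, 10, 12, 12, 200]
print(clipped_equalize(dark_tile))
```

In full CLAHE the image is partitioned into a grid of such tiles (the new parameter-driven partitioning) and the per-tile mappings are interpolated to avoid visible seams.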
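The behaviour of the Connected Components pass described above — labelling contiguous regions of a binary image and returning pixel count, centre of mass and an enclosing rectangle — can be sketched with a 4-connected flood fill in a few lines of Python (the TwinCAT function block differs in interface and in the underlying algorithm):

```python
from collections import deque

def connected_components(img):
    """Label 4-connected foreground regions in a binary image and
    return, per component: pixel count, centre of mass, bounding box."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    stats = []
    for y in range(h):
        for x in range(w):
            if img[y][x] and not seen[y][x]:
                queue, pixels = deque([(y, x)]), []
                seen[y][x] = True
                while queue:  # breadth-first flood fill of one region
                    cy, cx = queue.popleft()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and img[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                ys = [p[0] for p in pixels]
                xs = [p[1] for p in pixels]
                stats.append({
                    "area": len(pixels),
                    "centroid": (sum(xs) / len(xs), sum(ys) / len(ys)),
                    "bbox": (min(xs), min(ys), max(xs), max(ys)),  # x0, y0, x1, y1
                })
    return stats

binary = [
    [1, 1, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 1],
]
print(connected_components(binary))  # two components
```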

Functionality has also been expanded to include new container types and a variety of additional computation options.

The next TwinCAT release will provide a new TwinCAT Vision driver for the 10 Gigabit Ethernet functionality supported by the CX20x2 Embedded PCs, among others. With the new releases of TwinCAT Scope and TwinCAT Analytics, image data can be captured, stored and transmitted with the Scope Server and Analytics Logger. TwinCAT Analytics Logger also enables image data to be transferred to a cloud platform via MQTT. Furthermore, a new image chart type is provided for optimised image display in TwinCAT Scope View.


Visualisation using vision-specific controls

With the new release of the Vision HMI Control, the TwinCAT HMI visualisation solution can now also smoothly incorporate image processing into the state-of-the-art HTML-based user interface. This release includes an expanded image display control that supports the following:

  • Directly linking multiple image variables and switching easily between displayed images.
  • Freezing the image to stop it refreshing and allow detailed analysis of the last capture.
  • Scaling and moving the image within the vision control (by means of touch gestures, mouse input, or direct entry of specific values) for more precise viewing of image details.
  • Displaying a toolbar with directly usable control elements (e.g., selecting images, scaling, creating shapes, freezing the image refresh, and downloading the displayed image).
  • Displaying an information bar showing current details and values, such as image size, pixel coordinates, colour values and shape data.
  • Drawing shapes (points, lines, rectangles, ellipses and polygons) with modifiable positions and sizes, determining their size, area and coordinates, and setting regions of interest, among other things.
  • Displaying graphics (a cross, rectangles and circles) or image overlays for the purpose of setting up and positioning cameras and workpieces.

Without the convenience of this control, users would have to go through the time-consuming process of creating and coding these capabilities themselves using other elements. The new image control, which combines numerous separate controls with extensive JavaScript programming, makes these capabilities available in full and in a readily configurable form.

In addition, the Vision HMI package’s colour control provides the following features:

  • Three options for entering and displaying colour values (a text box, a slider, and a colour input element in the browser).
  • Flexible configuration and editing of the number of channels, the value range and available controls.
  • A choice of horizontal or vertical orientation.
  • Conversion between various colour formats, such as greyscale, RGB and HSV.
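The colour-format conversions such a control performs follow standard formulas. Python’s stdlib colorsys module shows the RGB↔HSV relationship, and a common greyscale conversion uses the ITU-R BT.601 luma weights — an assumption here, since the article does not specify the formula, and the actual control is implemented in JavaScript:

```python
import colorsys

# RGB channels normalised to 0..1; pure red
r, g, b = 1.0, 0.0, 0.0
h, s, v = colorsys.rgb_to_hsv(r, g, b)
print(h, s, v)  # 0.0 1.0 1.0 (hue 0 deg, full saturation and value)

# Greyscale via the ITU-R BT.601 luma weights (one common convention)
grey = 0.299 * r + 0.587 * g + 0.114 * b
print(round(grey, 3))  # 0.299

# And back from HSV to RGB
print(colorsys.hsv_to_rgb(h, s, v))  # (1.0, 0.0, 0.0)
```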

The colour control likewise incorporates various other controls as well as JavaScript programming. It can also link to a four-element array variable to edit a colour filter directly from the PLC. This, too, saves users time and engineering effort when integrating image processing into control applications.

A look ahead to future features

Beckhoff will continue to advance and evolve TwinCAT Vision. The vision library is to be adapted and optimised for coding in C++ so that users can program entirely in a C++ module if they wish, without the need for a PLC. This will also make it easier and more efficient for them to code their own algorithms in C++ and to augment these with TwinCAT Vision functionality. In addition, there are plans to drive the use of machine learning in image processing and to make the machine vision functionality available on TwinCAT/BSD, the new multi-core enabled, Unix-compatible operating system.


