1. Introduction: Empowering Intelligent Systems with Computer Vision on VxWorks
The convergence of machine learning and artificial intelligence (AI) is reshaping industries, driving innovation and automation. At the heart of this transformation lies machine learning, a pivotal subset of AI that enables applications to learn from data and refine their performance without explicit, rule-based programming. A significant modality for data acquisition in intelligent systems is through visual input, where images provide a wealth of information about the environment. Computer vision, a field dedicated to replicating human visual capabilities in machines, plays a crucial role in interpreting and understanding this visual data using advanced computer software and hardware.
OpenCV (Open Source Computer Vision Library) stands as a cornerstone in the domains of computer vision and machine learning. This versatile, cross-platform library, initially developed by Intel, offers a comprehensive suite of algorithms and tools specifically designed for tasks such as object detection, recognition, and image processing. Its capabilities underpin a wide array of applications, including sophisticated robotics, medical imaging analysis, advanced security systems, automated industrial processes, and autonomous vehicles.
A significant advancement in embedded systems is the integration of OpenCV within the latest releases of VxWorks 7 (SR0540 onwards). VxWorks, a real-time operating system renowned for its reliability and deterministic behavior, brings a critical dimension of safety and robustness to computer vision applications. This synergy is particularly advantageous in safety-critical industries where dependable and predictable performance is paramount.
This article will explore the practical implementation of edge detection, a fundamental image processing technique available within the OpenCV library, within the VxWorks environment. We will provide a detailed, step-by-step guide on how to build and execute an OpenCV-based edge detection application on VxWorks, highlighting the key technical considerations and procedures involved.
2. Prerequisites: Setting the Stage for OpenCV on VxWorks
To follow the steps outlined in this article, you will require the following:
- VxWorks Development Environment (SR0540+): A properly installed and configured VxWorks 7 development environment with a software license that supports the necessary components. The SR0540 release or later is required for native OpenCV support.
- USB Drive with GRUB Configuration: A USB flash drive configured with the GRand Unified Bootloader (GRUB) to facilitate booting the VxWorks image on the target hardware. This allows for flexible deployment and testing.
- x86-64 Target PC with Camera: A 64-bit x86 personal computer (the target system) equipped with either an integrated or an external USB camera that is compatible with the UVC (USB Video Class) standard.
- Network Connectivity (Optional but Recommended): Network access for the target PC is useful for transferring files and establishing a Telnet connection for remote interaction.
3. Building the VxWorks Image with OpenCV Support
The first critical step involves building a custom VxWorks Source Build (VSB) that incorporates the necessary OpenCV libraries and drivers.
3.1. VSB Configuration:
Begin by creating a new VSB project using the Workbench development environment. For this example, we utilized the itl_generic_2_0_0_0
Board Support Package (BSP), configured for a CORE CPU architecture and 64-bit address mode to leverage the processing power of modern x86-64 systems.
During the kernel configuration phase of the VSB, it is essential to include the following components:
- USB: Enables support for USB devices, including the camera. This typically involves selecting the INCLUDE_USB component and its associated sub-components for host controller and device class support.
- FBDEV: The Framebuffer Device interface provides an abstraction for graphics hardware, allowing OpenCV to display images on a connected monitor if desired (though not strictly necessary for edge detection processing itself). Include INCLUDE_FBDEV.
- GPUDEV_ITL915: This component enables support for the Intel 915 graphics chipset, which is commonly found in x86 systems. If your target system has a different GPU, you may need to select the corresponding GPUDEV component.
- OPENCV: This is the core component that integrates the OpenCV library into your VxWorks image. Selecting it pulls in the necessary OpenCV modules and dependencies.
Once the VSB build process is complete, navigate to the following directory within your VxWorks installation to find the README file containing specific instructions for building the VxWorks Image Project (VIP) and the Real-Time Process (RTP) file that will contain our edge detection application:
<VSB_file_path>/3pp/OPENCV/opencv-3.3.1/vxworks_examples/
Note: The exact path might vary slightly depending on your VxWorks installation directory and the specific OpenCV version included.
Follow the instructions in the README file to build the VIP, which creates the bootable VxWorks image, and the RTP, which will contain the compiled edge detection application. This process typically involves compiling the example code provided by Wind River for OpenCV on VxWorks.
4. Loading and Executing the VxWorks Image
With the VxWorks image built, the next step is to load and boot it on the target x86-64 PC.
4.1. Booting from USB:
Ensure that your target machine is configured to boot from USB. Connect the USB drive containing the generated VxWorks image and power on the system. The GRUB bootloader on the USB drive should present you with options to boot the VxWorks image. Select the appropriate option to start the VxWorks kernel.
4.2. Telnet Execution of the RTP:
Once VxWorks has successfully booted, you can interact with the target system via a Telnet connection.
- Establish Telnet Connection: Open a terminal or a Telnet client (such as PuTTY on Windows) on your host PC and connect to the IP address of your target VxWorks system. Ensure that the network configuration on both the host and target allows Telnet communication.
- Verify Device Drivers: After establishing the Telnet connection, you can use the devs command in the VxWorks shell to list the available devices. This helps confirm that the necessary drivers, such as the USB video capture driver (/uvc/0), are loaded correctly. A typical listing of devices looks like this:
-> devs
drv name
0 /null
1 /tyCo/0
2 /pcConsole/0
8 /romfs
9 /input/event
11 host:
4 /bd0:1
12 /uvc/0
6 stdio_pty_0xffff8000006d97a0.S
7 stdio_pty_0xffff8000006d97a0.M
value = 35 = 0x23 = '#'
The presence of /uvc/0 indicates that the USB Video Class driver has been successfully initialized and is ready to interact with your USB camera. The /romfs entry is the read-only, ROM-based file system built into the image, where the RTP executable is located.
- Executing the Edge Detection RTP: To run the edge detection application, which is compiled as a Real-Time Process (RTP), use the rtpSp command followed by the path to the executable file within the ROM file system:
-> rtpSp "/romfs/RTP_opencv_edge_detect.vxe"
This command will load and execute the RTP_opencv_edge_detect.vxe
file. Assuming the application is correctly implemented, it will then access the camera feed via the /uvc/0
device, perform edge detection using OpenCV functions, and potentially display the processed output (if framebuffer support is configured) or send the results elsewhere.
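The sample also accepts an optional numeric argument that overrides the default Scharr/Canny threshold (see the source listing at the end of this article). With rtpSp, arguments are passed inside the same quoted string as the executable path; the threshold value of 100 below is purely illustrative:

```
-> rtpSp "/romfs/RTP_opencv_edge_detect.vxe 100"
```

Higher thresholds reject weaker gradients and produce a sparser edge map; lower thresholds retain more detail along with more noise.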
5. Edge Detection Code (Conceptual Overview)
While the specific C++ code for the edge detection application would be extensive, here’s a conceptual overview of the key steps involved within the RTP_opencv_edge_detect.vxe application:
- Include OpenCV Headers: The code begins by including the necessary OpenCV header files, such as those for image processing (opencv2/imgproc.hpp) and video capture (opencv2/videoio.hpp).
- Open Camera Device: It uses OpenCV's cv::VideoCapture class to open and access the camera feed. This typically involves specifying the device index (e.g., cv::VideoCapture(0) for the first camera detected, which often corresponds to /uvc/0 on VxWorks).
- Capture Frames: The application then enters a loop that continuously captures frames from the camera using the videoCapture.read(frame) method (or the >> operator). Each frame is a cv::Mat object, OpenCV's fundamental data structure for representing images.
- Convert to Grayscale (Optional but Common): Edge detection algorithms often work best on grayscale images. The cv::cvtColor() function can be used to convert the color frame to a grayscale image (cv::COLOR_BGR2GRAY).
- Apply Edge Detection Algorithm: OpenCV provides several edge detection algorithms, such as the Canny edge detector (cv::Canny()). This function takes the input image, two threshold values, and optionally an aperture size for the Sobel operator used internally by Canny:
cv::Mat edges;
cv::Canny(grayFrame, edges, lowThreshold, highThreshold, apertureSize);
- Display or Process Results: The resulting edges cv::Mat contains the detected edges. This output can be displayed on a connected screen (if FBDEV is configured and the application includes display logic using OpenCV's cv::imshow() or a framebuffer helper), further processed for object recognition or other tasks, or transmitted over a network.
- Release Resources: When the application terminates, it is good practice to release the camera resource using videoCapture.release() and destroy any OpenCV windows that were created using cv::destroyAllWindows().
The complete example code is shown below for reference:
#include "opencv2/core/utility.hpp"
#include "opencv2/imgproc.hpp"
#include "opencv2/imgcodecs.hpp"
#include <stdio.h>
#include <opencv2/videoio.hpp>
#include "fboutput/cvVxDisplay.hpp"

using namespace cv;
using namespace std;

Mat blurImage, edge1, edge2, cEdge;
int edgeThresh = 50;
int edgeThreshScharr = 50;
Mat GrayFrame;

static void help()
{
    printf("\nThis sample demonstrates Canny edge detection\n"
           "Call:\n"
           "    ./edge [image_name -- Default is ../data/fruits.jpg]\n\n");
}

int main( int argc, const char** argv )
{
    cv::CommandLineParser parser(argc, argv, "{@input||}{help h||}");
    string input = parser.get<string>("@input");
    if (parser.has("help"))
    {
        help();
        return 0;
    }

    /* Initialize the VxWorks framebuffer display back end */
    cvVxInitDisplay();

    /* An optional first argument overrides the Scharr/Canny threshold */
    if (argv[1] != NULL)
    {
        edgeThreshScharr = stoi(argv[1]);
        if (edgeThreshScharr == 0)
        {
            edgeThreshScharr = 50;
        }
    }

    VideoCapture VideoStream(0);    /* index 0 maps to /uvc/0 */
    VideoWriter out_vid;
    Mat ReferenceFrame;
    Mat frame1;

    if (!VideoStream.isOpened())
    {
        printf("Error: Cannot open video stream from camera\n");
        return 1;
    }

    VideoStream.set(CAP_PROP_FRAME_WIDTH, 640);
    VideoStream.set(CAP_PROP_FRAME_HEIGHT, 480);
    VideoStream.set(CAP_PROP_FPS, 30);

    out_vid.open("edge.avi", VideoWriter::fourcc('M','J','P','G'), 10,
                 Size(VideoStream.get(CAP_PROP_FRAME_WIDTH),
                      VideoStream.get(CAP_PROP_FRAME_HEIGHT)), true);

    do
    {
        VideoStream >> frame1;
        if (frame1.empty())
            continue;                       /* skip dropped frames */

        /* The UVC camera delivers YUY2; convert to BGR for processing */
        cvtColor(frame1, ReferenceFrame, COLOR_YUV2BGR_YUY2);
        cEdge.create(ReferenceFrame.size(), ReferenceFrame.type());
        cvtColor(ReferenceFrame, GrayFrame, COLOR_BGR2GRAY); /* convert image to grayscale */
        blur(GrayFrame, blurImage, Size(3,3));               /* reduce noise before edge detection */

#if 0   /* This code would output a lower-quality edge detection */
        /* Run the Canny edge detector directly on the grayscale image */
        Canny(blurImage, edge1, edgeThresh, edgeThresh*3, 3);
        cEdge = Scalar::all(0);             /* fill cEdge with zeros, all black */
        ReferenceFrame.copyTo(cEdge, edge1);
#endif

        /* Compute the x and y gradients with the Scharr operator, then
         * feed them to Canny for a higher-quality result */
        Mat dx, dy;
        Scharr(blurImage, dx, CV_16S, 1, 0);
        Scharr(blurImage, dy, CV_16S, 0, 1);
        Canny(dx, dy, edge2, edgeThreshScharr, edgeThreshScharr*3);

        cEdge = Scalar::all(0);              /* fill cEdge with zeros, all black */
        ReferenceFrame.copyTo(cEdge, edge2); /* keep only the edge pixels */

        /* The framebuffer expects BGRA */
        Mat rgb;
        cvtColor(cEdge, rgb, COLOR_BGR2BGRA);
        cvVxShow(rgb);

        if (out_vid.isOpened())
            out_vid.write(cEdge);
    } while (1);

    return 0;
}
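Note that the capture loop runs forever and also records the processed frames to edge.avi in the RTP's working directory. If you add an exit condition and call out_vid.release() so the AVI file is finalized, the recording can be pulled back to the host for inspection, for instance with the kernel shell's copy routine, assuming the host: file system shown in the devs listing is writable (the destination path here is purely illustrative):

```
-> copy "edge.avi", "host:/tmp/edge.avi"
```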