Monday, May 31, 2010

OpenCV and Matlab (and Visual Studio C++)

Matlab and OpenCV each have advantages for image processing work. I wanted to take some code written with OpenCV (in Microsoft Visual Studio C++) and run it from a Matlab command window. After digging around on the web for help, I managed to get this to work. What makes it challenging is linking in the OpenCV libraries. Simple C/C++/Fortran code can be exposed as Matlab functions by creating MEX files; to gain access to the OpenCV library, the MEX file can have a second file linked in at compile time, and that second file can use OpenCV as usual. Without getting into the details, here are the steps I took. Use your head and change the directory paths and whatnot to match your setup.

Setting up Matlab
  • From a Matlab command window, run the command mex -setup. Follow the dialog and select the Microsoft Visual C++ compiler.
  • Find mexopts.bat (see below for locating this file) and open for editing. 
  • Add set OPENCVDIR=C:\Program Files\OpenCV2.1
  • Add %OPENCVDIR%\include\opencv to the set INCLUDE statement. 
  • Add %OPENCVDIR%\lib;%OPENCVDIR%\bin to the set LIB statement. 
  • Add cv210.lib highgui210.lib cvaux210.lib cxcore210.lib to the end of the set LINKFLAGS statement. 
The location for mexopts.bat on my system is C:\Documents and Settings\Default\Application Data\MathWorks\MATLAB\R2008b. You can find where Matlab is looking for this file by trying to compile a MEX file with the verbose option enabled (e.g. mex -v program.cpp).
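For reference, the relevant portions of an edited mexopts.bat might end up looking roughly like this. The variable layout follows the default MSVC template; treat it as a sketch, not a drop-in copy:

```bat
rem Sketch of the edited sections of mexopts.bat (assumes a default OpenCV 2.1 install)
set OPENCVDIR=C:\Program Files\OpenCV2.1
set INCLUDE=%OPENCVDIR%\include\opencv;%INCLUDE%
set LIB=%OPENCVDIR%\lib;%OPENCVDIR%\bin;%LIB%
set LINKFLAGS=%LINKFLAGS% cv210.lib highgui210.lib cvaux210.lib cxcore210.lib
```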

Setting up Visual Studio
Edit project properties as follows to allow compilation of MEX files.
  • Add the following folders to the C/C++ additional include directories:
    • [MATLAB_DIR]\extern\include
    • [MATLAB_DIR]\simulink\include
  • Add the following folder to the Linker additional directories:
    • [MATLAB_DIR]\extern\lib\win32\microsoft
  • Add the following Linker input additional dependencies (one per line):
    • libmx.lib
    • libmex.lib
    • libmat.lib
  • Add #include "mex.h" to your source code to allow the use and compilation of mx* functions.
Note: For the instructions above, replace [MATLAB_DIR] with the actual path on your system.

Creating Files

  • OpenCV File
    • Assuming you already have a working project that uses OpenCV, modify the source file to contain only a set of functions you would like to call from a MEX file.
  • MEX File
    • The MEX file needs to be written; see MathWorks' online documentation for how to do this. I started from one of their examples.
    • Include extern references to functions that you want to use from your OpenCV file.
    • Use the functions in your MEX file.
Compiling

  • Compile the source that uses OpenCV to create an object file (e.g. opencv_file.obj)
  • Run the following command in a Matlab command window (with correct paths to your files):
    • mex mex_file.cpp opencv_file.obj
  • Fix any errors to get this to compile.
You will now have a file similar to mex_file.mexw32. This MEX file behaves like a normal Matlab function (e.g. you can call mex_file() from the Matlab command window).
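Putting the pieces together, a minimal MEX gateway might look like the following. This is an untested sketch (it only compiles through `mex`), and threshold_count is a hypothetical routine assumed to be compiled into opencv_file.obj:

```cpp
// Sketch of a minimal MEX gateway (mex_file.cpp); untested here.
#include "mex.h"

extern int threshold_count(const unsigned char* pixels, int n, unsigned char thresh);

void mexFunction(int nlhs, mxArray* plhs[], int nrhs, const mxArray* prhs[]) {
    if (nrhs < 1 || !mxIsUint8(prhs[0]))
        mexErrMsgTxt("expected a uint8 image");
    const unsigned char* data = (const unsigned char*)mxGetData(prhs[0]);
    int n = (int)mxGetNumberOfElements(prhs[0]);
    // Return the count as a double, the natural Matlab scalar type.
    plhs[0] = mxCreateDoubleScalar(threshold_count(data, n, 128));
}
```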

Tuesday, May 18, 2010

Point Grey Bumblebee2

Received a Point Grey Research Bumblebee2 stereo camera this week. The package comes with a proprietary SDK for use in a Windows environment. I need to have the camera running in Ubuntu, so for now I won't be using their SDK. Since the camera uses the IEEE 1394 (FireWire) standard, I've installed libdc1394 so I can attempt to communicate with the camera. To install in Ubuntu, extract the downloaded package and run, from a command window:
  1. ./configure
  2. make
  3. sudo make install
The computer also needs FireWire capability (such as from a PCI card like the one supplied in my camera kit). To display a list of connected PCI devices, type "lspci" in a command window, and make sure the interface was initialized at boot time by looking for FireWire in the kernel log (e.g. "grep 1394 /var/log/kern.log").
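Once the library is installed, enumerating attached cameras is a reasonable first step. Here is a sketch using the libdc1394 v2 API; it is untested here, since it needs the library installed and a camera connected:

```c
/* Sketch: count FireWire cameras visible to libdc1394 v2. */
#include <stdio.h>
#include <dc1394/dc1394.h>

int main(void) {
    dc1394_t *ctx = dc1394_new();          /* library context */
    if (!ctx) return 1;
    dc1394camera_list_t *list;
    if (dc1394_camera_enumerate(ctx, &list) == DC1394_SUCCESS) {
        printf("cameras found: %u\n", list->num);
        dc1394_camera_free_list(list);
    }
    dc1394_free(ctx);
    return 0;
}
```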

Now to figure out how to obtain some images from the camera...

Thursday, April 29, 2010

Robot Operating System from Willow Garage

A project I'm working on (more details later) will likely be using Robot Operating System (ROS) by Willow Garage. Despite the name, ROS is not an operating system so much as a (primarily Unix-based) open source framework created to provide a common environment for developing robotic applications. Willow Garage is closely tied to OpenCV development, and ROS should make it simple to integrate the most bleeding-edge OpenCV functions along with the digital camera drivers necessary for my project. I have set up an Ubuntu virtual machine using Sun's VirtualBox to familiarize myself with ROS. The documentation available on these projects' websites is superb, so I will not go into more detail about setting this up. In the near future I will set up a dedicated PC. For now, I am going through the many tutorials available on the ROS website.

[UPDATE]
I have a dedicated PC set up running Ubuntu. To install OpenCV (after downloading the appropriate tarball):

   1. tar -xjf OpenCV-2.1.0.tar.bz2
   2. mkdir opencv.build
   3. cd opencv.build
   4. cmake [options] ../OpenCV-2.1.0 # CMake way
   5. make -j 2
   6. sudo make install
   7. sudo ldconfig # linux only


Short-term goal is to create a basic image publisher/subscriber in ROS using OpenCV IplImages.

Tuesday, April 6, 2010

OpenCV and Microsoft Visual Studio 2008

I'll be doing some image processing work. OpenCV is a great option and is open source. For starters I'll be using OpenCV in a Visual Studio environment. Instructions for getting everything set up can be found here.


In addition to the initial setup, new Win32 Console projects should have the following additional dependencies added to the Linker > Input configuration properties:


  • cv210.lib
  • cxcore210.lib
  • highgui210.lib
Other tips for running the OpenCV samples that come with the installation as Win32 Console projects:
  • #include "stdafx.h" must be included 
  • Files referenced in the code must be in the Projects/ProjectName/ProjectName directory (and hardcoded file paths will likely need to be modified to reflect their new location)
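To confirm the setup works end to end, a minimal program using the OpenCV 2.1-era C API (matching the cv210 libraries above) might look like this. It is an untested sketch, and "lena.jpg" is a placeholder for any image file placed in the project directory noted above:

```cpp
// Minimal OpenCV 2.1 (C API) smoke test: load an image and display it.
#include "stdafx.h"   // required for Win32 Console projects, as noted above
#include <cv.h>
#include <highgui.h>

int _tmain(int argc, _TCHAR* argv[]) {
    IplImage* img = cvLoadImage("lena.jpg");  // placeholder filename
    if (!img) return 1;                       // load failed
    cvNamedWindow("test");
    cvShowImage("test", img);
    cvWaitKey(0);                             // wait for a keypress
    cvReleaseImage(&img);
    return 0;
}
```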

Tuesday, December 8, 2009

Past Projects

Here are a few of the autonomous robotics projects I did as an undergrad. I will only describe them briefly for now, but would be happy to go into more detail if there is interest.

Autonomous Can Sweeper
The goal of this project was to create a robot that could autonomously locate cans scattered randomly across a board and push them off the board. The robot spins and uses an infrared range sensor to find a can within the sensor's viewing range, then moves in the direction of a located can until infrared reflectance sensors detect the edge of the board. If the robot spins 360 degrees without locating a can, it moves to a new part of the board and continues its search. The logic for this robot was written in C and runs on an Atmel ATMega644 microcontroller. In the video below, you can see the results. The robot sometimes appears to give up on pushing a can; this is due to the can moving out of the IR range sensor's field of view.




Autonomous Maze Navigation
The goal of this project was to have a robot autonomously navigate a maze and position itself over designated points within a time limit. The dimensions of the maze are known and the kinematics of the robot are modeled. The robot uses IR range sensors and particle filter localization to determine its location within the maze. The beauty of the particle filter technique is that you can place the robot anywhere within the maze, and after a couple of seconds of movement it can figure out its location with good precision. It then plans a path to the next location and moves to it while continuously updating its believed position within the maze. The code was written in C++ and ran on a painfully slow laptop that communicated with the robot via a typical wireless router. Range measurements were transmitted from robot to laptop, and movement commands from laptop to robot. Results below.



PolyKart
This is an introductory-level robotics program put on by the IEEE student branch at Cal Poly. I was a student in the program as a sophomore and directed the program as a senior. It is a great introduction to microcontrollers and a practical implementation of electrical and C-programming basics. The goal here is to autonomously navigate a high-contrast line using IR reflectance sensors. At the end of the program, the robots are run head-to-head to see who has the most efficient algorithm. Overtaking the other robot or staying on the line longer than the other robot are the win conditions for each round. The video below is a match from some of the students I instructed. Note that the video plays at twice the true speed (had my video settings incorrect).

First Post

Hello traveler from teh interwebs. I will be documenting my technology related work and interests using this blog. I am pursuing an MSEE at California Polytechnic State University and am interested in digital systems, especially relating to controls and signal processing. If you find anything on here that is useful to you, please leave a comment so I know who's visiting and send me a link to your blog/site.