Friday, March 29, 2013

OpenNI + Depth & IR Compression + Pandaboard

[Update 4/14] I do not have access to a Pandaboard yet, so for now I am working on this in Ubuntu with a Kinect.

I am looking to compress depth images from a Kinect or Asus Xtion using OpenNI. Currently I am trying to modify NiViewer to capture and save depth frames as images.

I edited the files as listed on the Google Groups post. Alternate link:

Install OpenCV:

Edit the makefile for NiViewer using this:

[Update 4/7]
I had a little trouble with the makefile for NiViewer using the link above. Here is what I have for OpenNI/Platform/Linux/Build/Samples/NiViewer/Makefile:

include ../../Common/CommonDefs.mak

BIN_DIR = ../../../Bin

# (the variable names for the next two blocks were missing from the original
# paste; INC_DIRS/LIB_DIRS are the standard OpenNI makefile variables)
INC_DIRS = \
	../../../../../Include \
	../../../../../Samples/NiViewer

LIB_DIRS = /usr/local/lib

SRC_FILES = ../../../../../Samples/NiViewer/*.cpp

ifeq ("$(OSTYPE)","Darwin")
LDFLAGS += -framework OpenGL -framework GLUT
endif

USED_LIBS += glut GL opencv_core opencv_highgui

EXE_NAME = NiViewer

CFLAGS   = -pipe -O2 -I/usr/local/include/opencv -D_REENTRANT $(DEFINES)
CXXFLAGS = -pipe -O2 -I/usr/local/include/opencv -D_REENTRANT $(DEFINES)

include ../../Common/CommonCppMakefile

[Update 4/14]
I ran NiViewer, right-clicked, and chose start capture. So as of now, it saves depth images in OpenNI/Platform/Linux/Bin/(your platform)/CapturedFrames. Remember to create the CapturedFrames folder first.

Now, I want it to start capturing as soon as I run the program, so I edited the NiViewer.cpp file as follows:

Comment this part out in the main method (the part that handles the user interface):
reshaper.zNear = 1;
reshaper.zFar = 100;

cb.mouse_function = MouseCallback;
cb.motion_function = MotionCallback;
cb.passive_motion_function = MotionCallback;
cb.keyboard_function = KeyboardCallback;
cb.reshape_function = ReshapeCallback;
glutInit(&argc, argv);
glutInitDisplayString("stencil double rgb");
glutInitWindowSize(WIN_SIZE_X, WIN_SIZE_Y);
glutCreateWindow("OpenNI Viewer");
Before the audioShutdown() command is called, I added these lines:

int i = 0;
while (i < 10)
{
    // (the loop body did not survive in the original post)
    i++;
}
I compiled and ran NiViewer, but I was only getting blank images. This is because the saveFrame_depth() function (from the Google Groups post) uses the LINEAR_HISTOGRAM coloring, and the calculateHistogram method needs to be called before that histogram is used. Previously, drawFrame() was calling calculateHistogram, which is why it worked. To check whether the rest of the pipeline worked at all, I found the line inside saveFrame_depth() that says switch (g_DrawConfig.Streams.Depth.Coloring) and changed it to switch (PSYCHEDELIC); now I am able to see something in my saved depth image.
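For reference, this is roughly what the linear-histogram coloring relies on: calculateHistogram builds a cumulative depth histogram so each depth value maps to the fraction of pixels at or beyond it. The sketch below is my own self-contained illustration of the technique (the function name and signature are mine, not NiViewer's):

```cpp
#include <cstdint>
#include <vector>

// Build a NiViewer-style cumulative depth histogram: hist[d] ends up near 1
// for close depths and near 0 for far ones, so nearer pixels draw brighter.
std::vector<float> calcDepthHistogram(const std::vector<uint16_t>& depths,
                                      int maxDepth)
{
    std::vector<float> hist(maxDepth + 1, 0.0f);
    int nPoints = 0;
    for (uint16_t d : depths)
    {
        if (d != 0)                 // 0 means "no reading" and is skipped
        {
            hist[d] += 1.0f;
            ++nPoints;
        }
    }
    for (int i = 1; i <= maxDepth; ++i)      // cumulative sum
        hist[i] += hist[i - 1];
    if (nPoints > 0)                         // normalize and invert
        for (int i = 1; i <= maxDepth; ++i)
            hist[i] = 1.0f - (hist[i] / nPoints);
    return hist;
}
```

The LINEAR_HISTOGRAM branch then does the equivalent of nBlue = nRed = nGreen = hist[*pDepth] * 255, which is why saveFrame_depth produces blank images when nothing has filled the histogram beforehand.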

[Update 4/17]
I wanted to see how much space depth data would take when stored without any compression. I started out by writing the depth values to a plain text file (as ASCII). This is pretty simple, but I am writing everything out in case anyone is confused. To do this, I created a new file at the beginning of the saveFrame_depth function:
ofstream myfile;"depthData_ascii");
Then inside the nested for loops (after the switch-case statements), you will see data being assigned through the red, green, and blue pointers (e.g. Bptr[nX] = nBlue). I added these lines under those assignments:
myfile << *pDepth;
myfile << " ";
The lines above should be inside the second for loop. Then I added
myfile << "\n";
at the end of the first for loop (but still inside it). So basically it should look like this:

   ofstream myfile;"depthData_ascii");
   // ... inside the second (inner) for loop:
         myfile << *pDepth;
         myfile << " ";
   // ... at the end of the first (outer) for loop:
      myfile << "\n";

This file seems to take up around 1.2 MB of space per frame.

[Update 4/21]
Saving it as a plain text file takes up too much space: 1.2 MB × 30 fps × 60 s/min × 60 min/hr is about 126.6 GB per hour. So I tried saving the depth data inside a PNG as 16-bit unsigned short integers. To do this, create a new matrix at the beginning of the saveFrame_depth function. The dimensions of the matrix should be pDepthMD->YRes() by pDepthMD->XRes(). Instead of creating it as 8-bit unsigned (CV_8U), use 16-bit unsigned (CV_16U). The code for this looks like:
cv::Mat depthArray = cv::Mat(pDepthMD->YRes(), pDepthMD->XRes(), CV_16U);
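As a sanity check on the storage arithmetic above, here is a tiny helper of my own (not from the post; the raw case assumes 640x480 frames at two bytes per pixel):

```cpp
// Uncompressed storage rate in GB per hour for a given per-frame size in MB
// at a given frame rate. (My own helper, just to verify the numbers.)
double gbPerHour(double mbPerFrame, int fps)
{
    return mbPerFrame * fps * 3600.0 / 1024.0;   // 3600 s/hr, 1024 MB/GB
}

// gbPerHour(1.2, 30) -> ~126.6 GB/hr for the ASCII dump, matching the post.
// gbPerHour(640.0 * 480.0 * 2 / (1024 * 1024), 30) -> ~61.8 GB/hr for raw
// 16-bit binary, i.e. roughly half the ASCII cost before any compression.
```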
Inside the first for loop, there are RGB pointers being created with the colorArr variable. Create a pointer for your depthArray in that same location.
// wrong: only fills the left half of the image (see note below)
uchar* depthArrayPtr = depthArray.ptr<uchar>(nY);
// right:
ushort* depthArrayPtr = depthArray.ptr<ushort>(nY);
Edit: I had to use ushort for the pointer type. At first this confused me, but the reason is element stride: ptr<T> returns a pointer that is indexed in units of sizeof(T), so with uchar* the expression depthArrayPtr[nX] advances one byte per column instead of two and only covers the first half of each 16-bit row. That is why the uchar version saved the image on the left half, whereas ushort* steps two bytes per column and fills the whole row.

image when using uchar*
image when using ushort*

Inside the second for loop (toward the end of it), values are assigned through those pointers (e.g. Bptr[nX] = nBlue). Under those lines, add this:
depthArrayPtr[nX] = *pDepth;
All that is happening is that I am storing the raw depth value in a matrix. Save this matrix as a PNG image by adding these lines at the end of the saveFrame_depth function:
vector<int> compression_params;
compression_params.push_back(CV_IMWRITE_PNG_COMPRESSION);  // these two lines were not in my
compression_params.push_back(0);                           // first version; 0 = no compression
imwrite(str_aux_raw, depthArray, compression_params);
I had to #include "cv.h", "highgui.h", <vector>, <iostream>, and <fstream>. In addition, I added using namespace std; after my include statements.

This will save the image as a PNG, and it should have the original depth values. The total size comes out to around 100-200 KB (depending on the depth image).


To check that the PNG is being saved correctly and the values are right, I used MATLAB. In MATLAB, enter this command:
image = imread('/path/to/image.png'); imagesc(image); colorbar;
This gives you a color scale of the values represented in your depth image. You can also click Tools->Data Cursor (in the image window) and select a point in the image to get the specific value at that point.

Other information: ~1 minute of recording depth as PNGs (with 0 compression) came to 14.5 MB. It may not be doing 30 fps, though: there were 117 images in all, so that is about 2 frames per second.

[Update 4/21]
I am now trying to get IR data and store it in an image. I found some code that could help:

[Update 4/26]
I don't think the code is the problem here, because viewing IR data is not working at all in NiViewer. However, I do have some information about compressing depth data. I used CV_IMWRITE_PNG_COMPRESSION in imwrite and set the value to 50 (note that OpenCV's PNG compression level normally ranges from 0 to 9). The image only shrank by about 10 KB, so not much. However, if I took a 1.5 MB binary/text file and zipped it, it went down to 67 KB. So maybe I could use gzip to zip the files as I am saving them.
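Part of why zipping the raw dump works so well: depth frames are dominated by long runs of identical or slowly varying values, which general-purpose compressors like gzip exploit. A quick way to see that redundancy (plain C++; countRuns is my own made-up helper, not from any library):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Count runs of identical consecutive values; fewer runs means more
// redundancy for a general-purpose compressor to squeeze out.
int countRuns(const std::vector<uint16_t>& data)
{
    if (data.empty()) return 0;
    int runs = 1;
    for (std::size_t i = 1; i < data.size(); ++i)
        if (data[i] != data[i - 1]) ++runs;
    return runs;
}

// A flat wall at a constant depth is a single run across the whole frame:
// countRuns(std::vector<uint16_t>(640 * 480, 1200)) -> 1
```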


Thursday, March 28, 2013

Saving RGB and Depth Data from Kinect as an Image (with OpenNI)

This is not my work; I am reposting it just in case it gets removed. Source:!msg/openni-dev/iYtcrrA365U/wEFT2_-mH0wJ

Hi all,
I am working on foreground segmentation using Kinect. I needed to extract the color and depth images in a synchronized and registered way, and this thread has been very useful for me. Here is the code I have used to do it, in case someone needs to do the same:

I started by modifying the NiViewer sample code, which you can find in:

Then I modified some of its files to achieve .jpg recording jointly with the .oni file. To save the images, I used OpenCV.

Extract RGB images (in Device.cpp):

// new includes:
#include "cv.h"
#include "highgui.h"
#include <sstream>
#include <string>

// jaume. New function to save RGB frames in jpg format.
void saveFrame_RGB(int num)
{
    cv::Mat colorArr[3];
    cv::Mat colorImage;
    const XnRGB24Pixel* pImageRow;
    const XnRGB24Pixel* pPixel;

//  ImageMetaData* g_ImageMD = getImageMetaData();
    pImageRow = g_ImageMD.RGB24Data();

    colorArr[0] = cv::Mat(g_ImageMD.YRes(), g_ImageMD.XRes(), CV_8U);
    colorArr[1] = cv::Mat(g_ImageMD.YRes(), g_ImageMD.XRes(), CV_8U);
    colorArr[2] = cv::Mat(g_ImageMD.YRes(), g_ImageMD.XRes(), CV_8U);

    for (int y = 0; y < g_ImageMD.YRes(); y++)
    {
        pPixel = pImageRow;
        uchar* Bptr = colorArr[0].ptr<uchar>(y);
        uchar* Gptr = colorArr[1].ptr<uchar>(y);
        uchar* Rptr = colorArr[2].ptr<uchar>(y);
        for (int x = 0; x < g_ImageMD.XRes(); ++x, ++pPixel)
        {
            Bptr[x] = pPixel->nBlue;
            Gptr[x] = pPixel->nGreen;
            Rptr[x] = pPixel->nRed;
        }
        pImageRow += g_ImageMD.XRes();
    }

    cv::merge(colorArr, 3, colorImage);  // not in the original paste, but needed
                                         // before colorImage can be saved

    char framenumber[10];
    sprintf(framenumber, "%d", num);     // this assignment was missing in the paste

    std::stringstream ss;
    std::string str_frame_number;
    ss << framenumber;
    ss >> str_frame_number;

    std::string str_aux = "CapturedFrames/image_RGB_" + str_frame_number + ".jpg";
    IplImage bgrIpl = colorImage;           // create an IplImage header for the cv::Mat
    cvSaveImage(str_aux.c_str(), &bgrIpl);  // save it with the frame number in the name
}


Extract depth images (in Draw.cpp):

// new includes:
#include "cv.h"
#include "highgui.h"

// jaume. New function to save the depth map in jpg format. I have based
// this implementation on the draw-images function.
void saveFrame_depth(int num)
{
    const DepthMetaData* pDepthMD = getDepthMetaData();
    const XnDepthPixel* pDepth = pDepthMD->Data();

    cv::Mat depthImage;
    cv::Mat colorArr[3];

    colorArr[0] = cv::Mat(pDepthMD->YRes(), pDepthMD->XRes(), CV_8U);
    colorArr[1] = cv::Mat(pDepthMD->YRes(), pDepthMD->XRes(), CV_8U);
    colorArr[2] = cv::Mat(pDepthMD->YRes(), pDepthMD->XRes(), CV_8U);

    for (XnUInt16 nY = pDepthMD->YOffset(); nY < pDepthMD->YRes() + pDepthMD->YOffset(); nY++)
    {
        XnUInt8* pTexture = TextureMapGetLine(&g_texDepth, nY) + pDepthMD->XOffset() * 4;

        uchar* Bptr = colorArr[0].ptr<uchar>(nY);
        uchar* Gptr = colorArr[1].ptr<uchar>(nY);
        uchar* Rptr = colorArr[2].ptr<uchar>(nY);

        for (XnUInt16 nX = 0; nX < pDepthMD->XRes(); nX++, pDepth++, pTexture += 4)
        {
            XnUInt8 nRed = 0;
            XnUInt8 nGreen = 0;
            XnUInt8 nBlue = 0;
            XnUInt8 nAlpha = g_DrawConfig.Streams.Depth.fTransparency * 255;

            XnUInt16 nColIndex;

            switch (g_DrawConfig.Streams.Depth.Coloring)
            {
            case LINEAR_HISTOGRAM:
                nBlue = nRed = nGreen = g_pDepthHist[*pDepth] * 255;
                break;
            case PSYCHEDELIC_SHADES:
                nAlpha *= (((XnFloat)(*pDepth % 10) / 20) + 0.5);
                // fall through to PSYCHEDELIC
            case PSYCHEDELIC:
                switch ((*pDepth / 10) % 10)
                {
                case 0: nRed = 255;                              break;
                case 1: nGreen = 255;                            break;
                case 2: nBlue = 255;                             break;
                case 3: nRed = 255;  nGreen = 255;               break;
                case 4: nGreen = 255; nBlue = 255;               break;
                case 5: nRed = 255;  nBlue = 255;                break;
                case 6: nRed = 255;  nGreen = 255; nBlue = 255;  break;
                case 7: nRed = 127;  nBlue = 255;                break;
                case 8: nRed = 255;  nBlue = 127;                break;
                case 9: nRed = 127;  nGreen = 255;               break;
                }
                break;
            case RAINBOW:
                nColIndex = (XnUInt16)(*pDepth / (g_fMaxDepth / 256));
                nRed = PalletIntsR[nColIndex];
                nGreen = PalletIntsG[nColIndex];
                nBlue = PalletIntsB[nColIndex];
                break;
            case CYCLIC_RAINBOW:
                nColIndex = (*pDepth % 256);
                nRed = PalletIntsR[nColIndex];
                nGreen = PalletIntsG[nColIndex];
                nBlue = PalletIntsB[nColIndex];
                break;
            }

            Bptr[nX] = nBlue;
            Gptr[nX] = nGreen;
            Rptr[nX] = nRed;
        }
    }

    cv::merge(colorArr, 3, depthImage);

    char framenumber[10];
    sprintf(framenumber, "%d", num);     // this assignment was missing in the paste

    std::stringstream ss;
    std::string str_frame_number;
    ss << framenumber;
    ss >> str_frame_number;

    // CapturedFrames folder must exist!!!
    std::string str_aux = "CapturedFrames/image_depth_" + str_frame_number + ".jpg";

    IplImage bgrIpl = depthImage;           // create an IplImage header for the cv::Mat
    cvSaveImage(str_aux.c_str(), &bgrIpl);  // save it with the frame number in the name
}


The file where I use these new functions: Capture.cpp.

//New include:
#include <iostream>

//Function modified to save frames in jpg format:
XnStatus captureFrame()
{
        XnStatus nRetVal = XN_STATUS_OK;
        if (g_Capture.State == SHOULD_CAPTURE)
        {
                XnUInt64 nNow;
                xnOSGetTimeStamp(&nNow);   // the timestamp call was missing in the paste
                nNow /= 1000;

                if (nNow >= g_Capture.nStartOn)
                {
                        g_Capture.nCapturedFrames = 0;
                        g_Capture.State = CAPTURING;
                }
        }
        if (g_Capture.State == CAPTURING)
        {
                nRetVal = g_Capture.pRecorder->Record();
                // presumably the saveFrame_RGB/saveFrame_depth calls go here;
                // that part of the paste did not survive
        }

        return XN_STATUS_OK;
}

To test the code, you must execute the new NiViewer app and use the Capture->Start option that appears when clicking the left mouse button.

That's all. I hope this code will be useful.


Friday, March 1, 2013

Installing OpenNI, SensorKinect, and PrimeSense on a Raspberry Pi

I did not create these instructions; I am only reposting them here in case they get removed.
_Using Win32DiskImager (for Windows)
_Downloaded 2012-08-16-wheezy-raspbian from the website ( )
_Burnt the image onto a 16 GB Class 10 (up to 95 MB/s) SanDisk Extreme Pro card
_Power up the Pi using a 5V 1A supply
_Note: italics is what is typed into the Pi shell or added to scripts/code

_Power up the Pi. I am assuming you are plugged into the DVI port; if you cannot, and only want to SSH, then you need to know the IP address (one way is to look at the router's DHCP assignment table)

_On Pi startup menu:
- expand to use the whole card
- change the timezone to Aust -> Syd
- change locale to AST-UTF-8 and default for system GB-UTF-8
- turn on the SSH server (it should be on by default)
- do an update

_Then from the shell, overclock (for details see ):
sudo nano /boot/config.txt
_If you don't want to void your warranty on the Pi then I suggest you use these settings

_If you don't mind voiding your warranty, add these lines after the last line. These are the settings I am using; if they don't work on boot-up, then (I think) hold "shift" while booting to force a non-overclocked boot:
force_turbo=1

_Get the IP address for eth0, assuming you are using Ethernet:
ifconfig eth0

_Then shut down and power-cycle the Pi so the card can be expanded and the new, faster config can be used:
sudo shutdown now

_ After rebooting check the new CPU speed:
more /proc/cpuinfo
_Also, if you are worried about the temperature, you can check it by going here: cd /opt/vc/bin/
_and running this script: ./vcgencmd measure_temp
_For all the possible commands, use: ./vcgencmd commands

_ Go to the shell (via SSH using the IP address is easiest; tunnel X through SSH, and for Windows use an X server like Xming):
sudo apt-get update
sudo apt-get install git g++ python libusb-1.0-0-dev freeglut3-dev openjdk-6-jdk doxygen graphviz

_ Get stable OpenNI and the drivers (this failed several times, but keep trying):
mkdir stable
cd stable
git clone
git clone git://
git clone

_ Get unstable OpenNI and the drivers:
mkdir unstable
cd unstable
git clone -b unstable 
git clone git:// -b unstable
git clone -b unstable

_I will do the following just for stable, but all the steps must be done for unstable too.
Note: only do the install step for one or the other.

_The calc_jobs_number() function in the scripts doesn't seem to work on the Pi, so change the Python script:
nano ~/stable/OpenNI/Platform/Linux/CreateRedist/
_from containing this:
MAKE_ARGS += ' -j' + calc_jobs_number()
_to this:
MAKE_ARGS += ' -j1'
_ You must also change the ARM compiler settings for this distribution of the Pi:
nano ~/stable/OpenNI/Platform/Linux/Build/Common/Platform.Arm
_from:
CFLAGS += -march=armv7-a -mtune=cortex-a8 -mfpu=neon -mfloat-abi=softfp #-mcpu=cortex-a8
_to:
CFLAGS += -mtune=arm1176jzf-s -mfpu=vfp -mfloat-abi=hard

_Then run:
cd ~/stable/OpenNI/Platform/Linux/CreateRedist/

Go to the Redist and run the install (for stable or unstable, not both):
cd ~/stable/OpenNI/Platform/Linux/Redist/OpenNI-Bin-Dev-Linux-Arm-v1.5.4.0

_ Also edit the Sensor and SensorKinect makefile CFLAGS parameters

_ and the Sensor and SensorKinect redistribution scripts:
nano ~/stable/Sensor/Platform/Linux/CreateRedist/RedistMaker
_ For both, change
-j$(calc_jobs_number) -C ../Build
to
-j1 -C ../Build

_ Then create the redistributables
_Sensor (primesense)
cd ~/stable/Sensor/Platform/Linux/CreateRedist/
_ and SensorKinect (note this does not work with stable OpenNI, only unstable: it fails about halfway through with a missing header file; the SensorKinect git page also says you need the unstable version of OpenNI for it to work):
cd ~/stable/SensorKinect/Platform/Linux/CreateRedist/

_ Then install either stable or unstable
_ Install for stable:
cd ~/stable/Sensor/Platform/Linux/Redist/Sensor-Bin-Linux-Arm-v5.1.0.41
cd ~/stable/SensorKinect/Platform/Linux/Redist/Sensor-Bin-Linux-Arm-v5.1.2.1
sudo ./
_ Install for unstable:
cd ~/unstable/Sensor/Platform/Linux/Redist/Sensor-Bin-Linux-Arm-v5.1.2.1
sudo ./

_ Try running the sample reading programs after plugging in the sensor (check with lsusb):
cd ~/stable/OpenNI/Platform/Linux/Bin/Arm-Release
sudo ./Sample-NiCRead
./Sample-NiBackRecorder time 1 depth vga

_ Problems I had:
_You need a powered hub to run the Xtion
_ If you get timeout errors, it can be because the hub isn't giving enough power, even if the device shows up in "lsusb". I had to unplug the keyboard and mouse from the hub before it would work
_ I had to try different ports on the hub to get some demos to work: unplug and plug in again in a different port
_ When I used the unstable version of the Xtion driver I got:
Open failed: Device Protocol: Bad Parameter sent!

_When I was using the stable version of the Kinect driver, it wouldn't even build. I got this error about halfway through the build:
g++ -MD -MP -MT "./Arm-Release/XnActualGeneralProperty.d Arm-Release/XnActualGeneralProperty.o" -c -mtune=arm1176jzf-s -mfpu=vfp -mfloat-abi=hard -O3 -fno-tree-pre -fno-strict-aliasing -ftree-vectorize -ffast-math -funsafe-math-optimizations -fsingle-precision-constant -O2 -DNDEBUG -I/usr/include/ni -I../../../../Include -I../../../../Source -I../../../../Source/XnCommon -DXN_DDK_EXPORTS -fPIC -fvisibility=hidden -o Arm-Release/XnActualGeneralProperty.o ../../../../Source/XnDDK/XnActualGeneralProperty.cpp
In file included from ../../../../Source/XnDDK/XnGeneralProperty.h:28:0,
from ../../../../Source/XnDDK/XnActualGeneralProperty.h:28,
from ../../../../Source/XnDDK/XnActualGeneralProperty.cpp:25:
../../../../Source/XnDDK/XnProperty.h:29:21: fatal error: XnListT.h: No such file or directory
compilation terminated.
make[1]: *** [Arm-Release/XnActualGeneralProperty.o] Error 1
make[1]: Leaving directory `/home/pi/stable/SensorKinect/Platform/Linux/Build/XnDDK'
make: *** [XnDDK] Error 2
make: Leaving directory `/home/pi/stable/SensorKinect/Platform/Linux/Build'
_ See this page for the following notice on the above error:
***** Important notice: *****
You must use this kinect mod version with the unstable OpenNI release......
_ With the unstable version of the Kinect driver, it built and installed, but I was getting the error:
Open failed: USB interface is not supported!
_ So I had to edit
sudo nano /usr/etc/primesense/GlobalDefaultsKinect.ini
_ and uncomment this line, changing its value to 1 instead of 2
_ And then I got lots of these errors:
UpdateData failed: A timeout has occurred when waiting for new data!
_ I tried doing this (without luck)
rmmod -f gspca_kinect

_ Other: save image
dd if=/dev/sdc of=~/2012-09-18-wheezy-raspbian_16GB_OpenNI-Stable+Unstable.img
tar -zcvf 2012-09-18-wheezy-raspbian_16GB_OpenNI-Stable+Unstable.img.tar 2012-09-18-wheezy-raspbian_16GB_OpenNI-Stable+Unstable.img