Kinect's Accuracy Analysed!


06-19-2012, 06:38 PM (This post was last modified: 06-19-2012 08:51 PM by Corellianrogue.)
This is really interesting: Dr. Kourosh Khoshelham, an assistant professor at the Faculty of Geo-Information Science and Earth Observation of the University of Twente in Enschede, The Netherlands, has analysed Kinect to see how accurate it is. Possibly the most surprising thing he found is that Kinect's IR sensor is actually HD, with a native resolution of 1280x1024! :D However, Kinect's processor can only process at 640x480, and on top of that there's the Xbox 360's USB 2.0 bandwidth limitation that we already knew about.

Some of the other interesting results include:

The maximum distance Kinect can see is 5 metres (16.4ft).

Kinect's random error ranges from a few millimetres at 0.5m (1.6ft) to about 4cm at 5m (16.4ft).

Kinect's depth accuracy ranges from 2mm at a distance of 1m (3.3ft) from Kinect, to 2.5cm at 3m (9.8ft), to 7cm at 5m (16.4ft).
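The paper models the random depth error as growing roughly quadratically with distance. Here's a minimal sketch of that relationship; the coefficient k below is my own one-point fit to the 4cm-at-5m figure quoted above, not the paper's calibrated value, so the short-range numbers are only approximate:

```python
# Sketch: quadratic growth of Kinect's random depth error with distance.
# The coefficient k is fitted only to the "4 cm at 5 m" figure from the
# paper; it is illustrative, not the paper's calibrated constant.

def random_error_m(z_m, k=0.04 / 5.0**2):
    """Approximate random depth error (metres) at distance z_m (metres)."""
    return k * z_m**2

for z in (1.0, 3.0, 5.0):
    print(f"{z:.0f} m -> ~{random_error_m(z) * 100:.2f} cm random error")
```

Doubling the distance quadruples the error, which is why the paper recommends working within 1-3m of the sensor.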

Kinect can be affected by strong light:

Quote:Errors caused by the measurement setup are mainly related to the lighting condition and the imaging geometry. The lighting condition influences the correlation and measurement of disparities. In strong light the laser speckles appear in low contrast in the infrared image, which can lead to outliers or gaps in the resulting point cloud.

Dr. Khoshelham also compared Kinect against a high-end laser scanner, the FARO LS 880, and it did very well: 84% of the points in Kinect's point cloud were within 3cm of the corresponding points in the FARO LS 880's point cloud.
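To give an idea of what that comparison means in practice, here's an illustrative sketch (not the paper's actual procedure): for each point in a "Kinect" cloud, find the nearest point in a reference cloud and report the fraction lying within 3cm. The clouds here are small synthetic arrays, purely for demonstration:

```python
import numpy as np

# Illustrative sketch (NOT the paper's comparison method): measure, for
# each point in a noisy "Kinect" cloud, the distance to its nearest
# neighbour in a reference "laser scan" cloud, then report the fraction
# of points closer than 3 cm.

rng = np.random.default_rng(0)
reference = rng.uniform(0.0, 1.0, size=(200, 3))                 # stand-in laser-scan cloud
kinect = reference + rng.normal(0.0, 0.01, size=reference.shape) # noisy copy of it

# Brute-force nearest-neighbour distances (fine for small clouds).
diffs = kinect[:, None, :] - reference[None, :, :]
nearest = np.sqrt((diffs**2).sum(axis=2)).min(axis=1)

within_3cm = (nearest < 0.03).mean()
print(f"{within_3cm * 100:.1f}% of points within 3 cm of the reference cloud")
```

For real clouds with hundreds of thousands of points you'd use a spatial index (a k-d tree) instead of the brute-force distance matrix, but the metric is the same.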

Conclusions:

Quote:Conclusions

The paper presented a theoretical and experimental analysis of the geometric quality of depth data acquired by the Kinect sensor. The geometric quality measures represent the depth accuracy and resolution for individual points. Indoor mapping applications are often based on the extraction of objects instead of an irregular set of points. In order to describe the quality of extracted objects, some basic error propagation would be needed. While fitting geometric object models to the data can reduce the influence of random errors and low depth resolution, the effect of systematic errors can only be eliminated through a proper calibration procedure.
From the results of calibration and error analysis the following main conclusions can be drawn:

- To eliminate distortions in the point cloud and misalignments between the colour and depth data an accurate stereo calibration of the IR camera and the RGB camera is necessary;
- The random error of depth measurements increases quadratically with increasing distance from the sensor and reaches 4 cm at the maximum range of 5 meters;
- The depth resolution also decreases quadratically with increasing distance from the sensor. The point spacing in the depth direction (along the optical axis of the sensor) is as large as 7 cm at the maximum range of 5 meters.

In general, for mapping applications the data should be acquired within 1–3 m distance to the sensor. At larger distances, the quality of the data is degraded by the noise and low resolution of the depth measurements.
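The quadratic point-spacing behaviour in those conclusions falls straight out of the triangulation geometry. A rough sketch below; note that the focal length, baseline and 1/8-pixel disparity step are commonly cited Kinect values, not figures from this post, so treat them as assumptions:

```python
# Rough sketch of why depth resolution degrades quadratically with range.
# Kinect computes depth by triangulation: Z = f * b / d (d = disparity),
# so a one-step change in disparity shifts Z by roughly
#   dZ ~ Z**2 / (f * b) * delta_d,  i.e. quadratically in Z.
# F_PX, B_M and DELTA_D are commonly cited values, not from this post.

F_PX = 580.0       # IR camera focal length in pixels (assumed)
B_M = 0.075        # projector-to-camera baseline in metres (assumed)
DELTA_D = 1.0 / 8  # disparity quantisation step in pixels (assumed)

def depth_step_m(z_m):
    """Approximate spacing between adjacent depth levels at range z_m."""
    return z_m**2 / (F_PX * B_M) * DELTA_D

for z in (1.0, 3.0, 5.0):
    print(f"{z:.0f} m -> ~{depth_step_m(z) * 100:.1f} cm between depth levels")
```

With those assumed parameters the spacing at 5m comes out at roughly 7cm, which matches the figure in the conclusions above.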


Kinect was calibrated for this analysis, but I don't think the data was filtered; it appears to be just the raw data from Kinect, so once developers apply filtering Kinect should be even more accurate.

You can download the PDF file with the full analysis here:

http://www.mdpi.com/1424-8220/12/2/1437