

Data fusion using two (or more) Kinect


06-28-2012, 04:37 PM
Post: #1
Data fusion using two (or more) Kinect
Hi community,
I am writing about something that I did not manage to find anywhere on the internet.

I am trying to achieve fusion of the results obtained by more than one Kinect in real time. For example, consider the skeleton information (of the same person) obtained by two individual Kinect devices tracking that person. I want to combine both sets of skeleton information to remove occlusions and other artifacts, which would allow us to achieve 360-degree tracking.
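
To make it concrete, here is a rough sketch (Python/NumPy, not actual Kinect SDK code; the function name and confidence convention are just placeholders) of the kind of per-joint fusion I have in mind, assuming both skeletons have already been expressed in one common coordinate frame:

```python
import numpy as np

def fuse_skeletons(joints_a, conf_a, joints_b, conf_b):
    """Confidence-weighted, per-joint fusion of two skeletons that are
    already expressed in the same (global) coordinate frame.

    joints_*: Nx3 arrays of joint positions (metres).
    conf_*:   length-N weights, e.g. 1.0 = tracked, 0.5 = inferred, 0.0 = not tracked.
    """
    w_a = conf_a[:, None]
    w_b = conf_b[:, None]
    total = w_a + w_b
    total[total == 0] = 1.0   # joint missing in both views: avoid division by zero
    return (w_a * joints_a + w_b * joints_b) / total
```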

This means the depth information obtained from each Kinect device has to be converted to a global coordinate system that is independent of the frame of reference of either Kinect device.
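
This is roughly how I imagine that conversion into the global frame, assuming each Kinect's extrinsic pose (rotation R, translation t) relative to that frame has been found beforehand, e.g. by calibrating both sensors against a common target (the numbers below are placeholders, not a real calibration):

```python
import numpy as np

def to_global(points_cam, R, t):
    """Map Nx3 points from one Kinect's camera frame into the shared global frame."""
    return points_cam @ R.T + t

# Placeholder extrinsics for two sensors facing each other:
R1, t1 = np.eye(3), np.zeros(3)            # Kinect 1 defines the global frame
R2 = np.array([[-1.0, 0.0,  0.0],
               [ 0.0, 1.0,  0.0],
               [ 0.0, 0.0, -1.0]])          # Kinect 2 rotated 180 degrees about Y...
t2 = np.array([0.0, 0.0, 4.0])              # ...and placed 4 m in front of Kinect 1

skel1_global = to_global(np.random.rand(20, 3), R1, t1)
skel2_global = to_global(np.random.rand(20, 3), R2, t2)
```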

Has anyone thought about or worked on this, or does it already exist?

Regards
Pankaj
07-04-2012, 10:47 PM
Post: #2
RE: Data fusion using two (or more) Kinect
Actually, I have seen videos of a guy setting up multiple Kinects to get a full all-around 360-degree view of an object.

Here is an example from a two-second search on Google I just did. In other videos a person used two Kinects to get both sides.
