Data fusion using two (or more) Kinects
06-28-2012, 04:37 PM
Data fusion using two (or more) Kinects
Hi community,
I am writing about something that I did not manage to find anywhere on the internet: fusing, in real time, the results obtained from more than one Kinect. For example, consider the skeleton information for the same person obtained by two individual Kinect devices tracking that person. I want to combine both skeletons to remove occlusions and other artifacts, which would allow 360-degree tracking. This means the depth information obtained from each Kinect has to be converted into a global coordinate system that is independent of the frame of reference of either device.

Has anyone thought about or worked on this, or does it already exist?

Regards
Pankaj
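The core of the idea is a rigid transform per Kinect into a shared world frame, followed by a per-joint merge rule. Below is a minimal sketch in Python/NumPy, assuming the extrinsics (rotation R and translation t) of each Kinect relative to the world frame have already been found by an offline calibration step (for example a checkerboard visible to both sensors). The `EXTRINSICS` values, the `to_world`/`fuse_joint` helpers and the confidence weights are illustrative placeholders, not part of any Kinect SDK.

```python
import numpy as np

# Rigid transform (rotation R, translation t) of each Kinect into a shared
# world frame. These extrinsics are assumed to come from an offline
# calibration step; the values below are placeholders for illustration.
EXTRINSICS = {
    "kinect_a": (np.eye(3), np.zeros(3)),
    "kinect_b": (np.array([[-1.0, 0.0,  0.0],
                           [ 0.0, 1.0,  0.0],
                           [ 0.0, 0.0, -1.0]]),   # facing the opposite way
                 np.array([0.0, 0.0, 4.0])),      # ~4 m in front of kinect_a
}

def to_world(device, joint_xyz):
    """Map a joint position from a Kinect's camera frame into the world frame."""
    R, t = EXTRINSICS[device]
    return R @ np.asarray(joint_xyz, dtype=float) + t

def fuse_joint(observations):
    """Fuse per-device observations of the same joint.

    observations: list of (device, xyz, confidence), where confidence is the
    tracking state mapped to a weight (e.g. tracked=1.0, inferred=0.3,
    not tracked=0.0). Joints occluded for one sensor simply contribute
    zero weight; the rest are averaged in the world frame.
    """
    pts, weights = [], []
    for device, xyz, conf in observations:
        if conf > 0.0:
            pts.append(to_world(device, xyz))
            weights.append(conf)
    if not pts:
        return None  # joint not seen by any sensor this frame
    return np.average(np.vstack(pts), axis=0, weights=weights)

# Example: the right hand is occluded for kinect_a (inferred) but tracked by kinect_b.
fused = fuse_joint([
    ("kinect_a", (0.30, 0.10, 2.0), 0.3),
    ("kinect_b", (-0.28, 0.12, 2.1), 1.0),
])
print(fused)
```

In practice the hard parts are time-synchronising the two streams and getting good extrinsics; once the joints (or point clouds) live in one world frame, the fusion itself can be as simple as a confidence-weighted average, or something more robust such as taking the sensor with the better view of each joint.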
07-04-2012, 10:47 PM
RE: Data fusion using two (or more) Kinects
Actually, I have seen videos of someone setting up multiple Kinects to get a full 360-degree view of an object.
Here is an example from a two-second Google search I just did. In other videos, a person did use two Kinects to capture both sides.