At this year’s Build conference, Microsoft surprised everyone by revealing a 4th generation Kinect sensor! I know what you’re thinking: “Wasn’t HoloLens using Kinect v4?” Well, it seems that Alex Kipman either misspoke or was misquoted in earlier interviews, as HoloLens actually uses 3rd generation Kinect sensors. Before you get too excited, though, this “Kinect 4” hasn’t been announced for any Xbox console and isn’t even a proper product yet. All that has been shown is the basic sensor, which they are calling Project Kinect for Azure. As the name suggests, it integrates with Microsoft’s cloud.

However, there is definitely some hope for Xbox One owners, because in the video above you can hear Satya Nadella say that he expects the new sensor to be fully integrated into many different consumer and industrial products. The most obvious consumer product to integrate this new sensor into would of course be a new Kinect device for Xbox One! The video also briefly shows the first footage from the sensor: about one second of what appears to be the depth image, followed by a few seconds of a point cloud of the same scene. Unfortunately, the livestream quality wasn’t the greatest, so the footage isn’t very clear.

Not much more is known about it yet. Possibly the biggest detail revealed after the above announcement is that the sensor’s resolution is 1024 x 1024. That may not sound very high, and many probably think that Kinect 2 is already 1080p, but that is just the resolution of its RGB camera; the IR depth sensor is only 512 x 512. Do the maths and 1024 x 1024 comes to 1,048,576 pixels versus 262,144 for 512 x 512, so this new Kinect sensor has 4 times the pixel count of Kinect 2!

You can read the rest of the information revealed so far in the following post, which Alex Kipman published on LinkedIn:


Introducing Project Kinect for Azure

Published on May 7, 2018

Alex Kipman
Technical Fellow – AI Perception and Mixed Reality

Hello everyone!

Microsoft Build is upon us once again. It’s my favorite time of year because it’s so exciting to introduce our developer community to the newest tools that will empower them to accelerate the world’s digital transformation and create the future.

During Satya Nadella’s Build keynote, he introduced the world to one such tool that may sound a little familiar: Project Kinect for Azure. I wanted to take a little more time to expand upon this project, what it means and the role it will play in enabling developers to apply AI over the real world in profound new ways.

What Satya described is a key advance in the evolution of the intelligent edge: the ability for devices to perceive the people, places and things around them. One of the things that makes Project Kinect for Azure unique and compelling is the combination of our category-defining depth sensor with our Azure AI services that, together, will enable developers to make the intelligent edge more perceptive than ever before.

The technical breakthroughs in our time-of-flight (ToF) depth-sensor mean that intelligent edge devices can ascertain greater precision with less power consumption. There are additional benefits to the combination of depth-sensor data and AI. Doing deep learning on depth images can lead to dramatically smaller networks needed for the same quality outcome. This results in much cheaper-to-deploy AI algorithms and a more intelligent edge.
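One intuition behind that claim: a single-channel depth image encodes the scene’s geometry explicitly, so a network consuming it can often be narrower than an RGB equivalent that has to infer geometry from colour and shading. The following is a minimal, purely illustrative PyTorch sketch of that idea; the layer widths and the 10-class output are hypothetical choices, not anything Microsoft has published:

```python
# Illustrative sketch only (not Microsoft's code): compare the parameter
# counts of a tiny CNN classifier fed a 1-channel depth image versus a
# 3-channel RGB image. Assumes PyTorch is installed; all sizes are made up.
import torch
import torch.nn as nn

def tiny_cnn(in_channels: int, width: int) -> nn.Module:
    """A toy classifier; `width` scales the number of feature maps."""
    return nn.Sequential(
        nn.Conv2d(in_channels, width, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.Conv2d(width, width * 2, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),   # collapse spatial dimensions
        nn.Flatten(),
        nn.Linear(width * 2, 10),  # e.g. 10 hypothetical gesture classes
    )

# The depth network here uses half the feature maps of the RGB one, on the
# (hedged) assumption that explicit geometry needs less learned capacity.
rgb_net = tiny_cnn(in_channels=3, width=32)
depth_net = tiny_cnn(in_channels=1, width=16)

params = lambda m: sum(p.numel() for p in m.parameters())
print(f"RGB net parameters:   {params(rgb_net):,}")
print(f"Depth net parameters: {params(depth_net):,}")

# Feeding a single 1024 x 1024 depth frame (batch of 1, one channel):
logits = depth_net(torch.rand(1, 1, 1024, 1024))
print(logits.shape)  # torch.Size([1, 10])
```

Fewer parameters means less compute and memory per inference, which is what makes an algorithm cheaper to deploy on an edge device.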

Earlier this year, Cyrus Bamji, an architect on our team, presented a well-received paper to the International Solid-State Circuits Conference (ISSCC) on our latest depth sensor. This is the sensor that Satya described onstage at the Build conference and is also the sensor that will give the next version of HoloLens new capabilities. The technical characteristics that make this new depth sensor best-in-class include:

  • Highest number of pixels (megapixel resolution 1024×1024)
  • Highest Figure of Merit (highest modulation frequency and modulation contrast, resulting in low power consumption with overall system power of 225-950 mW)
  • Automatic per pixel gain selection enabling large dynamic range allowing near and far objects to be captured cleanly
  • Global shutter allowing for improved performance in sunlight
  • Multiphase depth calculation method enables robust accuracy even in the presence of chip, laser and power supply variation (see the sketch after this list for how phase becomes depth)
  • Low peak current operation even at high frequency lowers the cost of modules
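
For readers wondering how modulation frequency and phase turn into depth: a continuous-wave ToF sensor emits IR light modulated at some frequency, measures the phase shift of the reflected signal, and converts that phase to distance. Here is a minimal sketch of the textbook equations (my own illustration, not Microsoft’s implementation; the 200 MHz figure and the sample values are hypothetical):

```python
# Textbook continuous-wave time-of-flight depth recovery (illustration only,
# not Microsoft's implementation). Each pixel samples the returning modulated
# IR signal at four phase offsets (0, 90, 180, 270 degrees) and recovers
# distance from the measured phase shift.
import math

C = 299_792_458.0  # speed of light in m/s

def tof_depth(a0: float, a90: float, a180: float, a270: float,
              f_mod: float) -> float:
    """Distance in metres from four correlation samples taken at
    modulation frequency f_mod (in Hz)."""
    phase = math.atan2(a90 - a270, a0 - a180) % (2 * math.pi)
    return C * phase / (4 * math.pi * f_mod)

def unambiguous_range(f_mod: float) -> float:
    """Beyond this distance the phase wraps around and depth aliases."""
    return C / (2 * f_mod)

# Hypothetical example: 200 MHz modulation gives roughly a 0.75 m
# unambiguous range, which is why measurements at several frequencies
# ("multiphase") are combined to measure further without aliasing.
f = 200e6
print(f"Unambiguous range at {f / 1e6:.0f} MHz: {unambiguous_range(f):.3f} m")
print(f"Depth for samples (1.0, 0.8, 0.2, 0.4): "
      f"{tof_depth(1.0, 0.8, 0.2, 0.4, f):.3f} m")
```

The trade-off in that last comment is the point of the Figure of Merit bullet: a higher modulation frequency improves depth precision but shortens the unambiguous range, and combining several frequencies recovers long range while keeping the precision.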


The Kinect brand has a storied history, from gaming peripheral and developer technology to the depth-sensing magic inside Microsoft HoloLens, the world’s first fully self-contained holographic computer. HoloLens today features depth-camera technology evolved from Kinect hardware, which, in conjunction with other cutting-edge technology, is already transforming businesses as we embrace the era of mixed reality.

Our vision when we created the original Kinect for Xbox 360 was to produce a device capable of recognizing and understanding people so that computers could learn to operate on human terms. Creative developers realized that the technology in Kinect (including the depth-sensing camera) could be used for things far beyond gaming. In the second generation of Kinect we improved the gaming peripheral but also provided developers with a version that could connect to a PC with Kinect for Windows. The outcome was great innovation and creativity from our developer community. We discontinued production of second generation Kinects last year; however, we worked with Intel to ensure Windows developers can continue building PC solutions with Intel’s RealSense depth cameras.

With HoloLens, we saw incredible results when we took some of the magic of Kinect and applied it in a mixed reality context. The current version of HoloLens uses the third generation of Kinect depth-sensing technology to enable it to place holograms in the real world. With HoloLens we have a device that understands people and environments, takes input in the form of gaze, gestures and voice, and provides output in the form of 3D holograms and immersive spatial sound. With Project Kinect for Azure, the fourth generation of Kinect now integrates with our intelligent cloud and intelligent edge platform, extending that same innovation opportunity to our developer community.

Project Kinect for Azure unlocks countless new opportunities to take advantage of Machine Learning, Cognitive Services and IoT Edge. We envision that Project Kinect for Azure will result in new AI solutions from Microsoft and our ecosystem of partners, built on the growing range of sensors integrating with Azure AI services. I cannot wait to see how developers leverage it to create practical, intelligent and fun solutions that were not previously possible across a raft of industries and scenarios.

I’m thrilled to continue the Kinect journey with all of you through Project Kinect for Azure, and we look forward to sharing much more with you over the coming months. As always, feel free to reach out to me on Twitter in the meantime.

Enjoy the rest of Microsoft Build 2018!

Alex


Even though there was no mention of Xbox One or Xbox One X in regard to Project Kinect for Azure, the fact that they have announced this new sensor does at least give us hope that a new Kinect for consoles may be announced at E3, especially since Microsoft have said that this E3 will be their biggest show ever, which suggests they have something big up their sleeve. So let’s keep our fingers crossed until then. And don’t forget, the new Kinect could of course be part of Xbox VR, which would be even better! We only have to wait just over a month to find out, as Microsoft’s E3 conference is on June 10th.

Are you excited by this news? Do you think Kinect will be making a comeback on Xbox consoles this year, possibly as part of an Xbox VR system? Let us know in the comments below, or create a discussion in our All Other Kinect Topics Forum.


Source: Microsoft's YouTube Channel, LinkedIn. Via: Alex Kipman's Twitter.