[Noisebridge-discuss] Kinect hacking/mapping?

Taylor Alexander tlalexander at gmail.com
Sun Apr 10 02:40:53 UTC 2011


I have been messing with this myself for the last few weeks.

I'm trying to build 3D models of my trunk so I can build the best sub box for
my car (isn't the future awesome?). I also need to do a 3D scan of a fist for
a sex toy someone wants me to make. :-)

I tried the RGBDemo v0.5 and it's not bad. It correctly assembles multiple
point clouds on the fly to build out a 3D model... usually. Sometimes it
gets confused and starts matching the point clouds wrong, and then you
really can't do much with the data. It can be useful though, and it runs
well in Windows. It's worth a try.
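
(If you're curious what that "matching" step is doing under the hood, it's
basically pairwise point cloud registration. Here's a toy sketch in Python
using the Open3D library -- not what RGBDemo actually uses internally, just
an illustration of aligning one cloud to another with ICP. The file names
are made up:)

    # Toy illustration of pairwise point cloud registration (ICP).
    # Uses the Open3D library; this is NOT RGBDemo's actual method.
    import numpy as np
    import open3d as o3d

    source = o3d.io.read_point_cloud("scan_000.pcd")  # placeholder names
    target = o3d.io.read_point_cloud("scan_001.pcd")

    # Start from an identity pose. ICP only converges when the clouds
    # already overlap substantially -- when they don't, you get exactly
    # the kind of wrong matching I described above.
    result = o3d.pipelines.registration.registration_icp(
        source, target,
        max_correspondence_distance=0.05,  # 5 cm search radius
        init=np.identity(4),
        estimation_method=o3d.pipelines.registration
            .TransformationEstimationPointToPoint())

    print(result.transformation)  # 4x4 pose mapping source onto target
    source.transform(result.transformation)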

The most promising option looks to be
http://www.ros.org/wiki/openni/Contests/ROS%203D/RGBD-6D-SLAM
After some messing around, I finally got it installed under Ubuntu on my
laptop, but I have not tried scanning with it yet.
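
If you do get it running, the Kinect data shows up as ROS topics. A minimal
Python listener looks something like the sketch below -- the topic name is my
guess, so check `rostopic list` to see what the openni driver actually
publishes on your setup:

    #!/usr/bin/env python
    # Minimal ROS node that listens for Kinect point clouds.
    # The topic name below is an assumption; run `rostopic list`
    # to see what the openni driver really publishes for you.
    import rospy
    from sensor_msgs.msg import PointCloud2

    def callback(msg):
        rospy.loginfo("got cloud: %d x %d points", msg.width, msg.height)

    rospy.init_node("kinect_listener")
    rospy.Subscriber("/camera/depth/points", PointCloud2, callback)
    rospy.spin()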

Once you're done with either of the above programs, you get a point cloud
file. The free and open source program MeshLab can be used to clean up the
point cloud data and turn it into surfaces. From there, standard modeling
software should be able to work with it. I found a link explaining how to do
that in MeshLab, but I'm not at my laptop right now. I'll try to remember to
send that along too, or bug me if you have any trouble.
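
Alternatively, if you'd rather script the cleanup than click through MeshLab,
the same basic steps (outlier removal, normal estimation, Poisson surface
reconstruction) can be done from Python with the Open3D library. A rough
sketch, with placeholder file names:

    # Rough point-cloud-to-mesh pipeline: the same steps you'd do in
    # MeshLab, scripted with the Open3D library. File names are
    # placeholders.
    import open3d as o3d

    pcd = o3d.io.read_point_cloud("trunk_scan.ply")

    # Throw away isolated noise points far from their neighbors.
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

    # Poisson reconstruction needs oriented normals on the cloud.
    pcd.estimate_normals(
        search_param=o3d.geometry.KDTreeSearchParamHybrid(
            radius=0.05, max_nn=30))

    # Turn the cloud into a (roughly) watertight surface.
    mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=8)
    o3d.io.write_triangle_mesh("trunk_mesh.ply", mesh)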
Taylor
On Apr 9, 2011 7:16 PM, "Lamont Lucas" <lamont at cluepon.com> wrote:
> On 4/9/11 7:05 PM, Mitch Altman wrote:
>> This sounds really promising for making 3D scans. Wouldn't it be cool
>> to be able to get a 3D scan of something and then print it out on a
>> MakerBot?
>>
>> I took a look at the kinecthacks.com link -- I couldn't find out there
>> how it works, or why they call it "RGB-Demo". Is it using
>> Red-Green-Blue light somehow? Or does "RGB" in this case stand
>> for something different?
>
> There are at least two cameras and three output modes on there. There's
> a typical RGB output format, just like you'd expect: red, green, blue. But
> there's also an output format where each pixel is represented by a "depth"
> number. I suspect the name RGB-Demo is a play on the RGB-D output
> name. Those output formats seem to come from a set of custom
> on-board hardware: the unit projects a grid of IR dots, and a second
> (IR) camera uses the deformation of those dots to estimate shapes
> and depth.
>
> Ah, from the wiki page:
>
> "The depth sensor consists of aninfrared
> <http://en.wikipedia.org/wiki/Infrared>laser
> <http://en.wikipedia.org/wiki/Laser>projector combined with a
> monochromeCMOS sensor
> <http://en.wikipedia.org/wiki/Active_pixel_sensor>, which captures video
> data in 3D under anyambient light
> <http://en.wikipedia.org/wiki/Available_light>conditions"
>
> and they call the IR dot field "infrared structured light". The company
> that made the onboard sensor has an open driver kit, but the
> libfreenect people have figured out their own from the USB protocol.
>
> Most annoying for me is that they use a weird USB plug that provides
> 12V, and requires either a horrible hack job or at least using the AC
> injector to break it back out to regular 5V USB.
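
(For anyone who wants to poke at the raw depth output Lamont describes, the
libfreenect Python bindings make it pretty painless. A quick sketch -- the
raw-value-to-meters formula is one of the rough approximations posted on the
OpenKinect wiki, so don't trust it for real measurements without calibrating:)

    # Grab one raw depth frame via the libfreenect Python bindings.
    import freenect
    import numpy as np

    # 640x480 array of raw 11-bit depth values from the Kinect.
    depth, _timestamp = freenect.sync_get_depth()
    depth = depth.astype(np.float64)

    # Rough raw-to-meters approximation from the OpenKinect wiki.
    # Raw values of 2047 mean "no reading" and come out as garbage here.
    meters = 1.0 / (depth * -0.0030711016 + 3.3309495161)
    print("center pixel is about %.2f m away" % meters[240, 320])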