More ideas for interesting things one could do with a Kinect, if only one could program:

- "real" eye contact during video conferencing
- interactive & collaborative whiteboards (imagine one in every hackerspace on the planet!)
- virtual collaborative workspaces

...if only one could program =/

/Rikke


On Sun, Apr 10, 2011 at 2:40 AM, Taylor Alexander <tlalexander@gmail.com> wrote:
I have been messing with this myself for the last few weeks.

I'm trying to build a 3D model of my trunk so I can design the best sub box for my car (isn't the future awesome?). I also need to do a 3D scan of a fist for a sex toy someone wants me to make. :-)
I tried the RGBDemo v0.5 release and it's not bad. It correctly assembles multiple point clouds on the fly to build out a 3D model... usually. Sometimes it gets confused and starts matching the point clouds wrong, and then you really can't do much with the data. It can be useful though, and it runs well on Windows. It's worth a try.
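If you're curious what "assembles multiple point clouds" means under the hood, here's a rough sketch of the pairwise-alignment idea using the Open3D Python library -- just an illustration, not what RGBDemo actually runs, and the filenames are made up:

import open3d as o3d
import numpy as np

# Two successive scans exported as point cloud files (hypothetical names).
target = o3d.io.read_point_cloud("scan_000.ply")
source = o3d.io.read_point_cloud("scan_001.ply")

# Downsampling keeps ICP fast and a little more robust.
target_ds = target.voxel_down_sample(voxel_size=0.01)
source_ds = source.voxel_down_sample(voxel_size=0.01)

# ICP only refines an initial guess (identity here).  If the guess is far off
# -- fast camera motion, not enough overlap -- it can lock onto the wrong pose,
# which is exactly the "matching the point clouds wrong" failure mode.
result = o3d.pipelines.registration.registration_icp(
    source_ds, target_ds, 0.05, np.eye(4),
    o3d.pipelines.registration.TransformationEstimationPointToPoint())

# Apply the estimated pose to the new scan and merge the two clouds.
source.transform(result.transformation)
merged = target + source
o3d.io.write_point_cloud("merged.ply", merged)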
The most promising option looks to be http://www.ros.org/wiki/openni/Contests/ROS%203D/RGBD-6D-SLAM
After some messing around, I finally got it installed on Ubuntu on my laptop, but I haven't tried scanning with it yet.
Once you're done with either of the above programs, you get a point cloud file. The free and open-source program MeshLab can be used to clean up the point cloud data and turn it into surfaces. From there, standard modeling software should be able to work with it. I found a link explaining how to do that in MeshLab, but I'm not at my laptop right now. I'll try to remember to send that along too, or bug me if you have any trouble.
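If you'd rather script that step than click through MeshLab, here's roughly the same cleanup-and-surface pipeline sketched with the Open3D Python library (an alternative tool, not what MeshLab does internally; filenames and parameter values are placeholders):

import open3d as o3d

pcd = o3d.io.read_point_cloud("scan_merged.ply")

# Drop stray points far from their neighbours (MeshLab's outlier removal step).
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Surface reconstruction needs per-point normals.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))

# Poisson reconstruction turns the oriented points into a closed mesh.
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
mesh.compute_triangle_normals()

# STL is what most CAD / MakerBot toolchains want.
o3d.io.write_triangle_mesh("scan_mesh.stl", mesh)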
Taylor
<div class="gmail_quote">On Apr 9, 2011 7:16 PM, "Lamont Lucas" <<a href="mailto:lamont@cluepon.com" target="_blank">lamont@cluepon.com</a>> wrote:<br type="attribution">> On 4/9/11 7:05 PM, Mitch Altman wrote:<br>
>> This sounds really promising for making 3D scans. Wouldn't it be cool
>> to be able to get a 3D scan of something and then print it out on a
>> MakerBot?
>>
>> I took a look at the kinecthacks.com link -- I couldn't find out there
>> how it works, or why they call it "RGB-Demo". Is it using
>> Red-Green-Blue light somehow? Or does "RGB" in this case stand
>> for something different?
>
> There are at least two cameras and three output modes on there. There's a typical
> RGB output format, just like you'd expect (red, green, blue), but there's
> also an output format where each pixel is represented by a "depth"
> number. I suspect the name RGB-Demo is a play on the RGB-D output
> name. Those output formats seem to come from a set of custom
> on-board hardware: at least one of them is produced by the unit
> putting out a grid of IR dots, with the second (IR) camera using the
> deformation of those dots to estimate shapes and depth.
>
> Ah, from the wiki page:
>
> "The depth sensor consists of an infrared laser projector combined with
> a monochrome CMOS sensor, which captures video data in 3D under any
> ambient light conditions"
>
> and they call the IR dot field "infrared structured light". The company
> that made the onboard sensor has an open driver kit, but the
> libfreenect people have figured their own out from the USB protocol.
>
> Most annoying for me is that they use a weird USB plug that provides
> 12V, and requires either a horrible hack job or at least using the AC
> injector to break it back out to regular 5V USB.
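For anyone wondering what that per-pixel "depth number" buys you: once the depth image is in metres, the camera's focal length and optical centre let you back-project every pixel into a 3D point. A minimal numpy sketch, using the nominal intrinsics commonly quoted for the Kinect's 640x480 depth camera (treat them as approximate):

import numpy as np

fx = fy = 525.0          # focal length in pixels (nominal value)
cx, cy = 319.5, 239.5    # optical centre of the 640x480 depth image (nominal)

def depth_to_points(depth_m):
    # depth_m: 480x640 array of depths in metres, 0 where the sensor saw nothing.
    v, u = np.indices(depth_m.shape)   # pixel row / column grids
    z = depth_m
    x = (u - cx) * z / fx              # pinhole camera model
    y = (v - cy) * z / fy
    pts = np.dstack((x, y, z)).reshape(-1, 3)
    return pts[pts[:, 2] > 0]          # drop pixels with no depth reading

# Each frame becomes one such point cloud; tools like RGBDemo then stitch
# successive clouds together as the camera moves.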
_______________________________________________
Noisebridge-discuss mailing list
Noisebridge-discuss@lists.noisebridge.net
https://www.noisebridge.net/mailman/listinfo/noisebridge-discuss