[Noisebridge-discuss] Kinect hacking/mapping?

Gian Pablo Villamil gian.pablo at gmail.com
Mon Apr 11 01:12:59 UTC 2011


They did this in "Demolition Man" (great movie!): meetings where all
the participants attend via pivoting telepresence screens.

Nicholas Negroponte did an interesting project years and years ago
where they molded glass CRTs into the shape of people's faces and
showed video of them on those screens. The screens were also on
rotating mounts to try to capture non-verbal cues.

Interesting field.

On Sun, Apr 10, 2011 at 11:04 AM, Geoff Shively <gshively at gmail.com> wrote:
> Yes!!!!!!!!!!!! This is why I love the payphone project: video conferencing to
> one or many hackerspaces.
>
> Imagine a dummy with a flat-screen head that swivels to maintain eye
> contact! Or a stand-up version that you could stand and talk to. Hand
> gesture mimicking would be great to add in the future. Ideally, in this
> design, the display, tracking, and swivel unit could just bolt
> onto the neck of any mannequin, or a half-height (no legs) version for
> conference room chairs.
>
>
>
> On Sunday, April 10, 2011, Rikke Rasmussen <rikke.c.rasmussen at gmail.com> wrote:
>> More ideas for interesting things one could do with a Kinect if only one could program:
>> - "real" eye contact during video conferencing
>> - interactive & collaborative white-boards (imagine one in every hackerspace on the planet!)
>> - virtual collaborative work spaces
>> ...if only one could program =/
>> /Rikke
>>
>> On Sun, Apr 10, 2011 at 2:40 AM, Taylor Alexander <tlalexander at gmail.com> wrote:
>> I have been messing with this myself for the last few weeks.
>> I'm trying to build a 3D model of my trunk to design the best sub box for my car (isn't the future awesome?). I also need to do a 3D scan of a fist for a sex toy someone wants me to make. :-)
>> I tried the RGBDemo v0.5 and it's not bad. It correctly assembles multiple point clouds on the fly to build out a 3D model... usually. Sometimes it gets confused and starts matching the point clouds wrong, and then you really can't do much with the data. It can be useful though, and it runs well in Windows. It's worth a try.
>>
>>
>> Most promising looks to be http://www.ros.org/wiki/openni/Contests/ROS%203D/RGBD-6D-SLAM
>> After some messing around, I finally got it installed in Ubuntu on my laptop, but I have not tried scanning with it yet.
>> Once you're done with either of the above programs, you get a point cloud file. The free and open-source program MeshLab can be used to clean up the point cloud data and turn it into surfaces. From there, standard modeling software should be able to work with it. I found a link explaining how to do that in MeshLab, but I'm not at my laptop right now. I'll try to remember to send that along too, or bug me if you have any trouble.
>> Taylor
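
As a rough illustration of the workflow Taylor describes (grab a depth frame, turn it into a point cloud file, then clean it up in MeshLab), here is a minimal Python sketch. It assumes the OpenKinect "freenect" Python bindings and numpy are installed, and it uses approximate depth-camera intrinsics plus an empirical raw-to-meters formula that circulated on the OpenKinect wiki; none of these constants or module names come from the thread itself.

# Sketch: grab one Kinect depth frame and dump it as an ASCII PLY point
# cloud that MeshLab can open. Assumes the OpenKinect Python bindings
# ("freenect") and numpy are installed; the intrinsics and the
# raw->meters formula are rough community approximations.
import numpy as np
import freenect  # OpenKinect Python wrapper (assumption: installed)

# Approximate intrinsics for the 640x480 Kinect depth camera.
FX = FY = 594.0          # focal length in pixels (approximate)
CX, CY = 320.0, 240.0    # principal point (approximate)

def raw_to_meters(raw):
    # Empirical 11-bit value -> meters fit from the OpenKinect wiki.
    return 0.1236 * np.tan(raw / 2842.5 + 1.1863)

def depth_to_points(depth):
    # Back-project a 480x640 raw depth image into an Nx3 point array.
    v, u = np.indices(depth.shape)            # pixel coordinates
    z = raw_to_meters(depth.astype(np.float32))
    valid = depth < 2047                      # 2047 means "no reading"
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.column_stack((x[valid], y[valid], z[valid]))

def write_ply(path, points):
    # Write an ASCII PLY file that MeshLab can clean up and mesh.
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write("element vertex %d\n" % len(points))
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("end_header\n")
        for x, y, z in points:
            f.write("%f %f %f\n" % (x, y, z))

if __name__ == "__main__":
    depth, _ = freenect.sync_get_depth()      # one 11-bit depth frame
    write_ply("scan.ply", depth_to_points(depth))

The resulting scan.ply opens directly in MeshLab, where cleanup filters and surface reconstruction take it the rest of the way, as Taylor mentions.
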
>> On Apr 9, 2011 7:16 PM, "Lamont Lucas" <lamont at cluepon.com> wrote:
>>> On 4/9/11 7:05 PM, Mitch Altman wrote:
>>>> This sounds really promising for making 3d scans.  Wouldn't it be cool
>>>> to be able to get a 3d scan of something and then print it out in a
>>>> MakerBot?
>>>>
>>>> I took a look at the kinecthacks.com link -- I couldn't find out there
>>>> how it works, or why they call it "RGB-Demo".  Is it using
>>>> Red-Green-Blue light somehow?  Or, does "RGB" in this case stand
>>>> for something different?
>>>
>>> There are at least two cameras and three output modes on there.  There's a
>>> typical RGB output format, just like you'd expect (red, green, blue), but
>>> there's also an output format where each pixel is represented by a "depth"
>>> number.  I suspect the name RGB-Demo is a play on the RGB-D output
>>> name.  Those output formats come from a set of custom on-board
>>> hardware: the depth output is produced by a projector putting out a
>>> grid of IR dots, with a second (IR) camera using the deformation of
>>> those dots to estimate shapes and depth.
>>>
>>> Ah, from the wiki page:
>>>
>>> "The depth sensor consists of aninfrared
>>> <http://en.wikipedia.org/wiki/Infrared>laser
>>> <http://en.wikipedia.org/wiki/Laser>projector combined with a
>>> monochromeCMOS sensor
>>> <http://en.wikipedia.org/wiki/Active_pixel_sensor>, which captures video
>>> data in 3D under anyambient light
>>> <http://en.wikipedia.org/wiki/Available_light>conditions"
>>>
>>> and they call the IR dot field "infrared structured light".  The company
>>> that made the onboard sensor has an open driver kit, but the
>>> libfreenect people have figured out their own driver from the USB protocol.
>>>
>>> Most annoying for me is that they use a weird USB plug that provides
>>> 12V, and requires either a horrible hack job or at least using the AC
>>> injector to break it back out to regular 5V USB.
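
To make the two output formats Lamont describes concrete: through the OpenKinect "freenect" Python bindings (an assumption here; the thread itself only names the library), each frame comes back as plain arrays, one ordinary 8-bit RGB image and one image of 11-bit depth codes. A minimal sketch:

# Minimal sketch of the two Kinect output streams described above:
# a regular RGB image plus a per-pixel depth image. Assumes the
# OpenKinect Python bindings ("freenect") are installed.
import freenect

rgb, _ = freenect.sync_get_video()    # 480x640x3 uint8: red, green, blue
depth, _ = freenect.sync_get_depth()  # 480x640 uint16: 11-bit depth codes

print("RGB frame:  ", rgb.shape, rgb.dtype)
print("Depth frame:", depth.shape, depth.dtype)
print("Depth code at image center:", depth[240, 320])  # 2047 = no reading
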
>>
>>
> _______________________________________________
> Noisebridge-discuss mailing list
> Noisebridge-discuss at lists.noisebridge.net
> https://www.noisebridge.net/mailman/listinfo/noisebridge-discuss
>


