[Noisebridge-discuss] ML Wednesday: code up a neuron!

Zhao Lu zhao.lu.us at gmail.com
Fri Mar 13 20:42:44 UTC 2009


Did Christoph take some pictures?  I seem to recall some flash.

On Fri, Mar 13, 2009 at 12:55 PM, Praveen Sinha <dmhomee at gmail.com> wrote:

> Hey, and also very public props to Josh for facilitating such an
> amazing learning session.  Too bad we don't have pictures of that
> night on our wiki, because it was a great, great example of
> technology-enabled learning!  Good job; I'm excited about our
> ever-evolving energy in our ML group and at Noisebridge!
>
> On Fri, Mar 13, 2009 at 12:19 AM, Josh Myer <josh at joshisanerd.com> wrote:
> > For those of you who weren't there: you missed a great time!  We had
> > the whole table downstairs full of folks with laptops, everyone coding
> > up their own neuron.  By the end of the night, all but one person had
> > succeeded (but, hey, he was trying to learn LISP as part of the
> > exercise).  Congratulations to everyone on their good work!
> >
> > You can see the fruits of our labors at
> >
> > https://www.noisebridge.net/wiki/Machine_Learning_Meetup_Notes:_2009-03-11#Machine_Learning_Meetup_Notes:_2009-03-11
> >
> > In the end, we have linear perceptrons implemented in the following
> > languages:
> >  * python (3 times over, even)
> >  * ruby
> >  * Matlab
> >  * C (oh man)
> >  * Mathematica with a functional bent (oh man oh man)
> >
> >
> > Everyone seemed really stoked by the end of the night, so I think
> > we'll continue with this format more often.  It was great fun, and it
> > provided lots of opportunities to introduce ML ideas and techniques*.
> >
> > As a follow-up to the exercise, there are some questions on the wiki
> > for people who made it (they're copied below, as well, if you're
> > feeling lazy).  If you want to continue playing with perceptrons, each
> > of these paragraphs should take about an hour of playing with your new
> > toy to answer really well, and maybe a bit of googling to fully
> > understand.  I think I'll go further into the perceptron next week,
> > and flesh out these paragraphs as the topics to cover.  What do people
> > think?
> >
> > This week was a blast.  Thanks to everyone who came out, and good work
> > building your perceptrons!
> > --
> > Josh Myer   650.248.3796
> >  josh at joshisanerd.com
> >
> >
> > * Admittedly, I didn't capitalize on this.  I spent most of my time
> >  running around, trying desperately to remember what language I was
> >  helping people with at any given second.  It was awesome fun =)
> >
> >
> > Now, if you look at your weights and think about what they mean, you'll
> > notice something odd. At the end, the weights aren't equal! We trained a
> > NAND gate, so every input should have an equal opportunity to change the
> > output, right? Given that last leading question, what would you expect
> > the ideal weights to be? Do the learned weights match that expectation?
> > Why? (Hint: What does "overfitting" mean, and how is it relevant?)
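> >
> > (For reference, a minimal sketch of that training loop in ruby -- the
> > learning rate, epoch count, and zero initial weights here are arbitrary
> > choices, not anything from Wednesday. It assumes the dot_product from
> > the original announcement, with the bias folded in as a first input
> > that's always 1:)
> >
> > # Training set: [bias, x1, x2] => NAND(x1, x2)
> > nand_set = {
> >   [1, 0, 0] => 1,
> >   [1, 0, 1] => 1,
> >   [1, 1, 0] => 1,
> >   [1, 1, 1] => 0,
> > }
> >
> > weights = [0.0, 0.0, 0.0]
> > rate = 0.1
> >
> > 100.times do
> >   nand_set.each do |inputs, target|
> >     output = dot_product(weights, inputs) > 0 ? 1 : 0  # threshold at 0
> >     error  = target - output                           # -1, 0, or 1
> >     # Perceptron rule: nudge each weight in the direction of the error.
> >     weights = weights.each_with_index.map { |w, i| w + rate * error * inputs[i] }
> >   end
> > end
> >
> > p weights  # nothing here forces the learned weights to be equal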
> >
> > Can you build a training set for an OR gate, and train it? What other
> > operators can you implement this way? All you need to do is build a new
> > training set and try training, which is pretty awesome if you think
> > about it. (Hint: What does "separability" mean, and how is it relevant?)
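> >
> > (For instance, here are hypothetical training sets in the same format
> > as the NAND sketch above. OR is linearly separable, so training
> > converges; XOR is the classic counterexample, and the perceptron rule
> > will cycle forever on it:)
> >
> > # OR: output 1 whenever either input is 1.
> > or_set = {
> >   [1, 0, 0] => 0,
> >   [1, 0, 1] => 1,
> >   [1, 1, 0] => 1,
> >   [1, 1, 1] => 1,
> > }
> >
> > # XOR: not linearly separable -- no single line splits its 1s from
> > # its 0s, so no single perceptron can learn it.
> > xor_set = {
> >   [1, 0, 0] => 0,
> >   [1, 0, 1] => 1,
> >   [1, 1, 0] => 1,
> >   [1, 1, 1] => 0,
> > }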
> >
> > Let's say we wanted to output smooth values instead of just 0 or 1.
> > What would you need to change in your evaluation step to get rid of the
> > thresholding? What would you need to change about learning to allow
> > your neuron to learn smooth functions? (Hint: with a smooth output
> > function, we want to scale the amount of training we do by how far off
> > we were, not just by which direction we were off.)
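> >
> > (One way to read that hint, sketched as a hypothetical train_smooth
> > helper with a plain linear output -- a sigmoid would also work, with
> > an extra derivative term in the update:)
> >
> > # Delta rule: no thresholding, and the weight update is scaled by
> > # how far off the output was, not just by its sign.
> > def train_smooth(weights, inputs, target, rate)
> >   output = dot_product(weights, inputs)  # raw value, no 0/1 cutoff
> >   error  = target - output               # any real number now
> >   weights.each_with_index.map { |w, i| w + rate * error * inputs[i] }
> > end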
> >
> > What if we wanted to do something besides multiplying each input by its
> > weight? What if we wanted to do something crazy, like take the second
> > input, square it, and multiply _that_ by the weight? That is: what if we
> > wanted to make the output a polynomial equation instead of a linear one,
> > where each input is x^1, x^2, etc., with the weights as their
> > coefficients? What would need to change in your implementation? What if
> > we wanted to do even crazier things, like bizarre trig functions?
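> >
> > (One possible answer, as a pair of hypothetical helpers: expand the
> > inputs first, then feed the bigger vector through the exact same
> > evaluation and learning code you already have:)
> >
> > # Expand a single x into [1, x, x**2, ..., x**degree], so the linear
> > # neuron's weights become polynomial coefficients.
> > def polynomial_features(x, degree)
> >   (0..degree).map { |k| x**k }
> > end
> >
> > # Any fixed transform works the same way, bizarre trig included:
> > def trig_features(x)
> >   [1, Math.sin(x), Math.cos(3 * x)]
> > end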
> >
> >
> > On Tue, Mar 10, 2009 at 01:48:49PM -0700, Josh Myer wrote:
> >> Hey, an early announcement, crazy!
> >>
> >> Tomorrow night, 8PM, at 83c, we'll have a machine learning workshop.
> >> This week's ML Wednesday is going to be another experiment in format.
> >> We'll have a real quick introduction to perceptrons (cute little
> >> baaaaby neural networks), then we'll all code one up in our language
> >> of choice.  By the time you leave, you should have written your own
> >> little artificial neuron.
> >>
> >>
> >> To that end, I need a couple of things from the audience:
> >>
> >> 1. My ML-aware peeps to step up to shepherd a bit on Wednesday night
> >>    (You've all been awesome about this thus far, so I'm not worried
> >>    about it.  You might want to brush up on the learning algorithm
> >>    used, though.  I'll do a preso, too, so it should be smooth going.)
> >>
> >> 2. Some sample code in your language of choice.  As long as you can
> >>    write the following function, we're probably good.  Here's that
> >>    function; please have it working before you come Wednesday.
> >>
> >>
> >> dot_product:
> >>
> >> takes two arrays of equal length, multiplies them elementwise, and
> >> sums the products.
> >>
> >> Test cases for dot_product:
> >>
> >> dot_product([0],[1]) = 0
> >> dot_product([1,2,3,4],[1,10,100,1000]) = 4321
> >>
> >> And, a quick version in accessible ruby:
> >>
> >> def dot_product(a, b)
> >>   sum = 0.0
> >>   i = 0
> >>
> >>   # walk both arrays in lockstep, accumulating the products
> >>   while(i < a.length)
> >>     sum += a[i]*b[i]
> >>     i += 1
> >>   end
> >>
> >>   return sum
> >> end
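> >>
> >> (And an equivalent one-liner, if you'd rather lean on Enumerable --
> >> same behavior, just more idiomatic:)
> >>
> >> def dot_product(a, b)
> >>   # zip pairs the two arrays up; inject accumulates the products
> >>   a.zip(b).inject(0.0) { |sum, (x, y)| sum + x * y }
> >> end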
> >>
> >>
> >> If this experimental format goes well, we could move on to doing more
> >> complex neural networks on top of the same ideas in another workshop,
> >> or maybe try some other learning algorithms in the same format.
> >>
> >> I hope you can join us and build your own little learner tomorrow!
> >> --
> >> Josh Myer   650.248.3796
> >>   josh at joshisanerd.com



-- 
Zhao