[Noisebridge-discuss] Analysing and Interpreting Quantitative Eye-Tracking Data in Studies of Programming: Phases of Debugging with Multiple Representations

Steven Dee mrdomino at gmail.com
Tue Sep 1 16:02:38 UTC 2009


Tangentially: this reasoning is anything but apocryphal. See for example
http://www.youtube.com/watch?v=wIiDomlEjJw.
To recapitulate my part of the discussion:

I think the big problem with visual tools is just that the computers haven't
caught up to our brains yet. Whiteboard diagramming is, in my view, one of
the most efficient ways I've encountered of conveying models of programs to
other human beings. The trouble is that whereas other human beings have
enough mental state in common with us to follow along and "parse" our
visualizations, computers are still in the stone age with respect to this
sort of interpretation -- there's a vast amount of work to be done in
cognitive science and image recognition before they get anywhere close.

Text, on the other hand, is a pretty low-bandwidth medium. If I have to
describe a system to you using only text (and especially if I don't get any
feedback from you -- if it's a document, not a conversation), I'm going to
have to write an extremely detailed design document. I'll go out on a limb
and say that, as a rule of thumb, the size of that document scales roughly
linearly with the size of the resulting program in a modern high-level
language.

The advantage to programming in text is that the computers are at much less
of a comparative disadvantage to us than they are with visualizations.
Computers can handle text pretty well at this point, and -- especially when
they're allowed to make the rules for what kind of text you get to write --
they're pretty good at turning a program description (say, Ruby code) into a
working model of it (say, a running process). They're still pretty crappy
conversationalists, and they're downright terrible at seeing intent behind
code, but the situation is at least a little less unbalanced.

So the problem with visual programming tools is that they attempt to work in
a modality where you're used to fast, high-bandwidth, low-overhead
interactions with other people, but they fail to deliver because the guy
sitting next to you interpreting the whiteboard is a deaf, autistic
four-year-old, and you're really better off just writing him a letter.

In this framework, that study might suggest that advanced programmers are
more used to "talking" to the computer (i.e. looking at its output) than they
are to trying to reverse-engineer its mental state by staring at its bits and
pieces. It's interesting stuff for sure -- thanks for the link.

On Tue, Sep 1, 2009 at 4:31 AM, Praveen Sinha <dmhomee at gmail.com> wrote:

> I heard a story once about a math teacher who always taught their students
> using hand-eye models instead of just visualization.  For example, when
> teaching simple derivatives, they would use their hand to grab the exponent
> and drag it down to the multiplier position, and ask their students to go
> through the same motions.  The reasoning (however apocryphal it may or may
> not be) was that the hands and fingers have a lot of circuitry built in for
> detail-oriented, repetitive skills like programming / knitting / making
> food / solving equations / solving video games.  Come to think of it, maybe
> I'll try a kinesthetic approach to teaching in whatever my next workshop
> is...  At any rate, maybe you are both right! :)
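>
> (Concretely, that motion traces the power rule, d/dx x^n = n*x^(n-1): the
> exponent n gets "dragged down" to become the multiplier, e.g. x^3 -> 3x^2.)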
>
> On an unrelated note, another toy I would love to have at Noisebridge is an
> eye-tracking rig.....
>
>
> On Tue, Sep 1, 2009 at 12:54 AM, Naomi Most <pnaomi at gmail.com> wrote:
>
>> Yeah, that's interesting.
>> Just to recapitulate my part of the discussion of programming
>> phenomenology:
>>
>> I posit that the physical act of programming involves holding mental
>> frameworks and objects, with their attendant potentialities, all in mind.
>> Most programmers I know, including myself, can attest to a "set-up" time,
>> before actually producing new code of any worth, that is directly
>> proportional to the size of the project and the existing codebase.
>>
>> This is the sort of set-up that happens when you've stepped away from a
>> project long enough that you've had to load other complex state in the
>> meantime (e.g. negotiating dinner with significant other). So, when you come
>> back to it, smaller projects might set you back 5 minutes of set-up time in
>> your brain, whereas larger projects might require an hour or more.
>>
>> Why this is relevant to text versus visual tools:
>>
>> Visual tools seem to attempt to provide the sort of representation that
>> happens in the brain during the practice of programming.  In the beginning
>> stages, they do seem to help; in later stages (and not very far down the
>> line) they seem to become a hindrance, making more aspects of the model
>> opaque rather than easier to mentally manipulate.
>>
>> At some point, your brain just becomes way better as a tool of model
>> state-holding and manipulation than the visual tool, and you fall back on
>> the text to provide you with pure input and output for the models and frames
>> in your mind.
>>
>> To give a somewhat politically incorrect analogy, visual tools can get you
>> about as far with programming as an autistic brain can get you in peaceful
>> negotiations of dinner.
>>
>> --Naomi
>>
>> ps. the irony of the use of the word "manipulate", rooted in the word
>> "hand", is not lost on me.
>>
>>
>>
>>
>> On Sun, Aug 30, 2009 at 9:26 PM, Jason Dusek <jason.dusek at gmail.com>wrote:
>>
>>>  This paper echoes a notion that came out of a discussion Naomi
>>>  and I had after I mentioned some comments Meryl made in my
>>>  Ruby class. The authors say:
>>>
>>>    Overall, throughout the whole debugging session expert
>>>    programmers – who also found more bugs – relied more on the
>>>    textual representation of the program than the less
>>>    experienced programmers did. Output of the program became
>>>    more important than visualization at later phases of the
>>>    debugging strategies of experts, while novice programmers
>>>    tended to rely on the visualization.
>>>
>>>  The accompanying graph (page 8 of 15) shows us that, while
>>>  novices spend substantially more time looking at the visual
>>>  representation, both groups spend most of their time looking
>>>  at the code.
>>>
>>>  This dovetails well with my changing experience of program
>>>  development: I came to care less and less for visual tools and
>>>  code visualization as I became more comfortable with just
>>>  building the model for myself.
>>>
>>>  Perhaps our visual tools are simply inadequate and successful
>>>  programmers are those who can adapt to the paucity of tools;
>>>  or perhaps the mentality required for programming is little
>>>  aided by visuals. I tend to think the latter but the research
>>>  can be read either way.
>>>
>>> --
>>> Jason Dusek
>>>
>>> http://www.ppig.org/papers/19th-Bednarik.pdf
>>>
>>
>>
>>
>> --
>> ---
>> Naomi Most
>> Producer
>> Little Moving Pictures
>>
>> +1-415-728-7490
>> naomi at littlemovingpictures.com
>> skype: nthmost
>>
>> http://twitter.com/nthmost
>>