[Noisebridge-discuss] A test of Excellence

Gavin Knight gnnrok at gmail.com
Thu Oct 18 23:52:57 UTC 2012


Related to this thread, but tangential to the previous points.

http://preyproject.com/ Is free open-source software, which can be useful
in recovering computers and phones, as many of you already know.

I suppose it could also be helpful in an experiment, but I just offer it up
as a suggestion to people in the space, if you are concerned about laptops
or phones, take a look at prey.

Also,
Noisebridge is much more public than other spaces, allowing anyone to
enter. I think it's important to factor in that many spaces enforce strict
door policies, and hours, while noisebridge does not. This is part of what
makes noisebridge special, and awesome.

Gavin

On Thu, Oct 18, 2012 at 1:58 PM, rachel lyra hospodar
<rachelyra at gmail.com>wrote:

> On 10/13/2012 4:32 PM, Martin Bogomolni wrote:
>
>> This is an interesting thought experiment.
>>>
>>
>> It started as a thought experiment some months ago.   It's progressed
>> from there.
>>
>
> i describe it as such intentionally.  Not because i have missed the fact
> that your experiment has progressed on to reality, but because i believe
> there are so many other unnacounted-for variables that it is in fact still
> in the design stage.  The reality that you are pursuing is too flawed to be
> valid.  For me the experiment is still in the "is this viable, and what can
> we determine" stage.  You'll forgive me for working to reach my own
> conclusions in that regard.
>
>
>>  How are you going to correct for different quantities of users in the
>>> various spaces?
>>>
>>
>> I chose the three spaces because they have approximately the same
>> number of users, although the demographic of those users is obviously
>> different.   I actually did a population count over the last few
>> months by coming on different days, and different times, and counting
>> how many people were there at different times.
>>
>>
>>  We don't even count usage statistics but maybe there is some way to
>>> build some data through observation and random sampling.
>>>
>>
>> Bingo :)
>>
>
> I'd like to see this data please.  how many different times?  there are
> lots of external factors on space usage (eg university and holiday
> schedules, or local events like maker faires) so you'll excuse me if i
> don't just trust you that your sampling has been broad enough to be
> relevant.  Additionally, the demographic differences that you gloss over
> are statistically influential and should be acknowledged even if they
> cannot be measured directly.
>
>
>>  What about correcting for the relative value of other objects
>>> surrounding the object in question?  EG, in a space full of nicer things,
>>> the same identical object becomes less attractive to a potential thief
>>> because other juicier objects abound.
>>>
>>
>> By choosing five objects of differing values (social, intrinsic, tool,
>> percieved etc), I'm doing my best to make them more-or-less match
>> what's in the environment.   i.e. make them all equally tempting.
>>
>>
> Mmm, i don't know if i accurately conveyed the concern - if the groups of
> objects are the same in value as each other, they will still have different
> relative values in relation to the rest of their hackerspace environments,
> since the different hackerspaces will have different concentrations of
> objects of value, and additionally those objects of value will have
> different relative values.  Unless you have already constructed this metric
> and have placed objects in the spaces that have values that are equalized
> across the relative value of the other objects in each space.  In which
> case, please share your data.
>
> Additionally, we have to note and account for the influence on relative
> value that is created by proximity to the black market.  At 16th and
> mission, 2 blocks from NB, is a prime location for buying and selling
> stolen goods, thus increasing the likelihood that a thief will be able to
> quickly divest of the stolen goods, thus perhaps increasing the likelihood
> they will be stolen.  if you cannot measure a factor that does not mean you
> can ignore it.  sometimes you can note it and assign it an approximate
> value.
>
>  That same principle also needs to be related, in a more complex way, to
>>> the neighborhood surrounding the space.  Something tells me the relative
>>> crime indices of the neighborhoods vary, from the heart of SF Mission to
>>> Santa Cruz.
>>>
>>
>> Cultures and societies are self-selecting.   All three locations have
>> varying degrees of access control, and social selective pressure.
>> Students at UCSC would find the objects as desirable as people
>> wandering into the Dallas Makerspace.
>>
>
> If there are a thousand laptops always present in one space, and you add
> one in a corner with a 'don't hack me' note, that is different than doing
> the same thing where there are only ten laptops always present. Please
> clarify.
>
> This is at the heart of what
>
>> I'm testing for, and all three handle social and physical access
>> control slightly differently, but at the core .. it comes down to
>> physical access control (door and key) and social pressure (are you
>> supposed to be here?).
>>
>>
> So let me try to understand.  The heart of what you are testing for is the
> persistence of similar objects in their respective locations.  You believe
> that this will somehow convey information about the relative success of
> three complex access control schema in three different environments?
>
> This is a hypothesis with which i disagree.
>
>  Are you going to have a variable in your equation for the ineffable
>>> benefits various differences between the spaces have in other ways, besides
>>> perhaps influencing this single metric?
>>>
>>
>> Nope, because ineffability is ineffability.   There is certainly
>> something to be said if, for example, all five objects go missing at
>> one place, but not in another.   Or perhaps that none go missing, but
>> never get put into the "correct" place, or perhaps that they will be
>> used or abused.   That will come out in post-analysis.
>>
>>
>>  I am not certain that "whether my shiny expensive shit got stolen" is
>>> the only metric of mean honestly level in a group.
>>>
>>
>> Of course not!  But it's one metric.  Also, the relative value of
>> "shiny" has been taken into account with the items.  They are
>> valuable, but not ridiculously or obviously so.
>>
>
> The calibration of this metric, of what is valuable, varies across the
> different spaces.
>
> It's entirely
>
>> possible, that to combat the Hawthorne Effect, that I have another
>> related experiment in progress but not announced in general.   I have
>> -already- run one such series without an announcement, and now I'm
>> comparing the differences.
>>
>
> Overall I find this approach to social dynamics to be paternalistic and
> relatively content-free.  There is a whole lot of
> trust-my-statistical-model handwaving.  Any one person is going to have
> blind spots, and especially in relation to geeks and social dynamics i have
> a hard time with a statistical model around complex social interactions,
> constructed by a single individual.
>
>
>>  How likely you are to have to tackle a sticky-finger sketchbomb also
>>> needs to be balanced out with how likely you are to encounter a visiting
>>> troupe of foreign journalists who have come all the way to noisebridge
>>> seeking to learn more about TECHNOLOGY. from US.  because we ROCK at
>>> technology and also at sharing, even when it is inconvenient.
>>>
>>
>> Indeed... and if sticky-fingered journalists walk away with something,
>> it still goes back directly to the core of the experiment.    How the
>> group interacts with guests is just as important as how the group
>> treats itself internally.
>>
>>
> Let me know if you design an object-permanence experiment that purports to
> measure statistically the way we interact with our guests, Mr. Scientish!
>
>  Not that I don't think we should be able to have nice things, but i think
>>> this approach is so reductionist as to be incapable of producing relevant
>>> data.
>>>
>>
>> I politely, and respectfully, disagree.   I am testing, and have
>> already tested, some of these issues in isolation from one another,
>> keeping away from complexity on purpose.
>>
>
> A mechanical engineer who functions this way makes structures that
> crumple.  You must account for the variables that you aren't isolating for,
> alongside the one you are.  The way to create a simple, testable situation
> is not to ignore all the variables you cannot quantify, or assume they
> cancel each other out.  Instead you must go through the systems identifying
> all the variables that you can, and assigning them approximate values or
> finding ways to turn them static (eg by bracing every x interval, or
> renegotiating employment contracts).  These approximations are refined as
> you make each subsequent planning or assessment pass through the system.
>  This is how to construct a budget, or a mathematical model of a static or
> dynamic system.  I haven't studied sociology or anything but something
> tells me human variables matter in human dynamics as much as the variables
> that make up fluid dynamics.
>
> It's also a reason that I've
>
>> been so quiet (when I'm normally rather gregarious) at noisebridge the
>> last few months.
>>
>>
>>  If you are interested in increasing the complexity of your model, I
>>> have suggested some variables that need to be accounted for.  I'd be
>>> happy
>>> to give feedback on the ways that you are thinking of incorporating them.
>>>
>>
>> I'm always happy to collaborate and share ideas!   That's the nature
>> of being a hacker, after all.   Although I should note that you're
>> also part of the experiment group, and if you help design the
>> experimental series, you can influence and thus invalidate the study.
>>   As long as you're unaware of the detail, though, the effect is
>> negligible.
>>
>
> again, i assert that your study's design is still so rudimentary as to be
> worthless.  If you are interested in me, personally, believing that what
> you generate even IS data, you can start by increasing the complexity of
> your model, if needed redesigning your experimental approach so that it can
> be discussed publicly.
>
>
> R.
>
>
>> -M
>>
> ______________________________**_________________
> Noisebridge-discuss mailing list
> Noisebridge-discuss at lists.**noisebridge.net<Noisebridge-discuss at lists.noisebridge.net>
> https://www.noisebridge.net/**mailman/listinfo/noisebridge-**discuss<https://www.noisebridge.net/mailman/listinfo/noisebridge-discuss>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.noisebridge.net/pipermail/noisebridge-discuss/attachments/20121018/9b6a6406/attachment-0003.html>


More information about the Noisebridge-discuss mailing list