[Noisebridge-discuss] Multi factor trustification

Sai Emrys noisebridge at saizai.com
Fri Oct 16 09:02:24 UTC 2009


I was recently reading Stephen Downes' paper "Authentication and
identification"[0], and thought of StackOverflow[1] and some related
authentication / trust network stuff I read a year or so ago[2],
and...

I'm interested in how one can 'trustificate' users. That is, given a
user who authenticates to me as (for example) having certain OpenIDs,
who (internal or external to my site) has created certain friendship
links, comments, content, etc., how can I establish a reliable metric
that determines whether the user is a) trustworthy and b) unique?

Of course, I have the various usual techniques for determining whether
some particular account is a bot or a sockpuppet - browser
fingerprinting, timing attacks, bot-foiling javascript, CAPTCHAs, etc
- so that's not *quite* what I mean to ask here. It's a more ephemeral
thing. (My difficulty in precisely stating the problem is part of what
I think someone here probably has a better answer to...)

There are two models I know of that are somewhat similar to what I want.

One is gpg/pgp web of trust via key signing, which is roughly what the
trustlet wiki is about. This is essentially the same as 'friendship'
links, except that (at least a priori) I don't have any obvious way to
establish trusted nodes that are sufficiently widespread (that would
be analogous to e.g. a notary-signed key or key signed by someone you
personally know). Sure, I could manually choose some users who are
obviously good, give them high trust, and propagate that trust to
their friends etc. But this probably won't easily capture the vast
majority of my userbase, and I'm a bit skeptical of how reliable such
friendship links are a priori, since people in practice establish them
with so little actual verification of identity or veracity (certainly
some such links may be useful for trust-by-proxy, but which ones?).
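To make the seed-and-propagate idea concrete, here's a minimal sketch (my own illustration, not anything from trustlet or gpg): a few hand-picked users are seeded with full trust, and each friendship hop passes along a damped fraction of it, so trust decays with distance from the seeds. The graph, damping factor, and round count are all assumptions for the example.

```python
# Hypothetical sketch: trust propagation over a friendship graph from
# manually-seeded trusted nodes, with damping per hop. Names, damping,
# and round count are illustrative assumptions.

def propagate_trust(friends, seeds, damping=0.5, rounds=3):
    """friends: {user: set of friend users}; seeds: users trusted a priori."""
    trust = {u: (1.0 if u in seeds else 0.0) for u in friends}
    for _ in range(rounds):
        new = dict(trust)
        for user, flist in friends.items():
            if flist:
                # inherit a damped share of the best-trusted friend's score
                inherited = damping * max(trust[f] for f in flist if f in trust)
                new[user] = max(new[user], inherited)
        trust = new
    return trust

graph = {
    "alice": {"bob"},
    "bob": {"alice", "carol"},
    "carol": {"bob"},
    "mallory": set(),          # no links into the trusted cluster
}
scores = propagate_trust(graph, seeds={"alice"})
# mallory, with no path to a seed, stays at zero trust
```

This captures the "trust-by-proxy" behavior, but note it inherits exactly the weakness described above: it's only as good as the friendship links themselves.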

The other model is that used by e.g. StackOverflow - doing certain
things earns you points, points can be earned or transferred only in
tightly controlled ways designed to be hard to spam, and
one's total points are effectively a measure of how much one has
contributed (and therefore how much one should be trusted). This is a
lot closer to what I could see working for me, but doesn't take
advantage of things I might know about a new user (e.g. their friends
on other networks like Facebook & Twitter).
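As a sketch of what "controlled ways that are hard to spam" might mean mechanically (this is my own illustration, not StackOverflow's actual rules): each action type has a point value and a daily cap, so repeating one action past its cap stops paying out. The action names, values, and caps are assumptions.

```python
# Hypothetical sketch of a StackOverflow-style points economy with
# per-action daily caps to blunt spamming. Values are illustrative.

from collections import defaultdict

POINTS = {"post_upvoted": 10, "answer_accepted": 15}
DAILY_CAPS = {"post_upvoted": 200, "answer_accepted": 500}

class Reputation:
    def __init__(self):
        self.total = 0
        self.earned_today = defaultdict(int)   # action -> points earned today

    def record(self, action):
        gain = POINTS[action]
        # clamp so no single action type can exceed its daily cap
        allowed = min(gain, DAILY_CAPS[action] - self.earned_today[action])
        if allowed > 0:
            self.earned_today[action] += allowed
            self.total += allowed
        return self.total

rep = Reputation()
for _ in range(25):               # 25 upvotes in one day...
    rep.record("post_upvoted")    # ...but only 200 points' worth count
```

The cap is the interesting design choice: it makes farming a single cheap action unprofitable without throttling ordinary users.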

Ultimately what I want is a simple number that tells me roughly how
trustworthy a given user is (and distinguishes them from e.g. a
spammer, sockpuppet, troll, pointfarmer, bot, etc), which can in turn
be used to give that user rights (e.g. moderation, creating new pages
or tags, voting, etc), and which takes advantage of whatever data I
have available. (E.g.: OpenIDs, emails and whether they validate, IP
sources, generated content, etc.)
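Collapsing those heterogeneous signals into "a simple number" could look something like the following sketch; the particular weights, saturation points, and signal choices are pure assumptions on my part, just to show the shape of a weighted multi-factor score.

```python
# Hypothetical sketch: combine several identity/activity signals into one
# trust score in [0, 1]. Weights and normalizers are assumptions.

def trust_score(email_validated, num_openids, account_age_days, content_points):
    signals = [
        (0.2, 1.0 if email_validated else 0.0),
        (0.2, min(num_openids, 3) / 3.0),           # diminishing returns past 3
        (0.3, min(account_age_days, 365) / 365.0),  # saturates at one year
        (0.3, min(content_points, 1000) / 1000.0),
    ]
    return sum(w * v for w, v in signals)

score = trust_score(email_validated=True, num_openids=2,
                    account_age_days=365, content_points=500)
```

The saturating normalizers matter: without them, one farmable signal (say, content points) could dominate the whole score.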

The process has to be fully automatic, relatively transparent to my
users (e.g. it's probably impractical to ask them to explicitly rate
how much they trust their "friends"), practical, reliable in practice
if not necessarily theoretically perfect, relatively proof against
gaming (or at least built so it's easy to detect and trace [networks
of] users trying to game it), and not rely on any third parties
changing how they behave (i.e. I can write code, but I can't expect
other people [like Facebook] to do so).

I'm not *as* concerned with unique authentication per se, in that I
don't mind anonymous / pseudonymous users gaining high trust levels or
even having multiple identities so long as they don't behave badly or
use that to game the system. (E.g.: creating accounts whose purpose is
to gain trust points and transfer them to an owner account, aka "trust
farming" - this has happened in every web app I know of that has both
a points economy of some sort and method for transferring points to
other users, and is something I want to avoid).
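One detection heuristic for the "trust farming" pattern described above (my own sketch, with illustrative thresholds): flag accounts whose incoming point transfers come overwhelmingly from donors that have earned almost nothing through normal activity themselves, since those donors look like throwaway feeder accounts.

```python
# Hypothetical sketch: flag likely trust-farming recipients whose point
# income is dominated by low-activity donor accounts. Thresholds are
# illustrative assumptions.

from collections import defaultdict

def flag_farmers(transfers, own_earnings, ring_share=0.8, min_earnings=10):
    """transfers: list of (sender, recipient, points);
    own_earnings: points each user earned via normal activity."""
    received = defaultdict(lambda: defaultdict(int))
    for sender, recipient, pts in transfers:
        received[recipient][sender] += pts
    flagged = set()
    for recipient, by_sender in received.items():
        total_in = sum(by_sender.values())
        # income from donors who earned almost nothing themselves
        sock_in = sum(p for s, p in by_sender.items()
                      if own_earnings.get(s, 0) < min_earnings)
        if total_in > 0 and sock_in / total_in >= ring_share:
            flagged.add(recipient)
    return flagged

transfers = [("sock1", "owner", 50), ("sock2", "owner", 60),
             ("alice", "bob", 20)]
earnings = {"sock1": 0, "sock2": 5, "alice": 300, "bob": 150}
suspects = flag_farmers(transfers, earnings)
```

This only catches the crude version of the attack; a determined farmer can launder transfers through mid-activity accounts, which is why tracing *networks* of colluding users (as mentioned above) is the harder and more important problem.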

Have any of you done this before? Are there other implementations besides
the gpg or StackOverflow styles that'd be good for me to take a look at?

I'm pretty sure that this is not an original idea, but I just haven't
seen much other than what I mentioned that actually tries to address
it.

Though this is actually a practical question for me (as in, I'm
currently writing code that does this and I'd like to make it not
suck), I'd also be interested in more theoretical discussion of
trust-based authentication etc in general, as I know several of you
have related interests and thus probably know a lot more about it than
I do.

Thanks,
- Sai

[0] http://downes.ca/post/12 - a pretty good paper, IMHO, and worth the read
[1] http://stackoverflow.com
[2] http://trustlet.org &
http://wiki.github.com/technoweenie/restful-authentication/security-patterns


