[cloud] Video of Freenet 0.7 darknet routing algorithm, notes, links, etc
noisebridge at saizai.com
Wed Jul 22 08:41:12 UTC 2009
I found this quite interesting and understandable.
Basically, the algorithm is as follows.
1. Each node knows k > 1 other nodes ('neighbors')
2. ... in some way that relates to human social networks, i.e.
creating a 'small world' network
Swapping algorithm:
1. Assign each new node a random real number 0-1 on a ring. This is its 'address'.
2. Ask its neighbors what their addresses are. The difference (on the
ring) between addresses is their 'distance'. Initial distances are, of
course, random.
3. Swap your address with a neighbor's (and inform your neighbors of
this swap) iff that swap would reduce the average distance to your neighbors.
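The swap rule above can be sketched roughly like so. This is a toy sketch following the 'average distance' criterion as described here; note that Freenet's actual implementation uses a somewhat different (randomized, product-of-distances) rule, so treat this as illustrative only:

```python
def ring_distance(a, b):
    """Distance between two addresses on the unit ring [0, 1)."""
    d = abs(a - b)
    return min(d, 1.0 - d)

def should_swap(my_addr, peer_addr, my_neighbors, peer_neighbors):
    """Swap addresses iff doing so reduces the total (hence average)
    ring distance from each node to its own neighbors."""
    before = (sum(ring_distance(my_addr, n) for n in my_neighbors) +
              sum(ring_distance(peer_addr, n) for n in peer_neighbors))
    after = (sum(ring_distance(peer_addr, n) for n in my_neighbors) +
             sum(ring_distance(my_addr, n) for n in peer_neighbors))
    return after < before
```

E.g. a node at 0.0 whose neighbors sit near 0.5 would happily swap with a node at 0.5 whose neighbors sit near 0.0; over many such swaps, addresses drift toward matching the social topology.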
"Greedy" Routing protocol:
Send packet to the (live?) neighbor who is closest to the target address.
The ideal is that reaching a node that is a shorter 'distance' in
terms of address space also takes a smaller number of hops in terms of
network topology.
Result: most routes take O(log^2 n) hops to target.
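The greedy step is simple enough to sketch (again a toy Python version under my own naming, with addresses standing in for nodes and a dict mapping each address to its neighbors' addresses):

```python
def ring_distance(a, b):
    """Distance between two addresses on the unit ring [0, 1)."""
    d = abs(a - b)
    return min(d, 1.0 - d)

def greedy_route(source, target, neighbors_of):
    """At each step, forward to the neighbor closest (on the ring) to
    the target address; stop if no neighbor improves on the current
    node (a local minimum, where greedy routing gets stuck)."""
    path = [source]
    current = source
    while current != target:
        nxt = min(neighbors_of[current],
                  key=lambda n: ring_distance(n, target))
        if ring_distance(nxt, target) >= ring_distance(current, target):
            break  # stuck: no neighbor is closer than we are
        path.append(nxt)
        current = nxt
    return path
```

The local-minimum break is where the swapping algorithm earns its keep: if addresses reflect the social topology well, greedy steps almost always find a closer neighbor.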
There's a nice visualization of this halfway through the talk.
1. There must be some connection to a '6 degrees of separation'
network... I think. I'm not certain that this routing algorithm will
work if neighbors are essentially randomly assigned.
2. Revealing your address also reveals your neighbors, ishly. So e.g.
all the anarchists are likely to cluster together in some region of
address space. Not 100%, but real tendency.
3. No clear capability to work around bridges (e.g. nodes that can
bridge between Internet and firewalled intranet)
4. Most nodes must have multiple neighbors.
5. (?) No clear provision to avoid orphan subnets other than faith in 6 Degrees.
6. No provision (except likelihood of friends being in geo proximity)
for RL routing time differences. Probably not addressable, though,
given the other constraints on darknet routing.
Superficially, this method seems to be fairly similar to those used in
other things mentioned in the previous thread, but I've not looked at
their whitepapers in any detail yet.
I'm curious how much my original idea (as partially described on
discuss) could be extended to also operate partially in a way similar
to Freenet - i.e. allowing many individual users to in some sense use
the resources of all others on the network.
Currently Freenet only does a very heuristic sort of distributed
storage. It'd be interesting if it also allowed computation, such that
all (willing?) participants essentially had access to a supercomputer.
Combined with some strong MapReduce / BOINC style method for parceling
out problems (and determining whether they're suitable for it) and
ensuring minimal effective footprint, this could be a powerful tool,
since most of us have a lot of spare CPU/GPU, RAM, and HD most of the time.
Linkdumping for ease of reference:
Public Information Retrieval: