[Noisebridge-discuss] what could we do with one of these babies..

Jesse Zbikowski embeddedlinuxguy at gmail.com
Mon Apr 19 21:32:13 UTC 2010


On Sun, Apr 18, 2010 at 2:12 PM, joel jaeggli <joelja at bogus.com> wrote:
> On 4/18/2010 1:25 PM, Jesse Zbikowski wrote:
>> So the worst case is theoretically unbounded badness; you could be
>> sitting around all day waiting to talk, or at least milliseconds.
>
> half duplex gigabit ethernet is rare enough that i would hazard you've never
used a system that was both gigabit and used a repeater (hub); there is no
half duplex 10Gig.

Sorry, exponential backoff was a bad example. The point I wanted to
make, which I think is still valid, is that Ethernet latencies are
generally non-deterministic (although full duplex is certainly a step
forward). My understanding is that you have to build something on top of
Ethernet to get determinism, such as EtherCAT, or perhaps design a very
clever system configuration. Maybe you know some better ways to skin
this cat?

>> EtherCAT on the other hand uses a ring topology, where every device
>> talks in turn and there is no bus contention.
>
> The realtime process control industry loved arcnet for precisely this
reason, on the other hand it was also slow and pretty much failed at both
the standardization and the commoditization exercise, which is why this has
> pretty much all moved over to ethernet.

The approach EtherCAT takes is to get rid of the ring overhead by not
having each device read the whole packet and spit it back out; instead
each device reads and modifies the packet on the fly with only a very
small delay (<<1us). Also, rather than addressing a separate packet to
each device, a single packet contains sections addressed to the
individual devices on the network, which eliminates a lot of per-packet
overhead. The advantages of EtherCAT should be most relevant when there
is a large number (dozens or hundreds) of devices, each receiving data
much smaller than the minimum packet size.
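
A rough back-of-the-envelope sketch of that overhead argument (in Python;
the 8-byte payload and the 2-byte per-slot header are assumed example
numbers for illustration, not EtherCAT's actual framing):

    # Compare wire overhead for N devices each needing a small payload:
    # one standard Ethernet frame per device vs. one consolidated frame
    # carrying a slot per device (the EtherCAT-style approach).

    ETH_OVERHEAD = 38  # preamble+SFD 8, header 14, FCS 4, interframe gap 12 (bytes)
    MIN_PAYLOAD = 46   # minimum Ethernet payload; anything smaller gets padded

    def per_device_frames(n, payload):
        """Total bytes on the wire if every device gets its own frame."""
        return n * (ETH_OVERHEAD + max(payload, MIN_PAYLOAD))

    def consolidated_frame(n, payload, slot_header=2):
        """Total bytes if one frame carries a slot per device.
        slot_header is an assumed per-device addressing cost, not a spec value."""
        return ETH_OVERHEAD + max(n * (payload + slot_header), MIN_PAYLOAD)

    for n in (10, 100):
        print(f"{n} devices, 8-byte payloads: "
              f"{per_device_frames(n, 8)} vs {consolidated_frame(n, 8)} bytes")

With 100 devices that works out to roughly 8400 bytes of separate frames
versus about 1000 bytes in one consolidated frame, which is where the
savings come from when payloads are far below the minimum frame size.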

>> Hence our jitter (delay
>> between when we expect a signal and when we actually get it) is low;
>> less than a microsecond. Deterministic behavior is the key to
>> realtime.
>
> right, but by moving forward a couple of generations on the technology curve
> you can move 1 or 2 orders of magnitude more data with an order of magnitude
> lower latency which doesn't sound that painful.

Average latency is important, but determinism is about bounding the
worst case. You might have a great average latency of 5us per
transaction, but what are the odds you get a freak latency of 100us?
Now your 10kHz control loop is b0rked, the robot shaking Obama's hand
breaks his fingers, and you lose your funding.
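
To make the arithmetic explicit (a quick Python sketch; the 5us and 100us
figures are just the example numbers from the paragraph above, not
measurements):

    # Deadline budget for a 10kHz control loop.
    loop_rate_hz = 10_000
    period_us = 1_000_000 / loop_rate_hz   # 100 us per control cycle

    typical_us = 5    # comfortable: a few percent of the budget
    freak_us = 100    # one freak transaction eats the entire cycle

    print(f"cycle budget: {period_us:.0f} us")
    print(f"typical 5 us transaction: {typical_us / period_us:.0%} of the budget")
    print(f"freak 100 us transaction: {freak_us / period_us:.0%} -- deadline blown")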

I agree that for some applications, over-provisioning is a good enough
solution (so maybe you lose a video frame once in a blue moon, who
cares). Other people want 100% determinism: the system will never fail
unless the hardware is broken. I like to take the middle ground: if
the probability of not meeting a deadline is overshadowed by the
probability of cosmic rays causing a data error, it's good enough!


