[Rack] possibility of tenants in the new rack
jim at well.com
Sun Mar 6 16:29:14 UTC 2011
beautifully said! you wanna come by some tuesday
or friday afternoon and share some of your knowledge?
there's also the issue of old equipment that's
performing some vital job: it's kind of stupid, but
the problem exists, and the danger is that when
some part goes, there's no available replacement--
gotta have a feel for this and anticipate so's to
have a parts inventory.
On Sun, 2011-03-06 at 00:38 -0800, Dr. Jesus wrote:
> On Sun, Mar 6, 2011 at 12:08 AM, Jonathan Lassoff <jof at thejof.com> wrote:
> > On Sat, Mar 5, 2011 at 11:02 PM, jim <jim at well.com> wrote:
> >> JS: it seems there's some significant drift among
> >> some of us and our various extended circles toward
> >> getting a rack somewhere that we can share. i'm
> >> for it, in general.
> >> i really like the idea that it's at noisebridge
> >> so i can just waddle from whatever i'm doing in the
> >> space to the rack, if need be.
> >> it's not so bad if a shared rack is nearby, say
> >> downstairs or a block away. not as good, but not so
> >> bad.
> >> i could get prices from the likes of telx and
> >> qwest and other such in SF, but i'd rather work on
> >> getting something more homebrew and very local.
> > Personally, I'd recommend against TelX and 200 Paul in general. I've
> > found the quality of their facilities sub-par, and their pricing mostly
> > as expensive as other providers'.
> > They filed an S1 about a year ago, and they've been trying to keep
> > their revenues on the up and up quarter by quarter. For example,
> > they've started charging a recurring monthly rate for x-connects
> > whereas they hadn't before.
> > Just generally shafty, in my experience.
> > I'm curious to know more about why physical access is important for
> > you. In this day and age of serial consoles, IPMI BMCs, and lights out
> > boards, what is the need to access a physical server unless it's to
> > replace failing hardware?
> Jim and I have talked about this, and we agree there's a hard-to-define
> learning effect that comes from working with the hardware directly.
> I've noticed the effect gives a hobbyist more context in which to
> place newly acquired knowledge, among other things.
> I used to teach stupid windows tricks to high school kids in a
> previous life, and that experience convinced me that a novice tech or
> engineer needs to be able to clearly visualize the hardware to have a
> solid foundation upon which to learn other concepts. It's really hard
> to explain what some BIOS options mean if the student can't picture a
> typical LPC bus, for example. We still have to put up with twiddling
> BIOS settings even on a modern UEFI machine with full remote
> everything and an IPKVM. There are other things a mentor will teach
> that students will never get out of any industry training, like the
> tap-the-DIMM trick to get at the SMBus, how to inject a SLIC, etc.
> It also just occurred to me that in some jobs a tech needs to be able
> to spec replacement parts from the used market, especially for
> hobbyist Linux stuff running on older machines. It's easier to learn
> the PC hardware taxonomy if you can look at a bunch of PCs instead of
> reading the A+ study guide or something equally hands-off.
> > In the city proper, I'd recommend checking out Monkeybrains, Cernio,
> > GNI/365 Main, and ServePath/GoGrid.
> > I'd contact these folks directly and start asking, but I'm not sure
> > what level of interest there is. Personally, I'm interested in 1U, 1
> > Amp, and ~1 Mbit/s of average sustained bandwidth.
> All the Noisecloud sites can rack something like that given the
> conditions listed in the page on the wiki.