[Rack] so can i work on minotaur

Ben Kochie ben at nerp.net
Tue Feb 5 05:04:09 UTC 2013


Sounds like a good plan.

-ben

On Mon, 4 Feb 2013, Jake wrote:

> i can't even figure out what size the drives are because i can't look at 
> the partition table.
>
> I have to wonder if things would be any different if i unplugged the hard 
> drive right now.
>
> i can't run scp so i'm going to plug in a usb stick and try to back things 
> up.  they might only exist in RAM for all i know.
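A minimal sketch of that USB backup, assuming the stick shows up as /dev/sdc1 (a hypothetical name; check dmesg after plugging it in). The tar step is demonstrated on a scratch directory so the commands can be run as-is:

```shell
# On minotaur the stick would be mounted and /etc archived onto it
# (device name is a guess -- confirm with dmesg or lsblk first):
#   mount /dev/sdc1 /mnt
#   tar czf /mnt/minotaur-etc.tar.gz /etc
# The same tar invocation, demonstrated on a scratch directory:
mkdir -p /tmp/demo-etc
echo "1234" > /tmp/demo-etc/gatecodes
tar czf /tmp/demo-etc.tar.gz -C /tmp demo-etc
# List the archive contents to verify the backup actually captured files:
tar tzf /tmp/demo-etc.tar.gz
```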
>
> On Mon, 4 Feb 2013, Jonathan Lassoff wrote:
>
>> On Mon, Feb 4, 2013 at 8:39 PM, Jake <jake at spaz.org> wrote:
>>> I can wait but i don't know if i'll be here tomorrow.  I guess i shouldn't
>>> take it down if i don't have a plan to bring it back up.
>>> 
>>> will you do it without me or should i just dd the drive to something 
>>> myself?
>> 
>> I don't think there's anything on there that we couldn't afford to
>> lose (other than the gate codes list!).
>> 
>> Maybe grab a copy of /etc (and wherever the codes are being pulled
>> from), and deploy onto a new disk?
>> 
>> dd is crude, and will only work well if the drive or partitions are
>> the same size.
>> 
>> --j
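For a disk that is throwing read errors, a plain dd will abort at the first bad sector; a sketch of a more forgiving invocation, with hypothetical device names (/dev/sdb failing, /dev/sdc the replacement), demonstrated on scratch image files so it is runnable:

```shell
# On the real hardware (hypothetical names -- double-check with lsblk/fdisk -l):
#   dd if=/dev/sdb of=/dev/sdc bs=64K conv=noerror,sync
# conv=noerror keeps going past read errors; sync pads short reads with
# zeros so offsets stay aligned. GNU ddrescue, if available, does this
# job better (it retries and logs bad regions).
# The same invocation exercised on image files:
dd if=/dev/zero of=/tmp/src.img bs=1K count=64 2>/dev/null
dd if=/tmp/src.img of=/tmp/dst.img bs=4K conv=noerror,sync 2>/dev/null
cmp /tmp/src.img /tmp/dst.img && echo "copy verified"
```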
>>> 
>>> 
>>> On Mon, 4 Feb 2013, Ben Kochie wrote:
>>> 
>>>> I'd love to give you a hand.  I have some of the tools on me for doing
>>>> this work, I just didn't have time to go to NB on the weekend.
>>>> 
>>>> Want to wait till Tuesday?  I'm also trying to dig up an SSD to replace
>>>> that drive.
>>>> 
>>>> -ben
>>>> 
>>>> On Mon, 4 Feb 2013, Jake wrote:
>>>> 
>>>>> So it's monday night and i'm at noisebridge.
>>>>> 
>>>>> I would like to take down the drive that is failing and try to copy it
>>>>> onto another drive, so unless anyone objects, i will do this.
>>>>> 
>>>>> I know it will fuck up things but i don't see another way, do you?
>>>>> 
>>>>> maybe someone can help put the pieces back together if it doesn't work
>>>>> after i'm done with the dd?
>>>>> 
>>>>> -jake
>>>>> 
>>>>> 
>>>>> Jonathan Lassoff <jof at thejof.com> wrote:
>>>>> 
>>>>> I can try and reboot it, but I don't have faith that it will restart.
>>>>> 
>>>>> Unfortunately, I can't tell what mounts sdb backs:
>>>>> 
>>>>> root at minotaur:/etc/lvm# pvdisplay
>>>>> Bus error
>>>>> 
>>>>> However, it hosts /boot, so... that's no good:
>>>>> 
>>>>> root at minotaur:/etc/lvm# mount | grep sdb
>>>>> /dev/sdb1 on /boot type ext2 (rw)
>>>>> 
>>>>> 
>>>>> 
>>>>> Anyone have some spare SSDs?
>>>>> 
>>>>> --j
>>>>> 
>>>>> On Fri, Feb 1, 2013 at 4:21 PM, Jonathan Lassoff <jof at thejof.com>
>>>>> wrote:
>>>>> 
>>>>>> Shit...
>>>>>> 
>>>>>> 
>>>>>> [819573.242701] sd 2:0:0:0: [sdb] Unhandled error code
>>>>>> [819573.242716] sd 2:0:0:0: [sdb]  Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
>>>>>> [819573.242732] sd 2:0:0:0: [sdb] CDB: Read(10): 28 00 01 67 f9 80 00 00 20 00
>>>>>> [819573.242765] end_request: I/O error, dev sdb, sector 23591296
>>>>>> [819573.246916] sd 2:0:0:0: [sdb] Unhandled error code
>>>>>> [819573.246944] sd 2:0:0:0: [sdb]  Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
>>>>>> [819573.246966] sd 2:0:0:0: [sdb] CDB: Read(10): 28 00 01 67 f9 80 00 00 08 00
>>>>>> [819573.247012] end_request: I/O error, dev sdb, sector 23591296
>>>>>> [819573.251895] sd 2:0:0:0: [sdb] Unhandled error code
>>>>>> [819573.251912] sd 2:0:0:0: [sdb]  Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
>>>>>> [819573.251933] sd 2:0:0:0: [sdb] CDB: Read(10): 28 00 01 67 f9 80 00 00 08 00
>>>>>> [819573.251984] end_request: I/O error, dev sdb, sector 23591296
>>>>>> [819592.830436] sd 2:0:0:0: [sdb] Unhandled error code
>>>>>> [819592.830450] sd 2:0:0:0: [sdb]  Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
>>>>>> [819592.830465] sd 2:0:0:0: [sdb] CDB: Read(10): 28 00 01 8b ba d0 00 00 08 00
>>>>>> [819592.830498] end_request: I/O error, dev sdb, sector 25934544
>>>>>> [819592.835028] sd 2:0:0:0: [sdb] Unhandled error code
>>>>>> [819592.835039] sd 2:0:0:0: [sdb]  Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
>>>>>> [819592.835051] sd 2:0:0:0: [sdb] CDB: Read(10): 28 00 01 8b ba d0 00 00 08 00
>>>>>> [819592.835078] end_request: I/O error, dev sdb, sector 25934544
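Those Read(10) CDBs actually encode the failing location: the four bytes after the opcode (28) are a big-endian 32-bit LBA. Decoding the first one in the shell confirms it matches the sector number the kernel prints on the next line:

```shell
# CDB: 28 00 01 67 f9 80 00 00 20 00
#         ^^ [01 67 f9 80] = big-endian 32-bit logical block address
printf '%d\n' 0x0167f980   # prints 23591296, same as "sector 23591296"
```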
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> And... Minotaur's disk dies.
>>>>>> I think we're way overloading this box that was intended as an
>>>>>> out-of-band access host. :p
>>>>>> 
>>>>>> --j
>>>>>> 
>>>>>> On Fri, Feb 1, 2013 at 4:18 PM, Jake <jake at spaz.org> wrote:
>>>>>> 
>>>>>>> *** System restart required ***
>>>>>>> Last login: Thu Jan 31 22:56:43 2013 from awesome.local
>>>>>>> jake at minotaur:~$ touch sdfkj
>>>>>>> touch: cannot touch `sdfkj': Read-only file system
>>>>>>> jake at minotaur:~$ mount
>>>>>>> /dev/mapper/minotaur-root on / type ext4 (rw,errors=remount-ro)
>>>>>>> proc on /proc type proc (rw,noexec,nosuid,nodev)
>>>>>>> sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
>>>>>>> none on /sys/fs/fuse/connections type fusectl (rw)
>>>>>>> none on /sys/kernel/debug type debugfs (rw)
>>>>>>> none on /sys/kernel/security type securityfs (rw)
>>>>>>> udev on /dev type devtmpfs (rw,mode=0755)
>>>>>>> devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
>>>>>>> tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
>>>>>>> none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
>>>>>>> none on /run/shm type tmpfs (rw,nosuid,nodev)
>>>>>>> /dev/sdb1 on /boot type ext2 (rw)
>>>>>>> /dev/mapper/minotaur-home on /home type ext4 (rw)
>>>>>>> rpc_pipefs on /run/rpc_pipefs type rpc_pipefs (rw)
>>>>>>> nfsd on /proc/fs/nfsd type nfsd (rw)
>>>>>>> 
>>>>>>> mount: warning: /etc/mtab is not writable (e.g. read-only filesystem).
>>>>>>>        It's possible that information reported by mount(8) is not
>>>>>>>        up to date. For actual information about system mount points
>>>>>>>        check the /proc/mounts file.
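That warning is the key detail: once / remounts read-only, mount(8) can no longer rewrite /etc/mtab, so its output goes stale. The kernel's own mount table stays current and needs no write access to read:

```shell
# /etc/mtab is an ordinary file mount(8) tries to update; /proc/mounts is
# generated by the kernel on read and always reflects the real state.
# Show the root filesystem entry as the kernel sees it:
grep ' / ' /proc/mounts
```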
>>>>>>> 
>>>>> _______________________________________________
>>>>> Rack mailing list
>>>>> Rack at lists.noisebridge.net
>>>>> https://www.noisebridge.net/mailman/listinfo/rack
>>>>> 
>>>> 
>> 
>


