[geeks] ZFS *panics* two machines when I try to import bad pool
Micah R Ledbetter
vlack-lists at vlack.com
Thu Nov 8 00:49:36 CST 2007
I'm having trouble.
I somehow managed to nuke a disk "and a half" from a two-disk ZFS
mirror that I had running off a Sun Blade 100. The machine was using
an old version of Solaris Express: Community Release, a USB card, and
two 500GB Seagate drives in SATA<->USB2 enclosures. The disks held
about 250GB of data.
I'm trying to move the data off ZFS to something that Linux can
read. (FUSE-ZFS couldn't even see the ZFS filesystem.) Originally, my
plan was to pull one of the two disks out of the ZFS mirror and
reformat it as UFS, copy the data from ZFS to UFS, plug the UFS drive
into Linux, format the old ZFS drive as ext3, copy the data over a
second time, and then re-mirror them. That is, I was taking the cheap
way out so I didn't have to go buy another disk.
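For the record, the first step was going to be roughly the following.
This is just a sketch - the pool and device names here are made up,
and the exact newfs step depends on how the freed disk ends up labeled:
# zpool detach tank c2t0d0             # drop one disk out of the mirror
# newfs /dev/rdsk/c2t0d0s0             # put a UFS filesystem on the freed disk
# mount /dev/dsk/c2t0d0s0 /mnt
# cd /tank ; find . | cpio -pdm /mnt   # copy the pool contents onto the UFS disk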
However, in trying to copy the data from the ZFS disk to something
else the first time, I have run into problems:
- It gives a shit ton of read errors when I try to follow my first
plan on my old SX:CR install. It was also extremely slow - more than
14 hours of copying - so I was trying to avoid this route, but it may
be my only option. (See the zpool status sketch after this list for a
way to check which files are actually damaged.)
- Under Mac OS X 10.5, it panics the kernel after transferring about
25GB. My dad didn't really want to use his system as my guinea pig, so
I only saw this behavior twice.
- Booting the official Solaris 10 11/06 install DVD from Sun on my
900MHz PC, the kernel panics as soon as I do a `zpool import`. I've
attached the log for this panic below (captured over the serial port)
on the off chance that someone can help me.
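On the SX:CR box - the first case above, where the pool at least
imports - I think something like this should list which files actually
have unrecoverable errors (pool name made up):
# zpool status -v tank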
Anyone have any clues? I feel like I might be SOL. Please contradict me.
- Micah
# zpool import -f
panic[cpu0]/thread=d4e4a600: BAD TRAP: type=e (#pf Page fault)
rp=d4eb97f0 addr=747865 occurred in module "<unknown>" due to an
illegal access to a user address
zpool: #pf Page fault
Bad kernel fault at addr=0x747865
pid=276, pc=0x747865, sp=0xfa2739b0, eflags=0x10202
cr0: 8005003b<pg,wp,ne,et,ts,mp,pe> cr4: d8<pge,mce,pse,de>
cr2: 747865 cr3: cf4e000
gs: d4eb01b0 fs: fe8d0000 es: db020160 ds: d4f20160
edi: 0 esi: 0 ebp: d4eb9860 esp: d4eb9828
ebx: 100 edx: d47b9600 ecx: 747865 eax: d47b97c0
trp: e err: 0 eip: 747865 cs: 158
efl: 10202 usp: fa2739b0 ss: d47b97c0
d4eb9750 unix:die+a7 (e, d4eb97f0, 747865)
d4eb97dc unix:trap+103f (d4eb97f0, 747865, 0)
d4eb97f0 unix:_cmntrap+9a (d4eb01b0, fe8d0000,)
d4eb9860 747865 (d47b9600)
d4eb9874 zfs:dnode_buf_byteswap+1d (d47b7000, 20)
d4eb989c zfs:arc_read_done+68 (d595ed40)
d4eb9a04 zfs:zio_done+12a (d595ed40)
d4eb9a14 zfs:zio_next_stage+76 (d595ed40)
d4eb9a40 zfs:zio_read_decompress+79 (d595ed40)
d4eb9a50 zfs:zio_next_stage+76 (d595ed40)
d4eb9a64 zfs:zio_wait_for_children+43 (d595ed40, 11, d595e)
d4eb9a78 zfs:zio_wait_children_done+15 (d595ed40)
d4eb9a88 zfs:zio_next_stage+76 (d595ed40)
d4eb9ab4 zfs:zio_vdev_io_assess+129 (d595ed40)
d4eb9ac4 zfs:zio_next_stage+76 (d595ed40)
d4eb9ae8 zfs:vdev_mirror_io_done+1ce (d595ed40, d4eb9b04,)
d4eb9af4 zfs:zio_vdev_io_done+22 (d595ed40)
d4eb9b04 zfs:zio_next_stage+76 (d595ed40)
d4eb9b18 zfs:zio_wait_for_children+43 (d595ed40, 11, d595e)
d4eb9b2c zfs:zio_wait_children_done+15 (d595ed40)
d4eb9b4c zfs:vdev_mirror_io_start+169 (d595ed40)
d4eb9b70 zfs:zio_vdev_io_start+14c (d595ed40)
d4eb9b80 zfs:zio_next_stage+76 (d595ed40)
d4eb9b98 zfs:zio_ready+37 (d595ed40)
d4eb9ba8 zfs:zio_next_stage+76 (d595ed40)
d4eb9bbc zfs:zio_wait_for_children+43 (d595ed40, 1, d595ef)
d4eb9bd0 zfs:zio_wait_children_ready+15 (d595ed40)
d4eb9be0 zfs:zfsctl_ops_root+24f0a61b (d595ed40, d4eb9c20,)
d4eb9bec zfs:zio_nowait+b (d595ed40)
d4eb9c20 zfs:arc_read+3d6 (d4423080, db17e8c0,)
d4eb9c84 zfs:zfsctl_ops_root+24ed9522 (d59a5a88, d4423080,)
d4eb9cac zfs:zfsctl_ops_root+24ed96a8 (d59a5a88, d4423080,)
d4eb9ce4 zfs:dnode_hold_impl+b4 (d5623c40, 30, 0, 1,)
d4eb9d04 zfs:dnode_hold+19 (d5623c40, 30, 0, fa)
d4eb9d34 zfs:dmu_bonus_hold+26 (d5623c54, 30, 0, fa)
d4eb9d6c zfs:vdev_dtl_load+47 (d4dd8980)
d4eb9d84 zfs:zfsctl_ops_root+24efa164 (d4dd8980)
d4eb9d9c zfs:zfsctl_ops_root+24efa108 (d4dd6d80)
d4eb9db4 zfs:zfsctl_ops_root+24efa108 (d4ddb6c0, d538f828,)
d4eb9e00 zfs:spa_load+4bf (db17e8c0, d5271e78,)
d4eb9e34 zfs:spa_tryimport+82 (d5271e78)
d4eb9e58 zfs:zfs_ioc_pool_tryimport+34 (d4ff3000)
d4eb9e78 zfs:zfsdev_ioctl+fb (2d40000, 5a06, 8042)
d4eb9e98 genunix:cdev_ioctl+2b (2d40000, 5a06, 8042)
d4eb9ebc specfs:spec_ioctl+62 (d4e41c00, 5a06, 804)
d4eb9eec genunix:fop_ioctl+24 (d4e41c00, 5a06, 804)
d4eb9f84 genunix:ioctl+199 (3, 5a06, 80426d4, d)
syncing file systems... done
skipping system dump - no dump device configured
rebooting...