[SunHELP] [50% solved, but need help ;-)] Sol10 - fsck problem on SunSparc-AXMP

listmail listmail at triad.rr.com
Mon Mar 30 12:52:48 CDT 2009


I was speaking in the context of my own setups and habits.  I prefer to 
clear out the OS mirror configuration completely, so that the drives are 
truly free-standing and either one can be booted on its own without 
DiskSuite worrying about the other disk's state, along with the other 
flexibility that brings.  In your case you may not want to go that far, 
since your detached submirrors are already sitting there as plain stripes.

*Anyway*, perhaps in your case you may want to physically disconnect 
c1t0d0 to leave it in its current state, and then boot c1t1d0s0 to do the 
patch-cluster test (installing the boot block and/or making config changes 
on it from a single-user CD-ROM boot first, if needed).
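
Roughly that would be something like this (the "disk1" alias is only an 
example - check devalias on your box for whatever maps to c1t1d0; untested 
here, so double-check before relying on it):

ok boot cdrom -s
# fsck -y /dev/rdsk/c1t1d0s0
# installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c1t1d0s0
# mount /dev/dsk/c1t1d0s0 /a
  ...point /a/etc/vfstab at the c1t1d0 slices and remove the md rootdev 
     line from /a/etc/system if it is still there...
# umount /a
# init 0
ok boot disk1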

Michael Karl wrote:
> Hello,
>
> http://docs.sun.com/app/docs/doc/817-2530/6mi6gg883?a=view
>
> 1. I didn't run metaroot and metaclear on the root mirrors ... I'm 
> asking why I should?
> 2. If I metadetach all submirrors of the boot disk, I'm thinking that 
> I must be able to boot both the "first-boot" disk and the 
> "mirror-boot" disk directly from the ok prompt ... but right now I can 
> only boot the "first-boot" disk, even without doing "1." ... the root 
> on the "mirror-boot" disk is not clean for booting ... maybe it's not 
> in sync.
> 3. I don't remember how I solved this problem in the past. I'm 
> thinking: metadetach the submirrors, upgrade the "first" disk with the 
> patch cluster, check that the "first" disk works, and metattach the 
> submirrors again if everything is fine ... if not, boot the 
> metadetached mirror and use metattach to sync the "first" disk back to 
> the original state.
>
> Where is my mistake now?
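
That plan is basically the documented one.  In command form it is roughly 
(d-numbers from your metastat; a resync always runs from the attached side 
to the side being attached):

# metadetach d0 d101      (and the other three pairs)
  ...install the patch cluster on the running c1t0d0 system, reboot, test...
# metattach d0 d101       (d100 -> d101 resync; repeat for d1, d6, d7)

If the patching goes bad, the rollback is the awkward part: you boot the 
detached c1t1d0 copy, rebuild the mirrors with that disk as the only 
submirror, and only then attach the c1t0d0 slices so the bad copy gets 
overwritten.  That is exactly where the boot block and vfstab prep on 
c1t1d0 matter, and it is the step I would double-check.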
>
> Thanks Michael
>
>
> Michael Karl wrote:
>> Hi,
>>
>> listmail wrote:
>>> metastat says there are problems?  
>> everything was OK before I metadetached the submirrors.
>>> Did you run metaroot to revert vfstab to use the physical slice 
>> no ... why? ... if one disk fails, it should work without using 
>> metaroot ... I'm thinking.
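
(metaroot given a plain slice is what rewrites the / line in /etc/vfstab 
and removes the md rootdev entry from /etc/system; without that, a disk 
booted on its own still tries to mount root from /dev/md/dsk/d0.  Roughly:

# metaroot /dev/dsk/c1t0d0s0

on the running first disk, plus the same two files edited by hand under /a 
on the detached second disk if you want to boot that one from its bare 
slice.  The non-root filesystems in vfstab have to be changed by hand 
either way.)
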
>>> and then metadetach all (secondary) sub mirror(s) before rebooting?
>> yes
>>> And after you rebooted did you use metaclear on the (secondary) sub 
>>> mirrors and then metaclear -r each toplevel mirror?
>> no ... why?
>>>   And then perhaps also use metadb to delete the state replicas so 
>>> that metadb -i returns nothing.   
>> ok
>>> You can still boot right?
>> yes ... with the first disk, after I corrected the OpenBoot PROM 
>> environment.  But this misconfiguration of the OBP was the reason the 
>> metadetached mirror disk didn't start and came up with inode errors. 
>> I'm thinking that what I did yesterday was wrong:
>> 1. metadetach all submirrors
>> 2. init 5
>> 3. replace the RAM module
>> 4. switch on and boot without any parameters
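
(If the NVRAM settings were disturbed while the box was open for the RAM 
swap, a quick look at the ok prompt is worthwhile before trusting either 
disk - something like:

ok printenv boot-device
ok devalias
ok probe-scsi-all
ok boot disk1 -s

where disk1 is only an example alias; use whatever actually maps to the 
disk you want to come up on.)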
>>>
>>> please show
>>> # metastat
>> Sorry ... the output is in German, so here it is in English terms ;-)
>>
>> # metastat
>> d7: Mirror
>>    Submirror 1: d70
>>      State: Okay
>>    Pass: 1
>>    Read option: roundrobin (default)
>>    Write option: parallel (default)
>>    Size: 12902400 blocks (6.2 GB)
>>
>> d70: Submirror of d7
>>    State: Okay
>>    Size: 12902400 blocks (6.2 GB)
>>    Stripe 0:
>>        Device     Start Block  Dbase        State Reloc Hot Spare
>>        c1t0d0s7          0     No            Okay   Yes
>>
>>
>> d6: Mirror
>>    Submirror 1: d60
>>      State: Okay
>>    Pass: 1
>>    Read option: roundrobin (default)
>>    Write option: parallel (default)
>>    Size: 12289200 blocks (5.9 GB)
>>
>> d60: Submirror of d6
>>    State: Okay
>>    Size: 12289200 blocks (5.9 GB)
>>    Stripe 0:
>>        Device     Start Block  Dbase        State Reloc Hot Spare
>>        c1t0d0s6          0     No            Okay   Yes
>>
>>
>> d1: Mirror
>>    Submirror 1: d10
>>      State: Okay
>>    Pass: 1
>>    Read option: roundrobin (default)
>>    Write option: parallel (default)
>>    Size: 4095000 blocks (2.0 GB)
>>
>> d10: Submirror of d1
>>    State: Okay
>>    Size: 4095000 blocks (2.0 GB)
>>    Stripe 0:
>>        Device     Start Block  Dbase        State Reloc Hot Spare
>>        c1t0d0s1          0     No            Okay   Yes
>>
>>
>> d0: Mirror
>>    Submirror 1: d100
>>      State: Okay
>>    Pass: 1
>>    Read option: roundrobin (default)
>>    Write option: parallel (default)
>>    Size: 6144600 blocks (2.9 GB)
>>
>> d100: Submirror of d0
>>    State: Okay
>>    Size: 6144600 blocks (2.9 GB)
>>    Stripe 0:
>>        Device     Start Block  Dbase        State Reloc Hot Spare
>>        c1t0d0s0          0     No            Okay   Yes
>>
>>
>> d71: Concat/Stripe
>>    Size: 12902400 blocks (6.2 GB)
>>    Stripe 0:
>>        Device     Start Block  Dbase   Reloc
>>        c1t1d0s7          0     No      Yes
>>
>> d61: Concat/Stripe
>>    Size: 12289200 blocks (5.9 GB)
>>    Stripe 0:
>>        Device     Start Block  Dbase   Reloc
>>        c1t1d0s6          0     No      Yes
>>
>> d11: Concat/Stripe
>>    Size: 4095000 blocks (2.0 GB)
>>    Stripe 0:
>>        Device     Start Block  Dbase   Reloc
>>        c1t1d0s1          0     No      Yes
>>
>> d101: Concat/Stripe
>>    Size: 6144600 blocks (2.9 GB)
>>    Stripe 0:
>>        Device     Start Block  Dbase   Reloc
>>        c1t1d0s0          0     No      Yes
>>
>> Device Relocation Information:
>> Device   Reloc  Device ID
>> c1t0d0   Yes    id1,sd@SFUJITSU_MAG3182LC___________50002837
>> c1t1d0   Yes    id1,sd@SFUJITSU_MAG3182LC___________50013362
>>
>> d101, d11, d61, d71 are the metadetached submirrors
>>>
>>> # metastat -p
>> # metastat -p
>> d7 -m d70 1
>> d70 1 1 c1t0d0s7
>> d6 -m d60 1
>> d60 1 1 c1t0d0s6
>> d1 -m d10 1
>> d10 1 1 c1t0d0s1
>> d0 -m d100 1
>> d100 1 1 c1t0d0s0
>> d71 1 1 c1t1d0s7
>> d61 1 1 c1t1d0s6
>> d11 1 1 c1t1d0s1
>> d101 1 1 c1t1d0s0
>>>
>>> # metadb -i
>> # metadb -i
>>        flags           first blk       block count
>>     a m  p  luo        16              8192            /dev/dsk/c1t1d0s3
>>     a    p  luo        8208            8192            /dev/dsk/c1t1d0s3
>>     a    p  luo        16400           8192            /dev/dsk/c1t1d0s3
>>     a        u         16              8192            /dev/dsk/c1t0d0s3
>>     a        u         8208            8192            /dev/dsk/c1t0d0s3
>>     a        u         16400           8192            /dev/dsk/c1t0d0s3
>>>
>>> # cat /etc/vfstab
>> #device         device          mount           FS      fsck    mount   mount
>> #to mount       to fsck         point           type    pass    at boot options
>> #
>> /proc           -               /proc           proc    -       no      -
>> /dev/md/dsk/d1  -               -               swap    -       no      -
>> /dev/md/dsk/d0  /dev/md/rdsk/d0 /               ufs     1       no      -
>> /dev/md/dsk/d6  /dev/md/rdsk/d6 /usr            ufs     1       no      -
>> /dev/md/dsk/d7  /dev/md/rdsk/d7 /win            ufs     2       yes     -
>> /devices        -               /devices        devfs   -       no      -
>> ctfs            -       /system/contract        ctfs    -       no      -
>> objfs           -       /system/object          objfs   -       no      -
>> swap            -               /tmp            tmpfs   -       yes     -
>>
>> BTW, /win is for a SunPCi II card.
>>
>> Thanks in advance
>>
>> Michael
>>>
>>>
>>>
>>> Michael Karl wrote:
>>>> Hello Andrew,
>>>>
>>>> Sandwich Maker wrote:
>>>>> " From: Michael Karl <mk at lexcom-net.de>
>>>>> " " " This Sun is more then 8 years old (4x 360Mhz UltraII, 2GB 
>>>>> RAM). The " Solaris 10 has no actual recommended patch-cluster.
>>>>> " What is the best und easiest way to install the patch-cluster on 
>>>>> the " first disk and have the option of the now working Solaris 10 
>>>>> on the " second metadetached mirror-disk?
>>>>>
>>>>> probably your best approach is to un-mirror the disks, so that you 
>>>>> can
>>>>> boot from either independently.  re-mirror when you decide which one
>>>>> you want to go with.
>>>>>   
>>>> That was my first idea ... but I'm doing something wrong, because 
>>>> the metadetached ROOT ends up damaged. Is something different in 
>>>> Sol10 compared to Sol8/9?
>>>>
>>>> Sorry about my simple question ... I had done this so many times 
>>>> years ago ... but I had an accident 2.5 years ago ... after 4 
>>>> operations I'm not really fit anymore.
>>>>
>>>> Thank you for your help
>>>>
>>>> Michael
>>>>> ________________________________________________________________________ 
>>>>>
>>>>> Andrew Hay                                  the genius nature
>>>>> internet rambler                            is to see what all have seen
>>>>> adh at an.bradford.ma.us                       and think what none thought
> _______________________________________________
> SunHELP maillist  -  SunHELP at sunhelp.org
> http://www.sunhelp.org/mailman/listinfo/sunhelp



More information about the SunHELP mailing list