Here's a tip from Bernd
Hello! First of all, congratulations on your excellent article on Solaris on Strato.
A few additions:
The simplest way to get the box booting from the first disk, or rather to make
the system bootable without the Linux GRUB, is this: use the Live Upgrade tooling.
In the first step, definitely boot into the miniroot as you already described.
Clean up the device tree, run bootadm update-archive...
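To make that concrete, this is roughly what I mean from the miniroot (a sketch; it assumes the new root is mounted at /a, adjust the path if yours differs):

# devfsadm -C -r /a
# bootadm update-archive -R /a

devfsadm -C prunes stale device links under the mounted root, and bootadm rebuilds the boot archive there.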
Boot as described in your article.
Then the next step, after the successful boot:
Smash the Linux partitions with fdisk...
Then clean up and set up the disk with format...
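If you'd rather skip the interactive fdisk menu, the non-interactive form should get you the same result (a sketch; c2d0 is assumed to be the new disk, check yours first):

# fdisk -B /dev/rdsk/c2d0p0

-B writes a default label with a single Solaris2 partition spanning the whole disk; afterwards you lay out the slices with format as usual.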
Create a zpool, see also the attached sequence. ----> zpool create rpool2 c2d0s0
Then specify this zpool as the Live Upgrade target!
lucreate -c initialboot -n new-zpoolboot2 -p rpool2
Once Live Upgrade is through with its copying excesses...
luactivate new-zpoolboot2
Verify once more in fdisk that the new disk is flagged as active.
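You can also dump the partition table to stdout instead of clicking through the menus (a sketch, same disk assumed):

# fdisk -W - /dev/rdsk/c2d0p0

The active partition shows up with 128 in the Act column.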
And if you're at all unsure, hammer the whole thing onto the device once more
with installgrub; it shouldn't be necessary, though, since Live Upgrade takes
care of that.
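For reference, the manual variant would be (a sketch; slice 0 of the new disk assumed):

# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c2d0s0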
Once the box has booted...
Be sure to wait a while: Live Upgrade is still fiddling around in the background.
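If you'd rather check than guess, look for leftover Live Upgrade processes before touching anything (a sketch):

# ps -ef | grep -i lu
# lustatus

Only proceed once nothing lu-related is running any more and lustatus looks sane.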
After that you can go ahead and destroy the old pool...
zpool destroy rootpoolvm
zpool attach rpool2 c2d0s0 c1d0s0
Then the mirroring works out, too. ;)
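The attach kicks off a resilver of the new half; watch it complete before you rely on the mirror:

# zpool status rpool2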
Oh yes, and applying the Recommended Patch Clusters can likewise be knocked out
with Live Upgrade.
http://unixhaus.de/index.php?/archives/2170-Live-Upgrade-luupgrade.html
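The rough pattern is the same as above: clone the running BE, patch the clone, switch over (a sketch; the BE name patched and the patch directory are made up, see the link for details):

# lucreate -n patched
# luupgrade -t -n patched -s /var/tmp/10_Recommended <patch_ids>
# luactivate patched
# init 6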
Best regards, and have a nice weekend. ;)
# format -e
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c1d0 <DEFAULT cyl 2085 alt 2 hd 255 sec 63>
/pci@0,0/pci-ide@12/ide@0/cmdk@0,0
1. c2d0 <DEFAULT cyl 246 alt 2 hd 255 sec 126>
/pci@0,0/pci-ide@12/ide@1/cmdk@0,0
Specify disk (enter its number): 2
`2' is out of range.
Specify disk (enter its number): 1
selecting c2d0
Controller working list found
[disk formatted, defect list found]
FORMAT MENU:
disk - select a disk
type - select (define) a disk type
partition - select (define) a partition table
current - describe the current disk
format - format and analyze the disk
fdisk - run the fdisk program
repair - repair a defective sector
show - translate a disk address
label - write label to the disk
analyze - surface analysis
defect - defect list management
backup - search for backup labels
verify - read and display labels
save - save new disk/partition definitions
volname - set 8-character volume name
!<cmd> - execute <cmd>, then return
quit
format> fdisk
             Total disk size is 60800 cylinders
             Cylinder size is 32130 (512 byte) blocks

                                                 Cylinders
      Partition   Status    Type          Start   End   Length    %
      =========   ======    ============  =====   ===   ======   ===
          1       Active    Solaris2          1  60799    60799    100
SELECT ONE OF THE FOLLOWING:
1. Create a partition
2. Specify the active partition
3. Delete a partition
4. Change between Solaris and Solaris2 Partition IDs
5. Exit (update disk configuration and exit)
6. Cancel (exit without updating disk configuration)
Enter Selection: 5
---------------------------------------------------------------------------------------
FORMAT MENU:
[...]
format> q
# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c1d0 <DEFAULT cyl 2085 alt 2 hd 255 sec 63>
/pci@0,0/pci-ide@12/ide@0/cmdk@0,0
1. c2d0 <DEFAULT cyl 60797 alt 2 hd 255 sec 126>
/pci@0,0/pci-ide@12/ide@1/cmdk@0,0
Specify disk (enter its number): 1
selecting c2d0
Controller working list found
[disk formatted, defect list found]
FORMAT MENU:
[...]
format> par
PARTITION MENU:
0 - change `0' partition
1 - change `1' partition
2 - change `2' partition
3 - change `3' partition
4 - change `4' partition
5 - change `5' partition
6 - change `6' partition
7 - change `7' partition
select - select a predefined table
modify - modify a predefined partition table
name - name the current table
print - display the current table
label - write partition map and label to the disk
!<cmd> - execute <cmd>, then return
quit
partition> pri
Current partition table (original):
Total disk cylinders available: 60797 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders         Size            Blocks
  0 unassigned    wm       0                0         (0/0/0)            0
  1 unassigned    wm       0                0         (0/0/0)            0
  2     backup    wu       0 - 60796      931.46GB    (60797/0/0) 1953407610
  3 unassigned    wm       0                0         (0/0/0)            0
  4 unassigned    wm       0                0         (0/0/0)            0
  5 unassigned    wm       0                0         (0/0/0)            0
  6 unassigned    wm       0                0         (0/0/0)            0
  7 unassigned    wm       0                0         (0/0/0)            0
  8       boot    wu       0 -     0       15.69MB    (1/0/0)        32130
  9 alternates    wm       1 -     2       31.38MB    (2/0/0)        64260
partition> 0
Part      Tag    Flag     Cylinders         Size            Blocks
  0 unassigned    wm       0                0         (0/0/0)            0
Enter partition id tag[unassigned]:
Enter partition permission flags[wm]:
Enter new starting cyl[0]: 3
Enter partition size[0b, 0c, 3e, 0.00mb, 0.00gb]: $ALL
partition>
# zpool create rpool2 c2d0s0
# lucreate -c initialboot -n new-zpoolboot2 -p rpool2
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
initialboot                yes      yes    yes       no     -
new-zpoolboot2             yes      no     no        yes    -
#
# luactivate new-zpoolboot2
Activation of boot environment <new-zpoolboot2> successful.
# zpool list
NAME          SIZE  ALLOC   FREE    CAP  HEALTH    ALTROOT
rootpoolvm   15.9G  10.7G  5.16G    67%  DEGRADED  -
rpool2        928G  5.21G   923G     0%  ONLINE    -
# zpool status
  pool: rootpoolvm
 state: DEGRADED
status: One or more devices has experienced an error resulting in data
        corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
        entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        rootpoolvm    DEGRADED     0     0    54
          c1d0s0      DEGRADED     0     0   108  too many errors

errors: 54 data errors, use '-v' for a list

  pool: rpool2
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        rpool2        ONLINE       0     0     0
          c2d0s0      ONLINE       0     0     0
# init 6
propagating updated GRUB menu
Saving existing file </boot/grub/menu.lst> in top level dataset for BE
<new-zpoolboot2> as <mount-point>//boot/grub/menu.lst.prev.
File </boot/grub/menu.lst> propagation successful
File </etc/lu/GRUB_backup_menu> propagation successful
File </etc/lu/menu.cksum> propagation successful
File </sbin/bootadm> propagation successful
# svc.startd: The system is coming down. Please wait.
svc.startd: 76 system services are now being stopped.
Aug 18 10:50:14 h1906375 syslogd: going down on signal 15
propagating updated GRUB menu
Saving existing file </boot/grub/menu.lst> in top level dataset for BE
<new-zpoolboot2> as <mount-point>//boot/grub/menu.lst.prev.
File </boot/grub/menu.lst> propagation successful
File </etc/lu/GRUB_backup_menu> propagation successful
File </etc/lu/menu.cksum> propagation successful
File </sbin/bootadm> propagation successful
svc.startd: The system is down.
syncing file systems... done
rebooting.
Even after the reboot, wait some more... the Live Upgrade scripts are still
fiddling around; don't get impatient. Drink a coffee and smoke two cigarettes.
Do NOT get impatient: after init 6, Live Upgrade keeps puttering away in the
background... so wait!!!
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
rootpolvm                  yes      yes    yes       no     -
new-zfsBE                  yes      no     no        yes    -
# bash
bash-3.00# luactivate new-zfsBE
Generating boot-sign, partition and slice information for PBE <rootpolvm>
Saving existing file </etc/bootsign> in top level dataset for BE
<rootpolvm> as <mount-point>//etc/bootsign.prev.
A Live Upgrade Sync operation will be performed on startup of boot
environment <new-zfsBE>.
Setting failsafe console to <ttya>.
Generating boot-sign for ABE <new-zfsBE>
NOTE: File </etc/bootsign> not found in top level dataset for BE <new-zfsBE>
Generating partition and slice information for ABE <new-zfsBE>
Copied boot menu from top level dataset.
Generating multiboot menu entries for PBE.
Generating multiboot menu entries for ABE.
Disabling splashimage
No more bootadm entries. Deletion of bootadm entries is complete.
GRUB menu default setting is unaffected
Done eliding bootadm entries.
**********************************************************************
The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.
**********************************************************************
In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:
1. Boot from the Solaris failsafe or boot in Single User mode from Solaris
Install CD or Network.
2. Mount the Parent boot environment root slice to some directory (like
/mnt). You can use the following commands in sequence to mount the BE:
zpool import rootpoolvm
zfs inherit -r mountpoint rootpoolvm/ROOT/s10x_u7wos_08
zfs set mountpoint=<mountpointName> rootpoolvm/ROOT/s10x_u7wos_08
zfs mount rootpoolvm/ROOT/s10x_u7wos_08
3. Run <luactivate> utility with out any arguments from the Parent boot
environment root slice, as shown below:
<mountpointName>/sbin/luactivate
4. luactivate, activates the previous working boot environment and
indicates the result.
5. Exit Single User mode and reboot the machine.
**********************************************************************
Modifying boot archive service
Propagating findroot GRUB for menu conversion.
File </etc/lu/installgrub.findroot> propagation successful
File </etc/lu/stage1.findroot> propagation successful
File </etc/lu/stage2.findroot> propagation successful
File </etc/lu/GRUB_capability> propagation successful
Deleting stale GRUB loader from all BEs.
File </etc/lu/installgrub.latest> deletion successful
File </etc/lu/stage1.latest> deletion successful
File </etc/lu/stage2.latest> deletion successful
Activation of boot environment <new-zfsBE> successful.
bash-3.00#
bash-3.00# cat /rootpoolvm/boot/grub/menu.lst
default 0
timeout 60
serial --unit=0 --speed=57600
terminal serial
# splashimage /boot/grub/splash.xpm.gz
[...]
#---------- ADDED BY BOOTADM - DO NOT EDIT ----------
title Oracle Solaris 10 5/09 s10x_u7wos_08 X86
findroot (pool_rootpoolvm,0,a)
kernel$ /platform/i86pc/multiboot -B $ZFS-BOOTFS,console=ttya,ttya-mode="57600,8,n,1,-"
module /platform/i86pc/boot_archive
#---------------------END BOOTADM--------------------
#---------- ADDED BY BOOTADM - DO NOT EDIT ----------
title Solaris failsafe
findroot (pool_rootpoolvm,0,a)
kernel /boot/multiboot -s -B console=ttya,ttya-mode="57600,8,n,1,-"
module /boot/amd64/x86.miniroot-safe
#---------------------END BOOTADM--------------------
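And if you ever wonder which menu.lst GRUB is actually using after all that propagation, bootadm can tell you (a sketch):

# bootadm list-menu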