Version 1.3
Preface
Purpose
This manual explains how to mount a Fibre Channel card on a SPARC Enterprise server and build a SAN Boot
environment, in which the OS is booted from an ETERNUS storage system.
Disk array devices other than ETERNUS are not covered in this guide. See Section 2.1.1, "Required
Hardware."
Intended Audience
This manual is intended for builders and administrators of SAN Boot environments.
Organization
Chapter 1 Overview
Chapter 3 Precautions
Trademark Notice
Sun, Sun Microsystems, the Sun Logo, Solaris and all Solaris based marks and logos are trademarks or
registered trademarks of Sun Microsystems, Inc. in the U.S. and other countries, and are used under
license.
Fujitsu Limited
April 2009
Notice
This manual shall not be copied without the permission of the publisher.
Chapter 1 Overview............................................................................. 1
1.1 Configuration Patterns........................................................................................ 5
1.1.1 Basic configuration................................................................................................................. 5
1.1.2 Disk array device mirroring configuration based on PRIMECLUSTER GDS ....................... 7
1.1.3 Cluster configuration based on PRIMECLUSTER................................................................. 8
1.1.4 ETERNUS Configuration required to use the Advanced Copy function ................................ 9
5.2 Backing Up/Restoring after Booting the OS from an Internal Disk .................82
5.2.1 Backup procedure .................................................................................................................82
5.2.2 Restore procedure .................................................................................................................84
D.2 If the OS Has Been Installed As Per Section 4.1.2, "Creating a boot
disk by copying an existing boot disk residing on an internal disk"...............108
Appendix E Fibre Channel Driver/Boot Code Auto-Target Binding Functions ............ 110
E.1 Fibre Channel Driver Auto-Target Binding Function .................................... 110
E.2 Fibre Channel Boot Code Auto-Target Binding Function ............................. 112
The term SAN Boot refers to having an operating system (OS) or application stored in external SAN
storage, not on an internal disk in a server, and starting (that is, booting) the OS or application from there.
This document describes the workflow for building a SAN Boot environment, in which a Fibre Channel
card is mounted on a server to boot the OS from an ETERNUS storage system (RAID).
Having an OS boot disk on an external disk array device offers the following advantages:
1. Enhanced availability
Use of a high-reliability disk array (RAID) device
Enhanced reliability results from managing the boot disk on a disk array (RAID) device.
Use of the disk copy feature of a disk array device drastically cuts the period during which
business is stopped for backing up and restoring system volumes. The CPU load incurred while
backing up and restoring the system volumes is also reduced.
For more details, see Section 1.1.4, "ETERNUS Configuration required to use the Advanced
Copy function."
Note
ETERNUS SF AdvancedCopy Manager (ACM) or PRIMECLUSTER GDS Snapshot is
required to use the disk copy feature (Advanced Copy feature) of ETERNUS (disk array device).
Boot disks that were previously spread among multiple servers can be kept under consolidated
management, as they are contained in a single disk array device.
Multiple development environments maintained on a single disk array device can be switched,
as required. This eliminates the need to keep a server for each development environment,
thereby allowing the number of servers and operational workload to be reduced.
3. Better maintainability
Handling of disk failures made simpler
If a disk (system volume) fails, the system administrator notifies a service engineer of the
failure and has the engineer replace the disk; the system then recovers automatically. The
system administrator's workload is thus lightened.
Use of the disk copy feature of a disk array device shortens the period during which business is
stopped to back up system volumes before patches are applied to them. With the OS configured
to boot from a backup volume (*1), if a problem occurs after applying the patches, the system
can be rolled back to its state prior to the patch by rebooting the server and switching the
boot volume. For more details, see Section 1.1.4, "ETERNUS Configuration required to use the
Advanced Copy function."
(*1) PRIMECLUSTER GDS Snapshot provides this functionality with a simple command
operation.
1.1 Configuration Patterns
The OS is booted from an external disk array (RAID) device using Fibre Channel cards in one of the
Fibre Channel connection configuration patterns shown below. Points to watch for each configuration
pattern are also given.
Implement a multipath configuration pattern based on the ETERNUS multipath disk driver,
in which at least two routes of Fibre Channel connections are maintained between the
server and each disk array device.
If swapping to disk occurs, application disk access performance could be degraded.
Actions that help prevent swapping include adding memory to the server or reducing the
amount of memory used by applications.
If a server that is not equipped with an internal disk is used, an install server is required to
install the OS and recover the boot disk.
2. Using a disk array device from multiple servers
Fabric connection
Implement a multipath configuration pattern based on the ETERNUS multipath disk driver,
in which at least two routes of Fibre Channel connections are maintained between each
server and the disk array device.
If swapping to disk occurs, application disk access performance could be degraded.
Actions that help prevent swapping include adding memory to each server or reducing the
amount of memory used by applications.
When a server panics, the other servers having their boot disks placed in the same RAID
group as the server might suffer degraded boot disk access performance for several tens of
seconds. See "2.1.2 Boot disk configuration."
If a server that is not equipped with an internal disk is used, a dedicated install server for
installing the OS and recovering the boot disk is required.
With the 1Gbps/2Gbps Fibre Channel card (PW008FC3), one card was required per path.
With the single-channel 4Gbps Fibre Channel card (SE0X7F11x) and the dual-channel
4Gbps Fibre Channel card (SE0X7F12x), in contrast, two Fibre Channel cards can be
combined to build a disk array device mirroring configuration based on PRIMECLUSTER
GDS, because these cards can recognize boot disks on multiple disk array devices.
Implement a multipath configuration pattern based on the ETERNUS multipath disk driver,
in which at least two routes of Fibre Channel connections are maintained between each
server and a disk array device.
If swapping to disk occurs, application disk access performance could be degraded.
Actions that help prevent swapping include adding memory to each server or reducing the
amount of memory used by applications.
When a server panics, the other servers having their boot disk placed in the same RAID
group as the server may suffer degraded boot disk access performance for several tens of
seconds.
If a server that is not equipped with an internal disk is used, a dedicated install server for
installing the OS and recovering the boot disk is required separately.
2. Cluster configuration based on multiple disk array devices
If backup/restore is executed using the ETERNUS Advanced Copy function (OPC/EC) in a SAN Boot
environment, business can continue while disk copying is in progress, drastically cutting the duration
of any business stop.
Backup/restore using the Advanced Copy function is operable in two ways: one using ETERNUS SF
AdvancedCopy Manager (ACM) and one using PRIMECLUSTER GDS Snapshot (GDS Snapshot).
The table below contains a summary description of the features of ACM and those of GDS Snapshot.
Note
Where system volumes are managed using PRIMECLUSTER GDS, their backup/restore is operable with
both ACM and GDS Snapshot, but use of GDS Snapshot is recommended for system volumes built on a
soft-mirroring configuration.
ACM and GDS Snapshot features

Operational server
  ACM: A dedicated server for executing the backup/restore operations is required in addition to the server to be backed up and restored.
  GDS Snapshot: No separate server is required, because the backup/restore operations are executed on the server to be backed up and restored.

Backup operation
  ACM: Shut down the server to be backed up once, and launch OPC from its backup server. The server can be rebooted to resume suspended business without waiting for physical copying to complete.
  GDS Snapshot: Reboot the server to be backed up in single-user mode, and launch OPC. When physical copying completes, reboot the server in multi-user mode to resume suspended business.

Restore operation (mirroring by PRIMECLUSTER GDS not implemented)
  ACM: Shut down the server to be restored once, and launch OPC. The server can then be rebooted to resume suspended business without waiting for physical copying to complete.
  GDS Snapshot: Suspended business can be resumed by rebooting the server to be restored and switching the boot volume.

Restore operation (mirroring by PRIMECLUSTER GDS implemented)
  ACM: The mirrored disk needs to be disconnected and then reconnected before and after the OPC restore operation.
  GDS Snapshot: When OPC physical copying completes, switch back to the original boot volume to resume suspended business.

Multi-server backup efficiency
  ACM: Backup volumes on multiple servers can be placed under consolidated management from a single backup server.
  GDS Snapshot: The backup/restore operations are executed on each individual server.

Function for booting from a backup volume
  ACM: The mount point needs to be changed by editing the vfstab file.
  GDS Snapshot: The function can be easily configured using commands.
For more feature details, refer to the ETERNUS SF AdvancedCopy Manager and PRIMECLUSTER
GDS manuals.
Instructions on how to verify the progress of OPC physical copying are included in the ETERNUS SF
AdvancedCopy Manager and PRIMECLUSTER GDS manuals.
Chapter 2 Hardware/Software
Configuration
The hardware configuration and the software configuration described in this chapter are prerequisite to
booting the OS from an external disk array device using Fibre Channel cards.
M8000/M9000
If a server that is not equipped with an internal disk is used, an install server is required separately to
install and recover the OS.
2.1.2 Boot disk configuration
In SAN Boot, the system disk is placed on the ETERNUS disk array. In addition to system disks, a variety
of other disk volumes are placed on the disk array device. The way these disk volumes are placed on the disk
array device could affect the performance of access to system disks belonging to other servers and to user
data disks such as databases. To keep disk access performance unaffected, take the following precaution
when implementing the disk configuration:
Do not place in the same RAID group a system disk area and areas (system and data disks)
that are accessible from other servers.
If any other kind of disk configuration is used, the following problems may occur:
If multiple system disks are placed in the same RAID group, swapping to disk could
degrade disk access performance for volumes residing in the same RAID group.
Placing a system disk and a shared data area in the same RAID group would degrade
access performance for the shared data area for several tens of seconds while a memory
dump is written after a server panic.
The required Solaris release depends on the server model:
- SPARC Enterprise T5120/T5140/T5220: Solaris 10 8/07 or higher
- SPARC Enterprise T5240/T5440: Solaris 10 5/08 or higher
- SPARC Enterprise M3000: Solaris 10 10/08 or higher
Software                          Version          Remarks
ETERNUS SF AdvancedCopy Manager   13.0 or higher   Both agents and the manager are Solaris 10-ready.
Chapter 3 Precautions
1. The single-channel 4 Gbps Fibre Channel card (SE0X7F11x) and dual-channel 4 Gbps Fibre
Channel card (SE0X7F12x) support a boot code that allows the OS to be booted from a disk
array device connected to the Fibre Channel card. The Fibre Channel cards come with this
boot code disabled by default. The boot code needs to be enabled before the OS can be booted
using a Fibre Channel card.
Enabling or disabling the boot function on either port of the dual-channel 4Gbps Fibre Channel
card (SE0X7F12x) automatically applies the same setting to the other port. The setting cannot
be changed on one port alone.
For information about enabling and disabling the boot function on Fibre Channel cards, see
Appendix A.2.1, "fjpfca-set-bootfunction."
2. Be sure to install the OS on the boot disk using one of the methods described in Chapter 4.
A boot disk that has been used on another host cannot be copied with the dd command or with
ETERNUS EC (Equivalent Copy) or OPC (One Point Copy) and then used.
Boot disks cannot be created by any procedure other than those described in Chapter 4.
3. Keep a record of the boot configuration information set on the Fibre Channel card.
The boot configuration information is required when the Fibre Channel card is replaced,
because it needs to be reproduced on the new Fibre Channel card.
For information on the kinds of information to be recorded, see Appendix C, "Recording SAN
Boot Setting Information."
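As an illustrative sketch of keeping such a record (not a procedure from this manual; the record directory and file naming below are hypothetical, while the fc_info command is provided by the FJSVpfca package), the card information could be captured to a dated file:

```shell
#!/bin/sh
# Hypothetical sketch: save Fibre Channel card information to a dated
# record file so the boot settings can be re-entered after a card
# replacement. RECORD_DIR is an assumed location, not one named here.
RECORD_DIR=${RECORD_DIR:-/var/tmp/sanboot-records}
mkdir -p "$RECORD_DIR"
OUT="$RECORD_DIR/fc_info.$(date +%Y%m%d)"
# On systems without the FJSVpfca package, note the absence in the record.
/usr/sbin/FJSVpfca/fc_info -a > "$OUT" 2>&1 || echo "fc_info not available" >> "$OUT"
ls "$OUT"
```

Keeping one dated file per capture makes it easy to compare the settings before and after a replacement.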
4. Fujitsu recommends not placing the boot disks for different hosts in the same RAID group.
For more details, see Section 2.1.2, "Boot disk configuration."
5. Do not create a huge file or a large number of files in /tmp (tmpfs).
When creating files in /tmp (tmpfs), take care to ensure that the size of space used by /tmp
(tmpfs) does not exceed the installed memory size.
If the size of space used by /tmp (tmpfs) should exceed the installed memory size as a result of
having created a huge file or a large number of files in /tmp (tmpfs), the system could slow
down due to a lack of sufficient memory available to it.
This precaution also applies when an internal disk is used as a boot disk.
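As an illustration of this precaution (a sketch only; on Solaris the inputs would come from commands such as df -k /tmp and prtconf, and the figures below are hypothetical), the check amounts to comparing /tmp (tmpfs) usage against the installed memory size:

```shell
#!/bin/sh
# Illustrative sketch: warn when /tmp (tmpfs) usage reaches the installed
# memory size. The sample values are hypothetical; feed in real figures
# from df -k /tmp and prtconf on a live system.
check_tmpfs() {
    tmp_used_kb=$1      # space currently used by /tmp (tmpfs), in KB
    mem_total_kb=$2     # installed memory size, in KB
    if [ "$tmp_used_kb" -ge "$mem_total_kb" ]; then
        echo "WARNING: /tmp usage (${tmp_used_kb} KB) exceeds installed memory (${mem_total_kb} KB)"
    else
        echo "OK: /tmp usage (${tmp_used_kb} KB) is within installed memory (${mem_total_kb} KB)"
    fi
}

check_tmpfs 1048576 8388608   # 1 GB used, 8 GB installed -> OK
check_tmpfs 9437184 8388608   # 9 GB used, 8 GB installed -> WARNING
```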
7. The access path settings based on WWN (World Wide Port Name) described below are
recommended for the ETERNUS disk array device and the ETERNUS SN200 Fibre Channel
switch.
These settings remove the need to reconfigure these devices when introducing the resource
management software Systemwalker Resource Coordinator, facilitating and speeding up the
transition.
Configure the host table or enable the host affinity feature on the ETERNUS FC-CA port, and
register the WWN of the Fibre Channel card as a host World Wide Name. Refer to the relevant
Server Connection Guide for the ETERNUS disk array device settings.
Using the WWN of the Fibre Channel card and that of each disk array device, configure a one-
to-one WWN zoning plan, which establishes zoning using the WWN of the host HBA port and
that of the FC-CA port.
8. Disks bearing the EFI (Extensible Firmware Interface) disk label cannot be used as a boot disk.
The EFI disk label supports disks larger than 1 Tbyte on a system running the 64-bit Solaris
kernel. Note, however, that the OS cannot be booted from a disk bearing the EFI disk label.
9. The warning message shown below is displayed when a multipath is created. This message
may be ignored.
This message reports that the disk array device has received a SCSI RESET that is issued at
multipath build time and does not relate to any disk array device or server operation.
This message is displayed only when a new multipath is built or disk array devices are
added. If message monitoring is implemented, disable the monitoring process temporarily
or simply ignore the message when it is displayed.
10. If a system disk is mirrored using PRIMECLUSTER GDS, the following message may be
displayed at boot time, but simply ignore it, because there is no actual problem with the system:
This message is displayed when a forceload setting is defined in duplicate in the /etc/system file.
To suppress the message, delete the later occurrences of the duplicated forceload entry.
forceload: drv/mplb
~
forceload: drv/mplb Delete this line.
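One way to locate and remove such duplicate forceload entries (an illustrative sketch, not a procedure from this manual; the sample file below is hypothetical, and on a real system you would point SYSFILE at a copy of /etc/system and review the result before installing it):

```shell
#!/bin/sh
# Illustrative sketch: report duplicated "forceload:" lines in an
# /etc/system-style file and write a copy that keeps only the first
# occurrence of each forceload entry.
SYSFILE=${SYSFILE:-/tmp/system.sample}
cat > "$SYSFILE" <<'EOF'
forceload: drv/mplb
set maxusers=64
forceload: drv/mplb
EOF

# List forceload lines that appear more than once.
grep '^forceload:' "$SYSFILE" | sort | uniq -d

# Keep non-forceload lines as-is; keep only the first copy of each forceload line.
awk '!/^forceload:/ || !seen[$0]++' "$SYSFILE" > "$SYSFILE.dedup"
cat "$SYSFILE.dedup"
```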
11. In configuring the Fibre Channel driver, the link speed (transmission line speed) setting can be
left at automatic selection to facilitate connectivity, but the expected link speed (especially
4Gbps) may not be attained, depending on the connection timing. In that case, enter the link
speed setting in /kernel/drv/fjpfca.conf.
port=
"fjpfca0:nport:sp4";
For instructions on how to set a link speed, refer to the "FUJITSU PCI Fibre Channel Guide."
12. Be sure to configure LUN 0 (Host Logical Unit Number 0) on the ETERNUS. LUN 0 is used
by the Fibre Channel boot code to recognize the ETERNUS.
13. When installing Solaris 10 10/08 or higher from an installation server, the installation server
must run Solaris 10 10/08 or higher, or a Solaris 10 environment with patch 137137-09 or later
applied; otherwise, the driver packages cannot be installed.
Sun Microsystems, Inc. does not recommend building a ZFS file system environment on the
disk array device. Refer to the "Solaris ZFS Administration Guide" for details.
14. When backing up and restoring data in a ZFS environment, use a Solaris 10 10/08 environment
or a Solaris 10 environment with patch 137137-09 or later applied; otherwise, the system
volume might fail to import.
15. In a SAN Boot environment using the ZFS file system, the system disk cannot be mirrored
with PRIMECLUSTER GDS.
16. When restoring the boot disk, use the boot block on the restore destination device to create
the boot block.
17. The OS boot process might hang due to a problem with the Fibre Channel card (SE0X7F11x,
SE0X7F12x). In this case, turn the power off, turn it back on, and boot again.
Chapter 4 Building an OS Boot
Environment
Before performing this procedure, set up a disk array device to make available a LUN in which to create a
boot disk.
If a server that is not equipped with an internal disk is used, only the method described in Section 4.1.1,
"Creating a boot disk using a network install server," can be used. Because the FUJITSU PCI Fibre Channel
driver is not included in the Solaris OS, the OS boot environment cannot be built from the Solaris OS CD/DVD.
Boot environments cannot be built in any procedure other than that described in this guide.
In Section 4.1.1, "Creating a boot disk using a network install server," the method of identifying the disk
array device to become a boot disk can be automated. This method facilitates the job of configuring the
Fibre Channel driver and the Fibre Channel boot code. For more details, see Section 4.1.1.2,
"Configuring a network install server."
If a disk array device is identified through automatic setting, set zoning with the FC Switch to disable a
Fibre Channel card from connecting to multiple disk array devices. In an environment in which a Fibre
Channel card is allowed to connect to multiple disk array devices, implement disk array device
identification manually (manual setting).
For information on building a cluster environment, see Section 4.3.2, "Notes on using
PRIMECLUSTER."
Install the OS as instructed in Section 4.1, "Creating a Boot Disk on a Disk Array Device," and create a
boot disk on a disk array device.
Then, boot the OS in single-path mode as instructed in Section 4.1.2.6, "Booting from a disk array
device."
Lastly, define a multipath configuration as per Section 4.2, "Making the Path to a Boot Disk Redundant."
4.1 Creating a Boot Disk on a Disk Array Device
There are two ways to create a boot disk on a disk array device, as follows:
1. Creating a boot disk using a network install server
2. Creating a boot disk by copying an existing boot disk residing on an internal disk (only if the
server is equipped with an internal disk)
Work with the install machine from the install machine console. Work with the install server from a
terminal, which is marked as "(INSTALL SERVER)" in the examples appearing in this guide.
4.1.1.1 Creating a network install server
Configure an install server to execute a network install. For more details on the work of creating an
install server, refer to "Solaris x x/x Release and Installation Collection" at docs.sun.com.
If there are multiple hosts that use a disk array device as a boot device each, an OS image needs to be
created for each host. The hosts may share a single install image, however, in these situations:
1. Hosts use AL direct Fibre Channel connection, sharing the same values of target ID and max
throttle.
2. Hosts use automatically configured disk array devices on a FC switch connection, sharing the
same values of target ID and max throttle.
Note) Multiple hosts of the same architecture type can share the same OS install image, but an
OS install image cannot be used for a different architecture type. SPARC Enterprise
M3000/M4000/M5000/M8000/M9000 (sun4u) hosts can share the same OS install image, and
SPARC Enterprise T1000/T2000/T5120/T5140/T5220/T5240/T5440 (sun4v) hosts can share the
same OS install image.
Include the host name (hostname) of the install machine in the created directory name to allow
management by host.
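For example (a sketch only; the base directory and host names below are hypothetical), per-host image directories might be laid out as follows:

```shell
#!/bin/sh
# Illustrative sketch: create one OS install-image directory per host,
# embedding each install machine's host name in the directory name so
# the images can be managed per host. INSTALL_BASE and the host names
# are hypothetical.
INSTALL_BASE=${INSTALL_BASE:-/tmp/export/install}
for host in host1 host2; do
    mkdir -p "$INSTALL_BASE/Solaris10_$host"
done
ls -d "$INSTALL_BASE"/Solaris10_*
```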
4. When copying of the Solaris 10 Operating System completes, unmount the DVD-ROM.
(INSTALL SERVER)# cd / <RETURN>
(INSTALL SERVER)# eject cdrom <RETURN>
This step makes the disk array device connected to the Fibre Channel card identifiable. Mount
"FUJITSU PCI Fibre Channel 4.0" on the CD-ROM drive in the network install server and do
the following:
The process of installing the Fibre Channel driver varies with each target machine model.
Solaris 10 5/08 or older
SPARC Enterprise T1000/T2000/T5120/T5140/T5220/T5240/T5440
(INSTALL SERVER) # bin/pfcapkgadd.sh -R
/export/install/Solaris10_hostname/Tools/Boot/ -p sun4u <RETURN>
(2) Unpack the miniroot to the work directory using the root_archive(1M) command.
- Messages (including error messages) might be displayed when the root_archive command is
executed. You may ignore them.
(5) Copy the files in the /tmp/media directory to the installation image on the install server.
The target device path names for the "umount -f" and "lofiadm -d" commands can be confirmed
with the "df -k" command.
Error messages might be displayed when the root_archive command is executed. You may
ignore them.
4. Verify the correspondence between the Fibre Channel card location and driver instance
number.
Boot the installation target machine from the network in single-user mode, with the -s option
specified.
5. Execute the following to verify the correspondence between the Fibre Channel card device
path and the driver instance:
Each line of this command listing contains the device path, instance number, and instance name,
in that order.
In the example above, the driver instance of the Fibre Channel card mounted at device path
"/pci@1,700000/XXXX@0" is fjpfca0.
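As a sketch of reading such a listing (the sample entries below are hypothetical; the real data comes from the command output described above), the fjpfca lines can be extracted like this:

```shell
#!/bin/sh
# Illustrative sketch: extract the fjpfca entries from an
# /etc/path_to_inst-style listing (device path, instance number, driver
# name per line) and print "<device path> -> <driver><instance>".
# The sample data is hypothetical.
cat > /tmp/path_to_inst.sample <<'EOF'
"/pci@1,700000/XXXX@0" 0 "fjpfca"
"/pci@2,600000/XXXX@0" 1 "fjpfca"
"/pci@0,600000/scsi@1" 0 "glm"
EOF

awk '$3 == "\"fjpfca\"" {
    gsub(/"/, "", $1); gsub(/"/, "", $3)
    print $1 " -> " $3 $2
}' /tmp/path_to_inst.sample
```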
The correspondence between device paths and server slot positions is described in the relevant
server’s user's guide. If any other server is used or if the relevant user's guide is not available for
reference, check the correspondence in the following way:
6. Flash the LED on the Fibre Channel card associated with the driver instance. The flashing
LED lets you identify the Fibre Channel card location for a single-channel 4Gbps Fibre
Channel card (SE0X7F11x), or the Fibre Channel card location and the port position
associated with the driver instance for a dual-channel 4Gbps Fibre Channel card
(SE0X7F12x). The LED at fjpfca0 can be flashed as instructed below. The LINK LED will
flash for 3 minutes.
To stop the flashing of the LED, enter Ctrl-c (press the c key while holding down Ctrl key).
For information on using fc_adm, refer to "FUJITSU PCI Fibre Channel Guide."
8. Configure the definition files used for booting the OS at network install time.
When Solaris 10 10/08 or higher is installed, execute the following procedures beforehand.
Solaris 10 10/08 or higher
Unpack the miniroot to the work directory using the root_archive(1M) command.
- Messages might be displayed when the root_archive command is executed. You may
ignore them.
The files require the following settings on the network install server:
There are two ways to make a disk array device identifiable on a fabric connection
(using the FC Switch): automatic setting and manual setting. Automatic setting for
making a disk array device identifiable, when selected, offers the following benefits:
1. If multiple hosts use a disk array device as a boot device each, they can still
share and use the Solaris OS install image on the install server.
These two methods of identifying a disk array device are described below.
a. [Automatic setting]
Example: Set a fabric connection as a topology, a link speed of 4 Gbps and
automatic setting as method of disk array device identification for fjpfca0.
port=
"fjpfca0:nport:sp4";
fcp-auto-bind-function=1;
For more information about the automatic identification function, see Appendix
E, "Fibre Channel Driver/Boot Code Auto-Target Binding Functions."
If a disk array device is identified through automatic setting, set zoning with
the FC Switch to disable a Fibre Channel card from connecting to multiple disk
array devices. In an environment in which a Fibre Channel card is allowed to
connect to multiple disk array devices, allow disk array devices to be identified
through manual setting.
For information on how to set zoning with the FC Switch, refer to the relevant
FC switch’s manual.
b. [Manual setting]
Example: Set a fabric connection as a FC switch topology, a link speed of 4
Gbps and binding of a disk array device with target ID 16 for fjpfca0.
port=
"fjpfca0:nport:sp4";
fcp-bind-target=
"fjpfca0t16:0x210000c0004101d9";
Configure the target driver setting file (sd.conf) to make the logical unit (LU) of
the disk array device on which the boot disk is created identifiable. Define only the
boot disk on the disk array device. If an identifiable logical unit of the disk array
device is already defined, this entry may be skipped.
Example: Identify target ID 16 and logical unit 0.
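A sketch of the corresponding sd.conf entry (an assumption based on standard sd.conf syntax; verify the exact form against your Solaris release):

```
name="sd" class="scsi" target=16 lun=0;
```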
(2) Copy the files in the /tmp/media directory to the installation image on the install server.
The target device path names for the "umount -f" and "lofiadm -d" commands can be
confirmed with the "df -k" command.
2. Create a disk label in the lun that is used as a boot disk by carrying out the format (1M)
command, and then check the size of the lun.
# format <RETURN>
AVAILABLE DISK SELECTIONS:
0. c7t16d0 <FUJITSU-ETERNUS-4000 cyl 1038 alt 2 hd 64 sec 256>
/pci@1,700000/fibre-channel@0/sd@10,0
Specify disk (enter its number): 0<RETURN>
selecting c7t16d0
[disk formatted]
Disk not labeled. Label it now? y <RETURN>
FORMAT MENU:
disk - select a disk
type - select (define) a disk type
partition - select (define) a partition table
current - describe the current disk
format - format and analyze the disk
repair - repair a defective sector
label - write label to the disk
analyze - surface analysis
defect - defect list management
backup - search for backup labels
verify - read and display labels
save - save new disk/partition definitions
inquiry - show vendor, product and revision
volname - set 8-character volume name
!<cmd> - execute <cmd>, then return
quit
format> partition <RETURN>
PARTITION MENU:
0 - change '0' partition
1 - change '1' partition
2 - change '2' partition
3 - change '3' partition
4 - change '4' partition
5 - change '5' partition
6 - change '6' partition
7 - change '7' partition
select - select a predefined table
modify - modify a predefined partition table
name - name the current table
print - display the current table
label - write partition map and label to the disk
!<cmd> - execute <cmd>, then return
quit
partition> print <RETURN>
Current partition table (original):
Total disk cylinders available: 4254 + 2 (reserved cylinders)
format> quit <RETURN>
Copy the CD image of FUJITSU PCI Fibre Channel to the jumpstart directory on the install
server.
When installing the FUJITSU PCI GigabitEthernet or FUJITSU ULTRA LVD SCSI Host Bus
Adapter driver, also execute the following:
Copy the CD image of FUJITSU PCI GigabitEthernet 3.0 Update1 or higher to the
jumpstart directory on the install server.
(INSTALL SERVER)# cp -p /cdrom/cdrom0/install /jumpstart/fjgi/. <RETURN>
(INSTALL SERVER)# cp -p /cdrom/cdrom0/admin /jumpstart/fjgi/. <RETURN>
(INSTALL SERVER)# cp -pr /cdrom/cdrom0/FJSVgid_3.0/10/* /jumpstart/fjgi/.
<RETURN>
Copy the CD image of FUJITSU PCI GigabitEthernet 4.0 or higher to the jumpstart
directory on the install server.
Copy the CD image of FUJITSU ULTRA LVD SCSI Host Bus Adapter Driver to the
jumpstart directory on the install server.
(INSTALL SERVER)# cp -r
/export/install/Solaris10_hostname/Solaris_10/Misc/jumpstart_sample/* /jumpstart
<RETURN>
Create a profile to match the installation target machine configuration, as instructed
in "Solaris Installation Guide: Custom JumpStart and Advanced Installations."
The profile setting procedure differs depending on the file system of the system disk.
Sample profile setting: UFS file system
Copy a sample of the finish script from the FJPFCA directory to the /jumpstart directory as
finish.
(INSTALL SERVER)# cp
/jumpstart/FJPFCA/FJPFCA4.0/tool/FJPFCA_jumpstart_finish.sample
/jumpstart/finish <RETURN>
JUMPSTART_DIR Specifies the directory in which the JumpStart setting file is stored.
Edit this parameter only when a directory other than /jumpstart is used.
When installing the FUJITSU PCI GigabitEthernet or FUJITSU ULTRA LVD SCSI Host Bus Adapter
driver, add the following contents. In the following example, they are added under
"PF_ARCH=`uname -m`".
#!/bin/sh
### Edit here ###
JUMPSTART_HOST=
JUMPSTART_DIR=/jumpstart
### End of edit ###
PF_ARCH=`uname -m`
${MNT}/fjgi/install -R /a -d ${MNT}/fjgi -p "$PF_ARCH"
${MNT}/fjulsa/install -R /a -d ${MNT}/fjulsa -p "$PF_ARCH"
${MNT}/FJPFCA/bin/pfcapkgadd.sh -R /a -p "$PF_ARCH"
# Copy fjpfca.conf
if [ -f /kernel/drv/fjpfca.conf ]
then
echo "copying fjpfca.conf"
cp /kernel/drv/fjpfca.conf /a/kernel/drv/fjpfca.conf
COPY_STATUS="$?"
if [ "$COPY_STATUS" != "0" ]
then
echo "ERROR: fjpfca.conf copy failed."
fi
else
echo "NOTICE: /kernel/drv/fjpfca.conf does not exist."
fi
## Copy sd.conf
if [ -f /kernel/drv/sd.conf ]
then
echo "copying sd.conf"
cp /kernel/drv/sd.conf /a/kernel/drv/sd.conf
COPY_STATUS="$?"
if [ "$COPY_STATUS" != "0" ]
then
echo "ERROR: sd.conf copy failed."
fi
else
echo "NOTICE: /kernel/drv/sd.conf does not exist."
fi
umount ${MNT}
Edit the /jumpstart/rules file with a text editor. Specify a profile and finish script used for each
host in the rules file.
The rules file comes with a number of sample settings by default. Comment them out, because
they are not required.
(INSTALL SERVER)# /jumpstart/check -p /export/install/Solaris10_hostname -r rules
<RETURN>
If the following error message is displayed when the check command is executed, perform
the following procedures and then execute the check command again.
Error message:
ERROR: /tmp/media is not a valid Solaris 2.x CD image
ok setenv auto-boot? false <RETURN>
ok reset-all <RETURN>
2. Make sure that the Fibre Channel card is identified on the OBP. Check the physical path name of
the slot in which the Fibre Channel card is mounted.
Example of having a single-channel 4 Gbps Fibre Channel card (SE0X7F11x) and a dual-
channel 4 Gbps Fibre Channel card (SE0X7F12x) mounted on a server
ok show-devs <RETURN>
/pci@1,700000
/pci@2,600000
**
/openprom
/chosen
/packages
/pci@1,700000/fibre-channel@0 *physical path name of single-channel 4Gbps Fibre
Channel card
/pci@2,600000/fibre-channel@0 *physical path name of dual-channel 4Gbps Fibre
Channel card port0
/pci@2,600000/fibre-channel@0,1 *physical path name of dual-channel 4Gbps Fibre
Channel card port1
/mc@0,0/bank@0,c0000000
/mc@0,0/bank@0,80000000
3. Enable the boot code on the Fibre Channel card used for booting the OS, then restart the
server. Move to the Fibre Channel card physical path (/pci@1,700000/fibre-channel@0)
confirmed in Step 2 before executing the setting command. There is no need to enable the boot
code on Fibre Channel cards that do not use the boot feature.
Perform this step on all cards used for booting the OS.
4. After restarting the server, view information about the disk array devices connected to it.
Example: fabric connection
ok cd /pci@1,700000/fibre-channel@0 <RETURN>
ok PROBE fjpfca-info <RETURN>
Target - DID 10500 210000e00040101d9 FUJITSU-E4000-0000
Target - DID 10600 210000e00040101da FUJITSU-E4000-0000
Targets residing on a fabric connection may appear as "-," but this is of no concern.
5. Configure the disk array device to make it identifiable with the Fibre Channel boot code.
There are two ways to make a disk array device identifiable on a fabric connection (using the
FC Switch): automatic setting and manual setting. Automatic setting for making a disk array
device identifiable, when selected, offers the following benefits:
(1) Even if multiple hosts each use a disk array device as a boot device, they can share the
Solaris OS install image on the install server.
(2) The Fibre Channel driver is easier to configure.
These two methods of identifying a disk array device are described below.
a. [Automatic setting]
Remove the disk array device setting in the Fibre Channel boot code. Execute the
following command:
If a disk array device is identified through automatic setting, set zoning on the FC
Switch so that a Fibre Channel card cannot connect to multiple disk array devices. In
an environment in which a Fibre Channel card is allowed to connect to multiple disk array
devices, identify the disk array devices through manual setting.
b. [Manual setting]
In the Fibre Channel switch (fabric connection) environment, execute the fjpfca-bind-
target command to define a disk array device that is identifiable to the Fibre Channel boot
code.
The WWPN or DID value displayed in Step 4 is required at this time. There is no
need to execute the fjpfca-bind-target command in the FC-AL environment, because the
disk array device is configured automatically.
o Definition by WWPN
ok 10 target-wwpn 210000e0004101d9 fjpfca-bind-target <RETURN>
fjpfca-bind-target: Change bind target parameter
o Definition by DID
6. Other settings
The connection topology, link speed, and server and disk array device power supply interlock
wait time can be modified as required. The connection topology and link speed are set to "auto"
(automatic setting) by default.
For more details, see Appendix A, "Boot Device Setup Commands."
For example, if automatic setting is not used on a connection with a Fibre Channel switch ready
for 2 Gbps, enter the following:
ok 2g fjpfca-set-linkspeed <RETURN>
ok nport fjpfca-set-topology <RETURN>
ok fjpfca-output-prop <RETURN>
boot function: ENABLE
topology : N_Port
link-speed : 2G
boot wait time: DISABLE (interval time: DISABLE/ boot wait msg: DISABLE)
bind-target: Target_ID=16,WWN=0x210000c0004101d9
For example, if a power supply interlocking boot wait time of 1200 seconds (20 minutes) is set
on a direct connection with a disk array device ready for 2 Gbps, enter the following:
ok 2g fjpfca-set-linkspeed <RETURN>
ok al fjpfca-set-topology <RETURN>
ok d# 1200 fjpfca-set-boot-wait-time <RETURN>
ok fjpfca-output-prop <RETURN>
boot function: ENABLE
topology : AL
link-speed : 2G
boot wait time: 1200 sec (interval time: DISABLE/ boot wait msg: DISABLE)
bind-target: Target_ID=16,WWN=0x210000c0004101d9
ok reset-all <RETURN>
When the installation of the OS has completed and a prompt is displayed, proceed to the next procedure.
If Solaris 10 is installed on a server that uses a graphics card for a bitmapped display, right-click
anywhere on the screen to display a menu, open a terminal from it, and proceed. The message
"Click <Reboot> to continue" is displayed, but ignore it for now.
When the network install completes, the OS automatically boots from the disk array device.
4.1.2 Creating a boot disk by copying an existing boot
disk residing on an internal disk
This section explains how to copy a boot disk that has already been created on an internal disk or
elsewhere and use it to create a boot disk on a disk array device. Before getting started, make sure that
the server has been started from the OS stored on the internal disk and a connection with the disk array
device has been configured via a Fibre Channel card.
A single-path connection with the disk array device will do for now. Convert the connection to a
multipath implementation according to Section 4.2, "Making the Path to a Boot Disk Redundant."
This procedure does not allow a boot disk to be created on a target device that has been connected by the
auto-target bind feature of the Fibre Channel driver. Be sure to configure a target device with fcp-bind-
target to create a boot disk on it.
For more information about configuring fcp-bind-target, refer to "FUJITSU PCI Fibre Channel Guide."
4.1.2.1 Getting ready to copy the boot disk to a disk array
device
1. Execute the format command or the like to confirm the disk array device on which to create a
boot disk.
# format <RETURN>
Searching for disks...done
AVAILABLE DISK SELECTIONS
0. c7t16d0 <FUJITSU-ETERNUS-4000 cyl 1038 alt 2 hd 64 sec 256>
/pci@1,700000/fibre-channel@0/sd@10,0
Compared with the copy source, confirm that the boot disk creation destination has sufficient
disk space and sufficient space in the associated partitions. If no slice has been created in
the LUN at the boot disk creation destination, execute the format command to create one.
There are three copy procedures:
- Procedure for copying an internal disk (UFS file system) to a disk array device (UFS file system)
- Procedure for copying an internal disk (UFS file system) to a disk array device (ZFS file system)
- Procedure for copying an internal disk (ZFS file system) to a disk array device (ZFS file system)
Which environments can be built depends on the operating system version:

Solaris 10 5/08 or older
- Procedure for copying an internal disk (UFS file system) to a disk array device (UFS file system)

Solaris 10 5/08 or older and patch 137137-09 or later
- Procedure for copying an internal disk (UFS file system) to a disk array device (UFS file system)
- Procedure for copying an internal disk (UFS file system) to a disk array device (ZFS file system)

Solaris 10 10/08 or later
- Procedure for copying an internal disk (UFS file system) to a disk array device (UFS file system)
- Procedure for copying an internal disk (UFS file system) to a disk array device (ZFS file system)
- Procedure for copying an internal disk (ZFS file system) to a disk array device (ZFS file system)
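The mapping between Solaris release and the available copy procedures can be sketched as a small shell helper. This is illustrative only; the keys are shorthand labels chosen here, not values read from the system.

```shell
#!/bin/sh
# available_procedures KEY
#   KEY is one of: old            (Solaris 10 5/08 or older)
#                  old+137137-09  (5/08 or older with patch 137137-09 or later)
#                  10/08          (Solaris 10 10/08 or later)
# Prints the source->destination file system combinations that are supported.
available_procedures() {
    case "$1" in
        old)            echo "UFS->UFS" ;;
        old+137137-09)  echo "UFS->UFS UFS->ZFS" ;;
        10/08)          echo "UFS->UFS UFS->ZFS ZFS->ZFS" ;;
        *)              echo "unknown" ; return 1 ;;
    esac
}

available_procedures 10/08   # prints: UFS->UFS UFS->ZFS ZFS->ZFS
```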
Each procedure is explained below.
Procedure for copying internal disk (UFS file system) to disk array device (UFS file system).
1. Bring the system to the OBP environment.
ok boot -s <RETURN>
Write a boot block to the system disk (using a path that has been verified by carrying out the
format command or the like).
4. Execute the mount command (mount the LUN for the boot disk on the identified disk array
device).
When carried out, the command above copies the data in directories other than /mnt to the
disk array device.
If /var and /opt have been defined as separate partitions, repeat Steps 3 to 5 above for each
partition.
Procedure for copying internal disk (UFS file system) to disk array device (ZFS file system).
1. Bring the system to the OBP environment.
ok boot -s <RETURN>
In this example, the file system used for the root (/) is named rootfs, a 2 GB swap area and a
2 GB dump area are created, and the mount point of rootpool/rootfs is set to
legacy.
Write a boot block to the system disk (using a path that has been verified by carrying out the
format command or the like).
If /var and /opt have been defined as separate partitions, repeat Steps 4 to 5 above for each
partition.
Procedure for copying internal disk (ZFS file system) to disk array device (ZFS file system).
1. Bring the system to the OBP environment.
ok boot -s <RETURN>
In this example, the file system used for the root (/) is named rootfs, a 2 GB swap area and a
2 GB dump area are created, and the mount point of rootpool/rootfs is set
to legacy.
Write a boot block to the system disk (using a path that has been verified by carrying out the
format command or the like).
4. Create a snapshot.
rpool/export 32.0M 61.1G 20K /export
rpool/export/home 32.0M 61.1G 32.0M /export/home
rpool/swap 512M 61.6G 10.8M -
# zfs snapshot rpool/ROOT/s10_1008@snapshot <RETURN>
If /var has been defined as a separate partition, repeat Steps 4 to 5 above for it.
In the UFS file system, all the access paths are added; comment out any access paths that are not used.
/dev/dsk/c7t16d0s3 - - swap - no -
/dev/dsk/c7t16d0s0 /dev/rdsk/c7t16d0s0 / ufs 1 no
-
/dev/dsk/c7t16d0s1 /dev/rdsk/c7t16d0s1 /var ufs 1 no
-
..
In the ZFS file system, only swap is added; comment out any access paths that are not used.
Example: SAN Boot environment with the ZFS file system (the root device is not set).
If a disk array device has a LU other than its boot disk defined in sd.conf, remove the definition from
/mnt/kernel/drv/sd.conf.
The following procedures are executed only when a SAN Boot environment with the ZFS
file system is constructed.
− Although the following messages are displayed when the mountpoint property is set to the
root path (/), ignore them and proceed to the next step.
2. Bootfs setting.
4.1.2.4 Configuring the Fibre Channel boot code
Configure the Fibre Channel boot code to allow the OS to be booted from a disk array device.
Turn on the server and perform either "The OS is booting" or "In the OBP environment"
below, depending on the system status.
The OS is booting
ok reset-all <RETURN>
If you use SPARC Enterprise M3000/M4000/M5000/M8000/M9000, set the server mode switch to
service mode and execute the following commands, depending on the system status.
The OS is booting
Set the server mode switch to service mode and execute the following command:
ok reset-all <RETURN>
Set the server mode switch to service mode and then execute the following command:
ok reset-all <RETURN>
Set the server mode switch to service mode and then turn on the server.
(*1) The value (in the example above, "10,0") that follows "disk" specified by boot denotes the
target_id/LUN. It must be the same as the value of target_id/LUN of the disk array device that is
identified by the Fibre Channel driver after the OS boots. In the FC-AL environment, specify the
value of target_id that is displayed when fjpfca-info is executed.
The values of target_id and LUN need to be specified in hexadecimal at boot time.
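Since target_id and LUN must be given in hexadecimal at boot time, a decimal target ID (for example, 16 as displayed by fjpfca-info) has to be converted. The following is a small illustrative sketch, not part of the product tooling:

```shell
#!/bin/sh
# Convert a decimal target ID and LUN (as shown by fjpfca-info) to the
# hexadecimal form used in the boot device specification.
# Example: target 16, LUN 0 -> "disk@10,0"
tid=16
lun=0
bootspec=`printf 'disk@%x,%x' "$tid" "$lun"`
echo "$bootspec"   # prints: disk@10,0
```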
4.2 Making the Path to a Boot Disk Redundant
This section explains how to make the path to a boot device based on the ETERNUS Multipath Driver
redundant.
1. Specify the disk array device and boot the OS in single-user mode.
2. Install Enhanced Support Facility as instructed in "Enhanced Support Facility Install Guide."
ok reset-all <RETURN>
4.2.2 Configuring the ETERNUS Multipath Driver
This section explains how to configure the ETERNUS Multipath Driver to make the path to a boot device
redundant.
It is recommended to set the target ID of the device used as the boot device to the same value
on the two Fibre Channel cards that constitute the multipath.
If the ETERNUS Multipath Driver package has already been installed, execute grmpdautoconf
to proceed to the multipath building process in Step 2.
# /usr/sbin/grmpdautoconf <RETURN>
Different manual path selection screens are displayed depending on which method of disk
array device identification by the Fibre Channel driver has been selected, automatic setting
or manual setting.
a. [Automatic setting]
Among the paths marked "New," enter the number of the access path you want
registered with the system.
** The access path list is redisplayed after the entry.
** Selected paths are marked with "*."
** If an invalid number is selected, enter that number again to deselect it.
** All paths marked "Exist" are eligible for LUN installation.
** All devices residing on an AL connection are eligible for LUN and device
installation.
Enter path numbers ---> Enter numerals (multiple path numbers can be entered
separated from each
other with a comma)
Conclude entry ---> 'x'
Quit ---> 'q'
Among the paths marked "New," enter the number of the access path you want
registered with the system.
** The access path list is redisplayed after the entry.
** Selected paths are marked with "*."
** If an invalid number is selected, enter that number again to deselect it.
** All paths marked "Exist" are eligible for LUN installation.
** All devices residing on an AL connection are eligible for LUN and device
installation.
Enter path numbers ---> Enter numerals (multiple path numbers can be entered
separated from each
other with a comma)
Conclude entry ---> 'x'
Quit ---> 'q'
The setting of the path (disk array device) selected above is reflected in the Fibre Channel
driver setting file (/kernel/drv/fjpfca.conf). Once this step completes, the setting of the disk
array device that is identified by the Fibre Channel driver is fixed, so the following setting in the
Fibre Channel driver setting file can be removed:
fcp-auto-bind-function=1;
b. [Manual setting]
The wwn entered in fjpfca.conf appears as "Exist" or "AL." Leave all other paths unselected,
selecting "Confirmed (x)" for them.
Among the paths marked "New," enter the number of the access path you want
registered with the system.
** The access path list is redisplayed after the entry.
** Selected paths are marked with "*."
** If an invalid number is selected, enter that number again to deselect it.
** All paths marked "Exist" are eligible for LUN installation.
** All devices residing on an AL connection are eligible for LUN and device
installation.
Enter path numbers ---> Enter numerals (multiple path numbers can be entered
separated from each
other with a comma)
Conclude entry ---> 'x'
Quit ---> 'q'
Among the paths marked "New," enter the number of the access path you want
registered with the system.
** The access path list is redisplayed after the entry.
** Selected paths are marked with "*."
** If an invalid number is selected, enter that number again to deselect it.
** All paths marked "Exist" are eligible for LUN installation.
** All devices residing on an AL connection are eligible for LUN and device
installation.
Enter path numbers ---> Enter numerals (multiple path numbers can be entered
separated from each
other with a comma)
Conclude entry ---> 'x'
Quit ---> 'q'
Enter a key. [Path number ,x,q] x <RETURN>
When SPARC Enterprise T1000/T2000/T5120/T5140/T5220/T5240/T5440 is used, the access
path mode must be selected. Select mplb mode at the prompt for access
path mode selection. The ETERNUS Multipath Driver supports two different multipath
access modes: Solaris standard mode, in which the multipath is accessed from a Solaris
standard special file, and mplb mode, in which the multipath is accessed from a mplb special
file as in the past.
In the Solaris 10 environment, select mplb mode for the access path mode. Solaris standard
mode does not work in the SAN Boot environment.
3. Check the device path name of the boot device. The grmpdautoconf command carried out in
Step 2 displays a combination of a multipath management special file and a selected access
special file. Use the output of the ls command to identify the boot disk and the physical device
path name of each configuration path. Physical device path names thus found are used in Steps
6 and 9.
The boot disk and the paths that make it up are as follows:
Boot disk /dev/FJSVmplb/rdsk/mplb0s0
Configuration path /dev/rdsk/c2t16d0s2
/dev/rdsk/c13t16d0s2
# ls -l /dev/FJSVmplb/rdsk/mplb0s0 <RETURN>
lrwxrwxrwx 1 root root 36 Aug 29 12:05 /dev/FJSVmplb/rdsk/mplb0s0 -> (line wrapping)
../../../devices/pseudo/mplb@0:a,raw <RETURN>
^^^^^^^^^^^^^^^
# ls -l /dev/rdsk/c2t16d0s2 <RETURN>
lrwxrwxrwx 1 root root 58 Aug 29 17:13 /dev/rdsk/c2t16d0s2 -> (line wrapping)
../../devices/pci@1,700000/fibre-channel@0/mplbt@10,0:c,raw <RETURN>
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# ls -l /dev/rdsk/c13t16d0s2 <RETURN>
lrwxrwxrwx 1 root root 58 Aug 29 17:13 /dev/rdsk/c13t16d0s2 -> (line wrapping)
../../devices/pci@2,600000/fibre-channel@0/mplbt@10,0:c,raw <RETURN>
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Edit the Fibre Channel driver setting file (/kernel/drv/fjpfca.conf) to set a Fibre Channel link
speed.
In configuring the Fibre Channel driver, the link speed setting can be set to automatic selection
for the sake of easier connectivity. The expected link speed may not be attained depending on
the connection status. Therefore, set the highest transmission rate available in the environment.
port=
"fjpfca0:nport:sp4";
For more information about configuring fjpfca.conf, refer to "FUJITSU PCI Fibre Channel
Guide."
5. Apply the disk array device boot setting to all the Fibre Channel boot codes that are used to
access the boot disk.
# /usr/sbin/FJSVpfca/fc_hbaprp -i fjpfca0 -b ENABLE <RETURN>
# /usr/sbin/FJSVpfca/fc_hbaprp -i fjpfca0 -c /kernel/drv/fjpfca.conf <RETURN>
# /usr/sbin/FJSVpfca/fc_hbaprp -i fjpfca0 -v <RETURN>
boot_function : ENABLE
topology : N_Port
link-speed : 4G
boot wait time : DISABLE ( interval time : DISABLE , boot wait msg : DISABLE )
bind-target: Target_ID=16,WWN=0x210000e0004101d9
Edit /etc/system file to set rootdev and forceload. For the rootdev setting, set the boot disk
physical device name as found in Step 3 above, excluding "../../devices" at the beginning and
",raw" at the end.
If forceload settings for each driver already exist in the /etc/system file, no additional
setting is needed.
rootdev: /pseudo/mplb@0:a
forceload: drv/mplbt
forceload: drv/mplb
forceload: drv/sd
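The rootdev value can be derived mechanically from the symlink target found in Step 3 (drop the leading "../../devices" and the trailing ",raw"). The following is an illustrative sketch using the example path from Step 3; the sed expressions are one possible way to automate it, not part of the product tooling:

```shell
#!/bin/sh
# Derive the rootdev value for /etc/system from the symlink target of the
# multipath management special file (example target from Step 3).
link_target="../../../devices/pseudo/mplb@0:a,raw"
rootdev=`echo "$link_target" | sed -e 's|^.*/devices||' -e 's|,raw$||'`
echo "rootdev: $rootdev"   # prints: rootdev: /pseudo/mplb@0:a
```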
Edit the /etc/vfstab file to rewrite each entry with the path name used after the multipath
implementation.
If the target ID of the device used as the boot device differs between the two Fibre Channel
cards that constitute the multipath, the target IDs for all the paths of the boot disk may not be
written to sd.conf. In that case, the following setup is needed.
○ Edit /kernel/drv/sd.conf
○ Reconstruction of sd driver
# touch /reconfigure <RETURN>
or
# update_drv -f sd <RETURN>
forceload: drv/mplb
forceload: drv/sd
If the target ID of the device used as the boot device differs between the two Fibre Channel
cards that constitute the multipath, the target IDs for all the paths of the boot disk may not be
written to sd.conf. In that case, the following setup is needed.
○ Edit /kernel/drv/sd.conf
○ Reconstruction of sd driver
# touch /reconfigure <RETURN>
or
# update_drv -f sd <RETURN>
9. Configure a boot device.
Configure a boot device on all redundant paths to the boot disk on the OBP. Take the physical
device name of each configuration path found in Step 3, excluding "../../devices" at the
beginning and ":*,raw" at the end, and set it with "mplbt" replaced by "disk."
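The path rewrite described in this step can be sketched as follows. The input is the example configuration path from Step 3; the sed expressions are an assumption about how you might automate the rewrite, not part of the product tooling:

```shell
#!/bin/sh
# Turn a configuration path's symlink target into the OBP boot device name:
# strip "../../devices", strip ":*,raw", replace "mplbt" with "disk".
link_target="../../devices/pci@1,700000/fibre-channel@0/mplbt@10,0:c,raw"
bootdev=`echo "$link_target" | \
    sed -e 's|^\.\./\.\./devices||' -e 's|:[^:]*,raw$||' -e 's|mplbt|disk|'`
echo "$bootdev"   # prints: /pci@1,700000/fibre-channel@0/disk@10,0
```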
ok boot <RETURN>
Launch the host from the boot disk on the disk array device. Then, install ETERNUS
Multipath Driver as instructed in "ETERNUS Multipath Driver Install Guide." When the install
completes, respond with "y" at the following prompt to let the grmpdautoconf command execute
automatically to proceed to the multipath building process in Step 2.
If the ETERNUS Multipath Driver package has already been installed, execute grmpdautoconf
to proceed to the multipath building process in Step 2.
# /usr/sbin/grmpdautoconf <RETURN>
Work with grmpdautoconf interactively. For more details, refer to "ETERNUS Multipath
Driver User's Guide." During this interactive session, make the following choices:
Select an access path automatically or manually?
** If automatic selection is selected, all access paths marked "New" are registered with the
system.
** Select automatic selection if an access path has been properly selected at ETERNUS and
switch setup.
** All paths marked "Exist" are eligible for LUN installation.
** All devices residing on an AL connection are eligible for LUN and device installation.
Manual selection ---> 'm'
Automatic selection ---> 'a'
Quit ---> 'q'
Enter a key. [m,a,q] m <RETURN>
Different manual path selection screens are displayed depending on which method of disk
array device identification has been selected in the Fibre Channel driver, automatic setting
or manual setting.
a. [Automatic setting]
Among the paths marked "New," enter the number of the access path you want
registered with the system.
** The access path list is redisplayed after the entry.
** Selected paths are marked with "*."
** If an invalid number is selected, enter that number again to deselect it.
** All paths marked "Exist" are eligible for LUN installation.
** All devices residing on an AL connection are eligible for LUN and device
installation.
Enter path numbers ---> Enter numerals (multiple path numbers can be entered
separated from each
other with a comma)
Conclude entry ---> 'x'
Quit ---> 'q'
Among the paths marked "New," enter the number of the access path you want
registered with the system.
** The access path list is redisplayed after the entry.
** Selected paths are marked with "*."
** If an invalid number is selected, enter that number again to deselect it.
** All paths marked "Exist" are eligible for LUN installation.
** All devices residing on an AL connection are eligible for LUN and device
installation.
Enter path numbers ---> Enter numerals (multiple path numbers can be entered
separated from each
other with a comma)
Conclude entry ---> 'x'
Quit ---> 'q'
The setting of the path (disk array device) selected above is reflected in the Fibre Channel
driver setting file (/kernel/drv/fjpfca.conf). Once this step completes, the setting of the disk
array device that is identified by the Fibre Channel driver is fixed, so the following setting in the
Fibre Channel driver setting file can be removed:
fcp-auto-bind-function=1;
b. [Manual setting]
The wwn entered in fjpfca.conf appears as "Exist" or "AL." Leave all other paths unselected,
selecting "Confirmed (x)" for them.
---+-----------------------------------+-----+------------------------------------------------------+----
-
fjpfca0 100000000e24ac06 1 210000e0004101d9 E4000 CM1CA0P0
Exist
[ ] 1 fjpfca1 100000000e244737 3 210000e0004101da E4000 CM0CA0P0
New
Among the paths marked "New," enter the number of the access path you want
registered with the system.
** The access path list is redisplayed after the entry.
** Selected paths are marked with "*."
** If an invalid number is selected, enter that number again to deselect it.
** All paths marked "Exist" are eligible for LUN installation.
** All devices residing on an AL connection are eligible for LUN and device
installation.
Enter path numbers ---> Enter numerals (multiple path numbers can be entered
separated from each
other with a comma)
Conclude entry ---> 'x'
Quit ---> 'q'
Among the paths marked "New," enter the number of the access path you want
registered with the system.
** The access path list is redisplayed after the entry.
** Selected paths are marked with "*."
** If an invalid number is selected, enter that number again to deselect it.
** All paths marked "Exist" are eligible for LUN installation.
** All devices residing on an AL connection are eligible for LUN and device
installation.
Enter path numbers ---> Enter numerals (multiple path numbers can be entered
separated from each
other with a comma)
Conclude entry ---> 'x'
Quit ---> 'q'
4. Delete all lines except for the system volume. Edit /tmp/mplb-file1 using a vi editor or the like
to delete all except for the path to a system disk.
With a cluster system, the instance number of each local multipathdisk, such as a boot disk,
must not be defined in duplicate among the nodes that configure the cluster. Change instance
numbers (designated by X in mplbX) between 0 and 2047 to avoid duplication with other nodes.
5. Apply the edited file, converting the system volume into a multipath implementation.
6. Check the device path name of the boot device. The grmpdautoconf command carried out in
Step 4 displays a combination of a multipath management special file and a selected access
special file. Use the output of the ls command to identify the boot disk and the physical device
path name of each configuration path. Physical device path names thus found are used in Steps
9 and 12.
*** Phase 3: read /devices ***
*** Phase 4: compare mplb.conf and /devices ***
Path : Action : Element path : LUN : Storage
mplb0 : new : c2t16d0s2 c13t16d0s2 : 0 : E40004641- 130011 :
mplb1 : new : c2t16d1s2 c13t16d1s2 : 1 : E40004641- 130011 :
mplb2 : new : c2t16d2s2 c13t16d2s2 : 2 : E40004641- 130011 :
The boot disk and the paths that make it up are as follows:
/dev/rdsk/c13t16d0s2
# ls -l /dev/FJSVmplb/rdsk/mplb0s0 <RETURN>
lrwxrwxrwx 1 root root 36 Aug 29 12:05 /dev/FJSVmplb/rdsk/mplb0s0 -> (line wrapping)
../../../devices/pseudo/mplb@0:a,raw <RETURN>
^^^^^^^^^^^^^^^
# ls -l /dev/rdsk/c2t16d0s2 <RETURN>
lrwxrwxrwx 1 root root 58 Aug 29 17:13 /dev/rdsk/c2t16d0s2 -> (line wrapping)
../../devices/pci@1,700000/fibre-channel@0/mplbt@10,0:c,raw <RETURN>
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# ls -l /dev/rdsk/c13t16d0s2 <RETURN>
lrwxrwxrwx 1 root root 58 Aug 29 17:13 /dev/rdsk/c13t16d0s2 -> (line wrapping)
../../devices/pci@2,600000/fibre-channel@0/mplbt@10,0:c,raw <RETURN>
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Edit the Fibre Channel driver setting file (/kernel/drv/fjpfca.conf) to set a Fibre Channel driver
link speed.
In configuring the Fibre Channel driver, the link speed setting can be set to automatic selection
for the sake of easier connectivity. The expected link speed may not be attained depending on
the connection status. Therefore, set the highest transmission rate available in the environment.
Example: Set fjpfca0 to a link speed of 4 Gbps.
port=
"fjpfca0:nport:sp4";
For more information about configuring fjpfca.conf, refer to the "FUJITSU PCI Fibre Channel
Guide."
8. Apply the disk array device boot setting to all the Fibre Channel cards that are used to
access the boot disk.
Edit /etc/system file to set rootdev and forceload. For the rootdev setting, set the boot disk
physical device name as found in Step 6 above, excluding "../../devices" at the beginning and
",raw" at the end.
If forceload settings for each driver already exist in the /etc/system file, no additional
setting is needed.
rootdev: /pseudo/mplb@0:a
forceload: drv/mplbt
forceload: drv/mplb
forceload: drv/sd
Edit /etc/vfstab file to rewrite each entry to a path name after the multipath implementation.
/dev/FJSVmplb/dsk/mplb0s3 - - swap - no -
If the target ID of the device used as the boot device differs between the two Fibre Channel
cards that constitute the multipath, the target IDs for all the paths of the boot disk may not be
written to sd.conf. In that case, the following setup is needed.
○ Edit /kernel/drv/sd.conf
○ Reconstruction of sd driver
# touch /reconfigure <RETURN>
or
# update_drv -f sd <RETURN>
forceload: drv/mplb
forceload: drv/sd
If the target ID of the device used as the boot device differs between the two Fibre Channel
cards that constitute the multipath, the target IDs for all the paths of the boot disk may not be
written to sd.conf. In that case, the following setup is needed.
○ Edit /kernel/drv/sd.conf
○ Reconstruction of sd driver
# touch /reconfigure <RETURN>
or
# update_drv -f sd <RETURN>
SAN Boot environment with the ZFS file system
Configure a boot device on all redundant paths to the boot device on the OBP. Take the
physical device name of each configuration path found in Step 6, excluding "../../devices" at the
beginning and ":*,raw" at the end, and set it with "mplbt" replaced by "disk."
ok boot <RETURN>
4.3 Boot Disk Mirroring
This section explains how to mirror between multipath implementations of two boot disks while the OS is
booted from one of them.
1. Verify that zoning has been implemented by the FC Switch as shown above.
2. Verify that the connection of the Fibre Channel card to the disk array device at the mirroring
destination has been configured properly in the OBP environment.
ok cd /pci@1,700000/fibre-channel@0 <RETURN>
ok PROBE fjpfca-info <RETURN>
Target -- DID 10500 210000e00040101d9 FUJITSU-E4000-0000
Target -- DID 10600 210000e00040101da FUJITSU-E4000-0000
3. Boot the OS and verify that the connection of the Fibre Channel driver to the disk array device
at the mirroring destination has been configured properly.
# /usr/sbin/FJSVpfca/fc_info -p <RETURN>
adapter=fjpfca#0 :
port_id=0x010500 tid=0 wwn=210000e00040101d9 adapter=fjpfca#1 connected
class=class3
port_id=0x01060 tid=0 wwn=210000e00040101da adapter=fjpfca#1 connected
class=class3
4. If a Multipath Driver is yet to be set on the disk array device at the mirroring destination, set it.
Referring to "ETERNUS Multipath Driver User's Guide," set a Multipath Driver and create a
multipath disk on the disk array device (ETERNUS #2) at the mirroring destination.
5. If using the ETERNUS Multipath Driver, add a definition of the mirroring destination disk
to target driver setting file /kernel/drv/sd.conf.
Example: Identify target ID 16, logical unit 1.
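As a sketch, an sd.conf entry for target ID 16, logical unit 1 might look like the following; confirm the exact properties required for your driver against the "FUJITSU PCI Fibre Channel Guide":

```
name="sd" class="scsi" target=16 lun=1;
```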
7. Refer to “PRIMECLUSTER Global Disk Services Guide” and mirror the disk at the mirroring
source and destination with each other.
If a system disk is mirrored using PRIMECLUSTER GDS, the message below may be
displayed at boot time. This message may be ignored.
This message is displayed if the forceload setting is defined in duplicate in the /etc/system file.
To suppress this message, delete the later duplicate occurrences of the forceload setting.
forceload: drv/mplb
~
forceload: drv/mplb Delete this line.
4.3.2 Notes on using PRIMECLUSTER
Chapter 5 Backing Up and Restoring Boot
Disks
Boot disks in the environment described in this guide can be backed up by following one of the
procedures explained below.
Boot the OS from an internal disk and back up and restore it by file system or by lun.
Boot the OS from a network and back up and restore the boot disk by file system or by lun.
Back up and restore the boot disk using ETERNUS EC (Equivalent Copy) or OPC (One
Point Copy).
* ETERNUS SF AdvancedCopy Manager is required.
In the environment explained in this guide, the method of booting the Solaris OS from CD/DVD and
backing up the boot disk cannot be used. This chapter concerns the procedures: "Boot the OS from an
internal disk and back up and restore it by file system or by lun" and "Boot the OS from a network and
back up and restore the boot disk by file system or by lun." For information on the procedure "Back up
and restore the boot disk using ETERNUS EC (Equivalent Copy) or OPC (One Point Copy) with
ETERNUS SF AdvancedCopy Manager," refer to the ETERNUS SF AdvancedCopy Manager manual.
For information on the procedures for backing up and restoring a system disk that is mirrored by
PRIMECLUSTER GDS, refer to the "PRIMECLUSTER Global Disk Services Guide." Although that
guide covers the method of booting the Solaris OS from CD/DVD and backing up and restoring a system
disk, the same method can also be used when the OS has been booted from a network or an internal disk.
If PRIMECLUSTER GDS Snapshot is used, the OS can be booted from a boot disk on a disk array device
and the boot disk can then be backed up and restored using the ETERNUS Advanced Copy function or
the PRIMECLUSTER GDS copy function.
Depending on the type of tape device used, certain precautions may apply when configuring or
performing the backup and restore operations. Refer in advance to the instruction manual for the tape
device used to ensure that boot disks are backed up and restored as instructed.
In an environment in which a system disk has optional software installed on it that has a module running
as part of a kernel, such as a driver or file system, additional precautions may apply. Refer to the manual
for the optional software and follow its instructions.
Refer to the "Solaris ZFS Administration Guide" for details on backing up and restoring a ZFS file
system environment.
This chapter assumes that Solaris is installed on disk device c7t16d0 on a disk array device.
5.1 Backing Up/Restoring after Booting OS
from a Network
If a boot disk residing on a disk array device has been created by installing the OS from a network, follow
the procedures explained in this section to back up and restore the boot disk.
2. Back up the boot disk. The boot disk can be backed up by file system or by LUN.
a. Back up by file system
UFS file system environment
(1) The procedure for backing up a boot disk by file system using the ufsdump(1M)
command is explained below. Disk partition information, such as slice size, is not
backed up and needs to be recorded beforehand using the prtvtoc(1M) command or
format(1M) command.
or
(2) Back up a boot disk using the ufsdump(1M) command. In this example,
/dev/dsk/c7t16d0s0 is used as a boot disk, and tape device /dev/rmt/0 is used.
or
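The backup command elided above presumably takes the following form; the dump options (level 0 full dump, update /etc/dumpdates, cartridge tape, output file) and the device names are assumptions based on this section's example devices:

```
# ufsdump 0ucf /dev/rmt/0 /dev/rdsk/c7t16d0s0 <Return>
```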
partition> print <Return>
rpool ONLINE
c7t16d0s0 ONLINE
# zpool import 4856116377389642800 <Return>    Specify the ID confirmed with zpool import.
# zfs list <Return>
NAME USED AVAIL REFER MOUNTPOINT
rpool 6.33G 13.2G 94K /rpool
rpool/ROOT 4.83G 13.2G 18K legacy
rpool/ROOT/s10_1008 4.83G 13.2G 4.76G /
rpool/dump 1.00G 13.2G 1.00G -
rpool/export 38K 13.2G 20K /export
rpool/export/home 18K 13.2G 18K /export/home
rpool/swap 512M 13.7G 10.0M -
#
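Step (3), the creation of the snapshot that is sent in step (4), is not shown above; it would presumably be along these lines (the snapshot name is an assumption chosen to match step (4)):

```
# zfs snapshot rpool/ROOT/s10_1008@snapshot <Return>
```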
If creating the snapshot fails, execute the following procedure and then create the snapshot
again.
(4) Back up a boot disk using the zfs(1M) command. In this example,
rpool/ROOT/s10_1008 is used as a boot path, and tape device /dev/rmt/0 is used.
# zfs send rpool/ROOT/s10_1008@snapshot > /dev/rmt/0 <Return>
b. Back up by disk
(1) Back up a boot disk using the dd(1M) command. In this example, tape device
/dev/rmt/0 is used.
In this format, if= is followed by the name of the disk to be backed up, such as
/dev/rdsk/c0t0d0s2, specified as a character (raw) device (/dev/rdsk/...). Remember to
specify s2, the slice that designates the disk as a whole.
Depending on the size of the LUN, backup with the dd(1M) command may not be
possible, because dd does not support multiple volumes.
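A dd(1M) backup of the example disk would look like the following; this mirrors the restore command shown later in this chapter and assumes the whole-disk slice s2:

```
# dd if=/dev/rdsk/c7t16d0s2 of=/dev/rmt/0 bs=64k <Return>
```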
2. Restore the boot disk. Restore it in the same unit in which it has been backed up.
a. Restore by file system
UFS file system environment
(1) If the boot disk has a newly defined LUN, or a LUN reused from another purpose, create a
disk slice and a disk label using the format(1M) command. Refer to the disk partition
information recorded at backup time for the size of each slice to be created and other
details.
# format <Return>
For information on creating a disk slice and a disk label using the format(1M)
command, browse through the online manual.
Here, specify the slice name of the restore destination device as a character (raw) device
(/dev/rdsk/...).
(3) Mount the boot disk. In this example, /dev/dsk/c7t16d0s0 is used as a boot disk.
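The commands for creating the file system (step (2)) and mounting it (step (3)) are not shown above; they presumably look like the following sketch, using this section's example slice names:

```
# newfs /dev/rdsk/c7t16d0s0 <Return>
# mount /dev/dsk/c7t16d0s0 /mnt <Return>
```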
(4) Move to the mounted directory.
# cd /mnt <Return>
(5) Restore the boot disk using the ufsrestore(1M) command. In this example, tape device
/dev/rmt/0 is used. For example, the boot disk might be restored from another LU in
the disk array device.
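Run from the mounted directory, the restore command would presumably be:

```
# ufsrestore rf /dev/rmt/0 <Return>
```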
(6) Create a boot block using the installboot(1M) command. For information on creating a
boot block using installboot(1M) command, browse through the online manual.
Create the boot block on the restore destination device.
Here, specify slice 0 of the restore destination device in the character type
(/dev/rdsk/...). In this example, /dev/dsk/c7t16d0s0 is used as a boot disk.
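A typical installboot(1M) invocation for a SPARC UFS boot disk is shown below; the bootblk path is the standard Solaris location:

```
# installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c7t16d0s0 <Return>
```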
(7) Move to the root directory and unmount the boot disk.
# cd / <Return>
# umount /mnt <Return>
(8) Check the file system for consistency using the fsck(1M) command.
Here, specify the slice name of the restore destination device as a character (raw) device
(/dev/rdsk/...).
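The consistency check in step (8) would presumably be:

```
# fsck /dev/rdsk/c7t16d0s0 <Return>
```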
# format <Return>
For information on creating a disk slice and a disk label using the format(1M)
command, browse through the online manual.
# zpool create rpool c7t16d0s0 <Return>
# zfs create rpool/ROOT <Return>
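The elided restore step presumably receives the stream saved with zfs send back from tape, along these lines (dataset and device names follow this section's examples):

```
# zfs receive rpool/ROOT/s10_1008 < /dev/rmt/0 <Return>
```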
(6) Create a boot block using the installboot(1M) command. For information on creating a
boot block using installboot(1M) command, browse through the online manual.
Create the boot block on the restore destination device.
Here, specify slice 0 of the restore destination device in the character type
(/dev/rdsk/...). In this example, /dev/dsk/c7t16d0s0 is used as a boot disk.
b. Restore by disk
(1) Restore the boot disk using the dd(1M) command. In this example, tape device
/dev/rmt/0 is used.
Specify the name of the restore destination disk as a character (raw) device (/dev/rdsk/...).
Remember to specify s2, the slice that designates the disk as a whole.
5.2 Backing Up/Restoring after Booting the OS
from an Internal Disk
If a boot disk residing on a disk array device has been created by copying the OS from an internal disk,
follow the procedures explained in this section to back up and restore the boot disk.
2. Back up the boot disk. The boot disk can be backed up by file system or by LUN.
a. Back up by file system
UFS file system environment
(1) The procedure for backing up a boot disk by file system using the ufsdump(1M)
command is explained below. Disk partition information, such as slice size, is not
backed up and needs to be recorded beforehand using the prtvtoc(1M) command or
format(1M) command.
or
(2) Back up the boot disk using the ufsdump(1M) command. In this example,
/dev/dsk/c7t16d0s0 is used as a boot disk, and tape device /dev/rmt/0 is used.
or
format> partition <Return>
partition> print <Return>
rpool ONLINE
c7t16d0s0 ONLINE
# zpool import 4856116377389642800 <Return>    Specify the ID confirmed with zpool import.
# zfs list <Return>
NAME USED AVAIL REFER MOUNTPOINT
rpool 6.33G 13.2G 94K /rpool
rpool/ROOT 4.83G 13.2G 18K legacy
rpool/ROOT/s10_1008 4.83G 13.2G 4.76G /
rpool/dump 1.00G 13.2G 1.00G -
rpool/export 38K 13.2G 20K /export
rpool/export/home 18K 13.2G 18K /export/home
rpool/swap 512M 13.7G 10.0M -
#
If creating the snapshot fails, execute the following procedure and then create the snapshot
again.
(4) Back up a boot disk using the zfs(1M) command. In this example,
rpool/ROOT/s10_1008 is used as a boot path, and tape device /dev/rmt/0 is used.
b. Back up by disk
(1) Back up the boot disk using the dd(1M) command. In this example, tape device
/dev/rmt/0 is used. The boot disk may also be backed up to another LU on the disk
array device.
In this format, if= is followed by the name of the disk to be backed up, such as
/dev/rdsk/c0t0d0s2, specified as a character (raw) device (/dev/rdsk/...). Remember to
specify s2, the slice that designates the disk as a whole.
Depending on the size of the LUN, backup with the dd(1M) command may not be
possible, because dd does not support multiple volumes.
2. Restore the boot disk in the same unit in which it has been backed up.
a. Restore by file system
UFS file system environment
(1) If the boot disk has a newly defined LUN, or a LUN reused from another purpose, create a
disk slice and a disk label using the format(1M) command. Refer to the disk partition
information recorded at backup time for the size of each slice to be created and other
details.
# format <Return>
For information on creating a disk slice and a disk label using the format(1M)
command, browse through the online manual.
Specify the name of the slice to which the boot disk is restored as a character (raw) device
(/dev/rdsk/...).
In this example, /dev/dsk/c7t16d0s0 is used as a boot disk.
# cd /mnt <Return>
(5) Restore the boot disk using the ufsrestore(1M) command. In this example, tape device
/dev/rmt/0 is used.
(6) Create a boot block using the installboot(1M) command. For information on creating a
boot block using installboot(1M) command, browse through the online manual.
Create the boot block on the restore destination device.
Here, specify slice 0 of the restore destination device in the character type
(/dev/rdsk/...). In this example, /dev/dsk/c7t16d0s0 is used as a boot disk.
(7) Move to the root directory and unmount the boot disk.
# cd / <Return>
# umount /mnt <Return>
(8) Check the file system for consistency using the fsck(1M) command.
Specify the name of the slice to which the boot disk is restored as a character (raw) device
(/dev/rdsk/...).
# format <Return>
For information on creating a disk slice and a disk label using the format(1M)
command, browse through the online manual.
(6) Create a boot block using the installboot(1M) command. For information on creating a
boot block using installboot(1M) command, browse through the online manual.
Create the boot block on the restore destination device.
Here, specify slice 0 of the restore destination device in the character type
(/dev/rdsk/...). In this example, /dev/dsk/c7t16d0s0 is used as a boot disk.
b. Restore by disk
(1) Restore the boot disk using the dd(1M) command. In this example, tape device
/dev/rmt/0 is used. The boot disk may also be restored from another LU on the disk
array unit that has been backed up beforehand.
# dd if=/dev/rmt/0 of=/dev/rdsk/c7t16d0s2 bs=64k <Return>
Here, specify the name of the restore destination disk as a character (raw) device
(/dev/rdsk/...). Remember to specify s2, the slice that designates the disk as a whole.
Appendix A Boot Device Setup Commands
This appendix focuses on the commands used to configure the boot code on the Fibre Channel card.
These commands are executable on either the OS or the OBP.
The commands introduced here work only on the single-channel 4Gbps Fibre Channel card (SE0X7F11x)
and dual-channel 4Gbps Fibre Channel card (SE0X7F12x).
1 fc_hbaprp
Name
fc_hbaprp
Format
-f tgt_id -I PORT_ID
-d tgt_id
-D [-y]
-w boot-wait-time
-l linkspeed
-t topology
-v
-s savefile
-r|-R filename
-c conffile
-C [-y]
-b ENABLE|DISABLE
Function
Operands
The settings that work on the boot code on the Fibre Channel card are listed below.
All these settings need to be accompanied by the specification of -i adpname. Specify the
instance name of the Fibre Channel driver as adpname.
WWN Specifies WWPN of the target device with a hexadecimal number (boot device
specification by WWPN).
PORT_ID Specifies Port_ID(DID) of the target device with a hexadecimal number (boot device
specification by Port_ID).
-i adpname -d tgt_id
Erases the setting of the target device. The following value can be set:
-i adpname -D [-y]
If -y is not attached, a message asking if you are sure you want to erase the settings is displayed.
-i adpname -w boot-wait-time
Sets a boot wait time in seconds. The following value can be set:
boot-wait-time Specifies a boot wait time with a decimal number. Either 0 seconds (no boot
wait time) or a value between 180 and 86,400 seconds can be set.
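For example, a 180-second boot wait time might be set as follows; the command path and the instance name fjpfca0 are assumptions (confirm the instance name with the fc_info command):

```
# /usr/sbin/FJSVpfca/fc_hbaprp -i fjpfca0 -w 180 <Return>
```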
-i adpname -l linkspeed
4G|4g : Sets 4 Gbps.
-i adpname -t topology
-i adpname -v
boot wait time    DISABLE or numeric value (decimal)    Sets the boot wait time, in seconds.
                                                        DISABLE disables the boot wait time.
interval time     DISABLE                               This item is not available and cannot be changed.
boot wait msg     DISABLE                               This item is not available and cannot be changed.
-i adpname -s savefile
boot wait time    DISABLE or numeric value (decimal)    Sets the boot wait time, in seconds.
                                                        DISABLE disables the boot wait time.
interval time     DISABLE                               This item is not available and cannot be changed.
boot wait msg     DISABLE                               This item is not available and cannot be changed.
Updates the boot code on the Fibre Channel card with the settings saved with -s.
If -r is specified, the boot code on the Fibre Channel card is updated with all settings, except for
the boot function.
If -R is specified, the boot code on the Fibre Channel card is updated with all settings, including
the boot function.
-i adpname -c conffile
Updates the boot code on the Fibre Channel card with the settings of the driver setting file
(/kernel/drv/fjpfca.conf).
-i adpname -C [-y]
If -y is not attached, a message asking if you are sure you want to erase the settings is displayed.
-i adpname -b ENABLE|DISABLE
Enables or disables the Fibre Channel card boot function. One of the following values can be
set:
Note
Enabling or disabling the boot function on either port of the dual-channel 4Gbps Fibre Channel
card (SE0X7F12x) automatically applies the same setting to the other port. The setting cannot
be changed for one port alone.
Example
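The example body is not shown above; enabling the boot function might look like the following sketch (the command path and the instance name fjpfca0 are assumptions):

```
# /usr/sbin/FJSVpfca/fc_hbaprp -i fjpfca0 -b ENABLE <Return>
```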
A.2 Command Executable on the OBP
Before starting the procedure, set the server to maintenance mode and restart it. If you use SPARC
Enterprise T1000/T2000/T5120/T5140/T5220/T5240/T5440, execute the following command:
If you use SPARC Enterprise M3000/M4000/M5000/M8000/M9000, set the server mode switch to
service mode and execute the following command:
ok reset-all <RETURN>
If an attempt is made to configure the Fibre Channel card in any other condition, the card could hang up.
If that happens, turn the power to the server off, and then turn it back on.
To carry out these commands, you need to move to the device node of the Fibre Channel card to be
configured.
Example: single-channel 4Gbps Fibre Channel card (SE0X7F11x) and dual-channel 4Gbps Fibre
Channel card (SE0X7F12x) mounted on a server
ok show-devs <RETURN>
/pci@1,700000
/pci@2,600000
(Omitted)
/openprom
/chosen
/packages
/pci@1,700000/fibre-channel@0      physical path name of the single-channel 4Gbps Fibre
Channel card
/pci@2,600000/fibre-channel@0      physical path name of the dual-channel 4Gbps Fibre
Channel card port0
/pci@2,600000/fibre-channel@0,1    physical path name of the dual-channel 4Gbps Fibre
Channel card port1
ok cd /pci@1,700000/fibre-channel@0 <RETURN>
ok
1 fjpfca-set-bootfunction
Name
fjpfca-set-bootfunction
Format
Function
Note) The server must be restarted, or the reset-all command must be carried out, after this
command has been carried out. When configuring more than one card, remember to carry
out reset-all for every card mounted.
Operands
Note
Enabling or disabling the boot function on either port of the dual-channel 4Gbps Fibre Channel
card (SE0X7F12x) automatically applies the same setting to the other port. The setting cannot
be changed for one port alone.
Example
2 fjpfca-output-prop
Name
fjpfca-output-prop
Format
fjpfca-output-prop
Function
boot wait time    DISABLE or numeric value (decimal)    Sets the boot wait time, in seconds.
                                                        DISABLE disables the boot wait time.
boot wait msg     DISABLE                               This item is not available and cannot be changed.
Example
ok fjpfca-output-prop <RETURN>
boot function : ENABLE
topology : AUTO
link-speed : AUTO
boot wait time : DISABLE ( interval time : DISABLE , boot wait msg : DISABLE )
bind-target: Target_ID=0,WWPN=0x210000e0001014d9
3 fjpfca-set-linkspeed
Name
fjpfca-set-linkspeed
Format
1g | 2g | 4g | auto fjpfca-set-linkspeed
Function
Operands
1g : Sets 1 Gbps.
2g : Sets 2 Gbps.
4g : Sets 4 Gbps.
Example
ok 1g fjpfca-set-linkspeed <RETURN>
ok 2g fjpfca-set-linkspeed <RETURN>
ok 4g fjpfca-set-linkspeed <RETURN>
ok auto fjpfca-set-linkspeed <RETURN>
Default value
auto
Note
4 fjpfca-set-topology
Name
fjpfca-set-topology
Format
Function
Sets a topology.
Operands
Example
Default value
auto
Note
5 fjpfca-bind-target
Name
fjpfca-bind-target
Format
Function
Operands
target-wwpn: Specifies WWPN of the target device (boot device specification by WWPN).
target-alpa: Specifies Port_ID (DID) of the target device (boot device specification by
Port_ID).
target-did: Specifies Port_ID (DID) of the target device (boot device specification by
Port_ID).
value2: Specifies Port_ID (DID) or WWPN with a hexadecimal number.
Example
Note
6 led-flash
Name
led-flash
Format
[sec-time] led-flash
Function
Flashes the LED on the Fibre Channel card (for 10 seconds by default and for 60 seconds at the
longest).
Use this command to verify the location of the Fibre Channel card and where the ports are
positioned (on a dual-channel 4Gbps Fibre Channel card).
Operands
A number preceded by d# is assumed to be a decimal number. All other numbers are assumed
to be hexadecimal.
Example
Note
7 fjpfca-set-boot-wait-time
Name
fjpfca-set-boot-wait-time
Format
Function
If a power supply interlock control is implemented between the server and a disk array device,
it is necessary to let the disk array device start up before launching a boot sequence. Use the
fjpfca-set-boot-wait-time command to delay the launch of the boot sequence for a specified
period of time.
The Fibre Channel card monitors the status of the disk array device even while the OS waits to
be booted, and starts booting automatically as soon as it confirms that the disk array device has
started up, even before the specified period of time expires.
The period of time for which the boot sequence is delayed can be set in seconds between 180
and 86,400 seconds.
Referring to the manual supplied with the disk array device, set (for the boot wait time) the
amount of time that it takes for the system to enter the READY state after the POWER switch is
pressed.
Operands
Specify a wait time (in seconds) at boot time with a hexadecimal number.
Example
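The example body is not shown above. Following the stack-order pattern of the other OBP commands in this appendix, and given that the operand is hexadecimal, a 180-second (b4 in hexadecimal) wait time might be set along these lines (the exact syntax is an assumption):

```
ok b4 fjpfca-set-boot-wait-time <RETURN>
```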
Default value
DISABLE
Note
8 fjpfca-info
Name
fjpfca-info
Format
Function
Operands
PROBE Displays a list of target devices connectable from the Fibre Channel card.
Example
Note
9 fjpfca-target-cancel
Name
fjpfca-target-cancel
Format
tgt_id fjpfca-target-cancel
Function
Operands
Example
ok 0 fjpfca-target-cancel <RETURN>
Note
10 fjpfca-all-target-cancel
Name
fjpfca-all-target-cancel
Format
fjpfca-all-target-cancel
Function
Example
ok fjpfca-all-target-cancel <RETURN>
delete all bind registration ? [ y(Y),n(N) ] y
Note
Appendix B Checking the Fibre Channel
Card Boot Code Version
Number
This appendix explains how to confirm the boot code (firmware) version number of the Fibre Channel
card.
There are two ways to perform this confirmation: checking from the OS and checking from
OBP.
View /var/adm/messages to check the boot code version number from the following display:
If you use SPARC Enterprise M3000/M4000/M5000/M8000/M9000, set the server mode switch to
service mode and execute the following command:
ok reset-all <RETURN>
Move to the node of the Fibre Channel card whose boot code version number is to be confirmed, and
execute the .properties command.
Read the value of fjpfca_fcode_vl.
ok cd /pci@1,700000/fibre-channel@0 <RETURN>
ok .properties <RETURN>
status okay
fru PCI slot(PCI#08)
component-name PCI#08
assigned-addresses 81001814 00000000 00000700 00000000 00000100
(Omission)
fjpfca_fcode_vl v12l30
(Omission)
ok
Appendix C Recording SAN Boot Setting
Information
Record in a separate document the values listed in the "SAN Boot Setting Information" table that were
set at installation. Whenever a Fibre Channel card in use is replaced, reconfigure the same set of "SAN
Boot Setting Information" values on the replacement card.
The reconfiguration sequence differs depending on which of the following cases applies:
1. Active replacement of the card while the OS is running on an alternate path.
2. Cold replacement of the card while the OS can boot on an alternate path.
3. Cold replacement of the card when the OS cannot boot on any path.
When a card is replaced in Cases 1 and 2 above, the SAN boot setting information can be reconfigured
on the new card from the Fibre Channel driver environment definition files using the fc_hbaprp
command. The "SAN Boot Setting Information" is required when replacing a card in Case 3; in this
case, use a command executable on the OBP to reconfigure the SAN boot setting information.
Keep a record of this setting information for every Fibre Channel card used for booting the OS.
"SAN Boot Setting Information"
Appendix D Making Fixes to Setting Files
after a Boot Failure
Errors in a SAN Boot setting file (such as sd.conf, mplb.conf, or /etc/system) can prevent the OS from
starting up. If a boot failure occurs for this reason, start the OS in single-user mode, mount the
system disk on the disk array device, and fix the setting file as instructed in this appendix.
The way the OS is started depends on which of the following ways has been used to install the OS on the
system disk on a disk array device.
System disk installation as per Section 4.1.1, "Creating a boot disk using a network install server"
System disk installation as per Section 4.1.2, "Creating a boot disk by copying an existing boot disk
residing on an internal disk"
ok reset-all <RETURN>
action: The pool can be imported using its name or numeric identifier.
config:
raid_pool ONLINE
c7t16d0s0 ONLINE
# zpool import 9153334525621735888 <RETURN>    Specify the ID confirmed with zpool import.
# zfs list <RETURN>
NAME USED AVAIL REFER MOUNTPOINT
raid_pool 5.98G 92.5G 93K /raid_pool
raid_pool/ROOT 4.98G 92.5G 18K legacy
raid_pool/ROOT/s10_1008 4.98G 92.5G 4.98G /
raid_pool/dump 512M 92.5G 512M -
raid_pool/export 38K 92.5G 20K /export
raid_pool/export/home 18K 92.5G 18K /export/home
raid_pool/swap 512M 92.9G 88.0M -
# zfs set mountpoint=legacy raid_pool/ROOT/s10_1008 <RETURN>
# mount -F zfs raid_pool/ROOT/s10_1008 /mnt <RETURN>
5. When the fix is complete, unmount the system disk and return to the OBP environment as
instructed below.
UFS file system
# cd / <RETURN>
# umount /mnt <RETURN>
# cd / <RETURN>
# umount /mnt <RETURN>
# zfs set mountpoint=/ raid_pool/ROOT/s10_1008 <RETURN>
D.2 If the OS Has Been Installed As Per Section
4.1.2, "Creating a boot disk by copying an
existing boot disk residing on an internal
disk"
1. Initialize the OBP environment.
ok reset-all <RETURN>
raid_pool ONLINE
c7t16d0s0 ONLINE
# zpool import 9153334525621735888 <RETURN>    Specify the ID confirmed with zpool import.
# zfs list <RETURN>
NAME USED AVAIL REFER MOUNTPOINT
raid_pool 5.98G 92.5G 93K /raid_pool
raid_pool/ROOT 4.98G 92.5G 18K legacy
raid_pool/ROOT/s10_1008 4.98G 92.5G 4.98G /
raid_pool/dump 512M 92.5G 512M -
raid_pool/export 38K 92.5G 20K /export
raid_pool/export/home 18K 92.5G 18K /export/home
raid_pool/swap 512M 92.9G 88.0M -
# zfs set mountpoint=legacy raid_pool/ROOT/s10_1008 <RETURN>
# mount -F zfs raid_pool/ROOT/s10_1008 /mnt <RETURN>
5. When the fix is complete, unmount the system disk and return to the OBP environment as
instructed below.
UFS file system
# cd / <RETURN>
# umount /mnt <RETURN>
# cd / <RETURN>
# umount /mnt <RETURN>
# zfs set mountpoint=/ raid_pool/ROOT/s10_1008 <RETURN>
Appendix E Fibre Channel Driver/Boot
Code Auto-Target Binding
Functions
This appendix describes the Fibre Channel driver/boot code auto-target binding functions.
The Fibre Channel driver connects all target devices attached to a fabric device to the lowest available
Target_ID in ascending order of WWNs.
[Figure: A server's FC card connects through an FC switch to ETERNUS #1 and ETERNUS #2;
each device's CM0 and CM1 ports are bound to Target_ID 0 through Target_ID 3.]
In the example shown above, the Fibre Channel driver automatically connects target disk array devices in
ascending order of WWNs as follows:
The Fibre Channel driver auto-target binding function is designed with primary emphasis on building a
SAN Boot environment effortlessly, and its use is recommended only for the purpose of building an
environment.
If the Fibre Channel driver auto-target binding function is used in normal operations, it might fail to
complete intended target device connections owing to the effect of the failure of target devices or the like.
The use of fcp-bind-target to make target device connections is recommended for normal operations.
For information about making target device connections using fcp-bind-target, refer to the "FUJITSU
PCI Fibre Channel Guide."
E.2 Fibre Channel Boot Code Auto-Target
Binding Function
This section describes the Fibre Channel boot code auto-target binding function.
The Fibre Channel boot code auto-target binding function lets the Fibre Channel boot code detect target
devices automatically without requiring their definitions in fjpfca-bind-target, allowing only those target
devices having the lowest Port-ID assigned to them to be connected to a server for SAN Boot.
[Figure: ETERNUS #1 and ETERNUS #2 connected to the server; the target devices at
ETERNUS #1 CM0 hold the lowest assigned Port_ID.]
In the example shown above, target devices located at ETERNUS#1 CM0 with the lowest assigned
Port_ID are connected to implement SAN Boot.
Because the Fibre Channel boot code auto-target binding function focuses on effortless implementation
of SAN Boot, its use is recommended only during environment construction and in environments in
which FC switch-based zoning is implemented. If the function is used elsewhere, it might fail to
complete the intended target device connections owing to failures of target devices or the like. In other
environments, the use of fjpfca-bind-target to configure target device connections manually is
recommended. For information about making target device connections using fjpfca-bind-target, see
Appendix A, "Boot Device Setup Commands."
If SAN Boot is implemented with the boot code auto-target binding function, the target device
information is imparted to the Fibre Channel driver, enabling it to make target device connections
automatically (fcode-auto-bind function). For more information about the fcode-auto-bind function, refer
to "FUJITSU PCI Fibre Channel Guide."
Appendix F SAN Boot release procedure
To release the multipath definition of the boot disk, perform the following procedure.
The system may become unbootable if the definition is released by any method other than this
procedure, or if a mistake is made in the procedure.
If the boot disk is mirrored with PRIMECLUSTER GDS, release the mirroring first, and then perform
the following procedure.
1. In a SAN Boot environment using the UFS file system, edit the /etc/vfstab file to rewrite the
mount points.
In a SAN Boot environment using the ZFS file system, proceed to Step 2.
/dev/FJSVmplb/dsk/mplb0s0 /dev/FJSVmplb/rdsk/mplb0s0
↓ ↓
/dev/dsk/c2t16d0s0 /dev/rdsk/c2t16d0s0
The former mount point is one of the paths constituting the multipath, as displayed by the iompadm command.
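The rewrite above can be sketched with sed; this is an illustrative sketch, not a command from this manual. The path c2t16d0 is this section's example; substitute the path reported by the iompadm command. Shown here against a sample line; run the same sed against /etc/vfstab and review the output before installing it.

```shell
# Rewrite the mplb multipath device paths to the underlying sd device paths.
echo '/dev/FJSVmplb/dsk/mplb0s0 /dev/FJSVmplb/rdsk/mplb0s0 / ufs 1 no -' |
sed -e 's|/dev/FJSVmplb/dsk/mplb0|/dev/dsk/c2t16d0|g' \
    -e 's|/dev/FJSVmplb/rdsk/mplb0|/dev/rdsk/c2t16d0|g'
# -> /dev/dsk/c2t16d0s0 /dev/rdsk/c2t16d0s0 / ufs 1 no -
```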
# mplbconfig -r
Cannot unload module: mplb
Will be unloaded upon reboot.
Forcing update of mplb.conf.
3. Edit /kernel/drv/mplbt.conf file to delete all the definitions.
Example) Delete the following definitions.
mplbh-path-0="pci10cf,1178-0-10" mplbh-path-1="pci10cf,1178-1-10"
mplbh-disk-name="E30004641- 130011-0010";
mplbh-detect-disk-num=1;
mplbh-detect-disk-0="E30004641- 130011-0010";
mplbh-used-path-num=2;
mplbh-used-path-0="pci10cf,1178-0-10";
mplbh-used-path-1="pci10cf,1178-1-10";
rootdev: /pseudo/mplb@0:a
forceload: drv/mplbt
forceload: drv/mplb
forceload: drv/sd
forceload: drv/mplbt
forceload: drv/mplb
forceload: drv/sd
# update_drv -f sd
Cannot unload module: sd
Will be unloaded upon reboot.
Forcing update of sd.conf.
# touch /reconfigure
# reboot
10. If the dump device was changed, restore it to the original device. (SAN Boot environments
using the UFS file system only)
# dumpadm -d /dev/dsk/c2t16d0s3
115