
C120-E392-05EN

SPARC Enterprise – ETERNUS


SAN Boot Environment Build Guide
- Solaris (TM) Operating System -

Version 1.3
Preface

Purpose

This manual explains how to mount a Fibre Channel card on SPARC Enterprise and build a SAN Boot
environment, in which the OS can be booted from an ETERNUS storage system.

Disk array devices other than ETERNUS are not described in this guide. See Section 2.1.1, "Required
Hardware."

Intended Audience

This manual is intended for builders and administrators of SAN Boot environments.

Organization

This manual consists of the following chapters:

Chapter 1 Overview

Introductory information about a SPARC Enterprise SAN Boot environment

Chapter 2 Hardware/Software Configuration

Patterns of configuration prerequisite to building a SAN Boot environment

Chapter 3 Precautions

Precautions in building and running a SAN Boot environment

Chapter 4 Building an OS Boot Environment

How to build a SAN Boot environment

Chapter 5 Backing Up and Restoring Boot Disks

How to back up and restore boot disks in a SAN Boot environment


Notation

The following notational conventions are used in this manual:

 Solaris(TM) 10 Operating System is noted as "Solaris 10."

 Actual commands appear in boldface.

# /usr/sbin/FJSVpfca/fc_info -a <Return>
Trademark Notice

Sun, Sun Microsystems, the Sun Logo, Solaris and all Solaris based marks and logos are trademarks or
registered trademarks of Sun Microsystems, Inc. in the U.S. and other countries, and are used under
license.

Fujitsu Limited

April 2009

Edition 1.3 Apr 2009

Notice

 This manual shall not be copied without the permission of the publisher.

 The contents of this manual are subject to change without notice.

All Rights Reserved, Copyright (C) Fujitsu Limited 2006-2009


Manual Update History

Edition  Date published  Description

1        May 23, 2007    First release

1.1      Sep 14, 2007    Support for the ETERNUS2000 series
                         Support for SPARC Enterprise T5120/T5220

1.2      Oct 14, 2008    Support for Solaris 10 10/08
                         Support for SPARC Enterprise T5140/T5240/T5440/M3000

1.3      Apr 3, 2009     Changed the restore procedure
                         Changed the verification command procedure for Custom JumpStart
Contents

Chapter 1 Overview............................................................................. 1
1.1 Configuration Patterns........................................................................................ 5
1.1.1 Basic configuration................................................................................................................. 5
1.1.2 Disk array device mirroring configuration based on PRIMECLUSTER GDS ....................... 7
1.1.3 Cluster configuration based on PRIMECLUSTER................................................................. 8
1.1.4 ETERNUS Configuration required to use the Advanced Copy function ................................ 9

Chapter 2 Hardware/Software Configuration................................ 14


2.1 Hardware Environment .................................................................................... 14
2.1.1 Required Hardware............................................................................................................... 14
2.1.2 Boot disk configuration ........................................................................................................ 15

2.2 Software Environment...................................................................................... 17


2.2.1 Required software................................................................................................................. 17
2.2.2 Optional software.................................................................................................................. 18

Chapter 3 Precautions ....................................................................... 20

Chapter 4 Building an OS Boot Environment ................................ 24


4.1 Creating a Boot Disk on a Disk Array Device ................................................. 26
4.1.1 Creating a boot disk using a network install server............................................................... 26
4.1.1.1 Creating a network install server ...................................................................................... 27
4.1.1.2 Configuring a network install server................................................................................. 28
4.1.1.3 Labeling disks................................................................................................................... 33
4.1.1.4 Configuring Custom JumpStart ........................................................................................ 35
4.1.1.5 Configuring the Fibre Channel boot code......................................................................... 40
4.1.1.6 Executing network installation ......................................................................................... 44
4.1.2 Creating a boot disk by copying an existing boot disk residing on an internal disk.............. 45
4.1.2.1 Getting ready to copy the boot disk to a disk array device ............................................... 46
4.1.2.2 Creating a boot disk.......................................................................................................... 46
4.1.2.3 Editing mount table information....................................................................................... 50
4.1.2.4 Configuring the Fibre Channel boot code......................................................................... 52
4.1.2.5 Resetting the server .......................................................................................................... 52
4.1.2.6 Booting from a disk array device...................................................................................... 53
4.2 Making the Path to a Boot Disk Redundant......................................................54
4.2.1 Installing the Enhanced Support Facility...............................................................................54
4.2.2 Configuring the ETERNUS Multipath Driver.......................................................................55
4.2.2.1 Single system (non-cluster system) ...................................................................................55
4.2.2.2 Cluster system...................................................................................................................63
4.3 Boot Disk Mirroring .........................................................................................73
4.3.1 Mirroring by PRIMECLUSTER GDS ..................................................................................73
4.3.2 Notes on using PRIMECLUSTER ........................................................................................75
4.3.2.1 Cluster system building procedure....................................................................................75

Chapter 5 Backing Up and Restoring Boot Disks........................... 76


5.1 Backing Up/Restoring after Booting OS from a Network ................................77
5.1.1 Backup procedure .................................................................................................................77
5.1.2 Restore procedure .................................................................................................................79

5.2 Backing Up/Restoring after Booting the OS from an Internal Disk .................82
5.2.1 Backup procedure .................................................................................................................82
5.2.2 Restore procedure .................................................................................................................84

Appendix A Boot Device Setup Commands....................................... 88


A.1 Command Executable on the OS ......................................................................88

A.2 Command Executable on the OBP ...................................................................93

Appendix B Checking the Fibre Channel Card Boot Code Version Number .................. 102
B.1 Checking on the OS ........................................................................................102

B.2 Checking on the OBP......................................................................................102

Appendix C Recording SAN Boot Setting Information ................. 104

Appendix D Making Fixes to Setting Files after a Boot Failure.... 106


D.1 If the OS Has Been Installed As Per Section 4.1.1, "Creating a boot disk using a network install server" ................................................106

D.2 If the OS Has Been Installed As Per Section 4.1.2, "Creating a boot disk by copying an existing boot disk residing on an internal disk"...............108
Appendix E Fibre Channel Driver/Boot Code Auto-Target Binding Functions ............ 110
E.1 Fibre Channel Driver Auto-Target Binding Function .................................... 110

E.2 Fibre Channel Boot Code Auto-Target Binding Function ............................. 112

Appendix F SAN Boot release procedure........................................ 113

F.1 ETERNUS Multipath Driver.......................................................................... 113


Chapter 1 Overview

The term SAN Boot refers to having an operating system (OS) or application stored in external SAN
storage, not on an internal disk in a server, and starting (that is, booting) the OS or application from there.

This document describes the workflow for building a SAN Boot environment, in which a Fibre Channel
card is mounted on a server to boot the OS from an ETERNUS storage system (RAID).

Having an OS boot disk on an external disk array device offers the following advantages:

1. Enhanced availability
 Use of a high-reliability disk array (RAID) device

Enhanced reliability results from managing the boot disk on a disk array (RAID) device.

 Backup/restore work made more efficient

Use of the disk copy feature of a disk array device drastically cuts the period during which
business is stopped for backing up and restoring system volumes. The CPU load incurred while
backing up and restoring the system volumes is also reduced.

For more details, see Section 1.1.4, "ETERNUS Configuration required to use the Advanced
Copy function."

Note
ETERNUS SF AdvancedCopy Manager (ACM) or PRIMECLUSTER GDS Snapshot is
required to use the disk copy feature (Advanced Copy feature) of ETERNUS (disk array device).

2. Greater ease of operation management


 System volumes kept under consolidated management

Boot disks that were previously spread among multiple servers can be kept under consolidated
management, as they are contained in a single disk array device.

 Development environment generation management

Multiple development environments maintained on a single disk array device can be switched,
as required. This eliminates the need to keep a server for each development environment,
thereby allowing the number of servers and operational workload to be reduced.

3. Better maintainability
 Handling of disk failures made simpler

If a disk (system volume) fails, the system administrator simply notifies the service engineer in
charge and has the engineer replace the disk; the system then recovers automatically. The
system administrator's workload is thus lightened.

 Applying patches made simpler

Use of the disk copy feature of a disk array device shortens the business downtime needed to
back up system volumes before patches are applied to them. With the OS configured to boot from
a backup volume (*1), if a problem occurs after the patches are applied, the system can be
rolled back to its pre-patch state by rebooting the server and switching the boot volume. For
more details, see Section 1.1.4, "ETERNUS Configuration required to use the Advanced Copy
function."

(*1) PRIMECLUSTER GDS Snapshot provides this functionality with a simple command
operation.

1.1 Configuration Patterns
The OS can be booted from an external disk array (RAID) device using Fibre Channel cards in any of the
Fibre Channel connection configuration patterns shown below. Points to watch for each configuration
pattern are also given.

1.1.1 Basic configuration


1. Using a disk array device from a single server

 Implement a multipath configuration pattern based on the ETERNUS multipath disk driver,
in which at least two routes of Fibre Channel connections are maintained between the
server and each disk array device.

 If disk swaps occur, they could degrade application disk access performance. Actions that
help prevent disk swaps include adding memory to the server or reducing the amount of
memory used by applications.

 If a server that is not equipped with an internal disk is used, an install server is required to
install the OS and recover the boot disk.

2. Using a disk array device from multiple servers

 Fabric connection

 FC-AL direct connection

 Implement a multipath configuration pattern based on the ETERNUS multipath disk driver,
in which at least two routes of Fibre Channel connections are maintained between each
server and the disk array device.

 If disk swaps occur, they could degrade application disk access performance. Actions that
help prevent disk swaps include adding memory to each server or reducing the amount of
memory used by applications.

 When a server panics, the other servers having their boot disks placed in the same RAID
group as the server might suffer degraded boot disk access performance for several tens of
seconds. See "2.1.2 Boot disk configuration."

 If a server that is not equipped with an internal disk is used, a dedicated install server for
installing the OS and recovering the boot disk is required.

1.1.2 Disk array device mirroring configuration based on PRIMECLUSTER GDS

 With the 1 Gbps/2 Gbps Fibre Channel card (PW008FC3), one card was required per path.
With the single-channel 4 Gbps Fibre Channel card (SE0X7F11x) and the dual-channel
4 Gbps Fibre Channel card (SE0X7F12x), in contrast, two Fibre Channel cards are enough
to build a disk array device mirroring configuration based on PRIMECLUSTER GDS,
because these cards can recognize boot disks on multiple disk array devices.
 Implement a multipath configuration pattern based on the ETERNUS multipath disk driver,
in which at least two routes of Fibre Channel connections are maintained between each
server and a disk array device.

 If disk swaps occur, they could degrade application disk access performance. Actions that
help prevent disk swaps include adding memory to each server or reducing the amount of
memory used by applications.

 When a server panics, the other servers having their boot disk placed in the same RAID
group as the server may suffer degraded boot disk access performance for several tens of
seconds.

 If a server that is not equipped with an internal disk is used, a dedicated install server for
installing the OS and recovering the boot disk is required separately.

1.1.3 Cluster configuration based on PRIMECLUSTER


A cluster system based on a SAN Boot environment may also be built.

1. Cluster configuration based on a single disk array device

2. Cluster configuration based on multiple disk array devices

1.1.4 ETERNUS Configuration required to use the Advanced Copy function
On a traditional system on which the OS is booted from an internal disk, business must be stopped for
a long period of time while backup/restore is executed using tape devices.

If backup/restore is instead executed using the ETERNUS Advanced Copy function (OPC/EC) in a SAN Boot
environment, business can continue while disk copying is in progress, thus drastically cutting the
duration of business stop.

Backup/restore using the Advanced Copy function is operable in two ways: one using ETERNUS SF
AdvancedCopy Manager (ACM) and one using PRIMECLUSTER GDS Snapshot (GDS Snapshot).

 Using ETERNUS SF AdvancedCopy Manager (ACM)

Use of ACM offers these advantages:

 Shorter period of business stop

 Higher efficiency in backing up multiple servers (consolidated management)

 Using PRIMECLUSTER GDS Snapshot (GDS Snapshot)

Use of GDS Snapshot offers the following advantages:

 Shorter period of business stop

 Restore made easier on a soft-mirroring configuration based on PRIMECLUSTER


GDS

The table below contains a summary description of the features of ACM and those of GDS Snapshot.

Select the method suited to your system requirements.

Note
Where system volumes are managed using PRIMECLUSTER GDS, their backup/restore is operable with
both ACM and GDS Snapshot, but use of GDS Snapshot is recommended for system volumes built on a
soft-mirroring configuration.
ACM and GDS Snapshot features
(A): Advantage

Operational server
  ACM: - A dedicated server for executing the backup/restore operations is required in
    addition to the server to be backed up and restored.
  GDS Snapshot: (A) No separate server is required, because the backup/restore operations
    are executed on the server to be backed up and restored.

Backup operation
  ACM: (A) Shut down the server to be backed up once, and launch OPC from its backup server.
    The server can be rebooted to resume suspended business without waiting for physical
    copying to complete.
  GDS Snapshot: - Reboot the server to be backed up in single-user mode, and launch OPC.
    When physical copying completes, reboot the server in multi-user mode to resume
    suspended business.

Restore operation (mirroring by PRIMECLUSTER GDS not implemented)
  ACM: (A) Shut down the server to be restored once, and launch OPC. The server can then be
    rebooted to resume suspended business without waiting for physical copying to complete.
  GDS Snapshot: (A) Suspended business can be resumed by rebooting the server to be restored
    and switching the boot volume.

Restore operation (mirroring by PRIMECLUSTER GDS implemented)
  ACM: - The mirrored disk needs to be disconnected and then reconnected before and after
    the OPC restore operation.
  GDS Snapshot: - When OPC physical copying completes, switch back to the original boot
    volume to resume suspended business.

Multi-server backup efficiency
  ACM: (A) Backup volumes on multiple servers can be placed under consolidated management
    from a single backup server.
  GDS Snapshot: - The backup/restore operations are executed on each individual server.

Function for booting from a backup volume
  ACM: - The mount point needs to be changed by editing the vfstab file.
  GDS Snapshot: (A) The function can be easily configured using commands.

For more feature details, refer to the ETERNUS SF AdvancedCopy Manager and PRIMECLUSTER
GDS manuals.

Instructions on how to verify the progress of OPC physical copying are included in the ETERNUS SF
AdvancedCopy Manager and PRIMECLUSTER GDS manuals.

Chapter 2 Hardware/Software Configuration

The hardware configuration and the software configuration described in this chapter are prerequisite to
booting the OS from an external disk array device using Fibre Channel cards.

2.1 Hardware Environment


2.1.1 Required Hardware
The following types of servers and Fibre Channel cards can be used:

Type: Server
  SPARC Enterprise T1000/T2000/T5120/T5140/T5220/T5240/T5440/M3000/M4000/M5000/M8000/M9000

Type: Fibre Channel card mounted on servers
  Single-channel 4 Gbps Fibre Channel card [SE0X7F11x]
  Dual-channel 4 Gbps Fibre Channel card [SE0X7F12x]

Type: Disk array device
  ETERNUS2000 Model 50/100/200
  ETERNUS3000 Model 80/100/300/500/700
  ETERNUS4000 Model 80/100/300/500
  ETERNUS6000 Model 500/700/900/1100
  ETERNUS8000 Model 700/900/1100/1200

Type: Fibre Channel switch
  ETERNUS SN200 Series

If a server that is not equipped with an internal disk is used, an install server is required separately to
install and recover the OS.

2.1.2 Boot disk configuration
SAN Boot has a system disk placed on the ETERNUS disk array. In addition to system disks, a variety
of disk volumes are placed on the disk array device. The way these disk volumes are placed on the disk
array device could affect the performance of access to system disks residing on other servers and user
data disks such as databases. To keep disk access performance unaffected, take the following precaution
in implementing the disk configuration:

 Do not place in the same RAID group a system disk area and areas (system and data disks)
that are accessible from other servers.

If any other kind of disk configuration is used, the following problem may occur:

 If multiple system disks are placed in the same RAID group, the occurrence of disk swaps
could result in degraded disk access performance for volumes residing in the same RAID
group.

 Placing a system disk and a shared data area in the same RAID group would degrade
access performance for the shared data area for several tens of seconds while a memory
dump is written after a server panic.

2.2 Software Environment


This section describes the required software.

2.2.1 Required software


The following software components are required:

Software: Solaris(TM) Operating System
Version: Solaris(TM) 10 Operating System
Remarks:
   Solaris 10 11/06 or higher.
   Solaris 10 8/07 or higher is required when SPARC Enterprise T5120/T5140/T5220 is used.
   Solaris 10 5/08 or higher is required when SPARC Enterprise T5240/T5440 is used.
   Solaris 10 10/08 or higher is required when SPARC Enterprise M3000 is used.

Software: FUJITSU PCI Fibre Channel driver
Version: 4.0 or higher

Software: ETERNUS Multipath Driver
Version: 2.0.1 or higher
Remarks:
   Patch 914267-04 or later needs to be installed.
   Patch 914267-05 or later is required when ETERNUS2000 is used.
   Patch 914267-07 is required when EMPD 2.0.1 is used on Solaris 10 10/08.

2.2.2 Optional software


 PRIMECLUSTER GDS is required to implement system volume mirroring between disk array
devices.

Software: PRIMECLUSTER GDS
Version: 4.2 or higher
Remarks:
   Patch 914423-03 or later needs to be installed.
   Patch 914423-05 or later is required when ETERNUS2000 is used.
   Patch 914423-08 or later is required when creating an alternate boot environment.

 One of the following PRIMECLUSTER products is required to build a cluster system:

Software: PRIMECLUSTER Enterprise Edition, PRIMECLUSTER HA Server, or PRIMECLUSTER Lite Pack
Version: 4.2 or higher
Remarks:
   Each of these products is bundled with PRIMECLUSTER GDS.
   Patches 901201-06 or later, 914325-03 or later, 914468-01 or later, and 914499-01 or
  later need to be installed.

 ETERNUS SF AdvancedCopy Manager is required to execute backup/restore using the ETERNUS
Advanced Copy function.

Software: ETERNUS SF AdvancedCopy Manager
Version: 13.0 or higher
Remarks:
   Both the agents and the manager are Solaris 10-ready.

Software: ETERNUS SF AdvancedCopy Manager tape agent license
Version: 13.0 or higher
Remarks:
   Required to back up to tape with ETERNUS SF AdvancedCopy Manager.
   One license copy is required for each tape agent installed.
   Tape agents are Solaris 10-ready.
   The ETERNUS SF AdvancedCopy Manager tape server option must be installed on the tape
  server. Note: the tape server supports only Solaris 8 and Solaris 9.

For more details on ETERNUS SF AdvancedCopy Manager, refer to the ETERNUS SF


AdvancedCopy Manager manual.

 PRIMECLUSTER GDS Snapshot is required to create snapshots of a system volume, or create


an alternate boot environment using the ETERNUS Advanced Copy function or the
PRIMECLUSTER GDS copy function.

Software: PRIMECLUSTER GDS Snapshot
Version: 4.2 or higher
Remarks:
   A PRIMECLUSTER product bundled with PRIMECLUSTER GDS is required to use this product.
   Patch 914457-02 or later needs to be installed.
   Patch 914457-03 or later is required when Solaris 10 10/08 or later is used, or when
  patch 137137-09 or later (which corresponds to Solaris 10 10/08) is applied.

For more details on PRIMECLUSTER GDS Snapshot, refer to "PRIMECLUSTER Global


Disk Service Guide."

Chapter 3 Precautions

1. The single-channel 4 Gbps Fibre Channel card (SE0X7F11x) and dual-channel 4 Gbps Fibre
Channel card (SE0X7F12x) support a boot code that allows the OS to be booted from a disk
array device connected to the Fibre Channel card. The Fibre Channel cards come with this
boot code disabled by default. The boot code needs to be enabled before the OS can be booted
using a Fibre Channel card.
Enabling or disabling the boot function on either port of the dual-channel 4 Gbps Fibre Channel
card (SE0X7F12x) automatically applies the same setting to the other port. It is not
possible to change the setting of one port alone.
For information about enabling and disabling the boot function on Fibre Channel cards, see
Appendix A.2, "fjpfca-set-bootfunction."

2. Be sure to install the OS on the boot disk using one of the methods described in Chapter 4.
A boot disk that has been used on another host cannot be copied with the dd command or with
ETERNUS EC (Equivalent Copy) or OPC (One Point Copy) and then used.
Boot disks cannot be created by any procedure other than those described in Chapter 4.

3. Keep a record of the boot configuration information set on the Fibre Channel card.
The boot configuration information is required when the Fibre Channel card is replaced,
because it needs to be reproduced on the new Fibre Channel card.
For information on the kinds of information to be recorded, see Appendix C, "Recording SAN
Boot Setting Information."

4. Fujitsu recommends not placing in the same RAID group the boot disks for different hosts.
For more details, see Section 2.1.2, "Boot disk configuration."

5. Do not create a huge file or a large number of files in /tmp (tmpfs).
When creating files in /tmp (tmpfs), take care to ensure that the size of space used by /tmp
(tmpfs) does not exceed the installed memory size.
If the size of space used by /tmp (tmpfs) should exceed the installed memory size as a result of
having created a huge file or a large number of files in /tmp (tmpfs), the system could slow
down due to a lack of sufficient memory available to it.
This precaution also applies when an internal disk is used as a boot disk.
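As a quick way to apply this precaution, the current usage of /tmp can be checked from the shell, and a size cap can be defined for tmpfs. This is a sketch only: the 512m cap shown in the comment is an example value to be adjusted to the installed memory size.

```shell
# Report the percentage of /tmp (tmpfs) space currently in use.
# The second line of df output holds the numbers; field 5 is the "capacity" (Use%) column.
usage=$(df -k /tmp | awk 'NR==2 { sub(/%/, "", $5); print $5 }')
echo "/tmp usage: ${usage}%"

# A hard cap on tmpfs can also be set in /etc/vfstab (example value: 512 MB):
#   swap  -  /tmp  tmpfs  -  yes  size=512m
```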

6. When using PRIMECLUSTER or PRIMECLUSTER GDS to build a system implementation,


see Section 4.3.2, "Notes on using PRIMECLUSTER," for additional tips.

7. The access path settings based on WWN (World Wide Port Name) described below are
recommended for the ETERNUS disk array device and the ETERNUS SN200 Fibre Channel
switch.
These settings remove the need to reconfigure these devices when introducing the resource
management software Systemwalker Resource Coordinator, facilitating and speeding up the
transition.

 ETERNUS disk array device

Configure the host table or enable the host affinity feature on the ETERNUS FC-CA port, and
register the WWN of the Fibre Channel card as a host World Wide Name. Refer to the relevant
Server Connection Guide for details on configuring the ETERNUS disk array device.

 ETERNUS SN200 Fibre Channel switch

Using the WWN of the Fibre Channel card and that of each disk array device, configure a one-
to-one WWN zoning plan, which establishes zoning using the WWN of the host HBA port and
that of the FC-CA port.
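As an illustration only: the ETERNUS SN200 series is based on a Brocade Fabric OS-style CLI, where one-to-one WWN zoning of the kind described above is typically set up along the following lines. The zone name, configuration name, and both WWNs below are hypothetical placeholders; confirm the actual command syntax in the switch manual.

```
# Create a zone pairing the host HBA port WWN with the ETERNUS FC-CA port WWN.
zonecreate "host1_ca0", "10:00:00:00:c9:aa:bb:cc; 50:00:00:e0:d0:11:22:33"
# Register the zone in a configuration, save it, and enable it.
cfgcreate "sanboot_cfg", "host1_ca0"
cfgsave
cfgenable "sanboot_cfg"
```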

8. Disks bearing the EFI (Extensible Firmware Interface) disk label cannot be used as a boot disk.
The EFI disk label supports disks larger than 1Tbyte on a system running the 64-bit Solaris
kernel. The OS cannot be booted from a disk bearing the EFI disk label, though.

9. The warning message shown below is displayed when a multipath is created. This message
may be ignored.
This message reports that the disk array device has received a SCSI RESET that is issued at
multipath build time and does not relate to any disk array device or server operation.
This message is displayed only when a new multipath is built or disk array devices are added.
If message monitoring is implemented, temporarily disable the monitoring process, or simply
ignore the message when it is displayed.

WARNING: /pci@1,700000/fibre-channel@0/sd@10,0 (sd805) :


Error for Command: write (10) Error Level: Retryable
Requested Block: 5651696 Error Block: 5651696
Vendor: FUJITSU Serial Number: 0000080115
Sense Key: Unit Attention
ASC: 0x29 (bus device reset message occurred) , ASCQ: 0x3, FRU: 0x1

10. If a system disk is mirrored using PRIMECLUSTER GDS, the following message may be
displayed at boot time, but simply ignore it, because there is no actual problem with the system:

NOTICE: "forceload: drv/<driver name> appears more than once in /etc/system.

* One of mplb, mplbt and sd appears in place of <driver name>.

This message is displayed when the forceload setting is defined more than once in the /etc/system
file. To suppress this message, delete the later duplicate occurrences of the forceload setting.

forceload: drv/mplb
~
forceload: drv/mplb    <- Delete this line.

11. When configuring the Fibre Channel driver, the link speed (transmission line speed) can be
set to automatic selection to facilitate connectivity, but the expected link speed (especially
4 Gbps) may not be attained, depending on the connection timing. In that case, set the link
speed explicitly in /kernel/drv/fjpfca.conf.

Example: Set fjpfca0 to a link speed of 4 Gbps.

port=
"fjpfca0:nport:sp4";

For instructions on how to set the link speed, refer to the "FUJITSU PCI Fibre Channel Guide."

12. Be sure to configure LUN 0 (host logical unit number 0) on ETERNUS. The Fibre Channel boot
code uses LUN 0 to recognize ETERNUS.

13. When installing Solaris 10 10/08 or later from an install server, the install server must run
Solaris 10 10/08 or later, or Solaris 10 with patch 137137-09 or later applied; otherwise the
driver packages cannot be installed.
Sun Microsystems, Inc. does not recommend building a ZFS file system environment on the disk
array device. Refer to the "Solaris ZFS Administration Guide" for details.

14. When backing up and restoring data in a ZFS environment, use Solaris 10 10/08 or Solaris 10
with patch 137137-09 or later applied; otherwise the system volume may fail to be imported.

15. In a SAN Boot environment that uses the ZFS file system, the system disk cannot be mirrored
by PRIMECLUSTER GDS.

16. When restoring a boot disk, use the boot block on the restore destination device when
creating the boot block.
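For a UFS boot disk, this step can be sketched as follows: mount the restore destination's root slice and install the boot block using the bootblk file found on that restored file system. The device name c0t0d0s0 below is a hypothetical placeholder for the restore destination.

```
# mount /dev/dsk/c0t0d0s0 /mnt
# /usr/sbin/installboot /mnt/usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c0t0d0s0
```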

17. The OS boot process might hang due to a problem with the Fibre Channel card (SE0X7F11x,
SE0X7F12x). In this case, power the server off, power it back on, and boot again.

Chapter 4 Building an OS Boot Environment

This chapter describes the following topic:

 Building an OS boot environment from a disk array device

Before performing this procedure, set up a disk array device to make available a LUN in which to create a
boot disk.

If a server that is not equipped with an internal disk is used, only the method described in Section 4.1.1,
"Creating a boot disk using a network install server," can be used. Since the FUJITSU PCI Fibre Channel
driver is not included in the Solaris OS, an OS boot environment cannot be built from the Solaris OS
CD/DVD media.

Boot environments cannot be built in any procedure other than that described in this guide.

In Section 4.1.1, "Creating a boot disk using a network install server," the method of identifying the disk
array device to become a boot disk can be automated. This method facilitates the job of configuring the
Fibre Channel driver and the Fibre Channel boot code. For more details, see Section 4.1.1.2,
"Configuring a network install server."

If a disk array device is identified through automatic setting, set zoning with the FC Switch to disable a
Fibre Channel card from connecting to multiple disk array devices. In an environment in which a Fibre
Channel card is allowed to connect to multiple disk array devices, implement disk array device
identification manually (manual setting).

For information on building a cluster environment, see Section 4.3.2, "Notes on using
PRIMECLUSTER."

The workflow for building an OS boot environment is as follows:

Install the OS as instructed in Section 4.1, "Creating a Boot Disk on a Disk Array Device," and create a
boot disk on a disk array device.

Then, boot the OS in single-path mode as instructed in Section 4.1.2.6, "Booting from a disk array
device."

Lastly, define a multipath configuration as per Section 4.2, "Making the Path to a Boot Disk Redundant."

4.1 Creating a Boot Disk on a Disk Array Device
There are two ways to create a boot disk on a disk array device, as follows:
1. Creating a boot disk using a network install server
2. Creating a boot disk by copying an existing boot disk residing on an internal disk (only if the
server is equipped with an internal disk)

4.1.1 Creating a boot disk using a network install server
Executing a network install requires an install server, in addition to the host (install machine) that uses a
disk array device as a boot device.

Work with the install machine from the install machine console. Work with the install server from a
terminal, which is marked as "(INSTALL SERVER)" in the examples appearing in this guide.

4.1.1.1 Creating a network install server
Configure an install server to execute a network install. For more details on the work of creating an
install server, refer to "Solaris x x/x Release and Installation Collection" at docs.sun.com.

If multiple hosts each use a disk array device as a boot device, an OS image needs to be
created for each host. The hosts may, however, share a single install image in these situations:
1. Hosts use an AL direct Fibre Channel connection, sharing the same values of target ID and max
throttle.
2. Hosts use automatically configured disk array devices on an FC switch connection, sharing the
same values of target ID and max throttle.

Note) Multiple hosts of the same architecture type can share the same OS install image,
but the same OS install image cannot be used for a different architecture type. SPARC Enterprise
M3000/M4000/M5000/M8000/M9000 (sun4u) can share the same OS install image.
SPARC Enterprise T1000/T2000/T5120/T5140/T5220/T5240/T5440 (sun4v) can share the
same OS install image.

A sample flow for creating an image of the Solaris 10 OS is described below.

In this flow, /export/install/Solaris10_hostname is used as the name of a sample directory in which an
OS image is created.
1. Become a superuser on the install server.

(INSTALL SERVER) % su - <RETURN>


Password: password

2. Make a directory in which to create an OS image.

(INSTALL SERVER) # mkdir /export/install <RETURN>


(INSTALL SERVER) # cd /export/install <RETURN>
(INSTALL SERVER) # mkdir Solaris10_hostname <RETURN>

Include the host name hostname of the install machine in the created directory name to allow
management by host.

3. Mount the Solaris 10 Operating System DVD-ROM.

(INSTALL SERVER)# cd /cdrom/cdrom0/s0/Solaris_10/Tools <RETURN>


(INSTALL SERVER)# ./setup_install_server /export/install/Solaris10_hostname
<RETURN>

4. When copying of the Solaris 10 Operating System completes, unmount the DVD-ROM.

(INSTALL SERVER)# cd / <RETURN>
(INSTALL SERVER)# eject cdrom <RETURN>

4.1.1.2 Configuring a network install server


1. Register the IP/MAC address of an installation target machine on the network install server.

○ Register the IP address of an installation target machine

Edit the /etc/hosts file with a text editor.

When the IP address is "192.168.1.1", the entry is as follows:


192.168.1.1 hostname

○ Register the MAC address of an installation target machine

Edit the /etc/ethers file with a text editor.

When the MAC address is "0:80:17:28:1:f8", the entry is as follows:


0:80:17:28:1:f8 hostname
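Each /etc/ethers entry is simply a MAC address and a host name separated by whitespace; splitting such an entry can be sketched as follows (using the sample values above):

```shell
entry='0:80:17:28:1:f8 hostname'
mac=${entry%% *}    # everything before the first space: the MAC address
host=${entry##* }   # everything after the last space: the host name
echo "$mac maps to $host"
```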
2. To boot the installation target machine from the network, execute the add_install_client command
on the network install server.

The add_install_client parameters vary with the machine model.

SPARC Enterprise T1000/T2000/T5120/T5140/T5220/T5240/T5440

(INSTALL SERVER) # cd /export/install/Solaris10_hostname/Solaris_10/Tools/ <RETURN>


(INSTALL SERVER) # ./add_install_client hostname sun4v <RETURN>

SPARC Enterprise M3000/M4000/M5000/M8000/M9000

(INSTALL SERVER) # cd /export/install/Solaris10_hostname/Solaris_10/Tools/ <RETURN>


(INSTALL SERVER) # ./add_install_client hostname sun4u <RETURN>
3. Install the Fibre Channel driver in the OS install image on the network install server.

This step makes the disk array device connected to the Fibre Channel card identifiable. Mount
"FUJITSU PCI Fibre Channel 4.0" on the CD-ROM drive in the network install server and do
the following:

The process of installing the Fibre Channel driver varies with each target machine model.
 Solaris 10 5/08 or older
SPARC Enterprise T1000/T2000/T5120/T5140/T5220/T5240/T5440

(INSTALL SERVER) # cd /cdrom/cdrom0 <RETURN>


(INSTALL SERVER) # bin/pfcapkgadd.sh -R
/export/install/Solaris10_hostname/Tools/Boot/ -p sun4v <RETURN>

SPARC Enterprise M3000/M4000/M5000/M8000/M9000

(INSTALL SERVER) # cd /cdrom/cdrom0 <RETURN>

(INSTALL SERVER) # bin/pfcapkgadd.sh -R
/export/install/Solaris10_hostname/Tools/Boot/ -p sun4u <RETURN>

 Solaris 10 10/08 or higher

(1) Create the working directory for unpacking miniroot.

(INSTALL SERVER) # mkdir /tmp/work <RETURN>

(2) Unpack the miniroot to the work directory using the root_archive(1M) command.

If the /tmp/work/tmp/AdDrEm.lck file does not exist, skip the rm step in the following procedure.

(INSTALL SERVER) # /boot/solaris/bin/root_archive unpackmedia


/export/install/Solaris10_hostname /tmp/work <RETURN>
(INSTALL SERVER) # rm /tmp/work/tmp/AdDrEm.lck <RETURN>

- The following messages might be displayed when the root_archive command is executed. These
messages can be ignored.

umount: /tmp/mnt29984 busy


rmdir: directory "/tmp/mnt29984": Directory is a mount point or in use
lofiadm: could not unmap file /export/install/Solaris10_hostname/boot/sparc.miniroot:
Device busy
rmdir: directory "/tmp/mnt29984": Directory is a mount point or in use

(3) Install the Fibre Channel driver in the working directory.


SPARC Enterprise T1000/T2000/T5120/T5140/T5220/T5240/T5440

(INSTALL SERVER) # cd /cdrom/cdrom0 <RETURN>


(INSTALL SERVER) # bin/pfcapkgadd.sh -R /tmp/work/ -p sun4v <RETURN>

SPARC Enterprise M3000/M4000/M5000/M8000/M9000

(INSTALL SERVER) # cd /cdrom/cdrom0 <RETURN>


(INSTALL SERVER) # bin/pfcapkgadd.sh -R /tmp/work/ -p sun4u <RETURN>

(4) Pack the working directory.

(INSTALL SERVER) # mkdir -p /tmp/media/Solaris_10 <RETURN>


(INSTALL SERVER) # /tmp/work/boot/solaris/bin/root_archive packmedia /tmp/media
/tmp/work <RETURN>

Error messages might be displayed when the root_archive command is executed. These messages
can be ignored.

(5) Copy the file in /tmp/media directory to the installation image on the install server.

The target device path names for the "umount -f" and "lofiadm -d" commands can be confirmed
with the "df -k" command.

(INSTALL SERVER) # cd /tmp/media <RETURN>


(INSTALL SERVER) # find boot Solaris_10/Tools/Boot | cpio -pdum
/export/install/Solaris10_hostname <RETURN>
(INSTALL SERVER) # umount -f /dev/lofi/1 <RETURN>
(INSTALL SERVER) # lofiadm -d /dev/lofi/1 <RETURN>

Error messages might be displayed when the root_archive command is executed. These messages
can be ignored.

4. Verify the correspondence between the Fibre Channel card location and driver instance
number.

Boot the installation target machine from the network in single-user mode, with the -s option
specified.

ok boot net -s <RETURN>

5. Execute the following to verify the correspondence between the Fibre Channel card device
path and the driver instance:

# grep fjpfca /tmp/root/etc/path_to_inst <RETURN>


"/pci@1,700000/fibre-channel@0" 0 "fjpfca"
"/pci@2,600000/fibre-channel@0" 1 "fjpfca"

Each line of this command listing contains the device path, instance number and driver name, in
that order.

In the example above, the driver instance of the Fibre Channel card mounted at device path
"/pci@1,700000/fibre-channel@0" is fjpfca0.

The correspondence between device paths and server slot positions is described in the relevant
server’s user's guide. If any other server is used or if the relevant user's guide is not available for
reference, check the correspondence in the following way:

6. Flash the LED on the Fibre Channel card associated with the driver instance. For a
single-channel 4Gbps Fibre Channel card (SE0X7F11x), the flashing LED identifies the
Fibre Channel card location; for a dual-channel 4Gbps Fibre Channel card (SE0X7F12x),
it identifies the card location and the port position associated with the driver instance.
The LED at fjpfca0 can be flashed as instructed below. The LINK LED will flash for 3
minutes.

# /usr/sbin/FJSVpfca/fc_adm -l fjpfca0 <RETURN>

To stop the flashing of the LED, enter Ctrl-c (press the c key while holding down Ctrl key).

For information on using fc_adm, refer to "FUJITSU PCI Fibre Channel Guide."

7. Shut down the OS on the installation target machine and return to OBP.

# /usr/sbin/shutdown -g0 -i0 -y <RETURN>

8. Configure the definition files that are used for booting the OS at network install time.
When Solaris 10 10/08 or higher is installed, execute the following procedure beforehand.
 Solaris 10 10/08 or higher
Unpack the miniroot to the work directory using the root_archive(1M) command.

(INSTALL SERVER) # /boot/solaris/bin/root_archive unpackmedia


/export/install/Solaris10_hostname /tmp/work <RETURN>

- The following messages might be displayed when the root_archive command is executed. These
messages can be ignored.

umount: /tmp/mnt29984 busy


rmdir: directory "/tmp/mnt29984": Directory is a mount point or in use
lofiadm: could not unmap file /export/install/Solaris10_hostname/boot/sparc.miniroot:
Device busy
rmdir: directory "/tmp/mnt29984": Directory is a mount point or in use

Configure the following files on the network install server:

 Fibre Channel driver setting file


 Solaris 10 5/08 or older
{OS install path on the install server}/Tools/Boot/kernel/drv/fjpfca.conf
 Solaris 10 10/08 or higher
/tmp/work/kernel/drv/fjpfca.conf
Configure the Fibre Channel driver setting file (fjpfca.conf) to make the disk array
device on which to create a boot disk identifiable to the Fibre Channel driver. On a
direct connection (FC-AL), the specification of port, fcp-auto-bind-function and
fcp-bind-target is not mandatory. For more information on configuring fjpfca.conf, refer
to "FUJITSU PCI Fibre Channel Guide."

There are two ways to make a disk array device identifiable on a fabric connection
(using the FC Switch): automatic setting and manual setting. Automatic setting for
making a disk array device identifiable, when selected, offers the following benefits:

1. If multiple hosts use a disk array device as a boot device each, they can still
share and use the Solaris OS install image on the install server.

2. The Fibre Channel driver is made easier to configure.

These two methods of identifying a disk array device are described below.
a. [Automatic setting]
Example: For fjpfca0, set a fabric connection as the topology, a link speed of 4 Gbps,
and automatic setting as the method of disk array device identification.

port=
"fjpfca0:nport:sp4";
fcp-auto-bind-function=1;

port                    Defines the connection topology type and link speed.

fcp-auto-bind-function  Configures the disk array device to be identified automatically.

For more information about the automatic identification function, see Appendix
E, "Fibre Channel Driver/Boot Code Auto-Target Binding Functions."

If a disk array device is identified through automatic setting, set zoning with
the FC Switch to disable a Fibre Channel card from connecting to multiple disk
array devices. In an environment in which a Fibre Channel card is allowed to
connect to multiple disk array devices, allow disk array devices to be identified
through manual setting.
For information on how to set zoning with the FC Switch, refer to the relevant
FC switch’s manual.
b. [Manual setting]
Example: For fjpfca0, set a fabric connection as the topology, a link speed of 4 Gbps,
and binding of a disk array device with target ID 16.

port=
"fjpfca0:nport:sp4";
fcp-bind-target=
"fjpfca0t16:0x210000c0004101d9";

port             Defines the connection topology type and link speed.

fcp-bind-target  Specifies the target WWN.
 Target driver setting file
 Solaris 10 5/08 or older
{OS install path on the install server}/Tools/Boot/kernel/drv/sd.conf
 Solaris 10 10/08 or higher
/tmp/work/kernel/drv/sd.conf

Configure the target driver setting file (sd.conf) to make the logical unit (LU) of
the disk array device on which to create a boot disk identifiable. Define only the
boot disk on the disk array device. If an identifiable logical unit of the disk array
device is already defined, the entry may be omitted.
Example: Identify target ID 16 and logical unit 0.

name="sd" class="scsi" target=16 lun=0;

 When Solaris 10 10/08 or higher is installed, execute the following procedures
afterward.
(1) Pack the working directory.

(INSTALL SERVER) # mkdir -p /tmp/media/Solaris_10 <RETURN>


(INSTALL SERVER) # /tmp/work/boot/solaris/bin/root_archive packmedia
/tmp/media /tmp/work <RETURN>

Error messages might be displayed when the root_archive command is executed. These
messages can be ignored.

(2) Copy the file in /tmp/media directory to the installation image on the install server.

The target device path names for the "umount -f" and "lofiadm -d" commands can be
confirmed with the "df -k" command.

(INSTALL SERVER) # cd /tmp/media <RETURN>


(INSTALL SERVER) # find boot Solaris_10/Tools/Boot | cpio -pdum
/export/install/Solaris10_hostname <RETURN>
(INSTALL SERVER) # umount -f /dev/lofi/1 <RETURN>
(INSTALL SERVER) # lofiadm -d /dev/lofi/1 <RETURN>

Error messages might be displayed when the root_archive command is executed. These
messages can be ignored.

4.1.1.3 Labeling disks


1. Boot the installation target machine from the network in single-user mode.

ok boot net -s <RETURN>

2. Create a disk label on the lun that is used as a boot disk by executing the format(1M)
command, and then check the size of the lun.

# format <RETURN>

Searching for disks...done

AVAILABLE DISK SELECTIONS:
0. c7t16d0 <FUJITSU-ETERNUS-4000 cyl 1038 alt 2 hd 64 sec 256>
/pci@1,700000/fibre-channel@0/sd@10,0
Specify disk (enter its number): 0<RETURN>
selecting c7t16d0
[disk formatted]
Disk not labeled. Label it now? y <RETURN>
FORMAT MENU:
disk - select a disk
type - select (define) a disk type
partition - select (define) a partition table
current - describe the current disk
format - format and analyze the disk
repair - repair a defective sector
label - write label to the disk
analyze - surface analysis
defect - defect list management
backup - search for backup labels
verify - read and display labels
save - save new disk/partition definitions
inquiry - show vendor, product and revision
volname - set 8-character volume name
!<cmd> - execute <cmd>, then return
quit
format> partition <RETURN>

PARTITION MENU:
0 - change '0' partition
1 - change '1' partition
2 - change '2' partition
3 - change '3' partition
4 - change '4' partition
5 - change '5' partition
6 - change '6' partition
7 - change '7' partition
select - select a predefined table
modify - modify a predefined partition table
name - name the current table
print - display the current table
label - write partition map and label to the disk
!<cmd> - execute <cmd>, then return
quit
partition> print <RETURN>
Current partition table (original):
Total disk cylinders available: 4254 + 2 (reserved cylinders)

Part       Tag    Flag     Cylinders        Size            Blocks
  0       root    wm       0 -   15      128.00MB    (16/0/0)    262144
  1       swap    wu      16 -   31      128.00MB    (16/0/0)    262144
  2     backup    wu       0 - 4253       33.23GB  (4254/0/0)  69697536
  3 unassigned    wm       0                    0     (0/0/0)         0
  4 unassigned    wm       0                    0     (0/0/0)         0
  5 unassigned    wm       0                    0     (0/0/0)         0
  6        usr    wm      32 - 4253       32.98GB  (4222/0/0)  69173248
  7 unassigned    wm       0                    0     (0/0/0)         0

format> quit <RETURN>

3. Shut down the OS on the installation target machine and return to OBP.

# /usr/sbin/shutdown -g0 -i0 -y <RETURN>

4.1.1.4 Configuring Custom JumpStart


Configure Solaris Custom JumpStart to install a driver package. If Custom JumpStart is used, a driver
package is automatically installed and configured at the same time as Solaris is installed.

Perform this procedure on the install server.


1. Make a Custom JumpStart directory.

Make and share a jumpstart directory on the install server.

(INSTALL SERVER)# mkdir /jumpstart <RETURN>


(INSTALL SERVER)# share -F nfs -o ro,anon=0 /jumpstart <RETURN>

2. Copy a driver package, patch and install file.


Copy a driver package, patch and install file to the jumpstart directory on the install server.

Copy the CD image of FUJITSU PCI Fibre Channel to the jumpstart directory on the install
server.

(INSTALL SERVER)# mkdir /jumpstart/FJPFCA <RETURN>


(INSTALL SERVER)# cd /cdrom/cdrom0 <RETURN>
(INSTALL SERVER)# find . | cpio -pumd /jumpstart/FJPFCA <RETURN>

To install the FUJITSU PCI GigabitEthernet or FUJITSU ULTRA LVD SCSI Host Bus Adapter
Driver, execute the following.

 FUJITSU PCI GigabitEthernet 3.0 Update1 or higher

Copy the CD image of FUJITSU PCI GigabitEthernet 3.0 Update1 or higher to the
jumpstart directory on the install server.

(INSTALL SERVER)# mkdir /jumpstart/fjgi <RETURN>

(INSTALL SERVER)# cp -p /cdrom/cdrom0/install /jumpstart/fjgi/. <RETURN>
(INSTALL SERVER)# cp -p /cdrom/cdrom0/admin /jumpstart/fjgi/. <RETURN>
(INSTALL SERVER)# cp -pr /cdrom/cdrom0/FJSVgid_3.0/10/* /jumpstart/fjgi/.
<RETURN>

 FUJITSU PCI GigabitEthernet 4.0 or higher

Copy the CD image of FUJITSU PCI GigabitEthernet 4.0 or higher to the jumpstart
directory on the install server.

(INSTALL SERVER)# mkdir /jumpstart/fjgi <RETURN>


(INSTALL SERVER)# cp -p /cdrom/cdrom0/install_v4 /jumpstart/fjgi/. <RETURN>
(INSTALL SERVER)# cp -p /cdrom/cdrom0/admin /jumpstart/fjgi/. <RETURN>
(INSTALL SERVER)# cp -pr /cdrom/cdrom0/FJSVgid_4.0/10/* /jumpstart/fjgi/.
<RETURN>

 FUJITSU ULTRA LVD SCSI Host Bus Adapter Driver

Copy the CD image of FUJITSU ULTRA LVD SCSI Host Bus Adapter Driver to the
jumpstart directory on the install server.

(INSTALL SERVER)# mkdir /jumpstart/fjulsa <RETURN>


(INSTALL SERVER)# cp -p /cdrom/cdrom0/install /jumpstart/fjulsa/. <RETURN>
(INSTALL SERVER)# cp -p /cdrom/cdrom0/admin /jumpstart/fjulsa/. <RETURN>
(INSTALL SERVER)# cp -pr /cdrom/cdrom0/ultra_lvd_driver/10/*
/jumpstart/fjulsa/. <RETURN>

3. Copy a JumpStart sample

Copy the JumpStart sample file from the OS install image.

(INSTALL SERVER)# cp -r
/export/install/Solaris10_hostname/Solaris_10/Misc/jumpstart_sample/* /jumpstart
<RETURN>

4. Edit the profile

Edit the /jumpstart/profile file with a text editor.

Create a profile to meet the installation target machine configuration. Create one as instructed
in "Solaris Installation Guide: Custom JumpStart and Advanced Installations."

The profile settings differ depending on the file system of the system disk.

Profile sample setting : UFS file system

install_type initial_install  # The install_type parameter is required. Specify initial_install.
system_type server            # Specifies server as system_type.
partitioning explicit         # Specifies explicit as partitioning.
cluster SUNWCXall             # Specifies SUNWCXall (Entire Software Group Plus OEM Support)
                              # as the installed OS cluster.
filesys c7t16d0s1 4096 swap   # Assigns a swap file system space of 4096 MB to c7t16d0s1.
filesys c7t16d0s0 free /      # Assigns the rest of the disk space to c7t16d0s0.

Profile sample setting : ZFS file system

install_type initial_install          # The install_type parameter is required. Specify initial_install.
system_type server                    # Specifies server as system_type.
partitioning explicit                 # Specifies explicit as partitioning.
cluster SUNWCXall                     # Specifies SUNWCXall (Entire Software Group Plus OEM
                                      # Support) as the installed OS cluster.
pool newpool auto auto auto c7t16d0s0 # The size is automatically allocated in c7t16d0s0 and
                                      # newpool is created. The sizes of swap and dump created
                                      # in newpool are automatically allocated.
bootenv installbe bename sxce_xx      # The boot file system is created with the name sxce_xx
                                      # (newpool/ROOT/sxce_xx).

5. Copy a finish script sample

Copy a sample of the finish script from the FJPFCA directory to the /jumpstart directory as
finish.

(INSTALL SERVER)# cp
/jumpstart/FJPFCA/FJPFCA4.0/tool/FJPFCA_jumpstart_finish.sample
/jumpstart/finish <RETURN>

6. Edit the finish script

Edit /jumpstart/finish with a text editor. Edit the following parameters:

 JUMPSTART_HOST Specifies the host name or IP address of the install server.

 JUMPSTART_DIR Specifies the directory in which the JumpStart setting file is stored.
Edit this parameter only when a directory other than /jumpstart is used.
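For example, if the install server's address were 192.168.1.10 (a hypothetical value) and the default directory were used, the edited parameters would read:

```shell
JUMPSTART_HOST=192.168.1.10
JUMPSTART_DIR=/jumpstart
```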

To install the FUJITSU PCI GigabitEthernet or FUJITSU ULTRA LVD SCSI Host Bus Adapter
Driver, add the following lines. In the examples below, they are added under
"PF_ARCH=`uname -m`".

 FUJITSU PCI GigabitEthernet 3.0 Update1 or higher

${MNT}/fjgi/install -R /a -d ${MNT}/fjgi -p "$PF_ARCH"

 FUJITSU PCI GigabitEthernet 4.0 or higher

${MNT}/fjgi/install_v4 -R /a -d ${MNT}/fjgi -p "$PF_ARCH"

 FUJITSU ULTRA LVD SCSI Host Bus Adapter Driver

${MNT}/fjulsa/install -R /a -d ${MNT}/fjulsa -p "$PF_ARCH"

Finish script sample setting

#!/bin/sh
### Edit here ###
JUMPSTART_HOST=
JUMPSTART_DIR=/jumpstart
### End of edit ###

### MAIN ###


MNT=/a/mnt
mount -F nfs ${JUMPSTART_HOST}:${JUMPSTART_DIR} ${MNT}

PF_ARCH=`uname -m`
${MNT}/fjgi/install -R /a -d ${MNT}/fjgi -p "$PF_ARCH"
${MNT}/fjulsa/install -R /a -d ${MNT}/fjulsa -p "$PF_ARCH"
${MNT}/FJPFCA/bin/pfcapkgadd.sh -R /a -p "$PF_ARCH"

# Copy fjpfca.conf
if [ -f /kernel/drv/fjpfca.conf ]
then
echo "copying fjpfca.conf "
cp /kernel/drv/fjpfca.conf /a/kernel/drv/fjpfca.conf
COPY_STATUS="$?"
if [ "$?" != "0" ]
then

38
echo "ERROR: fjpfca.conf copy failed."
fi
else
echo "NOTICE: /kernel/drv/fjpfca.conf does not exists."
fi

## Copy sd.conf
if [ -f /kernel/drv/sd.conf ]
then
echo "copying sd.conf "
cp /kernel/drv/sd.conf /a/kernel/drv/sd.conf
COPY_STATUS="$?"
if [ "$?" != "0" ]
then
echo "ERROR: sd.conf copy failed."
fi
else
echo "NOTICE: /kernel/drv/sd.conf does not exists."
fi

umount ${MNT}
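The copy steps in the script capture the exit status into COPY_STATUS immediately after cp, because `$?` is overwritten by every subsequent command. A minimal standalone illustration of the pattern (using temporary files, not the install files):

```shell
src=$(mktemp)                     # temporary source file for the demonstration
dst=$(mktemp)
cp "$src" "$dst"                  # any copy operation
COPY_STATUS="$?"                  # capture the exit status right away
if [ "$COPY_STATUS" != "0" ]; then
  echo "ERROR: copy failed."
else
  echo "copy succeeded."
fi
rm -f "$src" "$dst"
```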

7. Edit the rules file

Edit the /jumpstart/rules file with a text editor. Specify a profile and finish script used for each
host in the rules file.

The rules file comes with a number of sample settings by default. Comment them out, because
they are not required.

Append the following to the rules file:

hostname <install machine hostname> - profile finish
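For an install machine whose host name is host1 (a hypothetical name), the appended rules entry would be:

```
hostname host1 - profile finish
```

The "-" means no begin script is used; profile and finish are the files created in the previous steps.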

8. Check and enable the rules file

Execute the check command to validate and enable the rules file.


 Solaris 10 5/08 or older

(INSTALL SERVER)# cd /jumpstart <RETURN>

(INSTALL SERVER)# /jumpstart/check -p /export/install/Solaris10_hostname -r rules
<RETURN>

 Solaris 10 10/08 or higher

(INSTALL SERVER)# cd /jumpstart <RETURN>


(INSTALL SERVER)# /jumpstart/check -p /tmp/media -r rules <RETURN>

If the following error message is displayed when the check command is executed, perform the
following procedure and then execute the check command again.

Error message:
ERROR: /tmp/media is not a valid Solaris 2.x CD image

(INSTALL SERVER)# cd /tmp/media/Solaris_10/Tools/Boot <RETURN>


(INSTALL SERVER)# bzcat lu.cpio.bz2 | cpio -idum <RETURN>
(INSTALL SERVER)# ls usr/sbin/install.d/chkprobe <RETURN>
usr/sbin/install.d/chkprobe
(INSTALL SERVER)# cd /jumpstart <RETURN>
(INSTALL SERVER)# /jumpstart/check -p /tmp/media -r rules <RETURN>

9. Enter network boot settings.

The setting varies with each installation target machine model.

SPARC Enterprise T1000/T2000/T5120/T5140/T5220/T5240/T5440

(INSTALL SERVER)# /export/install/Solaris10_hostname/Solaris_10/Tools/add_install_client
-i <install machine IP address> -e <install machine MAC address>
-s <install server host name>:/export/install/Solaris10_hostname
-c <install server host name>:/jumpstart <install machine host name> sun4v <RETURN>

SPARC Enterprise M3000/M4000/M5000/M8000/M9000

(INSTALL SERVER)# /export/install/Solaris10_hostname/Solaris_10/Tools/add_install_client
-i <install machine IP address> -e <install machine MAC address>
-s <install server host name>:/export/install/Solaris10_hostname
-c <install server host name>:/jumpstart <install machine host name> sun4u <RETURN>

4.1.1.5 Configuring the Fibre Channel boot code


Configure the Fibre Channel boot code required to boot the OS.

Perform this procedure on the install machine.


1. If you use SPARC Enterprise T1000/T2000/T5120/T5140/T5220/T5240/T5440, execute the
following command:

ok setenv auto-boot? false <RETURN>
ok reset-all <RETURN>

If you use SPARC Enterprise M3000/M4000/M5000/M8000/M9000, set the server mode
switch to service mode and execute the following command:

ok reset-all <RETURN>

2. Make sure that the Fibre Channel card is identified on the OBP. Check the physical path name of
the slot in which the Fibre Channel card is mounted.

Example of a server with a single-channel 4 Gbps Fibre Channel card (SE0X7F11x) and a
dual-channel 4 Gbps Fibre Channel card (SE0X7F12x) mounted

ok show-devs <RETURN>
/pci@1,700000
/pci@2,600000
**
/openprom
/chosen
/packages
/pci@1,700000/fibre-channel@0 *physical path name of single-channel 4Gbps Fibre
Channel card
/pci@2,600000/fibre-channel@0 *physical path name of dual-channel 4Gbps Fibre
Channel card port0
/pci@2,600000/fibre-channel@0,1 *physical path name of dual-channel 4Gbps Fibre
Channel card port1
/mc@0,0/bank@0,c0000000
/mc@0,0/bank@0,80000000

3. Enable the boot code on the Fibre Channel card used for booting the OS, and restart the
server. Move to the Fibre Channel card physical path (/pci@1,700000/fibre-channel@0)
confirmed in Step 2 before executing the setting commands. The boot code does not need to
be enabled on Fibre Channel cards that do not use the boot feature.

ok cd /pci@1,700000/fibre-channel@0 <RETURN>   *Move to the Fibre Channel card physical path
ok ENABLE fjpfca-set-bootfunction <RETURN>     *Enable the boot feature.
ok reset-all <RETURN>                          *Initialize the server.

Perform this step on all cards used for booting the OS.

4. After restarting the server, view information about the disk array devices connected to it.

Example: fabric connection

ok cd /pci@1,700000/fibre-channel@0 <RETURN>
ok PROBE fjpfca-info <RETURN>
Target - DID 10500 210000e0004101d9 FUJITSU-E4000-0000
Target - DID 10600 210000e0004101da FUJITSU-E4000-0000

Targets residing on a fabric connection may appear as "-," but this is of no concern.

5. Configure the disk array device to make it identifiable with the Fibre Channel boot code.

There are two ways to make a disk array device identifiable on a fabric connection (using the
FC Switch): automatic setting and manual setting. Automatic setting for making a disk array
device identifiable, when selected, offers the following benefits:
(1) If multiple hosts use a disk array device as a boot device each, they still can share and use the
Solaris OS install image on the install server.
(2) The Fibre Channel driver is made easier to configure.

These two methods of identifying a disk array device are described below.
a. [Automatic setting]
Remove the disk array device setting in the Fibre Channel boot code. Execute the
following command:

ok cd /pci@1,700000/fibre-channel@0 <RETURN>   *Move to the Fibre Channel card physical path
ok ENABLE fjpfca-all-target-cancel <RETURN>
fjpfca-all-target-cancel: Delete bind target parameter ...

If a disk array device is identified through automatic setting, set zoning with the FC
Switch to disable a Fibre Channel card from connecting to multiple disk array devices. In
an environment in which a Fibre Channel card is allowed to connect to multiple disk array
devices, allow disk array devices to be identified through manual setting.
b. [Manual setting]
In the Fibre Channel switch (fabric connection) environment, execute the fjpfca-bind-
target command to define a disk array device that is identifiable to the Fibre Channel boot
code.
The WWPN displayed in Step 4 or the DID value is required at this time. There is no
need to execute the fjpfca-bind-target command in the FC-AL environment, because the
disk array device is configured automatically.

Move to the Fibre Channel card physical path (/pci@1,700000/fibre-channel@0) confirmed
in Step 2 before carrying out a configuration command.

o Definition by WWPN

ok cd /pci@1,700000/fibre-channel@0 <RETURN>   *Move to the Fibre Channel card physical path

ok 10 target-wwpn 210000e0004101d9 fjpfca-bind-target <RETURN>
fjpfca-bind-target: Change bind target parameter

o Definition by DID

ok cd /pci@1,700000/fibre-channel@0 <RETURN>   *Move to the Fibre Channel card physical path
ok 10 target-did 10500 fjpfca-bind-target <RETURN>
fjpfca-bind-target: Change bind target parameter .

* The setting is not required in the FC-AL (Private loop) environment.

6. Other settings

The connection topology, link speed, and server and disk array device power supply interlock
wait time can be modified as required. The connection topology and link speed are set to "auto"
(automatic setting) by default.

For more details, see Appendix A, " Boot Device Setup Commands."

For example, if automatic setting is not used on a connection with a Fibre Channel switch ready
for 2 Gbps, enter the following:

ok 2g fjpfca-set-linkspeed <RETURN>
ok nport fjpfca-set-topology <RETURN>

Confirm the settings.

ok fjpfca-output-prop <RETURN>
boot function: ENABLE
topology : N_Port
link-speed : 2G
boot wait time: DISABLE (interval time: DISABLE/ boot wait msg: DISABLE)
bind-target: Target_ID=16,WWN=0x210000c0004101d9

For example, if a power supply interlocking boot wait time of 1200 seconds (20 minutes) is set
on a direct connection with a disk array device ready for 2 Gbps, enter the following:

ok 2g fjpfca-set-linkspeed <RETURN>
ok al fjpfca-set-topology <RETURN>
ok d# 1200 fjpfca-set-boot-wait-time <RETURN>

Confirm the settings.

ok fjpfca-output-prop <RETURN>

boot function: ENABLE
topology : AL
link-speed : 2G
boot wait time: 1200 sec (interval time: DISABLE/ boot wait msg: DISABLE)
bind-target: Target_ID=16,WWN=0x210000c0004101d9

7. Execute the following command to reset:

ok reset-all <RETURN>

4.1.1.6 Executing network installation


Execute the following command on the OBP of the installation target host:

ok boot net - install <RETURN>

Subsequently, proceed with the install process as directed by onscreen guidance.

When the installation of the OS has completed and a prompt is displayed, proceed to the next procedure.

If Solaris 10 is installed on a server that uses a graphics card with a bitmapped display, right-click
anywhere on the screen to display a menu, open a terminal from it, and proceed further. The message
"Click <Reboot> to continue" is displayed, but ignore it for now.

When the network install completes, the OS will automatically boot itself from the disk array device.

4.1.2 Creating a boot disk by copying an existing boot
disk residing on an internal disk
This section explains how to copy a boot disk that has already been created on an internal disk or
elsewhere and use it to create a boot disk on a disk array device. Before getting started, make sure that
the server has been started from the OS stored on the internal disk and a connection with the disk array
device has been configured via a Fibre Channel card.

A single-path connection with the disk array device will do for now. Convert the connection to a
multipath implementation according to Section 4.2, "Making the Path to a Boot Disk Redundant."

Perform this procedure entirely from the install machine console.

This procedure cannot create a boot disk on a target device that is connected through the
auto-target bind feature of the Fibre Channel driver. Be sure to configure the target device with
fcp-bind-target before creating a boot disk on it.

For more information about configuring fcp-bind-target, refer to "FUJITSU PCI Fibre Channel Guide."

4.1.2.1 Getting ready to copy the boot disk to a disk array
device
1. Execute the format command or the like to confirm the disk array device on which to create a
boot disk.

# format <RETURN>
Searching for disks...done
AVAILABLE DISK SELECTIONS
0. c7t16d0 <FUJITSU-ETERNUS-4000 cyl 1038 alt 2 hd 64 sec 256>
/pci@1,700000/fibre-channel@0/sd@10,0

Confirm that the destination disk and its associated partitions have at least as much space as
the copy source. If no slice has been created in the LUN at the boot disk creation destination,
execute the format command to create one.

4.1.2.2 Creating a boot disk


The procedure for creating a boot disk differs depending on the combination of source and
destination file systems.
 Procedure for copying internal disk (UFS file system) to disk array device (UFS file system).

 Procedure for copying internal disk (UFS file system) to disk array device (ZFS file system).

 Procedure for copying internal disk (ZFS file system) to disk array device (ZFS file system).
The combinations that can be built depend on the operating system:

Operating System: Solaris 10 5/08 or older
 Procedure for copying internal disk (UFS file system) to disk array device (UFS file system).

Operating System: Solaris 10 5/08 or older and patch 137137-09 or later
 Procedure for copying internal disk (UFS file system) to disk array device (UFS file system).
 Procedure for copying internal disk (UFS file system) to disk array device (ZFS file system).

Operating System: Solaris 10 10/08 or higher
 Procedure for copying internal disk (UFS file system) to disk array device (UFS file system).
 Procedure for copying internal disk (UFS file system) to disk array device (ZFS file system).
 Procedure for copying internal disk (ZFS file system) to disk array device (ZFS file system).
Each procedure is explained below.

 Procedure for copying internal disk (UFS file system) to disk array device (UFS file system).
1. Migrate the system to the OBP environment.

# /usr/sbin/shutdown -y -i0 <RETURN>

Launch the system in single-user mode.

ok boot -s <RETURN>

2. Write a boot block.

Write a boot block to the boot disk, using the device path verified earlier with the format
command or the like.

# installboot /usr/platform/`uname -m`/lib/fs/ufs/bootblk /dev/rdsk/c7t16d0s0 <RETURN>

3. Create a file system.

# newfs -v /dev/rdsk/c7t16d0s0 <RETURN>

4. Execute the mount command (mount the LUN for the boot disk on the identified disk array
device).

# mount -F ufs /dev/dsk/c7t16d0s0 /mnt <RETURN>

5. Copy the boot disk to the disk array device.

# ufsdump 0f - /dev/rdsk/c0t0d0s0 | ( cd /mnt; ufsrestore rf -) <RETURN>

When executed, this command copies the contents of the root partition on the internal disk to
the disk array device mounted on /mnt.

6. Copy other partitions.

If /var and /opt have been defined as separate partitions, repeat Steps 3 to 5 above for each
partition.

 Procedure for copying internal disk (UFS file system) to disk array device (ZFS file system).

1. Migrate the system to the OBP environment.

# /usr/sbin/shutdown -y -i0 <RETURN>

Launch the system in single-user mode.

ok boot -s <RETURN>

2. Create the ZFS file system on a disk array device.

# zpool create rootpool c7t16d0s0 <RETURN>


# zfs create rootpool/rootfs <RETURN>
# zfs create rootpool/rootfs/s10_1008 <RETURN>
# zfs create -V 2G rootpool/swap <RETURN>
# zfs create -V 2G rootpool/dump <RETURN>
# zfs set mountpoint=legacy rootpool/rootfs/s10_1008 <RETURN>

In this example, the file system used for root (/) is named rootfs, and a 2 GB swap area and a
2 GB dump area are created. The mount point of rootpool/rootfs/s10_1008 is set to legacy.

3. Write a boot block.

Write a boot block to the boot disk, using the device path verified earlier with the format
command or the like.

# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c7t16d0s0 <RETURN>

4. Execute the mount command.


Mount the ZFS file system created on the disk array device in Step 2.

# mount -F zfs rootpool/rootfs/s10_1008 /mnt <RETURN>

5. Copy the boot disk to the disk array device.

# ufsdump 0f - /dev/rdsk/c0t0d0s0 | ( cd /mnt; ufsrestore rf -) <RETURN>

6. Copy other partitions.

If /var and /opt have been defined as separate partitions, repeat Steps 4 to 5 above for each
partition.

 Procedure for copying internal disk (ZFS file system) to disk array device (ZFS file system).

1. Migrate the system to the OBP environment.

# /usr/sbin/shutdown -y -i0 <RETURN>

Launch the system in single-user mode.

ok boot -s <RETURN>

2. Create the ZFS file system on a disk array device.

# zpool create rootpool c7t16d0s0 <RETURN>


# zfs create rootpool/rootfs <RETURN>
# zfs create -V 2G rootpool/swap <RETURN>
# zfs create -V 2G rootpool/dump <RETURN>
# zfs set mountpoint=legacy rootpool/rootfs <RETURN>

In this example, the file system used for root (/) is named rootfs, and a 2 GB swap area and a
2 GB dump area are created. The mount point of rootpool/rootfs is set to legacy.

3. Write a boot block.

Write a boot block to the boot disk, using the device path verified earlier with the format
command or the like.

# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c7t16d0s0 <RETURN>

4. Create a snapshot.

Create a snapshot of the root (/) file system on the internal disk.

# zfs list <RETURN>


NAME USED AVAIL REFER MOUNTPOINT
rootpool 142K 78.7G 19K /rootpool
rootpool/dump 2G 78.7G 16K -
rootpool/rootfs 18K 78.7G 18K legacy
rootpool/swap 2G 78.7G 16K -
rpool 5.84G 61.1G 94K /rpool
rpool/ROOT 4.81G 61.1G 18K legacy
rpool/ROOT/s10_1008 4.81G 61.1G 4.81G /
rpool/dump 512M 61.1G 512M -

rpool/export 32.0M 61.1G 20K /export
rpool/export/home 32.0M 61.1G 32.0M /export/home
rpool/swap 512M 61.6G 10.8M -
# zfs snapshot rpool/ROOT/s10_1008@snapshot <RETURN>

5. Copy the boot disk to the disk array device.

# mkdir /backup <RETURN>


# zfs send rpool/ROOT/s10_1008@snapshot > /backup/s10_1008.img <RETURN>
# zfs receive rootpool/rootfs/s10_1008 < /backup/s10_1008.img <RETURN>

6. Copy other partitions.

If /var has been defined as a separate dataset, repeat Steps 4 to 5 above for each such dataset.

7. Execute the mount command.

# mount -F zfs rootpool/rootfs/s10_1008 /mnt <RETURN>

4.1.2.3 Editing mount table information


Enter the access path to the disk array device on which the boot disk was created in /mnt/etc/vfstab
(the boot disk LUN is mounted on /mnt). The setting method differs between the UFS file system
and the ZFS file system.

For the UFS file system, add all the access paths and comment out the access paths that are no
longer used.

Example: SAN Boot environment by UFS file system

#device             device               mount    FS    fsck  mount    mount
#to mount           to fsck              point    type  pass  at boot  options
#
#/dev/dsk/c0t0d0s3  -                    -        swap  -     no       -
#/dev/dsk/c0t0d0s0  /dev/rdsk/c0t0d0s0   /        ufs   1     no       -
#/dev/dsk/c0t0d0s1  /dev/rdsk/c0t0d0s1   /var     ufs   1     no       -
fd                  -                    /dev/fd  fd    -     no       -
/proc               -                    /proc    proc  -     no       -
/dev/dsk/c7t16d0s3  -                    -        swap  -     no       -
/dev/dsk/c7t16d0s0  /dev/rdsk/c7t16d0s0  /        ufs   1     no       -
/dev/dsk/c7t16d0s1  /dev/rdsk/c7t16d0s1  /var     ufs   1     no       -
..

For the ZFS file system, add only the swap entry and comment out the access paths that are no
longer used.

Example: SAN Boot environment with the ZFS file system (no root device entry is set)

#device                      device               mount    FS    fsck  mount    mount
#to mount                    to fsck              point    type  pass  at boot  options
#
#/dev/dsk/c0t0d0s3           -                    -        swap  -     no       -
#/dev/dsk/c0t0d0s0           /dev/rdsk/c0t0d0s0   /        ufs   1     no       -
#/dev/dsk/c0t0d0s1           /dev/rdsk/c0t0d0s1   /var     ufs   1     no       -
fd                           -                    /dev/fd  fd    -     no       -
/proc                        -                    /proc    proc  -     no       -
/dev/zvol/dsk/rootpool/swap  -                    -        swap  -     no       -

If the disk array device has LUs other than the boot disk defined in sd.conf, remove those
definitions from /mnt/kernel/drv/sd.conf.
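
For example, with a boot disk at target 16, LUN 0, the edited /mnt/kernel/drv/sd.conf would keep only the boot disk entry. The target and lun values below are hypothetical illustrations:

```text
# /mnt/kernel/drv/sd.conf (excerpt)
name="sd" class="scsi" target=16 lun=0;   # boot disk LU: keep
# name="sd" class="scsi" target=16 lun=1; # other LU: removed (commented out)
```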

Perform the following additional steps only when building a SAN Boot environment with the ZFS
file system.

1. Change the mountpoint property to the root path (/).

# zfs set mountpoint=/ rootpool/rootfs/s10_1008 <RETURN>

− The following messages may be displayed when the mountpoint property is set to the root
path (/); ignore them and proceed to the next step.

cannot mount '/': directory is not empty


property may be set but unable to remount filesystem

2. Set the bootfs property.

# zpool set bootfs=rootpool/rootfs/s10_1008 rootpool <RETURN>

4.1.2.4 Configuring the Fibre Channel boot code
Configure the Fibre Channel boot code to allow the OS to be booted from a disk array device.

Example: Set on fjpfca0.

# /usr/sbin/FJSVpfca/fc_hbaprp -i fjpfca0 -b ENABLE <RETURN>


# /usr/sbin/FJSVpfca/fc_hbaprp -i fjpfca0 -c /kernel/drv/fjpfca.conf <RETURN>
# /usr/sbin/FJSVpfca/fc_hbaprp -i fjpfca0 -v <RETURN>
boot_function : ENABLE
topology : N_Port
link-speed : 4G
boot wait time : DISABLE ( interval time : DISABLE , boot wait msg : DISABLE )
bind-target: Target_ID=16,WWN=0x210000c0004101d9

4.1.2.5 Resetting the server


If you use SPARC Enterprise T1000/T2000/T5120/T5140/T5220/T5240/T5440, execute the following
commands depending on the system status.

 The OS is booting

Execute the following command:

# /usr/sbin/eeprom auto-boot?=false <RETURN>


# /usr/sbin/shutdown -i0 -g0 -y <RETURN>

After migrating to the OBP environment, execute the following command:

ok reset-all <RETURN>

 In the OBP environment

Execute the following command:

ok setenv auto-boot? false <RETURN>


ok reset-all <RETURN>

 The server is turned off

Turn on the server, then follow "The OS is booting" or "In the OBP environment" above,
depending on the system status.

If you use SPARC Enterprise M3000/M4000/M5000/M8000/M9000, set the server mode switch to
service mode and execute the following commands, depending on the system status.

 The OS is booting

Set the server mode switch to service mode and execute the following command:

# /usr/sbin/shutdown -i0 -g0 -y <RETURN>

After migrating to the OBP environment, execute the following command:

ok reset-all <RETURN>

 In the OBP environment

Set the server mode switch to service mode and then execute the following command:

ok reset-all <RETURN>

 The server is turned off

Set the server mode switch to service mode and then turn on the server.

4.1.2.6 Booting from a disk array device


Specify the disk array device from which to boot the OS.

ok boot /pci@1,700000/fibre-channel@0/disk@10,0 <RETURN> (*1)


Boot device: /pci@1,700000/fibre-channel@0/disk@10,0 File and args: kernel/sparcv9/unix

(*1) The value that follows "disk" (in the example above, "10,0") denotes the target_id/LUN.
It must match the target_id/LUN of the disk array device identified by the Fibre Channel
driver after the OS boots. In an FC-AL environment, specify the target_id value displayed
when fjpfca-info is executed.

The values of target_id and LUN need to be specified in hexadecimal at boot time.
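
As a quick check of that hexadecimal notation, printf can convert a decimal target_id/LUN pair into the form used in the device path; here Target_ID 16 and LUN 0 yield the "10,0" seen in the boot example above.

```shell
tid=16   # Target_ID identified by the Fibre Channel driver (decimal)
lun=0    # LUN (decimal)
suffix=$(printf 'disk@%x,%x' "$tid" "$lun")
echo "$suffix"   # disk@10,0
```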

4.2 Making the Path to a Boot Disk Redundant
This section explains how to make the path to a boot device redundant using the ETERNUS
Multipath Driver.

4.2.1 Installing the Enhanced Support Facility


Install Enhanced Support Facility if it is not yet installed. If it is already installed, proceed to the
next procedure.

1. Specify the disk array device and boot the OS in single-user mode.

ok boot /pci@1,700000/fibre-channel@0/disk@10,0 -s <RETURN>


Boot device: /pci@1,700000/fibre-channel@0/disk@10,0 File and args: kernel/sparcv9/unix

2. Install Enhanced Support Facility as instructed in "Enhanced Support Facility Install Guide."

3. Restart the host in the following way:

# /usr/sbin/shutdown -i0 -g0 -y <RETURN>

4. After migrating to the OBP environment, execute the following command:

ok reset-all <RETURN>

5. Specify the disk array device and boot the OS.

ok boot /pci@1,700000/fibre-channel@0/disk@10,0 <RETURN>


Boot device: /pci@1,700000/fibre-channel@0/disk@10,0 File and args: kernel/sparcv9/unix

4.2.2 Configuring the ETERNUS Multipath Driver
This section explains how to configure the ETERNUS Multipath Driver to make the path to a boot device
redundant.

For the target ID of the device used as the boot device, it is recommended to set the same value
on both Fibre Channel cards that constitute the multipath.

4.2.2.1 Single system (non-cluster system)


1. Specify a disk array device and boot the OS from it to install the ETERNUS Multipath Driver.
Launch the host from the boot disk on the disk array device. Then, install ETERNUS
Multipath Driver as instructed in the "ETERNUS Multipath Driver Install Guide." When the
install completes, respond with "y" at the following prompt to let the grmpdautoconf command
execute automatically and proceed to the multipath building process in Step 2.

Do you want to make a multipath configuration now?

If the ETERNUS Multipath Driver package has already been installed, execute grmpdautoconf
to proceed to the multipath building process in Step 2.

# /usr/sbin/grmpdautoconf <RETURN>

2. Execute grmpdautoconf to build a multipath.


Work with grmpdautoconf interactively. For more details, refer to "ETERNUS Multipath
Driver User's Guide." During this interactive session, make the following choices:

 Select "Manual selection" (m) in response to the automatic/manual path selection prompt.

Select an access path automatically or manually?


** If automatic selection is selected, all access paths marked "New" are registered with the
system.
** Select automatic selection if an access path has been properly selected at ETERNUS and
switch setup.
** All paths marked "Exist" are eligible for LUN installation.
** All devices residing on an AL connection are eligible for LUN and device installation.
Manual selection ---> 'm'
Automatic selection ---> 'a'
Quit ---> 'q'
Enter a key. [m,a,q] m <RETURN>

 Different manual path selection screens are displayed depending on which method of disk
array device identification by the Fibre Channel driver has been selected, automatic setting
or manual setting.
a. [Automatic setting]

Select a start-up path on the manual path selection screen.

     Adapter                      Switch  ETERNUS                           Status
     instance WWN                         WWN              product
-----+----------------------------+------+---------------------------------+------
[ ] 1 fjpfca0 100000000e24ac06       1    210000e0004101d9 E4000 CM1CA0P0   New
[ ] 2 fjpfca1 100000000e244737       3    210000e0004101da E4000 CM0CA0P0   New

Among the paths marked "New," enter the number of the access path you want
registered with the system.
** The access path list is redisplayed after the entry.
** Selected paths are marked with "*."
** If an invalid number is selected, enter that number again to deselect it.
** All paths marked "Exist" are eligible for LUN installation.
** All devices residing on an AL connection are eligible for LUN and device
installation.

Enter path numbers ---> Enter numerals (multiple path numbers can be entered
separated from each
other with a comma)
Conclude entry ---> 'x'
Quit ---> 'q'

Enter a key. [Path number ,x,q] 1 2 <RETURN>


     Adapter                      Switch  ETERNUS                           Status
     instance WWN                         WWN              product
-----+----------------------------+------+---------------------------------+------
[*] 1 fjpfca0 100000000e24ac06       1    210000e0004101d9 E4000 CM1CA0P0   New
[*] 2 fjpfca1 100000000e244737       3    210000e0004101da E4000 CM0CA0P0   New

Among the paths marked "New," enter the number of the access path you want
registered with the system.
** The access path list is redisplayed after the entry.
** Selected paths are marked with "*."
** If an invalid number is selected, enter that number again to deselect it.
** All paths marked "Exist" are eligible for LUN installation.
** All devices residing on an AL connection are eligible for LUN and device
installation.

Enter path numbers ---> Enter numerals (multiple path numbers can be entered
separated from each
other with a comma)
Conclude entry ---> 'x'
Quit ---> 'q'

Enter a key. [Path number ,x,q] x <RETURN>

The setting of the path (disk array device) selected above is reflected in the Fibre Channel
driver setting file (/kernel/drv/fjpfca.conf). Once this step completes, the setting of the disk
array device that is identified by the Fibre Channel driver is fixed, so the following setting in the
Fibre Channel driver setting file can be removed:

fcp-auto-bind-function=1;

b. [Manual setting]

The WWN entered in fjpfca.conf appears as "Exist" or "AL." Select any remaining "New" paths
to register, then conclude the entry with "Confirmed (x)."

     Adapter                      Switch  ETERNUS                           Status
     instance WWN                         WWN              product
-----+----------------------------+------+---------------------------------+------
      fjpfca0 100000000e24ac06       1    210000e0004101d9 E4000 CM1CA0P0   Exist
[ ] 1 fjpfca1 100000000e244737       3    210000e0004101da E4000 CM0CA0P0   New

Among the paths marked "New," enter the number of the access path you want
registered with the system.
** The access path list is redisplayed after the entry.

** Selected paths are marked with "*."
** If an invalid number is selected, enter that number again to deselect it.
** All paths marked "Exist" are eligible for LUN installation.
** All devices residing on an AL connection are eligible for LUN and device
installation.

Enter path numbers ---> Enter numerals (multiple path numbers can be entered
separated from each
other with a comma)
Conclude entry ---> 'x'
Quit ---> 'q'

Enter a key. [Path number ,x,q] 1 <RETURN>


     Adapter                      Switch  ETERNUS                           Status
     instance WWN                         WWN              product
-----+----------------------------+------+---------------------------------+------
      fjpfca0 100000000e24ac06       1    210000e0004101d9 E4000 CM1CA0P0   Exist
[*] 1 fjpfca1 100000000e244737       3    210000e0004101da E4000 CM0CA0P0   New

Among the paths marked "New," enter the number of the access path you want
registered with the system.
** The access path list is redisplayed after the entry.
** Selected paths are marked with "*."
** If an invalid number is selected, enter that number again to deselect it.
** All paths marked "Exist" are eligible for LUN installation.
** All devices residing on an AL connection are eligible for LUN and device
installation.

Enter path numbers ---> Enter numerals (multiple path numbers can be entered
separated from each
other with a comma)
Conclude entry ---> 'x'
Quit ---> 'q'
Enter a key. [Path number ,x,q] x <RETURN>

 When SPARC Enterprise T1000/T2000/T5120/T5140/T5220/T5240/T5440 is being used, the
access path mode must be chosen. Select mplb mode at the prompt for access path mode
selection. The ETERNUS Multipath Driver supports two multipath access modes: Solaris
standard mode, in which the multipath is accessed through a Solaris standard special file, and
mplb mode, in which the multipath is accessed through an mplb special file as in the past.

In the Solaris 10 environment, select mplb mode for the access path mode. Solaris standard
mode does not work in the SAN Boot environment.

When SPARC Enterprise M3000/M4000/M5000/M8000/M9000 is being used, it is not
necessary to choose the access path mode.

Choose an access special file between the following:


Solaris standard special file (/dev/[r]dsk/c*t*d*s*)
mplb special file (/dev/FJSVmplb/[r]dsk/mplb*s*)

/dev/[r]dsk/c*t*d*s* ---> 's'


/dev/FJSVmplb/[r]dsk/mplb*s* ---> 'm'
Enter a key. [s, m] m <RETURN>

3. Check the device path name of the boot device. The grmpdautoconf command carried out in
Step 2 displays a combination of a multipath management special file and a selected access
special file. Use the output of the ls command to identify the boot disk and the physical device
path name of each configuration path. Physical device path names thus found are used in Steps
6 and 9.

# ls -l <Boot disk slice 0 > <RETURN>


# ls -l < Configuration path slice 2 > <RETURN>

Assume that grmpdautoconf has delivered the following output listing:

*** Phase 1: read mplb.conf ***


*** Phase 2: read /dev ***
*** Phase 3: read /devices ***
*** Phase 4: compare mplb.conf and /devices ***
Path : Action : Element path : LUN : Storage
mplb0 : new : c2t16d0s2 c13t16d0s2 : 0 : E40004641- 130011 :
mplb1 : new : c2t16d1s2 c13t16d1s2 : 1 : E40004641- 130011 :
mplb2 : new : c2t16d2s2 c13t16d2s2 : 2 : E40004641- 130011 :

The boot disk and the paths that make it up are as follows:

Boot disk           : /dev/FJSVmplb/rdsk/mplb0s0
Configuration paths : /dev/rdsk/c2t16d0s2
                      /dev/rdsk/c13t16d0s2

Execute the ls command to check the device path names.

# ls -l /dev/FJSVmplb/rdsk/mplb0s0 <RETURN>
lrwxrwxrwx 1 root root 36 Aug 29 12:05 /dev/FJSVmplb/rdsk/mplb0s0 -> (line wrapping)
../../../devices/pseudo/mplb@0:a,raw <RETURN>
^^^^^^^^^^^^^^^
# ls -l /dev/rdsk/c2t16d0s2 <RETURN>
lrwxrwxrwx 1 root root 58 Aug 29 17:13 /dev/rdsk/c2t16d0s2 -> (line wrapping)
../../devices/pci@1,700000/fibre-channel@0/mplbt@10,0:c,raw <RETURN>
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# ls -l /dev/rdsk/c13t16d0s2 <RETURN>
lrwxrwxrwx 1 root root 58 Aug 29 17:13 /dev/rdsk/c13t16d0s2 -> (line wrapping)
../../devices/pci@2,600000/fibre-channel@0/mplbt@10,0:c,raw <RETURN>
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

4. Set a Fibre Channel driver link speed.

Edit the Fibre Channel driver setting file (/kernel/drv/fjpfca.conf) to set a Fibre Channel link
speed.

In configuring the Fibre Channel driver, the link speed setting can be set to automatic selection
for the sake of easier connectivity. The expected link speed may not be attained depending on
the connection status. Therefore, set the highest transmission rate available in the environment.

Example: Set fjpfca0 to a link speed of 4 Gbps.

port="fjpfca0:nport:sp4";

For more information about configuring fjpfca.conf, refer to "FUJITSU PCI Fibre Channel
Guide."

5. Apply the disk array device boot setting to all the Fibre Channel boot codes that are used to
access the boot disk.

Example: Set on fjpfca0.

# /usr/sbin/FJSVpfca/fc_hbaprp -i fjpfca0 -b ENABLE <RETURN>
# /usr/sbin/FJSVpfca/fc_hbaprp -i fjpfca0 -c /kernel/drv/fjpfca.conf <RETURN>
# /usr/sbin/FJSVpfca/fc_hbaprp -i fjpfca0 -v <RETURN>
boot_function : ENABLE
topology : N_Port
link-speed : 4G
boot wait time : DISABLE ( interval time : DISABLE , boot wait msg : DISABLE )
bind-target: Target_ID=16,WWN=0x210000e0004101d9

6. Adjust the system settings for the multipath configuration.


 SAN Boot environment by the UFS file system
a. Root device setting (/etc/system)

Edit /etc/system file to set rootdev and forceload. For the rootdev setting, set the boot disk
physical device name as found in Step 3 above, excluding "../../devices" at the beginning and
",raw" at the end.

If forceload settings for these drivers already exist in the /etc/system file, no additional setting
is needed.

rootdev: /pseudo/mplb@0:a
forceload: drv/mplbt
forceload: drv/mplb
forceload: drv/sd
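
The rootdev value shown above can be derived mechanically from the link target printed by ls -l in Step 3. A minimal sed sketch, using the mplb0s0 link target from that step:

```shell
# Link target of /dev/FJSVmplb/rdsk/mplb0s0 as printed by ls -l in Step 3.
link_target='../../../devices/pseudo/mplb@0:a,raw'
# Drop everything up to and including "/devices", then the trailing ",raw".
rootdev=$(echo "$link_target" | sed -e 's|^.*/devices||' -e 's|,raw$||')
echo "rootdev: $rootdev"
```

The result, /pseudo/mplb@0:a, is what goes on the rootdev: line of /etc/system.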

b. Mount information setting (/etc/vfstab)

Edit /etc/vfstab file to rewrite each entry to a path name after the multipath implementation.

/dev/FJSVmplb/dsk/mplb0s0 /dev/FJSVmplb/rdsk/mplb0s0 / ufs 1 no -


/dev/FJSVmplb/dsk/mplb0s3 - - swap - no -

c. Boot disk access setting (/kernel/drv/sd.conf)

If the target ID of the boot device differs between the two Fibre Channel cards that constitute
the multipath, the target IDs for some paths to the boot disk may not be written to sd.conf.
In that case, the following setup is needed.

○ Edit /kernel/drv/sd.conf

Add the definition for the target ID of the path used for the boot disk.

Example: The definition of "Target ID = 18" is added.

name="sd" class="scsi" target=18 lun=0;

○ Rebuild the sd driver configuration
# touch /reconfigure <RETURN>
or
# update_drv -f sd <RETURN>

 SAN Boot environment by the ZFS file system


a. Root device setting (/etc/system)

Edit /etc/system file to set forceload.

forceload: drv/mplb
forceload: drv/sd

b. Boot disk access setting (/kernel/drv/sd.conf)

If the target ID of the boot device differs between the two Fibre Channel cards that constitute
the multipath, the target IDs for some paths to the boot disk may not be written to sd.conf.
In that case, the following setup is needed.

○ Edit /kernel/drv/sd.conf

Add the definition for the target ID of the path used for the boot disk.

Example: The definition of "Target ID = 18" is added.

name="sd" class="scsi" target=18 lun=0;

○ Rebuild the sd driver configuration
# touch /reconfigure <RETURN>
or
# update_drv -f sd <RETURN>

7. Configure a dump device as required.


 SAN Boot environment by the UFS file system

# dumpadm -d /dev/FJSVmplb/dsk/mplb0s3 <RETURN>

 SAN Boot environment by the ZFS file system

# dumpadm -d /dev/zvol/dsk/rootpool/dump <RETURN>

8. Shut down the system and reset it from the OBP environment.

# /usr/sbin/shutdown -y -i0 -g0 <RETURN>


ok reset-all <RETURN>

9. Configure a boot device.

On the OBP, configure a boot device for every redundant path to the boot disk. Take the
physical device name of each configuration path found in Step 3, remove "../../devices" at the
beginning and ":*,raw" at the end, and replace "mplbt" with "disk."

ok nvalias raid1 /pci@1,700000/fibre-channel@0/disk@10,0 <RETURN>


ok nvalias raid2 /pci@2,600000/fibre-channel@0/disk@10,0 <RETURN>
ok setenv boot-device raid1 raid2 <RETURN>
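
The alias paths above follow mechanically from the Step 3 link targets. A minimal sed sketch of the transformation described in this step (drop "../../devices", drop the trailing ":*,raw", and replace "mplbt" with "disk"):

```shell
# Link targets of the two configuration paths as printed by ls -l in Step 3.
paths=""
for link_target in \
  '../../devices/pci@1,700000/fibre-channel@0/mplbt@10,0:c,raw' \
  '../../devices/pci@2,600000/fibre-channel@0/mplbt@10,0:c,raw'
do
  p=$(echo "$link_target" | \
      sed -e 's|^.*/devices||' -e 's|:[^:]*,raw$||' -e 's|mplbt|disk|')
  echo "$p"
  paths="$paths $p"
done
```

Each printed path is what the corresponding nvalias command above registers on the OBP.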

10. If you use SPARC Enterprise T1000/T2000/T5120/T5140/T5220/T5240/T5440, execute the
following command:

ok setenv auto-boot? true <RETURN>

If you use SPARC Enterprise M3000/M4000/M5000/M8000/M9000, set the server mode
switch back to AUTO.
11. Start the host.

ok boot <RETURN>

4.2.2.2 Cluster system


Perform this procedure to build a cluster system using PRIMECLUSTER.
1. Specify a disk array device and boot the OS from it to install the ETERNUS Multipath Driver.

Launch the host from the boot disk on the disk array device. Then, install ETERNUS
Multipath Driver as instructed in "ETERNUS Multipath Driver Install Guide." When the install
completes, respond with "y" at the following prompt to let the grmpdautoconf command execute
automatically to proceed to the multipath building process in Step 2.

Do you want to make a multipath configuration now ?

If the ETERNUS Multipath Driver package has already been installed, execute grmpdautoconf
to proceed to the multipath building process in Step 2.

# /usr/sbin/grmpdautoconf <RETURN>

2. Execute grmpdautoconf to build a multipath.

Work with grmpdautoconf interactively. For more details, refer to "ETERNUS Multipath
Driver User's Guide." During this interactive session, make the following choices:

 Select "Manual selection" (m) in response to the automatic/manual path selection prompt.

Select an access path automatically or manually?
** If automatic selection is selected, all access paths marked "New" are registered with the
system.
** Select automatic selection if an access path has been properly selected at ETERNUS and
switch setup.
** All paths marked "Exist" are eligible for LUN installation.
** All devices residing on an AL connection are eligible for LUN and device installation.
Manual selection ---> 'm'
Automatic selection ---> 'a'
Quit ---> 'q'
Enter a key. [m,a,q] m <RETURN>

 Different manual path selection screens are displayed depending on which method of disk
array device identification has been selected in the Fibre Channel driver, automatic setting
or manual setting.
a. [Automatic setting]

Select a startup path on the manual path selection screen.

     Adapter                      Switch  ETERNUS                           Status
     instance WWN                         WWN              product
-----+----------------------------+------+---------------------------------+------
[ ] 1 fjpfca0 100000000e24ac06       1    210000e0004101d9 E4000 CM1CA0P0   New
[ ] 2 fjpfca1 100000000e244737       3    210000e0004101da E4000 CM0CA0P0   New

Among the paths marked "New," enter the number of the access path you want
registered with the system.
** The access path list is redisplayed after the entry.
** Selected paths are marked with "*."
** If an invalid number is selected, enter that number again to deselect it.
** All paths marked "Exist" are eligible for LUN installation.
** All devices residing on an AL connection are eligible for LUN and device
installation.

Enter path numbers ---> Enter numerals (multiple path numbers can be entered
separated from each
other with a comma)
Conclude entry ---> 'x'

Quit ---> 'q'

Enter a key. [Path number ,x,q] 1 2 <RETURN>


     Adapter                      Switch  ETERNUS                           Status
     instance WWN                         WWN              product
-----+----------------------------+------+---------------------------------+------
[*] 1 fjpfca0 100000000e24ac06       1    210000e0004101d9 E4000 CM1CA0P0   New
[*] 2 fjpfca1 100000000e244737       3    210000e0004101da E4000 CM0CA0P0   New

Among the paths marked "New," enter the number of the access path you want
registered with the system.
** The access path list is redisplayed after the entry.
** Selected paths are marked with "*."
** If an invalid number is selected, enter that number again to deselect it.
** All paths marked "Exist" are eligible for LUN installation.
** All devices residing on an AL connection are eligible for LUN and device
installation.

Enter path numbers ---> Enter numerals (multiple path numbers can be entered
separated from each
other with a comma)
Conclude entry ---> 'x'
Quit ---> 'q'

Enter a key. [Path number ,x,q] x <RETURN>

The setting of the path (disk array device) selected above is reflected in the Fibre Channel
driver setting file (/kernel/drv/fjpfca.conf). Once this step completes, the setting of the disk
array device that is identified by the Fibre Channel driver is fixed, so the following setting in the
Fibre Channel driver setting file can be removed:

fcp-auto-bind-function=1;

b. [Manual setting]

The WWN entered in fjpfca.conf appears as "Exist" or "AL." Select any remaining "New" paths
to register, then conclude the entry with "Confirmed (x)."

     Adapter                      Switch  ETERNUS                           Status
     instance WWN                         WWN              product
-----+----------------------------+------+---------------------------------+------
      fjpfca0 100000000e24ac06       1    210000e0004101d9 E4000 CM1CA0P0   Exist
[ ] 1 fjpfca1 100000000e244737       3    210000e0004101da E4000 CM0CA0P0   New

Among the paths marked "New," enter the number of the access path you want
registered with the system.
** The access path list is redisplayed after the entry.
** Selected paths are marked with "*."
** If an invalid number is selected, enter that number again to deselect it.
** All paths marked "Exist" are eligible for LUN installation.
** All devices residing on an AL connection are eligible for LUN and device
installation.

Enter path numbers ---> Enter numerals (multiple path numbers can be entered
separated from each
other with a comma)
Conclude entry ---> 'x'
Quit ---> 'q'

Enter a key. [Path number ,x,q] 1 <RETURN>


     Adapter                      Switch  ETERNUS                           Status
     instance WWN                         WWN              product
-----+----------------------------+------+---------------------------------+------
      fjpfca0 100000000e24ac06       1    210000e0004101d9 E4000 CM1CA0P0   Exist
[*] 1 fjpfca1 100000000e244737       3    210000e0004101da E4000 CM0CA0P0   New

Among the paths marked "New," enter the number of the access path you want
registered with the system.
** The access path list is redisplayed after the entry.
** Selected paths are marked with "*."
** If an invalid number is selected, enter that number again to deselect it.
** All paths marked "Exist" are eligible for LUN installation.
** All devices residing on an AL connection are eligible for LUN and device
installation.

Enter path numbers ---> Enter numerals (multiple path numbers can be entered
separated from each
other with a comma)
Conclude entry ---> 'x'
Quit ---> 'q'

Enter a key. [Path number ,x,q] x <RETURN>

 Select y in response to the prompt asking whether to use PRIMECLUSTER.

Is the disk array device used as a PRIMECLUSTER/SafeCLUSTER shared disk?


**If PRIMECLUSTER/SafeCLUSTER is used, use the multipath setup feature provided by
the relevant product.
** The value of maxthrottle also needs to be reviewed.

Yes ---> 'y' (Processing ends)


No ---> 'n'

Enter a key. [y,n] y <RETURN>


Processing up to sd has ended successfully.

*** IMPORTANT NOTICE ***


Installation of ETERNUS Multipath Driver Package was successful.

3. Next, execute the mplbconfig command.

# /usr/sbin/mplbconfig -o /tmp/mplb-file1 <RETURN>


*** Phase 1: Loading mplb.conf ***
*** Phase 2: Checking the device file in /dev ***
*** Phase 3: Checking the device file in /devices ***
*** Phase 4: Checking mplb.conf against the configuration in /devices ***
=== Multipath configuration plan ===
Existing instance : 0
New instance : 2
Add a path : 0 (instance)
Delete a path : 0 (instance)

4. Delete all lines except for the system volume. Edit /tmp/mplb-file1 using a vi editor or the like
to delete all except for the path to a system disk.

With a cluster system, the instance number of each local multipath disk, such as a boot disk,
must not be defined in duplicate among the nodes that make up the cluster. Change instance
numbers (designated by X in mplbX) to values between 0 and 2047 that avoid duplication with other nodes.

*** mplb config file ***


Path : Process : Configuration path : LUN : Device information
mplb0 : new : c2t16d0s2 c13t16d0s2 : 0 : E40004641-130011
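The editing in this step can also be sketched non-interactively. In the example below the plan-file contents are copied from the listing above (paths and instance names are this guide's example values, not read from a live system), and only the system-disk entry is kept:

```shell
# Simulate the plan file written by "mplbconfig -o" (contents assumed from the example above).
cat > /tmp/mplb-file1 <<'EOF'
mplb0 : new : c2t16d0s2 c13t16d0s2 : 0 : E40004641-130011
mplb1 : new : c2t16d1s2 c13t16d1s2 : 1 : E40004641-130011
mplb2 : new : c2t16d2s2 c13t16d2s2 : 2 : E40004641-130011
EOF
# Keep only the system-disk entry (mplb0); all other lines are deleted.
grep '^mplb0 ' /tmp/mplb-file1 > /tmp/mplb-file1.new
mv /tmp/mplb-file1.new /tmp/mplb-file1
cat /tmp/mplb-file1
```

Review the resulting file before applying it with mplbconfig -f, exactly as in Step 5.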

5. Apply the edited file, converting the system volume into a multipath implementation.

# /usr/sbin/mplbconfig -f /tmp/mplb-file1 <RETURN>


*** Phase 1: Loading mplb.conf ***
*** Phase 2: Checking the device file in /dev ***
*** Phase 3: Checking the device file in /devices ***
*** Phase 4: Checking mplb.conf against the configuration in /devices ***
*** Phase 5: Updating mplb.conf ***
=== Multipath configuration plan ===
Existing instance: 0
New instance: 1
Add a path : 0 (instance)
Delete a path : 0 (instance)

6. Check the device path name of the boot device. The grmpdautoconf command carried out in
Step 4 displays a combination of a multipath management special file and a selected access
special file. Use the output of the ls command to identify the boot disk and the physical device
path name of each configuration path. Physical device path names thus found are used in Steps
9 and 12.

# ls -l <Boot disk slice 0 > <RETURN>


# ls -l <Configuration path slice 2 > <RETURN>

Assume that grmpdautoconf has delivered the following output listing:

*** Phase 1: read mplb.conf ***


*** Phase 2: read /dev ***

*** Phase 3: read /devices ***
*** Phase 4: compare mplb.conf and /devices ***
Path : Action : Element path : LUN : Storage
mplb0 : new : c2t16d0s2 c13t16d0s2 : 0 : E40004641-130011 :
mplb1 : new : c2t16d1s2 c13t16d1s2 : 1 : E40004641-130011 :
mplb2 : new : c2t16d2s2 c13t16d2s2 : 2 : E40004641-130011 :

The boot disk and the paths that constitute it are as follows:

Boot disk /dev/FJSVmplb/rdsk/mplb0s0

Configuration path /dev/rdsk/c2t16d0s2

/dev/rdsk/c13t16d0s2

Execute the ls command to check the device path names.

# ls -l /dev/FJSVmplb/rdsk/mplb0s0 <RETURN>
lrwxrwxrwx 1 root root 36 Aug 29 12:05 /dev/FJSVmplb/rdsk/mplb0s0 ->
../../../devices/pseudo/mplb@0:a,raw
                 ^^^^^^^^^^^^^^^
# ls -l /dev/rdsk/c2t16d0s2 <RETURN>
lrwxrwxrwx 1 root root 58 Aug 29 17:13 /dev/rdsk/c2t16d0s2 ->
../../devices/pci@1,700000/fibre-channel@0/mplbt@10,0:c,raw
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# ls -l /dev/rdsk/c13t16d0s2 <RETURN>
lrwxrwxrwx 1 root root 58 Aug 29 17:13 /dev/rdsk/c13t16d0s2 ->
../../devices/pci@2,600000/fibre-channel@0/mplbt@10,0:c,raw
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

7. Set a Fibre Channel driver link speed.

Edit the Fibre Channel driver setting file (/kernel/drv/fjpfca.conf) to set a Fibre Channel driver
link speed.

The link speed can be left at automatic selection for easier connectivity, but the expected link
speed may not be attained, depending on the connection status. Therefore, explicitly set the
highest transmission rate available in the environment.

Example: Set fjpfca0 to a link speed of 4 Gbps.

port="fjpfca0:nport:sp4";

For more information about configuring fjpfca.conf, refer to the "FUJITSU PCI Fibre Channel
Guide."

8. Apply the disk array device boot settings to all the Fibre Channel cards that are used to access
the boot disk.

Example: Set on fjpfca0.

# /usr/sbin/FJSVpfca/fc_hbaprp -i fjpfca0 -b ENABLE <RETURN>


# /usr/sbin/FJSVpfca/fc_hbaprp -i fjpfca0 -c /kernel/drv/fjpfca.conf <RETURN>
# /usr/sbin/FJSVpfca/fc_hbaprp -i fjpfca0 -v <RETURN>
boot_function : ENABLE
topology : N_Port
link-speed : 4G
boot wait time : DISABLE ( interval time : DISABLE , boot wait msg : DISABLE )
bind-target: Target_ID=16,WWN=0x210000e0004101d9

9. Correct the system settings to meet the multipath implementation.


 SAN Boot environment with the UFS file system
a. Root device setting (/etc/system)

Edit /etc/system file to set rootdev and forceload. For the rootdev setting, set the boot disk
physical device name as found in Step 6 above, excluding "../../devices" at the beginning and
",raw" at the end.

If forceload settings for these drivers already exist in the /etc/system file, no additional
settings are needed.

rootdev: /pseudo/mplb@0:a
forceload: drv/mplbt
forceload: drv/mplb
forceload: drv/sd
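The rootdev value can be derived mechanically from the symbolic link target found in Step 6. A minimal sketch follows; the link target below is this guide's example value, not one read from a live system:

```shell
# Example symlink target of /dev/FJSVmplb/rdsk/mplb0s0 (assumed from Step 6 of this guide).
target='../../../devices/pseudo/mplb@0:a,raw'
# Strip everything up to "/devices" and the trailing ",raw" to obtain the rootdev value.
rootdev=$(printf '%s\n' "$target" | sed -e 's|^.*/devices||' -e 's|,raw$||')
echo "rootdev: $rootdev"
```

The output, "rootdev: /pseudo/mplb@0:a", matches the /etc/system entry shown above.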

b. Mount information setting (/etc/vfstab)

Edit /etc/vfstab file to rewrite each entry to a path name after the multipath implementation.

/dev/FJSVmplb/dsk/mplb0s0 /dev/FJSVmplb/rdsk/mplb0s0 / ufs 1 no -

/dev/FJSVmplb/dsk/mplb0s3 - - swap - no -
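The rewrite can be sketched with sed. The original device name c2t16d0 below is assumed from this guide's examples; verify the device names against your own /etc/vfstab before applying anything like this:

```shell
# Example vfstab entries before the multipath conversion (device names assumed).
cat > /tmp/vfstab.example <<'EOF'
/dev/dsk/c2t16d0s0 /dev/rdsk/c2t16d0s0 / ufs 1 no -
/dev/dsk/c2t16d0s3 - - swap - no -
EOF
# Rewrite both the block and raw device names to the multipath names.
sed -e 's|/dev/dsk/c2t16d0|/dev/FJSVmplb/dsk/mplb0|g' \
    -e 's|/dev/rdsk/c2t16d0|/dev/FJSVmplb/rdsk/mplb0|g' \
    /tmp/vfstab.example
```

Review the output by eye before installing it as /etc/vfstab; a mistaken entry here prevents the system from mounting its root file system.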

c. Boot disk access setting (/kernel/drv/sd.conf)

If the target ID of the boot device differs between the two Fibre Channel cards that constitute
the multipath, the target IDs for some of the boot disk paths may not be written to sd.conf. In
that case, the following settings are needed.

○ Edit /kernel/drv/sd.conf

Add a definition for the target ID of each path used by the boot disk.

Example: Add a definition for target ID 18.

name="sd" class="scsi" target=18 lun=0;

○ Reconfigure the sd driver
# touch /reconfigure <RETURN>
or
# update_drv -f sd <RETURN>
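A quick way to confirm that sd.conf now defines every target ID used by the boot disk's paths is to grep for each ID. A sketch follows; the file contents and the target IDs 16 and 18 are assumed for illustration:

```shell
# Example sd.conf fragment after the edit (contents assumed for illustration).
cat > /tmp/sd.conf.example <<'EOF'
name="sd" class="scsi" target=16 lun=0;
name="sd" class="scsi" target=18 lun=0;
EOF
# Verify that a definition exists for each target ID used by the boot disk paths.
for tid in 16 18; do
  grep -q "target=$tid " /tmp/sd.conf.example && echo "target $tid: defined"
done
```

On the real system, run the grep against /kernel/drv/sd.conf before reconfiguring the sd driver.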

 SAN Boot environment with the ZFS file system


a. Root device setting (/etc/system)

Edit /etc/system file to set forceload.

forceload: drv/mplb
forceload: drv/sd

b. Boot disk access setting (/kernel/drv/sd.conf)

If the target ID of the boot device differs between the two Fibre Channel cards that constitute
the multipath, the target IDs for some of the boot disk paths may not be written to sd.conf. In
that case, the following settings are needed.

○ Edit /kernel/drv/sd.conf

Add a definition for the target ID of each path used by the boot disk.

Example: Add a definition for target ID 18.

name="sd" class="scsi" target=18 lun=0;

○ Reconfigure the sd driver
# touch /reconfigure <RETURN>
or
# update_drv -f sd <RETURN>

10. Configure a dump device as required.


 SAN Boot environment with the UFS file system

# dumpadm -d /dev/FJSVmplb/dsk/mplb0s3 <RETURN>

 SAN Boot environment with the ZFS file system

# dumpadm -d /dev/zvol/dsk/rootpool/dump <RETURN>

11. Shut down the system and reset the OBP environment.

# /usr/sbin/shutdown -y -i0 -g0 <RETURN>


ok reset-all <RETURN>

12. Configure a boot device.

Configure a boot device on all redundant paths to the boot device in the OBP. Take the
physical device path name of each configuration path found in Step 6, remove "../../devices" at
the beginning and ":*,raw" at the end, and replace "mplbt" with "disk."

ok nvalias raid1 /pci@1,700000/fibre-channel@0/disk@10,0 <RETURN>


ok nvalias raid2 /pci@2,600000/fibre-channel@0/disk@10,0 <RETURN>
ok setenv boot-device raid1 raid2 <RETURN>
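The path transformation described above can be sketched with sed. The input below is the example link target from Step 6 of this guide, so the result is illustrative rather than read from a live system:

```shell
# Example symlink target of a configuration path (assumed from Step 6 of this guide).
target='../../devices/pci@1,700000/fibre-channel@0/mplbt@10,0:c,raw'
# Strip the "../../devices" prefix and the ":<minor>,raw" suffix, then
# replace the "mplbt" node name with "disk" to form the OBP device path.
obp=$(printf '%s\n' "$target" | sed -e 's|^\.\./\.\./devices||' -e 's|:[^:]*$||' -e 's|mplbt|disk|')
echo "$obp"
```

The output, /pci@1,700000/fibre-channel@0/disk@10,0, is exactly the path passed to nvalias above.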

13. If you use SPARC Enterprise T1000/T2000/T5120/T5140/T5220/T5240/T5440, execute the
following command:

ok setenv auto-boot? true <RETURN>

If you use SPARC Enterprise M3000/M4000/M5000/M8000/M9000, set the mode switch on the
server back to AUTO.
14. Start the host.

ok boot <RETURN>

4.3 Boot Disk Mirroring
This section explains how to mirror two boot disks, each configured as a multipath implementation,
while the OS is booted from one of them.

4.3.1 Mirroring by PRIMECLUSTER GDS

1. Verify that zoning has been implemented by the FC Switch as shown above.
2. Verify that the connection of the Fibre Channel card to the disk array device at the mirroring
destination has been configured properly in the OBP environment.

ok cd /pci@1,700000/fibre-channel@0 <RETURN>
ok PROBE fjpfca-info <RETURN>
Target -- DID 10500 210000e0004101d9 FUJITSU-E4000-0000
Target -- DID 10600 210000e0004101da FUJITSU-E4000-0000

3. Boot the OS and verify that the connection of the Fibre Channel driver to the disk array device
at the mirroring destination has been configured properly.

# /usr/sbin/FJSVpfca/fc_info -p <RETURN>
adapter=fjpfca#0 :
port_id=0x010500 tid=0 wwn=210000e0004101d9 adapter=fjpfca#1 connected
class=class3
port_id=0x010600 tid=0 wwn=210000e0004101da adapter=fjpfca#1 connected
class=class3

4. If a Multipath Driver is yet to be set on the disk array device at the mirroring destination, set it.
Referring to "ETERNUS Multipath Driver User's Guide," set a Multipath Driver and create a
multipath disk on the disk array device (ETERNUS #2) at the mirroring destination.

5. If using the ETERNUS Multipath Driver, add a definition of the mirroring destination disk
to target driver setting file /kernel/drv/sd.conf.
Example: Define target ID 16, logical unit 1.

name="sd" class="scsi" target=16 lun=1;

6. Install PRIMECLUSTER GDS, referring to its manual.

7. Refer to “PRIMECLUSTER Global Disk Services Guide” and mirror the disk at the mirroring
source and destination with each other.

 If a system disk is mirrored using PRIMECLUSTER GDS, the message below may be
displayed at boot time. This message may be ignored.

NOTICE: "forceload: drv/<driver name>" appears more than once in /etc/system.

* One of mplb, mplbt and sd appears in place of <driver name>.

This message is displayed if a forceload setting is defined in duplicate in the /etc/system file. To
suppress this message, delete the later occurrences of the duplicated forceload setting.

forceload: drv/mplb
~
forceload: drv/mplb  <- Delete this line.

8. If a snapshot of a system volume is created using PRIMECLUSTER GDS Snapshot, perform
Steps 1 through 5 above for the snapshot disk in the same way as for the disk at the mirroring
destination. Then, configure the snapshot as instructed in "PRIMECLUSTER Global Disk
Services Guide."

4.3.2 Notes on using PRIMECLUSTER

4.3.2.1 Cluster system building procedure


Build a cluster system in the procedural steps described below.

Chapter 5 Backing Up and Restoring Boot
Disks

Boot disks in the environment described in this guide can be backed up and restored by following
the procedural steps explained below.

 Boot the OS from an internal disk and back up and restore it by file system or by lun.

 Boot the OS from a network and back up and restore the boot disk by file system or by lun.

 Back up and restore the boot disk using ETERNUS EC (Equivalent Copy) or OPC (One
Point Copy).
* ETERNUS SF AdvancedCopy Manager is required.

In the environment explained in this guide, the method of booting the Solaris OS from CD/DVD and
backing up the boot disk cannot be used. This chapter concerns the procedures: "Boot the OS from an
internal disk and back up and restore it by file system or by lun" and "Boot the OS from a network and
back up and restore the boot disk by file system or by lun." For information on the procedure "Back up
and restore the boot disk using ETERNUS EC (Equivalent Copy) or OPC (One Point Copy) with
ETERNUS SF AdvancedCopy Manager," refer to the ETERNUS SF AdvancedCopy Manager manual.

For information on the procedures for backing up and restoring a system disk that is mirrored by
PRIMECLUSTER GDS, refer to the "PRIMECLUSTER Global Disk Services Guide." Although
that guide covers the method of booting the Solaris OS from CD/DVD and backing up and restoring
a system disk, the same method can also be used when the OS has been booted from a network or
internal disk. If PRIMECLUSTER GDS Snapshot is used, the OS can be booted from a boot disk on
a disk array device and then the boot disk can be backed up and restored using the ETERNUS
Advanced Copy function or the PRIMECLUSTER GDS copy function.

Depending on the type of tape device used, certain precautions may apply when configuring or
performing the backup and restore operations. Refer in advance to the instruction manual for the
type of tape device used to ensure that boot disks are backed up and restored as instructed.

In an environment in which a system disk has optional software installed on it that has a module running
as part of a kernel, such as a driver or file system, additional precautions may apply. Refer to the manual
for the optional software and follow its instructions.

Refer to "Solaris ZFS Administration Guide" for details on backing up and restoring a ZFS file
system environment.

This chapter assumes that Solaris is installed on disk device c7t16d0 on a disk array device.

5.1 Backing Up/Restoring after Booting OS
from a Network
If a boot disk residing on a disk array device has been created by installing the OS from a network, follow
the procedures explained in this section to back up and restore the boot disk.

5.1.1 Backup procedure


1. Boot the OS from a network in single-user mode with the -s option specified.

ok boot net -s <Return>

2. Back up the boot disk. The boot disk can be backed up by file system or by lun.
a. Back up by file system
 UFS file system environment
(1) The procedure for backing up a boot disk by file system using the ufsdump(1M)
command is explained below. Disk partition information, such as slice size, is not
backed up and needs to be recorded beforehand using the prtvtoc(1M) command or
format(1M) command.

# prtvtoc /dev/rdsk/c7t16d0s2 <Return>

or

# format /dev/rdsk/c7t16d0s2 <Return>


format> partition <Return>
partition> print <Return>

(2) Back up a boot disk using the ufsdump(1M) command. In this example,
/dev/dsk/c7t16d0s0 is used as a boot disk, and tape device /dev/rmt/0 is used.

# ufsdump 0cf /dev/rmt/0 /dev/rdsk/c7t16d0s0 <Return>

 ZFS file system environment


(1) The procedure for backing up a boot disk by file system using the zfs(1M) command is
explained below. Disk partition information, such as slice size, is not backed up and
needs to be recorded beforehand using the prtvtoc(1M) command or format(1M)
command.

# prtvtoc /dev/rdsk/c7t16d0s2 <Return>

or

# format /dev/rdsk/c7t16d0s2 <Return>


format> partition <Return>

partition> print <Return>

(2) Back up a boot disk using the zpool(1M) command.

An error message might be displayed when the pool is imported; ignore it and
proceed to the next step.

# zpool import <Return>


pool: rpool
id: 4856116377389642800
state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

rpool ONLINE
c7t16d0s0 ONLINE
# zpool import 4856116377389642800 <Return>  <- Specify the ID confirmed with
zpool import.
# zfs list <Return>
NAME USED AVAIL REFER MOUNTPOINT
rpool 6.33G 13.2G 94K /rpool
rpool/ROOT 4.83G 13.2G 18K legacy
rpool/ROOT/s10_1008 4.83G 13.2G 4.76G /
rpool/dump 1.00G 13.2G 1.00G -
rpool/export 38K 13.2G 20K /export
rpool/export/home 18K 13.2G 18K /export/home
rpool/swap 512M 13.7G 10.0M -
#

(3) Create a snapshot.

# zfs snapshot rpool/ROOT/s10_1008@snapshot <Return>

If creating the snapshot fails, execute the following commands and then create
the snapshot again.

# zfs set mountpoint=legacy rpool/ROOT/s10_1008 <Return>


# mount -F zfs rpool/ROOT/s10_1008 /mnt <Return>
# umount /mnt <Return>

(4) Back up a boot disk using the zfs(1M) command. In this example,
rpool/ROOT/s10_1008 is used as a boot path, and tape device /dev/rmt/0 is used.

# zfs send rpool/ROOT/s10_1008@snapshot > /dev/rmt/0 <Return>

b. Back up by disk
(1) Back up a boot disk using the dd(1M) command. In this example, tape device
/dev/rmt/0 is used.

# dd if=/dev/rdsk/c7t16d0s2 of=/dev/rmt/0 bs=64k <Return>

In this format, if= is followed by the name of the disk to be backed up, such as
/dev/rdsk/c0t0d0s2, specified in the character type (/dev/rdsk/...). Remember to
specify s2, a slice that designates a disk as a whole.

The dd(1M) command does not support multiple volumes, so the backup may fail
depending on the size of the LUN.
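When backing up with dd, it is prudent to verify the copy before relying on it. The minimal sketch below demonstrates the check on scratch files; on a real system, substitute the raw device and the backup destination, and note that reading back from a tape device requires rewinding it first:

```shell
# Demonstrate the verification on scratch files rather than a real LUN or tape.
dd if=/dev/urandom of=/tmp/lun.img bs=64k count=4 2>/dev/null
dd if=/tmp/lun.img of=/tmp/lun.bak bs=64k 2>/dev/null
# The two checksums must match for the backup to be considered intact.
cksum /tmp/lun.img /tmp/lun.bak
```

If the checksums differ, do not overwrite any earlier backup; repeat the dd copy instead.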

5.1.2 Restore procedure


1. Boot the OS from a network in single-user mode with the -s option specified.

ok boot net -s <Return>

2. Restore the boot disk. Restore it in the same unit in which it has been backed up.
a. Restore by file system
 UFS file system environment
(1) If the boot disk has a newly defined lun, or its lun was previously used for another
purpose, create a disk slice and a disk label using the format(1M) command. Refer to the
disk partition information recorded at backup time for the size of each slice and other
details.

# format <Return>

For information on creating a disk slice and a disk label using the format(1M)
command, browse through the online manual.

(2) Create a new file system using the newfs(1M) command.

# newfs /dev/rdsk/c7t16d0s0 <Return>

Here, specify the slice name of the restore destination device in the character type
(/dev/rdsk/...).

(3) Mount the boot disk. In this example, /dev/dsk/c7t16d0s0 is used as a boot disk.

# mount -F ufs /dev/dsk/c7t16d0s0 /mnt <Return>

(4) Move to the mounted directory.

# cd /mnt <Return>

(5) Restore the boot disk using the ufsrestore(1M) command. In this example, tape device
/dev/rmt/0 is used. For example, the boot disk might be restored from another LU in
the disk array device.

# ufsrestore rf /dev/rmt/0 <Return>

(6) Create a boot block using the installboot(1M) command. For information on creating a
boot block with the installboot(1M) command, browse through the online manual.
Use the boot block file on the restore destination device to create the boot block.

Here, specify slice 0 of the restore destination device in the character type
(/dev/rdsk/...). In this example, /dev/dsk/c7t16d0s0 is used as a boot disk.

# installboot /mnt/usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c7t16d0s0


<Return>

(7) Move to the root directory and unmount the boot disk.

# cd / <Return>
# umount /mnt <Return>

(8) Check the file system for consistency using the fsck(1M) command.

# fsck /dev/rdsk/c7t16d0s0 <Return>

Here, specify the slice name of the restore destination device in the character type
(/dev/rdsk/...).

 ZFS file system environment


(1) If the boot disk has a newly defined lun, or its lun was previously used for another
purpose, create a disk slice and a disk label using the format(1M) command. Refer to the
disk partition information recorded at backup time for the size of each slice and other
details.

# format <Return>

For information on creating a disk slice and a disk label using the format(1M)
command, browse through the online manual.

(2) Create the ZFS file system.

# zpool create rpool c7t16d0s0 <Return>
# zfs create rpool/ROOT <Return>

(3) Restore the boot disk using the zfs(1M) command.

# zfs receive rpool/ROOT/s10_1008@snapshot < /dev/rmt/0 <Return>

(4) Set the mountpoint property to legacy.

# zfs set mountpoint=legacy rpool/ROOT/s10_1008 <Return>

(5) Mount the restore destination device.

# mount -F zfs rpool/ROOT/s10_1008 /mnt <Return>

(6) Create a boot block using the installboot(1M) command. For information on creating a
boot block with the installboot(1M) command, browse through the online manual.
Use the boot block file on the restore destination device to create the boot block.

Here, specify slice 0 of the restore destination device in the character type
(/dev/rdsk/...). In this example, /dev/dsk/c7t16d0s0 is used as a boot disk.

# installboot -F zfs /mnt/usr/platform/`uname -i`/lib/fs/zfs/bootblk


/dev/rdsk/c7t16d0s0 <Return>

(7) Set the mountpoint property back to the root path.

An error message is displayed when the mountpoint property is set to the root
path; ignore it and proceed to the next step.

# zfs set mountpoint=/ rpool/ROOT/s10_1008 <Return>

(8) Set the bootfs property.

# zpool set bootfs=rpool/ROOT/s10_1008 rpool <Return>

b. Restore by disk
(1) Restore the boot disk using the dd(1M) command. In this example, tape device
/dev/rmt/0 is used.

# dd if=/dev/rmt/0 of=/dev/rdsk/c7t16d0s2 bs=64k <Return>

Specify the name of the disk to be restored in the character type (/dev/rdsk/...).
Remember to specify s2, a slice that designates the disk as a whole.

5.2 Backing Up/Restoring after Booting the OS
from an Internal Disk
If a boot disk residing on a disk array device has been created by copying the OS from an internal disk,
follow the procedures explained in this section to back up and restore the boot disk.

5.2.1 Backup procedure


1. Boot the OS from an internal disk in single-user mode with the -s option specified.

ok boot <internal disk> -s <Return>

2. Back up the boot disk. The boot disk can be backed up by file system or by lun.
a. Back up by file system
 UFS file system environment
(1) The procedure for backing up a boot disk by file system using the ufsdump(1M)
command is explained below. Disk partition information, such as slice size, is not
backed up and needs to be recorded beforehand using the prtvtoc(1M) command or
format(1M) command.

# prtvtoc /dev/rdsk/c7t16d0s2 <Return>

or

# format /dev/rdsk/c7t16d0s2 <Return>


format> partition <Return>
partition> print <Return>

(2) Back up the boot disk using the ufsdump(1M) command. In this example,
/dev/dsk/c7t16d0s0 is used as a boot disk, and tape device /dev/rmt/0 is used.

# ufsdump 0ucf /dev/rmt/0 /dev/rdsk/c7t16d0s0 <Return>

 ZFS file system environment


(1) The procedure for backing up a boot disk by file system using the zfs(1M) command
is explained below. Disk partition information, such as slice size, is not backed up
and needs to be recorded beforehand using the prtvtoc(1M) command or format(1M)
command.

# prtvtoc /dev/rdsk/c7t16d0s2 <Return>

or

# format /dev/rdsk/c7t16d0s2 <Return>

format> partition <Return>
partition> print <Return>

(2) Back up a boot disk using the zpool(1M) command.

An error message might be displayed when the pool is imported; ignore it and
proceed to the next step.

# zpool import <Return>


pool: rpool
id: 4856116377389642800
state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

rpool ONLINE
c7t16d0s0 ONLINE
# zpool import 4856116377389642800 <Return>  <- Specify the ID confirmed with
zpool import.
# zfs list <Return>
NAME USED AVAIL REFER MOUNTPOINT
rpool 6.33G 13.2G 94K /rpool
rpool/ROOT 4.83G 13.2G 18K legacy
rpool/ROOT/s10_1008 4.83G 13.2G 4.76G /
rpool/dump 1.00G 13.2G 1.00G -
rpool/export 38K 13.2G 20K /export
rpool/export/home 18K 13.2G 18K /export/home
rpool/swap 512M 13.7G 10.0M -
#

(3) Create a snapshot.

# zfs snapshot rpool/ROOT/s10_1008@snapshot <Return>

If creating the snapshot fails, execute the following commands and then create
the snapshot again.

# zfs set mountpoint=legacy rpool/ROOT/s10_1008 <Return>


# mount -F zfs rpool/ROOT/s10_1008 /mnt <Return>
# umount /mnt <Return>

(4) Back up a boot disk using the zfs(1M) command. In this example,
rpool/ROOT/s10_1008 is used as a boot path, and tape device /dev/rmt/0 is used.

# zfs send rpool/ROOT/s10_1008@snapshot > /dev/rmt/0 <Return>

b. Back up by disk
(1) Back up the boot disk using the dd(1M) command. In this example, tape device
/dev/rmt/0 is used. The boot disk may also be backed up to another LU on the disk
array device.

# dd if=/dev/rdsk/c7t16d0s2 of=/dev/rmt/0 bs=64k <Return>

In this format, if= is followed by the name of the disk to be backed up, such as
/dev/rdsk/c0t0d0s2, specified in the character type (/dev/rdsk/...). Remember to
specify s2, a slice that designates a disk as a whole.

The dd(1M) command does not support multiple volumes, so the backup may fail
depending on the size of the LUN.

5.2.2 Restore procedure


1. Boot the OS from an internal disk in single-user mode with the -s option specified.

ok boot <internal disk> -s <Return>

2. Restore the boot disk in the same unit in which it has been backed up.
a. Restore by file system
 UFS file system environment
(1) If the boot disk has a newly defined lun, or its lun was previously used for another
purpose, create a disk slice and a disk label using the format(1M) command. Refer to the
disk partition information recorded at backup time for the size of each slice and other
details.

# format <Return>

For information on creating a disk slice and a disk label using the format(1M)
command, browse through the online manual.

(2) Create a new file system using the newfs(1M) command.

# newfs /dev/rdsk/c7t16d0s0 <Return>

Specify the name of the slice in which the boot disk is restored in the character type
(/dev/rdsk/...).

(3) Mount the boot disk.

In this example, /dev/dsk/c7t16d0s0 is used as a boot disk.

# mount -F ufs /dev/dsk/c7t16d0s0 /mnt <Return>

(4) Move to the mounted directory.

# cd /mnt <Return>

(5) Restore the boot disk using the ufsrestore(1M) command. In this example, tape device
/dev/rmt/0 is used.

# ufsrestore rf /dev/rmt/0 <Return>

(6) Create a boot block using the installboot(1M) command. For information on creating a
boot block with the installboot(1M) command, browse through the online manual.
Use the boot block file on the restore destination device to create the boot block.

Here, specify slice 0 of the restore destination device in the character type
(/dev/rdsk/...). In this example, /dev/dsk/c7t16d0s0 is used as a boot disk.

# installboot /mnt/usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c7t16d0s0


<Return>

(7) Move to the root directory and unmount the boot disk.

# cd / <Return>
# umount /mnt <Return>

(8) Check the file system for consistency using the fsck(1M) command.

# fsck /dev/rdsk/c7t16d0s0 <Return>

Specify the name of the slice in which the boot disk is restored in the character type
(/dev/rdsk/...).

 ZFS file system environment


(1) If the boot disk has a newly defined lun, or its lun was previously used for another
purpose, create a disk slice and a disk label using the format(1M) command. Refer to
the disk partition information recorded at backup time for the size of each slice and
other details.

# format <Return>

For information on creating a disk slice and a disk label using the format(1M)
command, browse through the online manual.

(2) Create the ZFS file system.

# zpool create rpool c7t16d0s0 <Return>


# zfs create rpool/ROOT <Return>

(3) Restore the boot disk using the zfs(1M) command.

# zfs receive rpool/ROOT/s10_1008@snapshot < /dev/rmt/0 <Return>

(4) Set the mountpoint property to legacy.

# zfs set mountpoint=legacy rpool/ROOT/s10_1008 <Return>

(5) Mount the restore destination device.

# mount -F zfs rpool/ROOT/s10_1008 /mnt <Return>

(6) Create a boot block using the installboot(1M) command. For information on creating a
boot block with the installboot(1M) command, browse through the online manual.
Use the boot block file on the restore destination device to create the boot block.

Here, specify slice 0 of the restore destination device in the character type
(/dev/rdsk/...). In this example, /dev/dsk/c7t16d0s0 is used as a boot disk.

# installboot -F zfs /mnt/usr/platform/`uname -i`/lib/fs/zfs/bootblk


/dev/rdsk/c7t16d0s0 <Return>

(7) Set the mountpoint property back to the root path.

An error message is displayed when the mountpoint property is set to the root
path; ignore it and proceed to the next step.

# zfs set mountpoint=/ rpool/ROOT/s10_1008 <Return>

(8) Set the bootfs property.

# zpool set bootfs=rpool/ROOT/s10_1008 rpool <Return>

b. Restore by disk
(1) Restore the boot disk using the dd(1M) command. In this example, tape device
/dev/rmt/0 is used. The boot disk may also be restored from another LU on the disk
array unit that has been backed up beforehand.

# dd if=/dev/rmt/0 of=/dev/rdsk/c7t16d0s2 bs=64k <Return>

Here, specify the name of the disk to be restored in the character type
(/dev/rdsk/...). Remember to specify s2, a slice that designates the disk as a whole.

Appendix A Boot Device Setup Commands

This appendix focuses on the commands used to configure the boot code on the Fibre Channel card.
These commands are executable on either the OS or the OBP.

The commands introduced here work only on the single-channel 4Gbps Fibre Channel card (SE0X7F11x)
and dual-channel 4Gbps Fibre Channel card (SE0X7F12x).

A.1 Command Executable on the OS


Perform this procedure with the OS booted and with the FUJITSU PCI Fibre Channel 4.0 or later
package installed.

1 fc_hbaprp
 Name

fc_hbaprp

 Format

/usr/sbin/FJSVpfca/fc_hbaprp -i adpname -f tgt_id -P WWN

-f tgt_id -I PORT_ID

-d tgt_id

-D [-y]

-w boot-wait-time

-l linkspeed

-t topology

-v

-s savefile

-r|-R filename

-c conffile

-C [-y]

-b ENABLE|DISABLE

 Function

Configures the boot code on the Fibre Channel card.

 Operands

The settings that work on the boot code on the Fibre Channel card are listed below.

All these settings need to be accompanied by the specification of -i adpname. Specify the
instance name of the Fibre Channel driver as adpname.

-i adpname -f tgt_id -P WWN

-i adpname -f tgt_id -I PORT_ID

Configure a target device. Up to 10 entries can be registered.

The following values can be set:

tgt_id Specifies Target_ID of the target device with a decimal number.

WWN Specifies WWPN of the target device with a hexadecimal number (boot device
specification by WWPN).

PORT_ID Specifies Port_ID(DID) of the target device with a hexadecimal number (boot device
specification by Port_ID).

-i adpname -d tgt_id

Erases the setting of the target device. The following value can be set:

tgt_id Specifies Target_ID of the target device with a decimal number.

-i adpname -D [-y]

Erases all settings of the target device.

If -y is not attached, a message asking if you are sure you want to erase the settings is displayed.

If -y is attached, the settings are erased unconditionally.

-i adpname -w boot-wait-time

Sets a boot wait time in seconds. The following value can be set:

boot-wait-time Specifies a boot wait time with a decimal number. Either 0 second (no boot-
wait-time) or a value between 180 and 86,400 seconds can be set.

-i adpname -l linkspeed

Sets a link speed. One of the following values can be set:

1G|1g : Sets 1 Gbps.

2G|2g : Sets 2 Gbps.

4G|4g : Sets 4 Gbps.

AUTO|auto : Sets a link speed with AUTO.

-i adpname -t topology

Sets a topology. One of the following values can be set:

NPORT|nport : Makes an NPORT connection.

Connects to the Fibre Channel switch.

AL|al : Sets the FC-AL topology.

AUTO|auto : Sets a topology automatically.

-i adpname -v

Displays the settings as listed below.

Item            Value                          Description
boot function   DISABLE/ENABLE                 Enables or disables the boot function.
Target_ID       Example) 0 (hexadecimal)       Bound Target_ID
Target WWN      Example) 210000e0004101d9      Bound target WWN
                (hexadecimal)
Target DID      Example) 010111 (hexadecimal)  Bound DID
topology        AL/N_Port/AUTO                 Set topology. AUTO refers to an
                                               automatically set topology.
link-speed      1G/2G/4G/AUTO                  Set link speed. AUTO refers to an
                                               automatically set link speed.
boot wait time  DISABLE or numeric value       Set boot wait time. Value is specified in
                (decimal)                      seconds. DISABLE overrides a boot wait
                                               time.
interval time   DISABLE                        This item is not available and cannot be
                                               changed.
boot wait msg   DISABLE                        This item is not available and cannot be
                                               changed.

-i adpname -s savefile

Saves the settings as listed below.

Item            Value                                    Description
boot function   DISABLE/ENABLE                           Enables or disables the boot function.
Target_ID       Example) 0 (hexadecimal)                 Bound Target_ID
Target WWN      Example) 210000e0004101d9 (hexadecimal)  Bound target
Target DID      Example) 010111 (hexadecimal)            Bound DID
topology        AL/N_Port/AUTO                           Set topology. AUTO refers to an automatically set topology.
link-speed      1G/2G/4G/AUTO                            Set link speed. AUTO refers to an automatically set link speed.
boot wait time  DISABLE or numeric value (decimal)       Set boot wait time. Value is specified in seconds. DISABLE overrides a boot wait time.
interval time   DISABLE                                  This item is not available and cannot be changed.
boot wait msg   DISABLE                                  This item is not available and cannot be changed.

-i adpname -r|-R filename

Updates the boot code on the Fibre Channel card with the settings saved with -s.

If -r is specified, the boot code on the Fibre Channel card is updated with all settings, except for
the boot function.

If -R is specified, the boot code on the Fibre Channel card is updated with all settings, including
the boot function.

-i adpname -c conffile

Updates the boot code on the Fibre Channel card with the settings of the driver setting file
(/kernel/drv/fjpfca.conf).

-i adpname -C [-y]

Erases all settings, except for enabling/disabling of the boot function.

If -y is not attached, a message asking if you are sure you want to erase the settings is displayed.

If -y is attached, the settings are erased unconditionally.

-i adpname -b ENABLE|DISABLE

Enables or disables the Fibre Channel card boot function. One of the following values can be
set:

ENABLE : Enables the boot function.

DISABLE : Disables the boot function.

 Note

Enabling or disabling the boot function on either port of the dual-channel 4Gbps Fibre Channel
card (SE0X7F12x) automatically applies the same setting to the other port. The setting cannot be
changed for one port alone.

 Example

# /usr/sbin/FJSVpfca/fc_hbaprp -i fjpfca0 -f 0 -P 0x210000e0001014d9 <RETURN>


# /usr/sbin/FJSVpfca/fc_hbaprp -i fjpfca0 -f 1 -I 0x10c00 <RETURN>
# /usr/sbin/FJSVpfca/fc_hbaprp -i fjpfca0 -d 0 <RETURN>
# /usr/sbin/FJSVpfca/fc_hbaprp -i fjpfca0 -D <RETURN>
delete all bind registration ? [y(Y),n(N) ] y <RETURN>
# /usr/sbin/FJSVpfca/fc_hbaprp -i fjpfca0 -w 180 <RETURN>
# /usr/sbin/FJSVpfca/fc_hbaprp -i fjpfca0 -l 4g <RETURN>
# /usr/sbin/FJSVpfca/fc_hbaprp -i fjpfca0 -t nport <RETURN>
# /usr/sbin/FJSVpfca/fc_hbaprp -i fjpfca0 -v <RETURN>
boot function : ENABLE
topology : N_Port
link-speed : 4G
boot wait time : 180 ( interval time : DISABLE , boot wait msg : DISABLE )
bind-target: Target_ID=0,WWPN=0x210000e0001014d9
# /usr/sbin/FJSVpfca/fc_hbaprp -i fjpfca0 -s savefile <RETURN>
# /usr/sbin/FJSVpfca/fc_hbaprp -i fjpfca0 -r savefile <RETURN>
# /usr/sbin/FJSVpfca/fc_hbaprp -i fjpfca0 -c /kernel/drv/fjpfca.conf <RETURN>
# /usr/sbin/FJSVpfca/fc_hbaprp -i fjpfca0 -b ENABLE <RETURN>

A.2 Command Executable on the OBP
Before starting the procedure, set the server to maintenance mode and restart it. If you use SPARC
Enterprise T1000/T2000/T5120/T5140/T5220/T5240/T5440, execute the following commands:

ok setenv auto-boot? false <RETURN>


ok reset-all <RETURN>

If you use SPARC Enterprise M3000/M4000/M5000/M8000/M9000, set the server mode switch to
service mode and execute the following command:

ok reset-all <RETURN>

If an attempt is made to configure the Fibre Channel card under any other conditions, the card could
hang up. If this happens, turn the power to the server off, and then turn it back on.

To execute these commands, move to the device node of the Fibre Channel card to be configured.

Example: single-channel 4Gbps Fibre Channel card (SE0X7F11x) and dual-channel 4Gbps Fibre
Channel card (SE0X7F12x) mounted on a server

ok show-devs <RETURN>
/pci@1,700000
/pci@2,600000
...
/openprom
/chosen
/packages
/pci@1,700000/fibre-channel@0 *physical path name of the single-channel 4Gbps Fibre Channel card
/pci@2,600000/fibre-channel@0 *physical path name of the dual-channel 4Gbps Fibre Channel card port0
/pci@2,600000/fibre-channel@0,1 *physical path name of the dual-channel 4Gbps Fibre Channel card port1
ok cd /pci@1,700000/fibre-channel@0 <RETURN>
ok

1 fjpfca-set-bootfunction
 Name

fjpfca-set-bootfunction

 Format

ENABLE | DISABLE fjpfca-set-bootfunction

 Function

Enables or disables the Fibre Channel card boot function.

Note) After this command has been executed, restart the server or execute the reset-all command.
When configuring more than one card, remember to execute reset-all for every card mounted.

 Operands

One of the following values can be set:

ENABLE : Enable the boot function.

DISABLE : Disables the boot function.

 Note

Enabling or disabling the boot function on either port of the dual-channel 4Gbps Fibre Channel
card (SE0X7F12x) automatically applies the same setting to the other port. The setting cannot be
changed for one port alone.

 Example

ok ENABLE fjpfca-set-bootfunction <RETURN>


ok reset-all <RETURN>
...
ok DISABLE fjpfca-set-bootfunction <RETURN>
ok reset-all <RETURN>

2 fjpfca-output-prop
 Name

fjpfca-output-prop

 Format

fjpfca-output-prop

 Function

Displays the settings stored in ROM on the Fibre Channel card.

Item            Value                                    Description
boot function   DISABLE/ENABLE                           Enables or disables the boot function.
Target_ID       Example) 0 (hexadecimal)                 Bound Target_ID
Target WWN      Example) 210000e0004101d9 (hexadecimal)  Bound target
Target DID      Example) 010111 (hexadecimal)            Bound DID
topology        AL/N_Port/AUTO                           Set topology. AUTO refers to an automatically set topology.
link-speed      1G/2G/4G/AUTO                            Set link speed. AUTO refers to an automatically set link speed.
boot wait time  DISABLE or numeric value (decimal)       Set boot wait time. Value is specified in seconds. DISABLE overrides a boot wait time.
interval time   DISABLE                                  This item is not available and cannot be changed.
boot wait msg   DISABLE                                  This item is not available and cannot be changed.

 Example

ok fjpfca-output-prop <RETURN>
boot function : ENABLE
topology : AUTO
link-speed : AUTO
boot wait time : DISABLE ( interval time : DISABLE , boot wait msg : DISABLE )
bind-target: Target_ID=0,WWPN=0x210000e0001014d9

3 fjpfca-set-linkspeed
 Name

fjpfca-set-linkspeed

 Format

1g | 2g | 4g | auto fjpfca-set-linkspeed

 Function

Sets a link speed.

 Operands

One of the following values can be set:

1g : Sets 1 Gbps.

2g : Sets 2 Gbps.

4g : Sets 4 Gbps.

auto: Sets a link speed automatically.

 Example

ok 1g fjpfca-set-linkspeed <RETURN>
ok 2g fjpfca-set-linkspeed <RETURN>
ok 4g fjpfca-set-linkspeed <RETURN>
ok auto fjpfca-set-linkspeed <RETURN>

 Default value

auto

 Note

This command is operable only if the boot function is enabled.

4 fjpfca-set-topology
 Name

fjpfca-set-topology

 Format

nport | al | auto fjpfca-set-topology

 Function

Sets a topology.

 Operands

nport: Makes an NPORT connection.


Connects to the Fibre Channel switch.

al: Sets the FC-AL topology.

auto: Sets a topology automatically.

 Example

ok nport fjpfca-set-topology <RETURN>


ok al fjpfca-set-topology <RETURN>
ok auto fjpfca-set-topology <RETURN>

 Default value

auto

 Note

This command is operable only if the boot function is enabled.

5 fjpfca-bind-target
 Name

fjpfca-bind-target

 Format

value1 target-alpa | target-did | target-wwpn value2 fjpfca-bind-target

 Function

Configure a target device. Up to 10 entries can be registered.

 Operands

Specify a target device to be connected with either Port_ID (DID) or WWPN.

value1: Specifies Target_ID of the target device with a hexadecimal number.

target-wwpn: Specifies WWPN of the target device (boot device specification by WWPN).

target-alpa: Specifies Port_ID (DID) of the target device (boot device specification by
Port_ID).

target-did: Specifies Port_ID (DID) of the target device (boot device specification by
Port_ID).

value2: Specifies Port_ID (DID) or WWPN with a hexadecimal number.

 Example

ok 0 target-wwpn 210000e0004101d9 fjpfca-bind-target <RETURN> * WWN specification


ok 1 target-alpa 11206 fjpfca-bind-target <RETURN> * Port_ID(DID) specification
ok 2 target-did 11000 fjpfca-bind-target <RETURN> * Port_ID(DID) specification

 Note

This command is operable only if the boot function is enabled.

6 led-flash
 Name

led-flash

 Format

[sec-time] led-flash

 Function

Flashes the LED on the Fibre Channel card (for 10 seconds by default and for 60 seconds at the
longest).

Use this command to verify the location of the Fibre Channel card and where the ports are
positioned (on a dual-channel 4Gbps Fibre Channel card).

 Operands

Specify the duration of time in which the LED flashes.

A number preceded by d# is assumed to be a decimal number. All other numbers are assumed
to be hexadecimal.

 Example

ok led-flash Flashes the LED for 10 seconds.


ok d# 10 led-flash Flashes the LED for 10 seconds (period specified with a decimal number).
ok 3c led-flash Flashes the LED for 60 seconds (period specified with a hexadecimal number).
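Because an unprefixed operand is read as hexadecimal, it is easy to specify an unintended duration. As a quick sanity check outside OBP, the same conversions can be done in a POSIX shell (this is only an illustration, not part of the card setup):

```shell
# 3c with no prefix is hexadecimal: 0x3c = 60 decimal seconds.
printf '%d\n' 0x3c   # prints 60
# Conversely, 60 decimal is 3c in hexadecimal.
printf '%x\n' 60     # prints 3c
```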

 Note

This command is operable only if the boot function is enabled.

7 fjpfca-set-boot-wait-time
 Name

fjpfca-set-boot-wait-time

 Format

wait-time | DISABLE fjpfca-set-boot-wait-time

 Function

If a power supply interlock control is implemented between the server and a disk array device,
it is necessary to let the disk array device start up before launching a boot sequence. Use the
fjpfca-set-boot-wait-time command to delay the launch of the boot sequence for a specified
period of time.

The Fibre Channel card monitors the status of the disk array device even while the OS waits to be
booted, and starts booting automatically as soon as it confirms that the disk array device has started
up, even before the specified period of time expires.

The period of time for which the boot sequence is delayed can be set in seconds between 180
and 86,400 seconds.

The command comes with this mode disabled by default.

Referring to the manual supplied with the disk array device, set (for the boot wait time) the
amount of time that it takes for the system to enter the READY state after the POWER switch is
pressed.

 Operands

Specify a wait time (in seconds) at boot time with a hexadecimal number.

A number preceded by d# is assumed to be a decimal number.

DISABLE overrides a boot wait time.

 Example

ok d# 1200 fjpfca-set-boot-wait-time Sets 1200 seconds


ok b4 fjpfca-set-boot-wait-time Sets 180 seconds
ok DISABLE fjpfca-set-boot-wait-time Overrides a boot wait time

 Default value

DISABLE

 Note

This command is operable only if the boot function is enabled.

8 fjpfca-info
 Name

fjpfca-info

 Format

STATUS | PROBE fjpfca-info

 Function

Displays connectivity information about the Fibre Channel card mounted.

 Operands

One of the following values can be set:

STATUS Displays the Link status of the Fibre Channel card.


Indicates whether the target device set up on the Fibre Channel card is connectable.

PROBE Displays a list of target devices connectable from the Fibre Channel card.

 Example

ok STATUS fjpfca-info <RETURN>


Link_status=up topology=Nport port_id=0x010000 wwpn=1000000b5d65c00a(0)
port_id=0x010100 tid=0 wwpn=210000e0004101d9 connected(0)
ok PROBE fjpfca-info <RETURN>
Target -- DID 10100 WWPN 210000e0004101d9 FUJITSU-E4000-0000

 Note

This command is operable only if the boot function is enabled.

9 fjpfca-target-cancel
 Name

fjpfca-target-cancel

 Format

tgt_id fjpfca-target-cancel

 Function

Erases the settings of a target device.

 Operands

tgt_id Specifies Target_ID of the target device with a hexadecimal number.

 Example

ok 0 fjpfca-target-cancel <RETURN>

 Note

This command is operable only if the boot function is enabled.

10 fjpfca-all-target-cancel
 Name

fjpfca-all-target-cancel

 Format

fjpfca-all-target-cancel

 Function

Erases all settings of the target device.

 Example

ok fjpfca-all-target-cancel <RETURN>
delete all bind registration ? [ y(Y),n(N) ] y

 Note

This command is operable only if the boot function is enabled.

Appendix B Checking the Fibre Channel
Card Boot Code Version
Number

This appendix explains how to confirm the boot code (firmware) version number of the Fibre Channel
card.

There are two ways to perform this confirmation: checking on the OS and checking on the OBP.

B.1 Checking on the OS


Perform this procedure with the OS already booted and with the FUJITSU PCI Fibre Channel 4.0 package
or later installed.

View /var/adm/messages to check the boot code version number from the following display:

scsi: [ID 243001 kern.info] /pci@1,700000/fibre-channel@0 (fjpfca0):


INFO : FUJITSU PCI Fibre Channel FCode Version : v12l30, boot_function=ENABLE;
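On a running system, the version string can be pulled out of the log with a grep/sed one-liner. The sketch below works on the sample message text shown above rather than reading the live /var/adm/messages file:

```shell
# Sample line as recorded in /var/adm/messages (from the display above).
msg='INFO : FUJITSU PCI Fibre Channel FCode Version : v12l30, boot_function=ENABLE;'
# Extract the token between "FCode Version : " and the following comma.
echo "$msg" | sed -n 's/.*FCode Version : \([^,]*\),.*/\1/p'   # prints v12l30
```

On a live system, the same sed expression can be applied to /var/adm/messages directly.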

B.2 Checking on the OBP


Perform this procedure on the OBP. Before starting the procedure, set the server to maintenance mode
and restart it. If you use SPARC Enterprise T1000/T2000/T5120/T5140/T5220/T5240/T5440,
execute the following commands:

ok setenv auto-boot? false <RETURN>


ok reset-all <RETURN>

If you use SPARC Enterprise M3000/M4000/M5000/M8000/M9000, set the server mode switch to
service mode and execute the following command:

ok reset-all <RETURN>

Move to the node of the Fibre Channel card whose boot code version number is to be confirmed, and
execute the .properties command.

Read the value of fjpfca_fcode_vl.

ok cd /pci@1,700000/fibre-channel@0 <RETURN>
ok .properties <RETURN>
status okay
fru PCI slot(PCI#08)
component-name PCI#08
assigned-addresses 81001814 00000000 00000700 00000000 00000100

(Omission)

fjpfca_fcode_vl v12l30

(Omission)

ok

Appendix C Recording SAN Boot Setting
Information

Record the values listed in the "SAN Boot Setting Information" table that were set at installation, and
save them in a separate document. Whenever a Fibre Channel card in use is replaced, reconfigure the
same set of "SAN Boot Setting Information" values on the replacement card.

The reconfiguration sequence takes different courses depending on which of the following cases applies:
1. Active replacement of the card while the OS is running on an alternate path.
2. Cold replacement of the card when the OS is available on an alternate path.
3. Cold replacement of the card when the OS cannot boot on any path.

When a card is replaced in Case 1 or 2 above, the SAN Boot setting information can be reconfigured
on the new card from the Fibre Channel driver environment definition files using the fc_hbaprp
command. This "SAN Boot Setting Information" is required when replacing a card in Case 3. In this
case, execute the commands available on the OBP to reconfigure the SAN Boot setting information.

Keep a record of this setting information for all Fibre Channel cards used to boot the OS.

"SAN Boot Setting Information"

No. 1  Device path name: Name of a physical device path on the OS (/pci@XXXX/yyyy@z)
No. 2  Slot position: Mounted slot position
No. 3  Boot function (boot function): Specify whether to enable or disable the boot function.
No. 4  Topology information (topology): Specify a topology (nport to connect to the switch, al to
       make a direct connection).
No. 5  Link speed (link speed): Use to fix the speed of the transmission line (1G/2G/4G/auto).
No. 6  Boot delay function (boot wait time): Specify whether to enable or disable the boot delay
       function and a boot wait time.
No. 7  Target bind information (Target_ID/Target WWN|Target DID): Specify disk array device bind
       information (for all registered entries). For each of entries 1 to 10, record "target_id :"
       and "wwn | did :".

Record the actual value of each item next to its entry.

Appendix D Making Fixes to Setting Files
after a Boot Failure

Errors in any SAN Boot setting file (such as sd.conf, mplb.conf or /etc/system) can prevent the OS
from starting up. When this occurs, start the OS in single-user mode and mount the system disk on the
disk array device to fix the setting file as instructed in this appendix.

The way the OS is started depends on which of the following ways has been used to install the OS on the
system disk on a disk array device.

 System disk installation as per Section 4.1.1, "Creating a boot disk using a network install server"

 System disk installation as per Section 4.1.2, "Creating a boot disk by copying an existing boot disk
residing on an internal disk"

D.1 If the OS Has Been Installed As Per Section


4.1.1, "Creating a boot disk using a network
install server"
1. Initialize the OBP environment.

ok reset-all <RETURN>

2. Boot from the network. Start the OS in single-user mode.

ok boot net -s <RETURN>

3. Mount the system disk on a disk array device.


For the disk to be mounted, specify the device that has been set as an OS installation device in
Section "4.1.1.4, Configuring Custom JumpStart"
 Environment installed by UFS file system

# mount -F ufs /dev/dsk/c7t16d0s0 /mnt <RETURN>

 Environment installed by ZFS file system


An error message might be displayed when importing the pool; ignore it and proceed with the
following steps.

# zpool import <RETURN>


pool: raid_pool
id: 9153334525621735888
state: ONLINE

action: The pool can be imported using its name or numeric identifier.
config:

raid_pool ONLINE
c7t16d0s0 ONLINE
# zpool import 9153334525621735888 <RETURN> Specify the ID confirmed with zpool
import.
# zfs list <RETURN>
NAME USED AVAIL REFER MOUNTPOINT
raid_pool 5.98G 92.5G 93K /raid_pool
raid_pool/ROOT 4.98G 92.5G 18K legacy
raid_pool/ROOT/s10_1008 4.98G 92.5G 4.98G /
raid_pool/dump 512M 92.5G 512M -
raid_pool/export 38K 92.5G 20K /export
raid_pool/export/home 18K 92.5G 18K /export/home
raid_pool/swap 512M 92.9G 88.0M -
# zfs set mountpoint=legacy raid_pool/ROOT/s10_1008 <RETURN>
# mount -F zfs raid_pool/ROOT/s10_1008 /mnt <RETURN>

4. Fix the setting files in the /mnt directory.


Example) If /etc/vfstab is the cause of the failure and requires changes, fix /mnt/etc/vfstab.

5. When the fix is completed, unmount the system disk and return to the OBP environment as
instructed below.
 UFS file system

# cd / <RETURN>
# umount /mnt <RETURN>

 ZFS file system


An error message is displayed when the mountpoint property is set to the root path; ignore it and
proceed with the following steps.

# cd / <RETURN>
# umount /mnt <RETURN>
# zfs set mountpoint=/ raid_pool/ROOT/s10_1008 <RETURN>

6. Afterwards, retry the boot sequence.

D.2 If the OS Has Been Installed As Per Section
4.1.2, "Creating a boot disk by copying an
existing boot disk residing on an internal
disk"
1. Initialize the OBP environment.

ok reset-all <RETURN>

2. Boot from the internal disk. Start the OS in single-user mode.

ok boot disk0 -s <RETURN>


^^^^^^

^^^^^^ denotes an internal disk specification.

3. Mount the system disk on a disk array device.


For the disk to be mounted, specify the disk that has been verified in Section "4.1.2.1 Getting
ready to copy the boot disk to a disk array device."
 Environment installed by UFS file system

# mount -F ufs /dev/dsk/c7t16d0s0 /mnt <RETURN>

 Environment installed by ZFS file system


An error message might be displayed when importing the pool; ignore it and proceed with the
following steps.

# zpool import <RETURN>


pool: raid_pool
id: 9153334525621735888
state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

raid_pool ONLINE
c7t16d0s0 ONLINE
# zpool import 9153334525621735888 <RETURN> Specify the ID confirmed with zpool
import.
# zfs list <RETURN>
NAME USED AVAIL REFER MOUNTPOINT
raid_pool 5.98G 92.5G 93K /raid_pool
raid_pool/ROOT 4.98G 92.5G 18K legacy

raid_pool/ROOT/s10_1008 4.98G 92.5G 4.98G /
raid_pool/dump 512M 92.5G 512M -
raid_pool/export 38K 92.5G 20K /export
raid_pool/export/home 18K 92.5G 18K /export/home
raid_pool/swap 512M 92.9G 88.0M -
# zfs set mountpoint=legacy raid_pool/ROOT/s10_1008 <RETURN>
# mount -F zfs raid_pool/ROOT/s10_1008 /mnt <RETURN>

4. Fix the setting files in the /mnt directory.


Example) If /etc/vfstab is the cause of the failure and requires changes, fix /mnt/etc/vfstab.

5. When the fix is completed, unmount the system disk and return to the OBP environment as
instructed below.
 UFS file system

# cd / <RETURN>
# umount /mnt <RETURN>

 ZFS file system


An error message is displayed when the mountpoint property is set to the root path; ignore it and
proceed with the following steps.

# cd / <RETURN>
# umount /mnt <RETURN>
# zfs set mountpoint=/ raid_pool/ROOT/s10_1008 <RETURN>

6. Afterwards, retry the boot sequence.

Appendix E Fibre Channel Driver/Boot
Code Auto-Target Binding
Functions

This appendix describes the Fibre Channel driver/boot code auto-target binding functions.

E.1 Fibre Channel Driver Auto-Target Binding


Function
The Fibre Channel driver auto-target binding function enables the Fibre Channel driver to automatically
connect a server to target devices without requiring their definitions in fcp-bind-target in fjpfca.conf.

The Fibre Channel driver binds all target devices attached to the fabric to the lowest available
Target_IDs in ascending order of their WWNs.

1. Fibre Channel driver auto-target bind example

[Figure: a server's FC card connects through an FC switch to ETERNUS #1 (CM0 bound to
Target_ID:0, CM1 to Target_ID:2) and ETERNUS #2 (CM0 bound to Target_ID:1, CM1 to
Target_ID:3).]

ETERNUS #1 CM0 (WWN): 0x210000e0004101d9
ETERNUS #1 CM1 (WWN): 0x230000e0004101d9
ETERNUS #2 CM0 (WWN): 0x210000e0004101da
ETERNUS #2 CM1 (WWN): 0x230000e0004101da

In the example shown above, the Fibre Channel driver automatically connects target disk array devices in
ascending order of WWNs as follows:

 Connect ETERNUS #1 CM0(WWN=0x210000e0004101d9) to Target_ID:0

 Connect ETERNUS #1 CM1(WWN=0x230000e0004101d9) to Target_ID:2

 Connect ETERNUS #2 CM0(WWN=0x210000e0004101da) to Target_ID:1

 Connect ETERNUS #2 CM1(WWN=0x230000e0004101da) to Target_ID:3
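Because the WWNs are fixed-width hexadecimal strings, the assignment rule above (ascending WWN order, lowest free Target_ID first) can be mimicked with an ordinary lexical sort. The sketch below simply reproduces the example mapping; it is an illustration, not a Fujitsu tool:

```shell
# Sort the four CM WWNs from the example and assign Target_IDs 0-3
# in ascending WWN order, as the driver does.
printf '%s\n' 210000e0004101d9 230000e0004101d9 \
              210000e0004101da 230000e0004101da \
  | sort | awk '{ printf "Target_ID:%d WWN:0x%s\n", NR-1, $0 }'
# prints:
#   Target_ID:0 WWN:0x210000e0004101d9
#   Target_ID:1 WWN:0x210000e0004101da
#   Target_ID:2 WWN:0x230000e0004101d9
#   Target_ID:3 WWN:0x230000e0004101da
```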

Prerequisites for implementing this automatic connectivity are:


1. "fcp-auto-binding function=1;" is entered in fjpfca.conf.
2. SAN Boot environment built on a fabric connection

The Fibre Channel driver auto-target binding function is designed with primary emphasis on building a
SAN Boot environment effortlessly, and its use is recommended only for the purpose of building an
environment.

If the Fibre Channel driver auto-target binding function is used in normal operations, it might fail to
make the intended target device connections if, for example, a target device has failed. The use of
fcp-bind-target to make target device connections is recommended for normal operations.

For information about making target device connections using fcp-bind-target, refer to the "FUJITSU
PCI Fibre Channel Guide."

E.2 Fibre Channel Boot Code Auto-Target
Binding Function
This section describes the Fibre Channel boot code auto-target binding function.

The Fibre Channel boot code auto-target binding function lets the Fibre Channel boot code detect
target devices automatically without requiring their definitions in fjpfca-bind-target; only the target
device with the lowest assigned Port_ID is connected to the server for SAN Boot.

1. Fibre Channel boot code auto-target bind example

[Figure: the server's FC card (Port_ID:10000) connects through an FC switch to the CM0/CM1
ports of ETERNUS #1 and ETERNUS #2 (Port_IDs 10100-10400); ETERNUS #1 CM0, which has
the lowest Port_ID (10100), is connected as the target device to implement SAN Boot.]

In the example shown above, target devices located at ETERNUS#1 CM0 with the lowest assigned
Port_ID are connected to implement SAN Boot.

Prerequisites to implementing this automatic connectivity are:


1. No target definitions are given in fjpfca-bind-target
2. SAN Boot environment built on a fabric connection

Because the Fibre Channel boot code auto-target binding function focuses on effortless implementation of
SAN Boot, its use is recommended only during environment construction and in environments in which
FC switch-based zoning is implemented. If the Fibre Channel boot code auto-target binding function is
used elsewhere, it might fail to make the intended target device connections if, for example, a target
device has failed. The use of fjpfca-bind-target to manually configure target device connections is
recommended in other environments. For information about making target device connections using
fjpfca-bind-target, see Appendix A, "Boot Device Setup Commands."

If SAN Boot is implemented with the boot code auto-target binding function, the target device
information is imparted to the Fibre Channel driver, enabling it to make target device connections
automatically (fcode-auto-bind function). For more information about the fcode-auto-bind function, refer
to "FUJITSU PCI Fibre Channel Guide."

Appendix F SAN Boot release procedure

To release the multipath definition of the boot disk, follow the procedures below. If the definition is
released by any method other than these procedures, or if a mistake is made in a procedure, it may
become impossible to boot.

If the boot disk is mirrored with PRIMECLUSTER GDS, release the mirroring first and then execute
the following procedures.

F.1 ETERNUS Multipath Driver


This section describes the release procedure for the ETERNUS multipath driver.

All multipaths are released.

1. In a SAN Boot environment using the UFS file system, edit the /etc/vfstab file to rewrite the
mount device entries.
In a SAN Boot environment using the ZFS file system, proceed to Step 2.

/dev/FJSVmplb/dsk/mplb0s0 /dev/FJSVmplb/rdsk/mplb0s0
↓ ↓
/dev/dsk/c2t16d0s0 /dev/rdsk/c2t16d0s0

The new entry is either of the paths constituting the multipath, as displayed by the iompadm
command.

# /usr/opt/FJSViomp/bin/iompadm info /dev/FJSVmplb/fiomp/adm0


IOMP: /dev/FJSVmplb/fiomp/adm0
Element:
/dev/rdsk/c2t16d0s2 online active block "good status with active
[E30004641- 130011-CM01-CA01-PORT36] (mplbt0)"
/dev/rdsk/c3t16d0s2 online standby block "good status with standby
[E30004641- 130011-CM00-CA00-PORT32] (mplbt32)"
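As a sketch of the Step 1 edit, sed can rewrite the mplb device entries in a vfstab-style line to one of the underlying single paths (c2t16d0 is the example path from this guide, and the mount point and option fields here are illustrative, not fixed values):

```shell
# Rewrite the multipath device entries to the single-path device c2t16d0,
# as in the before/after example above.
echo '/dev/FJSVmplb/dsk/mplb0s0 /dev/FJSVmplb/rdsk/mplb0s0 / ufs 1 no -' \
  | sed -e 's|/dev/FJSVmplb/dsk/mplb0|/dev/dsk/c2t16d0|' \
        -e 's|/dev/FJSVmplb/rdsk/mplb0|/dev/rdsk/c2t16d0|'
# prints: /dev/dsk/c2t16d0s0 /dev/rdsk/c2t16d0s0 / ufs 1 no -
```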

2. Release the multipath.

# mplbconfig -r
Cannot unload module: mplb
Will be unloaded upon reboot.
Forcing update of mplb.conf.

3. Edit the /kernel/drv/mplbt.conf file to delete all the definitions.
Example) Delete the following line.

name="mplbt" parent="fjpfca" target=16 lun=0;


4. Edit the /kernel/drv/mplbh.conf file to restore its default settings.

Example) Delete all of the following definitions from the mplbh.conf file.

mplbh-path-0="pci10cf,1178-0-10" mplbh-path-1="pci10cf,1178-1-10"
mplbh-disk-name="E30004641- 130011-0010";
mplbh-detect-disk-num=1;
mplbh-detect-disk-0="E30004641- 130011-0010";
mplbh-used-path-num=2;
mplbh-used-path-0="pci10cf,1178-0-10";
mplbh-used-path-1="pci10cf,1178-1-10";

If there is no ";" at the end of the following line, add one.

name="mplbh" parent="mplbx" instance=X;

5. Edit the /kernel/drv/sd.conf file to delete the definition of mplb.


Delete the following lines.

# Start eternusmpd configuration -- do NOT alter or delete this line


name="sd" parent="mplbh" target=0 lun=0;

# End eternusmpd configuration -- do NOT alter or delete this line

6. Delete the following definitions added to the /etc/system file.


● SAN Boot environment by the UFS file system
Delete the following lines.

rootdev: /pseudo/mplb@0:a
forceload: drv/mplbt
forceload: drv/mplb
forceload: drv/sd

● SAN Boot environment by the ZFS file system


Delete the following lines.

forceload: drv/mplbt
forceload: drv/mplb
forceload: drv/sd

7. Stop the service.

# svcadm disable -t svc:/system/fjsvmplb:default

8. Recreate the special files of sd. Ignore the error message.

# update_drv -f sd
Cannot unload module: sd
Will be unloaded upon reboot.
Forcing update of sd.conf.

9. Restart the server.

# touch /reconfigure
# reboot

10. If the dump device was changed, restore it. (Only in a SAN Boot environment using the UFS
file system)

# dumpadm -d /dev/dsk/c2t16d0s3
