
Lenovo® X6 Systems Solution™ for SAP HANA®

Implementation Guide for System x® X6 Servers

Lenovo Development for SAP Solutions


In cooperation with: SAP AG
Created on 17th April 2015 09:42 – Version 1.8.80-12
© Copyright Lenovo, 2015
Technical Documentation

X6 Systems Solution for SAP HANA Platform Edition


Dear Reader,
Please note that this guide covers the System X6 based servers for the SAP HANA Platform Edition
(Type 3837/6241, Models AC3/AC4/Hxx), which are based on the Intel® Xeon® Ivy Bridge EX family
of processors.
The System eX5 based servers for the SAP HANA Platform Edition (models 7148-H** and 7143-H**),
which are based on the Intel Xeon Westmere EX family of processors, are not discussed in this manual.
The Lenovo Systems X6 solution for SAP HANA Platform Edition is based on System X6 Architecture
building blocks that provide a highly scalable infrastructure for the SAP HANA Platform Edition ap-
pliance software. The System x3850 X6 and x3950 X6 servers, together with software such as the IBM
General Parallel File System™ (GPFS), are used to run the SAP HANA Platform Edition appliance software.
Lenovo has created orderable models upon which you may install and run the SAP HANA Platform
Edition appliance software according to the sizing charts coordinated with SAP AG. For each workload
type, special ordering options for the System x3850 X6 and System x3950 X6 Type 3837/6241 Models
AC3/AC4/Hxx have been approved by SAP and Lenovo to accommodate the requirements for the SAP
HANA Platform Edition appliance software.
The Lenovo – SAP HANA Development Team


Copyrights and Trademarks


© Copyright 2010-2015 Lenovo.
Lenovo may not offer the products, services, or features discussed in this document in all countries.
Consult your local Lenovo representative for information on the products and services currently available
in your area. Any reference to a Lenovo product, program, or service is not intended to state or imply
that only that Lenovo product, program, or service may be used. Any functionally equivalent product,
program, or service that does not infringe any Lenovo intellectual property right may be used instead.
However, it is the user’s responsibility to evaluate and verify the operation of any other product, program,
or service.
Lenovo may have patents or pending patent applications covering subject matter described in this doc-
ument. The furnishing of this document does not give you any license to these patents. You can send
license inquiries, in writing, to:
Lenovo (United States), Inc.
1009 Think Place - Building One
Morrisville, NC 27560
U.S.A.
Attention: Lenovo Director of Licensing

LENOVO PROVIDES THIS PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND, EI-
THER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WAR-
RANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR
PURPOSE.
Neither this documentation nor any part of it may be copied or reproduced in any form or by any means
or translated into another language, without the prior consent of Lenovo.
This document could include technical inaccuracies or errors. The information contained in this doc-
ument is subject to change without any notice. Lenovo reserves the right to make any such changes
without obligation to notify any person of such revision or changes. Lenovo makes no commitment to
keep the information contained herein up to date.
Any references in this publication to non-Lenovo Web sites are provided for convenience only and do not
in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part
of the materials for this Lenovo product, and use of those Web sites is at your own risk.
Information concerning non-Lenovo products was obtained from a supplier of these products, published
announcement material, or other publicly available sources and does not constitute an endorsement
of such products by Lenovo. Sources for non-Lenovo list prices and performance numbers are taken
from publicly available information, including vendor announcements and vendor worldwide home pages.
Lenovo has not tested these products and cannot confirm the accuracy of performance, capability, or any
other claims related to non-Lenovo products. Questions on the capability of non-Lenovo products should
be addressed to the supplier of those products.

Edition Notice: 17th April 2015


This is the twelfth published edition of this document. The online copy is the master.


Lenovo, the Lenovo logo, System x and For Those Who Do are trademarks or registered trademarks
of Lenovo in the United States, other countries, or both. Other product and service names might be
trademarks of Lenovo or other companies.
A current list of Lenovo trademarks is available on the web at:
http://www.lenovo.com/legal/copytrade.html.
IBM, the IBM logo, and ibm.com are trademarks of International Business Machines Corp., registered in
the United States and/or other countries.
Adobe and PostScript are either registered trademarks or trademarks of Adobe Systems Incorporated in
the United States and/or other countries.
Fusion-io is a registered trademark of Fusion-io, in the United States.
Intel, Intel Xeon, Itanium, and Pentium are trademarks or registered trademarks of Intel Corporation or
its subsidiaries in the United States and other countries.
Java and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or
its affiliates.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
SAP HANA is a trademark of SAP Corporation in the United States, other countries, or both.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other company, product or service names may be trademarks or service marks of others.


Contents
1 Abstract 1
1.1 Preface & Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Feedback . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.4 Disclaimer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.5 Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

2 Introduction 6
2.1 Purpose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.2 Applicability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.2.1 SAP HANA Platform Edition Versions . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.3 Exclusions and Exceptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.4 Conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.4.1 Icons Used . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.4.2 Code Snippets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

3 Solution Overview 7
3.1 The SAP HANA Appliance Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
3.2 Definition of SAP HANA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

4 Hardware Configurations 9
4.1 SAP HANA Platform Edition T-Shirt Sizes . . . . . . . . . . . . . . . . . . . . . . . . . . 10
4.2 Single Node versus Clustered Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . 10
4.2.1 Network Switch Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
4.3 SAP HANA Optimized Hardware Configurations . . . . . . . . . . . . . . . . . . . . . . . 13
4.3.1 System x3850 X6 Single Node Configurations . . . . . . . . . . . . . . . . . . . . . 13
4.3.2 System x3950 X6 Single Node Configurations . . . . . . . . . . . . . . . . . . . . . 13
4.3.3 System x3850 X6 Single Node Four Socket Configurations with Storage Expansion 14
4.3.4 System x3950 X6 SAP ERP on SAP HANA Single Node Configurations . . . . . . 14
4.3.5 System x3850 X6 Cluster Node Configurations with Storage Expansion . . . . . . 15
4.3.6 System x3950 X6 Cluster Node Configurations . . . . . . . . . . . . . . . . . . . . 15
4.4 Card Placement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
4.4.1 Network Interface Cards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
4.4.2 Slots for additional Network Interface Cards . . . . . . . . . . . . . . . . . . . . . . 16
4.4.3 RAID Adapter Cards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

5 Networking 21
5.1 Networking Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
5.2 Jumbo Frames . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
5.3 Network Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
5.4 Network Switch Configuration For Clustered Installations . . . . . . . . . . . . . . . . . . 23
5.5 Customer Site Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
5.6 Network Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
5.6.1 Numbering conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
5.6.2 Internal Networks – Option 1 G8264 RackSwitch 10Gbit . . . . . . . . . . . . . . . 25
5.6.3 Internal Networks – Option 2 G8124 RackSwitch 10Gbit . . . . . . . . . . . . . . . 26
5.6.4 Administrative, SAP-Access and Backup Networks – Option G8052 RackSwitch
1Gbit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
5.6.5 Network Configurations in a Clustered Environment . . . . . . . . . . . . . . . . . 28
5.7 Setting up the Switches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
5.7.1 Basic Switch Configuration Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
5.7.2 Advanced Setup of the Switches . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30


5.7.3 Disable Spanning Tree Protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31


5.7.4 Disable Default IP Address . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
5.7.5 Enable L4Port Hash . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
5.7.6 Disable Routing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
5.7.7 Add Networking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
5.7.8 VLAN configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
5.7.9 Save changes to switch FLASH memory . . . . . . . . . . . . . . . . . . . . . . . . 34
5.8 Inter-Site Portchannel Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
5.8.1 Static Trunk over one Inter-Site Link . . . . . . . . . . . . . . . . . . . . . . . . . . 34
5.8.2 Portchannel over two Inter-Site Links . . . . . . . . . . . . . . . . . . . . . . . . . 35
5.8.3 Portchannel over four Inter-Site Links . . . . . . . . . . . . . . . . . . . . . . . . . 36
5.8.4 Save and Restore Switch Configuration . . . . . . . . . . . . . . . . . . . . . . . . 36
5.9 Automated Deployment of Switch Configurations . . . . . . . . . . . . . . . . . . . . . . . 37
5.9.1 Script Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
5.9.2 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
5.9.3 Input Values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
5.10 Known Issues and Bugs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38

6 Guided Install of the Lenovo Solution 39


6.1 Preparation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
6.1.1 Firewall Preparations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
6.1.2 Lenovo Systems solution for SAP HANA Additional Software Stack . . . . . . . . 40
6.1.3 Software, Firmware and Drivers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
6.1.4 Card Placement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
6.1.5 Hardware UEFI Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
6.2 Phase 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
6.2.1 Storage Configuration – RAID Setup . . . . . . . . . . . . . . . . . . . . . . . . . . 46
6.2.2 Mounting Installation Images using the IMM Virtual Media Center . . . . . . . . . 49
6.2.3 Starting the Automatic Installation Process . . . . . . . . . . . . . . . . . . . . . . 50
6.3 Phase 2 – SLES for SAP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
6.4 Phase 2 – RHEL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
6.5 Interim Check . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
6.5.1 Model Type 6241 Special Instructions . . . . . . . . . . . . . . . . . . . . . . . . . 58
6.5.2 Installation of Compatibility Pack . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
6.5.3 Installation without Network Connectivity . . . . . . . . . . . . . . . . . . . . . . . 59
6.6 Phase 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
6.6.1 Verification of ServeRAID M5120 Controller Firmware and Configuration . . . . . 59
6.6.2 HANA Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
6.6.3 Single Node with HA Installation with Side-car Quorum Solution . . . . . . . . . . 63

7 After Installation 64
7.1 Actions to insure the correctness of the installation . . . . . . . . . . . . . . . . . . . . . . 64
7.2 HANA Network Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65

8 Disaster Recovery 66
8.1 Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
8.1.1 Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
8.1.2 Architectural overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
8.1.3 Three site/Tiebreaker node architecture . . . . . . . . . . . . . . . . . . . . . . . . 69
8.2 Mixing eX5/X6 Server in a DR Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
8.3 Hardware Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
8.3.1 Site A and B . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
8.3.2 Tiebreaker Site C (optional) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
8.3.3 Acquire TCP/IP addresses and host names . . . . . . . . . . . . . . . . . . . . . . 70


8.3.4 Network switch setup (GPFS and SAP HANA network) . . . . . . . . . . . . . . . 70


8.3.5 Link between site A and B . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
8.3.6 Network integration into customer infrastructure . . . . . . . . . . . . . . . . . . . 71
8.3.7 Setup network connection to tiebreaker node at site C (optional) . . . . . . . . . . 71
8.4 Software Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
8.4.1 GPFS configuration prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
8.4.2 GPFS Server configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
8.4.3 GPFS Disk configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
8.4.4 Filesystem Creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
8.4.5 SAP HANA appliance installation . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
8.4.6 Tiebreaker node setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
8.4.7 Verify Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
8.5 Extending a DR-Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
8.6 Mixing eX5/X6 Server in a DR Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
8.6.1 Hardware Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
8.6.2 GPFS Part 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
8.6.3 HANA Backup Node Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
8.6.4 GPFS Part 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
8.6.5 HANA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
8.7 Using Non Productive Instances on Inactive DR Site . . . . . . . . . . . . . . . . . . . . . 85
8.7.1 Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
8.7.2 Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86

9 Mixed eX5/X6 Environments 89


9.1 Mixed eX5/X6 HA Clusters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
9.1.1 Definition & Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
9.1.2 Prerequisites & Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
9.1.3 New Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
9.1.4 Existing Cluster Extension/Node Replacement . . . . . . . . . . . . . . . . . . . . 92
9.1.5 Deviating Operation Instructions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
9.2 Mixed eX5/X6 DR Clusters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
9.2.1 Definition & Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
9.2.2 Prerequisites & Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
9.2.3 Existing Cluster Extension/Node Replacement . . . . . . . . . . . . . . . . . . . . 98
9.2.4 Deviating Operation Instructions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100

10 Special Single Node Installation Scenarios 101


10.1 Single Node with HA Installation with Side-car Quorum Solution . . . . . . . . . . . . . . 101
10.1.1 Installation of SAP HANA appliance single node with HA . . . . . . . . . . . . . . 102
10.1.2 Prepare quorum node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
10.1.3 Quorum Node Network Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
10.1.4 Adapt hosts file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
10.1.5 SSH configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
10.1.6 Quorum Node IBM GPFS setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
10.1.7 Quorum Node IBM GPFS installation . . . . . . . . . . . . . . . . . . . . . . . . . 106
10.1.8 Add quorum node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
10.1.9 Create descriptor disk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
10.1.10 Add disk to file system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
10.1.11 Verify Cluster Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
10.1.12 Installation of SAP HANA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
10.2 Single Node with stretched HA Installation . . . . . . . . . . . . . . . . . . . . . . . . . . 109
10.2.1 Installation and configuration of SLES and IBM GPFS . . . . . . . . . . . . . . . 110
10.2.2 Installation of SAP HANA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
10.3 Single Node with DR Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111


10.3.1 Installation and configuration of SLES and IBM GPFS . . . . . . . . . . . . . . . 112


10.3.2 Optional: Expansion Storage Setup for Non-Production Instance . . . . . . . . . . 113
10.4 Single Node with HA and DR Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
10.4.1 Installation and configuration of SLES and IBM GPFS . . . . . . . . . . . . . . . 114
10.4.2 Optional: Expansion Storage Setup for Non-Production Instance . . . . . . . . . . 116
10.5 Single Node DR Installation with SAP HANA System Replication . . . . . . . . . . . . . 117
10.5.1 Installation and configuration of SLES and IBM GPFS . . . . . . . . . . . . . . . 118
10.5.2 Installation of SAP HANA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
10.5.3 Optional: Expansion Storage Setup for Non-Production Instance . . . . . . . . . . 119
10.6 Single Node with HA using IBM GPFS Storage Replication and DR using System Replication 120
10.6.1 Installation and configuration of SLES and IBM GPFS . . . . . . . . . . . . . . . 122
10.6.2 Installation of SAP HANA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
10.7 Expansion Storage Setup for Non-productive SAP HANA Instance . . . . . . . . . . . . . 124

11 Virtualization 126
11.1 Getting Started . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
11.1.1 Memory Overhead . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
11.1.2 Configure UEFI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
11.1.3 Start Embedded VMware ESXi Hypervisor . . . . . . . . . . . . . . . . . . . . . . 127
11.1.4 Enable SSH on VMware ESXi Hypervisor . . . . . . . . . . . . . . . . . . . . . . . 127
11.1.5 StorCLI on VMware ESXi 5.5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
11.1.6 Setting up ESXi Storage in CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
11.1.7 Setting Storage for SLES and HANA ISO . . . . . . . . . . . . . . . . . . . . . . . 129
11.1.8 Restart VMware ESXi Hypervisor . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
11.1.9 Installing VMware vSphere Client . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
11.2 Configuring and Starting VMs with vSphere Client . . . . . . . . . . . . . . . . . . . . . . 131
11.3 Operating System (SLES for SAP 11 SP3) Installation . . . . . . . . . . . . . . . . . . . . 143
11.4 Operating System (Red Hat Enterprise Server 6.5) Installation . . . . . . . . . . . . . . . 143
11.4.1 Changes after Red Hat Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . 144

12 Upgrading the Hardware Configuration 145


12.1 Power Policy Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
12.2 Reboot Behavior . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
12.3 Adding storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
12.3.1 Adding storage via EXP2524 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
12.3.2 Adding storage on second internal M5210 controller . . . . . . . . . . . . . . . . . 147
12.3.3 Configure RAID array(s) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
12.3.4 Deciding for a CacheCade RAID Level . . . . . . . . . . . . . . . . . . . . . . . . . 149
12.3.5 Configuring RAID array when CacheCade is not yet configured . . . . . . . . . . . 149
12.3.6 Configuring RAID array with existing CacheCade . . . . . . . . . . . . . . . . . . 149
12.3.7 Changing the CacheCade RAID Level . . . . . . . . . . . . . . . . . . . . . . . . . 149
12.3.8 Configuring GPFS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
12.4 Adding memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
12.5 Adding CPU Books . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152

13 Software Updates 153


13.1 Warning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
13.2 Update Variants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
13.2.1 General per node update procedure . . . . . . . . . . . . . . . . . . . . . . . . . . 153
13.2.2 Disruptive Cluster Update . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
13.2.3 Full Cluster Rolling Update . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
13.3 Linux Kernel Update . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
13.3.1 SLES Kernel Update Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
13.3.2 RHEL Kernel Update Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156


13.3.3 Kernel Update Procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156


13.4 Updating GPFS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
13.4.1 GPFS Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
13.4.2 enableLinuxReplicatedAIO=yes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
13.4.3 DR-clusters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
13.4.4 Disruptive GPFS Cluster Update . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
13.5 Upgrading from GPFS 3.5 to 4.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
13.5.1 Disruptive Upgrade from GPFS 3.5 to 4.1 . . . . . . . . . . . . . . . . . . . . . . . 162
13.5.2 Rolling upgrade per node from GPFS 3.5 to 4.1 . . . . . . . . . . . . . . . . . . . . 163
13.6 Update Mellanox Network Cards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
13.7 SAP HANA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166

14 System Check and Support 167


14.1 System Login . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
14.2 Basic System Check . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
14.3 System Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
14.4 Additional Tools for System Checks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
14.4.1 Lenovo Advanced Settings Utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
14.4.2 ServeRAID MegaCli Utility for Storage Management . . . . . . . . . . . . . . . . . 171
14.4.3 ServeRAID StorCLI Utility for Storage Management . . . . . . . . . . . . . . . . . 171
14.4.4 IBM SSD Wear Gauge CLI utility . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
14.5 Getting Support (IBM PMR, SAP OSS) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172

15 Backup and Restore of the Primary Partition 173


15.1 Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
15.1.1 Boot Loader . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
15.1.2 Drive Partitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
15.2 Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
15.2.1 Correcting the backup fstab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
15.2.2 Add boot loader entry for backup partition . . . . . . . . . . . . . . . . . . . . . . 177
15.3 Backup of the Linux operating system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
15.4 Restoring the operating system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179

16 SAP HANA Backup and Recovery 181


16.1 Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
16.2 Backup of SAP HANA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
16.3 Restore of SAP HANA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185

17 Troubleshooting 188
17.1 Adding SAP HANA Worker/Standby Nodes in a Cluster . . . . . . . . . . . . . . . . . . . 188
17.2 GPFS mount points missing after Kernel Update . . . . . . . . . . . . . . . . . . . . . . . 188
17.3 Degrading disk I/O throughput . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
17.4 SAP HANA will not install after a system board exchange . . . . . . . . . . . . . . . . . . 189
17.5 Known Kernel Updates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
17.6 Important SAP Notes (SAP Service Marketplace ID required) . . . . . . . . . . . . . . . . 189
17.6.1 SAP Note 1641148 HANA server hang caused by GPFS issue . . . . . . . . . . . . 189

Appendices 191

A GPFS Disk Descriptor Files 191

B Topology Vectors (GPFS 3.5 failure groups) 191

C Quotas 193
C.1 Quota Calculation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193


C.2 Quota Calculation Script . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193

D Lenovo X6 Server MTM List & Model Overview 195

E Frequently Asked Questions 197


E.1 FAQ #1: SAP HANA Memory Limits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
E.2 FAQ #2: GPFS parameter readReplicaPolicy . . . . . . . . . . . . . . . . . . . . . . . . . 197
E.3 FAQ #3: SAP HANA Memory Limit on XS sized Machines . . . . . . . . . . . . . . . . . 197
E.4 FAQ #4: Overlapping NSDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
E.5 FAQ #5: Missing RPMs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
E.6 FAQ #6: CPU Governor set to ondemand . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
E.7 FAQ #7: No disk space left bug (Bug IV33610) . . . . . . . . . . . . . . . . . . . . . . . . 201
E.8 FAQ #8: Setting C-States . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
E.9 FAQ #9: ServeRAID M5120 RAID Adapter FW Issues . . . . . . . . . . . . . . . . . . . 202
E.9.1 Changing Queue Depth . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
E.9.2 Use recommended Firmware version . . . . . . . . . . . . . . . . . . . . . . . . . . 203
E.10 FAQ #10: GPFS Parameter enableLinuxReplicatedAIO . . . . . . . . . . . . . . . . . . . 204
E.11 FAQ #11: GPFS NSD on Devices with GPT Labels . . . . . . . . . . . . . . . . . . . . . 204
E.12 FAQ #12: GPFS pagepool should be set to 4GB . . . . . . . . . . . . . . . . . . . . . . . 205
E.13 FAQ #13: Limit Page Cache Pool to 4GB (SAP Note #1557506) . . . . . . . . . . . . . 206
E.14 FAQ #14: restripeOnDiskFailure and start-disks-on-startup . . . . . . . . . . . . . . . . . 206

F References 207
F.1 Lenovo References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
F.2 IBM References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
F.3 SAP General Help (SAP Service Marketplace ID required) . . . . . . . . . . . . . . . . . . 207
F.4 SAP Notes (SAP Service Marketplace ID required) . . . . . . . . . . . . . . . . . . . . . . 208
F.5 Novell SUSE Linux Enterprise Server References . . . . . . . . . . . . . . . . . . . . . . . 209

G Changelog 210


List of Figures
1 Current SAP HANA Appliance Scenarios . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2 System x3850 X6 Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
3 System Storage EXP2524 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
4 System x3950 X6 Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
5 SAP HANA Multiple Single Node Example . . . . . . . . . . . . . . . . . . . . . . . . . . 10
6 SAP HANA Clustered Example with Backup . . . . . . . . . . . . . . . . . . . . . . . . . 11
7 Workload Optimized System x3850 X6 2 Socket Rear View . . . . . . . . . . . . . . . . . 17
8 Workload Optimized System x3850 X6 4 Socket Rear View . . . . . . . . . . . . . . . . . 18
9 Workload Optimized System Storage Book. This contains slots 11, 12 and slots 43, 44 on
x3950 X6 in an additional Storage Book . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
10 Workload Optimized System x3950 X6 8 Socket Rear View . . . . . . . . . . . . . . . . . 20
11 G8264 RackSwitch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
12 G8264 RackSwitch schema . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
13 IBM G8124 RackSwitch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
14 IBM G8124 RackSwitch schema . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
15 G8052 RackSwitch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
16 G8052 RackSwitch schema . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
17 Cluster Node Network Diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
18 Cluster Switch Networking Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
19 License Agreement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
20 Hostname and Domain Name . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
21 Network Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
22 Cluster Node NIC Configuration dialog bond0 . . . . . . . . . . . . . . . . . . . . . . . . . 53
23 Clock and Time Zone . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
24 Advanced NTP Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
25 Password for the System Administrator . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
26 Installation Mode Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
27 HANA Password Input Dialog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
28 GPFS IP Configuration Dialog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
29 DR Architectural Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
30 DR Data Distribution in a Four Node Cluster . . . . . . . . . . . . . . . . . . . . . . . . . 67
31 Logical DR Network Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
32 DR Networking View (with no client uplinks shown) . . . . . . . . . . . . . . . . . . . . . 68
33 SAP HANA DR using storage expansion - architectural overview . . . . . . . . . . . . . . 86
34 Single Node with High Availability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
35 File System Layout - Single Node HA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
36 Network Switch Setup for Single Node with HA . . . . . . . . . . . . . . . . . . . . . . . . 105
37 Single Node with stretched HA - Two Site Approach . . . . . . . . . . . . . . . . . . . . . 109
38 Single Node with stretched HA - Three Site Approach . . . . . . . . . . . . . . . . . . . . 110
39 File System Layout - Single Node stretched HA . . . . . . . . . . . . . . . . . . . . . . . . 111
40 Single Node with Disaster Recovery - Two Site Approach . . . . . . . . . . . . . . . . . . 112
41 Single Node with Disaster Recovery - Three Site Approach . . . . . . . . . . . . . . . . . 112
42 File System Layout - Single Node with DR with Storage Expansion . . . . . . . . . . . . . 113
43 Single Node with HADR using IBM GPFS Storage Replication . . . . . . . . . . . . . . . 114
44 File System Layout - Single Node HADR . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
45 File System Layout - Single Node HADR with Storage Expansion . . . . . . . . . . . . . . 117
46 Single Node DR with SAP System Replication . . . . . . . . . . . . . . . . . . . . . . . . 118
47 Single Node DR with SAP System Replication . . . . . . . . . . . . . . . . . . . . . . . . 118
48 File System Layout of Single Node DR with SAP System Replication . . . . . . . . . . . . 119
49 File System Layout of Single Node DR with SAP System Replication with Storage Expansion 120
50 Single Node with HA using IBM GPFS Storage Replication and DR using System Replication 121


51 Single Node with HA using IBM GPFS Storage Replication and DR using System Repli-
cation without remote site Switches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
52 File System of Single Node with HA and DR with System Replication . . . . . . . . . . . 123
53 File System of Single Node with HA and DR with System Replication and Storage Expansion 124
54 ESXi 5.x filesystems on a System x3850 X6. The VFAT filesystems belong to the USB
device . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
55 ESXi5.5 Storage Path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
56 ESXi 5.1 WEB Welcome . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
57 Create new virtual machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
58 Choose custom configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
59 Choose a name . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
60 Choose disk storage for VM files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
61 Newest virtual machine hardware version . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
62 Configure the use of more than 32 CPUs . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
63 Choose Operating System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
64 Choose number of CPUs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
65 Choose Memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
66 Choose Network Cards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
67 Choose SCSI controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
68 Create new HANA datastore . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
69 Choose datastore size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
70 Choose datastore . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
71 Choose SCSI Node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
72 Add a new CD/DVD device . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
73 Select ISO image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
74 Select IDE device 0:0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
75 Finish creation of SLES ISO mount . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
76 Upgrade virtual hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
77 Confirm upgrade . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
78 Upgrade virtual hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
79 Changing the autoyast parameter for installation . . . . . . . . . . . . . . . . . . . . . . . 143
80 Adding kickstart parameter for install . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
81 Overview of Backup/Restore Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
82 Sample GRUB boot loader screen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179


List of Tables
1 Network Switch Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2 System x3850 X6 Single Node Configurations . . . . . . . . . . . . . . . . . . . . . . . . . 13
3 IBM System x3950 X6 Single Node Four Socket Configurations . . . . . . . . . . . . . . . 13
4 System x3950 X6 Single Node Eight Socket Configurations . . . . . . . . . . . . . . . . . . 14
5 System x3950 X6 Single Node Four Socket Configurations with Storage Expansion . . . . 14
6 System x3950 X6 SAP ERP on SAP HANA Single Node Configurations . . . . . . . . . . 14
7 System x3850 X6 Cluster Node Configurations with Storage Expansion . . . . . . . . . . 15
8 System x3950 X6 Cluster Node Configurations . . . . . . . . . . . . . . . . . . . . . . . . 15
9 Slots which may be used for additional NICs . . . . . . . . . . . . . . . . . . . . . . . . . 16
10 Card assignments for a two socket x3850 X6 . . . . . . . . . . . . . . . . . . . . . . . . . . 16
11 Card assignments for a four socket x3850 X6 . . . . . . . . . . . . . . . . . . . . . . . . . 17
12 Network interface card assignments for an eight socket x3950 X6 . . . . . . . . . . . . . . 19
13 Card placement for x3950 X6 four socket and eight socket . . . . . . . . . . . . . . . . . . 19
14 Customer infrastructure addresses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
15 IP address configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
16 Numbering conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
17 G8264 RackSwitch port assignments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
18 G8124 RackSwitch port assignments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
19 G8052 RackSwitch port assignments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
20 Installation Process and Phases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
21 SAP HANA references . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
22 DVD Part Numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
23 Supported Firmware, Software and Driver Levels . . . . . . . . . . . . . . . . . . . . . . . 42
24 Required Operation Modes UEFI settings . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
25 Required Processors UEFI settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
26 Required Power UEFI settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
27 Required Memory UEFI settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
28 Boot options and boot loaders used . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
29 x3850 X6 RAID Controller Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
30 x3950 X6 RAID Controller Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
31 Partition Scheme for Single Nodes and Cluster Installations . . . . . . . . . . . . . . . . . 49
32 DVD/ISO Media Install Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
33 ServeRAID M5120 Firmware Issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
34 Hostname Settings for DR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
35 Extra Network Settings for DR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
36 GPFS Settings for DR Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
37 eX5 T-Shirt Size to X6 Model Mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
38 Stanza file for X6 servers in eX5 clusters . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
39 Stanza file for X6 servers in eX5 clusters . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
40 eX5 T-Shirt Size to X6 Model Mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
41 Stanza file for X6 servers in eX5 clusters . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
42 Stanza file for X6 servers in eX5 clusters . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
43 IBM System x3550 M4 GPFS quorum node . . . . . . . . . . . . . . . . . . . . . . . . . . 103
44 Single Node with HA OS Partitioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
45 Single Node with HA OS Networking Setup . . . . . . . . . . . . . . . . . . . . . . . . . . 104
46 Single Node with HA Network Switch Definitions . . . . . . . . . . . . . . . . . . . . . . . 105
47 SAP HANA Virtual Machine Sizes by Lenovo . . . . . . . . . . . . . . . . . . . . . . . . . 126
48 RAID array and RAID controller overview . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
49 x3850 X6 Memory DIMM Placement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
50 x3950 X6 Memory DIMM Placement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
51 Upgrade GPFS Portability Layer Checklist . . . . . . . . . . . . . . . . . . . . . . . . . . 156
52 Upgrade GPFS Portability Layer Checklist . . . . . . . . . . . . . . . . . . . . . . . . . . 158


53 GPFS Upgrade Checklist . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162


54 Required SAP HANA directories for restore . . . . . . . . . . . . . . . . . . . . . . . . . . 184
55 Topology Vectors in a 8 node DR-cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
56 Lenovo MTM Mapping & Model Overview . . . . . . . . . . . . . . . . . . . . . . . . . . 195
57 Lenovo MTM Mapping & Model Overview . . . . . . . . . . . . . . . . . . . . . . . . . . 196
58 ServeRAID M5120 Firmware Issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203

List of Listings
1 SSH login screen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
2 Support script usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
3 Support script output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
4 Example SUSE fstab entries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
5 Example Red Hat fstab entries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
6 Example SLES primary fstab file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
7 Example SLES backup fstab file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
8 Example RHEL primary fstab file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
9 Example RHEL backup fstab file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
10 Example UEFI Configuration for Primary Partition . . . . . . . . . . . . . . . . . . . . . 177
11 Example GRUB Configuration for Primary Partition . . . . . . . . . . . . . . . . . . . . . 178
12 Example GRUB Configuration for Backup Partition . . . . . . . . . . . . . . . . . . . . . 178
13 Example rsync command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
14 Example rsync command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
15 Example SLES primary fstab file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
16 Example RHEL primary fstab file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180


List of Abbreviations
ASU Lenovo Advanced Settings Utility
BIOS Basic Input / Output System
DR Disaster Recovery (previously SAP Disaster Tolerance)
DT SAP Dynamic Tiering (not to be confused with Disaster Recovery (DR), previously
Disaster Tolerance (DT))
ELILO EFI Linux Loader
IBM GPFS IBM General Parallel File System
GRUB Grand Unified Bootloader
GSS GPFS Storage Server
IMM Integrated Management Module
LILO Linux Loader
MTM Machine Type Model
NIC Network Interface Controller
OLAP On Line Analytical Processing
OLTP On Line Transaction Processing
OS Operating System
RHEL Red Hat Enterprise Linux
SAP HANA SAP HANA Platform Edition
SLES SUSE Linux Enterprise Server
SLES for SAP SUSE Linux Enterprise Server for SAP Applications
UEFI Unified Extensible Firmware Interface
UUID Universally Unique Identifier
VLAG Virtual Link Aggregation Group
VLAN Virtual Local Area Network


1 Abstract
This document provides general information specific to the Lenovo Systems Solution for SAP HANA
Platform Edition (short: Lenovo Solution). This document assumes that the reader understands the
basic structure and components of the SAP HANA Platform Edition (SAP HANA) software, that he
has a solid understanding of Linux administration processes, and that he has been instructed how to
install the SAP HANA1 software on Lenovo Systems hardware.
The Lenovo Solution is built with Lenovo Systems hardware based on the Intel Xeon architecture as building
blocks for a scale-up or scale-out SAP HANA system. These provide a highly-scalable infrastructure
for SAP HANA. The Lenovo Systems servers, with local storage or GPFS Storage Server (GSS), and
Lenovo Systems Networking switches will be used to run SAP HANA.
Lenovo has created orderable models upon which you may install and run SAP HANA according to
the sizing charts coordinated with SAP AG. For each workload type, special ordering options for the
Lenovo System servers, storage and switches have been approved by SAP and Lenovo to accommodate
the requirements for SAP HANA.
Attention
IMPORTANT: Please do not attempt to install a system without having been instructed
about the content of this document.

Note
It is considered best practice to create backups in advance and to recover the SAP HANA system
from them after a major failure, instead of relying on a fresh installation with the help of this document. For
details on Backup and Recovery please refer to the Lenovo Solution Backup & Restore Guide
as well as the Lenovo Solution Hardware, Operating System & GPFS Operations Guide (SAP
Note 1650046).
© Copyright 2014-2015 Lenovo. All Rights Reserved.
Neither this documentation nor any part of it may be copied or reproduced in any form or by any means
or translated into another language, without the prior consent of Lenovo.
Lenovo makes no warranties or representations with respect to the content hereof and specifically disclaims
any implied warranties of merchantability or fitness for any particular purpose. Lenovo assumes no
responsibility for any errors that may appear in this document. The information contained in this
document is subject to change without any notice. Lenovo reserves the right to make any such changes
without obligation to notify any person of such revision or changes. Lenovo makes no commitment to
keep the information contained herein up to date.
Edition Notice: 17th April 2015
This is the published edition of this document. The online copy is the master.

1 SAP HANA Platform Edition


1.1 Preface & Scope

The objective of this paper is to document the installation and configuration of the SAP HANA Platform
Edition (SAP HANA) on System x hardware using a managed set up rather than manually installing
each node from scratch. The major products installed here are SAP HANA, IBM General Parallel File
System (IBM GPFS) and the operating systems SUSE Linux Enterprise Server for SAP Applications
(SLES for SAP), or Red Hat Enterprise Linux (RHEL).
For instructions how to administrate SAP HANA Platform Edition (SAP HANA) please refer to the
SAP HANA Technical Operations Manual2 . Instructions how to administrate and maintain the other
components delivered with the System x solution can be found in the SAP Note 1650046 – Lenovo Systems
Solution Hardware, Operating System & GPFS Operations Guide. The Lenovo System x solution for
SAP HANA Quick Start Guide provides an overview of the complete solution and instructions how to
find service and support for your Lenovo Solution.

1.2 Acknowledgements

The authors of this document are:


• Martin Bachmaier, Lenovo Development for SAP Solutions, Germany
• Florian Bausch, Lenovo Development for SAP Solutions, Germany
• Detlev Freund, Lenovo Development for SAP Solutions, Germany
• Patrick Hartman, Lenovo Development for SAP Solutions, Germany
• Guido Kampe, Lenovo Development for SAP Solutions, Germany
• Nils König, Lenovo Development for SAP Solutions, Germany
• Christoph Nelles, Lenovo Development for SAP Solutions, Germany
• Richard Ott, Lenovo Development for SAP Solutions, Germany
• Volker Pense, Lenovo Development for SAP Solutions, Germany
• Michael Reumann, Lenovo Development for SAP Solutions, Germany
The authors would like to thank the following Lenovo and IBM colleagues:
• Herbert Diether, Lenovo Development for SAP Solutions, Germany
• Oliver Rettig, Lenovo Development for SAP Solutions, Germany
• Keith Frisby, Lenovo Systems Lab Services, US
• Thorsten Nitsch, IBM GTS, Germany
• Alexander Trefs, Lenovo Technical Sales, Germany
And many people at SAP Development, Walldorf, Germany; specifically:
• Abdelkader Sellami, SAP HANA Support, Walldorf, Germany
• Adolf Brosig, SAP HANA Development, Walldorf, Germany
• Helmut Cossmann, SAP HANA Development, Walldorf, Germany
• Henning Sackewitz, SAP Development, Walldorf, Germany
• Michael Becker, SAP HANA Support Development, Walldorf, Germany

2 http://help.sap.com/hana_platform


• Oliver Rebholz, SAP HANA Development, Walldorf, Germany

1.3 Feedback

We are interested in your comments and feedback. Please send it to sapsolutions@lenovo.com. The full
guidebook can be downloaded, depending on its version, from the following community (SAP HANA Support
Document section) – SAP Solutions at Lenovo Community.

1.4 Disclaimer

This document is subject to change without notification and will not cover the issues encountered in
every customer situation. It should be used only in conjunction with the official product literature. The
information contained in this document has not been submitted to any formal test and is distributed AS
IS.
All statements regarding Lenovo future direction and intent are subject to change or withdrawal without
notice, and represent goals and objectives only. Contact your local Lenovo office or Lenovo authorized
reseller for the full text of the specific Statement of Direction.
Some information addresses anticipated future capabilities. Such information is not intended as a defini-
tive statement of a commitment to specific levels of performance, function or delivery schedules with
respect to any future products. Such commitments are only made in Lenovo product announcements.
The information is presented here to communicate Lenovo’s current investment and development activities
as a good faith effort to help with our customers’ future planning.
This document is for educated service personnel only. If you are not familiar with the described system,
we ask you to refrain from trying to apply what is described herein – you could damage the preloaded
system installation and void the SAP certified configuration, which will void the warranty and support of
said machine. Please contact sapsolutions@lenovo.com to get enrolled for education prior to installing
a Lenovo Solution appliance.
In case of issues with the SAP HANA appliance, the customer is asked to open an SAP Help Desk request
(OSS ticket) first and foremost. Only by following this path can we ensure the proper configuration of
the Lenovo Solution. If the customer opens an IBM/Lenovo support ticket for the system directly, they might
be requested to upgrade firmware or software to the latest available levels, which
might not be supported with the SAP HANA appliance. If the problem is identified as a hardware or file system issue,
the ticket will be forwarded to the IBM/Lenovo support team and handled appropriately. Although this
may be contrary to standard IBM/Lenovo Support processes, it is the approved and accepted support
process for all SAP Appliances including the SAP HANA appliance.

1.5 Support

The System x SAP HANA development team provides new images for the SAP HANA appliance at regular
intervals. These images have dependencies regarding the hardware, operating systems, and hardware
drivers. The use of the latest image for maintenance and installation of the SAP HANA appliance is highly
recommended.
Whenever firmware level recommendations (which fix known firmware issues) for the Lenovo components
of the SAP HANA appliance are given by the individual System x support representatives, it is the
customers’ responsibility to upgrade (or downgrade) to the recommended levels as instructed by those
representatives. A list of the minimally required versions can be found in SAP Note 1880960
– Lenovo Systems Solution for SAP HANA PTF List.


Whenever operating system recommendations (which fix known operating system issues) for the SUSE
Linux components of the SAP HANA appliance are given by SAP, SUSE, or IBM/Lenovo support
representatives, it is the customers’ responsibility to upgrade (or downgrade) to the recommended levels
as instructed by SAP through an explicit SAP Note or a Customer OSS Message. SAP describes their
operational concept, including updating of the operating system components, in SAP Note 1599888 – SAP
HANA: Operational Concept. If the Linux kernel is updated, you have to recompile the IBM GPFS kernel
modules (the GPFS portability layer) as well; a sketch of this procedure is shown below.
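The following is a minimal sketch of how the GPFS portability layer is typically rebuilt after a kernel
update; the path and make targets are the ones documented for GPFS 3.5/4.1, but please consult the GPFS
documentation and the Operations Guide referenced above for the exact procedure (including stopping GPFS
beforehand) for your installed GPFS level:

# cd /usr/lpp/mmfs/src
# make Autoconfig
# make World
# make InstallImages

After a successful rebuild, GPFS can be started again with the new kernel modules in place.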
Whenever other hardware or software recommendations (that fix known issues) for components of the
SAP HANA appliance are given by the individual IBM/Lenovo support representatives, it is the cus-
tomers’ responsibility to upgrade (or to downgrade) to the recommended levels as instructed by IB-
M/Lenovo support representatives.
If software and documentation updates are available, you can download them from the respective Lenovo,
IBM, SUSE or SAP website. To check for updates, go to the following websites. Follow the procedure in
the included documentation to update the software.
• Firmware and drivers for System X6 Servers
– You can obtain updates for System x3850/x3950 X6 servers on the IBM support website (Fix
Central) at http://www.ibm.com/support/fixcentral using the ’Find product’ tab.
• IBM General Parallel File System (IBM GPFS3 ) updates
– You can obtain updates for GPFS on the IBM support website for GPFS 3.5.0 and GPFS
4.1.0
• SUSE Linux Enterprise Server for SAP Applications 11 SP3
– You can download the installation package from the SUSE website at http://download.
novell.com/Download?buildid=XL0RqEykZpc~
• SUSE Linux patches and updates
– You can obtain the latest code updates for SUSE from the SUSE website at http://download.
novell.com/patch/finder/
• Red Hat Enterprise Linux 6.5
– You can download the installation package from the Red Hat website at http://www.redhat.
com/en/technologies/linux-platforms/enterprise-linux
• VMware ESX Server patches and updates
– You can obtain the latest code updates for vSphere ESX server from the VMware website at
http://www.vmware.com/support/
• SAP HANA appliance updates
– You can obtain the latest code updates from SAP at the SAP Service Marketplace at http:
//service.sap.com/swdc
Lenovo recommends that customers follow the software upgrade recommendations set out by SAP in the
SAP HANA Technical Operations Manual4 (TOM). It is important to understand that the corrections
listed in this note are those known to be a solution to a definite problem when running the SAP HANA
appliance on the System x solutions. This knowledge was derived from internal testing or from customers who
ran into a specific problem. In parallel, the organizations owning the individual products provide many
more fixes that are unknown to the Lenovo-SAP team, yet are nevertheless recommended to be applied. In
particular, there are fixes that IBM/Lenovo recommends installing that are not listed here. It is expected

3 IBM General Parallel File System


4 http://help.sap.com/hana/SAP_HANA_Technical_Operations_Manual_en.pdf


that you contact your IBM/Lenovo service contact to get a list of those fixes as well as a reasonably
current service level in general.


2 Introduction

2.1 Purpose

This document is intended to provide a single point of reference for techniques and product behaviors
when dealing with SAP HANA.

2.2 Applicability

The techniques and product behaviours outlined in this document apply to:
• SAP HANA appliance Platform Edition v1.0
• SLES for SAP5 11 SP3
• RHEL6 6.5
• IBM GPFS 3.5 and 4.1
• Lenovo Systems solution for SAP HANA appliance based on the:
– System x3850/x3950 X6 Workload Optimized Server

2.2.1 SAP HANA Platform Edition Versions

In this document, we refer to several different versions of the Lenovo Solution guided installation
software. The following numbering refers to the corresponding SAP HANA Platform Edition version.
1.7.x SAP HANA Platform Edition v 1.0 SPS07 - First release on IBM/Lenovo Systems X6 hardware
1.8.x SAP HANA Platform Edition v 1.0 SPS08

2.3 Exclusions and Exceptions

The techniques and product behaviours outlined in this document may not be applicable to future releases.

2.4 Conventions

This guide uses several conventions to improve the reader’s experience and the ease of understanding.

2.4.1 Icons Used

The following information boxes indicate important information you should follow according to the level
of importance.
Attention
ATTENTION – pay close attention to the instructions given

Warning
WARNING – this is something to take into consideration

5 SUSE Linux Enterprise Server for SAP Applications


6 Red Hat Enterprise Linux


Note
INFORMATION – extra information describing in detail

2.4.2 Code Snippets

When reading code snippets you have to note the following: Lines of code that are too long to be shown
in one line will be automatically broken. This line break is indicated by an arrow at the end of the first
and an arrow at the start of the second line:
1 This is a code snippet that is too long to be printed in one single line, therefore ←-
,→you will see an automatic line break.

There are also line numbers at the left side of each code snippet to improve the readability.
Code examples that contain commands that have to be executed on a command line follow these rules:
• Lines beginning with a # indicate commands to be executed by the root user.
• Lines beginning with a $ indicate commands to be executed by an arbitrary user.

3 Solution Overview
This document provides general information specific to the Lenovo Solution. This document assumes
that the reader understands the basic structure and components of the SAP HANA Platform Edition.
SAP HANA should be installed on hardware that has been specifically certified for SAP HANA by SAP.
This hardware may not be configured from individual parts; rather, it is to be ordered and delivered as a
single unit using an IBM or Lenovo manufacturer type/model number specified later.

3.1 The SAP HANA Appliance Software

The Lenovo Solution is based on building blocks that provide a highly scalable infrastructure for SAP HANA
based on the System x architecture: x3850/x3950 X6 as well as software, such as IBM GPFS, that will
be used to run SAP HANA.
Lenovo has created several system models upon which you may install and run SAP HANA according to
the sizing charts coordinated with SAP. For each workload type a special System x type/model has been
approved by SAP and Lenovo to accommodate the requirements for the SAP HANA Platform Edition.

3.2 Definition of SAP HANA

The following picture defines the current SAP HANA scenarios that can be leveraged through the System
x solution for the SAP HANA Platform Edition.


Figure 1: Current SAP HANA Appliance Scenarios


4 Hardware Configurations
The System X6 Workload Optimized servers for SAP HANA are based upon two building blocks that
can be used to fulfill the hardware requirements for SAP HANA. The SAP HANA appliance software
must be installed only on a certified and tested hardware configuration based on one of these two models.
Lenovo provides a model/type number for four (4) socket and eight (8) socket systems that are to be
set up for each certified model by SAP. A customer needs only to choose the model and the extra options
to fulfill their requirements. Models created manually will neither be supported by IBM/Lenovo nor SAP
due to the high-performance criteria set out by SAP during certification.
System x3850 X6 Workload Optimized Server for SAP HANA
• 2×–4× Intel Xeon E7-8880v27,8 Family of Processors
• 128–2048GB DDR3 Memory
• Internal Storage:
  – 6×1.2TB 2.5" HDD for RAID1 and RAID5
  – 2×400GB SSD for LSI CacheCade
• One (1) External Storage (EXP2524) for systems > 512GB (stand-alone configurations) or ≥ 512GB (cluster configurations)
• 2× Dual-Port 10GbE NICs
• 1× Quad-Port 1GigE NICs
• IBM General Parallel File System
• Certified for SLES for SAP OS and SAP HANA appliance software

Figure 2: System x3850 X6 Server
Optional System Storage EXP2524
• Up to 20×1.2TB 2.5" HDD RAID59
• Up to 4×400GB SSD for LSI CacheCade

Figure 3: System Storage EXP2524

System x3950 X6 Workload Optimized Server for SAP HANA
• 4×–8× Intel Xeon E7-8880v210,11 Family of Processors
• 512GB–6TB DDR3 Memory
• Internal Storage:
  – 12×1.2TB 2.5" HDD for RAID1 and RAID5
  – 4×400GB SSD for LSI CacheCade
• One (1) External Storage (EXP2524) for systems ≥ 3TB (stand-alone configurations) or > 1024GB (cluster configurations)
7 For improved performance, E7-8890v2 is supported as an optional feature.
8 For customers who confirm that an upgrade to an 8 socket system will never be desired, the Intel processors E7-4880v2
or E7-4890v2 will also be supported as optional alternate features.
9 RAID6 optional
10 For improved performance, E7-8890v2 is supported as an optional feature.
11 For customers who confirm that an upgrade to an 8 socket system will never be desired, the Intel processors E7-4880v2

or E7-4890v2 will also be supported as optional alternate features.


• 2× Dual-Port 10GbE NICs
• 1× Quad-Port 1GigE NICs
• IBM General Parallel File System
• Certified for SLES for SAP OS and SAP HANA appliance software

4.1 SAP HANA Platform Edition T-Shirt Sizes

Lenovo and SAP have certified a set of configurations to be used with the SAP HANA Platform Edition
that are based on the Intel Xeon IvyBridge EX E7-4880v2, E7-4890v2, E7-8880v2 or E7-8890v2 processor family.

4.2 Single Node versus Clustered Configuration

Figure 4: System x3950 X6 Server

The Systems X6 Solution servers can be configured in two ways:
1. As a single node configuration with separate, independent HANA installations (example: production,
test, development). These servers all have individual GPFS clusters that are independent from each
other. These should be installed as single servers.

Figure 5: SAP HANA Multiple Single Node Example


2. As a clustered configuration with a distributed HANA instance across servers. All servers (nodes)
form one HANA cluster. All servers (nodes) form one GPFS cluster. These should be installed as
clustered servers.

Figure 6: SAP HANA Clustered Example with Backup

The terms scale-out and cluster are used interchangeably in this document. What is meant is the use
of multiple single Lenovo workload optimized servers connected via one or more configuration-specific
network switches in such a way that all servers act as one single high-performance SAP HANA instance.
These servers need to be configured differently from a single node system and are therefore defined
here explicitly. Further documentation will differentiate between non-clustered (single or consolidated)
and clustered installations.

4.2.1 Network Switch Options

For clustered configurations, extra hardware such as network switches and adapters need to be purchased
in addition to the clustered appliances. Currently, the only supported network switches for the Lenovo
Workload Optimized server in a clustered configuration are:

Network         Description                         Part Number

10Gb Ethernet   RackSwitch G8264 (Rear-to-Front)    7309G64
10Gb Ethernet   RackSwitch G8264 (Front-to-Rear)    730964F
10Gb Ethernet   RackSwitch G8124DC                  7309BD5
10Gb Ethernet   RackSwitch G8124E (Rear-to-Front)   7309BR6
10Gb Ethernet   RackSwitch G8124E (Front-to-Rear)   7309BF7
1Gb Ethernet    RackSwitch G8052 (Rear-to-Front)    7309G52
1Gb Ethernet    RackSwitch G8052 (Front-to-Rear)    730952F

Table 1: Network Switch Options


Note
These configurations may change over time, so please contact sapsolutions@lenovo.com for
any update.


4.3 SAP HANA Optimized Hardware Configurations

SEO models exist for certain configurations; please see Appendix D: Lenovo X6 Server MTM List & Model
Overview on page 195 for more details.

4.3.1 System x3850 X6 Single Node Configurations

SAP Models 128 256 384 512 256 512


Product x3850 X6
3837–AC3 or 6241–AC3
3873–H2x 3873–H3x 3873–H4x 3873–H5x
Type/Model
or or n/a or n/a or
6241–H2x 6241–H3x 6241–H4x 6241–H5x
CPU 2 ×Intel Xeon® E7-8880v2 4 ×Intel Xeon® E7-8880v2
128GB 256GB 384GB 512GB 256GB 512GB
Memory
DDR3 DDR3 DDR3 DDR3 DDR3 DDR3
Disk 6×1.2TB HDD 2×400GB SSD
Controller 1 ×M5210
Disk Layout 3.6 TB RAID5 for SAP HANA data/log
2 × Dual-Port 10GbE
Network
1 × Quad-Port 1GigE

Table 2: System x3850 X6 Single Node Configurations

4.3.2 System x3950 X6 Single Node Configurations

SAP Models 256 512 768 1024 1536 2048


Product x3950 X6
3837–AC4 or 6241–AC4
3837–HBx 3837–HCx
Type/Model
n/a or n/a or n/a n/a
6241–HBx 6241–HCx
CPU 4 ×Intel Xeon® E7-8880v2
256GB 512GB 768GB 1024GB 1536GB 2048GB
Memory
DDR3 DDR3 DDR3 DDR3 DDR3 DDR3
Disk 6×1.2TB HDD 2×400GB SSD 12×1.2TB HDD 4×400GB SSD
Controller 1 ×M5210 2 ×M5210
3.6 TB RAID5 for 9.6 TB RAID5 for
Disk Layout
SAP HANA data/log SAP HANA data/log
2 × Dual-Port 10GbE
Network
1 × Quad-Port 1GigE

Table 3: IBM System x3950 X6 Single Node Four Socket Configurations


SAP Models 512 1024 1536 2048


Product x3950 X6
3837–AC4 or 6241–AC4
3837–HDx
Type/Model
n/a n/a n/a or
6241–HDx
CPU 8 ×Intel Xeon® E7-8880v2
512GB 1TB 1.5TB 2TB
Memory
DDR3 DDR3 DDR3 DDR3
Disk 6×1.2TB HDD 2×400GB SSD 12×1.2TB HDD 4×400GB SSD
Controller 1 ×M5210 2 ×M5210
Disk Layout 3.6 TB RAID5 for SAP HANA data/log
2 × Dual-Port 10GbE
Network
1 × Quad-Port 1GigE

Table 4: System x3950 X6 Single Node Eight Socket Configurations

4.3.3 System x3850 X6 Single Node Four Socket Configurations with Storage Expansion

SAP Models 768 1024 1536* 2048*


Product x3850 X6
3837–AC3 or 6241–AC3
3873–H6x
Type/Model
n/a or n/a
6241–H6x
CPU 4 ×Intel Xeon® E7-8880v2
Memory 768GB DDR3 1 TB DDR3 1.5 TB DDR3 2 TB DDR3
Disk 15×1.2TB HDD & 4×400GB SSD
Controller 1 ×M5210 & 1 ×M5120/M5225
Disk Layout 13.2 TB RAID5 for SAP HANA data/log
2 × Dual-Port 10GbE
Network
1 × Quad-Port 1GigE

Table 5: System x3950 X6 Single Node Four Socket Configurations with Storage Expansion
* For Suite on HANA only, not Datamart and BW

4.3.4 System x3950 X6 SAP ERP on SAP HANA Single Node Configurations

SAP Models 3TB 4TB 6TB


Product x3950 X6
Type/Model 3837–AC4 or 6241–AC4
CPU 8 ×Intel Xeon® E7-8880v2
Memory 3 TB DDR3 4 TB DDR3 6 TB DDR3
Disk 21×1.2TB HDD & 6×400GB SSD 30×1.2TB HDD & 8×400GB SSD
Controller 2 ×M5210 & 1 ×M5120/M5225
Disk Layout 19.2 TB RAID5 for SAP HANA data/log 28.8 TB RAID5 for SAP HANA data/log
2 × Dual-Port 10GbE
Network
1 × Quad-Port 1GigE

Table 6: System x3950 X6 SAP ERP on SAP HANA Single Node Configurations


4.3.5 System x3850 X6 Cluster Node Configurations with Storage Expansion

SAP Models 256 512 1024


Product x3850 X6
Type/Model 3837–AC3 or 6241–AC3
2 ×Intel Xeon® E7-
CPU 4 ×Intel Xeon® E7-8880v2
8880v2
Memory 256GB DDR3 512GB DDR3 1 TB DDR3
Disk 15×1.2TB HDD & 4×400GB SSD
Controller 1 ×M5210 & 1 ×M5120/M5225
Disk Layout 13.2 TB RAID5 for SAP HANA data/log
2 × Dual-Port 10GbE
Network
1 × Quad-Port 1GigE

Table 7: System x3850 X6 Cluster Node Configurations with Storage Expansion

4.3.6 System x3950 X6 Cluster Node Configurations

SAP Models 512 1024 1024 2048


Product x3950 X6
Type/Model 3837–AC4 or 6241–AC4
CPU 4 ×Intel Xeon® E7-8880v2 8 ×Intel Xeon® E7-8880v2
Memory 512GB DDR3 1 TB DDR3 1 TB DDR3 2 TB DDR3
12×1.2TB HDD 21×1.2TB HDD
Disk
& 4×400GB SSD & 6×400GB SSD
Controller 2 ×M5210 2 ×M5210 & 1 ×M5120/M5225
9.6 TB RAID5 19.2 TB RAID5
Disk Layout
for SAP HANA data/log for SAP HANA data/log
2 × Dual-Port 10GbE
Network
1 × Quad-Port 1GigE

Table 8: System x3950 X6 Cluster Node Configurations

4.4 Card Placement

Attention
You need to make sure that the cards are placed in the correct PCI slots. Please refer to the
tables below for the assignment of cards to slots. This step must be done before the installation.
Please be aware that your machine is supported by Lenovo only with the correct card layout.
The card placement differs between two, four and eight socket machines. Please refer to figure 7 and
table 10 for two socket machines, figure 8 and table 11 on page 17 for four socket machines, and figure 10
and table 12 on page 19 for eight socket machines. Concerning the numbering of the slots, please note that
PCI slots 11 and 12 are located in the Storage Book, see figure 9. An x3950 X6 machine has an additional
Storage Book containing PCI slots 43 and 44. The Storage Books are accessible from the front.

4.4.1 Network Interface Cards

The x3850 X6 machine comes with two Mellanox ConnectX-3 FDR IB VPI adapters that provide two
QSFP ports each. With QSA adapters the QSFP ports support SFP+ transceivers for 10GbE connectivity.
A quad-port Intel I-350 provides four 1GbE ports and is placed in slot 10. In an x3950 X6 an

additional I-350 card can be placed in slot 42. An Intel I-340 PCI card is optionally available if more 1GbE
ports are needed.
Please see the tables and figures below for the slot in which each card should be placed, depending on
your machine type and configuration.

4.4.2 Slots for additional Network Interface Cards

If the customer needs more network ports, the PCI slots shown in table 9: Slots which may be used for
additional NICs on page 16 may be used for additional NICs.

Machine PCI Slots


x3850 X6 two sockets 9, 10
x3850 X6 four sockets 2, 3, 5, 6, 10
x3950 X6 four sockets 9, 10, 41, 42
x3950 X6 eight sockets 5, 6, 10, 37, 38, 42

Table 9: Slots which may be used for additional NICs

4.4.3 RAID Adapter Cards

The internal RAID adapter is a ServeRAID M5210 which resides in slot 12 in the Storage Book. In the
x3950 X6, two internal RAID adapters are used, residing in slots 12 and 44.
In an x3850 X6, the first external RAID adapter (ServeRAID M5120 or M5225) will be placed in slot 8,
the second in slot 7, and the third in slot 9. In an x3950 X6 machine, placement starts
in slot 40, then 39, then 41, and finally 7 and 8; refer to table 13 for details.

Ethernet
Card Port Label Slot
Device
ServeRAID M5210 (internal) – 12 –
E eth4
10
F eth5
Intel I-350 1GbE quad port
G eth6
H eth7
– eth8
– eth9
Intel I-340 1GbE quad port *
– 9 eth10
– eth11
A eth0
Mellanox ConnectX-3 FDR IB VPI 8
B eth1
C eth2
Mellanox ConnectX-3 FDR IB VPI 7
D eth3
100MbE internal Ethernet Adapter for
I – –
System Management via the IMM

Table 10: Card assignments for a two socket x3850 X6


* This card is optional


Figure 7: Workload Optimized System x3850 X6 2 Socket Rear View

Ethernet
Card Port Label Slot
Device
ServeRAID M5210 (internal) – 12 –
E eth4
10
F eth5
Intel I-350 1GbE quad port
G eth6
H eth7
ServeRAID M5120/M5225 (external) * – 9 –
ServeRAID M5120/M5225 (external) * – 8 –
ServeRAID M5120/M5225 (external) * – 7 –
– eth8
– eth9
Intel I-340 1GbE quad port *
– 5 eth10
– eth11
C eth2
Mellanox ConnectX-3 FDR IB VPI 4
D eth3
A eth0
Mellanox ConnectX-3 FDR IB VPI 1
B eth1
100MbE internal Ethernet Adapter for
I – –
System Management via the IMM

Table 11: Card assignments for a four socket x3850 X6


* These cards are only used in certain configurations; please refer to section 4.4.3 for details


Figure 8: Workload Optimized System x3850 X6 4 Socket Rear View

Figure 9: Workload Optimized System Storage Book. This contains slots 11, 12 and slots 43, 44 on x3950
X6 in an additional Storage Book


Ethernet
Card Port Label Slot
Device
E eth4
10
F eth5
Intel I-350 1GbE quad port
G eth6
H eth7
C eth2
Mellanox ConnectX-3 FDR IB VPI 36
D eth3
A eth0
Mellanox ConnectX-3 FDR IB VPI 4
B eth1
100MbE internal Ethernet Adapter for
I – –
System Management via the IMM
100MbE internal Ethernet Adapter for
J – –
System Management via the IMM
K e.g. eth8
42
L e.g. eth9
Intel I-350 1GbE quad port *
M e.g. eth10
N e.g. eth11
– e.g. eth8
– e.g. eth9
Intel I-340 1GbE quad port *
– 5 e.g. eth10
– e.g. eth11

Table 12: Network interface card assignments for an eight socket x3950 X6
* This card is optional; please refer to table 13 for details

4 processors 8 processors
Slot 512GB 4S 1TB 4S 1TB 2TB 4TB 6TB* 12TB*
4 MLNX MLNX MLNX MLNX MLNX
S/C
7 MLNX MLNX M5120/
M5225
S/C
8 M5120/
M5225
10 I350 I350 I350 I350 I350 I350 I350
12 M5210 M5210 M5210 M5210 M5210 M5210
36 MLNX MLNX MLNX MLNX MLNX
S/C S/C
C M5120/
39 MLNX MLNX M5120/ M5120/
M5225
M5225 M5225
C C C S/C
C M5120/ C M5120/
40 M5120/ M5120/ M5120/ M5120/
M5225 M5225
M5225 M5225 M5225 M5225
C M5120/ C M5120/
41
M5225 M5225
42 I350 I350 I350 I350 I350 I350 I350
44 M5210 M5210 M5210 M5210 M5210 M5210 M5210

Table 13: Card placement for x3950 X6 four socket and eight socket


Figure 10: Workload Optimized System x3950 X6 8 Socket Rear View


5 Networking

5.1 Networking Requirements

The networking for the Lenovo Solution, the Integrated Management Module (IMM) and the correspond-
ing switches should be set up and integrated into the customer network environment according to the
customer’s requirements and the recommendations from SAP. SAP currently recommends that individual
workloads are separated by either physical or virtual LAN addresses or subnets.
The individual workloads described by SAP are:
• SAP HANA internal communication via SAP HANA private networking
• Customer access to the SAP HANA appliance via:
– SAP Landscape Transformation Replication (LT)
– Sybase Replication (SR)
– SAP Business Objects Data Services (DS)
– Business Objects XI, Microsoft Excel, etc.
– Server data management tools for:
∗ System/DB backup and restore operations
– Logical server application management (can be partially accomplished via Integrated Manage-
ment Module)
∗ SSH access, VNC access, SAP Support access
We strongly recommend that the following SAP workloads are placed on dedicated and distinct subnets using
separate Ethernet adapters (NICs). If not, the network setup will become more complicated.
• SAP HANA client access
• Server data management
• Server application management
In addition to the SAP workloads, the Lenovo Solution defines two additional workloads:
• IBM clustered files system communications for GPFS
• Physical server management via the Integrated Management Module
– Hardware support, console web access and SSH access
It is necessary to separate the IBM GPFS and SAP HANA internal networks from all other networks as
well as from each other. Servers configured in a clustered scenario require two dedicated high speed
NICs (e.g. 10GbE) with separate physical private LANs for the internal communication of GPFS and SAP
HANA. In addition, external networks, e.g. for SAP client/BW and SAP management communication,
should be separated as well. If not, SAP HANA performance may be compromised and the system is
supported by neither SAP nor Lenovo.

5.2 Jumbo Frames

It is possible and allowed to activate so-called jumbo frames for the HANA and GPFS
networks. Jumbo frames are Ethernet frames with a Maximum Transmission Unit (MTU) of up to 9000
bytes. The standard MTU is 1500 bytes.


The advantage of jumbo frames is less overhead for headers and checksum computation. This can lead
to a better network performance on the HANA and GPFS networks.
Attention
Jumbo frames can only be used if all network components (for example network adapters
and switches) that have to process these jumbo frames support them.
If erroneously activated, jumbo frames cause the loss of network connectivity.
The switches G8264 and G8124 are certified for use with jumbo frames in the Lenovo Solution appliance.
In a standard cluster setup jumbo frames can be activated. In DR12 or High Availability setups
the HANA and GPFS networks may communicate via non-Lenovo customer switches that cannot handle
jumbo frames; therefore it is recommended not to use jumbo frames in these setups.
To change this behaviour, you have to change the MTU size. This can be done as follows (a sketch with
example values is shown after this list):
• SUSE: using the YaST module for networking, under the ’General’ tab for the used network device/bond
• Red Hat: by changing the MTU size in the file /etc/sysconfig/network-scripts/ifcfg-* for the used
interface/bond
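The following sketch shows how the new MTU could be activated and verified from the command line;
the bond device name bond1 and the peer address 192.168.20.102 are examples only and must be replaced
with the actual HANA/GPFS bond device and a reachable peer node, and the persistent setting still has
to be made via YaST (SUSE) or the ifcfg file (Red Hat) as described above:

# ip link set dev bond1 mtu 9000
# ping -M do -s 8972 192.168.20.102

The ping command sends packets with the "do not fragment" flag and a payload of 8972 bytes (9000 bytes
minus 28 bytes of IP and ICMP headers); if it succeeds on both the HANA and GPFS networks, jumbo
frames pass end-to-end.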

Warning
Starting from release 1.6.60-890 of the non-OS component DVD jumbo frames get activated
during the installation phase. You may have to deactivate the usage of jumbo frames in
certain scenarios.

5.3 Network Configuration

Before you configure the server and install the Lenovo Solution, please gather the following network
information from your network administrator where indicated with the b symbol. Please use only IPv4
addresses.
Note
In case the customer plans to install a single node configuration, but would like to scale it out
to a cluster by adding more servers: plan the network configuration for the GPFS and HANA
networks as if the cluster already existed, to simplify a later scale-out.

IP Address
Primary DNS IP b
Secondary DNS IP b
Domain Search b
NTP Server b
Default Network
b
Prefix
Default Netmask b
Default Gateway b

Table 14: Customer infrastructure addresses

12 Disaster Recovery (previously SAP Disaster Tolerance)


Port Label
Network Single Cluster IP Address Hostname Netmask Gateway
Server Node 01 (Worker/Stand-By/Single)
IBM GPFS 127.0.1.1 (default)
gpfsnode01 255.255.255.0 None
Private Net- 192.168.10.101 (ex-
any A/C (mandatory) (recommended) (recom-
work (prede- ample)
b b mended)
fined) b
SAP HANA 127.0.2.1 (default)
hananode01 255.255.255.0 None
Private Net- 192.168.20.101 (ex-
any B/D (mandatory) (recommended) (recom-
work (prede- ample)
b b mended)
fined) b
Any of the re-
Customer
maining NIC b b b b
Network
ports
IMM I b b b b
Server Node 02 (Worker/Stand-By)
IBM GPFS
127.0.1.1 (default) None
Private Net- gpfsnode02 255.255.255.0
any A/C 192.168.10.102 (ex- (recom-
work (prede- (mandatory) (recommended)
ample) mended)
fined)
SAP HANA
127.0.2.1 (default) None
Private Net- hananode02 255.255.255.0
any B/D 192.168.20.102 (ex- (recom-
work (prede- (mandatory) (recommended)
ample) mended)
fined)
..
.
for all other nodes
..
.

Table 15: IP address configuration
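As a purely illustrative sketch, the example addresses and host names used in this table could be resolved
locally (for example via /etc/hosts) with entries like the following; the actual addresses and names, and
whether they are maintained in /etc/hosts or DNS, must follow the customer's network plan and the
installation procedure:

192.168.10.101   gpfsnode01
192.168.20.101   hananode01
192.168.10.102   gpfsnode02
192.168.20.102   hananode02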

5.4 Network Switch Configuration For Clustered Installations

In a clustered configuration with high availability, the internal networks of the appliance for GPFS
and HANA are set up with redundant links. These connect to redundant G8264 10GigE switches or
redundant G8124 10GigE switches. Both switches are connected with a minimum of two ISL ports. It
is recommended to use the 40GbE ports for the ISLs. On the host side the two corresponding ports of each
network are configured as Linux bond devices (a minimal sketch of such a bond definition is shown below).
The data replication connection to the primary data source can also be set up in a redundant fashion and
connects directly to the appliance internal 10GigE HANA network. The details for this setup depend
strongly on the customer's network infrastructure and need to be planned accordingly. Details of the
exact configuration can be found in chapter 5.6.5: Network Configurations in a Clustered Environment on page 28.
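The following is a minimal sketch of what such a bond definition for the SAP HANA network might look
like in /etc/sysconfig/network/ifcfg-bond1 on SLES for SAP; the slave interface names, bonding options
and IP address are examples only and must match the planned network configuration:

BOOTPROTO='static'
STARTMODE='auto'
BONDING_MASTER='yes'
BONDING_SLAVE0='eth1'
BONDING_SLAVE1='eth3'
BONDING_MODULE_OPTS='mode=802.3ad miimon=100'
IPADDR='192.168.20.101/24'

The 802.3ad (LACP) bonding mode corresponds to the LACP configuration applied to the switch ports in
section 5.7.8.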
Warning
When connecting the data replication network directly to the internal 10GigE network, an
ACL needs to be configured on the uplink port to isolate the internal networks (e.g. 127.0.n.0/24)
from the customer network.
If a network adapter or one of the switches fail, the SAP HANA network and the GPFS network are
taken over by the remaining switch and network adapter.
It is recommended to establish redundant network connections for the other networks (e.g. client network)
as well. This setup is similar to the internal networks and requires two identical G8052 1GigE switches.


As long as there is one redundant path to each server the remaining appliance and data management
networks can be implemented with a single link. Each of the networks will then connect to one of the
two 1GigE switches.
To implement network redundancy on the switch level, a Virtual Link Aggregation Group (VLAG) needs
to be created on the two network switches. A VLAG requires a dedicated inter-switch link (ISL) for
synchronization. More details can be found in Chapter 5.6.5: Network Configurations in a Clustered
Environment on page 28.
Warning
Firmware version N/OS 7.6 or higher is required for VLAGs, so please update the G8124 /
G8264 switches to this level and use the same firmware version for all switches. Currently the
firmware level recommendation is version 7.7.5 or higher.

Note
For more details on VLAGs please obtain the version of "RackSwitch G8264 Application
Guide" respective to the N/OS you have installed and consult chapter 11, "Virtual Link
Aggregation Groups".

5.5 Customer Site Networks

We allow the customer to define and use their own networks and connect them to the dedicated customer
network NICs using their own switch infrastructure. Please ensure the proper IP address setup on the
Lenovo Solution server. This guide does not go into detail regarding the customer's switch configuration,
nor the configuration in the cluster.

5.6 Network Definitions

5.6.1 Numbering conventions

Network IP-Interf. VLAN LACP-Key VLAG-Key Tier-ID Network


MGMT (G8264) 128 4095* - - - -
MGMT (G8124) 128 4095* - - - -
MGMT (G8052) 128 4092** - - - -
ISL - 4094 VLAN+1000 LACP Key 10 -
GPFS - 100(++) port#+1000 LACP-Key - 192.168.10.0/24
HANA - 200(++) port#+1000 LACP-Key - 192.168.20.0/24
IMM (BMC) - 300(++) port#+1000 LACP-Key - 192.168.30.0/24
* VLAN 4095 is internally assigned to the management port(s) and cannot be changed.
** VLAN 4092 is a suggestion for the management VLAN.

Table 16: Numbering conventions

Note
The "(++)" in the table above indicates that +1 should be added for every new network in
case of multiple GPFS, HANA or IMM LANs.


5.6.2 Internal Networks – Option 1 G8264 RackSwitch 10Gbit

This option is defined to use the G8264 RackSwitch 10Gbit Ethernet switch as a private network landscape
for IBM GPFS and SAP HANA. This allows up to 24 Lenovo Solution servers to be connected. The
setup is as follows:
18,20,22,24,26,28...64 (HANA)
.----------------------,5_____
MGMT| G8264 Switch |1_____\__ Inter-Switch 40Gb Link (ISL)
‘----------------------’ \_\_____Port 5 bonded ISL
17,19,21,23,25,27...63 (GPFS) / \
18,20,22,24,26,28...64 (HANA)/ \___Port 1 bonded ISL
.----------------------,5____/ /
MGMT| G8264 Switch |1_________/
‘----------------------’
17,19,21,23,25,27...63 (GPFS)

Figure 11: G8264 RackSwitch

Figure 12: G8264 RackSwitch schema

This guide defines the IBM GPFS network to be used as 192.168.10.0/24 and the SAP HANA network
to be used as 192.168.20.0/24. If the customer wants to use a different IP range he may do so, but it
should be used consistently as the internal (private) network within this guide.


Switch Port VLAN IP Address Hostname Server NIC


g8264-1 MGMT 4095 <customer-mgmt IP1> <switch1> n/a
g8264-1 17 100 192.168.10.101 gpfsnode01 bond0
g8264-1 18 200 192.168.20.101 hananode01 bond1
g8264-1 19 100 192.168.10.102 gpfsnode02 bond0
g8264-1 20 200 192.168.20.102 hananode02 bond1
.. .. .. .. .. ..
. . . . . .
g8264-1 63 100 192.168.10.124 gpfsnode24 bond0
g8264-1 64 200 192.168.20.124 hananode24 bond1
g8264-2 MGMT 4095 <customer-mgmt IP2> <switch2> n/a
g8264-2 17 100 192.168.10.101 gpfsnode01 bond0
g8264-2 18 200 192.168.20.101 hananode01 bond1
g8264-2 19 100 192.168.10.102 gpfsnode02 bond0
g8264-2 20 200 192.168.20.102 hananode02 bond1
.. .. .. .. .. ..
. . . . . .
g8264-2 63 100 192.168.10.124 gpfsnode24 bond0
g8264-2 64 200 192.168.20.124 hananode24 bond1

Table 17: G8264 RackSwitch port assignments

Note
There is no public network attached to these switches.

5.6.3 Internal Networks – Option 2 G8124 RackSwitch 10Gbit

This option is defined to use the G8124 RackSwitch 10Gbit Ethernet switch as a private network landscape
for IBM GPFS and SAP HANA. This allows up to 11 Lenovo Solution servers to be connected. The
setup is as follows:
2,4,6,8,10,12,14,...22 (HANA)
.----------------------,24____
MGMT| G8124 Switch |23____\__ Inter-Switch 10Gb Link (ISL)
‘----------------------’ \_\_____Port 24 bonded ISL
1,3,5,7,9,11,13,15..21 (GPFS) / \
2,4,6,8,10,12,14,...22 (HANA)/ \___Port 23 bonded ISL
.----------------------,24___/ /
MGMT| G8124 Switch |23________/
‘----------------------’
1,3,5,7,9,11,13,15..21 (GPFS)

Figure 13: IBM G8124 RackSwitch


Figure 14: IBM G8124 RackSwitch schema

This guide defines the IBM GPFS network to be used as 192.168.10.0/24 and the SAP HANA network
to be used as 192.168.20.0/24. If the customer wants to use a different IP range he may do so, but it
should be used consistently as the internal (private) network within this guide.

Switch Port VLAN IP Address Hostname Server NIC


g8124-1 MGMT-b 4095 <customer-mgmt IP1> <switch1> n/a
g8124-1 1 100 192.168.10.101 gpfsnode01 bond0
g8124-1 2 200 192.168.20.101 hananode01 bond1
g8124-1 3 100 192.168.10.102 gpfsnode02 bond0
g8124-1 4 200 192.168.20.102 hananode02 bond1
g8124-1 5 100 192.168.10.103 gpfsnode03 bond0
g8124-1 6 200 192.168.20.103 hananode03 bond1
.. .. .. .. .. ..
. . . . . .
g8124-1 21 100 192.168.10.111 gpfsnode11 bond0
g8124-1 22 200 192.168.20.111 hananode11 bond1
g8124-2 MGMT-b 4095 <customer-mgmt IP2> <switch2> n/a
g8124-2 1 100 192.168.10.101 gpfsnode01 bond0
g8124-2 2 200 192.168.20.101 hananode01 bond1
g8124-2 3 100 192.168.10.102 gpfsnode02 bond0
g8124-2 4 200 192.168.20.102 hananode02 bond1
g8124-2 5 100 192.168.10.103 gpfsnode03 bond0
g8124-2 6 200 192.168.20.103 hananode03 bond1
.. .. .. .. .. ..
. . . . . .
g8124-2 21 100 192.168.10.111 gpfsnode11 bond0
g8124-2 22 200 192.168.20.111 hananode11 bond1

Table 18: G8124 RackSwitch port assignments

5.6.4 Administrative, SAP-Access and Backup Networks – Option G8052 RackSwitch 1Gbit

The G8052 RackSwitch 1Gbit Ethernet switch is mainly used for the administrative networks. It can
also be used for SAP access, backup, or other client-specific networks. These networks are both
public and private and need to be carefully separated with VLANs. The landscape is as follows:
2,4,6,8,10,12,14,...48
52.----------------------,50______
51| G8052 Switch |49______\__ Inter-Switch 1Gb Link (ISL)
‘----------------------’ \_\_____Port 50 bonded ISL
1,3,5,7,9,11,13,....47 (IMM) / \
2,4,6,8,10,12,14,...48 / \___Port 49 bonded ISL
52.----------------------,50_____/ /
51| G8052 Switch |49__________/
‘----------------------’
1,3,5,7,9,11,13,....47 (IMM)


Figure 15: G8052 RackSwitch

Figure 16: G8052 RackSwitch schema

This guide defines the Integrated Management Module (IMM) network to be 192.168.30.0/24. If the
customer wants to use a different IP range for the Integrated Management Module (IMM) he may do so,
but it should be used consistently within this guide.

Switch Port VLAN IP Address Hostname Server NIC


g8052-1 52 4092 <customer-mgmt IP1> <switch1> n/a
g8052-1 1 300 192.168.30.101 cust-imm01.site.net sys-mgmt
g8052-1 3 300 192.168.30.102 cust-imm02.site.net sys-mgmt
.. .. .. .. ..
g8052-1 . . . . .
g8052-1 47 300 192.168.30.124 cust-imm24.site.net sys-mgmt
g8052-2 52 4092 <customer-mgmt IP2> <switch2> n/a
g8052-2 1 300 192.168.30.125 cust-imm25.site.net sys-mgmt
g8052-2 3 300 192.168.30.126 cust-imm26.site.net sys-mgmt
.. .. .. .. ..
g8052-2 . . . . .
g8052-2 47 300 192.168.30.148 cust-imm48.site.net sys-mgmt

Table 19: G8052 RackSwitch port assignments

5.6.5 Network Configurations in a Clustered Environment

The networking in the clustered environment is the cornerstone of the Lenovo Solution. Therefore it is
important to ensure that the network (switches, cabling, etc.) has been set up before starting the
installation of the servers. Below is one example of how to connect the customer's network infrastructure
with the clustered environment, see figure 17.
Please read section 5.7: Setting up the Switches on page 29 for the RackSwitch setup.


Figure 17: Cluster Node Network Diagram

5.7 Setting up the Switches

5.7.1 Basic Switch Configuration Setup

5.7.1.1 Configuring SSH/SCP Features on the Switch SSH and SCP features are disabled by
default. To change the SSH/SCP settings, use the following procedure. Connect to the switch via a serial
console and execute the following commands:
RS 8XXX> enable
RS 8XXX# configure terminal
RS 8XXX(config)# ssh enable
RS 8XXX(config)# ssh scp-enable
RS 8XXX(config)# interface ip 128
RS 8XXX(config-ip-if)# ip address <customer-mgmt IP> <customer-subnetmask>
RS 8XXX(config-ip-if)# enable
RS 8XXX(config-ip-if)# exit
Example: Configuring gateway
RS 8XXX(config)# ip gateway 4 address <customer-gateway>
RS 8XXX(config)# ip gateway 4 enable
Save changes to switch FLASH memory
RS 8XXX# copy running-config startup-config


5.7.1.2 Simple Network Management Protocol Version 3 SNMP version 3 (SNMPv3) is an


enhanced version of the Simple Network Management Protocol, approved by the Internet Engineering
Steering Group in March, 2002. SNMPv3 contains additional security and authentication features that
provide data origin authentication, data integrity checks, timeliness indicators and encryption to protect
against threats such as masquerade, modification of information, message stream modification and dis-
closure. SNMPv3 allows clients to query the MIBs securely. SNMPv3 configuration is managed using
the following command path menu:
RS 8XXX(config)# snmp-server ?
The default configuration of N/OS has two SNMPv3 users by default. Both of the following users have
access to all the MIBs supported by the switch:
• User name is adminmd5 (password adminmd5). Authentication used is MD5
• User name is adminsha (password adminsha). Authentication used is SHA
You can try to connect to the switch using the following command.
# snmpwalk -v 3 -c Public -u adminmd5 -a md5 -A adminmd5 -x des -X adminmd5 -l authPriv
<hostname> sysDescr.0

5.7.2 Advanced Setup of the Switches

For every switch in the cluster do the following:
It is mandatory to set up a Virtual Link Aggregation Group (VLAG) between the switches as well as a
Virtual Local Area Network (VLAN) for each private network. The following illustration shows the setup
for an M-sized cluster using the G8264 RackSwitches.

Figure 18: Cluster Switch Networking Example (G8264 #1 with management IP 192.168.255.253/24 and
G8264 #2 with management IP 192.168.255.252/24, both on VLAN 4095; ISL on VLAN 4094, Tier-ID 10,
ports 1 and 5)

Note
Please make sure that you pick the same port of each of the two Mellanox adapters for each
of the internal networks. This reduces complexity.


Note
The management IP addresses are examples and need to be customized according to the
customer’s network.
These instructions are for RackSwitch N/OS Version 7.6. Newer versions may have different commands.
Please check the RackSwitch Industry-Standard CLI Reference for the version of the CLI that correlates
to the switch N/OS version.

5.7.3 Disable Spanning Tree Protocol

RS 8XXX (config)# spanning-tree mode disable


RS 8XXX (config)# no spanning-tree stg-auto

Note
Spanning-Tree is disabled globally with "spanning-tree mode disable". The setting "no
spanning-tree stg-auto" prevents the switch from automatically creating STG groups when
defining VLANs.

5.7.4 Disable Default IP Address

RS 8XXX (config)# no system default-ip data

5.7.5 Enable L4Port Hash

RS 8264 (config)# portchannel thash l4port

5.7.6 Disable Routing

RS 8XXX (config)# no ip routing

5.7.7 Add Networking

For each subnetwork, you should create the following VLANs and Trunk VLAG configurations as described.

5.7.8 VLAN configurations

5.7.8.1 IBM GPFS Storage Network


• Create IP interface for the GPFS storage network
# Define Switch 1,2
RS 8XXX (config)# vlan 100
RS 8XXX (config)# interface ip 10
# next line for the 1st switch:
RS 8XXX (config-ip-if)# ip address 192.168.10.249 255.255.255.0
# next line for the 2nd switch:
RS 8XXX (config-ip-if)# ip address 192.168.10.248 255.255.255.0
RS 8XXX (config-ip-if)# vlan 100
RS 8XXX (config-ip-if)# enable


RS 8XXX (config-ip-if)# exit


• Define LACP Trunk for each VLAN
# Define on Switches 1,2
# RS 8264 ports 9-63, odd (bottom) ports
# RS 8124 ports 1-21, odd ports
RS 8XXX (config)# interface port <port>
RS 8XXX (config-if)# switchport access vlan 100
RS 8XXX (config-if)# lacp mode active
RS 8XXX (config-if)# lacp key 1000+<port>
RS 8XXX (config-if)# bpdu-guard
RS 8XXX (config-if)# spanning-tree portfast
RS 8XXX (config-if)# exit
Repeat this for every port that needs to be configured.
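As a concrete illustration (values derived from Table 17 and the template above; the port number and the
LACP key, which is 1000 plus the port number, must be adapted for each port), the commands for GPFS
port 17 would look like this:
RS 8264 (config)# interface port 17
RS 8264 (config-if)# switchport access vlan 100
RS 8264 (config-if)# lacp mode active
RS 8264 (config-if)# lacp key 1017
RS 8264 (config-if)# bpdu-guard
RS 8264 (config-if)# spanning-tree portfast
RS 8264 (config-if)# exit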

5.7.8.2 SAP HANA Network


• Create IP interface for the HANA network
# Define Switch 1, 2
RS 8XXX (config)# vlan 200
RS 8XXX (config)# interface ip 20
# next line for the 1st switch:
RS 8XXX (config-ip-if)# ip address 192.168.20.249 255.255.255.0
# next line for the 2nd switch:
RS 8XXX (config-ip-if)# ip address 192.168.20.248 255.255.255.0
RS 8XXX (config-ip-if)# vlan 200
RS 8XXX (config-ip-if)# enable
RS 8XXX (config-ip-if)# exit
• Define LACP Trunk for each VLAN
# Define on Switches 1,2
# RS 8264 ports 10-64, even (top) ports
# RS 8124 ports 2-22, even ports
RS 8XXX (config)# interface port <port>
RS 8XXX (config-if)# switchport access vlan 200
RS 8XXX (config-if)# lacp mode active
RS 8XXX (config-if)# lacp key 1000+<port>
RS 8XXX (config-if)# bpdu-guard
RS 8XXX (config-if)# spanning-tree portfast
RS 8XXX (config-if)# exit
Repeat this for every port that needs to be configured.

5.7.8.3 Integrated Management Module (IMM) Network


• Create IP interface for the IMM network
# Define Switch 1,2
RS 8052 (config)# vlan 300
RS 8052 (config)# interface ip 30
# next line for the 1st switch:
RS 8052 (config-ip-if)# ip address 192.168.30.249 255.255.255.0
# next line for the 2nd switch:


RS 8052 (config-ip-if)# ip address 192.168.30.248 255.255.255.0


RS 8052 (config-ip-if)# vlan 300
RS 8052 (config-ip-if)# enable
RS 8052 (config-ip-if)# exit
• Set access VLAN for switchports
# Define Switch 1,2
# RS 8052 ports 1-47
RS 8052 (config)# interface port <port>
RS 8052 (config-if)# switchport access vlan 300
RS 8052 (config-if)# bpdu-guard
RS 8052 (config-if)# exit
# RS 8052 port 48 as management port
RS 8052 (config)# interface port 48
RS 8052 (config-if)# description MGMTPort
RS 8052 (config-if)# switchport access vlan 4092
RS 8052 (config-if)# bpdu-guard
RS 8052 (config-if)# exit

5.7.8.4 Enabling VLAG Setup


• Create trunk (dynamic or static) used as ISL
# one of the next three lines is valid according to the switch type
RS 8264 (config)# interface port 1,5
RS 8124 (config)# interface port 23,24
RS 8052 (config)# interface port 49,50
RS 8XXX (config-if)# switchport mode trunk
# next line defines the VLANs needed on the ISL on the HANA/GPFS-switches
RS 8264 (config-if)# switchport trunk allowed vlan add 4094,[HANA VLAN(S),GPFS VLAN(S)]
# next line defines the VLANs needed for the ISL on the IMM-switches
RS 8052 (config-if)# switchport trunk allowed vlan add 4094,[IMM VLAN(S)]
RS 8XXX (config-if)# lacp mode active
RS 8XXX (config-if)# lacp key 5094
RS 8XXX (config-if)# enable
RS 8XXX (config-if)# exit
RS 8XXX (config)# vlag enable
• Define VLAG peer relationship for each VLAN
# Define Switch 1
RS 8XXX (config)# vlag tier-id 10
RS 8XXX (config)# vlag hlthchk peer-ip <customer-mgmt IP2>
RS 8XXX (config)# vlag isl adminkey 5094
# For each <VLAN port> in <VLAN ports>
RS 8XXX (config)# vlag adminkey 1000+<VLAN port> enable

# Define Switch 2
RS 8XXX (config)# vlag tier-id 10
RS 8XXX (config)# vlag hlthchk peer-ip <customer-mgmt IP>
RS 8XXX (config)# vlag isl adminkey 5094
# For each <VLAN port> in <VLAN ports>
RS 8XXX (config)# vlag adminkey 1000+<VLAN port> enable


5.7.9 Save changes to switch FLASH memory

RS 8XXX# copy running-config startup-config

5.8 Inter-Site Portchannel Configuration

In a stretched HA or DR scenario an inter-site port channel needs to be configured. The inter-site port
channel configuration depends on the customer premise equipment and infrastructure. This chapter
describes various options for how this configuration can be implemented. The following examples are based
on the G8264 port layout. For the supported G8124 solution, depending on the connection type, switch
port 22, or ports 21-22 respectively, should be used. If the port channel configuration is needed for a stretched
HA setup, both the HANA and the GPFS VLANs have to be enabled on the trunk interfaces. If the port
channel trunk is for a DR setup, only the GPFS VLANs have to be enabled on the trunk interfaces.

5.8.1 Static Trunk over one Inter-Site Link

If there is just a single site-interconnect available - as shown in the drawing below - the following
configuration has to be applied to the switches to establish a static inter-site connection.

Single Inter-Site Link


.------------------------------------------------.
| |
HANA 18,20,22,24,26,28...64 HANA 18,20,22,24,26,28...64
.----------------------,5_____ .----------------------,5_____
MGMT| G8264 Switch 1a |1_____\ MGMT | G8264 Switch 2a |1_____\
‘----------------------’ \ ‘----------------------’ \
GPFS 17,19,21,23,25,27...63 / ISL GPFS 17,19,21,23,25,27...63 / ISL
HANA 18,20,22,24,26,28...64 / HANA 18,20,22,24,26,28...64 /
.----------------------,5____/ .----------------------,5____/
MGMT| G8264 Switch 1b |1___/ MGMT | G8264 Switch 2b |1___/
‘----------------------’ ‘----------------------’
GPFS 17,19,21,23,25,27...63 GPFS 17,19,21,23,25,27...63

• Switchport Portchannel Configuration


# Define Switch 1a,2a
# RS 8264 port 64
# RS 8124 port 22
RS 8XXX (config)# interface port <port>
RS 8XXX (config-if)# switchport mode trunk
# The next 2 configuration statements are valid in case of a stretched HA solution. In a
# stretched HA scenario HANA and GPFS VLANs must be enabled on the trunk interface.
RS 8XXX (config-if)# switchport trunk allowed vlan [GPFS VLAN,HANA VLAN]
RS 8XXX (config-if)# switchport trunk native vlan [GPFS VLAN,HANA VLAN]
# The next 2 configuration statements are valid in case of DR solution. Only GPFS VLAN
# must be enabled on the trunk interface in a DR scenario.
RS 8XXX (config-if)# switchport trunk allowed vlan [GPFS VLAN]
RS 8XXX (config-if)# switchport trunk native vlan [GPFS VLAN]
RS 8XXX (config-if)# exit


5.8.2 Portchannel over two Inter-Site Links

If there are two site-interconnect fibres - as shown in the drawing below - the cables should be distributed
across the two switches (one on each switch), instead of connecting both to just one switch. The following
configuration has to be applied to the switches to establish one logical static inter-site connection over 2 cables.

Redundant Inter-Site Link (one on each switch)


.------------------------------------------------.
| |
HANA 18,20,22,24,26,28...64 HANA 18,20,22,24,26,28...64
HANA 18,20,22,24,26,28...64 HANA 18,20,22,24,26,28...64
.----------------------,5_____ .----------------------,5_____
MGMT| G8264 Switch 1a |1_____\ MGMT | G8264 Switch 2a |1_____\
‘----------------------’ \ ‘----------------------’ \
GPFS 17,19,21,23,25,27...63 / ISL GPFS 17,19,21,23,25,27...63 / ISL
HANA 18,20,22,24,26,28...64 / HANA 18,20,22,24,26,28...64 /
.----------------------,5____/ .----------------------,5____/
MGMT| G8264 Switch 1b |1___/ MGMT | G8264 Switch 2b |1___/
‘----------------------’ ‘----------------------’
GPFS 17,19,21,23,25,27...63(64) GPFS 17,19,21,23,25,27...63(64)
| |
‘-----------------------------------------------’
Redundant Inter-Site Link (one on each switch)
• Switchport Portchannel Configuration
# Define Switch 1a,2a,1b,2b
# RS 8264 port 64
# RS 8124 port 22
RS 8XXX (config)# interface port <port>
RS 8XXX (config-if)# switchport mode trunk
# The next 2 configuration statements are valid in case of a stretched HA solution. In a
# stretched HA scenario HANA and GPFS VLANs must be enabled on the trunk interface.
RS 8XXX (config-if)# switchport trunk allowed vlan [GPFS VLAN,HANA VLAN]
RS 8XXX (config-if)# switchport trunk native vlan [GPFS VLAN,HANA VLAN]
# The next 2 configuration statements are valid in case of DR solution. Only GPFS VLAN
# must be enabled on the trunk interface in a DR scenario.
RS 8XXX (config-if)# switchport trunk allowed vlan [GPFS VLAN]
RS 8XXX (config-if)# switchport trunk native vlan [GPFS VLAN]
RS 8XXX (config-if)# exit
RS 8XXX (config)# portchannel 63 port <port>
RS 8XXX (config)# portchannel 63 enable
RS 8XXX (config)# vlag portchannel 63 enable


5.8.3 Portchannel over four Inter-Site Links

If there are four site-interconnect fibres - as shown in the drawing below - two of them should be
connected to ports 63 and 64 on each switch. The following configuration has to be applied to the
switches to establish one logical static inter-site connection over 4 cables.
Portchannel over four inter-site links (two on each switch)
.------------------------------------------------.
| .--------------------------------------------+---.
| | | |
HANA 18,20,22,24,26,28...64(+63) HANA 18,20,22,24,26,28...64(+63)
.----------------------,5_____ .----------------------,5_____
MGMT| G8264 Switch 1a |1_____\ MGMT | G8264 Switch 2a |1_____\
‘----------------------’ \ ‘----------------------’ \
GPFS 17,19,21,23,25,27...63 / ISL GPFS 17,19,21,23,25,27...63 / ISL
HANA 18,20,22,24,26,28...64 / HANA 18,20,22,24,26,28...64 /
.----------------------,5____/ .----------------------,5____/
MGMT| G8264 Switch 1b |1___/ MGMT | G8264 Switch 2b |1___/
‘----------------------’ ‘----------------------’
GPFS 17,19,21,23,25,27...63(+64) GPFS 17,19,21,23,25,27...63(+64)
| | | |
| ‘--------------------------------------------+---’
‘------------------------------------------------’
Portchannel over four inter-site links (two on each switch)

• Switchport Portchannel Configuration


# Define Switch 1a,1b,2a,2b
# RS 8264 port 63,64
# RS 8124 port 21,22
RS 8XXX (config)# interface port <port>
RS 8XXX (config-if)# switchport mode trunk
# The next 2 configuration statements are valid in case of a stretched HA solution. In a
# stretched HA scenario HANA and GPFS VLANs must be enabled on the trunk interface.
RS 8XXX (config-if)# switchport trunk allowed vlan [GPFS VLAN,HANA VLAN]
RS 8XXX (config-if)# switchport trunk native vlan [GPFS VLAN,HANA VLAN]
# The next 2 configuration statements are valid in case of DR solution. Only GPFS VLAN
# must be enabled on the trunk interface in a DR scenario.
RS 8XXX (config-if)# switchport trunk allowed vlan [GPFS VLAN]
RS 8XXX (config-if)# switchport trunk native vlan [GPFS VLAN]
RS 8XXX (config-if)# exit
RS 8XXX (config)# portchannel 63 port <port>
RS 8XXX (config)# portchannel 63 port <port>
RS 8XXX (config)# portchannel 63 enable
RS 8XXX (config)# vlag portchannel 63 enable

5.8.4 Save and Restore Switch Configuration

5.8.4.1 Save Switch Configuration Locally Execute:


# scp admin@switch.example.com:getcfg .

5.8.4.2 Restore Switch Configuration Execute:


# scp getcfg admin@switch.example.com:putcfg
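A minimal sketch of a save and restore round trip using the getcfg/putcfg pseudo-files shown above; the switch
host names are placeholders for the customer's switch management addresses, and after a restore the configuration
should still be written to flash as described in chapter 5.7.9:

# scp admin@switch-1a.example.com:getcfg ./switch-1a.cfg
# scp admin@switch-1b.example.com:getcfg ./switch-1b.cfg
# scp ./switch-1a.cfg admin@switch-1a.example.com:putcfg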

5.9 Automated Deployment of Switch Configurations

From N/OS firmware level 7.7.5 onwards the script SwitchAutoConfig.sh can be used to create and
deploy configurations via ssh for the switch models G8124 and G8264. For the switch model G8052,
the configurations can only be created, but not deployed via ssh. For the G8052 we recommend copying
and pasting the created configuration into the serial console.
SwitchAutoConfig.sh can be found in /opt/ibm/saphana/bin/.
As a prerequisite for SwitchAutoConfig.sh, the switches must have the basic configuration applied as described
in chapter 5.7.1: Basic Switch Configuration Setup on page 29, and must be reachable via ssh over the network.
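A quick way to verify this prerequisite before running the script is sketched below; the addresses are placeholders
for the switch management IPs, and nmap is used here only to confirm that the ssh port is open:

# for sw in 192.168.255.1 192.168.255.2; do ping -c 2 $sw; nmap -p 22 $sw; done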

5.9.1 Script Usage

./SwitchAutoConfig.sh -h

usage: ./SwitchAutoConfig.sh [-c type] [-d type] styletypes=[G8264|G8052|G8124]

-c just creates switch configurations for the chosen switch type.


-d creates and also deploys the switch configurations for the chosen switch type

Example: SwitchAutoConfig.sh -d G8264

5.9.2 Examples

The following command will create the configurations for a G8264 switch pair. You will be asked to enter
configuration details like IP addresses.
./SwitchAutoConfig.sh -c G8264
The following command will create and deploy the configurations for a G8264 switch pair. After the
configuration part you have to enter the ssh password of the switches, twice per switch: the first time the
script checks the firmware version of the switches, the second time the password is needed for the deployment
process.
./SwitchAutoConfig.sh -d G8264

Attention
Please be very careful if you create the configuration for a switch connected to the customer
network. In this case make sure that the switch is disconnected during the setup. Only bring
up the connection to the customer network once the configuration is complete and matches
the customer requirements.
After the configuration deployment the switches should be checked manually. Afterwards
the configuration can be saved as described in chapter 5.7.9: Save changes to switch FLASH
memory on page 34.

5.9.3 Input Values

All the default values are based on the Networking Guide standards, but can be changed if needed. Most
input values like hostname or IP address need to be provided by the customer. Portchannel is only needed
in case of a DR or HA cluster. If a portchannel is to be configured, the script will ask for the type of
port channel to configure. There are two port channel options: HA or DR. The GPFS, HANA, xCAT
and IMM VLAN IPs are IPs that reside within those VLANs. Their purpose is to be able to ping
server addresses within these VLANs from the switch. For the G8052 the script will ask for a MGMT
port, because the G8052 has no dedicated management port.

5.10 Known Issues and Bugs

Attention
Read the following list of issues carefully since they can lead to degrading network performance
or – even worse – to the loss of network connections, automatic IBM GPFS unmounts, and
crashing SAP HANA databases.
• There is a known error in the firmware levels 7.6.1 and 7.6.4 for the RackSwitch G8264 that could
cause a switch to crash. Under certain circumstances, if ssh commands have been issued prior to
the putcfg command, the putcfg command itself will cause the switch to crash. The only option to
recover from this crash is a power cycle of the switch. If putcfg is used to distribute a prepared
switch configuration or to restore a prior configuration, a serial connection should be opened to the
switch to be able to monitor the switch health via console logging output. This bug has been fixed
with N/OS firmware level 7.7.5.
• It has been seen in the field that massive use of SNMP can lead to 100% CPU utilization of the
RackSwitch G8264 when using N/OS firmware level 7.6.1. This leads to ISL problems, which then
cause the ISL-connected switches to shut down switch ports. The follow-on effect is that the IBM
GPFS cluster shuts down due to missing communication and the SAP HANA application then
crashes. Therefore, SNMP should be used wisely. This bug has been fixed with N/OS firmware
level 7.7.5.
• If an ssh connection to a switch is not working and terminates with the error 'Write failed: Broken
pipe', please use:
1 ssh -c aes128-ctr -m hmac-sha1 admin@192.168.20.11

instead. On some versions of the switch firmware, there are problems with the number of algorithms
that are offered by openssl.
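If this workaround is needed regularly, the restricted cipher and MAC can also be pinned per host in the ssh
client configuration. A sketch, using the example switch address from above, for the ~/.ssh/config file of the
administration host:

Host 192.168.20.11
    Ciphers aes128-ctr
    MACs hmac-sha1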


6 Guided Install of the Lenovo Solution


This section describes the installation and configuration of HANA on SUSE Linux Enterprise Server for
SAP Applications 11 SP3 and HANA on Red Hat Enterprise Linux 6.5. Subsections that only apply to
one of these operating systems are marked accordingly. This section can be applied starting from the
non-OS component DVD version 1.8.80-10.
The software installation and configuration is executed at the customer site. This includes networking
customization, IBM GPFS cluster setup and SAP HANA installation. It does not include the connection
and replication to SAP Business Suite back end systems (such as ERP or BW).

Attention
When using SLES for SAP or RHEL with SAP HANA revision 80, a Linux compatibility
pack must be installed. SAP has documented this in SAP Note 2001528 – Linux: SAP HANA
Database SPS 08 revision 80 (or higher) on RHEL 6 or SLES 11.

Note
It is highly recommended to check the system setup and software versions of installed components
after the complete installation process. See section 14.2: Basic System Check on page 167 for how
to achieve this.

Phase   Actions
1       OS installation
        Reboot
2       OS, network configuration
        Reboot
3       RAID, GPFS configuration & installation
        HANA configuration & installation

Table 20: Installation Process and Phases

Guided Installation Instructions for Single Node Installations:


1. Certified hardware ordered and available: Chapter 4: Hardware Configurations on page 9
2. Preparation: Chapter 6.1: Preparation on page 40
3. Acquiring TCP/IP addresses and host names: Section 5.3: Network Configuration on page 22
4. Phase 1: Installation of the operating system: Section 6.2: Phase 1 on page 46
5. Phase 2: Installing IBM GPFS and OS configuration: Section 6.3: Phase 2 – SLES for SAP on
page 51, or 6.4: Phase 2 – RHEL on page 56
6. Interim system check: Section 6.5: Interim Check on page 57
7. Phase 3: Installing SAP HANA and final configuration: Section 6.6: Phase 3 on page 59
8. Final system check: Chapter 14: System Check and Support on page 167

Guided Installation Instructions for Clustered Nodes Installations:


1. Certified hardware ordered and available: Chapter 4: Hardware Configurations on page 9
2. Preparation: Chapter 6.1: Preparation on page 40


3. Acquiring TCP/IP addresses and host names: Section 5.3: Network Configuration on page 22
4. Cluster network switch setup: Section 5.6.5: Network Configurations in a Clustered Environment
on page 28
5. Phase 1: Installation of the operating system: Section 6.2: Phase 1 on page 46
6. Phase 2: Installing IBM GPFS and OS configuration: Section 6.3: Phase 2 – SLES for SAP on
page 51, or 6.4: Phase 2 – RHEL on page 56
7. Interim system check: Section 6.5: Interim Check on page 57
8. Phase 3: Installing SAP HANA and final configuration: Section 6.6: Phase 3 on page 59
9. Final system check: Chapter 14: System Check and Support on page 167

6.1 Preparation

As you might not be able to access online documentation at the customer site, please make yourself
familiar with the following links and downloads before arriving, so that all useful information is at hand.
Please note that these documents in turn might reference other documentation not mentioned here,
so you would need to obtain this as well. We highly recommend the SAP HANA Installation Guides as well
as the SAP HANA TOC Manual.

Experience SAP HANA http://experiencesaphana.com


SAP Service Marketplace https://service.sap.com/hana*
SAP Help Portal – SAP HANA http://help.sap.com/hana_appliance
SAP HANA 1.0: Central Note https://service.sap.com/sap/support/notes/1514967*
SAP HANA Sizing Guide https://service.sap.com/sap/support/notes/1514966*
Release Restrictions Note https://service.sap.com/sap/support/notes/1513496*

Table 21: SAP HANA references


* SAP Service Marketplace ID required

Depending on the customer’s operation guidelines it might be necessary to prepare the customer infras-
tructure beforehand so that the HANA appliance can be integrated in a smooth and timely manner.
What follows are a few tips we have collected while talking with SAP.

6.1.1 Firewall Preparations

If the customer has firewalls running between the HANA appliance and the connected components (ERP,
clients, backup & restore server, etc.), make sure that the appropriate network ports are opened. For
details on the relevant ports please refer to the SAP HANA security guide at http://help.sap.com/
hana_appliance → Security.
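Once the firewall rules are in place, reachability of a few typical SAP HANA ports can be spot-checked from a
client machine. A minimal sketch; the host name and the instance number 00 are assumptions, and the
authoritative port list is the security guide referenced above:

# 3<nn>15 is the SQL port of instance <nn>; 1128/1129 are used by the SAP Host Agent / HANA Lifecycle Manager
nmap -p 30015,1128,1129 saphana01.example.com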

6.1.2 Lenovo Systems solution for SAP HANA Additional Software Stack

The customer needs to have the "Non OS content for Lenovo Systems solution for SAP HANA appliance
additional software stack" before the service person arrives. A DVD should have arrived with every
system. For legal reasons it is not possible to download the DVD from the Internet. In case a
customer has lost the DVD, or did not receive one, it needs to be ordered directly from Lenovo. To
do this, please direct the customer to contact Lenovo support and provide the part number (p/n) of the
latest version from the table below. The other numbers are listed here for reference.


Part Number   Description                            Remarks          Supported OS
00KG299       SAP HANA FRU Pkg v. 1.8.80-12 for X6   latest version   SLES for SAP 11 SP3, RHEL 6.5

Previous versions are not covered by this document.

Table 22: DVD Part Numbers

Warning
Please note that starting with the IBM Image SAP HANA FRU Pkg v. 1.8.80–10 (p/n
00KC236) only SLES for SAP 11 SP3, and RHEL 6.5 are supported for X6 servers.

6.1.3 Software, Firmware and Drivers

The software, firmware, and driver versions of the System x servers should either be at the exact level
given here or above it where indicated. For details please refer to table 23. The versions listed in that
table have been certified with SAP. If an upgrade to a higher version is supported without consulting
Lenovo/SAP, this is indicated with a 3. Updates that require a statement from Lenovo or SAP before
upgrading are indicated with b. Certain firmware levels have been declared static; an upgrade to a
higher version is not supported, which is indicated with 7.
If unsure, you should first contact SAP Support (via the SAP OSS system) with a direct question
regarding the latest drivers and their support. For items that can be obtained via the IBM Support
Portal (www.ibm.com/support), we suggest upgrading only critical updates. This can be done without
contacting Lenovo/SAP. All other updates are not allowed unless explicitly identified below.

Attention
Mandatory kernel update after installation on SLES for SAP 11 SP3. At the time
this document is created, kernel version 3.0.101-0.8.1, or higher, is mandatory for use with
SAP HANA.

Attention
Mandatory update of the GNU C Library is required after installation when in-
stalling SAP HANA Database revision 80 or higher. Please update your distribution's
GNU C Library to a version glibc-2.11.3-17.56.2 or higher, see SAP Note 1888072 – SAP
HANA DB: Indexserver crash in __strcmp_sse42 for details.


SLES – OS Software and Drivers

Component                                                Version
Recommended SLES for SAP Applications 11 SP3 kernel      3.0.101-0.8.1* or higher
The GNU C Library (glibc)***                             2.11.3-17.56.2 or higher
GCC runtime environment (gcc47-runtime)**                4.7.2_20130108-0.17.2 or higher
SLES for SAP Applications 11 SP3 software and drivers    Updates within SP3 as allowed by SAP 3

RHEL – OS Software and Drivers

Component                                                Version
RHEL 6.5 kernel                                          2.6.32-431* 3
GCC runtime environment (compat-sap-c++)**               4.7.2-103
OpenSSL – important security fix (Heartbleed)            openssl-1.0.1e-16.el6_5.7 3
RHEL 6.5 software and drivers                            Updates within version 6.5 as allowed by SAP 3

Misc. Software, Firmware and Drivers

Component                                                Version
IBM General Parallel File System (GPFS)                  Recommended: 4.1.0-2 or higher 3
IBM ServeRAID M5120 Controller Firmware                  FW Package Build: 23.22.0-0024
(for external expansion unit)                            FW Version: 3.340.75-3372b
IBM ServeRAID M5225 Controller Firmware                  FW Package Build: 24.2.1-0052
(for external expansion unit)                            FW Version: 4.220.120-3749b
IBM ServeRAID M5210 Controller Firmware                  FW Package Build: 24.0.2-0013
(for internal disks)                                     FW Version: 4.200.20-2839b

System x3850 X6 Specific Firmware

Component                                                Version
Integrated Management Module (IMM)                       1AOO52Z3
UEFI (FW/BIOS) Flash                                     A8E104T3
DSA                                                      DSYTD2Y3

System x3950 X6 Specific Firmware

Component                                                Version
Integrated Management Module (IMM)                       1AOO58I3
UEFI (FW/BIOS) Flash                                     A8E108L3
DSA                                                      DSYTD2Y3

IBM Systems Networking N/OS

Component                                                Version
IBM RackSwitch G8052                                     N/OS 7.7.5 or higher 3
IBM RackSwitch G8124                                     N/OS 7.7.5 or higher 3
IBM RackSwitch G8264                                     N/OS 7.7.5 or higher 3

Table 23: Supported Firmware, Software and Driver Levels


* Updating the kernel requires recompiling the GPFS drivers; see the Lenovo Operations Guide for SAP HANA
appliance for further details.
** An update of the GCC runtime environment is necessary for installations of SAP HANA SPS08 (revision 80) or
higher. See SAP Note 2001528 – Linux: SAP HANA Database SPS 08 revision 80 (or higher) on RHEL 6 or SLES 11
for further details.
*** The GNU C Library (glibc) must be updated for SLES for SAP 11. There is a bug in the Linux glibc library
that leads to an invalid memory access. See SAP Note 1888072 – SAP HANA DB: Indexserver crash in __strcmp_sse42
for further details.

Note
UEFI and IMM firmware levels should always be updated in parallel to avoid possible con-
tention problems between the two.


Note
When installing or performing upgrades, the operator should be prepared to expect multiple
reboots. Please refer to chapter 12.2: Reboot Behavior on page 145.

Warning
Do not downgrade existing firmware levels unless otherwise explicitly recommended to do
so by Lenovo.

6.1.4 Card Placement

Attention
You may need to change the card placement. The machine coming from the factory may have
a different card layout than we require. Please refer to section 4.4: Card Placement on page
15 for the assignment of cards to slots. This step must be done before the installation. Please
be aware that your machine is only supported by Lenovo with the correct card layout.

6.1.5 Hardware UEFI Configuration

These steps are necessary before the operating system can be installed. When the system comes from
Lenovo, it should already be set to the settings listed, but after UEFI firmware updates it may happen
that these parameters are reset. Follow the instructions below on how to configure the server's UEFI
parameters correctly for use with the SAP HANA appliance.
In this step, please also check the power policy settings as described in chapter 12.1: Power Policy
Configuration on page 145.

6.1.5.1 Obtaining web interface access for IMM To access the web interface of the IMM and use
the remote presence feature, you need the IP address for the IMM. You can modify the IMM IP address
through the UEFI Setup utility. To locate or change the IP address, complete the following steps:
1. Turn on the server.
2. When the prompt <F1> Setup is displayed, press F1 .
3. From the setup utility main menu, select
System Settings Integrated Management Module Network Configuration .

4. Obtain or change the network settings (IP address, host name, subnet mask, gateway).
5. Save network settings, confirm to restart IMM.
6. Choose Esc to get back to main menu.

6.1.5.2 Feature on Demand Activation To be able to configure the RAID adapters correctly some
Feature on Demand (FoD) keys need to be activated. It is possible that they are already activated when
shipped.
• ServeRAID M5100/M5200 Series Performance Key for Lenovo System x
• ServeRAID M5100/M5200 Series SSD Caching Enabler for Lenovo System x
• (optional, only if RAID6 is required by customer) ServeRAID M5100/M5200 Series RAID 6 Upgrade
for Lenovo System x (RAID6 can only be configured on external M5120 RAID adapters.)


The necessary documentation was shipped with the servers to the customer. You can activate the FoDs
via IMM: After the login go to IMM Management Activation Key Management .

Note
We recommend that the customer keeps a backup of the Feature on Demand keys.

6.1.5.3 General Performance-optimized Settings for SAP HANA


1. In the UEFI/BIOS select Load Default Settings .
2. Select System Settings Operating Modes .

3. Choose Operating Mode Custom Mode .

4. Choose C1 Enhance Mode Disable .

5. Choose Power/Performance Bias Platform Controlled .

6. Choose Platform Controlled Type Maximum Performance .

7. Press Esc twice.


8. Select Save Settings and press Enter.
9. Select System Settings Power .

10. Choose Workload Configuration: I/O sensitive .


11. Press Esc twice.
12. Select Save Settings and press Enter.
Please check and set the settings in UEFI according to the following tables. A sketch of applying these settings with the ASU tool is shown after table 27.
Section Operation Modes

Setting Value ASU tool setting


Choose Operating Mode Custom Mode OperatingModes.ChooseOperatingMode
Memory Speed Max Performance Memory.MemorySpeed
Memory Power Management Automatic Memory.MemoryPowerManagement
Proc Performance States Enable Processors.ProcessorPerformanceStates
C1 Enhance Mode Disable Processors.C1EnhancedMode
QPI Link Frequency Max Performance Processors.QPILinkFrequency
Turbo Mode Enable Processors.TurboMode
CPU States Enable Processors.C-States
Power/Performance Bias Platform Controlled Power.PowerPerformanceBias
Platform Controlled Type Max Performance Power.PlatformControlledType

Table 24: Required Operation Modes UEFI settings

Section Processors


Setting Value ASU tool setting


Turbo Mode Enable Processors.TurboMode
Processor Performance States Enable Processors.ProcessorPerformanceStates
C-States Enable Processors.C-States
Package ACPI C-State Limit ACPI C3 Processors.PackageACPIC-StateLimit
C1 Enhanced Mode Disable Processors.C1EnhancedMode
Hyper Threading Enable Processors.Hyper-Threading
Execute Disable Bit Enable Processors.ExecuteDisableBit
Intel Virtualization Technology Enable Processors.IntelVirtualizationTechnology
Enable SMX Disable Processors.EnableSMX
Hardware Prefetcher Enable Processors.HardwarePrefetcher
Adjacent Cache Prefetch Enable Processors.AdjacentCachePrefetch
DCU Streamer Prefetcher Enable Processors.DCUStreamerPrefetcher
DCU IP Prefetcher Enable Processors.DCUIPPrefetcher
Direct Cache Access (DCA) Enable Processors.DirectCacheAccessDCA
QPI Link Frequency Max Performance Processors.QPILinkFrequency

Table 25: Required Processors UEFI settings

Section Power

Setting Value ASU tool setting


Active Energy Manager Capping Disable Power.ActiveEnergyManager
Power/Performance Bias Platform Controlled Power.PowerPerformanceBias
Platform Controlled Type Max Performance Power.PlatformControlledType
Workload Configuration I/O sensitive Power.WorkloadConfiguration

Table 26: Required Power UEFI settings

Section Memory

Setting Value ASU tool setting


Memory Mode Independent Memory.MemoryMode
Memory Speed Max Performance Memory.MemorySpeed
Memory Power Management Automatic Memory.MemoryPowerManagement
Socket Interleave NUMA Memory.SocketInterleave
Memory Data Scrambling Enable Memory.MemoryDataScrambling
Patrol Scrub Enable Memory.PatrolScrub
Mirroring Disable Memory.Mirroring
Sparing Disable Memory.Sparing
RankMarginingTest Disable Memory.RankMarginingTest

Table 27: Required Memory UEFI settings
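The tables above also give the corresponding ASU tool setting names. As a sketch only, assuming the Advanced
Settings Utility (asu64) is available on the node and noting that the exact value strings may differ slightly from
the UEFI menu labels (they can be listed with asu64 showvalues), selected settings could be applied like this:

./asu64 set OperatingModes.ChooseOperatingMode "Custom Mode"
./asu64 set Processors.C1EnhancedMode Disable
./asu64 set Power.PowerPerformanceBias "Platform Controlled"
./asu64 set Power.PlatformControlledType "Max Performance"
./asu64 set Power.WorkloadConfiguration "I/O sensitive"
./asu64 showvalues Processors.TurboMode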

6.1.5.4 Disable GPT Recovery


1. Select System Settings Recovery & RAS Disk GPT Recovery Disk GPT Recovery .

2. Choose None .
3. Press Esc three times.


4. Select Save Settings and press Enter.

6.1.5.5 Boot Order The installer supports (starting from release 1.8.80-10) only the installation in
UEFI Mode. For the boot loaders used see table 28.
Note
When you reinstall a system but changed the Legacy/UEFI Mode, make sure the partition
table is cleared, either by wiping it or by recreating the RAID1 VD for the OS.

Type SLES 11 SP3 RHEL 6.5


Supported from 1.7.70-8 1.8.80-10
Boot loader ELILO Grub

Table 28: Boot options and boot loaders used

If you want to install in UEFI Mode, you do not have to change the boot order at all. The default boot
order is: CD/DVD Rom, Hard Disk 0, PXE Network. After successful installation there will be a new
entry on top of the list for the newly installed operating system.
Attention
You must not activate UEFI Secure Boot – it is disabled by default – because the installation
of GPFS and other software add-ons will fail.

6.2 Phase 1

The Lenovo Systems Solution for SAP HANA appliance is ready for an installation with the factory
provided image.

6.2.1 Storage Configuration – RAID Setup

The RAID configuration of all RAID5 and RAID6 arrays is executed by the automated installer starting
with release 1.8.80-10. The only manual step the installing person has to do is to configure the RAID1
for the OS.
The following tables are meant as an overview and a reference in case that the automated RAID config-
uration is not working properly.
Tables 29: x3850 X6 RAID Controller Configuration on page 47 and 30: x3950 X6 RAID Controller
Configuration on page 48 describe possible configurations of the RAID controllers. There are different
possible setups for the RAID controllers with different numbers of SSDs and HDDs:
• M5210 (on x3950 X6: first internal)
– 2 SSDs + 6 HDDs: 1 × RAID1 for OS, 1 × RAID5 for GPFS
• M5210 (only x3950 X6, second internal)
– 2 SSDs + 6 HDDs: 1 × RAID5 for GPFS
• M5120/M5225
– 2 SSDs + 9 HDDs: 1 × RAID5 for GPFS
– 2 SSDs + 10 HDDs: 1 × RAID6 for GPFS
– 2 SSDs + 18 HDDs: 2 × RAID5 for GPFS


– 2 SSDs + 20 HDDs: 2 × RAID6 for GPFS


– Optionally: +2 SSDs13

Controller   Models                  VD ID   Type   Physical Drives   Config                        Comment
M5210        all                     0       HDD    2                 RAID1                         VD for OS
                                     1       HDD    4                 RAID5 (3+p)                   GPFS, CacheCade enabled
                                     2       SSD    2                 RAID0†                        CacheCade of VD1
M5120/       Single node: ≥ 768GB,   0*      HDD    9 / 10            RAID5 (8+p) / RAID6 (8+2p)    GPFS, CacheCade enabled
M5225        Cluster: ≥ 512GB        1       SSD    2                 RAID0                         CacheCade of VD0

Table 29: x3850 X6 RAID Controller Configuration.


* There are different possible configurations for this VD depending on the number of SSDs/HDDs connected to
the controller.
† RAID1 for all CacheCade arrays is possible, but may require additional hardware. See section CacheCade RAID1
Configuration in the Operations Guide for X6 based models for more details.

13 Optionally:+2 SSDs for CacheCade RAID1. For details on hardware configuration and setup see Operations Guide
for X6 based models section CacheCade RAID1 Configuration


Controller   Models                      VD ID   Type   Physical Drives   Config                        Comment
1st M5210    all                         0       HDD    2                 RAID1                         VD for OS
                                         1       HDD    4                 RAID5 (3+p)                   GPFS, CacheCade enabled
                                         2       SSD    2                 RAID0†                        CacheCade for VD1
2nd M5210    Single node: ≥ 768GB,       0       HDD    6                 RAID5 (5+p)                   GPFS, CacheCade enabled
             Cluster: ≥ 512GB            1       SSD    2                 RAID0                         CacheCade for VD1
1st M5120/   Single node: ≥ 3072GB,      0*      HDD    9 / 10            RAID5 (8+p) / RAID6 (8+2p)    GPFS, CacheCade enabled
M5225        Cluster: ≥ 2048GB
             Single node: ≥ 6144GB,      1*      HDD    9 / 10            RAID5 (8+p) / RAID6 (8+2p)    GPFS, CacheCade enabled
             Cluster: ≥ 3072GB
                                         1/2**   SSD    2/4*              RAID0                         CacheCade for VD0&1
2nd M5120/   Single node: ≥ 12.288GB,    0*      HDD    9 / 10            RAID5 (8+p) / RAID6 (8+2p)    GPFS, CacheCade enabled
M5225        Cluster: ≥ 4096GB
             Single node: ≥ 12.288GB,    1*      HDD    9 / 10            RAID5 (8+p) / RAID6 (8+2p)    GPFS, CacheCade enabled
             Cluster: ≥ 6144GB
                                         1/2*    SSD    2                 RAID0                         CacheCade for VD0&1
3rd M5120/   Single node: ≥ 12.288GB,    0*      HDD    9 / 10            RAID5 (8+p) / RAID6 (8+2p)    GPFS, CacheCade enabled
M5225        Cluster: ≥ 6144GB           1       SSD    2                 RAID0                         CacheCade for VD0

Table 30: x3950 X6 RAID Controller Configuration.


* There are different possible configurations for this VD depending on the number of SSDs/HDDs connected to
the controller.
** This number will depend on the availability of VD1.
† RAID1 for all CacheCade arrays is possible, but may require additional hardware. See section CacheCade RAID1
Configuration in the Operations Guide for X6 based models for more details.


Device         Partition #   Partition Name*   Size    File system   Mount Point
/dev/sda       1             /dev/sda1         148MB   vfat          /boot/efi
               2             /dev/sda2         64GB    ext3/4        /
               3             /dev/sda3         32GB    swap          (none)
               4             /dev/sda4         148MB   vfat          /var/backup/boot/efi
               5             /dev/sda5         64GB    ext3/4        /var/backup
/dev/sd[b-z]   Unpartitioned (whole device)    100%    GPFS          /sapmnt (sapmntdata)

Table 31: Partition Scheme for Single Node and Cluster Installations
* The actual partition numbers may vary depending on whether you use RHEL or SLES for SAP.
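After phase 1 has completed, the resulting layout can be compared against table 31. A short sketch using
standard tools:

# parted -s /dev/sda print
# df -h / /var/backup /boot/efi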

Warning
At this point, only the RAID1 for the OS will be configured. Other RAID arrays are generated
automatically in phase 3 of the setup.

6.2.1.1 Starting the MegaRAID Configuration Tool


1. In the UEFI main menu select System Settings Storage .

2. Select the internal RAID controller. If your server has two M5210 controllers, only configure the
first controller as described here. You can determine the first internal controller by the smaller bus
number on the right side of the "Storage" view.
3. Select Configuration Management .
4. (If shown) Select Manage Foreign Configuration Clear Foreign Configuration .

5. Select Clear Configuration and confirm.


6. Select Create Virtual Drive . (If this is not possible, press Esc and select Configuration Management again.)
7. Select Generic RAID 1 .
8. The RAID1 must be configured on HDDs.
9. Select Save Configuration and confirm.
10. Leave the controller configuration.

6.2.2 Mounting Installation Images using the IMM Virtual Media Center

Using the IMM, the machine can be booted into the installation media. Directions on how to use the IMM
can be found in the Lenovo server installation guidelines respective to the System x model purchased.
The server software installation process varies slightly depending on how the mounted software images
are attached to the server. This section describes the different image mounting methods and the available
options to install the images for each method. See table 32: DVD/ISO Media Install Options on page
50. Installations via USB drives are supported starting from IBM non-OS components DVD 1.7.70–8.
Starting from IBM non-OS components version 1.7.70-8 there are two IBM/Lenovo DVDs shipped besides
the DVDs of the operating system media kit. The "IBM Installation" DVD (IBM non-OS components),
now "Lenovo Installation" DVD (Lenovo non-OS components), contains all files that are needed for a
successful installation of the appliance. The "Additional Products" DVD contains additional files for SAP
HANA that are not required for a successful installation. If you want to have these files automatically
transferred to the server(s) during installation, you must use option 1 in table 32.


DVD/ISO Media              Option   Virtual Media Manager   USB Stick
SLES for SAP/RHEL          1        3 (1st*)
Lenovo non-OS Components            3 (2nd)
RHEL for HANA***                    3 (3rd)
Additional Products                 3 (4th)**
SLES for SAP               2                                3
Lenovo non-OS Components                                    3
RHEL for HANA***                                            3

Table 32: DVD/ISO Media Install Options


* The SLES for SAP/RHEL image must be loaded first into the Virtual Media Manager. The Additional Products
DVD is optional.
** We recommend not mounting this DVD.
*** Only required for RHEL installations for installation of compatibility pack.

6.2.3 Starting the Automatic Installation Process

Attention
Due to changes to the MTM, special steps have to be taken when installing a machine with
Model Type 6241.
In this case you have to append a special kernel parameter during the installation process.
For this you need the solution model name of the machine (e.g. AC34S1024). This name
consists of
1. "AC3" for x3850 X6 chassis, or "AC4" for x3950 X6 chassis.
2. "2S" for two socket machines, or "4S", or "8S".
3. The size of RAM in GB, e.g. "1024" or "512".

• SLES, UEFI Mode: After you mount the software images for the execution of phase one install,
restart the system and wait until the black boot-option screen from SUSE is displayed.
– In the boot-option screen, use the arrow keys to select Installation and press e .
– Go to the line linuxefi /boot/x86_64/loader/linux.
1. When installing a 6241 machine: Append saphanacfg=<yourvalue>,
e.g. saphanacfg=AC34S1024.
2. Option 1 and 2 in table 32: Append autoyast=usb:///. Press F10 . (An example of the resulting line is shown after this list.)
• RHEL, UEFI Mode: After you mount the software images for the execution of phase one install,
restart the system and press any key as soon as the RHEL boot loader starts to enter the boot
options menu.
– In the boot-option screen, use the arrow keys to select Red Hat Enterprise Linux 6.5 and press e .
– Press e again to edit the kernel parameters.
1. When installing a 6241 machine: Append saphanacfg=<yourvalue>,
e.g. saphanacfg=AC34S1024.
2. Option 1 in table 32: Append ks=cdrom:/ks.cfg.
3. Option 2 in table 32: Append ks=hd:sdb:/ks.cfg.
– Press Enter and then b .


• SLES and RHEL: The media will automatically install the SLES for SAP or RHEL operating
system. The installer will copy the extra software necessary for the SAP HANA product (GPFS
and other software add-ons). The machine will be properly partitioned, installed and initially
configured. You will not need to touch the system at this point. After the system reboots, phase
two of the installation will begin.
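As an illustration for the SLES case (a sketch only; a 6241 machine with solution model name AC34S1024 and
install option 2 from table 32 are assumed, and all parameters already present on the line are kept), the edited
line could end like this:

linuxefi /boot/x86_64/loader/linux ... saphanacfg=AC34S1024 autoyast=usb:///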
Note
Continue with Section 6.3: Phase 2 – SLES for SAP on page 51, or 6.4: Phase 2 – RHEL on
page 56.

6.3 Phase 2 – SLES for SAP

Warning
If you had to restart the server in one of the next steps and you see this screen again,
change into a console or open a terminal and execute service openibd start. If you
do not do this, you will not be able to configure the network correctly in later steps.
To open a console, press Ctrl+Alt+Shift+X, then enter the command and then enter
exit to close the console.
1. At the welcome screen select "Next".
2. Ensure that the customer accepts the license agreements for Novell SLES for SAP Applications
and Lenovo Systems Solution for SAP HANA (including GPFS). Select "Next".

Figure 19: License Agreement

3. On the next screen enter keyboard preferences. Select "Next".


4. Assign host name for the server and domain name according to customer’s wishes. Select "Next".


Figure 20: Hostname and Domain Name

5. By "Network Configuration", the networking adapters need to be configured to the customers net-
work landscape. Depending on the customer’s network infrastructure the other Ethernet adapters
need to be modified according to table 15: IP address configuration on page 23. This is left to the
customer and service personnel to properly define in advance.

Figure 21: Network Configuration


(a) Click on the green highlighted and underlined "Network Interfaces".


(b) There are two bonded devices configured for the Mellanox adapters. This is used by default
for the IBM GPFS and SAP HANA private networks and should not be changed. This is a
private network and does not need to be connected to the customers network landscape.
(c) Single node: If the customer wishes to use the 10Gb adapters for his client access, then you
need to change the adapter used for each of these bonded adapters. It is not necessary in a
single node installation to use two adapters, only that one adapter is assigned with the correct
private networking host names and IP addresses.
Note
In case the customer plans to scale out the single node installation to a cluster
by adding more severs: Plan the network configuration for the GPFS and HANA
networks as if the cluster was already present to simplify a later scale out.
(d) Cluster node: It is important to modify the host name/IP address pair of gpfsnodeNN
/ 127.0.1.1 (e.g 192.168.10.101/24) in order to properly auto-configure the private network.
Follow the advice given by the customer in table 15: IP address configuration on page 23.
It is also important to modify the host name/IP address pair of hananodeNN / 127.0.2.1
(e.g 192.168.20.101/24) in order to properly auto-configure the private network. Follow the
advice given by the customer in table 15: IP address configuration on page 23.
Warning
If not changed, the installation will fail at a later point. Please see figure 22 on
page 53. Please change the value in the marked black box to reasonable values,
e.g. 192.168.10.101/24 gpfsnode01 for bond0 and 192.168.20.101/24 hananode01 for
bond1. Do not use the preset values in the fields IP address and hostname.

Figure 22: Cluster Node NIC Configuration dialog bond0


Warning
You have to assign the correct IP and the fully qualified domain name of the
server to the interface that will be connected to the customer’s network.
(e) Under the tabs "Hostname/DNS" and "Routing" confirm host name, domain name, name
servers, search domain(s) and routing information and add any missing information. Select
"Next".
6. On the next screen enter clock and time zone information. Select "Next".

Figure 23: Clock and Time Zone

7. A network time protocol (NTP) server should be configured. It is mandatory to configure it on


cluster nodes and highly recommended to configure it on single node installations. Select "OK".


Figure 24: Advanced NTP Configuration

8. Enter the root password. Select "Next".

Figure 25: Password for the System Administrator

9. Register the SLES System using the supplied envelope in the customers delivery.


10. On the "Installation Completed" screen press Finish.


11. Follow the instructions in section 6.5: Interim Check on page 57.
Warning
Mandatory Kernel Update on SLES for SAP 11 SP3: At the time this document is
created, kernel version 3.0.101-0.8.1 is mandatory for SLES for SAP 11 SP3. Please consult
SAP whether a higher version is now recommended. Please see 13.3: Linux Kernel Update on
page 155 for the steps that need to be performed.

6.4 Phase 2 – RHEL

1. At the "Welcome" screen click "Forward".


2. Ensure that the customer accepts the license agreements for RHEL. Select "Forward".
3. Register the software.
4. Configure the keyboard layout and select "Forward".
5. Enter a root password and select "Forward".
6. If the customer wants to, you can create a further (non-root) user on this machine. Then select
"Forward".
7. In the time configuration, select "Synchronize date and time over the network". Configure the time
servers; if there is no Internet access, remove the default time servers. For cluster installations the
configuration of an NTP server is mandatory. For single node installations it is highly recommended.
8. Select the timezone tab and select the correct timezone. Select "Forward".
9. Deselect "Enable kdump?". Select "Finish", then select "No".
10. Log in as root user.
11. Execute system-config-network and select DNS configuration. Do not use the Device configuration
option.
• As "Hostname" enter the short hostname without domain.
• Enter the DNS servers.
• As "DNS search path" enter the domain.
12. Edit the configuration file of the network device for the external communication, e.g. ifcfg-eth4,
in /etc/sysconfig/network-scripts/. (Do not change the settings for eth0-3, they are the slaves
of bond0-1.) Make sure that the file contains the line ONBOOT=yes but the line HWADDR= is deleted.
At the end the file should look like this:
1 DEVICE=eth[X]
2 TYPE=Ethernet
3 UUID=[UUID]
4 ONBOOT=yes
5 NM_CONTROLLED=no
6 BOOTPROTO=none
7 IPV6INIT=no
8 IPADDR=[IP address]
9 NETMASK=[netmask]
10 GATEWAY=[gateway]


The networking adapters need to be configured to the customers network landscape. Depending on
the customer’s network infrastructure the other Ethernet adapters need to be modified according
to table 15: IP address configuration on page 23. This is left to the customer and service personnel
to properly define in advance.
(a) There are two bonded devices configured for the Mellanox adapters. This is used by default
for the IBM GPFS and SAP HANA private networks and should not be changed. This is a
private network and does not need to be connected to the customers network landscape.
(b) Single node: If the customer wishes to use the 10Gb adapters for his client access, then
you need to change the adapter used for each of these bonded adapters. It is not necessary
in a single node installation to use two adapters, only that one adapter is assigned with the
correct private networking host names and IP addresses. Configure the interfaces via the files
ifcfg-bond0, and ifcfg-bond1 in the directory /etc/sysconfig/network-scripts/.

Note
In case the customer plans to scale out the single node installation to a cluster
by adding more severs: Plan the network configuration for the GPFS and HANA
networks as if the cluster was already present to simplify a later scale out.
(c) Cluster node: It is important to modify the host name/IP address pair of gpfsnodeNN
/ 127.0.1.1 (e.g 192.168.10.101/24) in order to properly auto-configure the private network.
Follow the advice given by the customer in table 15: IP address configuration on page 23.
It is also important to modify the host name/IP address pair of hananodeNN / 127.0.2.1
(e.g 192.168.20.101/24) in order to properly auto-configure the private network. Follow the
advice given by the customer in table 15: IP address configuration on page 23.
Configure the interfaces via the files ifcfg-bond0, and ifcfg-bond1 in the directory /etc/
sysconfig/network-scripts/.

Warning
If not changed, the installation will fail at a later point. Please see figure 22 on
page 53. Please change the value in the marked black box to reasonable values,
e.g. 192.168.10.101/24 gpfsnode01 for bond0 and 192.168.20.101/24 hananode01 for
bond1. Do not use the preset values in the fields IP address and hostname.
Execute
1 service network restart

to load the new network configuration. Try to connect via SSH to the machine to ensure the network
connectivity.
13. Configure /etc/hosts: Remove the hostname from both lines (important!). Add a line for
gpfsnodeXX and hananodeXX (where XX is the node number, e.g. 01) and a line for the external IP
and hostname. (A sketch of the resulting file is shown after this list.)
14. Reboot the server.
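A minimal sketch of what /etc/hosts could look like after step 13 for the first node; all host names, the domain,
and the addresses are placeholders and must be replaced with the customer's values from table 15:

127.0.0.1       localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6
192.168.10.101  gpfsnode01
192.168.20.101  hananode01
10.1.1.101      saphana01.example.com saphana01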

6.5 Interim Check

Before starting phase three, it is a good practice to ensure that you can access all machines on the network
and that each node is ready to install and configure the SAP HANA appliance software. You can use the
following commands to determine that each system is ready for the cluster install.
On every node run the following commands and check that they are consistent with the cluster you are
about to install:


1. Review the physical partitions (sdx):


1 # cat /proc/partitions | awk '{ print $4 }' | sort

2. This command must properly show the node itself (not every node):
1 # cat /etc/hosts | grep gpfsnode

3. This command must properly show the node itself (not every node):
1 # cat /etc/hosts | grep hananode

4. The following command lists all reachable servers in both internal networks. Ensure all servers
are reachable. Except for the server’s own adapter, MAC address are shown and can be used for
verifying that the right servers were found and not other servers in the same network reachable
through other network connections:
1 # nmap -sP 192.168.10.0/24 192.168.20.0/24

5. Ensure the IBM GPFS private network is set up correctly:


1 # cat /proc/net/bonding/bond0

6. Ensure the SAP HANA private network is set up correctly:


1 # cat /proc/net/bonding/bond1

7. Check the time settings and NTP:


1 # ntpq -p
2 # date

If any of these values are not as expected, you should correct them and repeat this test before starting
with phase three.

6.5.1 Model Type 6241 Special Instructions

Attention
When installing a machine with Model Type 6241 you have to take these steps.
1. Download the ZIP file from SAP Note 2130171 – Automated installer does not recognize MT 6241
as valid and load it onto your machine.
2. Unzip the file:
1 unzip fix.zip

3. Execute the script runme.sh:


1 bash runme.sh

6.5.2 Installation of Compatibility Pack

Attention
Install the mandatory compatibility pack. (More information can be found in SAP Note
2001528 – Linux: SAP HANA Database SPS 08 revision 80 (or higher) on RHEL 6 or SLES
11.)


For RHEL the compatibility pack is shipped on an additional DVD:


1 yum -y install [mount point of RHEL for HANA DVD]/Packages/compat-sap*.rpm

6.5.3 Installation without Network Connectivity

Attention
Phase three needs uplink network connectivity and working DNS resolution to execute
properly. If there is no connectivity to the customer's network and DNS server, use this
workaround:
Add the external host names specified in step 9 of phase 2 (dialog "SAP HANA Configuration", see
screenshot above) to the /etc/hosts file on all nodes so that every node can resolve the external host
name of the other nodes. Test this by pinging the external host name of all nodes on every node before
continuing with the next phase.
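A short sketch of such a test, assuming a three node cluster with placeholder external host names:

# for h in saphana01.example.com saphana02.example.com saphana03.example.com; do ping -c 1 $h; done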

6.6 Phase 3

6.6.1 Verification of ServeRAID M5120 Controller Firmware and Configuration

After the initial release of the new X6-based servers (x3850 X6, x3950 X6) a serious issue in various
firmware versions of the ServeRAID M5120 RAID adapter has been found which can trigger continuous
controller resets. This happens only under heavy load and each controller reset may cause service inter-
ruption. Certain firmware versions do not exhibit this issue, but these versions show severely degraded
I/O performance. Only servers using the ServeRAID M5120 controller for attaching an external SAS
enclosure are affected.
Future appliance versions will have the workaround for the controller reset issue preinstalled, while the
performance issue can only be solved by an up- or downgrade to an unaffected firmware version.
Non-exhaustive list of known affected firmware versions:

Issue Affected versions


Controller resets 23.7.1-0010, 23.12.0-0011, 23.12.0-0016, 23.12.0-0019
Lowered Performance 23.16.0-0018, 23.16.0-0027

Table 33: ServeRAID M5120 Firmware Issues

Solution: The current recommendation is to use firmware version 23.22.0-0024 (or newer, if listed as
stable by Lenovo SAP HANA Team) and to change the following configuration value in the installed OS.
Both can be done after installation.

6.6.1.1 Use recommended Firmware version


1. Check which FW Package Build is installed on all M5120 RAID controllers:
1 # /opt/MegaRAID/storcli/storcli64 -AdpAllInfo -aAll | grep 'M5120' -B 5 -A 3
2

3 Adapter #1
4

5 ==============================================================================
6 Versions
7 ================


8 Product Name : ServeRAID M5120


9 Serial No : xxxxxxxxxx
10 FW Package Build: 23.22.0-0024

Currently, version 23.22.0-0024 is recommended. Download the 23.22.0-0024 FW package for


ServeRAID 5100 SAS/SATA adapters via IBM Fix Central or use the following direct link: https:
//ibm.biz/BdRatD.
2. Make the downloaded file executable and then run it:
chmod +x ibm_fw_sraidmr_5100-23.22.0-0024_linux_32-64.bin
./ibm_fw_sraidmr_5100-23.22.0-0024_linux_32-64.bin -s
3. Please reboot the server after updating all M5120 controllers.
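A sketch for re-checking the package level on all adapters after the reboot, using the same storcli64 call as in
step 1 reduced to the relevant line:

# /opt/MegaRAID/storcli/storcli64 -AdpAllInfo -aAll | grep 'FW Package Build'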

6.6.2 HANA Installation

Attention
The SAP HANA installation packages are copied to the node in this step. Make sure that
the Lenovo non-OS components DVD is still mounted via IMM (or USB thumb drive), or the
installation will fail.
Phase three starts after the machine has rebooted and you have ascertained that all the networking is
working. Either from the console, or from a SSH connection, you may call the Lenovo SAP HANA
appliance configuration tool. It is recommended to call the configuration tool on the first node but it can
be started on any node of the cluster.
Attention
In case you are connecting via SSH from a machine that is not set to the English language,
you must set the LANG environment variable to "C" beforehand. If not, the SAP HANA
Database Installation may break while trying to determine the hardware requirements.
# export LANG=C
Download the latest hardware check script from SAP Note 1658845 – Recently certified SAP HANA
hardware/OS not recognized. Copy the ZIP file to the server to /root/HanaHwCheck.zip. The automated
(Lenovo) installer will update the HANA hardware check script automatically, if it finds this file at this
location.
Attention
Not providing the most recent HANA hardware check script may cause the HANA installation
to fail.

6.6.2.1 Single Node Installation Execute the following command as root user.
1 # saphana-setup-saphana.sh

1. Read the "International Program License Agreement" and accept with "1".
2. Select "Single Node"


Figure 26: Installation Mode Selection

3. Confirm.
4. Accept the external hostname or set the correct value. Select "OK".
5. Enter a SID. Select "OK".
6. Enter an Instance Number. Select "OK".
7. Enter the SAP HANA password. Select "OK".

Figure 27: HANA Password Input Dialog

8. Confirm the password. Select "OK".


9. Make sure that gpfsnode01 is assigned to the correct IP. Press Enter to select "OK".


Figure 28: GPFS IP Configuration Dialog

10. Repeat for hananode01.


11. Choose whether you want the RAID arrays to be configured automatically. We recommend choosing
"Yes". Only choose "No" if the RAID was already configured before.
12. Read the GPFS license agreement and accept it with "1".

6.6.2.2 Cluster Installation Execute the following command as root user on every node in the
cluster.
1 # saphana-setup-saphana.sh

1. Read the "International Program License Agreement" and accept with "1".
2. Select "Cluster (Worker)"
3. Confirm.
4. Accept the external hostname or set the correct value. Select "OK".
5. Choose whether you want the RAID arrays to be configured automatically. We recommend choosing
"Yes". Only choose "No" if the RAID was already configured before.
6. Read the GPFS license agreement and accept it with "1".
Execute the following command as root user only on the first node in the cluster after the previous
step was completed for every node in the cluster.
1 # saphana-setup-saphana.sh

1. Select "Cluster (Master)"


2. Confirm.
3. Enter a SID. Select "OK".
4. Enter an Instance Number. Select "OK".
5. Enter the SAP HANA password. Select "OK".
6. Confirm the password. Select "OK".
7. Enter the number of nodes in the cluster. Select "OK".


8. Enter the number of standby nodes in the cluster. Select "OK".


9. Make sure that the gpfsnode entries are assigned to the correct IPs. Press Enter to select "OK".
10. Repeat for hananode entries.
11. Read the GPFS license agreement and accept it with "1".
Note
Follow the instructions in Section 7: After Installation on page 64.
Please also review SAP Note 1906381 – Network setup for external communication for an overview how
HANA can connect to the client network.

6.6.3 Single Node with HA Installation with Side-car Quorum Solution

Adding a second node for high availability is described in section 10.1: Single Node with HA Installation
with Side-car Quorum Solution on page 101. Please refer to there when installing a simple single node
HA solution.


7 After Installation
After the installation of the Lenovo Solution you have to take several actions to ensure that the installation
is correct.

7.1 Actions to ensure the correctness of the installation

• First, execute a system check (see Section 14.2: Basic System Check on page 167) with the latest
version of the check script.
Follow the instructions given by the check script to prevent unwanted behaviour of the appliance.
Warning
Update the kernel and IBM GPFS to the suggested levels. Earlier versions of GPFS
and the kernel have known bugs that may cause the appliance to stop working.

Attention
Do not change the SSH configuration for the root user (e.g. not allowing SSH logins).
SSH is required for IBM GPFS and is configured accordingly.
• On x3850 X6 and x3950 X6 servers you can create a symbolic link from /sapmnt/<SID> to /sapmnt/
shared/<SID> to simulate the GPFS filesystem layout of eX5 based appliances, if you use scripts
or other tools that use this path hard coded:
1 ln -s /sapmnt/shared/<SID> /sapmnt/<SID>

• Install the SAP Solution Manager Diagnostics Agent (SMD). If the customer plans to integrate
the new HANA server(s) into his existing SAP management infrastructure (SAP Solution Man-
ager, System Landscape Directory) the SMD must be installed in preparation. The SAP Solution
Manager Diagnostic Agent can be installed via the SAP HANA Lifecycle Manager (HLM).
To install the SMD via the HANA Lifecycle Manager open a browser and navigate to https:
//<HANAServerHostname>:1129/lmsl/HLM/<SID>/ui?sid=<SID>, choose Add Solution Manager
Diagnostics Agent (SMD) and follow the instructions on screen. Skip the registration forms for the
Solution Manager and the System Landscape Directory if you do not wish to register the HANA
installation at this time. For other means to use the HLM or if the HLM is not accessible please
refer to the SAP HANA Update and Configuration Guide 14 .
The installation of SAP Solution Manager Diagnostics Agent is documented in the chapter Adding
a Solution Manager Diagnostics Agent on an SAP HANA System in the aforementioned guide.
• Check that the HANA log mode is configured correctly.
If the log mode is wrong, the appliance will experience an out-of-space condition on the IBM GPFS
(/sapmnt/). See SAP Note 1642148 – FAQ: SAP HANA Database Backup & Recovery (No. 26
What general configuration options do I have for saving the log area?).
• Make sure that the backup paths are configured correctly.
They only are allowed to point to the GPFS filesystem if it is used as a staging area for a third
party backup solution. Permanent backups on the GPFS are unsupported.
• Check if the SAP Host Agent is running on every server (see the sketch below).
If it is not, you can either reboot every server in the cluster or start it by executing the following on every server:
1 service sapinit start
14 Obtainable from http://help.sap.com/hana_appliance
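A sketch for the host agent check mentioned above, assuming the standard SAP Host Agent installation path:

# /usr/sap/hostctrl/exe/saphostexec -status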


7.2 HANA Network Setup

There are several options for how to set up HANA regarding the connection to the client network. This
depends highly on the setup of the customer network. A good overview of the possibilities is given in:
SAP Note 1906381 – Network setup for external communication


8 Disaster Recovery
The scope of this section is to provide a guide for the IBM Disaster Recovery (previously SAP Disaster
Tolerance) solution for SAP HANA. The solution is implemented in two physically independent locations,
with one location used as the production site and the second serving as the backup or disaster site.
A third, optional location is possible for the tie-breaking (quorum) feature of GPFS.
The goal of DR is to enable a secondary data center to take over production services with exactly the
same set of data as stored in the primary site’s data center. Synchronous data replication between the
primary and secondary site ensures zero data loss (RPO=0). This allows the protection of a data center
against events like power outage, fire, flood, or hurricane. The time required to recover the services
(RTO) is different for each installation depending upon the exact client implementation.

8.1 Architecture

This section briefly explains the architecture of the IBM DR solution for SAP HANA and provides
examples of how it can be installed in a standard two-tier or three-tier data center environment.

[Figure: Site A and Site B each hold the file system sapmntdata and keep it in sync via synchronous
replication; an optional quorum node resides at Site C.]

Figure 29: DR Architectural Overview

8.1.1 Terminology

The terms site A, primary site, and active site are used interchangeably in this document to refer to the
location where the productive SAP HANA HA system is initially set up and used.
Similarly, site B, backup site, and passive site all refer to the second location where the productive SAP
HANA HA system is copied to in the case of a disaster.
After a failover the naming of these two sites may be swapped, depending on whether the customer wants
to switch back as soon as possible or keep using the former backup site as the primary site.
Site C will refer to the quorum or tiebreaker site.
SAP also uses the terms Disaster Recovery (DR) and Disaster Tolerant (DT) interchangeably. We will
try to be consistent and use DR in this document.


8.1.2 Architectural overview

The IBM DR solution for SAP HANA can be thought of as two standard IBM HA clusters in two different
sites combined into one large cluster. Each site can be planned as a standard IBM HA cluster with the
same hardware requirements as the standard solution. Currently, the only architectural requirement is
that both sites have the same number of server nodes and that each site has the same number of network
switches as the existing IBM HA cluster offering.
The idea of IBM DR solution for SAP HANA is to have one stretched IBM GPFS cluster spanning both
sites and providing one file system for SAP HANA. There are two separate SAP HANA clusters on both
sites that can access data in this single shared file system. Synchronous data replication built into the
file systems ensures that at any given point in time there is the exact same data in both data centers.
Figure 30: DR Data Distribution in a Four Node Cluster on page 67 shows the high-level architecture.
Warning
As of December 2012, SAP has published an end-to-end value of 320µs latency between the
synchronous sites of a DR cluster. It is known by both SAP and IBM that this number by
itself is not enough to determine whether the SAP HANA database can recover from a disaster or not.
Latency is a term that can be split into many different categories, such as network latency
or application latency, each of which has its own value necessary for a proper DR setup.
It also depends on whether you use On Line Analytical Processing (OLAP) or On Line
Transaction Processing (OLTP) workloads.
Currently SAP is considering this value on a case-by-case basis, and it is important that you
discuss these values with your customer and the SAP consultant on site.
The IBM DR solution for SAP HANA works with a total of three data copies. The first copy is kept local
to the writing node. The second copy is stored on any other node except the writing node and the third
copy is always stored on any node on the remote site. Depending on the file size and actual disk space
usage of a certain node, GPFS tends to either cluster blocks on a node or stripe them across multiple
nodes. The same applies to distribution over disks within a node.

Figure 30: DR Data Distribution in a Four Node Cluster (four nodes per site: node1–node4 at Site A with failure groups 1,0,x and 2,0,x, node5–node8 at Site B with failure groups 1,1,x and 2,1,x; the third data replica is written synchronously to the remote site)


The details of the network setup are not strictly defined. It is up to the project team to develop a solution
that is suitable for the customer's existing network infrastructure. This must be discussed well in advance
together with the customer networking team.
The basic requirement is to have at least two sites; a third network site is needed if a so-called tiebreaker
node will be part of the Disaster Tolerance architecture.
Each site will use a standard HA setup with its own dedicated GPFS and SAP HANA network. This can
be provided by using the standard IBM RackSwitch G8264 10 Gbps Ethernet switches, which are part
of the standard SAP HANA HA offering of IBM. The standard network requirements of an HA solution
regarding the customer's uplink connectivity also apply to DR.
For the tiebreaker node at site C, there are no special network requirements as there is only one server.
For the connectivity between the two main sites, at least one dedicated optical fibre connection end-to-end
between both sites is recommended. A routed or non-dedicated connection may be used, but no guarantees
about performance or operation can be made. Using redundant optical fibres end-to-end may improve
performance and reliability. The project team is responsible for working out a solution with respect to the
customer's infrastructure and requirements. A dedicated Ethernet network needs to be provided for the
GPFS network. For the configuration of the inter-site portchannel see
Section 5.8: Inter-Site Portchannel Configuration on page 34.
Figure 31: Logical DR Network Setup (HANA and GPFS networks at both main sites, with the optional Site C reachable over the GPFS network)
Figure 30 on page 67 shows a scenario with four nodes on each site. Only the HANA internal network
and the GPFS network are shown, no uplinks connecting the HANA cluster to the client network.
In a solution with a quorum site, the tiebreaker node must be reachable from within the internal GPFS
IPv4 network: each node must be able to reach the tiebreaker node and vice versa. There are no other
special requirements on this connection; neither bandwidth nor latency guarantees are needed. It is
acceptable to use a routed connection through the customer's internal network as long as it is reliable.

Figure 32: DR Networking View (with no client uplinks shown) – two IBM RackSwitch G8264 switches per site carry the HANA internal and GPFS networks (4 ports from each node: 2x GPFS, 2x HANA internal); the switches of a site are coupled via 40 Gbit ISLs and the GPFS networks of both sites are linked via 10 Gbit inter-site connections.


8.1.3 Three site/Tiebreaker node architecture

If the customer decides to use a tiebreaker node in a third site, an additional server with an appropriate
GPFS license is required. Although the use of any server is possible, we recommend to use the Side-Car
Quorum Node x3550 M3/M4 defined in section 10.1.2: Prepare quorum node on page 103. This definition
includes the necessary licenses and services required for the tiebreaker node. This node is optional but
recommended for increased reliability and simplicity in the case of a disaster.
The solution has been tested in setups with and without this additional node. The rationale for this node
is the split-brain scenario where the connection between the two main sites is lost. The tiebreaker node
helps in deciding which site is the active site and, thus, prevents the primary site from going down for
data integrity reasons. Additionally, this server eases some operational procedures by reducing both the
time needed for recovery and the likelihood of operating errors.
This document will describe the use of the tiebreaker node and explain the deviations when it is not
necessary.

8.2 Mixing eX5/X6 Server in a DR Cluster

Please read chapter 9.2: Mixed eX5/X6 DR Clusters on page 94. Information given there takes precedence
over the instructions below.

8.3 Hardware Setup

This section describes how to physically install the System x machines and how to prepare UEFI for SAP HANA.
It also provides information about how the network has to be set up.

8.3.1 Site A and B

The hardware setup of the nodes at each site has to be performed as described in section 6: Guided
Install of the Lenovo Solution on page 39. The following list summarises these steps.
• Ensure certified hardware is available and connected to power
• Verify firmware levels. They must be identical on all nodes
• Modify / Check UEFI settings. They must be identical on all nodes
• Configure storage (RAID setup)

8.3.2 Tiebreaker Site C (optional)

It is recommended to setup the tiebreaker node according to the description in section 10.1.2: Prepare
quorum node on page 103.
The tiebreaker node must have a small partition (50 MB is sufficient) to hold a replica of the GPFS file
system descriptors. It will not contain any data or metadata information. The node must be able to
reach all other nodes at both site A and site B of the GPFS cluster. The partition can reside on a logical
volume (LVM) if desired. However, GPFS must be able to recognize the partition, so, when using LVM,
the name /dev/dm-X must be used instead of the logical volume name. Performance is not critical for
this partition.
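A minimal sketch of providing this partition on LVM is shown below; the volume group name vgsystem, the logical volume name and the size are assumptions, and the second command resolves the logical volume to the underlying /dev/dm-X name that GPFS requires:
1 # lvcreate -L 64M -n gpfsdescriptor vgsystem
2 # readlink -f /dev/mapper/vgsystem-gpfsdescriptor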


8.3.3 Acquire TCP/IP addresses and host names

Refer to section 5.3: Network Configuration on page 22 which contains a template that can be used to
gather all the required networking parameters. Ideally, this is done before the installation starts at the
customer location.

8.3.3.1 Tiebreaker node The following parameters must be available for the installation of the
cluster:

Parameter Value
Hostname
IP address for Hostname
IP address for GPFS Network

Table 34: Hostname Settings for DR

In case of a new installation these additional parameters are required. See Table 35 on page 70

Parameter Value
Netmask
Default gateway
Additional routes
DNS server
NTP server

Table 35: Extra Network Settings for DR

The tiebreaker node must be able to reach all cluster nodes on both sites using the IP addresses and
hostnames used for GPFS (gpfsnodeXX), which the GPFS cluster uses to communicate internally.
Conversely, the cluster nodes must reach the tiebreaker node with the same host name and IP address.
This can be achieved, for example, via routing, tunneling, a VPN connection, or through a dedicated
physical network.

8.3.4 Network switch setup (GPFS and SAP HANA network)

The setup of the switches used for the GPFS and SAP HANA network is described in section 5.4: Network
Switch Configuration For Clustered Installations on page 23. For the link between the switches on both
sites refer to the next sections.

8.3.5 Link between site A and B

The GPFS network will be stretched over sites A and B, while the SAP HANA network must not be. This
means that the GPFS network on both sites will be one subnet and each node can reach all other nodes
on both sites, whereas the SAP HANA networks on site A and B are isolated from each other.
The GPFS network on both sites should be connected with at least a dedicated 10GBit connection. A
routed network is not recommended as it may have severe impact on the synchronous replication of the
data.
The SAP HANA network is separated on both sites. This is due to SAP HANA being operated in a cold
standby mode. For this reason, both sites will use the same hostnames and IP addresses for SAP HANA.
This requires a strict isolation of these two networks.


8.3.6 Network integration into customer infrastructure

The network connections in the customer network for SAP HANA access, management, backup and other
purposes depend very much on the customer network and its requirements. General guidance can be
found in section 6: Guided Install of the Lenovo Solution on page 39.

8.3.7 Setup network connection to tiebreaker node at site C (optional)

The tiebreaker node at site C needs to be integrated as well into the GPFS cluster. Every node in the
cluster must be able to contact the tiebreaker node and vice versa.
This depends on the configuration of the tiebreaker node (one or more network interfaces), the subnet
used for GPFS traffic (private or public) and other parameters. It is up to the project team to come to
an agreed solution with the customer.
Possible setups include a multi-homed tiebreaker node or static host routes when private address ranges
are used. VPN, NAT or router capabilities are further options.
The following is an example for a setup with a GPFS subnet of 192.168.10.x and a tiebreaker node with
one network adapter and a public IP address in a 10.x.x.x range:
1. On the tiebreaker node add the GPFS address as an alias to the NIC attached to the public network
e.g.
1 # ifconfig eth0:1 192.168.10.199 netmask 255.255.255.0

To make this permanent, add an entry like this to the respective ifcfg-ethX file in /etc/sysconfig/network
1 IPADDR_1='192.168.10.199/24'

2. Add host routes on every node in the GPFS cluster to this IP alias (a sketch for making these routes persistent follows after this list).
1 # route add -host 192.168.10.199 gw <tiebreaker external ip>

3. Add host routes on the tiebreaker node for every node in the cluster.
1 # route add -host 192.168.10.101 gw <external IP node1>
2 # route add -host 192.168.10.102 gw <external IP node2>
3 ...
4 # route add -host 192.168.10.10X gw <external IP nodeN>

4. Verify that the newly created alias is reachable throughout the cluster and all nodes can be pinged
from the tiebreaker node via the internal GPFS network addresses.
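The route add commands in steps 2 and 3 configure the routes only for the running system. As a sketch, assuming SUSE Linux Enterprise Server, where static routes can be kept in /etc/sysconfig/network/routes (columns: destination, gateway, netmask, interface), the host routes can be made persistent as shown below; replace the placeholder gateways with the real external IP addresses. On every cluster node:
1 # echo "192.168.10.199 <tiebreaker external IP> 255.255.255.255 -" >> /etc/sysconfig/network/routes
On the tiebreaker node, add one line per cluster node:
1 # echo "192.168.10.101 <external IP node1> 255.255.255.255 -" >> /etc/sysconfig/network/routes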

8.4 Software Setup

Note
The base installation changed with the advent of the new text based installer which also allows
the installation on Red Hat Enterprise Linux. This replaces the manual installation described
here in earlier releases.
Install all standard DR servers as described in section 6: Guided Install of the Lenovo Solution on page
39. In phase 3 choose the role Cluster Node (Worker) for all servers. Please note that in the interim
check in section 6.5: Interim Check on page 57 each site is expected to see only the site-local nodes
in the HANA network test.


For the optional quorum node, please follow the instructions given in section 10.1.2: Prepare quorum
node on page 103 and following to install the base operating system and software.

8.4.1 GPFS configuration prerequisites

Create /etc/hosts entries for GPFS


To ensure communication over the correct network interfaces, define the host entries manually on each
node (including the tiebreaker node if available) for the GPFS and SAP HANA networks. Ensure that
the entry for the local machine is always the first entry in the list. This is required for the installer
scripts. Do not copy this file from one node to the other as it will break other installation scripts.
Each node in the cluster (except the tiebreaker node) has the following two names associated with it
1 192.168.10.1XX gpfsnodeXX
2 192.168.20.1XX hananodeXX

The tiebreaker node only has a gpfsnode name as it is used solely for GPFS communication
1 192.168.10.1XX gpfsnodeXX

The GPFS network spans both sites, which means in an example with four nodes per site you have
gpfsnode01 up to gpfsnode08 (gpfsnode01-04 at site A, gpfsnode05-08 at site B).
The SAP HANA network is restricted to only one site, which in turn means you should use each hanan-
odeXX entry twice (once per site). This effectively couples any active SAP HANA node to a backup node
on the second site. In the example with four nodes on each site you have hananode01 to hananode04 at
site A and hananode01 to hananode04 at site B.

8.4.1.1 Example two sites with four nodes each


1 ...
2 # Second node on first site:
3 192.168.10.102 gpfsnode02
4 192.168.20.102 hananode02
5 192.168.10.101 gpfsnode01
6 192.168.20.101 hananode01
7 192.168.10.103 gpfsnode03
8 192.168.20.103 hananode03
9 192.168.10.104 gpfsnode04
10 192.168.20.104 hananode04
11 ...
12 # Second node on second site (physically the sixth node)
13 192.168.10.106 gpfsnode06
14 192.168.20.102 hananode02
15 192.168.10.105 gpfsnode05
16 192.168.20.101 hananode01
17 192.168.10.107 gpfsnode07
18 192.168.20.103 hananode03
19 192.168.10.108 gpfsnode08
20 192.168.20.104 hananode04
21 ...

The optional tiebreaker node only has GPFS addresses. This has two consequences: the tiebreaker
node only has gpfsnodeXX entries in the /etc/hosts file for all nodes; and, all other nodes have no


hananodeXX entry for this special node. In our example above, a tiebreaker node would get allocated
gpfsnode99.
After editing the /etc/hosts entries it is a good idea to verify network connectivity. To do so, execute
the following command to list all nodes of the DR cluster attached to the GPFS network:
1 # nmap -sP 192.168.10.0/24

and execute this command at each site to confirm the SAP HANA network:
1 # nmap -sP 192.168.20.0/24

Only the nodes of the local site should be listed by the second command. Verify that you got the correct
machines by comparing the displayed MAC addresses with the MAC addresses of the bond1 device on
each respective node.

8.4.1.2 SSH key exchange As GPFS uses SSH, the root ssh keys on all nodes need to be exchanged
to allow for password-less SSH connectivity within the cluster. This is a general GPFS requirement.
Please note that the following commands will overwrite any additional SSH key authorizations you may
have installed yourself.
Run the following commands all from the first node in the GPFS cluster.
Generate the known_hosts file on the first node
1 # for node in gpfsnode0{1..8} ; do ssh-keygen -R $node ; ssh-keyscan -t rsa $node >>←-
,→ /root/.ssh/known_hosts ; done
2 # for ip in 192.168.10.{1..8} ; do ssh-keygen -R $ip ; ssh-keyscan -t rsa $ip >> /←-
,→root/.ssh/known_hosts ; done

Generate a new SSH key for passwordless ssh access, authorize it and distribute it to the other nodes:
1 # ssh-keygen -q -b 4096 -N "" -C "Unique SSH key for root on DR Cluster" -f /root/.←-
,→ssh/id_rsa
2 # cat ~/.ssh/id_rsa.pub > ~/.ssh/authorized_keys
3 # for node in gpfsnode0{1..8} ; do scp /root/.ssh/id_rsa /root/.ssh/id_rsa.pub /root←-
,→/.ssh/authorized_keys root@$node:.ssh/ ; done

Distribute the known_hosts file to the other nodes:


1 # for node in gpfsnode0{2..8} ; do scp /root/.ssh/known_hosts root@${node}:/root/.←-
,→ssh/ ; done

A small explanation for the gpfsnode0{1..8} value: this generates a list of names from gpfsnode01 to
gpfsnode08. If the host names are non-successive, replace gpfsnode0{1..8} with a space-separated list of the
hostnames. The distribution of the known_hosts file omits the first node, as on this node the files are
already prepared.
Note
In previous releases of this document the shipped SSH root key was used and distributed
among the nodes in the DR-enabled cluster. This poses a security risk and you should consider
replacing this key with a new unique key. Please contact support.


8.4.2 GPFS Server configuration

Create the necessary configuration files. On the first node (which will be the primary configuration
server), create a file /var/mmfs/config/nodes.cluster and add one line per node containing its GPFS
network hostname. If applicable, add the tiebreaker node as the last node.
Next, append ":quorum" (no spaces) to the end of the line for some hosts, according to the following rules:
a) Distribute all available nodes (except the tiebreaker) into four equal-sized groups and append ":quorum" to
the first node of each group.
b) If a quorum node is available, mark it as quorum.
c) Without a quorum node, mark the second node of the first group as quorum.
In an example with 8 nodes, you should end up with 5 nodes marked as quorum nodes. See the following example
for an 8 node DR cluster with and without a dedicated tiebreaker node (gpfsnode99):

                 Topology   nodes.cluster file        nodes.cluster file
                 Vector     with Quorum Node          without Quorum Node
Failure group 1  1,0,x      gpfsnode01:quorum         gpfsnode01:quorum
                            gpfsnode02                gpfsnode02:quorum
Failure group 2  2,0,x      gpfsnode03:quorum         gpfsnode03:quorum
                            gpfsnode04                gpfsnode04
Failure group 3  1,1,x      gpfsnode05:quorum         gpfsnode05:quorum
                            gpfsnode06                gpfsnode06
Failure group 4  2,1,x      gpfsnode07:quorum         gpfsnode07:quorum
                            gpfsnode08                gpfsnode08
Failure group 5  3,0,1      gpfsnode99:quorum         (not applicable)
(tie breaker)

Table 36: GPFS Settings for DR Cluster

The nodes.cluster file for an eight node setup without a separate quorum node (i.e. tiebreaker node) should
look like this:
1 gpfsnode01:quorum-manager
2 gpfsnode02:quorum-manager
3 gpfsnode03:quorum-manager
4 gpfsnode04:
5 gpfsnode05:quorum-manager
6 gpfsnode06:
7 gpfsnode07:quorum-manager
8 gpfsnode08:

Note
Adding node designation ’manager’ is optional as quorum nodes are automatically eligible to
be chosen as cluster manager.
One comment regarding the topology vectors, as they will be used in a later step. The value of x has to
be replaced with the number of the node within the failure group. If you have 3 nodes in each failure
group, and the number of the nodes is from 1 to 3 in each failure group, then the second node in the first
failure group will be 1,0,2; the second node in the third failure group will be 1,1,2.
Create the GPFS cluster with the first node of each site as primary (-p) resp. secondary server (-s)
1 # mmcrcluster -n /var/mmfs/config/nodes.cluster -p gpfsnode01 -s gpfsnode05 -C ←-
,→HANADR1 -A -r /usr/bin/ssh -R /usr/bin/scp


Mark all nodes as licensed. Mark all the quorum nodes (including the optional tiebreaker node) and the
configuration servers with a server license and all other nodes as FPO licensed.
1 # mmchlicense server --accept -N gpfsnode01,gpfsnode02,..,gpfsnode99
2 # mmchlicense fpo --accept -N gpfsnode03,gpfsnode04,...

Please adapt the node lists to your actual licensing.


Start the GPFS daemon on all nodes
1 # mmstartup -a

Apply the following cluster configuration changes


1 # mmchconfig unmountOnDiskFail=meta -i
2 # mmchconfig panicOnDiskFail=meta -i
3 # /usr/bin/yes 999 | /usr/lpp/mmfs/bin/mmchconfig dataStructureDump=/tmp/GPFSdump,←-
,→pagepool=4G,maxMBpS=2048,maxFilesToCache=4000,skipDioWriteLogWrites=1,←-
,→nsdInlineWriteMax=1M,prefetchAggressivenessWrite=2,readReplicaPolicy=local,←-
,→enableRepWriteStream=false,enableLinuxReplicatedAIO=yes,nsdThreadsPerDisk=24,←-
,→restripeOnDiskFailure=yes

After this last command you need to restart GPFS with


1 # mmshutdown -a
2 # mmstartup -a

8.4.3 GPFS Disk configuration

On the first node, create a file /var/mmfs/config/disk.list.data.fs. For each node add entries as
described in the following section, but replace the failureGroup with the correct topology vector for the
particular node. Make sure that the pool definitions are only once in this file.

8.4.3.1 GPFS 3.5 Disk Definitions For every HDD RAID device /dev/sdb and subsequent devices
add a NSD definition like the following template:
1 %nsd: device=/dev/sdb
2 nsd=data01node01
3 servers=gpfsnode01
4 usage=dataAndMetadata
5 failureGroup=1,0,1
6 pool=system

Please don’t forget to increment the first number in the nsd line, e.g. data02node01 for the second HDD
block device. You can get a device list with lsscsi.
Then, after adding all device stanzas, add these lines unaltered:
1 %pool:
2 pool=system
3 blockSize=1M
4 usage=dataAndMetadata
5 layoutMap=cluster
6 allowWriteAffinity=yes
7 writeAffinityDepth=1
8 blockGroupFactor=1


When using a tiebreaker node add the following lines to the stanza file:
1 %nsd: device=/dev/sda3
2 nsd=desc01node99
3 servers=gpfsnode99
4 usage=descOnly
5 failureGroup=3,0,1
6 pool=system

Replace device, NSD name, and server with the correct values where necessary.
If your setup includes a tiebreaker node, determine the device name of the partition allocated for the
descriptor-only NSD and change the line in disk.list.data.fs starting with %nsd: device= accordingly.

8.4.4 Filesystem Creation

Create the NSDs


1 # mmcrnsd -F /var/mmfs/config/disk.list.data.fs -v no

Create the filesystem


1 # mmcrfs sapmntdata -F /var/mmfs/config/disk.list.data.fs -A no -B 512k -N 3000000 -←-
,→v no -m 3 -M 3 -r 3 -R 3 -j hcluster --write-affinity-depth 1 -s ←-
,→failureGroupRoundRobin --block-group-factor 1 -Q yes -T /sapmnt

Create filesets
1 # mmcrfileset sapmntdata hanadata -t "Data Volume for HANA database"
2 # mmcrfileset sapmntdata hanalog -t "Log Volume for HANA database"
3 # mmcrfileset sapmntdata hanashared -t "Shared Directory for HANA database"

Mount the filesystem on all nodes


1 # mmmount sapmntdata -a

To verify the file system is successfully mounted execute


1 # mmlsmount sapmntdata -L

Link the filesets in the filesystem


1 # mmlinkfileset sapmntdata hanadata -J /sapmnt/data
2 # chmod 755 /sapmnt/data
3 # mmlinkfileset sapmntdata hanalog -J /sapmnt/log
4 # chmod 755 /sapmnt/log
5 # mmlinkfileset sapmntdata hanashared -J /sapmnt/shared
6 # chmod 755 /sapmnt/shared

Set a quota on the hanalog fileset


The formula for the log quota in a DR scenario is:
<# of active nodes> * RAM * <# of GPFS replicas>
Example: In a 7+7 scenario with L nodes (1024GB RAM each), using 6 worker nodes and 1 standby node:
6 * 1024G * 3 = 18432G
Set the quota


1 # mmsetquota -j hanalog -h 18432G -s 18432G /dev/sapmntdata

8.4.5 SAP HANA appliance installation

Warning
SAP HANA in this DR solution must be installed using the hostname of the HANA-internal
network (usually on bond1, hostname hananodeXX). The host based routing used in the HA
solution is not applicable for the DR solution.
We recommend installing SAP HANA on the backup site first and thereafter on the primary site. This order is
safer because the backup site installation cannot accidentally make changes to your production
environment.

8.4.5.1 Install HANA on backup site Before continuing with the installation make sure that the
GPFS file system sapmntdata is mounted at /sapmnt. In order to prepare the backup site, it is necessary
to do a standard HANA installation and then delete the installed content on the shared filesystem.

8.4.5.1.1 Install SAP HANA software on backup site Please install SAP HANA on the backup
site as described in the official SAP documentation available here: http://help.sap.com/hana_appliance.
The location of the SAP HANA installation files is /var/tmp/saphana.
The roles (worker or standby) are not important, except that the first one needs to be a worker. We
recommend to install all other nodes as standby, as this installation type is faster.

8.4.5.1.2 Stop HANA and SAP Host agent on backup site Log in as <SID>adm on one node
and stop SAP HANA:
1 $ HDB stop

Then log in as root and stop SAP Host agent and other services:
1 # /etc/init.d/sapinit stop

Afterwards disable the autostart of the sapinit service


1 # chkconfig sapinit off

Do the last two steps on all backup nodes.

8.4.5.1.3 Delete SAP HANA shared content The purpose of this installation is to install the
node-local parts of an SAP HANA system. After installing SAP HANA on all backup site nodes, the data
in /sapmnt must be deleted:
1 # rm -r /sapmnt/data/<SID>
2 # rm -r /sapmnt/log/<SID>
3 # rm -r /sapmnt/shared/<SID>


8.4.5.1.4 Disable mmfsup script on backup site nodes An installation with the Recovery Image
will install an mmfsup script which will automatically start SAP HANA after the file system comes up.
This must be deactivated as it may start SAP HANA on both sites (using the same hostnames).
The script resides in /var/mmfs/etc. Disable it on all cluster nodes.
1 # chmod 644 /var/mmfs/etc/mmfsup

Note
In previous releases of this document the mmfsup script was deleted. This is not necessary as
disabling the script is sufficient and will keep the file for future use.

8.4.5.2 Install HANA on primary site Now install SAP HANA again on the primary site as
described in the official SAP documentation available here: http://help.sap.com/hana_appliance.
The location of the SAP HANA installation files is /var/tmp/saphana. Install SAP HANA with the
same parameters as on the backup site. This is very important for DR to work properly. Please make
sure that you install the individual HANA nodes with the correct roles, for example, five worker and one
standby node in a six node per site solution.
After the installation finished deactivate the autostart of SAP Services
1 # chkconfig sapinit off

Please verify that the user <SID>adm and the group sapsys have the same UID resp. GID on all
nodes. Use the command
1 # id <SID>adm

and compare the numerical IDs of <SID>adm and group sapsys.
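A small sketch to compare these IDs across all nodes over the GPFS host names (replace <sid>adm with the actual administration user, i.e. the lowercase SID followed by adm, and adjust the node list to the cluster size):
1 # for node in gpfsnode0{1..8} ; do echo -n "$node: " ; ssh $node "id <sid>adm" ; done
2 # for node in gpfsnode0{1..8} ; do echo -n "$node: " ; ssh $node "getent group sapsys" ; done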

8.4.5.3 Disable mmfsup script on production site nodes An installation with the Recovery
Image will install an mmfsup script which will automatically start HANA after the file system comes up.
This must be deactivated as it may start SAP HANA on both sites (using the same hostnames).
The script resides in /var/mmfs/etc. Disable it on all cluster nodes.
1 # chmod 644 /var/mmfs/etc/mmfsup

Note
In previous releases of this document the mmfsup script was deleted. This is not necessary as
disabling the script is sufficient and will keep the file for future use.

8.4.6 Tiebreaker node setup

8.4.6.1 Quorum node setup using a new node The setup of a new server can be done by following
the instructions in section 10.1.2: Prepare quorum node on page 103 excluding the setup of the switches
which does not apply to a DR configuration.

8.4.6.2 Tiebreaker node setup using an existing node If an existing node will be used as the
tiebreaker node, please consult the system administrator and ask them to:
• Provide a partition which will be used to hold the GPFS file system descriptor information
• Install GPFS


• Build the GPFS portability layer. Note: This may require the installation of the kernel header files
/ sources and some development tools (compiler, make...)
• Setup network access to all other GPFS cluster nodes in the GPFS network
• Exchange ssh keys so that the tiebreaker node root account can be accessed without a password
from the other GPFS cluster nodes.
Follow the instructions in sections 10.1.6: Quorum Node IBM GPFS setup on page 106 and 10.1.7:
Quorum Node IBM GPFS installation on page 106.
General information how to install and setup GPFS can be found online in the Information Center section
Installing GPFS on Linux nodes.

8.4.7 Verify Installation

8.4.7.1 GPFS Cluster configuration


• Verify that all nodes are up and running
1 # mmgetstate -a

• Verify distribution of the configuration servers


The primary and secondary GPFS configuration servers must be on different sites (one on each site). Otherwise,
fail-over to the standby site will not work.
This is checked with
1 # mmlscluster

• Verify distribution of quorum nodes


The current active quorum setup can be checked with
1 # mmgetstate -aLs

The cluster configuration is listed with


1 # mmlscluster

When using the tiebreaker node check that the tiebreaker node is a quorum node and that the
remaining quorum nodes are distributed evenly among the other file system failure groups. You see
the failure groups with
1 # mmlsdisk sapmntdata

Information about the failure group setting can be found in section 8.4.2: GPFS Server configuration
on page 74. If not using the tiebreaker make sure that the active site has at least one more quorum
node than the passive site. In general, try to keep an odd number of quorum nodes.
• Verify cluster manager location
Verify the location of the cluster manager depending on the use of the tiebreaker node
1 # mmlsmgr

If the solution uses a tiebreaker node, the cluster manager must be on the passive/backup site, in a
solution without a tiebreaker node, the cluster manager must be on the active site. To change the
cluster manager issue
1 # mmchmgr -c <node>


• Verify replication factor 3 (= three copies, two local and one remote copy)
1 # mmlsfs sapmntdata

Verify that the following values are all set to 3:


1 -m Default number of metadata replicas
2 -M Maximum number of metadata replicas
3 -r Default number of data replicas
4 -R Maximum number of data replicas

• Test replication factor 3


Write a new file to the shared filesystem and verify the replication level applied to this file (a small example follows after this list):
1 # mmlsattr <path to file>

All values must be set to 3 and no flags (like illbalanced, metaupdatemiss, etc.) must be shown.
Please check the GPFS documentation or ask IBM GPFS support if there are flags shown after
restripe.
• Check failure groups
You should have four failure groups 1,0,x 2,0,x 1,1,x and 2,1,x. If you are using the tiebreaker node,
a fifth failure group 3,0,1 should be in the file system. Get the list of failure groups from the disk
list
1 # mmlsdisk sapmntdata

Make sure that the server nodes are distributed evenly among the failure groups.
• Disk availability All GPFS disks must be online.
1 # mmlsdisk sapmntdata -e
2 All disks up and ready

If there are disks down, check the reason (e.g. hardware failure, system reboot, ...) and restart them
once the problem has been resolved.
The following command will try to start all disks in the file system. This has no effect on already
started disks.
1 # mmchdisk sapmntdata start -a
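Referring to the replication test above, a minimal sketch could look like this (the file name is arbitrary; remove the test file afterwards):
1 # dd if=/dev/zero of=/sapmnt/shared/repl-test.tmp bs=1M count=16
2 # mmlsattr /sapmnt/shared/repl-test.tmp
3 # rm /sapmnt/shared/repl-test.tmp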

Note
Follow the instructions in Section 7: After Installation on page 64.

8.5 Extending a DR-Cluster

This section describes how to grow a DR cluster. Growing a DR-enabled cluster requires that both sites
grow by the same number of nodes. In general the installation of each active/backup server pair does not
need to be done at the same time, but doing so is highly recommended. The overcautious technician may also
decide to install the backup node prior to the active node.
The following sections will only explain the differences from the basic DR installation in the sections
before.


8.6 Mixing eX5/X6 Server in a DR Cluster

Please read chapter 9.2: Mixed eX5/X6 DR Clusters on page 94. Information given there takes precedence
over the instructions below.

8.6.1 Hardware Setup

Please refer to 8.3: Hardware Setup on page 69 and follow the instructions there. Ping the new machine
on the GPFS network from all machines to test whether the network configuration is correct. Ping the new
machine on the HANA network from all servers; it is supposed to be reachable only from nodes on the
same site.
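As a sketch of such a connectivity check run from any existing node, assuming the new node's GPFS IP address is 192.168.10.109 (an example) and the existing nodes are gpfsnode01 to gpfsnode08:
1 # for node in gpfsnode0{1..8} ; do echo -n "$node: " ; ssh $node 'ping -c 1 -W 2 192.168.10.109 >/dev/null && echo OK || echo FAIL' ; done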

8.6.2 GPFS Part 1

1. The first step is to add /etc/hosts entries on every machine. Let's assume that the new nodes are the
9th and 10th nodes, with node09 going to the active site and node10 into the backup site. Distribute any
new nodes evenly into the existing failure groups (topology), so that a failure group has at most
one more node than the others, and put the backup server into the corresponding failure group on the backup site.
In the example above, the 9th node will go into failure group 1 (1,0,x) getting the topology vector
1,0,3 and the 10th node will go into failure group 3 (1,1,x) with topology vector 1,1,3.
On all existing nodes, add host entries for the GPFS network, e.g.:
1 192.168.10.109 gpfsnode09
2 192.168.10.110 gpfsnode10

On the new nodes add entries for all other nodes. Copying the entries from one of the existing
nodes is the easiest way.
First add host keys for the new nodes to the existing machines. Run on any existing node
1 # for srcnode in gpfsnode0{1..8} ; do echo node $srcnode ; ssh $srcnode 'for ←-
,→target in gpfsnode{09,10} ; do echo -n $target ; ssh-keygen -R $target ; ←-
,→ssh-keyscan -t rsa $target >> /root/.ssh/known_hosts ; done '; done

The value gpfsnode0{1..8} will generate a list from gpfsnode01 to gpfsnode08; if the host names differ
or are not consecutive, replace this with a space-separated list of host names. The same applies to
gpfsnode{09,10}, which are the new nodes in this example.
Then copy the root SSH key to the new nodes. Issue these commands on one of the existing cluster
nodes:
1 # scp /root/.ssh/authorized_keys /root/.ssh/id_rsa /root/.ssh/id_rsa.pub ←-
,→root@gpfsnode09:/root/.ssh/
2 # scp /root/.ssh/authorized_keys /root/.ssh/id_rsa /root/.ssh/id_rsa.pub ←-
,→root@gpfsnode10:/root/.ssh/

On all new cluster nodes run this command


1 # for node in gpfsnode{01..10} ; do echo -n $node ; ssh-keygen -R $node ; ssh-←-
,→keyscan -t rsa $node >> /root/.ssh/known_hosts ; done

Test the SSH key exchange by running this command on any node
1 # for srcnode in gpfsnode{01..10} ; do echo from node $srcnode ; ssh ←-
,→$srcnode 'for target in gpfsnode{01..10} ; do echo To node $target ; ssh ←-
,→$target hostname ; done '; done


The command should run without interaction and errors.


2. Install GPFS (base package):
1 # cd /var/tmp/install/gpfs-<GPFS-RELEASE>
2 # rpm -ivh gpfs.base-<GPFS-RELEASE>-0.x86_64.rpm

3. Update to the latest GPFS Maintenance Release


Warning
It is highly recommended to upgrade to GPFS 3.5.0-17 or higher.
Install the following three packages for the latest (X) maintenance release:
1 # rpm -ivh gpfs.docs-<GPFS-RELEASE>-X.noarch.rpm
2 # rpm -ivh gpfs.gpl-<GPFS-RELEASE>-X.noarch.rpm
3 # rpm -ivh gpfs.msg.en_US-<GPFS-RELEASE>-X.noarch.rpm

4. Verify your GPFS installation:


1 # rpm -qa | grep gpfs

The installed packaged from above should be listed here.


5. Build the GPFS Portability Layer
Follow the instructions in /usr/lpp/mmfs/src/README:
1 # cd /usr/lpp/mmfs/src
2 # make Autoconfig
3 # make World
4 # make InstallImages

6. To add the new nodes to the cluster run on any running node
1 # mmaddnode -N gpfsnode09,gpfsnode10

7. Mark the servers as licensed:


1 # mmchlicense fpo --accept -N gpfsnode09,gpfsnode10

Please use the correct license for the nodes. Server and FPO are just examples.
8. Start the new nodes
1 # mmstartup -N gpfsnode09,gpfsnode10

9. Create the disk descriptor files. Before adding the disks to the shared file system, you must create
the disk descriptor or stanza files. You can create them on any node in the cluster, but this is
preferably done on the node where the files for the initial cluster creation are located. Please see
chapter 8.4.3: GPFS Disk configuration on page 75 for a description of the stanza files. You only
need to create entries for the drives on the new nodes and you can omit the pool configuration
entries. Let us assume the new file is /var/mmfs/config/disk.list.data.gpfsnode0910 (a sketch of such a file follows after this list).
10. Create NSDs
1 # mmcrnsd -F /var/mmfs/config/disk.list.data.gpfsnode0910
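A sketch of what /var/mmfs/config/disk.list.data.gpfsnode0910 could look like for the example above, assuming a single HDD RAID device /dev/sdb per new node (add one %nsd entry per additional device, following the template in section 8.4.3):
1 %nsd: device=/dev/sdb
2 nsd=data01node09
3 servers=gpfsnode09
4 usage=dataAndMetadata
5 failureGroup=1,0,3
6 pool=system
7 %nsd: device=/dev/sdb
8 nsd=data01node10
9 servers=gpfsnode10
10 usage=dataAndMetadata
11 failureGroup=1,1,3
12 pool=system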


8.6.3 HANA Backup Node Installation

Skip this for a node on the active site. For the HANA installation on the backup site, we need a temporary
filesystem which must satisfy some requirements. RAM-based filesystems are not sufficient, so we use the
freshly created NSDs for a temporary filesystem, install the backup instance, and destroy the temporary
filesystem afterwards before continuing with the installation.
1. Create a temporary filesystem
1 /usr/lpp/mmfs/bin/mmcrfs sapmnttmp -F /var/mmfs/config/disk.list.data.←-
,→gpfsnode0910 -A no -B 1M -N 3000000 -v no -m 1 -M 3 -r 1 -R 3 -j hcluster ←-
,→--write-affinity-depth 1 -s failureGroupRoundRobin --block-group-factor 1←-
,→ -Q yes

Before continuing with the installation make sure that the GPFS file system sapmntdata is not
mounted at /sapmnt on the new nodes.
Mount this filesystem on all new backup nodes
1 mmmount sapmnttmp /sapmnt -N <new backup nodes>

2. Install HANA on backup site


In order to prepare the backup site, it is necessary to do a standard HANA installation and then
delete the installed content on the shared filesystem. A tool to automate this procedure is currently
in development by SAP.
Install SAP HANA on the backup site as described in the official SAP documentation available
here: http://help.sap.com/hana_appliance. The location of the SAP HANA installation files is
/var/tmp/saphana. Do a single node installation on each node. Make sure to use exactly the same
SAP SID, SAP instance number, user names, user IDs, group names, group IDs, and paths as in the
original DR HANA installation. You can use the command id to query user and group information.
3. Stop HANA and SAP Host agent on backup site
Log in as <SID>adm on one node and stop SAP HANA:
1 $ HDB stop

Then log in as root and stop SAP Host agent and other services:
1 # /etc/init.d/sapinit stop

Afterwards disable the autostart of the sapinit service


1 # chkconfig sapinit off

Do the last two steps on all backup nodes.


4. Delete SAP HANA shared content
5. Disable mmfsup script on backup site nodes. An installation with the Recovery Image will install
an mmfsup script which will automatically start SAP HANA after the file system comes up. This
must be deactivated as it may start SAP HANA on both sites (using the same hostnames).
The script resides in /var/mmfs/etc. Disable it on all new backup nodes.
1 # chmod 644 /var/mmfs/etc/mmfsup

6. Delete the temporary filesystem. After installing all new backup nodes, unmount the temporary filesystem
on all nodes


1 mmumount sapmnttmp -a

and delete it
1 mmdelfs sapmnttmp

This will delete all shared HANA content and will leave the node specific HANA parts installed.

8.6.4 GPFS Part 2

1. Add disks to sapmntdata filesystem


1 # mmadddisk sapmntdata -F /var/mmfs/config/disk.list.data.gpfsnode0910

2. Verify NSD status


Verify that all NSDs are up and running
1 # mmlsdisk sapmntdata

3. Mount GPFS on active


On the new active nodes and only on these, mount the GPFS file system
1 # mmmount sapmntdata -N gpfsnode09,gpfsnode10

GPFS setup is now complete.

8.6.5 HANA

8.6.5.1 Install HANA on active site


1. Please make sure that you have mounted the shared file system on the new nodes.
1 # mmlsmount sapmntdata -L

2. If not already installed, install the SAP host agent


1 # cd /var/tmp/install/saphana/DATA_UNITS/SAP_HOST_AGENT_LINUX_X64
2 # rpm -ihv saphostagent.rpm

As recommended by the RPM installation, a password for sapadm may be set.


3. Deactivate automatic startup of sapinit at system boot.
Running SAP's startup script during system boot must be deactivated as it will be executed
by a GPFS startup script after cluster start. Execute:
1 # chkconfig sapinit off

4. Install SAP HANA worker and standby nodes as described in the guide "SAP HANA Administration
Guide".
Warning
SAP HANA in this DR solution must be installed using the hostname of the HANA-
internal network (usually on bond1, hostname hananodeXX). The host based routing
used in the HA solution is not applicable for the DR solution.


8.7 Using Non Productive Instances on Inactive DR Site

IBM supports the installation of storage expansions in a DR scenario to allow clients to run a non-
productive SAP HANA instance on idling DR-site nodes. During normal operation in a DR scenario, all
nodes at one of the two sites only receive data from the active site and store it on their local disks.
SAP tolerates running a non-productive SAP HANA instance on those nodes. The local disks of the
nodes are used for production data. A storage expansion is used to provide enough local storage for those
non-productive instances.
In the event of a disaster, when the backup site becomes the active site, all non-productive SAP HANA
instances have to be shut down to allow production to continue to run.

8.7.1 Architecture

This section briefly explains how IBM enables the use of idling DR-site nodes to run non-productive
SAP HANA instances.

8.7.1.1 Prerequisites The use of a storage expansion is only supported in a DR scenario. No
expansions can be used when running in an HA environment unless they are part of the certified server
models.
All nodes on the DR-site must have a storage expansion connected. Having only a subset of the DR-site
nodes equipped with storage expansions is not a supported environment. Furthermore, all expansions
must have identical disk drives installed.
If the customer considers both participating data centers to be equal (which means that after a fail-over
of the production instances to the DR-site they will not manually fail production back to the site A data
center), then you must have storage expansions connected to all primary site nodes as well. These storage
expansions will remain unused until you actually need to move data away from DR-site nodes which are
now being used to host SAP HANA production instances.

8.7.1.2 Architectural overview Figure 33 shows what IBM's solution for SAP HANA
DR with storage expansions looks like.
The expansion storage is visible as local storage only and is connected via the SAS interface. The storage
is not shared by multiple nodes.
Attention
The external storage can only be used to host data of non-productive SAP HANA instances.
The storage must not be used to expand space of the production file system or to store
backups.

8.7.1.3 Architectural comments IBM only supports running GPFS with a replication factor of 2 for
the non-productive instance. This means that outages of a single node can be handled and no data is lost. We
do not support a replication factor of 3 because the scope of non-productive SAP HANA environments
does not include disaster recovery.
There will be exactly one new file system spanning all DR-site expansion box drives. While we do
not support a multi-SID configuration, it is a valid scenario to run, e.g., a QA environment on some DR-site
nodes and development on other DR-site nodes. This, however, has to be done on the same file
system.


Figure 33: SAP HANA DR using storage expansion - architectural overview (the production file system with three replicas spans the internal drives of all nodes at Site A and Site B; a second file system with two replicas, holding metadata and data, spans only the expansion box drives attached to the DR-site nodes)

IBM does not enable quotas on the new expansion box file system. Make sure to have either a valid
backup procedure in place or to regularly delete old backups.

8.7.2 Setup

This section assumes that the nodes have been successfully installed with an operating system already
(as required for a backup DR site).

8.7.2.1 Hardware setup Connect the EXP2524 SAS port labeled ’In’ to one of the M5025 ports.
For details, see the EXP2524 Installation Guide. Configure the drives as described in the section 6:
Guided Install of the Lenovo Solution on page 39. Either reboot or rescan the SCSI bus and verify that
Linux recognizes the new drives.

8.7.2.2 GPFS configuration You reuse the existing GPFS cluster and create a second file system
spanning only the expansion drives of the DR-site nodes.
Even if your setup includes expansions on the primary site, execute the procedure only on the DR-site
expansions. The primary site expansion drives will not be used in the beginning.
1. On each DR-site node, collect the device names of all expansion drives. When using the M5025
controller you can get the drive names with this command:
1 # lsscsi |grep "M5025" |grep -o -E "/dev/sd[a-z]+"

You will end up with something like:


1 /dev/sde
2 /dev/sdf
3 /dev/sdg
4 /dev/sdh


for each DR-site node. Note: After sdz, Linux wraps around and continues with sdaa, sdab, ...
2. Create additional NSDs
For all new expansion drives, create NSDs according to the following rules:
(a) all NSDs will be dataAndMetadata
(b) all NSDs go into the system pool
(c) naming scheme is extXXgpfsnodeYY with XX being the two-digit drive number and YY being
the node number
(d) One failure group for all drives within one expansion box
Example: three M-size nodes with 32-drive expansion (gpfsnode01-03 are primary site nodes, 04-06
are secondary site/DR-site nodes)
1 /dev/sde:gpfsnode04::dataAndMetadata:4:ext01gpfsnode04:system
2 /dev/sdf:gpfsnode04::dataAndMetadata:4:ext02gpfsnode04:system
3 /dev/sdg:gpfsnode04::dataAndMetadata:4:ext03gpfsnode04:system
4 /dev/sdh:gpfsnode04::dataAndMetadata:4:ext04gpfsnode04:system
5 /dev/sde:gpfsnode05::dataAndMetadata:5:ext01gpfsnode05:system
6 /dev/sdf:gpfsnode05::dataAndMetadata:5:ext02gpfsnode05:system
7 /dev/sdg:gpfsnode05::dataAndMetadata:5:ext03gpfsnode05:system
8 /dev/sdh:gpfsnode05::dataAndMetadata:5:ext04gpfsnode05:system
9 /dev/sde:gpfsnode06::dataAndMetadata:6:ext01gpfsnode06:system
10 /dev/sdf:gpfsnode06::dataAndMetadata:6:ext02gpfsnode06:system
11 /dev/sdg:gpfsnode06::dataAndMetadata:6:ext03gpfsnode06:system
12 /dev/sdh:gpfsnode06::dataAndMetadata:6:ext04gpfsnode06:system

Store as /tmp/nsdlistexp.txt. Then create NSDs using those disks


1 # mmcrnsd -F /tmp/nsdlistexp.txt

3. Create file system


1 # mmcrfs /dev/sapmntext -F /tmp/nsdlistexp.txt -A no -B 512k -N 3000000 -v no -←-
,→m 2 -M 2 -r 2 -R 2 -j hcluster --write-affinity-depth 1 -s ←-
,→failureGroupRoundRobin --block-group-factor=1 -T /sapmntext

Warning
Be sure to use nsdlistexp.txt and not your list with internal drives! Using the wrong
drives can destroy your production data!
4. Mount file system on DR-site nodes only.
1 # mmmount sapmntext -N [list of DR-site nodes]

5. Install SAP HANA worker and standby nodes as described in the guide "SAP HANA Administration
Guide". Take care to install HANA on /sapmntext and not on /sapmnt.
Also take care that you don’t use the UID (user id) and GID (group id) of the DR HANA instance
especially when installing non-productive HANA instances before installing the DR instance.
If you also have expansion boxes connected to your primary site nodes, they get activated only when you
need to migrate non-productive SAP HANA instances' data away from DR-site nodes. See the Lenovo
SAP HANA Appliance Operations Guide 15 for details.

15 SAP Note 1650046 (SAP Service Marketplace ID required)


When configuring a clustered configuration by hand, install SAP HANA worker and standby nodes as
described in the guide "SAP HANA Administration Guide".


9 Mixed eX5/X6 Environments

9.1 Mixed eX5/X6 HA Clusters

9.1.1 Definition & Overview

A mixed eX5/X6 cluster is a System x Solution for SAP HANA cluster consisting of eX5 based servers
(Intel Westmere, MT 7143 and 7147) and X6 based servers (Intel Ivybridge, MT 3837 and 6241). Another
term used is "hybrid cluster". Due to the new storage layout for X6-only installations, an X6 configuration
must be slightly modified before an X6 node can be added to an eX5 cluster. Such an X6 node is considered
to be configured in legacy or compatibility mode.
Besides the different storage layout, there are some minor configuration changes between the older West-
mere appliance releases and the first X6 appliance versions. These will be explained below. Future
releases will level the differences.

9.1.2 Prerequisites & Limitations

9.1.2.1 Limit of X6 nodes in a cluster The maximum number of X6 servers in an eX5 cluster
is limited by the number of eX5 servers within that cluster. The number of X6 servers must always be
less than the number of eX5 nodes. If you plan to use more X6 servers in a cluster, the only supported
options are either to increase the number of eX5 servers so that they still form the majority or to switch to
a pure X6 cluster, which requires a reinstallation.
For each eX5 server model there exists a corresponding X6 server model which is permitted as a replacement:

eX5 T-Shirt Size                             X6 Server Model
SSD (x3690, 7147-H3X, Generation 1)          AC32S256C (2 CPUs, 256GB RAM)
S (x3690, 7147-HBX, Generation 2)            AC32S256C (2 CPUs, 256GB RAM)
M (x3950, 7143-H2X or 7143-HBX)              AC34S512C (4 CPUs, 512GB RAM)
L (x3950, 7143-H3X or 7143-HBX)              AC48S1024C (8 CPUs, 1024GB RAM)

Table 37: eX5 T-Shirt Size to X6 Model Mapping

9.1.2.2 Prerequisites Before deploying any X6 server to an eX5 cluster, the GPFS filesystem soft-
ware on the eX5 servers must be updated to the same version installed on the X6 models. The minimum
supported GPFS versions for the cluster are GPFS 3.5 PTF 19 (3.5.0-19) or GPFS 4.1 PTF 2 (4.1.0.2)
which may require an update even on the X6 nodes. Alternatively PTF 17 (3.5.0-17) with eFix 8 can be
used. Contact IBM support to obtain this eFix. Do not use plain 3.5.0-17 without eFix 8!
It is required to use only eX5 servers installed with appliance version 1.6.60-7 or later, which introduced
RAID5 in cluster configurations. The RAID5 setup is perceived as being more reliable and convenient
than the previously used RAID0 configuration. When installing a new cluster please use appliance version
1.6.60-7 or later for the eX5 servers.
Appliance versions 1.6.60-7 and later contain a helper script for calculating the necessary file system
quotas. In a hybrid cluster please use the script on the eX5 cluster node installed with the latest appliance
version. If this script is not available, please calculate the quotas manually following the instructions in
the appendix of the eX5 Operations Guide.
Since appliance version 1.7.70-9 an updated quota calculation helper script is installed which can detect a
hybrid cluster environment, enabling it to use the correct formulas even when called on X6 nodes.
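A minimal sketch for verifying the installed GPFS level on every node before deploying an X6 server (the gpfsnodeXX names and the node count are examples and must be adapted):
1 # for node in gpfsnode0{1..4} ; do echo -n "$node: " ; ssh $node "rpm -q gpfs.base" ; done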


9.1.3 New Installation

In general, the installation and operation instructions for eX5 and X6-based servers remain valid. For
eX5 servers, please use the installation description in Lenovo eX5 Systems Solution for SAP HANA -
Implementation Guide 16 .
For the installation of the X6 server, please use the Lenovo X6 Systems Solution for SAP HANA -
Implementation Guide for System x X6 Servers 17 and read the instructions below. Please read these
instructions before installing the new server and take care to implement them correctly.

9.1.3.1 Partitioning for M/L sized clusters For X6 nodes in M/L (x3950 based) clusters the first
internal RAID array needs to be partitioned at the OS level. After finishing the base installation in phase
2, login to the server and run
1 # parted /dev/sdb --script mklabel gpt unit gib mkpart system1 ext2 "" 0 1675 mkpart←-
,→ system2 ext2 "" 1675 3350

For SSD/S sized clusters this is not necessary.

9.1.3.2 Adapting the GPFS stanza file After configuring the base system and the subsequent
reboot in phase 2 of the installation, the GPFS stanza files need to be adapted to the older eX5 storage
layout. For S/SSD model based clusters no change is needed as these models use only one GPFS storage
pool like the new X6 models. In clusters based on x3950 models, storage is divided into two GPFS
storage pools. The new X6 servers must provide these two storage pools in order to be compatible. This
is achieved by assigning the internal RAID array to the GPFS storage pool system and assigning the 2nd
RAID array in the external SAS enclosure (AC34S512C) resp. in the upper storage book (AC48S1024C)
to the storage pool hddpool.
Edit the stanza file (/var/mmfs/config/disk.list.data.gpfsnode*) on all X6 nodes and change the
usage and pool parameters as shown in the following table:

16 able to be downloaded from https://apps.na.collabserv.com/communities/service/html/communitystart?

communityUuid=b44ac6f8-c759-4148-b810-96aa5f9b0f94#fullpageWidgetId=W4af14746a85b_478c_833a_d644c42a28fa
17 able to be downloaded from https://apps.na.collabserv.com/communities/service/html/communitystart?

communityUuid=b44ac6f8-c759-4148-b810-96aa5f9b0f94#fullpageWidgetId=W4af14746a85b_478c_833a_d644c42a28fa


Model AC32S256C (S/SSD): generated file (no change required)

%nsd: device=/dev/sdb
  nsd=data01node04
  servers=gpfsnode04
  usage=dataAndMetadata
  failureGroup=1004
  pool=system

Model AC34S512C (M): generated file

%nsd: device=/dev/sdb
  nsd=data01node04
  servers=gpfsnode04
  usage=dataAndMetadata
  failureGroup=1004
  pool=system
%nsd: device=/dev/sdc
  nsd=data02node04
  servers=gpfsnode04
  usage=dataAndMetadata
  failureGroup=1004
  pool=system

Model AC34S512C (M): change to

%nsd: device=/dev/sdb1
  nsd=MDdata01node04
  servers=gpfsnode04
  usage=dataAndMetadata
  failureGroup=1004
  pool=system
%nsd: device=/dev/sdb2
  nsd=MDdata02node04
  servers=gpfsnode04
  usage=dataAndMetadata
  failureGroup=1004
  pool=system
%nsd: device=/dev/sdc
  nsd=data01node04
  servers=gpfsnode04
  usage=dataOnly
  failureGroup=1004
  pool=hddpool

Model AC48S1024C (L): generated file

%nsd: device=/dev/sdb
  nsd=data01node04
  servers=gpfsnode04
  usage=dataAndMetadata
  failureGroup=1004
  pool=system
%nsd: device=/dev/sdc
  nsd=data02node04
  servers=gpfsnode04
  usage=dataAndMetadata
  failureGroup=1004
  pool=system
%nsd: device=/dev/sdd
  nsd=data03node04
  servers=gpfsnode04
  usage=dataAndMetadata
  failureGroup=1004
  pool=system

Model AC48S1024C (L): change to

%nsd: device=/dev/sdb1
  nsd=MDdata01node04
  servers=gpfsnode04
  usage=dataAndMetadata
  failureGroup=1004
  pool=system
%nsd: device=/dev/sdb2
  nsd=MDdata02node04
  servers=gpfsnode04
  usage=dataAndMetadata
  failureGroup=1004
  pool=system
%nsd: device=/dev/sdc
  nsd=data01node04
  servers=gpfsnode04
  usage=dataOnly
  failureGroup=1004
  pool=hddpool
%nsd: device=/dev/sdd
  nsd=data02node04
  servers=gpfsnode04
  usage=dataOnly
  failureGroup=1004
  pool=hddpool

Table 38: Stanza file for X6 servers in eX5 clusters


Please set the nsd, servers and failureGroup to their correct values.
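As an illustration only: for an additional X6 node with the GPFS hostname gpfsnode05 (all names and the failure group below are placeholder values and must be adapted to your cluster), the first metadata NSD stanza of an AC34S512C model could look like this:

%nsd: device=/dev/sdb1
  nsd=MDdata01node05
  servers=gpfsnode05
  usage=dataAndMetadata
  failureGroup=1005
  pool=system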
Complete the installation as described in the eX5 Implementation Guide and run phase 3 (of the cluster
configuration) from any eX5 node. Do not run the cluster configuration on an X6 machine as this will
result in a misconfigured cluster. It is safe to install the whole cluster including the X6 servers from any
eX5 node.

9.1.3.3 Enable automatic restripe for whole cluster eX5 models up to appliance software version
1.6.60-7 installed a script which attempts to start all NSDs and restripes the GPFS filesystem if any NSD
was not up. This script was installed as a GPFS callback which gets triggered upon every node start. Since
appliance version 1.7.70-8 the script and the callback are no longer installed; they are replaced by a GPFS-internal
restripe mechanism. The GPFS-internal restripe is enabled by setting the cluster configuration
value restripeOnDiskFailure=yes.
In a mixed cluster you must delete the callback and enable the new GPFS internal restripe.
Deactivate the callback and enable the automatic restripe with the following commands:
# mmdelcallback start-disks-on-startup
# mmchconfig restripeOnDiskFailure=yes

Both commands need to be run only once on any active cluster node.
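To confirm the change, you can afterwards list the remaining callbacks and the configuration value (a simple sanity check):

# The callback start-disks-on-startup should no longer be listed
mmlscallback
# Should report restripeOnDiskFailure yes
mmlsconfig restripeOnDiskFailure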

9.1.4 Existing Cluster Extension/Node Replacement

When expanding a mixed cluster with additional eX5 servers, please follow the instructions in the eX5
Implementation & Operations Guides.
No special handling is required besides using the saphana-quota-calculation.sh script only on eX5
nodes or X6 nodes installed with appliance version 1.7.70-9 or later. Do not run the quota calculator on
any X6 node installed with appliance version 1.7.70-8.
When adding new X6 nodes to an existing hybrid cluster or an eX5-only cluster, please install the X6
nodes according to the X6 Implementation Guide18 . After Phase 2 (the basic configuration) adapt the
generated stanza file on each node before adding these nodes to the cluster.
Edit the stanza file (/var/mmfs/config/disk.list.data.gpfsnode*) on the X6 nodes and change the
usage and pool parameters as shown in the following table.

18 able to be downloaded from https://apps.na.collabserv.com/communities/service/html/communitystart?


communityUuid=b44ac6f8-c759-4148-b810-96aa5f9b0f94#fullpageWidgetId=W4af14746a85b_478c_833a_d644c42a28fa


Model AC32S256C (S/SSD): generated file (no change required)

%nsd: device=/dev/sdb
  nsd=data01node04
  servers=gpfsnode04
  usage=dataAndMetadata
  failureGroup=1004
  pool=system

Model AC34S512C (M): generated file

%nsd: device=/dev/sdb
  nsd=data01node04
  servers=gpfsnode04
  usage=dataAndMetadata
  failureGroup=1004
  pool=system
%nsd: device=/dev/sdc
  nsd=data02node04
  servers=gpfsnode04
  usage=dataAndMetadata
  failureGroup=1004
  pool=system

Model AC34S512C (M): change to

%nsd: device=/dev/sdb1
  nsd=MDdata01node04
  servers=gpfsnode04
  usage=dataAndMetadata
  failureGroup=1004
  pool=system
%nsd: device=/dev/sdb2
  nsd=MDdata02node04
  servers=gpfsnode04
  usage=dataAndMetadata
  failureGroup=1004
  pool=system
%nsd: device=/dev/sdc
  nsd=data01node04
  servers=gpfsnode04
  usage=dataOnly
  failureGroup=1004
  pool=hddpool

Model AC48S1024C (L): generated file

%nsd: device=/dev/sdb
  nsd=data01node04
  servers=gpfsnode04
  usage=dataAndMetadata
  failureGroup=1004
  pool=system
%nsd: device=/dev/sdc
  nsd=data02node04
  servers=gpfsnode04
  usage=dataAndMetadata
  failureGroup=1004
  pool=system
%nsd: device=/dev/sdd
  nsd=data03node04
  servers=gpfsnode04
  usage=dataAndMetadata
  failureGroup=1004
  pool=system

Model AC48S1024C (L): change to

%nsd: device=/dev/sdb1
  nsd=MDdata01node04
  servers=gpfsnode04
  usage=dataAndMetadata
  failureGroup=1004
  pool=system
%nsd: device=/dev/sdb2
  nsd=MDdata02node04
  servers=gpfsnode04
  usage=dataAndMetadata
  failureGroup=1004
  pool=system
%nsd: device=/dev/sdc
  nsd=data01node04
  servers=gpfsnode04
  usage=dataOnly
  failureGroup=1004
  pool=hddpool
%nsd: device=/dev/sdd
  nsd=data02node04
  servers=gpfsnode04
  usage=dataOnly
  failureGroup=1004
  pool=hddpool

Table 39: Stanza file for X6 servers in eX5 clusters


Please set the nsd, servers and failureGroup to their correct values.
Follow the normal instructions given in the eX5 Operations Guide in chapter 4.2 Adding a cluster node.
Afterwards either run the quota calculation script from any eX5 node or from any X6 node installed with
appliance version 1.7.70-9 or later, or do the manual calculation described in the appendix section of the
eX5 Operations Guide.

9.1.5 Deviating Operation Instructions

In general the eX5 Operations Guide is applicable for the whole cluster including the new X6 servers.

9.1.5.1 Quota Calculation eX5 based servers use two so-called filesets for a logical separation
of the HANA data volumes and log files. Each fileset is limited by a quota. X6 servers use three filesets
to separate the HANA data volumes, the log files and the shared parts (such as binaries, configuration, traces and backups).
When using X6 servers in an eX5 cluster, the two-fileset setup is used on all nodes, so for the quotas the
eX5 version of the Operations Guide is applicable. The quota calculation is explained in the appendix
of that guide. On any eX5 node and on X6 nodes with appliance version 1.7.73-9 you can use the quota
calculation script saphana-quota-calculator.sh. The usage of this script is also documented in the
quota chapter in the appendix.
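Independent of how the quotas were calculated, the values that are actually in effect can be reviewed with the standard GPFS quota commands, for example:

# Quota report for all filesets of the shared file system
mmrepquota -j sapmntdata
# Quota of a single fileset, e.g. the HANA data fileset
mmlsquota -j hanadata sapmntdata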

9.1.5.2 HANA installation When installing additional SAP HANA instances or reinstalling SAP
HANA, SAP HANA must be installed into /sapmnt as described in the eX5 documentation.

9.1.5.3 Storage Device Failure For any failed storage device in an eX5 based node, the Implementation
& Operation Guides for eX5 are fully applicable.
For X6 based nodes please use the Operation Guide for X6. The only difference in handling is that the
stanza files given in 9.1.4: Existing Cluster Extension/Node Replacement on page 92 must be used.
Please also ensure that CacheCade acceleration is enabled for newly created RAID devices on X6.

9.2 Mixed eX5/X6 DR Clusters

9.2.1 Definition & Overview

A mixed eX5/X6 DR cluster is a Lenovo Solution DR-enabled cluster consisting of eX5 based servers (Intel
Westmere, MT 7143 and 7147) and X6 based servers (Intel Ivybridge, MT 3837 and 6241). Another term
used is "hybrid DR cluster". Due to the new storage layout for X6-only installations, an X6 configuration
must be slightly modified before an X6 node can be added to an eX5 cluster. Such an X6 node is
considered to be configured in legacy or compatibility mode.
Besides the different storage layout, there are some minor configuration differences between the older Westmere
appliance releases and the first X6 appliance versions. These will be explained below. Future
releases will level out these differences.

9.2.2 Prerequisites & Limitations

9.2.2.1 Limit of X6 nodes in a cluster The maximum number of X6 servers in an eX5 DR cluster
is limited by the number of eX5 servers within that cluster. The number of X6 servers must always be
less than the number of eX5 nodes. If you plan to use more X6 servers in a cluster, the only supported


options are either to increase the number of eX5 servers so that they remain the majority, or to switch to
a pure X6 cluster, which requires a reinstallation.
For DR clusters we require that both sites (primary & secondary) consist only of eX5 servers, only
of X6 servers, or of a mix of eX5 and X6 servers where the eX5 servers have the majority on each site.
For example these combinations are allowed:
• Primary site: 6 eX5, secondary site: 6 X6 servers
This is allowed as no site is mixed.
• Primary site: 6 eX5, secondary site: 4 eX5 & 2 X6 servers
This is allowed as the first site is not mixed and the eX5 have the majority on the secondary site.
• Primary site: 4 eX5 & 3 X6, secondary site: 6 eX5 & 1 X6
Both sites are mixed, but on each site the eX5 servers are the majority.
These combinations are not allowed:
• Primary site: 3 eX5 & 3 X6, secondary site: 6 eX5 servers
This is not allowed as on the first site the eX5 servers are not the majority.
• Primary site: 4 eX5 & 3 X6, secondary site: 6 eX5 servers
The eX5 servers are the majority on both sites, but the sites differ in size.
For each eX5 server model there is a corresponding X6 server model that is permitted as a replacement:

eX5 T-Shirt Size                        X6 Server Model

SSD (x3690, 7147-H3X, Generation 1)     AC32S256C (2 CPUs, 256GB RAM)
S   (x3690, 7147-HBX, Generation 2)     AC32S256C (2 CPUs, 256GB RAM)
M   (x3950, 7143-H2X or 7143-HBX)       AC34S512C (4 CPUs, 512GB RAM)
L   (x3950, 7143-H3X or 7143-HBX)       AC48S1024C (8 CPUs, 1024GB RAM)

Table 40: eX5 T-Shirt Size to X6 Model Mapping

9.2.2.2 Prerequisites Before deploying any X6 server to an eX5 cluster, the GPFS filesystem soft-
ware on the eX5 servers must be updated to the same version installed on the X6 models. The minimum
supported GPFS versions for hybrid DR clusters are GPFS 3.5 PTF 19 (3.5.0-19) or GPFS 4.1 PTF 2
(4.1.0.2) which may require an update even on the X6 nodes. Alternatively PTF 17 (3.5.0-17) with eFix
8 can be used. Contact IBM support to obtain this eFix. Do not use plain 3.5.0-17 without eFix 8!
It is required to use only eX5 servers installed with appliance version 1.6.60-7 or later, which introduced
RAID5 in cluster configurations. The RAID5 setup is perceived as being more reliable and convenient
than the previously used RAID0 configuration. When installing a new cluster please use appliance version
1.6.60-7 or later for the eX5 servers.
Appliance versions 1.6.60-7 and later contain a helper script for calculating the necessary file system
quotas. In a hybrid cluster please use the script on the eX5 cluster node installed with the latest appliance
version. If this script is not available, please calculate the quotas manually following the instructions in
the appendix of the eX5 Operations Guide.
Since appliance version 1.7.70-9, an updated quota calculation helper script is installed that can detect a
hybrid cluster environment, enabling it to use the correct formulas even when called on X6 nodes.


9.2.2.3 Partitioning for M/L sized clusters For X6 nodes in M/L (x3950 based) clusters the first
internal RAID array needs to be partitioned at the OS level. After finishing the base installation in phase
2, login to the server and run
# parted /dev/sdb --script mklabel gpt unit gib mkpart system1 ext2 "" 0 1675 mkpart system2 ext2 "" 1675 3350

For SSD/S sized clusters this is not necessary.

9.2.2.4 Adapting the GPFS stanza file After configuring the base system and the subsequent
reboot in phase 2 of the installation, the GPFS stanza files need to be adapted to the older eX5 storage
layout. For S/SSD model based clusters no change is needed as these models use only one GPFS storage
pool like the new X6 models. In clusters based on x3950 models, storage is divided into two GPFS
storage pools. The new X6 servers must provide these two storage pools in order to be compatible. This
is achieved by assigning the internal RAID array to the GPFS storage pool system and assigning the second
RAID array (located in the external SAS enclosure on AC34S512C or in the upper storage book on AC48S1024C)
to the storage pool hddpool.
Edit the stanza file (/var/mmfs/config/disk.list.data.gpfsnode*) on all X6 nodes and change the
usage and pool parameters as shown in the following table:


Model AC32S256C (S/SSD): generated file (no change required)

%nsd: device=/dev/sdb
  nsd=data01node04
  servers=gpfsnode04
  usage=dataAndMetadata
  failureGroup=1,0,4
  pool=system

Model AC34S512C (M): generated file

%nsd: device=/dev/sdb
  nsd=data01node04
  servers=gpfsnode04
  usage=dataAndMetadata
  failureGroup=1,0,4
  pool=system
%nsd: device=/dev/sdc
  nsd=data02node04
  servers=gpfsnode04
  usage=dataAndMetadata
  failureGroup=1,0,4
  pool=system

Model AC34S512C (M): change to

%nsd: device=/dev/sdb1
  nsd=MDdata01node04
  servers=gpfsnode04
  usage=dataAndMetadata
  failureGroup=1,0,4
  pool=system
%nsd: device=/dev/sdb2
  nsd=MDdata02node04
  servers=gpfsnode04
  usage=dataAndMetadata
  failureGroup=1,0,4
  pool=system
%nsd: device=/dev/sdc
  nsd=data01node04
  servers=gpfsnode04
  usage=dataOnly
  failureGroup=1,0,4
  pool=hddpool

Model AC48S1024C (L): generated file

%nsd: device=/dev/sdb
  nsd=data01node04
  servers=gpfsnode04
  usage=dataAndMetadata
  failureGroup=1,0,4
  pool=system
%nsd: device=/dev/sdc
  nsd=data02node04
  servers=gpfsnode04
  usage=dataAndMetadata
  failureGroup=1,0,4
  pool=system
%nsd: device=/dev/sdd
  nsd=data03node04
  servers=gpfsnode04
  usage=dataAndMetadata
  failureGroup=1,0,4
  pool=system

Model AC48S1024C (L): change to

%nsd: device=/dev/sdb1
  nsd=MDdata01node04
  servers=gpfsnode04
  usage=dataAndMetadata
  failureGroup=1,0,4
  pool=system
%nsd: device=/dev/sdb2
  nsd=MDdata02node04
  servers=gpfsnode04
  usage=dataAndMetadata
  failureGroup=1,0,4
  pool=system
%nsd: device=/dev/sdc
  nsd=data01node04
  servers=gpfsnode04
  usage=dataOnly
  failureGroup=1,0,4
  pool=hddpool
%nsd: device=/dev/sdd
  nsd=data02node04
  servers=gpfsnode04
  usage=dataOnly
  failureGroup=1,0,4
  pool=hddpool

Table 41: Stanza file for X6 servers in eX5 clusters


Please set the nsd, servers and failureGroup to their correct values.
Complete the installation as described in the chapter "Disaster Recovery" in the Implementation Guide
for eX5.

9.2.3 Existing Cluster Extension/Node Replacement

When expanding a mixed cluster with additional eX5 servers, please follow the instructions in the Disaster
Recovery sections of the eX5 Implementation & Operations Guides. Do not run the quota calculator on
any X6 node installed with appliance version 1.7.70-8.
When adding new X6 nodes to an existing hybrid cluster or an eX5-only cluster, please install the X6
nodes according to the X6 Implementation Guide19 . After Phase 2 (the basic configuration) adapt the
generated stanza file on each node before adding these nodes to the cluster.
Edit the stanza file (/var/mmfs/config/disk.list.data.gpfsnode*) on the X6 nodes and change the
usage and pool parameters as shown in the following table.

19 able to be downloaded from https://apps.na.collabserv.com/communities/service/html/communitystart?


communityUuid=b44ac6f8-c759-4148-b810-96aa5f9b0f94#fullpageWidgetId=W4af14746a85b_478c_833a_d644c42a28fa


Model AC32S256C (S/SSD): generated file (no change required)

%nsd: device=/dev/sdb
  nsd=data01node04
  servers=gpfsnode04
  usage=dataAndMetadata
  failureGroup=1,0,4
  pool=system

Model AC34S512C (M): generated file

%nsd: device=/dev/sdb
  nsd=data01node04
  servers=gpfsnode04
  usage=dataAndMetadata
  failureGroup=1,0,4
  pool=system
%nsd: device=/dev/sdc
  nsd=data02node04
  servers=gpfsnode04
  usage=dataAndMetadata
  failureGroup=1,0,4
  pool=system

Model AC34S512C (M): change to

%nsd: device=/dev/sdb1
  nsd=MDdata01node04
  servers=gpfsnode04
  usage=dataAndMetadata
  failureGroup=1,0,4
  pool=system
%nsd: device=/dev/sdb2
  nsd=MDdata02node04
  servers=gpfsnode04
  usage=dataAndMetadata
  failureGroup=1,0,4
  pool=system
%nsd: device=/dev/sdc
  nsd=data01node04
  servers=gpfsnode04
  usage=dataOnly
  failureGroup=1,0,4
  pool=hddpool

Model AC48S1024C (L): generated file

%nsd: device=/dev/sdb
  nsd=data01node04
  servers=gpfsnode04
  usage=dataAndMetadata
  failureGroup=1,0,4
  pool=system
%nsd: device=/dev/sdc
  nsd=data02node04
  servers=gpfsnode04
  usage=dataAndMetadata
  failureGroup=1,0,4
  pool=system
%nsd: device=/dev/sdd
  nsd=data03node04
  servers=gpfsnode04
  usage=dataAndMetadata
  failureGroup=1,0,4
  pool=system

Model AC48S1024C (L): change to

%nsd: device=/dev/sdb1
  nsd=MDdata01node04
  servers=gpfsnode04
  usage=dataAndMetadata
  failureGroup=1,0,4
  pool=system
%nsd: device=/dev/sdb2
  nsd=MDdata02node04
  servers=gpfsnode04
  usage=dataAndMetadata
  failureGroup=1,0,4
  pool=system
%nsd: device=/dev/sdc
  nsd=data01node04
  servers=gpfsnode04
  usage=dataOnly
  failureGroup=1,0,4
  pool=hddpool
%nsd: device=/dev/sdd
  nsd=data02node04
  servers=gpfsnode04
  usage=dataOnly
  failureGroup=1,0,4
  pool=hddpool

Table 42: Stanza file for X6 servers in eX5 clusters


Please set the nsd, servers and failureGroup to their correct values.
Follow the normal instructions given in the eX5 Operations Guide in chapter 4.2 Adding a cluster node.
Afterwards either run the quota calculation script from any eX5 node or from any X6 node installed with
appliance version 1.7.70-9 or later, or do the manual calculation described in the appendix section of the
eX5 Operations Guide.

9.2.4 Deviating Operation Instructions

In general the eX5 Operations Guide is applicable for the whole cluster including the new X6 servers.

9.2.4.1 Quota Calculation eX5 based servers use two so-called filesets for a logical separation
of the HANA data volumes and log files. Each fileset is limited by a quota. X6 servers use three filesets
to separate the HANA data volumes, the log files and the shared parts (such as binaries, configuration, traces and backups).
When using X6 servers in an eX5 cluster, the two-fileset setup is used on all nodes, so for the quotas the
eX5 version of the Operations Guide is applicable. The quota calculation is explained in the appendix
of that guide. On any eX5 node and on X6 nodes with appliance version 1.7.73-9 you can use the quota
calculation script saphana-quota-calculator.sh. The usage of this script is also documented in the
quota chapter in the appendix.

9.2.4.2 HANA installation When installing additional SAP HANA instances or reinstalling SAP
HANA, SAP HANA must be installed into /sapmnt as described in the eX5 documentation.

Note
In the DR solution a quota is set only for the hanalog fileset.

9.2.4.3 Storage Device Failure For any failed storage device in an eX5 based node, the Implementation
& Operation Guides for eX5 are fully applicable.
For X6 based nodes please use the Operation Guide for X6. The only difference in handling is that the
stanza files given in 9.2.3: Existing Cluster Extension/Node Replacement on page 98 must be used.
Please also ensure that CacheCade acceleration is enabled for newly created RAID devices on X6.


10 Special Single Node Installation Scenarios


This section covers installations that consist of just one single node in production and need to have HA
or DR features using SAP System Replication or IBM GPFS Storage replication.

10.1 Single Node with HA Installation with Side-car Quorum Solution

A single node with high availability (HA) describes the smallest possible configuration for a highly
available Lenovo solution for a SAP HANA system. In principle, this can be described as a cluster where
only a single node is highly available, since there is only one SAP HANA worker node. There is no
distribution of information across the nodes as there is no secondary worker node attached. Figure 34:
Single Node with High Availability on page 101 shows a high level overview of the system landscape with
two SAP HANA appliances and an IBM GPFS Quorum node.

[Diagram: worker node, standby node and GPFS quorum node connected through two G8264 switches with GPFS links, SAP HANA links and an inter-switch link (ISL)]

Figure 34: Single Node with High Availability

The major difference between a single node HA configuration and larger scale out clusters is the require-
ment to have a third node to build a quorum for the IBM GPFS file system. Therefore, the smallest
possible setup needs to contain three nodes: two Lenovo Workload Optimized Systems for SAP HANA
and one quorum node. The third node can, e.g., be a plain Lenovo System x3550 M4 system. The
described solution implements a simple 1U server as quorum node for IBM GPFS. This node does not
contribute any data disks to the file system, but it does contribute to the IBM GPFS cluster. The file
system layout is shown in Figure 35: File System Layout - Single Node HA on page 102.


[Diagram: shared file system across node1, node2 and node3 (quorum); node1 and node2 hold data (first and second replica), metadata and a file system descriptor on their HDDs (failure groups FG1 and FG2, locality group LG1), node3 holds only a file system descriptor (FG3); the OS resides on sda1/sda2 of each node]

Figure 35: File System Layout - Single Node HA

10.1.1 Installation of SAP HANA appliance single node with HA

To begin the installation, you need to install both Lenovo Workload Optimized Systems using the steps
at the beginning of chapter 6: Guided Install of the Lenovo Solution on page 39. Configure the network
interfaces (internal and external) and the NTP server(s) as described there.
1. Start the text based installer as follows on each of the two nodes:
saphana-setup-saphana.sh -H
The switch -H prevents SAP HANA from being installed automatically. This needs to be done
manually later. Refer to the steps as stated in section 6.6.2.2: Cluster Installation on page 62
together with the steps described below. Accept the license shown by pressing "1" and hit Enter.
2. Select "Cluster (worker)". This does a basic installation as a cluster node. Enter external the FQDN
as external hostname and accept the IBM GPFS license by pressing "1" and hit Enter. Continue
until the installer finishes successfully.
3. Start the installer again as above with the option -H, this time only on the future master node.
This time select "Cluster (Master)". Enter details for SID, Instance ID and a HANA password. Enter
2 for the number of nodes and 1 for the number of standby nodes. Ensure that the IP addresses for the IBM GPFS
and SAP HANA network are correct. Accept the IBM GPFS license and wait for the installation
process to continue successfully.


4. Then the quorum node will be manually installed and configured to include its own IBM GPFS
NSD to the file system cluster.

10.1.2 Prepare quorum node

The quorum node can be, e.g., a Lenovo System x3550 M4 with a single CPU and three local disks
in a RAID5 configuration. It also contains an Emulex Virtual Fabric Adapter II with two 10
Gigabit Ethernet ports. We recommend the following server as the quorum node because it offers the best
price/performance for this role. Bigger systems only increase the cost of the GPFS license and are
not needed. See table 43 on page 103.

Part Number  System x3550 M4 GPFS quorum node                                    Qty.

7914B2G      x3550 M4, Xeon 4C E5-2609 80W 2.4GHz/1066MHz/10MB, 1X4GB,            1
             O/Bay 2.5in & HS SAS/SATA, SR M1115, 550W p/s, Rack
49Y1397      8GB (1x8GB, 2Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP RDIMM    6
90Y8877      IBM 300GB 2.5in SFF 10K 6Gbps HS SAS HDD                             3
81Y4481      ServeRAID M5110 SAS/SATA Controller for IBM System x                 1
90Y6456      Emulex Dual Port 10GbE SFP+ Embedded VFA III for IBM System x        1
81Y4487      ServeRAID M5100 Series 512MB Flash/RAID 5 Upgrade for IBM System x   1
00D7087      Express IBM System x 550W High Efficiency Platinum AC Power Supply   1
00D8042      SLES X86 2 Socket Std SUSE Support 3Yr                               1
68Y9124      IBM GPFS for x86 Architecture, GPFS Server Per 10 VUs w/1 Yr SW S&S  28
46M0902      IBM UltraSlim Enhanced SATA Multi-Burner                             1
69Y5681      x3550 M4 ODD Cable                                                   1
90Y3901      IBM Integrated Management Module Advanced Upgrade                    1
91Y6450      3yr Essentials HW and SW Support                                     1
39Y7932      4.3m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable              2

Table 43: IBM System x3550 M4 GPFS quorum node

10.1.2.1 Install the Operating System You may use SLES 11 to install the OS on this machine
using the default settings. While installing Linux, please select the pattern "C/C++ Compiler and
Tools" in addition to the default selection of software. If you do not do this at install time, then open
the YaST software panel and install the above pattern before installing and compiling GPFS.
Note
SLES 11 does not contain RAID drivers for the IBM ServeRAID M5110 RAID controller (see
table 43). In order to install this driver at the same time, you must prepare a USB drive
with the appropriate ServeRAID device update driver (dud) file that can be found on IBM
FixCentral. Install it during the installation by pressing <F6> at the boot splash screen. Please
refer to the driver README instructions for further details.

20 SUSE Linux Enterprise Server


Note
We recommend always using the latest version of SLES for the quorum node.
You can download the IBM ServeRAID drivers from IBM support sites: e.g. http://ibm.com/support/
entry/portal/docdisplay?lndocid=migr-5082165. If you install using the SLES for SAP Applications
11 DVD, you will be able to install with this dud file, but you will not be able to reboot the system as
the device driver that was used for the installation is not compatible with the newer kernel delivered on the SLES
for SAP Applications 11 installation media. Therefore we do not recommend using the SLES for SAP
Applications 11 installation media for this server.

10.1.2.2 Disk partitioning The SLES 11 installation media will automatically partition your hard
drive if you do not remove the boot option "autoyast=usb:///" completely. Although this is not dramatic,
it would mean that you would have to use a tool like gparted afterwards to resize the partitions to the layout
below; this procedure is not described in this document.
We recommend removing the boot option "autoyast=usb:///" completely and manually configuring the
partitions as described in Table 44: Single Node with HA OS Partitioning on page 104.

Device Size Mount point


/dev/sda1 rest /
/dev/sda2 10GB swap
/dev/sda3 10GB not mounted - not formatted - used for GPFS NSD

Table 44: Single Node with HA OS Partitioning

10.1.2.3 Firewall Disable the integrated firewall during the network configuration steps or else you
won’t be able to connect to the server until the firewall has been configured correctly. This may be turned
on and configured according to the SAP HANA Security Guidelines.
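On SLES 11 the firewall can, for example, be stopped and disabled as follows (a sketch; the same can be done through YaST):

# Stop the firewall and disable it for future boots
rcSuSEfirewall2 stop
chkconfig SuSEfirewall2_setup off
chkconfig SuSEfirewall2_init off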

10.1.3 Quorum Node Network Setup

Follow the information in table 45: Single Node with HA OS Networking Setup on page 104 to set up the
networking during the OS installation.

Network          Description

10GbE port 0     Connect 10GigE port to the first G8264 switch
10GbE port 1     Connect 10GigE port to the second G8264 switch
bond0            Bond port 0 and port 1 together; set the bonding options to:
                 mode=4 xmit_hash_policy=layer3+4
Host Name        gpfsnode99
GPFS IP address  Place at the end of the range (e.g. 192.168.10.253)
HANA IP address  Not needed as this node will not run SAP HANA

Table 45: Single Node with HA OS Networking Setup

Figure 36 on page 105 shows the typical network setup for a single node with HA cluster. Deviations are
possible for the management, client access and ERP replication networks depending on the real customer
requirements.


[Diagram: network switch setup for the single node with HA appliance; node1, node2 and the quorum node connect their 10GbE SAP HANA and GPFS interfaces and 1GbE IMM interfaces to the two G8264 switches (joined by inter-switch links), with uplinks to customer switches for SAP clients, system management and the SAP Business Suite network]

Figure 36: Network Switch Setup for Single Node with HA

10.1.3.1 Switch configuration The network switches need to be configured in the standard scale-
out configuration, described in section 5.6.5: Network Configurations in a Clustered Environment on page
28. The 10GigE connections of the additional quorum node will be configured as an extension to the
existing vLAG configuration. The ports of the new network links need to be added to the correct VLANs
and the vLAG and LACP settings need to be made.

Description G8264 Switch #1 G8264 Switch #2


ports 22 22
vLAG - LACP key 1002 1002
PVID 101 101

Table 46: Single Node with HA Network Switch Definitions

10.1.4 Adapt hosts file

The host file /etc/hosts on all three cluster nodes needs to have the following entries. Change the
IP addresses to the ones used in your scenario. Add any entries that are missing, for instance the external
hostnames.
192.168.10.101 gpfsnode01 gpfsnode01
192.168.10.102 gpfsnode02 gpfsnode02
192.168.10.253 gpfsnode99 gpfsnode99

10.1.5 SSH configuration

The ssh configuration also needs to be extended to the third node. Each node needs to have the public
ssh keys of each other node so that the communication between the GPFS nodes is guaranteed.

10.1.5.1 Generate the ssh key on the quorum node Run the following command to generate
the set of ssh keys on the quorum node:


ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ''


The key needs to be copied to all cluster nodes. Run the following command on the quorum node for each
host:
ssh-copy-id gpfsnode01
ssh-copy-id gpfsnode02
Run the following command on each of the first two nodes with the GPFS private network hostname of
the new quorum node:
ssh-copy-id gpfsnode99
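To verify the password-less trust in both directions, a remote command can be executed between the nodes, for example:

# On the quorum node
ssh gpfsnode01 hostname
ssh gpfsnode02 hostname
# On gpfsnode01 and gpfsnode02
ssh gpfsnode99 hostname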

10.1.6 Quorum Node IBM GPFS setup

Update the file /var/mmfs/config/nodes.cluster on the first node (gpfsnode01) to the following con-
tent, as it may be needed later:
gpfsnode01:quorum
gpfsnode02:quorum
gpfsnode99:quorum
Besides the necessary number of quorum nodes it is also required to have a quorum on the file system
descriptor. The number of copies of the file system descriptor depends on the number of disks in different
failure groups. To maintain file system operations GPFS requires a quorum of the majority of the replicas
of the file system descriptor. For a two node HA cluster it is therefore necessary to also have a copy of
the descriptor on the quorum node. A disk needs to be made available to GPFS on the additional quorum
node which will only hold a copy of the file system descriptor. It does not have any data or metadata.

10.1.7 Quorum Node IBM GPFS installation

Perform the following commands as user root.


Copy the GPFS installer files from the master node:
mkdir -p /var/tmp/install/gpfs-4.1
scp gpfsnode01:/var/tmp/install/gpfs-4.1/GPFS-4.1* /var/tmp/install/gpfs-4.1
scp gpfsnode01:/var/tmp/install/gpfs-4.1/GPFS_4.1* /var/tmp/install/gpfs-4.1
This should give you the base installer archive GPFS_4.1_STD_LSX_QSG.tar.gz and the PTF archive
GPFS-4.1.0.<PTF>-x86_64-Linux.standard.tar.gz.
Extract the IBM GPFS archives and start the installer:
cd /var/tmp/install/gpfs-4.1
tar xvf GPFS_4.1_STD_LSX_QSG.tar.gz
tar xvf GPFS-4.1.0.1-x86_64-Linux.standard.tar.gz
./gpfs_install-4.1.0-0_x86_64 --dir . --text-only
Accept the license by pressing "1". Then install the RPMs:
gpfs_release=$(ls gpfs.base-*.x86_64.rpm | cut -d- -f2)
gpfs_update_fixpack=$(ls gpfs.base-*.x86_64.update.rpm | cut -d- -f3 | cut -d. -f1)

rpm -ivh gpfs.base-${gpfs_release}-0.x86_64.rpm


rpm -ivh gpfs.gpl-${gpfs_release}-0.noarch.rpm
rpm -ivh gpfs.msg.en_US-${gpfs_release}-0.noarch.rpm
rpm -ivh gpfs.docs-${gpfs_release}-0.noarch.rpm
rpm -ivh gpfs.gskit-*.x86_64.rpm


rpm -ivh gpfs.ext-${gpfs_release}-0.x86_64.rpm

rpm -Uvh gpfs.base-${gpfs_release}-${gpfs_update_fixpack}.x86_64.update.rpm


rpm -Uvh gpfs.ext-${gpfs_release}-${gpfs_update_fixpack}.x86_64.update.rpm
rpm -Uvh gpfs.gpl-${gpfs_release}-${gpfs_update_fixpack}.noarch.rpm
rpm -Uvh gpfs.msg.en_US-${gpfs_release}-${gpfs_update_fixpack}.noarch.rpm
rpm -Uvh gpfs.docs-${gpfs_release}-${gpfs_update_fixpack}.noarch.rpm
Copy the license:
mkdir -p /usr/lpp/mmfs/4.1/
cp -pr license /usr/lpp/mmfs/4.1/

10.1.7.1 Build the IBM GPFS Portability Layer Follow the instructions in /usr/lpp/mmfs/
src/README. In general, you may build the IBM GPFS libraries as follows:
cd /usr/lpp/mmfs/src
make Autoconfig
make World
make InstallImages
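After the build, the GPFS kernel modules are typically placed below /lib/modules/<kernel version>/extra. A simple check (module names may vary with the GPFS release):

ls /lib/modules/$(uname -r)/extra | grep -E 'mmfs|tracedev'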

10.1.7.2 Change SUSE Linux local settings


1. Create /etc/profile.d/saphana-profile.sh with the following content:
PATH=$PATH:/usr/lpp/mmfs/bin
2. Change file permissions:
chmod 644 /etc/profile.d/saphana-profile.sh
3. Activate the new PATH variable
source /etc/profile.d/saphana-profile.sh
4. Create a dump-directory for IBM GPFS
mkdir /tmp/GPFSdump
5. Create a configuration-directory for IBM GPFS
mkdir /var/mmfs/config

10.1.8 Add quorum node

Execute the next commands on the primary node:


1. Add the additional node to the cluster.
mmaddnode gpfsnode99
2. Mark the new node as correctly licensed:
mmchlicense server --accept -N gpfsnode99
3. Mark the backup node and the quorum node as quorum nodes for the cluster:
mmchnode --quorum -N gpfsnode02,gpfsnode99
4. Start IBM GPFS on the new node:
mmstartup
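To verify, the daemon state of all cluster nodes can then be checked from any node; all three nodes should eventually report active:

mmgetstate -a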


10.1.9 Create descriptor disk

Create a disk descriptor file /var/mmfs/config/disk.list.quorum.gpfsnode99 in the configuration directory
of the quorum node. It should contain the following line, which defines the disk partition on the
quorum node as an NSD with the explicit function to hold the file system descriptor:
/dev/sda3:gpfsnode99::descOnly:1099:quorum01node99
Create the NSD by running the mmcrnsd command on the quorum node:
mmcrnsd -F /var/mmfs/config/disk.list.quorum.gpfsnode99 -v no

10.1.10 Add disk to file system

After creating the NSD the disk needs to be added to the file system by running the mmadddisk command:
mmadddisk sapmntdata -F /var/mmfs/config/disk.list.quorum.gpfsnode99 -v no

10.1.11 Verify Cluster Setup

Execute the command mmlscluster on one of the cluster nodes. The output should look similar to this:
GPFS cluster information
========================
GPFS cluster name: HANAcluster.gpfsnode01
GPFS cluster id: 12394192078945061775
GPFS UID domain: HANAcluster.gpfsnode01
Remote shell command: /usr/bin/ssh
Remote file copy command: /usr/bin/scp

GPFS cluster configuration servers:


-----------------------------------
Primary server: gpfsnode01
Secondary server: gpfsnode02

Node Daemon node name IP address Admin node name Designation


---------------------------------------------------------------------
1 gpfsnode01 192.168.10.101 gpfsnode01 quorum
2 gpfsnode02 192.168.10.102 gpfsnode02 quorum
3 gpfsnode99 192.168.10.253 gpfsnode99 quorum

10.1.11.1 List the IBM GPFS Disks Check the disks in the cluster. There are two data NSDs on each
of the NSD servers and no data disks on the quorum node. The listing of the command mmlsdisk sapmntdata -L
shows that there is one disk per failure group which contains a file system descriptor. This ensures that
a quorum may be reached if a node fails.
disk driver sector failure holds holds storage disk pool remarks
name type size group metadata data status availability id
-------------- ------ ------ ------- -------- ----- ------ ------------ ---- ------ --------
data01node01 nsd 512 1001 yes yes ready up 1 system desc
data02node01 nsd 512 1001 yes yes ready up 2 system
data01node02 nsd 512 1002 yes yes ready up 3 system desc
data02node02 nsd 512 1002 yes yes ready up 4 system
quorum01node99 nsd 512 1003 no no ready up 5 system desc
Number of quorum disks: 3
Read quorum value: 2
Write quorum value: 2


10.1.12 Installation of SAP HANA

Please refer to the official SAP documentation available here: http://help.sap.com/hana_appliance.


The location of the SAP HANA installation files is /var/tmp/saphana.

10.2 Single Node with stretched HA Installation

This solution is designed to provide improved high-availability capabilities for a single node SAP HANA
installation. It can be applied to any SAP HANA configuration size. There is one active SAP HANA
instance running on the primary node and database data gets replicated by IBM GPFS to the secondary
node. The secondary node is running in hot-standby, ready to take over operation if the primary node
experiences any failure. In such a 1+1 stretched HA scenario the secondary node is usually located at a distance
from the primary node, for example in a different fire compartment zone or at the other end of the campus. Depending
on distances it can also be on a different campus in the same city. No non-production SAP HANA instance
is allowed to run in this scenario.
Because of the importance of the quorum node it is recommended to place it at a third site. We
understand, however, that this is not always feasible. This leads to the following two designs. In the first
figure 37: Single Node with stretched HA - Two Site Approach on page 109 the quorum node is placed at
the primary site.
This ensures that IBM GPFS on the primary site node stays up and running even if the link to the
DR-site node gets interrupted.

[Diagram: worker node and quorum node at the primary site, standby node at site B; nodes connected through G8264 switches with GPFS links, SAP HANA links and an inter-switch link (ISL)]

Figure 37: Single Node with stretched HA - Two Site Approach

The second approach places the quorum node at a third site. The network architecture can be seen in
figure 38: Single Node with stretched HA - Three Site Approach on page 110.


[Diagram: worker node at the primary site, standby node at site B and quorum node at site C; nodes connected through G8264 switches with GPFS links, SAP HANA links and an inter-switch link (ISL)]

Figure 38: Single Node with stretched HA - Three Site Approach

10.2.1 Installation and configuration of SLES and IBM GPFS

This scenario must be installed like a conventional 1+1 HA scenario as shown above in 10.1.1: Installation
of SAP HANA appliance single node with HA on page 102. The major difference is the network setup. It
can be either routed or switched, depending on the client’s environment (in conventional 1+1 HA scenarios
there is only one IBM-provided switch between the hops). Usually, clients have different types of links
spanning the two sites and they use different network equipment technologies. The client is allowed to
use his own network equipment (i.e. switches) on the secondary site. Ensure that the separation of
network interfaces is kept across both nodes (distinct switches or VLANs for each IBM GPFS and
HANA network port per node). This is to guarantee high-availability of the solution. The file system
layout is shown in Figure 39: File System Layout - Single Node stretched HA on page 111.

21 Virtual Local Area Network


[Diagram: file system layout as in Figure 35; node1 and node2 hold data (first and second replica), metadata and file system descriptors (failure groups FG1 and FG2), the quorum node (node3) holds only a file system descriptor (FG3); the OS resides on sda1/sda2 of each node]

Figure 39: File System Layout - Single Node stretched HA

10.2.2 Installation of SAP HANA

Please refer to the official SAP documentation available here: http://help.sap.com/hana_appliance.


The location of the SAP HANA installation files is /var/tmp/saphana.

10.3 Single Node with DR Installation

This solution is designed to provide disaster recovery capabilities for a single node SAP HANA instal-
lation. It can be applied to any SAP HANA machine size. There is one active SAP HANA instance
running on the primary site node and a standby node on the backup site is ready to take over operation
in case of a disaster. The difference between a single node with stretched HA and a single node with
DR installation is the fact that automatic failover is sacrificed for the possibility to run a non-production
SAP HANA instance on the DR-site node. Otherwise, the two setups are identical. The setup of this
solution is a manual process after SLES has been installed.
Because of the importance of the quorum node it is recommended to place it at a third site. We
understand, however, that this is not always feasible. This leads to the following two designs. In the
first figure 40: Single Node with Disaster Recovery - Two Site Approach on page 112 the quorum node is
placed at the primary site. This ensures that IBM GPFS on the primary site node stays up and running
even if the link to the DR-site node gets interrupted.


[Diagram: worker node and quorum node at the primary site, DR node at site B with a storage expansion for a non-production DB instance; G8264 switches with GPFS links, SAP HANA links and an inter-switch link (ISL)]

Figure 40: Single Node with Disaster Recovery - Two Site Approach

The second approach places the quorum node at a third site. The network architecture can be seen in
figure 41: Single Node with Disaster Recovery - Three Site Approach on page 112.

[Diagram: worker node at the primary site, standby (DR) node at site B and quorum node at site C; G8264 switches with GPFS links, SAP HANA links and an inter-switch link (ISL)]

Figure 41: Single Node with Disaster Recovery - Three Site Approach

10.3.1 Installation and configuration of SLES and IBM GPFS

This scenario has to be installed in the exact same way as described in 10.1.1: Installation of SAP HANA
appliance single node with HA on page 102. IBM GPFS replicates data to the backup site node. The
difference is in the configuration of SAP HANA.


10.3.2 Optional: Expansion Storage Setup for Non-Production Instance

This solution supports the additional use of the DR-site node to host a non-production SAP HANA
instance. Follow instructions in 10.7: Expansion Storage Setup for Non-productive SAP HANA Instance
on page 124 to setup the additional disk drives. The overall file system architecture is illustrated in figure
42: File System Layout - Single Node with DR with Storage Expansion on page 113.

[Diagram: shared file system as in Figure 35 (data replicas, metadata and file system descriptors on node1 and node2, descriptor only on the quorum node); in addition the DR-site node has an M5120-attached storage expansion providing a second file system for the non-production instance]

Figure 42: File System Layout - Single Node with DR with Storage Expansion

10.4 Single Node with HA and DR Installation

This solution is designed to provide the maximum level of redundancy for a single node SAP HANA
installation. It can be applied to any SAP HANA configuration size. High availability concepts ensure
that the database stays up if the primary node has an issue. Disaster recovery concepts ensure that
the database stays up if the first two SAP HANA nodes (residing in the primary customer data center)
become unavailable. Figure 43: Single Node with HADR using IBM GPFS Storage Replication on page
114 illustrates the overall architecture of the solution.


[Diagram: worker node and standby node at the primary site, DR node at site B with a storage expansion for a non-production DB instance; G8264 switches with GPFS links, SAP HANA links and an inter-switch link (ISL)]

Figure 43: Single Node with HADR using IBM GPFS Storage Replication

10.4.1 Installation and configuration of SLES and IBM GPFS

Install the latest supported IBM Systems Solution for SAP HANA on all three nodes by using the latest
supported SLES for SAP Applications DVD and the latest non-OS component DVD.
The procedure is similar to the one described in Installation of SAP HANA appliance single node with HA. The
final file system layout is shown in figure 44 on page 115.


[Diagram: shared file system across node1, node2 and node3; each node holds one data replica (first, second and third), metadata and a file system descriptor on its HDDs (failure groups FG1, FG2 and FG3); the OS resides on sda1/sda2 of each node]

Figure 44: File System Layout - Single Node HADR

To begin the installation, you need to install all three IBM Workload Optimized Systems using the steps at
the beginning of chapter 6: Guided Install of the Lenovo Solution on page 39. Configure the network
interfaces (internal and external) and the NTP server(s) as described there. The IP addresses can be
in different subnets as long as proper routing between the subnets is in place. Make sure that all three
SAP HANA nodes can ping each other on all interfaces.
1. Start the text based installer as follows on each node:
saphana-setup-saphana.sh -H
The switch -H prevents HANA from being installed automatically. This needs to be done manually
later. Refer to the steps as stated in section 6.6.2.2: Cluster Installation on page 62 together with
the steps described below. Accept the license shown by pressing "1" and hit Enter.
2. Select "Cluster (worker)". This does a basic installation as a cluster node. Enter external the FQDN
as external hostname and accept the IBM GPFS license by pressing "1" and hit Enter. Continue
until the installer finishes successfully.
3. Start the installer again as above with the option -H, this time only on the future master node.
This time select "Cluster (Master)". Enter details for SID, Instance ID and a HANA password.
Enter 3 for the number of nodes and 1 for the number of standby nodes (this does not matter, as it would be used only
for HANA, which is not installed automatically anyway). Ensure that the IP addresses for the IBM
GPFS and HANA network are correct. Accept the IBM GPFS license and wait for the installation
process to continue successfully.


4. Change the replication level for the IBM GPFS file system:


mmchfs sapmntdata -m 3 -r 3
5. Check the replication level set:
mmlsfs sapmntdata

...
-m 3 Default number of metadata replicas
-M 3 Maximum number of metadata replicas
-r 3 Default number of data replicas
-R 3 Maximum number of data replicas
...
6. Restripe the data on the IBM GPFS filesystem so that all data has the required three replicas:
mmrestripefs sapmntdata -R
7. Set the following IBM GPFS configuration parameters:
mmchconfig unmountOnDiskFail=meta
mmchconfig panicOnDiskFail=meta
8. Adjust the quotas on the file system. The log quota is set to 1 TB regardless of memory size.
mmsetquota -j hanalog -h 1024G -s 1024G /dev/sapmntdata
The data quota for this HADR scenario is set to 9 * RAM. In case of a 1 TB server this means a
quota of 9 TB.
mmsetquota -j hanadata -h 9216G -s 9216G /dev/sapmntdata
Allocate the remaining space to the hanashared fileset and execute mmsetquota accordingly; see the
sketch after this list for one way to determine the remaining space.
mmsetquota -j hanashared -h <REMAINING>G -s <REMAINING>G /dev/sapmntdata
9. Install SAP HANA similarly to the procedure described in section 8.4.5: SAP HANA appliance installation on
page 77.
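One way to determine the value for <REMAINING> is to take the total size of the shared file system and subtract the hanalog and hanadata quotas. A sketch with assumed example values for a 1 TB node, assuming the file system is mounted at /sapmnt:

# Total size of the shared GPFS file system in GiB
df -BG /sapmnt
# Example values only: 15360 GiB total - 1024 GiB (hanalog) - 9216 GiB (hanadata) = 5120 GiB
mmsetquota -j hanashared -h 5120G -s 5120G /dev/sapmntdata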

10.4.2 Optional: Expansion Storage Setup for Non-Production Instance

This solution supports the additional use of the DR-site node to host a non-production SAP HANA
instance. Follow instructions in 10.7: Expansion Storage Setup for Non-productive SAP HANA Instance
on page 124 to setup the additional disk drives. The overall file system architecture is illustrated in figure
45: File System Layout - Single Node HADR with Storage Expansion on page 117.


[Diagram: file system layout as in Figure 44 (three data replicas, metadata and file system descriptors across node1, node2 and node3); in addition node3 has an M5120-attached storage expansion for the non-production instance]

Figure 45: File System Layout - Single Node HADR with Storage Expansion

10.5 Single Node DR Installation with SAP HANA System Replication

This solution provides redundancy at the application layer. It can be applied to any SAP HANA config-
uration size. For details, see the official SAP HANA documentation on http://help.sap.com/hana. There
are two ways to design the network for such a DR solution based on System Replication. As the IBM
GPFS interfaces on the DR-site node are not connected to the primary site, a set of redundant switches
is optional. This leads to one architecture with switches and one architecture without switches between
the SAP HANA nodes. Figure 46: Single Node DR with SAP System Replication on page 118 shows the
solution with switches.


[Diagram: worker node at the primary site and DR node at site B with a storage expansion for a non-production DB instance; the nodes are connected via the SAP HANA links through G8264 switches with an inter-switch link (ISL)]

Figure 46: Single Node DR with SAP System Replication

Because the two SAP HANA nodes do not use their IBM GPFS network interfaces you can also opt
for a solution without intermediate network switches. In this case you have to connect the two 10 Gbit
interfaces used for SAP HANA communication on the two nodes directly without an intermediate switch.
This architecture is illustrated in figure 47: Single Node DR with SAP System Replication on page 118.

[Diagram: worker node at the primary site connected directly to the DR node at site B via the SAP HANA links, without intermediate switches; the DR node has a storage expansion for a non-production DB instance]

Figure 47: Single Node DR with SAP System Replication

10.5.1 Installation and configuration of SLES and IBM GPFS

Each site is considered to be a single node, as far as SLES and IBM GPFS are concerned. The final
file system layout can be seen in figure 48: File System Layout of Single Node DR with SAP System
Replication on page 119.


[Diagram: two independent single-node GPFS clusters (cluster A on node1 with file system A, cluster B on node2 with file system B), each holding one data replica, metadata and a file system descriptor on local HDDs (FG1); SAP HANA System Replication transfers data from node1 to node2; the OS resides on sda1/sda2 of each node]

Figure 48: File System Layout of Single Node DR with SAP System Replication

Perform a single node installation on both nodes as described in 6.6.2.1: Single Node Installation on page
60 but start the installer with the -H option:
saphana-setup-saphana.sh -H
The switch -H prevents HANA from being installed automatically. This needs to be done manually later.
Data replication will be taken care of at the SAP HANA application level. Replication can happen
synchronously or asynchronously. Configure the network connection for SAP HANA and verify the
connectivity.
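The replication itself is then set up with the SAP HANA tools. As a rough sketch only (the host name hananode01, the instance number 00 and the site names are placeholder values, and the exact hdbnsutil options depend on the SAP HANA revision; see the SAP documentation):

# On the primary node, as the <sid>adm user: enable system replication
hdbnsutil -sr_enable --name=SiteA
# On the DR node, as the <sid>adm user, with SAP HANA stopped: register against the primary
hdbnsutil -sr_register --remoteHost=hananode01 --remoteInstance=00 --mode=sync --name=SiteB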

10.5.2 Installation of SAP HANA

Please refer to the official SAP documentation available here: http://help.sap.com/hana_appliance.


The location of the SAP HANA installation files is /var/tmp/saphana.

10.5.3 Optional: Expansion Storage Setup for Non-Production Instance

This setup supports the additional use of the DR-site node to host a non-production SAP HANA instance.
The layout of the two file systems (production and non-production) is illustrated in figure 49: File System
Layout of Single Node DR with SAP System Replication with Storage Expansion on page 120.


[Diagram: as in Figure 48, two independent GPFS clusters with SAP HANA System Replication from node1 to node2; node2 additionally has an M5120-attached storage expansion providing a second file system for the non-production instance]

Figure 49: File System Layout of Single Node DR with SAP System Replication with Storage Expansion

On the remote site node (receiving the replication data from the primary SAP HANA instance) you will
have two file systems configured. The primary file system spans local disks only and is to be configured
in the exact same way as the primary site file system. This file system will host the replicated data
coming in from the active production SAP HANA instance. The second file system only consists of
storage expansion box drives attached to the remote site node. This file system will host the data
of the non-production SAP HANA instance. Follow instructions in 10.7: Expansion Storage Setup for
Non-productive SAP HANA Instance on page 124 to setup these additional disk drives.

10.6 Single Node with HA using IBM GPFS Storage Replication and DR
using System Replication

This approach also provides maximum redundancy for single node SAP HANA installations. We use
the term 1+1/1 to describe this style of single node installation. It can be applied to any SAP HANA
configuration size. 1+1/1 uses the IBM GPFS storage replication feature and the SAP HANA System
Replication feature. For HA (1+1) it uses IBM GPFS storage replication: the active and the standby
node are in the same IBM GPFS cluster and have access to the same file system. Whenever the active
node writes data to disk, IBM GPFS replicates it to the standby node.
In addition to that, SAP HANA System Replication transfers data to a DR node on a remote site. In
case of a disaster in the primary site data center the DR node can be used to host SAP HANA. SAP


HANA System Replication can either run in synchronous or in asynchronous replication mode. The DR
node creates a separate IBM GPFS cluster consisting just of itself. It has its own file system on local
disk. There is no logical connection to the primary site IBM GPFS cluster. As a consequence, the IBM
GPFS network adapter on the DR node is to be left unconnected. This leads to two possible network
architectures. The first one provides redundant switches on both sites. Figure 50: Single Node with HA
using IBM GPFS Storage Replication and DR using System Replication on page 121 shows this design.

Figure 50: Single Node with HA using IBM GPFS Storage Replication and DR using System Replication
(Diagram: Worker Node, Standby Node, and Quorum Node on the primary site, DR Node with storage expansion for a non-prod DB instance on Site B; redundant G8264 switches on both sites, connected via Inter-Switch Links (ISL), carry the GPFS links and SAP HANA links.)

The second architecture drops the switches on the DR site and instead connects the only required network
interface (the 10 Gbit connection for SAP HANA communication) directly to the primary site switches.
This is illustrated in figure 51: Single Node with HA using IBM GPFS Storage Replication and DR using
System Replication without remote site Switches on page 122.


Figure 51: Single Node with HA using IBM GPFS Storage Replication and DR using System Replication without remote site Switches
(Diagram: same layout as figure 50, but without switches on the DR site; the DR Node's SAP HANA links are connected directly to the primary site G8264 switches.)

10.6.1 Installation and configuration of SLES and IBM GPFS

The two nodes on the primary site are to be installed in the exact same way as a 1+1 HA environment
described in 10.1.1: Installation of SAP HANA appliance single node with HA on page 102. There is one
IBM GPFS cluster and one file system spanning both nodes with IBM GPFS taking care of replicating
the data to the standby node (r=2, m=2).
To install the DR node follow all steps of a standard SAP HANA single node installation apart from
installing SAP HANA itself (use the -H option). Please refer to 10.5: Single Node DR Installation with
SAP HANA System Replication on page 117 for details.
The DR node's OS and IBM GPFS installation have no logical dependency on the primary site nodes;
the coupling is established at the application level with SAP HANA in the next step.
The final file system layout is shown in figure 52: File System of Single Node with HA and DR with
System Replication on page 123 and it illustrates the use of the two technologies, IBM GPFS storage
replication and SAP HANA system replication.


Figure 52: File System of Single Node with HA and DR with System Replication
(Diagram: GPFS Cluster A with node1, node2, and a quorum node sharing one replicated file system A across failure groups FG1-FG3; GPFS Cluster B consisting only of the DR node with its own file system B; the sites are connected only by SAP HANA System Replication.)

10.6.2 Installation of SAP HANA

Install two separate instances of SAP HANA, one in each site. For the primary site please follow the
according steps for a clustered HA installation.
On the DR node you have to follow all steps of a standard SAP HANA single node installation. This
includes installing all components of SAP HANA and making sure that it runs self-contained. You then
have to follow official SAP HANA documentation to enable SAP HANA System Replication between the
instance on the primary site node and the instance on the DR node.


Figure 53: File System of Single Node with HA and DR with System Replication and Storage Expansion
(Diagram: as in figure 52, with an additional M5120-attached storage expansion on the DR node providing a separate file system for a non-production SAP HANA instance.)

10.7 Expansion Storage Setup for Non-productive SAP HANA Instance

This section describes how to set up the disks in an expansion storage that hosts a non-productive SAP
HANA instance. Expansion storage is supported in environments where the nodes at a DR site would
be idle otherwise.
Depending on the memory size of the nodes you have a different number of drives in the expansion units.
Create as many (8+p) RAID 5 arrays as possible. Declare remaining drives as hot spares. For details on
how to use the RAID configuration utility see 6.2.1: Storage Configuration – RAID Setup on page 46.
Each RAID 5 device will be given to IBM GPFS as an NSD.
Collect the device names of all newly created virtual drives. Then create NSDs on them according to the
following rules:
1. all NSDs will be dataAndMetadata
2. all NSDs go into the system pool
3. the naming scheme is extXXnodeYY with XX being the two-digit drive number and YY the node number
4. use one single failure group for all expansion box drives and make sure it is unique within your cluster
Store a disk descriptor file similar to the following as /tmp/nsdlistexp.txt:
1 %nsd: device=/dev/sdd
2 nsd=ext01node02
3 servers=gpfsnode02
4 usage=dataAndMetadata
5 failureGroup=2
6 pool=system
7 %nsd: device=/dev/sde
8 nsd=ext02node02
9 servers=gpfsnode02


10 usage=dataAndMetadata
11 failureGroup=2
12 pool=system
13 %pool:
14 pool=system
15 blockSize=1M
16 usage=dataAndMetadata
17 layoutMap=cluster
18 allowWriteAffinity=yes
19 writeAffinityDepth=1
20 blockGroupFactor=1

Create NSDs
1 # mmcrnsd -F /tmp/nsdlistexp.txt

Create the file system


1 # mmcrfs /dev/sapmntext -F /tmp/nsdlistexp.txt -A no -B 1M -N 3000000 -v no -m 1 -M 2 -r 1 -R 2 -s failureGroupRoundRobin -T /sapmntext

Mount the file system on the backup site node


1 # mmmount sapmntext

If your client has a storage expansion connected to both nodes, primary site and backup site, then you
need to apply the above procedure twice, once for each node. Each expansion box file system is to be
handled separately; do not create a single file system that spans the expansion box disks of both nodes!
This scenario is used if both data centers – thus both nodes – are considered equal and you want to
be able to run production SAP HANA out of both data centers. In this case non-production SAP HANA
instances must also be able to run on both nodes, hence the need for a dedicated /sapmntext file system
on both sides.


11 Virtualization
The Lenovo Solution can be installed inside a VMware virtual machine starting with Support Package
Stack (SPS) 05. Currently SAP supports the following virtualization solutions:
• VMware vSphere 5.1 and SAP HANA SPS05 (or later releases) for non-production use cases
• VMware vSphere 5.5 and SAP HANA SPS07 (or later releases) for production and non-production
use cases.
For non-production use multiple virtual machines may be deployed. For production use only single
node installations are supported. See SAP Note 1788665 – SAP HANA Support for VMware Virtualized
Environments.
For VMware vSphere configuration please see SAP Note 1122388 – Linux: VMware vSphere configuration
Guidelines
The sizing of a virtual machine has to be done according to the existing SAP HANA sizing guidelines
for single node installations. The CPU/RAM ratio has to be met. In general SAP HANA virtualized
with VMware vSphere is sized the same as non-virtualized SAP HANA deployments. In other words,
for sizing the virtual machine (VM) the CPU/memory ratio as used for bare-metal sizing is taken into
account to ensure locality of memory access on the underlying hardware resources.

Lenovo Name   vCPUs   Virtual memory (GB)   Ratio   Total HDD for OS (GB)   Total HDD for GPFS (GB)
VM1           10      64                    1       128                     416
VM2           20      128                   2       128                     736
VM3           30      192                   3       128                     1056
VM4           40      256                   4       128                     1376
VM5           50      320                   5       128                     1696
VM6           60      384                   6       128                     2016

Table 47: SAP HANA Virtual Machine Sizes by Lenovo

This document covers the installation of one VM on 2 or 4 socket System x3850 X6 Workload Optimized
solutions. The installation on 8 socket System x3950 X6 systems is not supported. For installation of
multiple VMs on System x3850 X6 machines please consult the SAP documentation.

11.1 Getting Started

11.1.1 Memory Overhead

CPU and memory overcommitment is not allowed in virtual HANA environments. For this reason memory
has to be spared for the ESXi hypervisor to run and manage the virtual machines.
A very conservative estimate for the amount of memory that needs to be left unassigned to the SAP HANA
virtual machines for overhead is 3 to 4 percent. For example, on a system having 1 TB of RAM,
approximately 30 to 40 GB would need to be left unassigned to the virtual machines.
In a system with 1TB of RAM a single VM6 machine with 384GB RAM could be installed, leaving
the rest of the system unused. Two VM6 machines would still leave enough unassigned memory for the
hypervisor and virtual machine memory overhead.
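As a rough illustration of this rule of thumb (the numbers are examples, not a sizing statement), the available head-room can be estimated with a quick calculation:

# example: 1 TB host planned with two VM6 instances (2 x 384 GB)
TOTAL_GB=1024
PLANNED_GB=768
OVERHEAD_GB=$(( TOTAL_GB * 4 / 100 ))    # ~4% reserved for ESXi and VM overhead
echo "reserve ~${OVERHEAD_GB} GB, usable $(( TOTAL_GB - OVERHEAD_GB )) GB, planned ${PLANNED_GB} GB"

In this example roughly 40 GB stay unassigned and the planned 768 GB still fit comfortably.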


11.1.2 Configure UEFI

Apply the UEFI configuration as described in section 6: Guided Install of the Lenovo Solution on page
39.

11.1.3 Start Embedded VMware ESXi Hypervisor

The VMware ESXi 5.5 hypervisor is to be installed on a USB pen drive. The drive is located at an
internal USB port in the server. This prevents an unintended removal of the USB pen drive.
Boot the server with the USB pen drive attached. Enter the UEFI setup and select
Boot Manager → Boot from embedded hypervisor.

11.1.4 Enable SSH on VMware ESXi Hypervisor

By default, remote command execution is disabled on an ESXi host, and you cannot log in to the host
using a remote shell. You can enable remote command execution from the direct console or from the
vSphere Client.
To enable SSH access in the direct console
1. At the direct console of the ESXi host, press F2 and provide credentials when prompted.
2. Scroll to Troubleshooting Options and press Enter.
3. Choose "Enable SSH" and press Enter.
On the left, "Enable SSH" changes to "Disable SSH". On the right, "SSH is Disabled" changes to
"SSH is Enabled".
4. Press Esc until you return to the main direct console screen.

11.1.5 StorCLI on VMware ESXi 5.5

To be able to use the storage on an X6 machine you have to configure the RAID adapters.
You can install the StorCLI tool directly under VMware ESXi 5.5. As a prerequisite SSH has to be
enabled on the VMware ESXi 5.5.
You can download the latest StorCLI version from http://www-947.ibm.com/support/entry/portal/
docdisplay?lndocid=migr-5092951.
Copy the files to the VMware ESXi host via SCP.

11.1.5.1 Installation Follow these steps to install the StorCLI utility:


1. Unzip the file and change into the support directory.
2. Copy the VIB to the ESXi server. The file can be placed anywhere where it is accessible to the
ESXi console shell. In the following examples the file is located in /tmp.
3. Issue the following command:
1 esxcli software vib install -v=/tmp/<VIBFILE> --no-sig-check

4. Disable the native driver for megaraid_sas:


1 esxcfg-module -d <mod-name>


5. A reboot is required to apply the configuration changes.

11.1.5.2 Configure RAID and CacheCade with StorCLI You must configure the RAID setup
and integration of the CacheCade VDs before you format the disks.
To verify that StorCLI works correctly, run the following command:
1 storcli show all

You should see a list of the installed RAID adapters and an overview.
Counting of the adapters starts with 0. To see the setup of the first adapter use
1 storcli /c0 show

If you do not see any adapter although there is at least one installed, you must change the SCSI driver
in VMware ESXi.
Use the list below to decide, for every controller in the machine, which RAID levels and how many
RAID VDs you have to configure:
• 6 HDDs: 1 RAID5
• 9 HDDs: 1 RAID5
• 10 HDDs: 1 RAID6
• 18 HDDs: 2 RAID5
• 20 HDDs: 2 RAID6
Create a RAID5 array, where 252:0-7 is an example list of drives used, /cX the controller, and rX is the
RAID level:
1 storcli /c0 add vd type=r5 drives=252:0-7 wb ra cached strip=64 cachevd

All SSDs on a controller are used to create the CacheCade VD. There can only be one CacheCade VD
per controller.
Create the CacheCade VD, where 252:8-9 is an example list of the SSDs used, and /cX the controller.
The parameter assignvds=X needs the VD ID of the RAID array created before. If you created two
RAIDs on the controller, you can specify assignvds=X,Y.
1 storcli /c0 add vd cc type=r1 drives=252:8-9 wb assignvds=1

Adjust the settings of the CacheCade VD, where /c0 is the RAID controller and /vX is the ID of the
newly created CacheCade VD.
1 storcli /c0/v1 set rdcache=ra iopolicy=cached wrcache=wb
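To double-check the result you can display the controller and VD configuration again; these are read-only calls, and the VD numbering shown is only an example that may differ on your system:

storcli /c0 show
storcli /c0/vall show

The CacheCade VD should be listed with the cache policies set above.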

11.1.6 Setting up ESXi Storage in CLI

Since the ESXi hypervisor runs on standard System x HANA hardware, there is no external storage
attached. You have to open an SSH session on the ESXi hypervisor.
To list the installed storage devices execute:
1 esxcli storage core device list

To list all filesystems known to the ESXi hypervisor call


1 esxcli storage filesystem list

Figure 54: ESXi 5.x filesystems on a System x3850 X6. The VFAT filesystems belong to the USB device

Create a VMFS5 filesystem on a partition. Example VMFS5 creation on a System x3850 X6:
1 vmkfstools --createfs vmfs5 --setfsname "hana 38 - HDD" /vmfs/devices/disks/naa.600605b0038acb6018f17abe32a77168

This creates a VMFS5 filesystem on the CacheCade accelerated RAID5. The device names will vary on
your setup.
Repeat the steps with every disk in the system you want to use for VMware.
Attention
These steps delete all data on the disks. Create a backup if necessary.

11.1.7 Setting Storage for SLES and HANA ISO

There are two ways to provide the needed ISOs for the virtual machines: either NFS-attached storage
from an external source, or a datastore on the server.

11.1.7.1 Setting up NFS datastore It is easier to store the SLES for SAP 11 and the non-OS
components ISOs on a separate filesystem and mount it via NFS on the ESXi hypervisor.
To create an NFS mount, log in to the hypervisor via SSH and execute:
1 esxcli storage nfs add --host=<hostname> --share=/<mount_dir> --volume-name=<create_volume_name>
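For example, assuming a hypothetical NFS server nfs.example.com exporting /export/iso (host, share, and volume name are placeholders), the call could look like this:

esxcli storage nfs add --host=nfs.example.com --share=/export/iso --volume-name=ISO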

To see the mounted NFS volumes execute:


1 esxcli storage nfs list

11.1.7.2 Setting up a local datastore A datastore is a directory on the ESXi hypervisor in which
you copy the SLES and non-OS component ISOs. Therefore the filesystems must be created first. Connect
via SSH to the ESXi hypervisor.
All mounted volumes are available at /vmfs/volumes.


Figure 55: ESXi5.5 Storage Path

Create a datastore named ISO on a VMFS5 volume.


• Change to a VMFS5 volume (cd).
• Create a datastore directory (mkdir ISO).
• Copy the SLES and non-OS components ISO to the datastore via SCP.
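A minimal sketch of the copy step, assuming SSH is enabled on the host and using placeholder names for the ESXi host, the VMFS datastore, and the ISO files:

scp SLES-for-SAP-11-SP3.iso root@esxi-host:/vmfs/volumes/datastore1/ISO/
scp saphana-nonOS-components.iso root@esxi-host:/vmfs/volumes/datastore1/ISO/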

11.1.8 Restart VMware ESXi Hypervisor

To restart the ESXi 5.5 hypervisor press F12 at the ESXi prompt. You have to authenticate before you
can actually restart the hypervisor.

11.1.9 Installing VMware vSphere Client

VMware vSphere Client is required to perform many of the tasks described in this document. Complete
the following steps to install VMware vSphere Client on a suitable system in your network.
Note
To avoid any unexpected behavior, it is strongly recommended that you use the VMware
vSphere Client that matches the version of the SAP HANA system hardware’s VMware ESXi
5 hypervisor. If you already have an appropriate version of the VMware vSphere Client
installed, skip to the next section 11.2: Configuring and Starting VMs with vSphere Client on
page 131.
1. Boot the system hardware to the VMware ESXi 5 hypervisor. The IP address of the VMware ESXi
5 hypervisor is displayed on the console.
Note
If you have already added a host name to your DNS, you can use the host name instead
of the IP address.
2. On the Microsoft Windows system where VMware vSphere Client will be installed, open a secure
web connection (HTTPS) and enter the IP address of VMware ESXi 5 hypervisor in the browser
address bar. The VMware ESXi 5 welcome screen is displayed.
3. Download the vSphere client and follow the on-screen instructions to install the client. Note: If a
security warning window opens, click the Ignore button.


Figure 56: ESXi 5.1 WEB Welcome

Note
VMware vCenter server also provides a web based vSphere Client that can be
used. Open a secure web connection (HTTPS) to the vCenter server to the address
https://<address to vCenter server>/vsphere-client/

11.2 Configuring and Starting VMs with vSphere Client

To configure and start the virtual machines, complete the following steps.
Note
The illustrations in this document might differ slightly from what you see on your screen.
1. Log in to the VMware vSphere Client. Type the IP address or host name of the host system, and
your user name and password and click the Login button.
(a) If a security warning window opens, ignore the warning and install the certificate.
(b) On a new server, you might also see a warning that there is no datastore; ignore this warning,
too.
2. The virtual machine is created with the aid of the vCenter GUI. You can use the WEB-GUI as
well, if you prefer it.


Figure 57: Create new virtual machine

3. Choose Custom .

Figure 58: Choose custom configuration

4. Choose a name.


Figure 59: Choose a name

5. Choose a datastore for the VM files.

Figure 60: Choose disk storage for VM files

6. Choose the newest virtual machine version.


Figure 61: Newest virtual machine hardware version

(a) Windows Based Client (Version 8)


If you use the VMware vSphere Microsoft Windows client, you will only be offered VM hardware
versions 6, 7 or 8. In order to run a virtual machine with more than 32 vCPUs, you must upgrade
the VM hardware at the end or use the vCenter's vSphere
web client. See step 6b: Configuring and Starting VMs with vSphere Client on page 134 for
more details on upgrading the version using the Windows client.
(b) Web Based Client (Version 9)

Figure 62: Configure the use of more than 32 CPUs

7. Choose SUSE Linux Enterprise 11 (64-bit) .


Figure 63: Choose Operating System

8. Choose number of virtual CPUs according to table 47: SAP HANA Virtual Machine Sizes by
Lenovo on page 126.
It is important to note that if you are using the vSphere Microsoft Windows Client, you will not
be able to configure a virtual machine over 32 vCPUs until you upgrade the VM hardware. If you
wish to create a virtual machine using more than 32 vCPUs, first select the maximum of 32 now and
change it following the directions in step 20: Configuring and Starting VMs with vSphere Client on
page 141.

Figure 64: Choose number of CPUs

9. Choose memory according to table 47: SAP HANA Virtual Machine Sizes by Lenovo on page 126.


Figure 65: Choose Memory

10. Select the network cards.

Figure 66: Choose Network Cards

11. Choose the SCSI controller.


Figure 67: Choose SCSI controller

12. Disk layout for virtual machines: two disks are needed per VM, one for the OS and one for GPFS.
Please see table 47 for the required disk sizes.
13. Create a new virtual disk.

Figure 68: Create new HANA datastore

(a) Choose the OS size according to table 47: SAP HANA Virtual Machine Sizes by Lenovo on
page 126.


Figure 69: Choose datastore size

(b) Choose a datastore for the OS.

Figure 70: Choose datastore

(c) Choose the correct SCSI node. The first virtual disk you create is assigned to "SCSI (0:0)",
the second to "SCSI (0:1)", and so on.


Figure 71: Choose SCSI Node

(d) Finish the virtual drive creation.


14. Repeat steps 13 to 13d for virtual disks for GPFS.
Select Edit the virtual machine settings before completion to do this.
In the case that your virtual machine requires a drive size that is larger than the capacity of a
single available device, you must repeat steps 13 through 13d to include the total amount of storage
across multiple devices.
15. Add a new CD/DVD device.

Figure 72: Add a new CD/DVD device

(a) Select Datastore ISO File .


Figure 73: Select ISO image

(b) Select Connect at power on .


(c) Select Browse... and look for the "SLES for SAP ISO (NFS Mounted Datastore)".
You need two CD/DVD drives for the installation. One for the SLES DVD ISO and one for
the non-OS components ISO.
(d) Select IDE (0:0) .

Figure 74: Select IDE device 0:0

(e) Finish the creation of the SLES for SAP DVD.


Figure 75: Finish creation of SLES ISO mount

16. Create the non-OS components DVD.


Repeat step 15 for a second CD/DVD and include the Lenovo HANA ISO. Both ISOs are best put
into an NFS datastore that has been attached previously in the server settings of the VMware ESX
server.
17. Select the options Advanced General Configuration Parameters .

(a) Enable SAPOSCOL Enhanced Monitoring. This way the SAP OS Collector, saposcol, will
report some additional host metrics for programs that can read them such as: SAP Early
Watch, SAP Going Live Check. This should be done as explained in SAP Note 1606643 –
Linux: VMware vSphere host monitoring interface. Basically there are 3 steps:
• Install the VMware Tools in the VM
• Add the following value to the VMware ESX host: Misc.GuestLibAllowHostInfo = TRUE
• Add the following values to the VMware Virtual Machine Definition for each guest:
tools.guestlib.enableHostInfo = TRUE and Smbios.reflectHost = TRUE
(b) Add a Row SMBIOS.reflectHost and set to true.
18. Change the boot options to Boot to BIOS at the next reboot.
19. Press OK to create the virtual machine.
20. Upgrading the Virtual Machine to VM Version 9 using Windows Client.
If it is required to use more than 32 vCPUs in your SAP HANA virtual machine (sizes larger than
3 slots), you must use the version 9 of the VMware virtual hardware. This is not possible during a
virtual machine creation using the Microsoft Windows client.
After creating a virtual machine, right mouse click on the newly created virtual machine in the
vSphere client and select "Upgrade Virtual Hardware". A pop-up will show asking you to confirm
the upgrade. Press "Yes" and continue.


Figure 76: Upgrade virtual hardware

Figure 77: Confirm upgrade

(a) Increasing the number of virtual CPUs for larger VMs.


If you are installing a virtual machine larger than 3 slots, you will need to update the number
of vCPUs required for this system. Right mouse click on the newly created virtual machine in
the left-hand side of the vSphere client and select Edit Settings .
(b) Select the CPUs.
Select the CPUs and increase the number of virtual sockets and CPUs as required. We recommend
using 10 CPUs per socket for the SAP HANA virtual machine.

Figure 78: Upgrade virtual hardware

21. Upgrading the VM to VM version 10 using command line.


This describes the upgrade of the virtual hardware, CPU, and RAM if a vCenter is not available.
You may need to do this if you want to run the VM with a large amount of RAM, e.g. more than 256GB.
To accomplish this, SSH access to the ESXi hypervisor must be enabled.
Every virtual machine has a VMX file, which contains all configuration data for the machine.
Usually the format is <vmname>.vmx. You can find the VMX file for your VM with the command
1 ~ # find . -name '*.vmx'


This will list all available VMs. Choose the one you need and change into its directory. Open
the VMX file with an editor (e.g. vi). The VM has to be shut down to do this. Edit and change the
following lines:
1 virtualHW.version = "10"
2 memsize = "<sizeoframyouneed>"
3 numvcpus = "<numofcpuyouneed>"

For the changes to take effect you must reload the VM


1 ~# vim-cmd vmsvc/reload <vmid>
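If you do not know the <vmid>, you can list all registered VMs together with their IDs (a read-only call):

~ # vim-cmd vmsvc/getallvms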

11.3 Operating System (SLES for SAP 11 SP3) Installation

After starting the virtual machine the installation prompt appears. Use the arrow keys to select the line
"SLES for SAP Applications - Installation with external profile". Move the cursor to the boot options
and change the autoyast parameter to autoyast=device://sr1/. See figure 79: Changing the autoyast
parameter for installation on page 143.

Figure 79: Changing the autoyast parameter for installation

After that press Enter. The installation will continue automatically.

Note
Please continue with the installation instructions in section 6.3: Phase 2 – SLES for SAP on
page 51.

11.4 Operating System (Red Hat Enterprise Server 6.5) Installation

After starting the virtual machine the installation prompt appears. Edit the kernel boot options and
add "ks=cdrom://ks.cfg".
See figure 80: Adding kickstart parameter for install on page 144.


Figure 80: Adding kickstart parameter for install

After that press Enter. The installation will continue automatically.

Note
Please continue with the installation instructions in section 6.4: Phase 2 – RHEL on page 56,
but also execute the steps in the following section.

11.4.1 Changes after Red Hat Installation

After the installation you need to log in as root and perform the following tasks:
• Remove the file /etc/modprobe.d/bonding.conf.
• Remove the files ifcfg-bond0, ifcfg-bond1, and ifcfg-eth3 in /etc/sysconfig/network-scripts.
• Edit ifcfg-eth0 and remove the lines MASTER=bond0 and slave=yes.
• The file ifcfg-eth0 file should look like this:
1 DEVICE=eth0
2 TYPE=Ethernet
3 USERCTL=no
4 ONBOOT=yes
5 BOOTPROTO=none
6 NM_CONTROLLED=no
7 IPADDR=[IPADDR of Server]
8 NETMASK=[netmask]
9 IPV6INIT=no

• The configuration for eth1 and eth2 is similar; a sketch of a possible ifcfg-eth1 follows after this list.
Please keep in mind that eth1 is the GPFS network interface (gpfsnode01) and eth2 is the HANA
network interface (hananode01).
• Edit /etc/hosts and add IP address and full name of your server, IP and name of gpfsnode01, and
IP and name of hananode01.
• Reboot the VM.
• After reboot continue with installation as described in section 6.6: Phase 3 on page 59.
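As referenced in the list above, a minimal sketch of a matching ifcfg-eth1 for the GPFS network interface, following the same pattern as the ifcfg-eth0 example (the address values are placeholders for your gpfsnode01 settings):

DEVICE=eth1
TYPE=Ethernet
USERCTL=no
ONBOOT=yes
BOOTPROTO=none
NM_CONTROLLED=no
IPADDR=[IPADDR of gpfsnode01]
NETMASK=[netmask]
IPV6INIT=no

The ifcfg-eth2 file for the HANA interface (hananode01) follows the same pattern.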


12 Upgrading the Hardware Configuration

Note
Please note that this chapter may differ for special setups like DR. This chapter is about
standard appliance configurations.
There are several possibilities to upgrade IBM appliances. You can either upgrade the RAM of your
appliance (scale-up) or you can add servers to create or increase the size of a cluster (scale-out).
Table 48: RAID array and RAID controller overview on page 146 lists defined models according to
number of CPUs, memory, and number of RAID arrays.
An upgrade from 4U chassis (x3850 X6) to 8U chassis (x3950 X6) is possible – with some extra efforts.
Upgrades from 2 CPU sockets to 4, and from 4 to 8 sockets are possible. Please note that changes to the
PCI-e slot assignment (section 4.4: Card Placement on page 15) are required.
When scaling out a stand-alone installation (single server) to a cluster without changing the RAM it might
be necessary to add additional storage to the servers. Please note the different lines for stand-alone and
scale-out that might list different numbers of RAID arrays. Additional storage can mean either to add
9 HDDs to an existing storage expansion, or to add a new storage expansion, or (only for 8U chassis)
to add a second internal M5210 RAID controller. If your upgrade path requires new RAID controllers
please follow the instructions in section 4.4: Card Placement on page 15).

12.1 Power Policy Configuration

Unless specified to manufacturing, systems shipped from the factory have default settings that may not
meet customer desired settings. It is strongly recommended that during pre-installation setup, or after
installing additional hardware options, the power policy and power management selections should be
checked to ensure:
• Sufficient power is available for the configuration
• The desired correct power redundancy and throttling settings have been selected
Note
Failure to properly set values can prevent the system from booting or log error events.
For more information on how to perform this task, refer to section ’Setting power supply power policy
and system power configurations’ of the System x3850 X6 and x3950 X6 Installation and Service Guide22 .

12.2 Reboot Behavior

When installing or performing upgrades, the operator should be prepared to expect multiple reboots
during the POST process as the system performs the required configuration and setting changes. A lack
of understanding reboot behavior could cause the operator to suspect bad or misbehaving hardware or
firmware and result in interrupting the required process. Interrupting the process will result in increased
time to complete the installation and may require service depending on what actions the operator has
performed improperly.
The number of reboots will vary depending upon the type (HW vs. FW) and number of changes. Firmware
changes (primary bank, secondary bank, both, option) have the most effect, and the number of reboots may be as high as seven. The
number and size of installed memory DIMMs affects the time between reboots, not the number.

22 http://publib.boulder.ibm.com/infocenter/systemx/documentation/topic/com.ibm.sysx.3837.doc/nn1hu_install_and_service_guide.pdf


Chassis    CPUs  Usage       Memory       IA*  EA**  M5120/M5225  Note

x3850 X6   2     Standalone  128-512GB    1    0     0
                 Scaleout    256GB        1    0     0            [1]
                 Scaleout    512GB        1    1     0            [1]
           4     Standalone  256-512GB    1    0     0
                 Standalone  768-1024GB   1    1     1
                 Standalone  1.5-2TB      1    1     1            [2]
                 Standalone  3-4TB        1    2     1            [4]
                 Standalone  6TB          1    3     2            [3]
                 Scaleout    512-1024GB   1    1     1
                 Scaleout    1.5TB        1    2     1            [4]
                 Scaleout    2TB          1    2     1            [4]
                 Scaleout    3TB          1    3     2            [3]
                 Scaleout    4TB          1    4     2            [3]
                 Scaleout    6TB          1    5     3            [3]
x3950 X6   4     Standalone  256-512GB    1    0     0
                 Standalone  768-1024GB   2    0     0
                 Standalone  1.5-2TB      2    0     0            [2]
                 Scaleout    512-1024GB   2    0     0
           8     Standalone  512GB        1    0     0
                 Standalone  1-2TB        2    0     0
                 Standalone  3-4TB        2    1     1            [2]
                 Standalone  6TB          2    2     1            [2]
                 Standalone  8TB          2    2     1            [3]
                 Standalone  12TB         2    5     3            [3]
                 Scaleout    1TB          2    0     0
                 Scaleout    2TB          2    1     1
                 Scaleout    4TB          2    3     2            [4]
                 Scaleout    6TB          2    5     3            [2]
                 Scaleout    8TB          2    6     3            [3]
                 Scaleout    12TB         2    10    5            [3]

Table 48: RAID array and RAID controller overview


* IA = Number of RAID arrays on Internal M5210 RAID controllers (excluding the RAID array for the OS).
** EA = Number of RAID arrays on External M5120/M5225 RAID controllers.
[1] = up to 4 nodes only
[2] = For Suite on HANA only, not for Datamart and BW.
[3] = Not approved with SAP HANA
[4] = For non-productive use only under relaxed HW requirements.


Note
Before adding or removing any hardware, remove AC power and wait for the LCD display
and all Light Emitting Diodes (LEDs) to turn off.
For more information on this topic and to see a reboot guideline chart, refer to RETAIN tip MIGR-5096873 [23].

12.3 Adding storage

12.3.1 Adding storage via EXP2524

Depending on your upgrade path, you have the following options:


• Add 9 HDDs to an already attached EXP.
• Attach a new EXP to the server and insert 9 (for 1 RAID5) or 18 HDDs (for 2 RAID5s) and 2
SSDs.
• Attach a new EXP to the server and insert 9 (for 1 RAID5) or 18 HDDs (for 2 RAID5s) and 2
SSDs, or install 2 additional SSDs into the 1st EXP for CacheCade RAID1 [24].
Please note: you can also configure RAID6 on the EXPs; you then need one more HDD per RAID array,
i.e. 10 or 20 HDDs per EXP, respectively.

Note
All steps – except the installation of a new RAID controller – can be executed without
downtime.
1. Install the M5120/M5225 in the server. (Skip this step when just adding storage to an existing
EXP.)
2. Install the HDDs and SSDs in the EXP. (When just adding storage, you will only add HDDs and
no SSDs).
3. Connect the EXP to power and via SAS cable to the RAID controller. (Skip this step when just
adding storage to an existing EXP.)
4. 12.3.3: Configure RAID array(s) on page 148.
5. 12.3.8: Configuring GPFS on page 150.

12.3.2 Adding storage on second internal M5210 controller

The second M5210 will be connected to 6 HDDs for a RAID5 and 2 SSDs for CacheCade.
1. Install the M5210 in the server.
2. Install the HDDs and SSDs.
3. 12.3.3: Configure RAID array(s) on page 148.
4. 12.3.8: Configuring GPFS on page 150.

23 http://www-947.ibm.com/support/entry/portal/docdisplay?lndocid=migr-5096873
24 For details on hardware configuration and setup see the Operations Guide for X6 based models, section CacheCade RAID1 Configuration


12.3.3 Configure RAID array(s)

Note
Appliance versions 1.8.80-12 and later come with the tool saphana-raid-config.py. Use the
following three commands instead of the manual configuration described in the next chapters.
Execute this command to adjust the CacheCade settings:
saphana-raid-config.py -c
Execute this command to configure the unconfigured HDDs into RAID arrays:
saphana-raid-config.py -u
Execute this command to activate the CacheCade:
saphana-raid-config.py -c
Now continue with 12.3.8: Configuring GPFS on page 150.
The command line tool storcli is installed on your appliance. It will be used to configure the RAIDs.
Note
All commands were tested with storcli version 1.07.07. Other versions’ syntax may vary.
Look in the output of storcli64 /call show for the controller with the unconfigured drives (UGood).
The actual enclosure IDs (EID), slot numbers (Slt), and ID of the controller may vary in your setup.
1 :
2 Controller = 1
3 Status = Success
4 Description = None
5

6 Product Name = ServeRAID M5120


7 :
8 -------------------------------------------------------------------------
9 EID:Slt DID State DG Size Intf Med SED PI SeSz Model Sp
10 -------------------------------------------------------------------------
11 8:1 18 UGood - 371.597 GB SAS SSD N Y 512B TXA2D20400GA6I U
12 8:2 19 UGood - 371.597 GB SAS SSD N Y 512B TXA2D20400GA6I U
13 8:3 9 UGood - 1.089 TB SAS HDD N Y 512B HUC101212CSS60 U
14 8:4 10 UGood - 1.089 TB SAS HDD N Y 512B ST1200MM0007 U
15 8:5 11 UGood - 1.089 TB SAS HDD N Y 512B ST1200MM0007 U
16 8:6 12 UGood - 1.089 TB SAS HDD N Y 512B HUC101212CSS60 U
17 8:7 13 UGood - 1.089 TB SAS HDD N Y 512B ST1200MM0007 U
18 8:8 14 UGood - 1.089 TB SAS HDD N Y 512B HUC101212CSS60 U
19 8:9 15 UGood - 1.089 TB SAS HDD N Y 512B ST1200MM0007 U
20 8:10 16 UGood - 1.089 TB SAS HDD N Y 512B HUC101212CSS60 U
21 8:11 17 UGood - 1.089 TB SAS HDD N Y 512B HUC101212CSS60 U
22 -------------------------------------------------------------------------
23 :

Create the RAID5, where 8:3-11 is an example list of the HDDs used. It is following the scheme
<Enclosure Device ID>:<Slot Number range>. /c1 stands for controller 1.
1 storcli64 /c1 add vd type=raid5 drives=8:3-11 wb ra cached pdcache=off strip=64

If you have to configure a second RAID5 array, configure it accordingly.
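For illustration only (the drive slots shown are hypothetical and will differ in your enclosure), a second array on the same controller would be created with identical syntax:

storcli64 /c1 add vd type=raid5 drives=8:12-20 wb ra cached pdcache=off strip=64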


12.3.4 Deciding for a CacheCade RAID Level

You can configure the CacheCade RAID arrays either with RAID1 or RAID0. Depending on the hardware
setup you have to decide which RAID level you have to configure.
• 1 M5210: RAID0
• 1 M5210 + 1 M5120/M5225 (with 2 SSDs): RAID0
• 1 M5210 + 1 M5120/M5225 (with 4 SSDs): RAID1
• 1 M5210 + 2 or more M5120/M5225: RAID1
• 2 M5210: RAID1
Please keep in mind that all CacheCade VDs must have the same RAID level. This means that you have
to recreate existing CacheCade arrays that have the wrong RAID level.

12.3.5 Configuring RAID array when CacheCade is not yet configured

Create the CacheCade device, where assignvds=X is the RAID 5 (with X as the Logical/Virtual Drive
ID). If you created 2 RAID5 arrays, use assignvds=X,Y to assign the CacheCade VD to both arrays.
8:1-2 is an example list of SSDs used.
To decide for the RAID level (raidX) see the previous section.
1 storcli64 /c1 add vd cachecade type=raidX drives=8:1-2 wb assignvds=0

Adjust settings of the CacheCade device, where /vX is the CacheCade VD (with X as the Logical/Virtual
Drive ID):
1 storcli64 /c1/v1 set rdcache=ra iopolicy=cached wrcache=wb

12.3.6 Configuring RAID array with existing CacheCade

When you added storage to an existing EXP the CacheCade VD is already configured.
Assign the CacheCade VD to the newly created RAID5 array, where /cX is the controller, and /vX the
RAID5 array:
1 storcli64 /c1/v2 set ssdcaching=on

12.3.7 Changing the CacheCade RAID Level

To change the RAID level of an existing CacheCade VD you have to delete and recreate the CacheCade
VD.
At first, find the CacheCade VD ID and the slots of the SSDs. Use the following command, where /cX
is the RAID controller.
1 storcli64 /c0 show

Now delete the CacheCade VD, where /cX is the RAID controller and /vX is the ID of the CacheCade
VD.
1 storcli64 /c0/v1 delete cachecade


Create the deleted CacheCade again, where /cX is the RAID controller and drives=12:1-2 is an example
list of SSD drives used.
1 storcli64 /c2 add vd cachecade type=raid1 drives=12:1-2 wb

Adjust the settings of the CacheCade VD, where /cX is the RAID controller and /vX is the ID of the
newly created CacheCade VD.
1 storcli64 /c2/v1 set rdcache=ra iopolicy=cached wrcache=wb

12.3.8 Configuring GPFS

Find the block device that belongs to the newly created RAID array. mmlsnsd -X, lsscsi, and lsblk
may be helpful.
Find the name of the new NSD(s). For example: If you are on gpfsnode01, execute mmlsnsd | grep
gpfsnode01 to find out the names that are already in use for the existing NSDs.
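A minimal sketch of this discovery step (device and node names are examples and will differ on your system):

lsscsi                        # list SCSI devices with their /dev names
lsblk                         # list block devices and their sizes
mmlsnsd -X                    # map existing NSDs to their local device names
mmlsnsd | grep gpfsnode01     # NSD names already in use on this node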
Create a stanza file (/var/mmfs/config/disk.list.data.gpfsnodeZZ.new) containing the information
about the new GPFS NSD(s). Repeat this block for all newly created RAID arrays accordingly. ZZ is
the node number (e.g. 01 in gpfsnode01).
1 %nsd: device=/dev/sdX
2 nsd=dataYYnodeZZ
3 servers=gpfsnodeZZ
4 usage=dataAndMetadata
5 failureGroup=10ZZ
6 pool=system

Execute
1 mmcrnsd -F /var/mmfs/config/disk.list.data.gpfsnodeZZ.new -v no
2 mmadddisk sapmntdata -F /var/mmfs/config/disk.list.data.gpfsnodeZZ.new -v no

Attention
The following command must only be executed on stand-alone configurations. Do not execute
it in a cluster environment!
mmrestripefs sapmntdata -b
This will balance the data between the used and unused disks equally.
Change the GPFS quotas to match the new requirements. Run the quota calculator and you will see a
result like this:
1 # saphana-quota-calculator.sh
2 Please set the Shared quota to 8187 GB
3 Please set the Data quota to 3072 GB
4 Please set the Log quota to 1024 GB
5

6 Use the following command(s) to set the quota(s)


7 mmsetquota sapmntdata:hanadata --block 3072G:3072G
8 mmsetquota sapmntdata:hanalog --block 1024G:1024G
9 mmsetquota sapmntdata:hanashared --block 8187G:8187G


12.4 Adding memory

Note
The installation of additional memory requires a system downtime.
When the customer decides on a scale-up, i.e. adding RAM to the server(s), you have to follow the
memory DIMM placement rules for IBM X6 servers to get the best performance. The DIMMs must be
distributed equally over all CPU books – each CPU book must contain the same number of DIMMs in the
same slots.
Tables 49: x3850 X6 Memory DIMM Placement on page 151, and 50: x3950 X6 Memory DIMM Place-
ment on page 151 show which slots must be populated for specific configurations. The number of memory
DIMMs can be computed by "RAM size"/"DIMM size".
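As a simple worked example: a server with 1024 GB of RAM populated with 32 GB DIMMs needs 1024 / 32 = 32 DIMMs, so the "32" DIMMs-per-server column in the tables below shows which slots to populate.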

                      2 Sockets                    4 Sockets
DIMMs per server      8    16   24   32   48      16   32   48   64   96
DIMM Slots 9, 6       ✓    ✓    ✓    ✓    ✓       ✓    ✓    ✓    ✓    ✓
DIMM Slots 1, 10      ✓    ✓    ✓    ✓    ✓       ✓    ✓    ✓    ✓    ✓
DIMM Slots 15, 24     ✗    ✓    ✓    ✓    ✓       ✗    ✓    ✓    ✓    ✓
DIMM Slots 19, 16     ✗    ✓    ✓    ✓    ✓       ✗    ✓    ✓    ✓    ✓
DIMM Slots 8, 5       ✗    ✗    ✓    ✓    ✓       ✗    ✗    ✓    ✓    ✓
DIMM Slots 2, 11      ✗    ✗    ✓    ✓    ✓       ✗    ✗    ✓    ✓    ✓
DIMM Slots 14, 23     ✗    ✗    ✗    ✓    ✓       ✗    ✗    ✗    ✓    ✓
DIMM Slots 20, 17     ✗    ✗    ✗    ✓    ✓       ✗    ✗    ✗    ✓    ✓
DIMM Slots 7, 4       ✗    ✗    ✗    ✗    ✓       ✗    ✗    ✗    ✗    ✓
DIMM Slots 3, 12      ✗    ✗    ✗    ✗    ✓       ✗    ✗    ✗    ✗    ✓
DIMM Slots 13, 22     ✗    ✗    ✗    ✗    ✓       ✗    ✗    ✗    ✗    ✓
DIMM Slots 21, 18     ✗    ✗    ✗    ✗    ✓       ✗    ✗    ✗    ✗    ✓
(✓ = slots populated, ✗ = slots left empty)

Table 49: x3850 X6 Memory DIMM Placement

                      4 Sockets                    8 Sockets
DIMMs per server      16   32   48   64   96      32   64   96   128  192
DIMM Slots 9, 6       ✓    ✓    ✓    ✓    ✓       ✓    ✓    ✓    ✓    ✓
DIMM Slots 1, 10      ✓    ✓    ✓    ✓    ✓       ✓    ✓    ✓    ✓    ✓
DIMM Slots 15, 24     ✗    ✓    ✓    ✓    ✓       ✗    ✓    ✓    ✓    ✓
DIMM Slots 19, 16     ✗    ✓    ✓    ✓    ✓       ✗    ✓    ✓    ✓    ✓
DIMM Slots 8, 5       ✗    ✗    ✓    ✓    ✓       ✗    ✗    ✓    ✓    ✓
DIMM Slots 2, 11      ✗    ✗    ✓    ✓    ✓       ✗    ✗    ✓    ✓    ✓
DIMM Slots 14, 23     ✗    ✗    ✗    ✓    ✓       ✗    ✗    ✗    ✓    ✓
DIMM Slots 20, 17     ✗    ✗    ✗    ✓    ✓       ✗    ✗    ✗    ✓    ✓
DIMM Slots 7, 4       ✗    ✗    ✗    ✗    ✓       ✗    ✗    ✗    ✗    ✓
DIMM Slots 3, 12      ✗    ✗    ✗    ✗    ✓       ✗    ✗    ✗    ✗    ✓
DIMM Slots 13, 22     ✗    ✗    ✗    ✗    ✓       ✗    ✗    ✗    ✗    ✓
DIMM Slots 21, 18     ✗    ✗    ✗    ✗    ✓       ✗    ✗    ✗    ✗    ✓
(✓ = slots populated, ✗ = slots left empty)

Table 50: x3950 X6 Memory DIMM Placement

After the installation of additional memory, SAP HANA's global allocation limit must be reconfigured.


12.5 Adding CPU Books

Note
The installation of additional CPU books requires a system downtime.
The following upgrade paths are possible:
• x3850 X6, 2 sockets → x3850 X6, 4 sockets
• x3950 X6, 4 sockets → x3950 X6, 8 sockets
• x3850 X6, 4 sockets → x3950 X6, 8 sockets, including the exchange of the 4U chassis to a 8U chassis
• x3850 X6, 4 sockets → x3950 X6, 4 sockets, including the exchange of the 4U chassis to a 8U chassis
Follow these steps to add additional CPU books to a server:
1. Disable the GPFS auto-mount for your GPFS filesystems. If you only have the standard GPFS
filesystem the following command is enough. If you have more GPFS filesystems, change the
configuration for them accordingly.
1 mmchfs sapmntdata -A no

2. Power off the machine.


3. Place the new CPU books in the server. Please make sure that the memory DIMMs are placed
correctly in the CPU books. (See 12.4: Adding memory on page 151.)
4. Adapt the PCI-e card placement according to the tables in section 4.4: Card Placement on page
15.
5. Power on the machine.
6. On SLES for SAP: Save the file /etc/udev/rules.d/71-ibm-saphana-persistent-net.rules
to another location.
On RHEL: Save the file /etc/udev/rules.d/99-ibm-saphana-persistent-net.rules to another
location.
7. Execute
1 saphana-udev-config.sh -sw

8. Reboot the machine.


9. Review the network settings.
10. Enable the GPFS auto-mount option for your GPFS filesystems again.
1 mmchfs sapmntdata -A yes

11. Mount the GPFS filesystem by hand.


1 mmmount sapmntdata

12. Finally, start the HANA database.


13 Software Updates

13.1 Warning

Please be careful with updates of the software stack. Update software and driver components only for a
good reason, either because you are affected by a bug or because you have a security concern, and only
after Lenovo or SAP support has advised you to upgrade, or after requesting approval from support via the
SAP OSS ticket system on the queue BC-OP-LNX-IBM. Be conservative with updates, as they may affect
the proper operation of your SAP HANA appliance; the System x SAP HANA Development team
does not test every released patch or update.

13.2 Update Variants

This subsection gives a general overview of how updates should be applied. It then presents two ways
to update a cluster environment: disruptively, with a downtime, or rolling, where one node is updated
at a time and then re-added to the cluster.
Before performing a rolling update (non-disruptive one node at a time update) in a cluster environment
make sure that your cluster is in good health and all server nodes and storage devices are running.

13.2.1 General per node update procedure

This is the generic version for any kind of updates which require a system restart.
1. (on the target node) Check GPFS cluster health
Before performing any updates on any node, verify that the cluster is in a sane state. First check
that all nodes are running with the command
1 # mmgetstate -a

and check that all nodes are active, then verify that all disks are active
1 # mmlsdisk sapmntdata -e

The disks on the node to be taken down do not need to be in the up state, but make sure that all
other disks are up.
Warning
If disks of more than one server node are down, the file system will be shut down causing
all other SAP HANA nodes to fail.
2. (on the target node) Shutdown SAP HANA
Shut down SAP HANA and the sapstartsrv daemon via
1 # service sapinit stop

Verify that SAP HANA and sapstartsrv are not running anymore:
1 # ps ax | grep sapstart
2 # ps ax | grep hdb

No processes should be found, if any processes are found please retry stopping SAP HANA.


3. (on the target node) Unmount the GPFS file system


Unmount the shared file system locally
1 # mmumount sapmntdata

and take care that no open process is preventing the file system from unmounting. If that happens
use
1 # lsof /sapmnt

to find processes still accessing the file system, e.g. running shells (root, <SID>adm, etc.). Other
nodes within the cluster can still mount the shared file system.
4. Shutdown GPFS
1 # mmshutdown

5. Perform upgrades
Do now the necessary updates.
6. Restart the system
Restart the server if necessary. GPFS and SAP HANA should start automatically during reboot.
Skip step 7.
7. Restart GPFS
If you did not restart the whole server in step 6, start GPFS
1 # mmstartup

8. Mount the file system if not already mounted.


You may mount the file system after starting GPFS
1 # mmmount sapmntdata /sapmnt

9. Start SAP HANA


1 # service sapinit start

10. (on any node) Verify GPFS disks


Verify all GPFS disks are active again
1 # mmlsdisk sapmntdata -e

If any disks are down, restart them with the command


1 # mmchdisk sapmntdata start -a

and check the disk status again.


11. (on any node) GPFS Restripe
Start a restripe so that all data is replicated proper again
1 # mmrestripefs sapmntdata -r

Warning
Currently the FPO feature used in the appliance is not compatible with file system
rebalancing. Do not use the -b parameter!
12. Continue with the next node


13.2.2 Disruptive Cluster Update

In the disruptive cluster update scenario, one would shut down the whole cluster and apply all updates.
This will cause a downtime.

13.2.3 Full Cluster Rolling Update

This update procedure applies when you are performing updates which either need a server restart (like
a Linux kernel update) or need a restart of specific server software (e.g. GPFS) on the affected nodes.
The idea of a rolling update is to update only one server at a time and, after the server is back online in
the cluster, proceed with the next node in the same way. By doing so, you can avoid downtimes.
For updating the SAP HANA software in a SAP HANA cluster, please refer to the SAP HANA Technical
Operations Manual. This can be done independently of other updates.

13.3 Linux Kernel Update

At the time this document was created, kernel version 3.0.101-0.8.1 was mandatory for SLES for SAP 11 SP3.
Please consult SAP to check whether a higher version is now recommended.
Warning
If the Linux kernel is updated, it is mandatory to recompile the GPFS portability layer kernel
module. Otherwise the system will not work anymore!

13.3.1 SLES Kernel Update Methods

There are multiple methods to update a SLES for SAP installation. Possible update sources include
kernel RPMs copied onto the target server, a corporate-internal SLES update server/repository, or
Novell's update server via the Internet (requires registration of the installation). Possible methods
include command line based tools like rpm -Uvh or CLI/X11 based GUI tools like SUSE's YaST2.
Please refer to Novell’s official SLES documentation. A good starting point is the chapter "Installing
or Removing Software" in the SLES 11 Deployment guides obtainable from https://www.suse.com/
documentation/sles11/.
If you decide to update from RPM files, you need to update at least the following files:
• kernel-default-<kernelversion>.x86_64.rpm
• kernel-default-base-<kernelversion>.x86_64.rpm
• kernel-default-devel-<kernelversion>.x86_64.rpm
• kernel-source-<kernelversion>.x86_64.rpm
• kernel-syms-<kernelversion>.x86_64.rpm
• kernel-trace-devel-<kernelversion>.x86_64.rpm
• kernel-xen-devel-<kernelversion>.x86_64.rpm
Updating using YaST is recommended over updating from files.


13.3.2 RHEL Kernel Update Methods

There are multiple methods to update a RHEL installation. Possible update sources include kernel
RPMs copied onto the target server, a corporate-internal RHEL update server/repository, or Red Hat's
update server via the Internet (requires registration of the installation).
Please refer to Red Hat’s official RHEL documentation. A good starting point is the Red Hat Deployment
Guide25 (chapter 27 "Manually Upgrading The Kernel").
If you decide to update from RPM files, you need to update at least the following files:
• kernel-<kernelversion>.el6.x86_64.rpm
• kernel-devel-<kernelversion>.el6.x86_64.rpm
• kernel-firmware-<kernelversion>.el6.noarch.rpm
• kernel-headers-<kernelversion>.el6.x86_64.rpm
There are two sources for Kernel upgrades on Red Hat Linux: http://www.redhat.com/security/
updates/, and http://www.redhat.com/docs/manuals/RHNetwork/
Download the kernel RPMs necessary for your system. Red Hat recommends to keep the old kernel
packages as a fallback in case there are problems with the new kernel.
Updating using repositories is recommended over updating from files.

13.3.3 Kernel Update Procedure

Step Title ✓
1 Stop SAP HANA
2 Unmount GPFS file systems, stop GPFS
3 Update Kernel Packages
4 Build new GPFS portability layer
5 Restart GPFS & check GPFS status
6 Start SAP HANA

Table 51: Upgrade GPFS Portability Layer Checklist

1. Stop SAP HANA and all other SAP software running in the whole cluster or on the single node
cleanly. Log in as root on each node and execute
1 # service sapinit stop

Older versions of the appliance may not have this script, so please stop SAP HANA and other SAP
software manually.
Stop of SAP HANA is documented in the SAP HANA administration guidelines at the SAP Help
Portal26 or SAP Service Marketplace27 .
Make sure no process has files open on /sapmnt, you can test that with the command:
1 # lsof /sapmnt

25 https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Deployment_Guide/index.html
26 https://help.sap.com/hana
27 https://service.sap.com/hana


2. Unmount GPFS file systems, stop GPFS


1 # mmumount all
2 # mmshutdown

3. Update Kernel Packages


Please update now the kernel by your preferred method.
4. Build new portability layer
1 # cd /usr/lpp/mmfs/src/
2 # make Autoconfig
3 # make World
4 # make InstallImages

5. Restart GPFS & check GPFS status


1 # mmstartup
2 # mmmount all
3 # mmgetstate
4 # mmlsmount all

6. Start SAP HANA using


1 # service sapinit start

Older versions of the appliance may not have this script, so please start SAP HANA and other SAP
software manually as documented in the SAP HANA administration guidelines at the SAP Help
Portal or SAP Service Marketplace.

13.4 Updating GPFS

Note
Upgrading GPFS requires a rebuild of the portability layer. The same applies if the Linux
kernel was upgraded.

13.4.1 GPFS Settings

Please check this section to get information about the required settings regarding a specific GPFS version.

13.4.2 enableLinuxReplicatedAIO=yes

This feature improves GPFS performance in clusters significantly. To enable it, execute:


1 # mmchconfig enableLinuxReplicatedAIO=yes -I

13.4.3 DR-clusters

DR-clusters should also set the following for the same reason
1 # mmchconfig enableRepWriteStream=false -I
2 mmchconfig: Attention: Unknown attribute specified: enableRepWriteStream. Press the ENTER key to continue.

Type 999 before pressing enter, otherwise the value will not be changed


13.4.4 Disruptive GPFS Cluster Update

Step Title ✓
1 Stop SAP HANA
2 Unmount GPFS file systems, stop GPFS
3 Upgrade to new GPFS Version
4 Build new GPFS portability layer
5 Update cluster and file system information
6 Restart GPFS, mount GPFS file systems
7 Check Status of GPFS
8 Start SAP HANA

Table 52: Disruptive GPFS Cluster Update Checklist

1. Stop SAP HANA and all other SAP software running in the whole cluster or on the single node
cleanly. Log in as root on each node and execute
1 # service sapinit stop

Older versions of the appliance may not have this script, so please stop SAP HANA and other SAP
software manually.
Stop of SAP HANA is documented in the SAP HANA administration guidelines at the SAP Help
Portal28 or SAP Service Marketplace29 .
Make sure no process has files open on /sapmnt, you can test that with the command:
1 # lsof /sapmnt

2. Unmount GPFS file systems, stop GPFS


1 # mmumount all -a
2 # mmshutdown -a

3. Upgrade to new GPFS version. This step may be skipped if only the portability layer needs to be
re-compiled due to a Linux kernel update. (Replace <newgpfsversion> with GPFS version number
of the update.)
1 # rpm -Uvh gpfs.base-<newgpfsversion>.x86_64.update.rpm
2 # rpm -Uvh gpfs.docs-<newgpfsversion>.noarch.rpm
3 # rpm -Uvh gpfs.gpl-<newgpfsversion>.gpl.noarch.rpm
4 # rpm -Uvh gpfs.msg.en_US-<newgpfsversion>.noarch.rpm

4. Build new portability layer


1 # cd /usr/lpp/mmfs/src/
2 # make Autoconfig
3 # make World
4 # make InstallImages

5. Update cluster and file system information to current GPFS version

28 https://help.sap.com/hana
29 https://service.sap.com/hana


1 # mmchconfig release=LATEST
2 # mmstartup -a
3 # mmchfs sapmntdata -V full
4 # mmmount all -a

6. Check Status of GPFS


1 # mmgetstate -a
2 # mmlsmount all -L
3 # mmlsconfig | grep minReleaseLevel

7. Start SAP HANA using


1 # service sapinit start

Older versions of the appliance may not have this script, so please start SAP HANA and other SAP
software manually as documented in the SAP HANA administration guidelines at the SAP Help
Portal or SAP Service Marketplace.

13.4.4.1 Rolling GPFS Upgrade per node procedure

To minimize downtimes, please distribute the GPFS update package (GPFS-3.X.0.xx-x86_64-Linux.tar.gz) on all nodes and extract the tar-ball before starting.
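A minimal sketch for distributing and unpacking the package, assuming the GPFS node names gpfsnode02 to gpfsnode04 and a staging directory /tmp/gpfs-update (both are assumptions that must be adapted to your environment):

1 # for node in gpfsnode02 gpfsnode03 gpfsnode04; do
2 ssh ${node} mkdir -p /tmp/gpfs-update
3 scp GPFS-3.X.0.xx-x86_64-Linux.tar.gz ${node}:/tmp/gpfs-update/
4 ssh ${node} 'cd /tmp/gpfs-update && tar -xzf GPFS-3.X.0.xx-x86_64-Linux.tar.gz'
5 done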
1. Check GPFS cluster health
Before performing any updates on any node, verify that the cluster is in a sane state. First check
that all nodes are running with the command
1 # mmgetstate -a

and check that all nodes are active, then verify that all disks are active:
1 # mmlsdisk -e

The disks on the node to be taken down do not need to be in the up state, but make sure that all
other disks are up.
Warning
If disks of more than one server node are down, the file system will be shut down causing
all other SAP HANA nodes to fail.
2. Shutdown SAP HANA
Shutdown the SAP HANA and the sapstartsrv daemon via
1 # service sapinit stop

Verify that SAP HANA, sapstartsrv and any other process accessing /sapmnt are not running
anymore:
1 # lsof /sapmnt

No processes should be found, if any processes are found please retry stopping SAP HANA and any
other process accessing /sapmnt.
3. Unmount the GPFS file system
Unmount locally the shared file system


1 # mmumount sapmntdata

and take care that no open process is preventing the file system from unmounting. If that happens
use
1 # lsof /sapmnt

to find processes still accessing the file system, e.g. running shells (root, <SID>adm, etc.); close
them and retry. Other nodes within the cluster can still mount the shared file system.
4. Shutdown GPFS
1 # mmshutdown

GPFS should unload its kernel modules during its shutdown, so check the output of this command.
5. Update GPFS Software
Change to the directory where you extracted the GPFS Update package GPFS-3.X.0.xx-x86_
64-Linux.tar.gz where X and xx denote the desired target GPFS version. Execute the following
commands
1 # rpm -Uvh gpfs.base-3.X.0-xx.x86_64.update.rpm
2 # rpm -Uvh gpfs.docs-3.X.0-xx.noarch.rpm
3 # rpm -Uvh gpfs.gpl-3.X.0-xx.gpl.noarch.rpm
4 # rpm -Uvh gpfs.msg.en_US-3.X.0-xx.noarch.rpm

Afterwards the GPFS Linux kernel module must be recompiled:


1 # cd /usr/lpp/mmfs/src/
2 # make Autoconfig
3 # make World
4 # make InstallImages

6. Restart GPFS
1 # mmstartup

Verify that the node started up correctly


1 # mmgetstate

During the startup phase the node is shown in the state arbitrating; this changes to active when
GPFS has completed startup.
7. Mount file systems
Mount the file system after starting GPFS:
1 # mmmount sapmntdata /sapmnt

8. (on the target node) Start SAP HANA


1 # service sapinit start

9. (on any node) Verify GPFS disks


Verify all GPFS disks are active again:
1 # mmlsdisk sapmntdata -e

If any disks are shown as down, restart them with the command


1 # mmchdisk sapmntdata start -a

and check the disk status again.


10. (on any node) GPFS Restripe
Start a restripe so that all data is properly replicated again
1 # mmrestripefs sapmntdata -r

Warning
Currently the FPO feature used in the appliance is not compatible with file system
rebalancing. Do not use the -b parameter!
11. Continue with the next node
After all nodes are updated you can update the GPFS cluster configuration and the GPFS "on disk
format" (the data structures written to disk) to the newer version. Not all updates require these steps,
but it is safe to perform them in any case. This update is non-disruptive and can be performed while
the cluster is active.
1. Update the cluster configuration with the newest settings
1 # mmchconfig release=LATEST

2. Update the file system’s on disk format to activate new functionality


1 # mmchfs sapmntdata -V full

Notice that a successful upgrade of the GPFS on disk format to a newer version will make a
downgrade to previous GPFS versions impossible. You can verify the minimum needed GPFS
version with the command
1 # mmlsfs sapmntdata -V

13.5 Upgrading from GPFS 3.5 to 4.1

This section applies to single node and cluster installations. For single node installations only a disruptive
upgrade can be done.
Cluster installations can be upgraded either all at once (disruptive) or node-by-node (rolling).
DR installations can also be upgraded either all at once (disruptive) or node-by-node (rolling). Addi-
tionally, it is possible to upgrade the DR site first and the primary site at a later point. If the DR site
hosts a non-productive SAP HANA instance this approach can be used to verify the new code level in
pre-production.
Note
GPFS 4.1 is only supported with PTF 2 or higher (that is 4.1.0-2).
Make sure you have the required GPFS packages before continuing. GPFS has introduced three editions
with different content. GPFS 4.1 Standard Edition is required (Express is not sufficient). If you have a
gpfs.ext RPM file then you have Standard Edition.
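A quick, hedged way to check the extracted package set is to look for the gpfs.ext RPM in the directory holding the GPFS 4.1 packages (the file name is an example and may differ):

1 # ls gpfs.ext-*.rpm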
Existing GPFS 3.5 clients are entitled to GPFS 4.1 Standard Edition. For further information, including
how to migrate licenses, see GPFS FAQ30
30 http://www-01.ibm.com/support/knowledgecenter/api/content/SSFKCN/com.ibm.cluster.gpfs.doc/gpfs_faqs/gpfsclustersfaq.html#migto41


13.5.1 Disruptive Upgrade from GPFS 3.5 to 4.1

Step Title ✓
1 Stop SAP HANA
2 Unmount GPFS file systems, stop GPFS
3 Remove GPFS 3.5 packages, install GPFS 4.1 packages
4 Build new GPFS portability layer
5 Update cluster and file system information
6 Restart GPFS, mount GPFS file systems
7 Check Status of GPFS
8 Start SAP HANA

Table 53: GPFS Upgrade Checklist

1. Stop SAP HANA and all other SAP software running in the whole cluster or on the single node
cleanly. Log in as root on each node and execute
1 # service sapinit stop

Older versions of the appliance may not have this script, so please stop SAP HANA and other SAP
software manually.
Stopping SAP HANA is documented in the SAP HANA administration guidelines at the SAP Help
Portal31 or SAP Service Marketplace32.
Make sure no process has files open on /sapmnt; you can test this with the command:
1 # lsof /sapmnt

2. Unmount GPFS file systems, stop GPFS processes


1 # mmumount all -a
2 # mmshutdown -a

3. Remove all GPFS 3.5 packages and install new 4.1 packages.
Get a list of all installed GPFS 3.5 packages
1 # rpm -qa | grep gpfs

Remove all GPFS 3.5 packages returned from above command


1 # rpm -e gpfs.base gpfs.docs gpfs.gpl gpfs.msg.en_US

Optionally, also remove a gpfs.gplbin package if you have that installed.
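For example (the exact gpfs.gplbin package name is installation specific; take it from the rpm -qa output and treat the name below as a placeholder):

1 # rpm -qa | grep gpfs.gplbin
2 # rpm -e <gpfs.gplbin-package-name>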


Install GPFS 4.1 packages
1 # rpm -ivh gpfs.base-4.1.0-0.x86_64.rpm
2 # rpm -ivh gpfs.docs-4.1.0-0.noarch.rpm
3 # rpm -ivh gpfs.ext-4.1.0-0.x86_64.rpm
4 # rpm -ivh gpfs.gpl-4.1.0-0.noarch.rpm
5 # rpm -ivh gpfs.gskit-8.0.50-16.x86_64.rpm
6 # rpm -ivh gpfs.msg.en_US-4.1.0-0.noarch.rpm

31 https://help.sap.com/hana
32 https://service.sap.com/hana


Update to GPFS 4.1 PTF 2


1 # rpm -Uvh gpfs.base-4.1.0-2.x86_64.update.rpm
2 # rpm -Uvh gpfs.docs-4.1.0-2.noarch.rpm
3 # rpm -Uvh gpfs.ext-4.1.0-2.x86_64.update.rpm
4 # rpm -Uvh gpfs.gpl-4.1.0-2.noarch.rpm
5 # rpm -Uvh gpfs.msg.en_US-4.1.0-2.noarch.rpm

4. Build new portability layer


1 # cd /usr/lpp/mmfs/src/
2 # make Autoconfig
3 # make World
4 # make InstallImages
5 (optionally) # make rpm

5. Update cluster and file system information to the current GPFS version. Activate the new method of cluster
configuration repository (CCR).
1 # mmstartup -a
2 # mmchconfig release=LATEST
3 # mmchcluster --ccr-enable
4 # mmchfs sapmntdata -V full
5 # mmmount all -a

6. Check Status of GPFS


1 # mmgetstate -a
2 # mmlsmount all -L
3 # mmlsconfig | grep minReleaseLevel

7. Start SAP HANA using


1 # service sapinit start

Older versions of the appliance may not have this script, so please start SAP HANA and other SAP
software manually as documented in the SAP HANA administration guidelines at the SAP Help
Portal or SAP Service Marketplace.

13.5.2 Rolling upgrade per node from GPFS 3.5 to 4.1

To minimize downtime distribute the GPFS 4.1 packages on all nodes before starting.
1. Check GPFS cluster health
Before performing any updates on any node, verify that the cluster is in a sane state. First check
that all nodes are running and active with the command
1 # mmgetstate -a

Then verify that all disks are active


1 # mmlsdisk -e

The disks on the node to be taken down do not need to be in the up state, but make sure that all
other disks are up.


Warning
If disks of more than one server node are down, the file system will be shut down causing
all other SAP HANA nodes to fail.
2. Shutdown SAP HANA
Shutdown the SAP HANA and the sapstartsrv daemon via
1 # service sapinit stop

Verify that SAP HANA, sapstartsrv and any other process accessing /sapmnt are not running
anymore:
1 # lsof /sapmnt

No processes should be found. If any processes are found please retry stopping SAP HANA and all
other processes accessing /sapmnt.
3. Unmount the file system on the node to be upgraded
1 # mmumount sapmntdata

and take care that no open process is preventing the file system from unmounting. If that happens
use
1 # lsof /sapmnt

to find processes still accessing the file system, e.g. running shells (root, <SID>adm, etc.); close
them and retry. Other nodes within the cluster still have /sapmnt mounted.
4. Shutdown GPFS processes on the node to be upgraded
1 # mmshutdown

GPFS unloads its kernel modules during its shutdown, so check the output of this command carefully.
5. Upgrade GPFS to 4.1
Change to the directory where you extracted the GPFS 4.1 packages.
Get a list of all installed GPFS 3.5 packages
1 # rpm -qa | grep gpfs

Remove all GPFS 3.5 packages returned from above command


1 # rpm -e gpfs.base gpfs.docs gpfs.gpl gpfs.msg.en_US

Optionally, also remove a gpfs.gplbin package if you have that installed.


Install GPFS 4.1 packages:
1 # rpm -ivh gpfs.base-4.1.0-0.x86_64.rpm
2 # rpm -ivh gpfs.docs-4.1.0-0.noarch.rpm
3 # rpm -ivh gpfs.ext-4.1.0-0.x86_64.rpm
4 # rpm -ivh gpfs.gpl-4.1.0-0.noarch.rpm
5 # rpm -ivh gpfs.gskit-8.0.50-16.x86_64.rpm
6 # rpm -ivh gpfs.msg.en_US-4.1.0-0.noarch.rpm

Update to GPFS 4.1 PTF 2:


1 # rpm -Uvh gpfs.base-4.1.0-2.x86_64.update.rpm


2 # rpm -Uvh gpfs.docs-4.1.0-2.noarch.rpm
3 # rpm -Uvh gpfs.ext-4.1.0-2.x86_64.update.rpm
4 # rpm -Uvh gpfs.gpl-4.1.0-2.noarch.rpm
5 # rpm -Uvh gpfs.msg.en_US-4.1.0-2.noarch.rpm

Afterwards, the GPFS portability layer must be recompiled:


1 # cd /usr/lpp/mmfs/src/
2 # make Autoconfig
3 # make World
4 # make InstallImages
5 (optional) # make rpm

6. Restart GPFS
1 # mmstartup

Verify that the node started up correctly


1 # mmgetstate

During the startup phase the node is shown in state arbitrating for a short period of time. This
changes to active when GPFS has completed startup successfully.
7. Mount file system
1 # mmmount sapmntdata /sapmnt

8. Start SAP HANA


1 # service sapinit start

9. Verify GPFS disks are active again (this command can be executed on any node)
1 # mmlsdisk sapmntdata -e

If any disks are shown as down, restart them with the command
1 # mmchdisk sapmntdata start -a

and check disk status again.


10. Restore correct replication level (this command can be executed on any node)
Start a restripe so that all data is properly replicated again
1 # mmrestripefs sapmntdata -r

Warning
Do not use the -b parameter!
11. Continue on the next node with step 2 of this procedure
After all nodes have been updated successfully you can update the GPFS cluster configuration and
the GPFS "on disk format" (the data structures written to disk) to the newer version. This update is
non-disruptive and can be performed while the cluster is active.
1. Update the cluster configuration to the newest version
1 # mmchconfig release=LATEST


2. Activate the new method of cluster configuration repository (CCR)


1 # mmchcluster --ccr-enable

3. Update the file system’s on disk format to activate new functionality


1 # mmchfs sapmntdata -V full

Notice that a successful upgrade of the GPFS on disk format to a newer version will make a
downgrade to previous GPFS versions impossible. You can verify the minimum required GPFS
version for a file system with the command
1 # mmlsfs sapmntdata -V

13.6 Update Mellanox Network Cards

You should have received a binary update package, e.g. mlnx_fw_nic_2.0-3.0.0.5_sles11_x86-64.bin. Please note that the version number given here might differ. This package needs to be copied to all nodes you wish to update. It might be necessary to make the file executable:
1 chmod +x mlnx_fw_nic_2.0-3.0.0.5_sles11_x86-64.bin

Then you can start the installation with:


1 ./mlnx_fw_nic_2.0-3.0.0.5_sles11_x86-64.bin --add-kernel-support --enable-affinity

If this step fails, you may have to install the python-devel package from the official SLES or
RHEL repositories.
This will upgrade the driver and firmware of the Mellanox network cards. Please review the output of
the above program for possible errors. After a successful upgrade, a reboot is necessary.

13.7 SAP HANA

Warning
Make sure that the packages listed in Appendix E.5: FAQ #5: Missing RPMs on page 198
are installed on your appliance. An upgrade may fail without them.
Please refer to the official SAP HANA documentation for further steps.


14 System Check and Support


This chapter describes different steps to check the appliance’s health status. The scripts described here
should be updated and executed at regular intervals by a system administrator. The other sections present
additional information and give deeper insight into the system.
Note
SAP Note 1661146 - Lenovo/IBM Check Tool for SAP HANA appliances provides details for
downloading and using the following scripts to catalog the hardware and software configura-
tions and create a set of information to assist service and support of the machine by SAP and
Lenovo.
We highly recommend that an SAP HANA system administrator regularly downloads and
updates these scripts to ensure that the latest support information for the servers is available.

14.1 System Login

The latest version of the Lenovo Solution installation also adds a message of the day that shows the
current status of the GPFS file systems and the memory usage. It is shown once at each login for every
user. The message is created by a cron job that runs once an hour; this means that the information is
not real-time and the system status may have changed in the meantime.
1 ____ _ ____ _ _ _ _ _ _
2 / ___| / \ | _ \ | | | | / \ | \ | | / \
3 \___ \ / _ \ | |_) | | |_| | / _ \ | \| | / _ \
4 ___) / ___ \| __/ | _ |/ ___ \| |\ |/ ___ \
5 |____/_/ \_\_| |_| |_/_/ \_\_| \_/_/ \_\
6

7 IBM Systems Solution for SAP HANA appliance


8

9 See SAP Note 1650046 for maintenance and administration information:


10 https://service.sap.com/sap/support/notes/1650046
11

12 _Regularly_ check the system health!


13 ________________________________________________________________________________
14

15 ! INFO: Last hourly update on Tue Feb 3 16:01:01 CET 2015.


16 ! NOTICE: Memory usage is 3%.
17 ! NOTICE: All quota usages below 90%.
18 ! NOTICE: All GPFS NSDs up and ready.
Listing 1: SSH login screen

14.2 Basic System Check

Included with the installation is a very simple check script that will inform you and the customer that all
the hardware requirements and basic operating system requirements have been met by the guided install.
Using the option -h, you can see the various ways to call the saphana-support-ibm.sh script.

Note
It is highly recommended to work with the latest version of the system check script. You can
find it in SAP Note 1661146 - Lenovo/IBM Check Tool for SAP HANA appliances.

1 [root@server ~]# saphana-support-ibm.sh -h


2 Usage: saphana-support-ibm [OPTIONS]


3 IBM Systems solution for SAP HANA appliance System Checking Tool
4 to check hardware system configuration for IBM and SAP Support teams.
5

6 Options:
7 -c Check system (no log file, default).
8 -s Print out the support information for SAP support.
9 (-s replaces the --support option.)
10 -h Print this information
11

12 Check extensions (only valid in conjunction with -c)


13 -v Verbose. Do not hide messages during check.
14 Recommended after installation.
15 -e Do exhaustive testing with longer running tests.
16 May impact HANA performance during check.
17 Implies -v.
18

19 If using the IBM Advanced Settings Utility from a Virtual Machine


20 -i host The host name of the IBM Management Module (IMM)
Listing 2: Support script usage

An output similar to the following should be reported when you use the option -c (check, which is the
default). If for any reason you receive warnings or errors that you do not understand, please first
try again with the option -v (verbose) and then open, together with the customer, an SAP OSS customer
message with the output of the -s (support) option (see Section 14.3: System Support on page 170)
attached.
1 [root@server ~]# saphana-support-ibm.sh -c
2 ===================================================================
3 # IBM SYSTEM CHECK TOOLVersion 1.8.80-12.1916.9f461e5 -- 2015-01-29
4 # (C) Copyright IBM Corporation 2011-2014
5 # (C) Copyright Lenovo Group Ltd. 2015
6 # Analysis taken on: 20150203-1621
7 ===================================================================
8

9 -------------------------------------------------------------------
10 IBM Systems Solution for SAP HANA appliance Hardware Analysis
11 -------------------------------------------------------------------
12

13 Machine analysis for IBM x3950 X6 -[3837FT4]- [XXXXXXX]


14 IBM Workload Optimized Solution for SAP HANA appliance Model "AC48S1024" OK
15 -------------------------------------------------------------------
16

17 Appliance Solution analysis:


18 ----------
19 IBM System x3950 X6: Workload Optimized System for SAP HANA Model AC48S1024
20 Installed appliance version: 1.8.80-12.1916.9f461e5
21 Installed on: Do 29. Jan 15:00:58 CET 2015
22

23 Operating System Red Hat Enterprise Linux Server release 6.5 (Santiago)
24

25 Installation configuration:
26 ----------
27 Parameter clustered is master
28 Parameter exthostname is ibmhanar65.wdf.sap.corp


29 Parameter cluster_ha_nodes is 1
30 Parameter cluster_nr_nodes is 3
31 Parameter hanainstnr is 65
32 Parameter hanasid is FLO
33 Parameter gpfs_node1 is gpfsnode01 192.168.65.101
34 Parameter gpfs_node2 is gpfsnode02 192.168.65.102
35 Parameter gpfs_node3 is gpfsnode03 192.168.65.103
36 Parameter hana_node1 is hananode01 192.168.165.101
37 Parameter hana_node2 is hananode02 192.168.165.102
38 Parameter hana_node3 is hananode03 192.168.165.103
39 Parameter step is 11
40 -------------------------------------------------------------------
41

42 Hardware analysis:
43 ----------
44 CPU Type: Pentium 4 Intel(R) Xeon(R) CPU E7-8880 v2 @ 2.50GHz OK
45 # of CPUs: 8 cores: 120 threads: 240 OK
46

47 Memory: 1024 GB / Free Memory: 974 GB OK


48

49 ServeRAID: 3 adapters OK
50

51 IBM General Parallel File System (GPFS):


52 ----------
53 GPFS with replication [4.1.0-5] Cluster HANAcluster.gpfsnode01 is active
54 GPFS device /dev/sapmntdata mounted on /sapmnt of size 53599GB
55

56

57 SAP Host Agent Information


58 ==========================
59 /usr/sap/hostctrl/exe/saphostctrl: 720, patch 613, changelist 1492399, linuxx86_64, ←-
,→opt (Apr 25 2014, 23:30:36)
60

61 SAP Host Agent known SAP instances


62 ----------------------------------
63 Inst Info : FLO - 65 - ibmhanar65 - 740, patch 36, changelist 1444691
64

65

66 SAP HANA Instances


67 ==================
68

69

70 SAP HANA Instance FLO/65


71 ------------------------
72 SAP HANA 1.00.80.00 Build 0391861-1510 Revision 80 is installed OK
73

74 SAP HANA FLO Landscape Overview


75 *******************************
76

77 overall host status: ok


78

79

80 General Health checks:


81 ----------
82 NOTE: The following checks are for known problems of the system.
83 See the FAQ section of the IBM - SAP HANA Operations Guide
84 SAP Note 1661146 found at https://service.sap.com/notes
85

86 Only issues will be shown. If there is no output, no check failed.


87 To show suceeded checks, add the parameter -v. Recommended on first run.
88 ----------
89 -------------------------------------------------------------------
90 E N D O F I B M D A T A A N A L Y S I S
91 -------------------------------------------------------------------
92 Removing support script dump files older than 7 days.
Listing 3: Support script output

14.3 System Support

In case of a problem with the Lenovo Systems Solution for SAP HANA Platform Edition, you should
always direct the customer to open an OSS message, whether or not it is an obvious problem with the
hardware. Lenovo, IBM, and SAP have an agreement that all problems with the Lenovo Solution first
go through the SAP support process, where Lenovo L3 support members help the customer determine
the root cause of the problem. If it is determined that there is a problem with the Lenovo Solution,
the Lenovo L3 support person will instruct and guide the customer in opening the correct IBM PMR
and help ensure that the problem receives the appropriate attention.
In order to make this process easier for all involved, Lenovo delivers a special program that can gather
much of the data necessary for an initial support call. Using this script, the customer can help streamline
the support process in order to obtain the fastest and most competent support available.
This script is found in the directory /opt/ibm/saphana/bin and is called saphana-support-ibm.sh. In
order to collect support data, the customer should run this command from the shell as follows:
1 # saphana-support-ibm.sh -s

This script, along with the Linux SAP System Information Tool, can be found in the SAP OSS Notes
1661146 and 618104 respectively. When the SAP System Information Tool is placed in /opt/ibm/
saphana/bin, it will be automatically called from this script and its input will be also collected.

14.4 Additional Tools for System Checks

14.4.1 Lenovo Advanced Settings Utility

Note
X6 based servers and later technology come preinstalled with this utility.
In some cases it might be useful to check the UEFI settings of the HANA servers. Therefore, the
saphana-support-ibm.sh script uses the Lenovo Advanced Settings Utility (ASU), if it is installed, and
prints out warnings if there is a misconfiguration. This check can be enabled via the -e parameter.
Download the latest Linux 64-bit RPM from https://www-947.ibm.com/support/entry/myportal/
docdisplay?lndocid=LNVO-ASU and install the RPM.
Before upgrading the ASU tool, remove the old version. Find the installed version via rpm -qa | grep asu.
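A hedged sketch of such an upgrade (the package and file names are placeholders and depend on the installed and downloaded versions):

1 # rpm -qa | grep asu
2 # rpm -e <old-asu-package>
3 # rpm -ivh <new-asu-package>.rpm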


14.4.2 ServeRAID MegaCli Utility for Storage Management

Note
The MegaCli utility is superseded by the StorCLI utility. When planning to install MegaCli
please consider installing StorCLI instead. Installing both at the same time is also possible.

14.4.3 ServeRAID StorCLI Utility for Storage Management

Note
X6 based servers come preinstalled with this utility.
The saphana-support-ibm.sh script also analyzes the status of the ServeRAID controllers and the
controller-internal batteries to check whether the controllers are in a working and performing state.
For activation of this feature the StorCLI (Command Line) Utility for Storage Management software
must be installed. Go to https://www-947.ibm.com/support/entry/myportal/docdisplay?lndocid=
migr-5092950 and download the file locally and install the RPMs.
Before upgrading the StorCLI tool remove the old version. Find the installed version via rpm -qa |
grep storcli.

Warning
[1.6.60-7]+ With the change to RAID5 based storage configuration, installing the MegaCLI
Utility is even more important as an HDD/SSD failure is not directly visible with standard
GPFS commands until a whole RAID array has failed.

14.4.4 IBM SSD Wear Gauge CLI utility

Note
X6 based servers come preinstalled with this utility.
For models of the Lenovo Solution that come with SSDs (this means first generation SSD models and
second generation XS and S models), it might be useful to check the state of the SSDs.
Go to http://www-947.ibm.com/support/entry/portal/docdisplay?lndocid=MIGR-5090923 and down-
load the latest binary of the IBM SSD Wear Gauge CLI utility (ibm_utl_ssd_cli-<version>_linux_
32-64.bin). Copy it to the machine to be checked.
When upgrading the tool remove existing binaries from /opt/ibm/ssd_cli/.
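For example, to remove a previously copied binary (the wildcard matches the utility's naming pattern shown below):

1 # rm -f /opt/ibm/ssd_cli/ibm_utl_ssd_*_linux_32-64.bin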
Copy the bin file into /opt/ibm/ssd_cli/:
1 # mkdir -p /opt/ibm/ssd_cli/
2 # cp ibm_utl_ssd_*_linux_32-64.bin /opt/ibm/ssd_cli/
3 # chmod u+x /opt/ibm/ssd_cli/ibm_utl_ssd_*_linux_32-64.bin

Execute the binary:


1 # /opt/ibm/ssd_cli/ibm_utl_ssd_*_linux_32-64.bin -u

Sample output:
1 1 PN:68Y7719-40K6897 SN:50301DEW FW:SA03SB6C
2 Percentage of cell erase cycles remaining: 100%
3 Percentage of remaining spare cells: 100%
4 Life Remaining Gauge: 100%


14.5 Getting Support (IBM PMR, SAP OSS)

In case of a failure follow these instructions:


1. Check for hardware failure: The server’s IMM will report hardware incidents. You may also check
the IMM’s Virtual Light Path or the LEDs on the physical server.
• If only a hardware replacement is necessary, take the appropriate steps with IBM.
2. Check the software status: Execute saphana-support-ibm.sh -cv with the latest version of the
support script (see section 14: System Check and Support on page 167). The script will check for
common root causes of failures. Consult the Lenovo SAP HANA Appliance Operations Guide 33 .
• Try to apply suggested solutions by the support script and the Operations Guide.
3. If you could not determine the root cause of the failure or there is no solution provided by the
support script or the Operations Guide, open an SAP OSS ticket. See the Quick Start Guide34 ,
section Getting help and technical assistance for more information.

33 SAP Note 1650046 (SAP Service Marketplace ID required)


34 http://www-947.ibm.com/support/entry/portal/docdisplay?lndocid=MIGR-5087035


15 Backup and Restore of the Primary Partition


This section provides the instructions necessary to create a simple system copy of the base operating
system found on the first hard drive, or primary partition. This image can then be used as a basic
backup/restore solution for the primary partition. Once created, the image should also be transferred to
offline storage to ensure that data does not get lost due to irreparable hard drive failures. The intent
of this section is that the user can have a simple backup and restore solution using the tools available
within Linux to protect their system. For enterprise backup and restore requirements, we recommend
using an enterprise backup and restore product that covers the operating system as well as the IBM
General Parallel File System and SAP HANA file systems.
What follows is a description of how to create a backup of the operating system. We also describe how to
restore it in case of a planned or unplanned disaster with the original Operating System (OS).
This is valid for systems installed with at least version 1.8.80-10 of the System x automated installer.
Earlier systems may require extra effort for OS35 backup partition creation. The following System x
server models can be used:
• System x3950 eX5 Workload Optimized System (7143) for SAP HANA Platform Edition,
• System x3690 eX5 Workload Optimized System (7147) for SAP HANA Platform Edition,
• System x3850/x3950 X6 Workload Optimized System (3837) for SAP HANA Platform Edition.
• System x3850/x3950 X6 Workload Optimized System (6241) for SAP HANA Platform Edition.

Warning
Do not go into production without verifying a full backup and a full restore of the operating
system!

15.1 Description

In order to perform a simple backup and restoration of the OS, excluding the SAP HANA executables,
configuration, data or logs, you need to run a few commands in Linux in order to set up a working copy
of the OS. What we will explain here is a method of copying the Linux file system that is contained on
two partitions of the first hard drive.
Using the Linux command rsync, you are able to intelligently copy a file system from one partition to
another quickly and with little effort. This tool can also be set up in a nightly cron schedule to run
automatically and semi-automate the process of taking a backup image of the OS (a sketch of such a
cron entry is shown below). As seen in Figure 81: Overview of Backup/Restore Operations on page 174,
the general concept is that the user uses rsync to copy the contents of the root (/) and boot (/boot)
directories from their original partitions onto two newly created partitions on the same hard drive.
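A minimal sketch of such a scheduled backup, assuming the rsync command from Section 15.3 has been wrapped in a (hypothetical) script /usr/local/sbin/os-backup.sh:

1 # cat /etc/cron.d/os-backup
2 30 2 * * * root /usr/local/sbin/os-backup.sh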

35 Operating System


Figure 81: Overview of Backup/Restore Operations (diagram: during normal operation, rsync copies the hanaroot/hanaboot partitions to the backroot/backboot partitions; for a restore, the backup partition is booted and rsync copies the saved state back to the hanaroot/hanaboot partitions, after which the system returns to normal operation)

This is not highly available due to the possibility of a hard drive failure of the device used for both the
primary and backup partitions, yet it does provide the reliability of a stable and usable backup method.
In order to obtain high availability of the backed-up image, we strongly recommend copying the images
saved to the local partitions onto an external storage system.

15.1.1 Boot Loader

The server can use two different ways to boot. For X6 based systems, the standard way is using the Unified
Extensible Firmware Interface, or UEFI. According to Wikipedia36 , the Unified Extensible Firmware
Interface is a specification that defines a software interface between an operating system and platform
firmware. The second method is using the legacy method of BIOS, which is the only supported way
to boot SAP HANA on eX5 based systems. If you are using the Legacy Boot option, you will need
to become familiar with how each distribution handles the boot procedure with the Legacy BIOS boot
option.
Linux requires a boot loader that understands the specific boot method. Two options are available:
GRUB37 and LILO38. The way a server boots and, subsequently, installs the boot loader determines
some of the system partitioning and file system layout of the installed server. Although it is possible to
use both methods to boot and install the Lenovo Solution server, this document will only cover the steps
necessary to create a restore image using the UEFI boot mechanism with either the GRUB or LILO boot
loader.
Using ELILO39 , the Linux installation will place the boot loader under the directory /boot/efi. The
configuration file for ELILO can be found in /etc/elilo.conf. Using GRUB, the Linux installation will
place the boot loader under the directory /boot/grub. The configuration file for GRUB can be found in
/boot/grub/menu.lst or /boot/grub/grub.conf, depending on the version of GRUB.

36 http://en.wikipedia.org/wiki/Unified_Extensible_Firmware_Interface
37 Grand Unified Bootloader
38 Linux Loader
39 EFI Linux Loader


15.1.2 Drive Partitions

Starting with version 1.8.80-10 of the Lenovo Solution installation media, the installer creates five (5)
partitions on the first drive (sda). Each partition has a specific label and purpose for the system backup
and restore. The labels are: hanaboot, hanaroot, hanaswap, backboot and backroot. The correlation
of these labels to the appropriate devices can be found by listing the symbolic links in the directory
/dev/disk/by-label.
An example partition layout for systems is shown below. Each device is partitioned into several physical
and logical partitions and named with a label, a simple identifier, and a Universally Unique Identifier
(UUID). Only the UUID40 is guaranteed to remain associated with the partition as it was created.

Partition /dev/disk/by-label /dev/disk/by-id /dev/disk/by-uuid


/dev/sda1 hanaboot scsi-{33-hexadecimal-number}-part1 hexadecimal number
/dev/sda2 hanaroot scsi-{33-hexadecimal-number}-part2 hexadecimal number
/dev/sda3 swap scsi-{33-hexadecimal-number}-part3 hexadecimal number
/dev/sda4 backboot scsi-{33-hexadecimal-number}-part4 hexadecimal number
/dev/sda5 backroot scsi-{33-hexadecimal-number}-part5 hexadecimal number

15.2 Prerequisites

The Lenovo Solution server should also have been installed using the included automatic installer program.
If not, some of the names of the partitions might be different and these directions may not work correctly.

15.2.0.1 SUSE Linux Enterprise Server Partition Labels In a system installed with the SUSE
Linux Enterprise Server OS, not all partitions are labeled. This seems to be an issue with how SLES
handles the creation of labels for VFAT file system partitions. By default, SLES uses the values found
under the /dev/disk/by-id directory when describing specific partitions. This document will continue
to use the /dev/disk/by-label values, and it will be expected that these are translated to /dev/disk/
by-id values when implementing this backup solution on SLES.

15.2.0.2 Create entries in /etc/fstab for new mounts Before you start with the OS portion
of this procedure, you should ensure that the backboot and backroot devices are mounted to the file
system as /var/backup/root and /var/backup/boot/efi. These mount points should already exist in
the file /etc/fstab similar to the example (for SLES) below:
1 ## Sample SLES entries for HANA System Backup
2 /dev/disk/by-id/scsi-3600605b0038ac2601a9a1f01cc74cf23-part4 /var/backup/boot/efi vfat umask=0002,utf8=true 0 0
3 /dev/disk/by-id/scsi-3600605b0038ac2601a9a1f01cc74cf23-part5 /var/backup/root ext3 acl,user_xattr 1 1

Listing 4: Example SUSE fstab entries

Note
The hexadecimal portion of the value of /dev/disk/by-id/
scsi-3600605b0038ac2601a9a1f01cc74cf23-partx will be different for every individ-
ual drive and installation. We recommend to read the contents of /etc/fstab before and
copy only the value for the stated partitions for all new backup partitions. Pay particular
notice to rename the partition to the correct partition created!

40 Universally Unique Identifier


1 ## Sample RHEL entries for HANA System Backup


2 UUID=c605201a-04bc-47a8-bbc4-b6808ee98fe1 /var/backup/root ext4 defaults ←-
,→ 1 2
3 UUID=FF50-7B37 /var/backup/boot/efi vfat umask=0077,shortname=winnt ←-
,→ 0 0

Listing 5: Example Red Hat fstab entries

Note
The hexadecimal portion of the value of /dev/disk/by-uuid/ will be different for every
individual drive and installation. We recommend to read the contents of /etc/fstab before
and copy only the value for the stated partitions for all new backup partitions. Pay particular
notice to rename the partition to the correct partition created!

15.2.1 Correcting the backup fstab

After each time the rsync command has completed, the root file system has now been copied exactly
from / into /var/backup/. In order to boot from the backup partition backroot, we want to switch the
partition labels (or ids) from hana* to the back* labelled partitions. The hana* partitions should be
mounted as the file system /var/restore in order to restore from the backed up image in the case of a
recovery. This also will work as a visual reminder that you are booting from the backup partition instead
of the primary partition.
Note
In current versions of the Lenovo Solution installer, the directory /var/restore has not been
created. Log on as the root user in the primary OS and ensure that this directory has been
created and that an rsync command has been executed.
After every rsync run, the fstab needs to be adopted as shown here.
On SUSE Linux Enterprise Server the entries in /var/backup/etc/fstab should be changed from:
1 ## Adding entries for HANA System Backup
2 /dev/disk/by-id/scsi-3600605b0038ac2601a9a1f01cc74cf23-part1 /boot/efi vfat umask=0002,utf8=true 0 0
3 /dev/disk/by-id/scsi-3600605b0038ac2601a9a1f01cc74cf23-part2 / ext3 acl,user_xattr 1 1
4 /dev/disk/by-id/scsi-3600605b0038ac2601a9a1f01cc74cf23-part3 swap swap defaults 0 0
5 /dev/disk/by-id/scsi-3600605b0038ac2601a9a1f01cc74cf23-part5 /var/backup ext3 acl,user_xattr 1 1
6 /dev/disk/by-id/scsi-3600605b0038ac2601a9a1f01cc74cf23-part4 /var/backup/boot/efi vfat umask=0002,utf8=true 0 0

Listing 6: Example SLES primary fstab file

to:
1 ## Adding entries for HANA System Backup
2 /dev/disk/by-id/scsi-3600605b0038ac2601a9a1f01cc74cf23-part2 /var/restore ext3 acl,user_xattr 1 1
3 /dev/disk/by-id/scsi-3600605b0038ac2601a9a1f01cc74cf23-part1 /var/restore/boot/efi vfat umask=0002,utf8=true 0 0
4 /dev/disk/by-id/scsi-3600605b0038ac2601a9a1f01cc74cf23-part3 swap swap defaults 0 0
5 /dev/disk/by-id/scsi-3600605b0038ac2601a9a1f01cc74cf23-part4 /boot/efi vfat umask=0002,utf8=true 0 0
6 /dev/disk/by-id/scsi-3600605b0038ac2601a9a1f01cc74cf23-part5 / ext3 acl,user_xattr 1 1

Listing 7: Example SLES backup fstab file

Notice that the entries /dev/disk/by-id will be different on your system. The mountpoints need to be
changed as shown above.
On Red Hat Enterprise Linux the entries in /var/backup/etc/fstab should be changed from:
1 ## Adding entries for HANA System Backup
2 LABEL=hanaroot / ext3 acl,user_xattr 1 1
3 LABEL=hanaboot /boot/efi vfat umask=0002,utf8=true 0 0
4 LABEL=hanaswap swap swap defaults 0 0
5 LABEL=backroot /var/backup ext3 acl,user_xattr 1 1
6 LABEL=backboot /var/backup/boot/efi vfat umask=0002,utf8=true 0 0

Listing 8: Example RHEL primary fstab file


to:
1 ## Adding entries for HANA System Backup
2 LABEL=backroot / ext3 acl,user_xattr 1 1
3 LABEL=backboot /boot/efi vfat umask=0002,utf8=true 0 0
4 LABEL=hanaswap swap swap defaults 0 0
5 LABEL=hanaroot /var/restore ext3 acl,user_xattr 1 1
6 LABEL=hanaboot /var/restore/boot/efi vfat umask=0002,utf8=true 0 0

Listing 9: Example RHEL backup fstab file

Notice that not only the labels have changed, but also the names of the mounted file systems.

15.2.2 Add boot loader entry for backup partition

ELILO installed systems

After the fstab file has been modified, create a backup entry in the ELILO boot menu (/etc/elilo.conf)
by copying the whole subsection identified by the label=linux statement.
On RHEL, replace the label and root values with the value backup and the backroot partition ID. On
SLES, the corresponding scsi-<id>-part<X> has to be changed to match the <id> and partition <X> on the
given system. It is important to modify the string

###Don’t change this comment - YaST2 identifier: Original name: name###

on these installs. Otherwise, YaST will not see this option in the boot list for ELILO and may not present
it to you during boot.
1 ## Adding Restore entry to UEFI Boot menu
2 image = vmlinuz-3.0.76-0.11-default
3 ###Don't change this comment - YaST2 identifier: Original name: backup###
4 label = backup
5 append = "resume=/dev/sda3 splash=silent transparent_hugepage=never
6 intel_idle.max_cstate=0 processor.max_cstate=0 showopts "
7 description = "Backup of SAP HANA Platform Edition Image"
8 initrd = initrd-3.0.76-0.11-default
9 root = /dev/disk/by-id/scsi-3600605b0038ac2601a9a1f01cc74cf23-part5

Listing 10: Example UEFI Configuration for Primary Partition

If you update the kernel, you will also need to update the lines image = and initrd = in this file for the
backup entry.
After changing the elilo.conf run
1 elilo --refresh-EBM

to update the boot loader. The intention is that you will be able to start up the backup partition in
order to copy the saved state from the backup partition over the primary partition.

Grub installed systems

In systems installed using the GRUB boot loader (by default all Red Hat based installs and SUSE installs
on System eX5 hardware), edit the contents of /boot/grub/grub.cfg (RHEL), or /boot/grub/menu.lst
(SLES), and copy the section for the primary partition to edit it as the new backup partition.
The following is a copy of the default boot entry with the title, root and kernel lines changed to match the
partition used for the backup partition. On RHEL, replace the label and root values with the value
backup and the backroot partition ID. On SLES, the corresponding scsi-<id>-part<X> has to be changed to
match the <id> and partition <X> on the given system.


1 title Backup of SAP HANA Platform Edition Image


2 root (hd0,<PARTITION NR, see below>)
3 kernel /boot/vmlinuz-2.6.32-431.el6.x86_64 ro root=LABEL=backroot
4 KEYBOARDTYPE=pc KEYTABLE=us LANG=en_US.UTF-8 crashkernel=auto
5 processor.max_cstate=0 intel_idle.max_cstate=0
6 transparent_hugepage=never SYSFONT=latarcyrheb-sun16
7 rd_NO_LUKS rd_NO_LVM rd_NO_DM rd_NO_MD rhgb quiet
8 initrd /boot/initramfs-2.6.32-431.el6.x86_64.img
Listing 11: Example GRUB Configuration for Primary Partition

On SLES use the command:


1 yast2 bootloader

to update the boot loader, on RHEL:


1 grub-install /dev/sda

Note
The partition number for a GRUB installed partition is based on the device syntax
of (device[,partmap-name1part-num1[,partmap-name2part-num2[,...]]]). The syntax
(hd0) represents using the entire disk of the first device, for example sda, while the syntax
(hd0,1) represents using the second partition of the device, for example sda2. Notice that
GRUB numbers partitions starting from zero, so the first partition on the first device is (hd0,0).

Note
In our example, we presume that the hanaroot partition is (hd0,1) and the backroot par-
tition is (hd0,4).
Append or change these lines in /var/backup/etc/grub.conf. Here, we exchange the meanings of the
hanaroot and backroot partitions. When booting into this kernel, the hanaroot is the partition to be
restored, and the backroot is the default partition to be booted. The title, root and kernel lines are
changed to match the partition used for the backroot partition. We should also change the parameter
default in the header subsection to point to the Restore image (usually the subsection number 2) rather
than the original SAP HANA image.
1 default=2
2 title Restore from SAP HANA Platform Edition Backup Image
3 root (hd0,<PARTITION NR, see above>)
4 kernel /boot/vmlinuz-2.6.32-431.el6.x86_64 ro root=LABEL=backroot
5 KEYBOARDTYPE=pc KEYTABLE=us LANG=en_US.UTF-8 crashkernel=auto
6 processor.max_cstate=0 intel_idle.max_cstate=0
7 transparent_hugepage=never SYSFONT=latarcyrheb-sun16
8 rd_NO_LUKS rd_NO_LVM rd_NO_DM rd_NO_MD rhgb quiet
9 initrd /boot/initramfs-2.6.32-431.el6.x86_64.img
Listing 12: Example GRUB Configuration for Backup Partition

15.3 Backup of the Linux operating system

In order to perform an initial backup, run the following commands as root. The initial backup will take a
long time as it copies the entire file system from the hanaroot partition into the backroot partition.
Subsequent executions of the rsync command will be shorter as rsync is intelligent enough to only copy what
has changed between calls of the command.


As the system administrator (root) run:


1 export start_stamp=$(date +%s)
2 rsync -aAXxv --delete / /var/backup --exclude={/dev/*,/proc/*,/sys/*,/tmp/*,/boot/efi*, /run/*,/mnt/*,/media/*,/lost+←-
,→found,/var/backup/*,/sapmnt/*,/var/lib/ntp/proc/*}
3 export end_stamp=$(date +%s)
4 echo "Backup Completed in $( echo "(${end_stamp}-${start_stamp})/60"| \
5 bc ) minutes $( echo "(${end_stamp}-${start_stamp})%60"| bc ) seconds"

Listing 13: Example rsync command

15.4 Restoring the operating system

In case of a planned or unplanned system outage, it may be necessary to recover the last known good
backup of the root and boot file system partitions that have been copied onto the backup partitions.
In the case of a hard drive failure where the backup partitions have been lost, the copies stored on
external storage must be recopied into the backup partitions after the hard drive failure has been resolved
by the hardware support team. After that, the restore can take place as described here.
Restart the machine and boot the backup OS. At boot time, select the created boot option for the backup
partition from the list shown in either the ELILO or GRUB boot loader menu (see Figure 82: Sample GRUB
boot loader screen on page 179). This should be done only after checking that the boot loader menu in the
backup partition has been properly updated according to the directions in 15.2.1: Correcting the backup
fstab on page 176 above.

Figure 82: Sample GRUB boot loader screen

Once the backup partition is booted, run the following command to transfer the backup to the original
root partition:
1 export start_stamp=$(date +%s)
2 rsync -aAXxv --delete / /var/restore --exclude={/dev/*,/proc/*,/sys/*,/tmp/*,/boot/efi*, /run/*,/mnt/*,/media/*,/lost+←-
,→found,/var/restore/*,/sapmnt/*,/var/lib/ntp/proc/*}
3 export end_stamp=$(date +%s)
4 echo "Backup Completed in $( echo "(${end_stamp}-${start_stamp})/60"| \
5 bc ) minutes $( echo "(${end_stamp}-${start_stamp})%60"| bc ) seconds"

Listing 14: Example rsync command

Then you need to revert the changes made in 15.3: Backup of the Linux operating system on page 178:
Swap the mount points of / and /boot/efi with the original root partition in /var/restore/etc/fstab.
1 ## Adding entries for HANA System Backup
2 /dev/disk/by-id/scsi-3600605b0038ac2601a9a1f01cc74cf23-part1 /boot/efi vfat umask=0002,utf8=true 0 0
3 /dev/disk/by-id/scsi-3600605b0038ac2601a9a1f01cc74cf23-part2 / ext3 acl,user_xattr 1 1


4 /dev/disk/by-id/scsi-3600605b0038ac2601a9a1f01cc74cf23-part3 swap swap defaults 0 0


5 /dev/disk/by-id/scsi-3600605b0038ac2601a9a1f01cc74cf23-part4 /var/backup/boot/efi vfat umask=0002,utf8=true 0 0
6 /dev/disk/by-id/scsi-3600605b0038ac2601a9a1f01cc74cf23-part5 /var/backup/root ext3 acl,user_xattr 1 1

Listing 15: Example SLES primary fstab file

1 ## Adding entries for HANA System Backup


2 LABEL=hanaboot /boot/efi vfat umask=0002,utf8=true 0 0
3 LABEL=hanaroot / ext3 acl,user_xattr 1 1
4 LABEL=hanaswap swap swap defaults 0 0
5 LABEL=backboot /var/backup/boot/efi vfat umask=0002,utf8=true 0 0
6 LABEL=backroot /var/backup/root ext3 acl,user_xattr 1 1

Listing 16: Example RHEL primary fstab file

Warning
Be careful after using the rsync command to pay attention to the files /var/restore/
etc/fstab and the boot loaders /var/restore/boot/grub/grub.cfg or /var/restore/etc/
elilo.conf. Ensure that they have the reverse meaning to that described in the previous
section.
You should now be able to boot into the primary partition using the boot loader's default menu item.


16 SAP HANA Backup and Recovery

Warning
The snapshot restore functionality in SAP HANA Revision 80 is broken. The described
procedure could be done successfully with SAP HANA Revision 91.
This section provides instructions necessary to create a simple SAP HANA Platform Edition backup and
restore procedure. These images can then be used for a basic backup/restore solution. Initially, they are
copied locally and must be transferred to an offline storage for any real use. The intent is that the user
can have a simple backup and restore solution using the tools available with IBM GPFS and SAP HANA.
For advanced backup and restore requirements, we recommend using an enterprise backup solution to ensure
backup/restore operations for IBM GPFS and SAP HANA.
What follows is a description of how to take snapshots of the IBM GPFS file system and the SAP HANA
database. We also describe how to restore SAP HANA in case of a planned or unplanned disaster. This
enables the administrator to take backups of the SAP HANA data without interrupting the database
service (so-called online backups of the database). The time it takes to actually back up the data afterwards
to a secure place does not affect SAP HANA operation.
Note
Features from SAP HANA Studio for snapshot generation are described as well. Identical
results can be achieved using the command-line SQL interface found in the SAP HANA guide
books.

16.1 Description

The procedure to back up SAP HANA and IBM GPFS only applies to SAP HANA 1.0 SPS 07 and later.
These instructions are also included in the SAP HANA Operations Guide. All screenshots were taken
with this release. The GUI may change with newer releases.
• This procedure can restore data:
– on the very same environment the snapshot was taken from,
– on an environment that copies the landscape of the original system.
• A change in landscape (m–to–n copy) is not supported.
• Make sure to always check the following locations for latest information:
– SAP HANA Administration Guide, Chapter: Backup and Recovery (http://help.sap.com/hana/SAP_HANA_Administration_Guide_en.pdf),
– SAP HANA Backup and Recovery Overview (http://www.saphana.com/docs/DOC-1220),
– IBM GPFS snapshot documentation (http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/topic/com.ibm.cluster.IBMGPFS.v4r1.IBMGPFS200.doc/bl1adv_logcopy.htm).

Warning
Do not go into production without verifying a full backup and restore procedure!

16.2 Backup of SAP HANA

Open SAP HANA Studio.


In SAP HANA Studio either right-click on Backup and choose "Manage Storage Snapshots" from the
context menu or click on "Storage Snapshots" on the right. This allows you to generate a snapshot. The
following dialog opens:

Click on “Prepare”. You are then asked to give this snapshot a name. This name will be stored in the
SAP HANA backup catalog. It does not appear outside of SAP HANA.

After clicking the OK button the snapshot is generated. Any log entries are merged into the data area
so that it has a consistent state that can be recovered from.


While the snapshot is active you cannot have further snapshots or backups taken from this SAP HANA
instance. Notice the file snapshot_databackup_0_1 in /sapmnt/data/<SID>/mnt00001/hdb00001 – this file
indicates that the content of this directory is a valid SAP HANA snapshot and can be used to recover
from.
The next step is to take an IBM GPFS snapshot. Log in to any server of the SAP HANA installation. It
does not matter on which server you issue the IBM GPFS snapshot commands.
1 mmcrsnapshot sapmntdata <snapshotname>
2 Writing dirty data to disk
3 Quiescing all file system operations
4 Writing dirty data to disk again
5 Resuming operations.
6 Snapshot <snapshotname> created with id 2.

We recommend including the current date and/or time in the snapshot name. You can do this via
1 mmcrsnapshot sapmntdata `date +%F--%T`

After this command has finished you have a new folder <snapshotname> in /sapmnt/.snapshots. This
subfolder contains all files that you can then copy to a safe place. The IBM GPFS snapshot is
taken from the entire GPFS file system.

If the IBM GPFS snapshot has finished successfully, confirm this fact and release the SAP HANA
snapshot. In SAP HANA Studio click on "Confirm". This opens the following dialog:


We recommend entering the name that was given to the IBM GPFS snapshot as part of the mmcrsnapshot command.
After you acknowledge this window the wizard finishes and you can leave the storage snapshot dialog.
If the IBM GPFS snapshot did not finish successfully or was manually aborted, click on the “Abandon”
button and act accordingly.
Copy the IBM GPFS snapshot data to a safe place on an external storage device, e.g. an NFS export on
a storage server. This can be done with simple tools such as Linux copy (cp), secure copy (scp) or rsync.
Integration into IBM Tivoli Storage Manager or other automated file backup tools is also possible.
This depends highly on the customer's demands and the available hardware and backup infrastructure.
See table 54 for the files and directories which need to be copied to an external storage in order to have
a full SAP HANA backup.

Path                                              Exclude
/sapmnt/.snapshots/<snapshotname>/shared/         /sapmnt/.snapshots/<snapshotname>/shared/<SID>/HDB<INST_NR>/backup
/sapmnt/.snapshots/<snapshotname>/data/<SID>      –

Table 54: Required SAP HANA directories for restore
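As a hedged illustration only, copying such a snapshot to an external NFS mount (here the hypothetical target directory /mnt/hanabackup) with rsync could look like this; the paths follow Table 54 and the <...> placeholders must be replaced:

1 # mkdir -p /mnt/hanabackup/<snapshotname>/shared /mnt/hanabackup/<snapshotname>/data
2 # rsync -av --exclude='<SID>/HDB<INST_NR>/backup' /sapmnt/.snapshots/<snapshotname>/shared/ /mnt/hanabackup/<snapshotname>/shared/
3 # rsync -av /sapmnt/.snapshots/<snapshotname>/data/<SID> /mnt/hanabackup/<snapshotname>/data/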

After data is successfully copied you need to delete the IBM GPFS snapshot:
1 mmdelsnapshot sapmntdata <snapshotname>

Having more than one active snapshot at a time is supported by IBM GPFS. The maximum number
of snapshots in sapmntdata is 256 (this applies to IBM GPFS 3.5 and 4.1). You can list all existing
IBM GPFS snapshots with mmlssnapshot sapmntdata.


However, keep in mind that all IBM GPFS snapshots still remain on the same physical disks as your
production SAP HANA data. This by no means represents a valid backup location! Moreover,
having IBM GPFS snapshots leads to slightly decreased file system performance. Therefore it is
essential to move and archive such backups to a remote device and to delete the snapshot.

16.3 Restore of SAP HANA

To prepare for a restore:


• The SAP HANA instance must be stopped.
• Copy data/<SID> from the backup to /sapmnt/data/<SID> (see the screenshot of the terminal
window below).
• In case you want to restore the SAP HANA data on a new instance you also need to restore the
profiles.
• Ensure correct file permissions on the snapshot data. The file owner must be the database administra-
tor.
The terminal screenshot below visualizes a snapshot (plus subfolders and files) that has been copied back
to the correct location. Simple tools like cp, scp or rsync can be used to copy back the data.
This data can then be used for restoration.
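
As an illustration, assuming the backup was stored under /backup/<snapshotname> as in the copy example in the backup section above, the data can be copied back and the ownership corrected as follows (SID, paths and the sapsys group are placeholders and must match your installation):
1 # cp -a /backup/<snapshotname>/data/<SID> /sapmnt/data/
2 # chown -R <sid>adm:sapsys /sapmnt/data/<SID>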

There are two ways to restore the SAP HANA snapshot: either with SAP HANA Studio or from the
command line.

Restore with SAP HANA Studio

In SAP HANA Studio right-click on the SAP HANA instance you want to recover to and select “Recover”.
The recovery wizard appears.


Specify “Snapshot” as the type of backup to recover from. This disables the location box.

If you restore on the same system from which the snapshot was taken, you can skip the license key question.
If you are restoring to a different system, you need to provide a license key. If you do not specify a valid
key, the restore still completes successfully, but the database instance will be locked afterwards. It is
possible to specify a valid license key later on.

The final screen summarizes the restore parameters.


In the next step the restore takes place. Restore time depends on the amount of data being recovered and
the number of servers involved.

Restore via command line

In order to restore the SAP HANA snapshot, execute the following commands as the <sid>adm user (NKT is used here as an example SID):
1 su - nktadm
2 ./HDBSettings.sh recoverSys.py --command "RECOVER DATA USING SNAPSHOT CLEAR LOG"

After the restore completes successfully the procedure automatically starts the SAP HANA instance. The
file snapshot_databackup_0_1 in /sapmnt/data/<SID>/mnt00001/hdb00001 is automatically removed
upon a successful restore.


17 Troubleshooting
For the Lenovo Systems Solution for SAP HANA Platform Edition the installation of SLES for SAP as
well as the installation and configuration of IBM GPFS and SAP HANA have been greatly simplified
by a guided installation process. This process automatically installs and configures the base OS components
necessary for the SAP HANA appliance software. Installing the OS manually is no longer supported for
the Lenovo Solution.

17.1 Adding SAP HANA Worker/Standby Nodes in a Cluster

When setting up a clustered configuration by hand, install SAP HANA worker and standby nodes as
described in the Lenovo SAP HANA Appliance Operations Guide 41 (Section 4.3 Cluster Operations →
Adding a cluster node).

17.2 GPFS mount points missing after Kernel Update

If you updated the Linux kernel, you will have to update the GPFS portability layer before starting
SAP HANA. After rebooting into the new kernel, the GPFS mount points will not be available. Follow the
directions in the section above regarding updating both portability layers.
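
As a rough sketch only (assuming the matching kernel development packages are installed): on GPFS 3.5 the portability layer can be rebuilt from the GPFS source directory and GPFS restarted afterwards; on GPFS 4.1 the mmbuildgpl command performs the build step. Refer to the section mentioned above for the exact procedure for your release.
1 # cd /usr/lpp/mmfs/src
2 # make Autoconfig
3 # make World
4 # make InstallImages
5 # mmstartup
6 # mmmount all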

17.3 Degrading disk I/O throughput

One possible reason for degrading disk I/O on the HDDs or SSDs could be a discharged or disconnected
battery on the RAID controller. In that case the cache policy is changed from "WriteBack" (the default) to
"WriteThrough", meaning that data is written directly to disk instead of to the cache. This has a significant
I/O performance impact.
To verify, please proceed as follows:
1. The StorCLI tool (see section 14.4.3: ServeRAID StorCLI Utility for Storage Management on page
171) is installed during HANA setup. The path is /opt/MegaRAID/storcli/. If you have been
using the MegaCli64 client before, you don’t have to learn new commands. The commands are the
same.
2. Determine current cache policy:
1 # /opt/MegaRAID/storcli/storcli64 -LdPdInfo -aAll | grep "Current Cache Policy"

3. Depending on the model there may be up to 3 output lines (for adapters 0,1,2). Sample output:
1 Current Cache Policy: WriteThrough, ReadAheadNone, Direct, No Write Cache if Bad BBU
2 Current Cache Policy: WriteThrough, ReadAheadNone, Direct, No Write Cache if Bad BBU
3 Current Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU

If the output contains "WriteThrough", the cache policy has been switched from the "WriteBack"
default due to some issue.
You can then check each battery's status. For example, with the sample output above you
would check the batteries of the first two adapters (the third one is OK).

41 SAP Note 1650046 (SAP Service Marketplace ID required)


1 # /opt/MegaRAID/storcli/storcli64 /c0/bbu show all


2 # /opt/MegaRAID/storcli/storcli64 /c1/bbu show all

If the output contains "Get BBU Capacity Info Failed", the battery is most likely bad or disconnected
and needs to be replaced or reconnected to the adapter.
If the output indicates a state of charge that is significantly smaller than 100%, then the battery is
most likely bad and should be replaced.
If any of the above issues occurs, a hardware support call with IBM should be opened.

17.4 SAP HANA will not install after a system board exchange

When an IBM Certified Engineer exchanges a system board, he is required only to reset the Machine
Type and Model (MTM) and serial number of the machine inside the EEPROM settings. The SAP HANA
hardware check (before revision 27) looks at the product description string instead of the MTM.
To work around this issue a Lenovo services person can use the Lenovo Advanced Settings Utility (ASU)
tool (see section 14.4.1: Lenovo Advanced Settings Utility on page 170) to reset the system product data
to the correct values for the SAP installer to work. ASU is installed under /opt/ibm/toolscenter/asu.
The tool can then be used to view or set the firmware settings of the IMM from the command line. For
example, to show and subsequently reset the System Product Identifier required by SAP HANA, you can
use the following commands:
1 # asu64 show SYSTEM_PROD_DATA.SysInfoProdIdentifier --host <IMM Hostname>

(--host can be omitted if the command is run on the actual system)


1 # asu64 set SYSTEM_PROD_DATA.SysInfoProdIdentifier "System x3850 X6"

Then dmidecode should return the correct system name after a system reboot.
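
After the reboot the value can be verified directly; the expected output shown here corresponds to the example string set above:
1 # dmidecode -s system-product-name
2 System x3850 X6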

17.5 Known Kernel Updates

17.6 Important SAP Notes (SAP Service Marketplace ID required)

You can find a list of SAP Notes in Appendix F.4: SAP Notes (SAP Service Marketplace ID required)
on page 208. This section describes some of these SAP Notes in more detail.

17.6.1 SAP Note 1641148 HANA server hang caused by GPFS issue

https://service.sap.com/sap/support/notes/1641148

17.6.1.1 Symptom You are running an SAP HANA scale-out landscape and see different time zone
settings for the sidadm user.

17.6.1.2 Reason and Prerequisites Your SAP HANA scale-out landscape shows different time
zone settings for at least one server, e.g. the master node shows time zone UTC and all other nodes
show time zone CET. This may be caused by an inconsistency in the installation process and should be
corrected.


17.6.1.3 Solution To change the time zone settings of the sidadm user, go to the user's home directory
under /usr/sap/ and adjust the following entries:
1 .sapenv.csh: setenv TZ <time zone>
2 .sapenv.sh: export TZ=<time zone>

Make sure this is done on all HANA nodes. Additionally, for a scale-out installation an NTP server should
be configured. You may either use your corporate NTP server or ask your hardware partner to set up an NTP
server for you, e.g. on the management node of the appliance. If you see different time settings for the
sidadm and the root user, check /etc/adjtime. If you see quite big values there, check your NTP configuration
and do a re-sync. Once the time is set correctly, log in as the sidadm user again and restart the database.
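
A simple way to compare the effective time and time zone of the sidadm user on all nodes is a loop over the cluster members; the host names below follow the gpfsnode naming used elsewhere in this guide and are only an example:
1 # for host in gpfsnode01 gpfsnode02 gpfsnode03 gpfsnode04; do echo -n "$host: "; ssh $host "su - <sid>adm -c date"; done
All nodes should report the same time and the same time zone.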


Appendices
A GPFS Disk Descriptor Files
GPFS 3.5 introduced a new disk descriptor format called stanzas. The old disk descriptor format is
deprecated since GPFS 3.5. This stanza format is also valid for GPFS 4.1 (introduced with release 1.8).
Create the file /var/mmfs/config/disk.list.data.gpfsnode01 by concatenating the following parts:
1. Always add
%nsd: device=/dev/sdb
nsd=data01node01
servers=gpfsnode01
usage=dataAndMetadata
failureGroup=1001
pool=system
2. When having one RAID array in the SAS expansion unit, add
%nsd: device=/dev/sdc
nsd=data02node01
servers=gpfsnode01
usage=dataAndMetadata
failureGroup=1001
pool=system
3. When having two RAID arrays in the SAS expansion unit, also add
%nsd: device=/dev/sdd
nsd=data03node01
servers=gpfsnode01
usage=dataAndMetadata
failureGroup=1001
pool=system
4. Always add these lines at the end
%pool:
pool=system
blockSize=1M
usage=dataAndMetadata
layoutMap=cluster
allowWriteAffinity=yes
writeAffinityDepth=1
blockGroupFactor=1
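
Once the descriptor file is complete it is passed to the GPFS NSD creation command. The following is only a sketch for the single node case; the guided installation performs an equivalent step automatically:
1 # mmcrnsd -F /var/mmfs/config/disk.list.data.gpfsnode01
2 # mmlsnsd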

B Topology Vectors (GPFS 3.5 failure groups)


This is currently valid only for DR-enabled clusters; for standard HA-enabled clusters use the plain
single-number failure groups as described in the instructions above.
With GPFS 3.5 TL2 (the base version for DR) a new failure group (FG) format called "topology vectors"
was introduced, which is used for the DR solution. A more detailed description of topology vectors
can be found in the GPFS 3.5 Advanced Administration Guide, chapter "GPFS File Placement Optimizer".


In short, the topology vector is a replacement for the old FGs, storing more information about the infras-
tructure of the cluster. Topology vectors are used for NSDs, but as the same topology vector is used for
all disks of a server node it will be explained in the context of a server node.
In a standard DR cluster setup all nodes are grouped evenly into four FGs (five when using the tiebreaker
node) with two FGs on every site.
A topology vector consists of three numbers separated by commas. The first of the three numbers is either
1 or 2 (for all the SAP HANA nodes) or 3 for the tiebreaker node. The second number is 0 (zero) for all
site A nodes and 1 for all site B nodes. The third number enumerates the nodes in each of the failure
groups starting from 1.
In a standard eight node DR-cluster (4 nodes per site) we would have these topology vectors:

Site                  Failure Group             Topology Vector   Node

Site A                Failure group 1 (1,0,x)   1,0,1             gpfsnode01 / hananode01
                                                1,0,2             gpfsnode02 / hananode02
                      Failure group 2 (2,0,x)   2,0,1             gpfsnode03 / hananode03
                                                2,0,2             gpfsnode04 / hananode04
Site B                Failure group 3 (1,1,x)   1,1,1             gpfsnode05 / hananode01
                                                1,1,2             gpfsnode06 / hananode02
                      Failure group 4 (2,1,x)   2,1,1             gpfsnode07 / hananode03
                                                2,1,2             gpfsnode08 / hananode04
Site C (tiebreaker)   Failure group 5 (3,0,x)   3,0,1             gpfsnode99

Table 55: Topology Vectors in an 8 node DR-cluster
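
In an NSD stanza file (see Appendix A) the topology vector simply replaces the single-number failure group. For illustration, the first data disk of gpfsnode01 in a DR cluster could use a stanza like the following; device and NSD names are taken from the Appendix A examples and are assumptions:
%nsd: device=/dev/sdb
nsd=data01node01
servers=gpfsnode01
usage=dataAndMetadata
failureGroup=1,0,1
pool=system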


C Quotas

C.1 Quota Calculation

Note
This section is only for information purposes. Please use the quota calculator in the next
section C.2: Quota Calculation Script on page 193.
The quota calculation for this and the following appliance releases is more complex than the quota
calculations in previous releases. A utility script is provided to make the calculation easier.
In general the quota calculation follows SAP recommendations for HANA 1.4 and later.
For HANA single nodes and clusters, quotas will be set for the HANA log files, the HANA data volumes
and the shared HANA data. In DR-enabled clusters a quota should be set only for SAP HANA's log
files.
The formula for the quota calculation is
1 quota for logs = (# active Nodes) x 1024GB
2 quota for data = (# active Nodes) x (RAM per node in GB) x 3 x (Replication factor)
3 quota for shared = (available space) - (quota for logs) - (quota for data)

The number of active nodes needs explanation. For single nodes, this number is of course 1. For clusters
this is the count of all cluster nodes which are not dedicated standby nodes. A dedicated standby node
is a node which has no HANA instance running with a configured role of master/slaves. Two examples:
• In an eight node cluster, there is only one HANA database installed. The first six nodes are installed
as worker nodes, the last two are installed as standbys. So this cluster has clearly two dedicated
standby nodes.
• Another eight node cluster has a HANA system ABC installed with the first seven nodes as workers
and the last node as a standby node. A second HANA system QA1 is installed with a worker node
on the last (eighth) node and a standby node on node seven. This cluster has no dedicated standby
node as the eighth node is not "standby only", it is actually active for the QA1 system.
For DR the log quota is also calculated based on the number of active nodes; as only one HANA cluster
is allowed on the DR file system, this is simply the count of the worker nodes.
The replication factor should be 1 for single nodes, 2 for clusters and 3 for DR-enabled clusters.
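For illustration only, applying the formula above to a cluster with three active nodes (e.g. four nodes with one dedicated standby), 1024GB RAM per node and a replication factor of 2 gives:
1 quota for logs = 3 x 1024GB = 3072GB
2 quota for data = 3 x 1024GB x 3 x 2 = 18432GB
3 quota for shared = (available space) - 3072GB - 18432GB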
Manual calculation is not recommended. Please use the new saphana-quota-calculator.sh.

C.2 Quota Calculation Script

A script is available to ease the quota calculation. The standard installation uses this script to calculate
the quotas during installation, and the administrator can also call this script to recalculate the quotas after
a topology change, e.g. the installation of additional HANA instances, a change of node roles, or shrinking or
growing the cluster.
Most values are read from the system or guessed. For a cluster the standard assumption is to have one
dedicated standby node. For a DR solution no reliable guess on the nodes can be made and manual
override must be used.
The basic call is
1 # saphana-quota-calculator.sh


As a result it will print the calculated quotas and the commands to set them. After reviewing these you
can add the -a parameter to the call, which will automatically set the quotas as calculated.
If you are running a cluster and the number of dedicated standbys is not one, use the parameter
-s <# standby> to set a specific number of standby hosts. 0 is also a valid value.
In the case of a DR-enabled cluster, the guess for the active worker nodes will always be wrong. Please
also use the parameter -w <# workers> to set the number of nodes running HANA as active workers.
The number of workers and standbys should equal the number of nodes on a site.
Additional parameters are -r to get a more detailed report on the quota calculation and -c to verify
the currently set quotas (a deviation of 10% is allowed, which may be too inaccurate for larger clusters
with more than 8 nodes).


D Lenovo X6 Server MTM List & Model Overview


Starting with the support of the Intel Xeon IvyBridge EX family of processors, SAP has changed their naming
of the models. Previously, SAP named these "T-Shirt" sizes S, M, L, XL, etc. The new naming
convention is purely based on the amount of memory each predefined configuration contains, for
example 128, 256, 512, etc. Each of these servers is orderable with the proper components to fulfill the
SAP pre-configured system sizes.
The following tables show the SAP HANA T-Shirt Sizes to Machine Type Model (MTM) code mapping.
The last X in the MTM is a placeholder for the region code the server was sold in, for example, a U for
the USA. While the Machine Type is either 3837 or 6241, the different Models are shown below.

Chassis   CPUs   Memory   Usage        Model        Possible Model

4U        2      128GB    Standalone   AC32S128S    -AC3, -H2*
                 256GB    Standalone   AC32S256S    -AC3, -H3*
                 256GB    Scale-out    AC32S256C    -AC3
                 384GB    Standalone   AC32S384S    -AC3
                 512GB    Standalone   AC32S512S    -AC3, -H4*
                 512GB    Scale-out    AC32S512S    -AC3
          4      256GB    Standalone   AC34S256S    -AC3
                 512GB    Standalone   AC34S512S    -AC3, -H5*
                 512GB    Scale-out    AC34S512C    -AC3
                 768GB    Standalone   AC34S768S    -AC3
                 1TB      Standalone   AC34S1024S   -AC3, -H6*
                 1TB      Scale-out    AC34S1024C   -AC3
                 1.5TB    Standalone   AC34S1536S   -AC3
                 2TB      Standalone   AC34S2048S   -AC3
                 3TB      Standalone   AC34S3072S   -AC3
                 4TB      Standalone   AC34S4096S   -AC3
                 6TB      Standalone   AC34S6144S   -AC3

Table 56: Lenovo MTM Mapping & Model Overview


Chassis   CPUs   Memory   Usage        Model         Possible Model

8U        4      256GB    Standalone   AC44S256S     -AC4
                 512GB    Standalone   AC44S512S     -AC4, -HB*
                 512GB    Scale-out    AC44S512C     -AC4
                 768GB    Standalone   AC44S768S     -AC4
                 1TB      Standalone   AC44S1024S    -AC4, -HC*
                 1TB      Scale-out    AC44S1024C    -AC4
                 1.5TB    Standalone   AC44S1536S    -AC4
                 2TB      Standalone   AC44S2048S    -AC4
          8      512GB    Standalone   AC48S512S     -AC4
                 512GB    Scale-out    AC48S1024C    -AC4
                 1.5TB    Standalone   AC48S1536S    -AC4
                 2TB      Standalone   AC48S2048S    -AC4, -HD*
                 2TB      Scale-out    AC48S2048C    -AC4
                 3TB      Standalone   AC483072S     -AC4
                 3TB      Scale-out    AC483072C     -AC4
                 4TB      Standalone   AC48S4096S    -AC4
                 4TB      Scale-out    AC48S4096C    -AC4
                 6TB      Standalone   AC48S6144S    -AC4
                 6TB      Scale-out    AC48S6144C    -AC4
                 8TB      Standalone   AC48S8192S    -AC4
                 8TB      Scale-out    AC48S8192C    -AC4
                 12TB     Standalone   AC48S12288S   -AC4
                 12TB     Scale-out    AC48S12288C   -AC4

Table 57: Lenovo MTM Mapping & Model Overview

The model numbers follow this schema:


1. AC3/AC4 describes the server chassis. AC3 servers are 4 rack unit sized servers for up to 4 CPU
books; AC4 servers are 8 rack unit sized servers for up to 8 CPU books.
2. 2S/4S/8S gives the number of installed CPU books and thereby the number of populated CPU
sockets.
3. 128/256/... is the size of the installed RAM in GB.
4. S/C designates the intended usage, either S for Standalone/Single Node or C for Cluster/Scale-out
nodes.
These model numbers describe the current configuration of the server. A 3837-H2* is configured with 2
CPUs in a 4 socket chassis with 128GB RAM and will be recognized as an AC32S128S by the installation
and any installed scripts. When upgrading this machine with an additional 128GB of RAM, the installation
and the already installed scripts will show the model as AC32S256S, while the burned-in MTM will still show
3837-H2* or 3837-AC3.


E Frequently Asked Questions

Warning
These FAQ entries are only valid for certain appliance models and versions. Do not apply the
changes in this list until advised by either the support script or Lenovo support.
The support script saphana-support-ibm.sh can detect various known problems in your appliance. In
case such a problem is found, the support script will give an FAQ entry number. Please follow only the
instructions given in the particular entry. When in doubt please contact Lenovo support via SAP’s OSS
ticket system.
Information on how to run the support script can be found in the Operations Guide, section 2.3 Basic
System Check. Please always use the latest support script, which may detect new issues found after
your appliance was installed. You can find the latest version attached to SAP Note 1661146 – Lenovo Check
Tool for SAP HANA appliances.

E.1 FAQ #1: SAP HANA Memory Limits

Problem: If left unconfigured, each installed and running HANA instance may use up to 90% of the
system's memory. If multiple unconfigured or misconfigured HANA systems are running
on the same machine(s), "Out of Memory" situations may occur. In this case the so-called "OOM killer"
of Linux is triggered, which terminates running processes at random and in most cases kills
SAP HANA or GPFS first, leading to service interruption. An unconfigured HANA system is a system
lacking a global_allocation_limit setting in the HANA system's global.ini file. Misconfigured SAP HANA
systems are multiple systems running at the same time with a combined memory limit of over 90% of the
physically installed memory.
Solution: Please configure the global allocation limit for all systems running at the same time. This
can be done by setting the global_allocation_limit parameter in the systems’ global.ini configuration
files. Please calculate the combined memory allocation for HANA so that at least 25GB are free for other
programs. Please use only the physically installed memory for your calculation.
More information on the parameter global_allocation_limit can be found in the "HANA Administration
Guide" at http://help.sap.com/hana_appliance/. Please configure the memory limits as described
there.
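
As an illustrative sketch only: on a server with 512GB RAM a limit of roughly 480GB could be configured by adding the following lines to the SAP HANA system's global.ini (the parameter resides in the [memorymanager] section and its value is given in MB; calculate the correct value for your own system as described above):
1 [memorymanager]
2 global_allocation_limit = 491520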

E.2 FAQ #2: GPFS parameter readReplicaPolicy

Problem: Older cluster installations do not have the GPFS parameter "readReplicaPolicy" set to "local"
which may improve performance in certain cases. Newer cluster installations have this value set and
single nodes are not affected by this parameter at all. It is recommended to configure this value.
Solution: Execute the following command on any cluster node at any time:
1 # mmchconfig readReplicaPolicy=local

This can be done during normal operation and the change becomes effective immediately for the whole
GPFS cluster and is persistent over reboots.

E.3 FAQ #3: SAP HANA Memory Limit on XS sized Machines

Problem: For a general description of the SAP HANA memory limit see Appendix E.1: FAQ #1: SAP
HANA Memory Limits on page 197. XS sized servers have only 128GB RAM installed of which even a
single SAP HANA system will use up to 90% equaling 115GB if no lower memory limit is configured.
This leaves too little memory for other processes which may trigger Out-Of-Memory situations causing
crashes.
Solution: Please configure the global allocation limit for the installed SAP HANA system to 100GB
or less. If multiple systems are running at the same time, please calculate the combined memory
allocation for HANA so that at least 25GB are free for other programs. Please use only the physically
installed memory for your calculation.
More information on the parameter global_allocation_limit can be found in the "HANA Administration
Guide" at http://help.sap.com/hana_appliance/. Please configure the memory limits as described
there.
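
Alternatively, the limit can be set online with an SQL statement, for example via hdbsql. This is only a sketch; the SID, instance number, password and the value of 102400 MB (100GB) are placeholders:
1 su - <sid>adm
2 hdbsql -i <instance_nr> -u SYSTEM -p <password> "ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('memorymanager', 'global_allocation_limit') = '102400' WITH RECONFIGURE"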

E.4 FAQ #4: Overlapping NSDs

Problem: Under some rare conditions single node SSD or XS/S gen 2 models may be installed with
overlapping NSDs. Overlapping means that the whole drive (e.g. /dev/sdb) as well as a partition on the
same device (e.g. /dev/sdb2) may be configured as NSDs in GPFS. As GPFS writes data to both
NSDs, each NSD will overwrite and corrupt data of the other NSD. At some point the whole-device
NSD will overwrite the partition table, the partition NSD is lost and GPFS will fail. This is
the most common situation in which the problem is noticed.
Consider any data stored in /sapmnt to be corrupted even if the file system check finds no errors.
Solution: The only solution is to reinstall the appliance from scratch. To prevent installing with the
same error again, the single node installation must be completed in phase 2 of the guided installation.
Do not deselect "Single Node Installation".

E.5 FAQ #5: Missing RPMs

Problem: An upgrade of SAP HANA or another SAP software component fails because of missing
dependencies. As some of these package dependencies were added by SAP HANA after your system was
initially installed, you may install those missing packages and still receive full support of the Lenovo
Systems solution. If you no longer have the SLES for SAP DVD or RHEL DVD (depending on which
OS you are using) that was delivered with your system, you may obtain it again from the SUSE
Customer Center or Red Hat, respectively.
Solution: Ensure that the packages listed below are installed on your appliance.
• SUSE Linux Enterprise Server for SAP Applications
– libuuid
– gtk2 - Added for HANA Developer Studio
– java-1_6_0-ibm - Added for HANA Developer Studio
– libicu - Added since revision 48 (SPS04)
– mozilla-xulrunner192-* - Added for HANA Developer Studio
– ntp
– sudo
– syslog-ng
– tcsh
– libssh2-1 - Added since revision 53 (SPS05)


– expect - Added since revision 53 (SPS05)


– autoyast2-installation - Added since revision 53 (SPS05)
– yast2-ncurses - Added since revision 53 (SPS05)
• Red Hat Enterprise Linux: At the moment there are no known packages that have to be installed
additionally.
Missing packages can be installed from the SLES for SAP DVD shipped with your appliance using the
following instructions. It is possible to add the DVD that was included in your appliance install as a
repository and install the necessary RPM packages from there. First check whether the SUSE Linux
Enterprise Server DVD is already added as a repository:
1 # zypper repos
2

3 # | Alias | Name | Enabled | Refresh


4 --+----------------+----------------+---------+--------
5 1 | SUSE-Linux-... | SUSE-Linux-... | Yes | No

If it doesn’t exist, please place the DVD in the drive (or add it via the Virtual Media Manager) and add
it as a repository. This example uses the SLES for SAP 11 SP1 media.
1 # zypper addrepo --type yast2 --gpgcheck --no-keep-packages\
2 --refresh --check dvd:///?devices=/dev/sr1 \
3 "SUSE-Linux-Enterprise-Server-11-SP1_11.1.1"
4

5 This is a changeable read-only media (CD/DVD), disabling autorefresh.


6 Adding repository 'SLES-for-SAP-Applications 11.1.1' [done]
7 Repository 'SUSE-Linux-Enterprise-Server-11-SP1_11.1.1'
8 successfully added
9 Enabled: Yes
10 Autorefresh: No
11 GPG check: Yes
12 URI: dvd:///?devices=/dev/sr1
13

14 Reading data from 'SUSE-Linux-Enterprise-Server-11-SP1_11.1.1'


15 media
16 Retrieving repository 'SUSE-Linux-Enterprise-Server-11-SP1_11.1.1'
17 metadata [done]
18 Building repository 'SUSE-Linux-Enterprise-Server-11-SP1_11.1.1'
19 cache [done]

The drawback of this solution is that you always have to insert the DVD into the DVD drive or mount it
via VMM or KVM. Another possibility is to copy the DVD contents to a local directory and add this directory
as a repository to zypper. First find out if the existing repository is a DVD repository:
1 # zypper lr -u
2 # | Alias                                             | Name                                              | Enabled | Refresh | URI
3 --+---------------------------------------------------+---------------------------------------------------+---------+---------+-------------------------
4 1 | SUSE-Linux-Enterprise-Server-11-SP3 11.3.3-1.138  | SUSE-Linux-Enterprise-Server-11-SP3 11.3.3-1.138  | Yes     | No      | cd:///?devices=/dev/sr0

Copy the DVD to a local directory:


1 # cp -r /media/SLES-11-SP3-DVD*/* /var/tmp/install/sles11/ISO/


Register the directory as a repository with zypper:


1 # zypper addrepo --type yast2 --gpgcheck --no-keep-packages -f file:///var/tmp/install/sles11/ISO/ "SUSE-Linux-Enterprise-Server-11-SP3"
2 Adding repository 'SUSE-Linux-Enterprise-Server-11-SP3' [done]
3 Repository 'SUSE-Linux-Enterprise-Server-11-SP3' successfully added
4 Enabled: Yes
5 Autorefresh: Yes
6 GPG check: Yes
7 URI: file:/var/tmp/install/sles11/ISO/

For verification you can list the repositories again. You should see output similar to this:
1 # zypper lr -u
2 # | Alias                                             | Name                                              | Enabled | Refresh | URI
3 --+---------------------------------------------------+---------------------------------------------------+---------+---------+------------------------------------
4 1 | SUSE-Linux-Enterprise-Server-11-SP3               | SUSE-Linux-Enterprise-Server-11-SP3               | Yes     | Yes     | file:/var/tmp/install/sles11/ISO/
5 2 | SUSE-Linux-Enterprise-Server-11-SP3 11.3.3-1.138  | SUSE-Linux-Enterprise-Server-11-SP3 11.3.3-1.138  | Yes     | No      | cd:///?devices=/dev/sr0

Then search to ensure that the package can be found. This example searches for libssh.
1 # zypper search libssh
2

3 Loading repository data...


4 Reading installed packages...
5

6 S | Name | Summary | Type


7 --+-----------+-------------------------------------+--------
8 | libssh2-1 | A library implementing the SSH2 ... | package

Then install the package:


1 # zypper install libssh2-1
2

3 Loading repository data...


4 Reading installed packages...
5 Resolving package dependencies...
6 :
7 :
8 1 new package to install.
9 Overall download size: 55.0 KiB. After the operation, additional 144.0
10 KiB will be used.
11 Continue? [y/n/?] (y):
12 Retrieving package libssh2-1-0.19.0+20080814-2.16.1.x86_64 (1/1), 55.0
13 KiB (144.0 KiB unpacked)
14 Retrieving: libssh2-1-0.19.0+20080814-2.16.1.x86_64.rpm [done]
15 Installing: libssh2-1-0.19.0+20080814-2.16.1 [done]

E.6 FAQ #6: CPU Governor set to ondemand

Problem: Linux uses a power saving technology called "CPU governors" to control CPU throttling
and power consumption. By default Linux uses the governor "ondemand", which will dynamically throttle
CPUs up and down depending on CPU load. SAP advises using the governor "performance" as the
ondemand governor impacts HANA performance due to too slow CPU upscaling.
Since appliance version 1.5.53-5 (i.e. SLES for SAP 11 SP2 based appliances) the CPU
governor is set to performance. In case of an upgrade you also need to change the governor setting. If you are
still running a SLES for SAP 11 SP1 based appliance, you may also change this setting to trade power
saving for performance. This performance boost was not quantified by the development team.
Solution: On all nodes append the following lines to the file /etc/rc.d/boot.local:
1 bios_vendor=$(/usr/sbin/dmidecode -s bios-vendor)
2 # Phoenix Technologies LTD means we are running in a VM and governors are not ←-
,→available
3 if [ $? -eq 0 -a ! -z "${bios_vendor}" -a "${bios_vendor}" != "Phoenix Technologies ←-
,→LTD" ]; then
4 /sbin/modprobe acpi_cpufreq
5 for i in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
6 do
7 echo performance > $i
8 done
9 fi

The setting will change on the next reboot. You can also change safely the governor settings immediately
by executing the same lines at the shell. Copy & paste all the lines at once, or type them one by one.
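
To verify the active setting you can inspect the scaling_governor files; every CPU should report "performance":
1 # cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor | sort | uniq -c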

E.7 FAQ #7: No disk space left bug (Bug IV33610)

Problem: Starting HANA fails due to insufficient disk space. The following error message will be found
in indexserver or nameserver trace:
1 Error during asynchronous file transfer, rc=28: No space left on device.

Using the command ’df’ will show that there is still disk space left. This problem is due to a bug in
GPFS versions between 3.4.0-12 and 3.4.0-20 which will cause GPFS to step into a read-only mode. See
SAP Note 1846872.
Solution: Make sure to shut down all HANA nodes by issuing a shutdown command from SAP HANA Studio, or
log in with ssh as the sidadm user. Then run:
1 HDB info

to see if there are any HANA processes running. If there are, run


1 kill -9 proc_pid

to shut them down, one by one.


Download and apply GPFS version 3.4.0-23. Refer to section 13.4: Updating GPFS on page 157 for
information about how to upgrade GPFS.
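To check which GPFS level is currently installed before upgrading, you can for example query the installed packages and the cluster release level:
1 # rpm -qa | grep -i gpfs
2 # mmlsconfig minReleaseLevel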
Note
It is recommended that you consider upgrading your GPFS version from 3.4 to 3.5 as support
for GPFS 3.4 has been discontinued by IBM.
SAP highly recommends that you run the uniqueChecker.py script after patching GPFS to make sure that
your database is consistent.


E.8 FAQ #8: Setting C-States

Problem: Poor performance of SAP HANA due to Intel processor settings.


Solution: As recommended in SAP Notes 1824819 – SAP HANA DB: Recommended OS settings for
SLES 11 / SLES for SAP Applications 11 SP2 and 1954788 – SAP HANA DB: Recommended OS settings
for SLES 11 / SLES for SAP Applications 11 SP3, and additionally described in IBM RETAIN Tip
H207000 42 – Linux Ignores C-State Settings in Unified Extensible Firmware Interface (UEFI), the control
("C") states of the Intel processor should be turned off for the most reliable performance of SAP HANA.
By default C-States are enabled in the UEFI because the processor is set to Custom
Mode. With C-States turned on you might see performance degradations with SAP HANA. We
recommend turning off the processor C-States using the Linux kernel boot parameter:
1 processor.max_cstate=0

The Linux kernel used by SAP HANA includes a built-in driver (’intel_idle’) which will ignore any
C-State limits imposed by Basic Input/Output System (BIOS)/Unified Extensible Firmware Interface
(UEFI) when it is active.
This driver may cause issues by enabling C-States even though they are disabled in the BIOS or UEFI.
This can cause minor latency as the CPUs transition out of a C-State and into a running state. This is
not the preferred state for the SAP HANA appliance and must be changed.
To prevent the ’intel_idle’ driver from ignoring BIOS or UEFI settings for C-States, add the following
start parameter to the kernel’s boot loader configuration file:
1 intel_idle.max_cstate=0

Append both parameters to the end of the kernel command line of your boot loader (/boot/grub/menu.lst)
and reboot the server.
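
After the reboot you can verify that both parameters are active on the running kernel; they should both appear in the output of:
1 # cat /proc/cmdline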
Warning
For clustered configurations, this change needs to be done on each server of the cluster. Only
make this change when all servers can be rebooted at once, or when you have an active stand-
by node to take over the rebooting system's HANA services. Do not reboot more servers
than there are active standby nodes.
For further information please refer to the SUSE knowledgebase article.

E.9 FAQ #9: ServeRAID M5120 RAID Adapter FW Issues

Problem: After the initial release of the new X6-based servers (x3850 X6, x3950 X6) a serious issue
in various firmware versions of the ServeRAID M5120 RAID adapter has been found which can trigger
continuous controller resets. This happens only under heavy load, and each controller reset may cause a
service interruption. Certain firmware versions do not exhibit this issue, but these versions show severely
degraded I/O performance. Only servers using the ServeRAID M5120 controller for attaching an external
SAS enclosure are affected.
Future appliance versions will have the workaround for the controller reset issue preinstalled, while the
performance issue can only be solved by an up- or downgrade to an unaffected firmware version.
Non-exhaustive list of known affected firmware versions:

42 http://www.ibm.com/support/entry/portal/docdisplay?lndocid=migr-5091901


Issue Affected versions


Controller resets 23.7.1-0010, 23.12.0-0011, 23.12.0-0016, 23.12.0-0019
Lowered Performance 23.16.0-0018, 23.16.0-0027

Table 58: ServeRAID M5120 Firmware Issues

Solution: The current recommendation is to use firmware version 23.22.0-0024 (or newer, if listed as
stable by Lenovo SAP HANA Team) and to change the following configuration value in the installed OS.
Both can be done after installation.

E.9.1 Changing Queue Depth

On the installed appliance, please edit /etc/init.d/ibm-saphana and change the lines
1 function start() {
2 QUEUESIZE=1024
3 for i in /sys/block/sd* ; do
4 if [ -d $i ]; then
5 echo $QUEUESIZE > $i/queue/nr_requests
6 fi
7 done

to this version (if not already set)


1 function start() {
2 QUEUESIZE=1024
3 QUEUEDEPTH=250
4 for i in /sys/block/sd* ; do
5 if [ -d $i ]; then
6 echo $QUEUESIZE > $i/queue/nr_requests
7 echo $QUEUEDEPTH > $i/device/queue_depth
8 fi
9 done

by inserting lines 3 & 7. The new settings will be set on the next reboot or by calling
1 # service ibm-saphana start

Please ignore any output.

E.9.2 Use recommended Firmware version

1. Check which FW Package Build is installed on all M5120 RAID controllers:


1 # /opt/MegaRAID/storcli/storcli64 -AdpAllInfo -aAll | grep 'M5120' -B 5 -A 3
2

3 Adapter #1
4

5 ==============================================================================
6 Versions
7 ================
8 Product Name : ServeRAID M5120
9 Serial No : xxxxxxxxxx
10 FW Package Build: 23.22.0-0024


Currently, version 23.22.0-0024 is recommended. Download the 23.22.0-0024 FW package for
ServeRAID 5100 SAS/SATA adapters via IBM Fix Central or use the following direct link:
https://ibm.biz/BdRatD.
2. Make the downloaded file executable and then run it:
chmod +x ibm_fw_sraidmr_5100-23.22.0-0024_linux_32-64.bin
./ibm_fw_sraidmr_5100-23.22.0-0024_linux_32-64.bin -s
3. Please reboot the server after updating all M5120 controllers.
4. After reboot: Check if the queue depth is set to 250 for all devices on M5120 RAID controller:
1 for dev in $(lsscsi | grep -i m5120 | grep -E -o '/dev/sd[a-z]+' | cut -d '/' -f3); do cat /sys/block/${dev}/device/queue_depth ; done

E.10 FAQ #10: GPFS Parameter enableLinuxReplicatedAIO

With GPFS version 3.5.0-13 the new GPFS parameter enableLinuxReplicatedAIO was introduced.
Please note the following:
• Single node installations: Single node installations are not affected by this parameter. It can
be set to "yes" or "no".
• Cluster installations:
– GPFS 3.5.0-13 - 3.5.0-15: The parameter must be set to "no". When upgrading to GPFS
3.5.0-16 or higher you have to manually set the value to "yes".
Warning
Instead of setting the parameter to "no" we highly recommend to upgrade GPFS
to 3.5.0-16 or higher.
– GPFS 3.5.0-16 or higher: The parameter must be set to "yes".
• DR cluster installations: The parameter must be set to "yes".
The support script (saphana-support-ibm.sh) checks if the parameter is set correctly. If it is not,
adjust the setting with the value that is appropriate for your installation type (see above):
1 # mmchconfig enableLinuxReplicatedAIO=no
2 # mmchconfig enableLinuxReplicatedAIO=yes

E.11 FAQ #11: GPFS NSD on Devices with GPT Labels

Problem: On some very rare occasions GPFS NSDs may be created on devices with a GUID Partition
Table (GPT). When the NSD is created, parts of the primary GPT header are overwritten. Newer UEFI
firmware releases offer an option to repair damaged GPTs, and if activated the UEFI may try to recover
the primary GPT from the backup copy during boot-up. This will destroy the NSD header, and in the case
of single nodes this leads to the loss of all data in the GPFS filesystem.
For this issue to occur, the following prerequisites must all apply:
• A storage device used as a NSD in a GPFS filesystem must have a GPT before the NSD was
created. This can only happen if the drive or RAID array was used before and has not been wiped
or reassembled. As part of the HANA appliance, GPT labels on non-OS disks are only created as
part of the mixed eX5/X6 clusters. If a system was only used for the HANA appliance, this cannot
occur unless there was a misconfiguration.


• GPFS 3.4 or GPFS 3.5 was used when the NSD and the filesystem was created, either during
installation or manually after installation, regardless of the current running GPFS version. GPFS
4.1.0 uses protective partition tables to prevent this issue when creating new NSDs.
• An UEFI version with GPT recovery functionality is either installed or an upgrade to such a version
is planned. Further risk comes from the UEFI upgrade as these new UEFI versions will enable the
GPT recovery by default.
The probability for this combination is very low.
Solution: If the support script pointed you to this FAQ entry, please contact Lenovo Support via SAP’s
OSS Ticket System and put the message on the Queue BC-OP-LNX-IBM. Please prepare a support script
dump as described in SAP Note 1661146 – Lenovo Check Tool for SAP HANA appliances. The Lenovo
support will then devise a solution for your installation.
When the ASU tool is installed, run the command
1 # /opt/ibm/toolscenter/asu/asu64 show | grep -i gpt

The setting has various names, but any variable whose name contains GPT and Recovery should be set to "None". If it
is set to "Automatic", do not reboot the system. If there is no such setting, do not upgrade the UEFI
firmware until the GPTs have been cleared.

E.12 FAQ #12: GPFS pagepool should be set to 4GB

Problem: GPFS in your appliance is configured to use 16GB RAM for its so called pagepool. Recent
tests showed that the size of this pagepool can be safely reduced to 4GB which will yield 12GB of memory
for other running processes. Therefore it is recommended to change this parameter on all appliance
installations and versions. Updated versions of the support script will warn if the pagepool size is not
4GB and will refer to this FAQ entry.
Solution: Please change the pagepool size to 4GB. Execute
1 # mmchconfig pagepool=4G

to change the setting cluster-wide. This means this command needs to be run only once on Single Node
and clustered installation.
The pagepool is allocated during the startup of GPFS, so a GPFS restart is required to activate the new
setting. Please stop HANA and any processes that access GPFS filesystems before restarting GPFS. To
restart GPFS execute
1 # mmshutdown
2 # mmstartup

In clusters all nodes need to be restarted. You can do this one node at a time or restart all nodes at
once by adding the parameter -a to both commands. In the latter case please make sure no program is
accessing GPFS filesystems on any node.
To verify the configured pagepool size run
1 # mmlsconfig | grep pagepool

To verify the current active pagepool size run


1 # mmdiag --config

and search for the pagepool line. This value is shown in bytes.


E.13 FAQ #13: Limit Page Cache Pool to 4GB (SAP Note 1557506)

Problem: SLES offers an option to limit the size of the page cache pool. By default the page cache
size is unlimited. In SAP Note 1557506 – Linux paging improvements SAP recommends limiting this
page cache to 4GB of RAM. This may improve resilience against Out-Of-Memory events.
Future appliance software versions will set this value by default. RHEL does not currently offer this
option.
Solution: Add the following line to file /etc/sysctl.conf:
1 vm.pagecache_limit_mb = 4096

and run
1 # sysctl -e -p

to activate this value without a reboot. This change can be done without a downtime.

E.14 FAQ #14: restripeOnDiskFailure and start-disks-on-startup

GPFS 3.5 and higher come with the new parameter restripeOnDiskFailure. The GPFS callback script
start-disks-on-startup automatically installed on the Lenovo Solution is superseded by this parameter
– IBM GPFS NSDs are automatically started on startup when restripeOnDiskFailure is activated.
On DR cluster installations, neither the callback script nor restripeOnDiskFailure should be activated.
Solution: To enable the new parameter on all nodes in the cluster execute:
1 # mmchconfig restripeOnDiskFailure=yes -N all

To remove the now unnecessary callback script start-disks-on-startup execute:


1 # mmdelcallback start-disks-on-startup


F References

F.1 Lenovo References

Lenovo Solution Documentation


• Lenovo Systems Solution for SAP HANA Quick Start Guide
• Lenovo Systems X6 Solution for SAP HANA Implementation Guide
• SAP Note 1650046 – Lenovo Systems X6 Solution for SAP HANA Operations Guide
Lenovo System x Documentation
• IBM X6 Portfolio Overview Redbook
• IBM eX5 Portfolio Overview Redbook
• IBM System Storage EXP2500 Express Specifications
• IBM System Networking RackSwitch G8052 Redbook
• IBM System Networking RackSwitch G8124E Redbook
• IBM System Networking RackSwitch G8264 Redbook
• LNVO-ASU – Lenovo Advanced Settings Utility (ASU)
• LNVO-DSA – Lenovo Dynamic System Analysis (DSA)
• MIGR-5090923 – IBM SSD Wear Gauge CLI utility

F.2 IBM References

IBM General Parallel File System Documentation


• IBM General Parallel File System Documentation
• GPFS FAQ (with supported OS levels)
• GPFS Service on IBM Fix Central (IBM ID required) for GPFS 3.5.0
• GPFS Books
– IBM developerWorks Article: GPFS Quick Start Guide for Linux
• GPFS Support in IBM Support Portal (IBM ID required)

F.3 SAP General Help (SAP Service Marketplace ID required)

• SAP Service Marketplace


• SAP Help Portal
• SAP HANA Ramp-Up Knowledge Transfer Learning Maps
• SAP HANA Software Download on SAP Service Marketplace → Software Downloads → SAP
Software Download Center → Support Packages and Patches / Installations and Upgrades → A–Z
Index


F.4 SAP Notes (SAP Service Marketplace ID required)

Generic SAP Notes about SAP HANA


• SAP Note 1730996 – Unrecommended external software and software versions
• SAP Note 1730929 – Using external tools in an SAP HANA appliance
• SAP Note 1803039 – Statistics server CHECK_HOSTS_CPU intern. error when restart
SAP Notes about the Lenovo Systems Solution for SAP HANA
• SAP Note 1650046 – Lenovo SAP HANA Appliance Operations Guide
• SAP Note 1661146 – Lenovo Check Tool for SAP HANA appliances
• SAP Note 1880960 – Lenovo Systems Solution for SAP HANA Platform Edition Customer Main-
tenance
• SAP Note 1898103 – Health Checker for the IBM SAP HANA appliance
SAP Notes regarding SAP HANA
• SAP Note 1681092 – Multiple SAP HANA databases on one SAP HANA system
• SAP Note 1642148 – FAQ: SAP HANA Database Backup & Recovery
• SAP Note 1819928 – SAP HANA appliance: Revision 50 of SAP HANA database
• SAP Note 1780950 – Connection problems due to host name resolution
• SAP Note 1829651 – Time zone settings in HANA scale out landscapes
• SAP Note 1743225 – Potential failure of connections with scale out nodes
• SAP Note 1888072 – SAP HANA DB: Indexserver crash in __strcmp_sse42
• SAP Note 1890444 – Slow HANA system due to CPU power save mode
SAP Notes regarding SUSE Linux Enterprise Server for SAP Applications
• SAP Note 784391 – SAP support terms and 3rd-party Linux kernel drivers
• SAP Note 1310037 – SUSE LINUX Enterprise Server 11: Installation notes
• SAP Note 1954788 – SAP HANA DB: Recommended OS settings for SLES 11 / SLES for SAP
Applications 11 SP3
• SAP Note 618104 – Linux SAP System Information Tool
• SAP Note 1056161 – SUSE Priority Support for SAP applications
• SAP Note 2001528 – Linux: SAP HANA Database SPS 08 revision 80 (or higher) on RHEL 6 or
SLES 11
SAP Notes regarding Red Hat Enterprise Linux
• SAP Note 2013638 – SAP HANA DB: Recommended OS settings for RHEL 6.5
SAP Notes regarding IBM GPFS
• SAP Note 1084263 – Cluster File System: Use of GPFS on Linux
• SAP Note 1902281 – GPFS 3.5 incompatibility with Linux kernel 3.0.58 and higher
• SAP Note 2051052 – GPFS "No space left on device" when df shows free space
SAP Notes regarding Virtualization


• SAP Note 1122387 – Linux: SAP Support in virtualized environments

F.5 Novell SUSE Linux Enterprise Server References

Currently Supported
• SUSE Linux Enterprise Server 11 SP3 Release Notes
• SUSE Linux Enterprise Server for SAP Applications 11 SP3 Media


G Changelog
This section describes the changes that have been made within this release version since it was first published.
1. Added FAQ entry #14 (Appendix E.14: FAQ #14: restripeOnDiskFailure and start-disks-on-
startup on page 206).
2. Error corrections in Backup and Restore/Recovery sections of
chapter 15: Backup and Restore of the Primary Partition on page 173
chapter 16: SAP HANA Backup and Recovery on page 181.
3. DR chapter: GPFS filesets for data and shared added (chapter 8.4.4: Filesystem Creation on page
76)
4. New controller M5225 included
