David Feisthammel
Daniel Lu
David Ye
Michael Miller
As enterprise demand for storage continues to accelerate,
Lenovo® and Microsoft have teamed up to craft a software-defined storage solution
leveraging the advanced feature set of Windows Server 2016 and the flexibility of the Lenovo
System x3650 M5 rack server and RackSwitch™ G8272 switch.
This solution provides a solid foundation for customers looking to consolidate both storage
and compute capabilities on a single hardware platform, or for those enterprises that wish to
have distinct storage and compute environments. In both situations, this solution provides
outstanding performance, high availability protection and effortless scale out growth potential
to accommodate evolving business needs.
This deployment guide provides insight into the setup of this environment and guides the
reader through a set of well-proven procedures that lead to a production-ready solution.
This guide is based on Storage Spaces Direct as implemented in Windows
Server 2016 RTM (Release to Manufacturing).
Do you have the latest version? Check whether you have the latest version of this
document by clicking the Check for Updates button on the front page of the PDF.
Pressing this button will take you to a web page that will tell you if you are reading the
latest version of the document and give you a link to the latest if needed. While you’re
there, you can also sign up to get notified via email whenever we make an update.
When discussing high performance and shareable storage pools, many IT professionals think
of expensive SAN infrastructure. Thanks to the evolution of disk and virtualization technology,
as well as ongoing advancements in network throughput, an economical, highly redundant,
high-performance storage subsystem is now within reach.
S2D supports two general deployment scenarios, which have been called disaggregated and
hyperconverged. Microsoft sometimes uses the term “converged” to describe the
disaggregated deployment scenario. Both scenarios provide storage for Hyper-V, specifically
focusing on Hyper-V Infrastructure as a Service (IaaS) for service providers and enterprises.
In the disaggregated approach, the environment is separated into compute and storage
components. An independent pool of servers running Hyper-V acts to provide the CPU and
memory resources (the “compute” component) for the running of VMs that reside on the
storage environment. The “storage” component is built using S2D and Scale-Out File Server
(SOFS) to provide an independently scalable storage repository for the running of VMs and
applications. This method, as illustrated in Figure 2 on page 5, allows for the independent
scaling and expanding of the compute farm (Hyper-V) and the storage farm (S2D).
For the hyperconverged approach, there is no separation between the resource pools for
compute and storage. Instead, each server node provides hardware resources to support the
running of VMs under Hyper-V, as well as the allocation of its internal storage to contribute to
the S2D storage repository.
Figure 3 Hyperconverged configuration - nodes provide shared storage and Hyper-V hosting
Solution configuration
The primary difference between configuring the two deployment scenarios is that no vSwitch
creation is necessary in the disaggregated solution, since the S2D cluster is used only for the
storage component and does not host VMs. This document specifically addresses the
deployment of a Storage Spaces Direct hyperconverged solution. If a disaggregated solution
is preferred, it is a simple matter of skipping a few configuration steps, which will be
highlighted along the way.
The following components and information are relevant to the test environment used to
develop this guide. This solution consists of two key components, a high-throughput network
infrastructure and a storage-dense high-performance server farm.
In this solution, the networking component consists of a pair of Lenovo RackSwitch G8272
switches, which are connected to each node via 10GbE Direct Attach Copper (DAC) cables.
In addition to the Mellanox ConnectX-4 NICs described in this document, Lenovo also
supports Chelsio T520-LL-CR dual-port 10GbE network cards that use the iWARP protocol.
This Chelsio NIC can be ordered via the CORE special-bid process as Lenovo part number
46W0609. Contact your local Lenovo client representative for more information. Although the
body of this document details the steps required to configure the Mellanox cards, it is a simple
matter to substitute Chelsio NICs in the solution.
Figure 4 shows high-level details of the configuration. The four server/storage nodes and two
switches take up a combined total of 10 rack units of space.
The use of RAID controllers: Microsoft does not support any RAID controller attached to
the storage devices used by S2D, regardless of a controller’s ability to support
“pass-through” or JBOD mode. As a result, the N2215 SAS HBA is used in this solution.
The ServeRAID M1215 controller is used only for the pair of mirrored (RAID-1) boot drives
and has nothing to do with S2D.
Figure 5 on page 8 shows the layout of the drives. There are 14x 3.5” drives in the server, 12
at the front of the server and two at the rear of the server. Four are 800 GB SSD devices,
while the remaining ten drives are 4 TB SATA HDDs. These 14 drives form the tiered storage
pool of S2D and are connected to the N2215 SAS HBA. Two 2.5” drive bays at the rear of the
server contain a pair of 600 GB SAS HDDs that are mirrored (RAID-1) for the boot drive and
connected to the ServeRAID™ M1215 SAS RAID adapter.
One of the requirements for this solution is that a non-RAID storage controller is used for the
S2D data volume. Note that using a RAID storage controller set to pass-through mode is not
supported at the time of this writing. The ServeRAID adapter is required for high availability of
the operating system and is not used by S2D for its storage repository.
Figure 5 x3650 M5 storage subsystem
Network wiring of this solution is straightforward, with each server being connected to each
switch to enhance availability. Each system contains a dual-port 10 GbE Mellanox
ConnectX-4 adapter to handle operating system traffic and storage communications.
To allow for redundant network links in the event of a network port or external switch failure,
connect Port 1 of each Mellanox adapter to a port on the first G8272 switch (“S2DSwitch1”)
and Port 2 of the same adapter to an available port on the second G8272 switch
(“S2DSwitch2”). This cabling construct is illustrated in Figure 6. Defining an Inter-Switch Link
(ISL) ensures failover capability between the switches.
The last construction on the network subsystem is to leverage the virtual network capabilities
of Hyper-V on each host to create a SET-enabled team from both 10 GbE ports on the
Mellanox adapter. From this team, a virtual switch (vSwitch) is defined and logical network
adapters (vNICs) are created to carry solution traffic.
Also, for the disaggregated solution, the servers are configured with 128 GB of memory,
rather than 256 GB, and the CPU has 10 cores instead of 14 cores. The higher-end
specifications of the hyperconverged solution are to account for the dual functions of compute
and storage that each server node will take on, whereas in the disaggregated solution, there
is a separation of duties, with one server farm dedicated to Hyper-V hosting and a second
devoted to S2D.
Overview of the installation tasks
This document specifically addresses the deployment of a Storage Spaces Direct
hyperconverged solution. Although nearly all configuration steps presented apply to the
disaggregated solution as well, there are a few differences between these two solutions. We
have included notes regarding steps that do not apply to the disaggregated solution. These
notes are also included as comments in PowerShell scripts.
Leveraging the benefits of SMB Direct comes down to a few simple principles. First, using
hardware that supports SMB Direct and RDMA is critical. Use the Bill of Materials found in
“Appendix: Bill of Materials for hyperconverged solution” on page 29 as a guide. This solution
utilizes a pair of Lenovo RackSwitch G8272 10/40 Gigabit Ethernet switches and a dual-port
10GbE Mellanox ConnectX-4 PCIe adapter for each node.
Redundant physical network connections are a best practice for resiliency as well as
bandwidth aggregation. This is a simple matter of connecting each node to each switch. In
our solution, Port 1 of each Mellanox adapter is connected to Switch 1 and Port 2 of each
Mellanox adapter is connected to Switch 2, as shown in Figure 7 on page 11.
As a final bit of network cabling, we configure an Inter-Switch Link (ISL) between our pair of
switches to support the redundant node-to-switch cabling described above. To do this, we
need redundant high-throughput connectivity between the switches, so we connect Ports 53
and 54 on each switch to each other using a pair of 40Gbps QSFP+ cables. Note that these
connections are not shown in Figure 7.
In order to leverage the SMB Direct benefits listed above, a set of cascading requirements
must be met. Using RDMA over Converged Ethernet (RoCE) requires a lossless fabric, which
is typically not provided by standard TCP/IP Ethernet network infrastructure, since the TCP
protocol is designed as a “best-effort” transport protocol. Data Center Bridging (DCB) is a set
of enhancements to IP Ethernet, which is designed to eliminate loss due to queue overflow,
as well as to allocate bandwidth between various traffic types.
To sort out priorities and provide lossless performance for certain traffic types, DCB relies on
Priority Flow Control (PFC). Rather than using the typical Global Pause method of standard
Ethernet, PFC specifies individual pause parameters for eight separate priority classes. Since
the priority class data is contained within the VLAN tag of any given traffic, VLAN tagging is
also a requirement for RoCE and, therefore, for SMB Direct.
Once the network cabling is done, it's time to begin configuring the switches. These
configuration commands need to be executed on both switches. We start by enabling
Converged Enhanced Ethernet (CEE), which automatically enables Priority-Based Flow
Control (PFC) for all Priority 3 traffic on all ports. Enabling CEE also automatically configures
Enhanced Transmission Selection (ETS) so that at least 50% of the total bandwidth is always
available for our storage (PGID 1) traffic. These automatic default configurations are suitable
for our solution. The commands are listed in Example 1.
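As a sketch of what such commands look like on the G8272 (ENOS ISCLI syntax is assumed; verify against the RackSwitch G8272 Application Guide for your firmware level):

```
enable
configure terminal
cee enable
```

Enabling CEE in this way also activates the PFC and ETS defaults described above.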
After enabling CEE, we configure the vLANs. Although we could use multiple vLANs for
different types of network traffic (storage, client, management, cluster heartbeat, Live
Migration, etc.), the simplest choice is to use a single vLAN (12) to carry all our SMB Direct
solution traffic. Employing 10GbE links makes this a viable scenario. Enabling vLAN tagging
is important in this solution, since RDMA requires it.
For redundancy, we configure an ISL between a pair of 40GbE ports on each switch. We use
the last two ports, 53 and 54, for this purpose. Physically, each port is connected to the same
port on the other switch using a 40Gbps QSFP+ cable. Configuring the ISL is a simple matter
of joining the two ports into a port trunk group. See Example 3.
Once we've got the configuration complete on the switch, we need to copy the running
configuration to the startup configuration. Otherwise, our configuration changes would be lost
once the switch is reset or reboots. This is achieved using the write command, Example 4.
Example 4 Use the write command to copy the running configuration to startup
write
Repeat the entire set of commands above (Example 1 on page 12 through Example 4) on the
other switch, defining the same vLAN and port trunk on that switch. Since we are using the
same ports on both switches for identical purposes, the commands that are run on each
switch are identical. Remember to commit the configuration changes on both switches using
the write command.
Note: If the solution uses a switch model or switch vendor other than the RackSwitch
G8272, it is essential to apply the equivalent command sets to those switches. The
commands themselves may differ from what is shown above, but it is imperative that the
same functions are configured to ensure proper operation of this solution.
This flexibility in the tool grants full control to the server owner and ensures that these
important updates are performed at a convenient time.
Windows Server 2016 contains all the drivers necessary for this solution with the exception of
the Mellanox ConnectX-4 driver, which was updated by Mellanox after the final Release to
Manufacturing (RTM) build of the OS was released. To obtain the latest CX-4 driver, visit:
http://www.mellanox.com/page/products_dyn?product_family=32&mtag=windows_driver
In addition, it is recommended to install the Lenovo IMM2 PBI mailbox driver. Although this is
actually a null driver and is not required for the solution, installing this driver removes the
“bang” from the Unknown device in the Windows Device Manager. You can find the driver
here:
https://www-945.ibm.com/support/fixcentral/systemx/selectFixes?parent=Lenovo%2BSystem%2Bx3650%2BM5&product=ibm/systemx/8871&&platform=Windows+2012+R2&function=all#IMMPBI
Figure 8 UEFI main menu
3. Create a RAID-1 pool from the two 2.5” HDDs installed at the rear of the system.
Leave the remaining 14 drives (four 800 GB SSDs and ten 4 TB HDDs) that are connected to
the N2215 SAS HBA unconfigured. They will be managed directly by the operating system
when the time comes to create the storage pool.
System x® servers, including the x3650 M5, feature an Integrated Management Module
(IMM) to provide remote out-of-band management, including remote control and remote
media.
Select the source that is appropriate for your situation. The following steps describe the
installation:
1. With the method of Windows deployment selected, power the server on to begin the
installation process.
2. Select the appropriate language pack, correct input device, and the geography, then
select the desired OS edition (GUI or Core components only).
3. Select the RAID-1 array connected to the ServeRAID M1215 controller as the target to
install Windows (you might need to scroll through a list of available drives).
4. Follow the prompts to complete the installation of the OS.
Note that it is a good idea to install the Hyper-V role on all nodes even if you plan to
implement the disaggregated solution. Although you may not regularly use the storage cluster
to host VMs, if the Hyper-V role is installed, you will have the option to deploy an occasional
VM if the need arises.
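As a sketch, the roles and features for a hyperconverged node can be installed with a single PowerShell command. The exact feature list shown is an assumption for this solution; disaggregated storage nodes can omit Hyper-V if desired:

```powershell
# Install Hyper-V, Failover Clustering, the File Server role, and DCB on this node
Install-WindowsFeature -Name Hyper-V, Failover-Clustering, FS-FileServer, Data-Center-Bridging `
    -IncludeManagementTools -Restart
```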
Once the roles and features have been installed and the nodes are back online, operating
system configuration can begin.
To ensure that the latest fixes and patches are applied to the operating system, update the
Windows Server components via Windows Update. It is a good idea to reboot each node
after the final update is applied to ensure that all updates have been fully installed,
regardless of what Windows Update indicates.
Upon completing the Windows Update process, join each server node to the Windows Active
Directory Domain. Use the following PowerShell command to accomplish this task.
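A minimal sketch of such a command follows; the domain name is a placeholder for your environment:

```powershell
# Join the node to the domain and restart to complete the operation
Add-Computer -DomainName "contoso.local" -Credential (Get-Credential) -Restart
```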
From this point onward, when working with cluster services, be sure to log on to the systems
with a domain account and not the local Administrator account. Ensure that the domain
account is a member of the local Administrators security group, as shown in Figure 9.
Figure 9 Group membership of the Administrator account
Verify that the internal drives are online, by going to Server Manager > Tools > Computer
Management > Disk Management. If any are offline, select the drive, right-click it, and click
Online. Alternatively, PowerShell can be used to bring all 14 drives in each host online with a
single command.
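For example, a single pipeline can bring every offline disk online (a sketch; run it on each node, or remotely as described below):

```powershell
# Bring all currently offline disks online
Get-Disk | Where-Object IsOffline | Set-Disk -IsOffline:$false
```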
Since all systems have been joined to the domain, we can execute the PowerShell command
remotely on the other hosts while logged in as a Domain Administrator. To do this, use the
command shown in Example 8.
For the Mellanox NICs used in this solution, we need to enable Data Center Bridging (DCB),
which is required for RDMA. Then we create a policy to establish network Quality of Service
(QoS) to ensure that the Software Defined Storage system has enough bandwidth to
communicate between the nodes, ensuring resiliency and performance. We also need to
disable regular Flow Control (Global Pause) on the Mellanox adapters, since Priority Flow
Control (PFC) and Global Pause cannot operate together on the same interface.
To make all these changes quickly and consistently, we again use a PowerShell script, as
shown in Example 9 on page 17.
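As a hedged sketch of the kinds of commands such a script contains (the adapter names and the Flow Control display name are assumptions for this environment; refer to Example 9 for the actual script):

```powershell
# Tag SMB traffic (TCP port 445) with priority 3
New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
# Enable PFC for priority 3 only
Enable-NetQosFlowControl -Priority 3
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7
# Reserve bandwidth for SMB traffic via ETS
New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS
# Apply DCB on the physical NICs and disable Global Pause (display name may vary by driver)
Enable-NetAdapterQos -Name "Mellanox 1","Mellanox 2"
Set-NetAdapterAdvancedProperty -Name "Mellanox 1" -DisplayName "Flow Control" -DisplayValue "Disabled"
Set-NetAdapterAdvancedProperty -Name "Mellanox 2" -DisplayName "Flow Control" -DisplayValue "Disabled"
```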
For an S2D hyperconverged solution, we deploy a SET-enabled Hyper-V switch and add
RDMA-enabled host virtual NICs to it for use by Hyper-V. Since many switches won't pass
traffic class information on untagged vLAN traffic, we need to make sure that the vNICs using
RDMA are on vLANs.
To keep this hyperconverged solution as simple as possible and since we are using dual-port
10GbE NICs, we will pass all traffic on vLAN 12. If you need to segment your network traffic
more, for example to isolate VM Live Migration traffic, you can use additional vLANs.
Example 10 shows the PowerShell script that can be used to perform the SET configuration,
enable RDMA, and assign vLANs to the vNICs. These steps are necessary only for
configuring a hyperconverged solution. For a disaggregated solution these steps can be
skipped since Hyper-V is not enabled on the S2D storage nodes.
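A sketch of these steps, assuming the physical adapter names used earlier and vSwitch/vNIC names of our choosing:

```powershell
# Create a SET-enabled vSwitch across both physical Mellanox ports
New-VMSwitch -Name "S2DSwitch" -NetAdapterName "Mellanox 1","Mellanox 2" `
    -EnableEmbeddedTeaming $true -AllowManagementOS $false
# Add two host vNICs for SMB traffic
Add-VMNetworkAdapter -SwitchName "S2DSwitch" -Name "SMB1" -ManagementOS
Add-VMNetworkAdapter -SwitchName "S2DSwitch" -Name "SMB2" -ManagementOS
# Tag both vNICs with vLAN 12, since RDMA requires tagged traffic
Set-VMNetworkAdapterVlan -VMNetworkAdapterName "SMB1" -VlanId 12 -Access -ManagementOS
Set-VMNetworkAdapterVlan -VMNetworkAdapterName "SMB2" -VlanId 12 -Access -ManagementOS
# Enable RDMA on the new vNICs
Enable-NetAdapterRdma -Name "vEthernet (SMB1)","vEthernet (SMB2)"
```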
Now that all network interfaces have been created (including the vNICs required by a
hyperconverged deployment if necessary), IP address configuration can be completed, as
follows:
1. Configure a static IP address for the operating system or public facing interface on the
SMB1 vNIC (for example, 10.10.10.x). Configure default gateway and DNS server settings
as appropriate for your environment.
2. Configure a static IP address on the SMB2 vNIC, using a different subnet if desired (for
example, 10.10.11.x). Again, configure default gateway and DNS server settings as
appropriate for your environment.
3. Perform a ping command from each interface to the corresponding server nodes in this
environment to confirm that all connections are functioning properly. Both interfaces on
each node should be able to communicate with both interfaces on all other nodes.
Example 11 PowerShell commands used to configure the SMB vNIC interfaces on Node 1
Set-NetIPInterface -InterfaceAlias "vEthernet (SMB1)" -Dhcp Disabled
New-NetIPAddress -InterfaceAlias "vEthernet (SMB1)" -IPAddress 10.10.10.11 -PrefixLength 24
Set-DnsClientServerAddress -InterfaceAlias "vEthernet (SMB1)" -ServerAddresses 10.10.10.1
Set-NetIPInterface -InterfaceAlias "vEthernet (SMB2)" -Dhcp Disabled
New-NetIPAddress -InterfaceAlias "vEthernet (SMB2)" -IPAddress 10.10.11.11 -PrefixLength 24
Set-DnsClientServerAddress -InterfaceAlias "vEthernet (SMB2)" -ServerAddresses 10.10.10.1
It's a good idea to disable any network interfaces that won't be used for the solution before
creating the Failover Cluster, including the IBM USB Remote NDIS Network device. The only
interfaces that will be used in this solution are the SMB1 and SMB2 vNICs.
Figure 10 shows the network connections. The top two connections (in blue box) represent
the two physical ports on the Mellanox adapter and must remain enabled. The next
connection (in red box) represents the IBM USB Remote NDIS Network device, which can be
disabled. Finally, the bottom two connections (in the green box) are the SMB Direct vNICs
that will be used for all solution network traffic. There may be additional network interfaces
listed, such as those for multiple Broadcom NetXtreme Gigabit Ethernet NICs. These should
be disabled as well.
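A sketch of disabling the unneeded interfaces with PowerShell; the match patterns are assumptions based on the device names above:

```powershell
# Disable the Remote NDIS device and any Broadcom NetXtreme NICs
Get-NetAdapter | Where-Object {
    $_.InterfaceDescription -like "*Remote NDIS*" -or
    $_.InterfaceDescription -like "*NetXtreme*"
} | Disable-NetAdapter -Confirm:$false
```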
Since RDMA is so critical to the performance of the final solution, it’s a good idea to make
sure each piece of the configuration is correct as we move through the steps. We can’t look
for RDMA traffic yet, but we can verify that the vNICs (in a hyperconverged solution) have
RDMA enabled. Example 12 on page 18 shows the PowerShell command we use for this
purpose and Figure 11 on page 19 shows the output of that command in our environment.
Example 12 PowerShell command to verify that RDMA is enabled on the vNICs just created
Get-NetAdapterRdma | ? Name -Like *SMB* | ft Name, Enabled
Although not strictly necessary, it is a best practice to assign base and maximum processors
for VMQ queues on each server in order to ensure maximum efficiency of queue
management. Although the concept is straightforward, there are a few things to keep in mind
when determining proper processor assignment. First, only physical processors are used to
manage VMQ queues. Therefore, if Hyper-Threading (HT) Technology is enabled, only the
even-numbered processors are considered viable. Next, since processor 0 is assigned to
many internal tasks, it is best not to assign queues to this particular processor.
Example 13 PowerShell commands used to determine processors available for VMQ queues
# Check for Hyper-Threading (if there are twice as many logical procs as number of cores, HT is enabled)
Get-WmiObject -Class win32_processor | ft -Property NumberOfCores, NumberOfLogicalProcessors -AutoSize
# Check procs available for queues (check the RssProcessorArray field)
Get-NetAdapterRSS
Once you have this information, it's a simple math problem. We have a pair of 14-core CPUs
in each host, providing 28 physical cores, or 56 logical processors with Hyper-Threading
enabled. Excluding processor 0 and eliminating all odd-numbered processors leaves
us with 27 processors to assign. Given the dual-port Mellanox adapter, this means we can
assign 13 processors to one port and 14 processors to the other. This results in the following
processor assignment:
Mellanox 1: procs 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28
Mellanox 2: procs 30, 32, 34, 36, 38, 40, 42, 44, 46, 48, 50, 52, 54
Use the following PowerShell script to define the base (starting) processor as well as how
many processors to use for managing VMQ queues on each physical NIC consumed by the
vSwitch (in our solution, the two Mellanox ports.)
Set-NetAdapterVmq -Name "Mellanox 1" -BaseProcessorNumber 2 -MaxProcessors 14
Set-NetAdapterVmq -Name "Mellanox 2" -BaseProcessorNumber 30 -MaxProcessors 13
# Check VMQ queues
Get-NetAdapterVmqQueue
Now that we’ve got the networking internals configured for one system, we use PowerShell
remote execution to replicate this configuration to the other three hosts. Example 15 shows
the PowerShell commands, this time without comments. These commands are for configuring
a hyperconverged solution using Mellanox NICs. If Chelsio NICs are being used, eliminate
the first 9 steps. If configuring a disaggregated solution, eliminate the last 9 steps.
The final piece of preparing the infrastructure for S2D is to create the Failover Cluster.
Once the cluster is built, you can also use PowerShell to query the health status of the cluster
storage.
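A sketch of cluster validation, creation, and a storage health query; the cluster name, node names, and static address are placeholders for your environment:

```powershell
# Validate the nodes for S2D readiness
Test-Cluster -Node Node1,Node2,Node3,Node4 `
    -Include "Storage Spaces Direct",Inventory,Network,"System Configuration"
# Create the cluster without claiming any storage yet
New-Cluster -Name S2DCluster -Node Node1,Node2,Node3,Node4 -NoStorage -StaticAddress 10.10.10.10
# Query the health of the cluster storage subsystem
Get-StorageSubSystem *Cluster* | Get-StorageHealthReport
```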
The default behavior of Failover Cluster creation is to set aside the non-public facing subnet
(configured on the SMB2 vNIC) as a cluster heartbeat network. When 1 GbE was the
standard, this made perfect sense. However, since we are using 10 GbE in this solution, we
prefer to allow client and cluster traffic on both networks.
2. Note the Cluster Use setting for each network. If this setting is Cluster Only, right-click on
the network entry and select Properties.
3. In the Properties window that opens, ensure that the Allow cluster network
communication on this network radio button is selected. Also, select the Allow clients
to connect through this network checkbox, as shown in Figure 13 on page 21.
Optionally, change the network Name to one that makes sense for your installation and
click OK.
After making this change, both networks should show “Cluster and Client” in the Cluster Use
column, as shown in Figure 14.
It is generally a good idea to use the cluster network Properties window to specify cluster
network names that make sense and will aid in troubleshooting later. To be consistent, we
name our cluster networks after the vNICs that carry the traffic for each, as shown in
Figure 14.
Figure 14 Cluster networks shown with names to match the vNICs that carry their traffic
It is also possible to accomplish the cluster network role and name changes using
PowerShell. Example 18 provides a script to do this.
Figure 15 shows output of the PowerShell commands to display the initial cluster network
parameters, modify the cluster network names, enable client traffic on the second cluster
network, and check to make sure cluster network names and roles are set properly.
For information on how to create a cluster file share witness, read the Microsoft article,
Configuring a File Share Witness on a Scale-Out File Server, available at:
https://blogs.msdn.microsoft.com/clustering/2014/03/31/configuring-a-file-share-witness-on-a-scale-out-file-server/
Note: Make sure the file share for the cluster file share witness has the proper permissions
for the cluster name object as in the example shown in Figure 16.
Once the cluster is operational and the file share witness has been established, it is time to
enable and configure the Storage Spaces Direct feature.
2. Configure S2D cache tier using the highest performance storage devices available, such
as NVMe or SSD
3. Create two storage tiers, one called “Capacity” and the other called “Performance.”
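A sketch of these steps. Enable-ClusterStorageSpacesDirect claims the eligible disks automatically; the tier resiliency settings shown are assumptions typical of a four-node SSD-plus-HDD pool:

```powershell
# Enable S2D; the SSDs are claimed for cache, the HDDs for capacity
Enable-ClusterStorageSpacesDirect
# Create the two storage tiers
New-StorageTier -StoragePoolFriendlyName "S2D*" -FriendlyName "Performance" `
    -MediaType SSD -ResiliencySettingName Mirror
New-StorageTier -StoragePoolFriendlyName "S2D*" -FriendlyName "Capacity" `
    -MediaType HDD -ResiliencySettingName Parity
```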
Note: You may notice that while S2D is being enabled, the process pauses for an
extended period with the message “Waiting until physical disks are claimed...” In our
testing we saw this delay at roughly 24-28%, and it lasted anywhere from 20 minutes to
over an hour. This is a known issue that is being addressed by Microsoft. The pause does
not affect S2D configuration or performance once complete.
Take a moment to run a few PowerShell commands at this point to verify that all is as
expected. First, run the command shown in Example 20. The results should be similar to
those in our environment, shown in Figure 17 on page 24.
At this point we can also check to make sure RDMA is working. We provide two suggested
approaches for this. First, Figure 18 shows a simple netstat command that can be used to
verify that listeners are in place on port 445 (in the yellow boxes). This is the port typically
used for SMB and the port specified when we created the network QoS policy for SMB in
Example 9 on page 17.
Figure 18 The netstat command can be used to confirm listeners configured for port 445
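The check can be run from a PowerShell prompt; filtering the output keeps only the port 445 entries:

```powershell
# Show connections and listeners, keeping only port 445 (SMB) lines
netstat -an | Select-String ":445"
```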
The second method for verifying that RDMA is configured and working properly is to use
PerfMon to create an RDMA monitor. To do this, follow these steps:
1. At the PowerShell or Command prompt, type perfmon and press Enter.
2. In the Performance Monitor window that opens, select Performance Monitor in the left
pane and click the green plus sign (“+”) at the top of the right pane.
3. In the Add Counters window that opens, select RDMA Activity in the upper left pane. In
the Instances of selected object area in the lower left, choose the instances that represent
your vNICs (for our environment, these are “Hyper-V Virtual Ethernet Adapter #2” and
“Hyper-V Virtual Ethernet Adapter #3”). Once the instances are selected, click the Add
button to move them to the Added counters pane on the right. Click OK.
4. Back in the Performance Monitor window, click the drop-down icon to the left of the green
plus sign and choose Report.
Figure 21 Choose the “Report” format
5. This should show a report of RDMA activity for your vNICs. Here you can view key
performance metrics for RDMA connections in your environment, as shown in Figure 22
on page 26.
Table 1 shows the volume types supported by Storage Spaces Direct and several
characteristics of each.
Use case         All data is hot     All data is cold     Mix of hot and cold data
Minimum nodes    3                   4                    4
Once S2D installation is complete and volumes have been created, the final step is to verify
that there is fault tolerance in this storage environment. Example 24 shows the PowerShell
command to verify the fault tolerance of the S2D storage pool and Figure 23 shows the output
of that command in our environment.
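A sketch of such a query; for an S2D pool, FaultDomainAwarenessDefault typically reports StorageScaleUnit:

```powershell
# Check the fault domain awareness of the S2D storage pool
Get-StoragePool -FriendlyName "S2D*" |
    Format-List FriendlyName, Size, FaultDomainAwarenessDefault
```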
Figure 23 PowerShell query showing the fault domain awareness of the storage pool
To query the virtual disk, use the command in Example 25. The command verifies the fault
tolerance of a virtual disk (volume) in S2D, and Figure 24 shows the output of that command
in our environment.
Example 25 PowerShell command to determine S2D virtual disk (volume) fault tolerance
Get-VirtualDisk -FriendlyName <VirtualDiskName> | FL FriendlyName, Size, FaultDomainAwareness
Figure 24 PowerShell query showing the fault domain awareness of the virtual disk
Over time, the storage pool may become unbalanced as physical disks or storage nodes are
added or removed, or as data is written to or deleted from the pool. In this case, use the
PowerShell command shown in Example 26 to improve storage efficiency and performance.
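A sketch of the rebalance operation; the pool name wildcard is an assumption:

```powershell
# Rebalance the storage pool across all physical disks and nodes
Optimize-StoragePool -FriendlyName "S2D*"
# Monitor the rebalance job as it progresses
Get-StorageJob
```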
Summary
Windows Server 2016 introduces Storage Spaces Direct (S2D), which enables building highly
available and scalable storage systems with local storage. This is a
significant step forward in Microsoft Windows Server software-defined storage (SDS) as it
simplifies the deployment and management of SDS systems and also unlocks use of new
classes of disk devices, such as SATA and NVMe disk devices, that were previously not
possible with clustered Storage Spaces with shared disks.
With Windows Server 2016 Storage Spaces Direct, you can now build highly available
storage systems using Lenovo System x servers with only local storage. This eliminates the
need for a shared SAS fabric and its complexities, and enables the use of devices such as
SATA SSDs, which can further reduce cost, or NVMe SSDs, which can improve
performance.
This document has provided an organized, stepwise process for deploying an S2D solution
based on Lenovo System x servers and RackSwitch Ethernet switches. Once configured, this
solution provides a versatile foundation for many different types of workloads.
Our worldwide team of IT Specialists and IT Architects can help customers scope and size
the right solutions to meet their requirements, and then accelerate the implementation of the
solution with our on-site and remote services. For customers also looking to elevate their own
skill sets, our Technology Trainers can craft services that encompass solution deployment
plus skills transfer, all in a single affordable package.
To inquire about our extensive service offerings and solicit information on how we can assist
in your new Storage Spaces Direct implementation, please contact us at
x86svcs@lenovo.com.
For more information about our service portfolio, please see our website:
http://shop.lenovo.com/us/en/systems/services/?menu-id=services
A5B7 16GB TruDDR4™ Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM 64
A2HP Configuration ID 01 8
Part number Description Quantity
ATET Intel Xeon Processor E5-2680 v4 14C 2.4GHz 35MB 2400MHz 120W 4
ATFJ Addl Intel Xeon Processor E5-2680 v4 14C 2.4GHz 35MB Cache 2400MHz 120W 4
A2HP Configuration ID 01 4
A2JX Controller 01 4
A2HP Configuration ID 01 4
A2JY Controller 02 4
Change history
Changes in the 9 January 2017 update:
Added detail regarding solution configuration if using Chelsio NICs
Added PowerShell commands for IP address assignment
Moved network interface disablement section to make more logical sense
Updated Figure 2 on page 5 and Figure 3 on page 6
Fixed reference to Intel v3 processors in Figure 4 on page 7
Updated cluster network rename section and figure
Removed Bill of Materials for disaggregated solution
Authors
This paper was produced by the following team of specialists:
Dave Feisthammel is a Senior Solutions Architect working at the Lenovo Center for
Microsoft Technologies in Kirkland, Washington. He has over 25 years of experience in the IT
field, including four years as an IBM client and 15 years working for IBM. His areas of
expertise include systems management, as well as virtualization, storage, and cloud
technologies.
David Ye is a Senior Solutions Architect and has been working at the Lenovo Center for
Microsoft Technologies for 15 years. He started his career at IBM as a Worldwide Windows
Level 3 Support Engineer. In this role, he helped customers solve complex problems and was
involved in many critical customer support cases. He is now a Senior Solutions Architect in
the System x Enterprise Solutions Technical Services group, where he works with customers
on Proof of Concepts, solution sizing, performance optimization, and solution reviews. His
areas of expertise are Windows Server, SAN Storage, Virtualization, and Microsoft Exchange
Server.
Michael Miller is a Windows Engineer with the Lenovo Server Lab in Kirkland, Washington.
Mike has 35 years of experience in the IT industry, primarily in client/server support and development roles.
The last 10 years have been focused on Windows server operating systems and server-level
hardware, particularly on operating system/hardware compatibility, advanced Windows
features, and Windows test functions.
At Lenovo Press, we bring together experts to produce technical publications around topics of
importance to you, providing information and best practices for using Lenovo products and
solutions to solve IT challenges.
See a list of our most recent publications at the Lenovo Press web site:
http://lenovopress.com
Lenovo may have patents or pending patent applications covering subject matter described in this document.
The furnishing of this document does not give you any license to these patents. You can send license
inquiries, in writing, to:
LENOVO PROVIDES THIS PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some
jurisdictions do not allow disclaimer of express or implied warranties in certain transactions, therefore, this
statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. Lenovo may
make improvements and/or changes in the product(s) and/or the program(s) described in this publication at
any time without notice.
The products described in this document are not intended for use in implantation or other life support
applications where malfunction may result in injury or death to persons. The information contained in this
document does not affect or change Lenovo product specifications or warranties. Nothing in this document
shall operate as an express or implied license or indemnity under the intellectual property rights of Lenovo or
third parties. All information contained in this document was obtained in specific environments and is
presented as an illustration. The result obtained in other operating environments may vary.
Lenovo may use or distribute any of the information you supply in any way it believes appropriate without
incurring any obligation to you.
Any references in this publication to non-Lenovo Web sites are provided for convenience only and do not in
any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the
materials for this Lenovo product, and use of those Web sites is at your own risk.
Any performance data contained herein was determined in a controlled environment. Therefore, the result
obtained in other operating environments may vary significantly. Some measurements may have been made
on development-level systems and there is no guarantee that these measurements will be the same on
generally available systems. Furthermore, some measurements may have been estimated through
extrapolation. Actual results may vary. Users of this document should verify the applicable data for their
specific environment.
Send us your comments via the Rate & Provide Feedback form found at
http://lenovopress.com/lp0064
Trademarks
Lenovo, the Lenovo logo, and For Those Who Do are trademarks or registered trademarks of Lenovo in the
United States, other countries, or both. These and other Lenovo trademarked terms are marked on their first
occurrence in this information with the appropriate symbol (® or ™), indicating US registered or common law
trademarks owned by Lenovo at the time this information was published. Such trademarks may also be
registered or common law trademarks in other countries. A current list of Lenovo trademarks is available on
the Web at http://www.lenovo.com/legal/copytrade.html.
The following terms are trademarks of Lenovo in the United States, other countries, or both:
Lenovo® ServeRAID™ vNIC™
RackSwitch™ System x®
Lenovo(logo)® TruDDR4™
Intel, Xeon, and the Intel logo are trademarks or registered trademarks of Intel Corporation or its subsidiaries
in the United States and other countries.
Active Directory, Hyper-V, Microsoft, SQL Server, Windows, Windows Server, and the Windows logo are
trademarks of Microsoft Corporation in the United States, other countries, or both.
Other company, product, or service names may be trademarks or service marks of others.