Virtual Appliance
Getting Started Guide
This product is protected by United States Patents Nos. 7,093,127 B2; 6,715,098; 7,058,788 B2; 7,330,960 B2; 7,165,145 B2; 7,155,585 B2; 7,231,502 B2; 7,469,337; 7,467,259; 7,418,416 B2; 7,406,575 B2, and additional patents pending.
NSSVA Getting Started Guide
Contents

Introduction
  Components
  Benefits
  Hardware/software requirements
  NSSVA Specification and requirement summary
    Minimum ESX server requirements: shared server configuration
    Minimum ESX server requirements: dedicated server configuration
    Supported Disk Configuration
  NSSVA Configuration
    ESX server deployment planning
  About this document
  Knowledge requirements
Index
Introduction
FalconStor Network Storage Server Virtual Appliance (NSSVA) for VMware
Infrastructure 3 and 4 is a pre-configured, production-ready virtual machine that
delivers high-speed iSCSI and storage virtualization services through VMware's
virtual appliance architecture.
It provides enterprise-class data protection features including application-aware,
space-efficient snapshot technology that can maintain up to 64 point-in-time copies
of each volume. The FalconStor NSS Virtual Appliance can also be used as a cost-
effective virtual iSCSI SAN solution by creating a virtual SAN on a VMware ESX
server and turning internal disk resources into a shareable pool of storage.
If the FalconStor NSS Virtual Appliance is deployed on a single VMware ESX server,
that server can share storage resources with other servers in the environment. This
is accomplished without the need for external storage arrays, SAN switches, or
costly host bus adapters (HBA). Internal data drives are detected by the software
and incorporated into the management console through a simple GUI. At that point,
storage can be provisioned and securely allocated via the iSCSI protocol, which
operates over standard Ethernet cabling.
To enable high availability (HA), the FalconStor NSS Virtual Appliance can be
deployed on two VMware ESX servers that can share storage with each other as
well as additional VMware ESX servers. In this model, each NSS Virtual Appliance
maintains mirrored data from the other server. If one of the servers is lost, all virtual
machines that were running on the failed server can restart using the storage
resources of the remaining server. Downtime is kept to a minimum as applications
are quickly brought back online.
Thin Provisioning technology and space-efficient snapshots further decrease costs
by minimizing consumption of physical storage resources. The Thin Replication
feature minimizes bandwidth utilization by sending only unique data blocks over the
wire. Built-in compression and encryption reduce bandwidth consumption and
enhance security, without requiring specialized network devices to connect remote
locations with the data center or DR site. Tape backup for multiple remote offices
can be consolidated to a central site, eliminating the need for distributed tape
autoloaders and associated management headaches and overhead.
NSSVA is supported under the VMware Ready program for virtual appliances. It is a
TOTALLY Open solution for VMware Infrastructure that enables a virtual SAN
(vSAN) service directly on VMware ESX servers. The local direct-attached storage
becomes a shared SAN for all ESX servers on the iSCSI network. The ability to
convert direct-attached storage within an ESX server opens the door for small to
medium enterprises to deploy VMware Infrastructure without the added expense of
a dedicated SAN appliance and to enjoy the broader benefits of VMware's business
continuity and resource management features.
Additionally, most businesses, small and large, seek out VMware's advanced
enterprise features: VMware VMotion (live migration of a running virtual machine
from one ESX server to another), HA (High Availability: automatic restart of virtual
machines), and DRS (Distributed Resource Scheduling: moving virtual machine
workloads based on preset metrics or schedules).
Components
NSS Virtual Appliance consists of the following components:
NSS Virtual Appliance
A virtual machine that runs FalconStor NSS software. This virtual appliance
delivers high-speed iSCSI and storage virtualization services through VMware's
virtual appliance architecture: a plug-and-play VMware virtual machine running on
a VMware ESX server. NSSVA is a TOTALLY Open virtual storage array and a
VMware Certified Virtual Appliance.

FalconStor Management Console
The Windows management console, which can be installed anywhere there is IP
connectivity to the NSS Virtual Appliance.

Snapshot Agents
Optional. Collaborate with Windows NTFS volumes and applications to guarantee
that snapshots are taken with full application-level integrity for the fastest possible
recovery. A full suite of Snapshot Agents is available so that each snapshot can
later be used without lengthy chkdsk and database/email consistency repairs.
Snapshot Agents are available for Oracle, Microsoft Exchange, Lotus
Notes/Domino, Microsoft SQL Server, IBM DB2 Universal Database, Sybase, and
many other applications.

SAN Disk Manager
Host-side software that helps you register host machines with the NSS Virtual
Appliance.
Benefits
High Availability
MicroScan Replication
In the branch or remote office, VMware Infrastructure and FalconStor NSSVA can
help reduce operational costs through server and storage consolidation to a
central data center. FalconStor's MicroScan Replication option, with built-in WAN
acceleration, completes remote office server and storage consolidation IT
strategies by providing highly efficient replication of branch or remote office data
to your central data center. MicroScan Replication also reduces the amount of
information replicated by ensuring that data already sent to the central data center
is not sent more than once, thereby reducing traffic on the WAN.
FalconStor NSSVA also supports VMware Site Recovery Manager (SRM) through
integration with FalconStor MicroScan replication. FalconStor NSSVA, combined
with VMware Infrastructure, provides a complete, highly available virtualization
solution for most small-to-medium as well as large enterprise environments that
are focused on consolidation and virtualization for remote office/branch offices.
Cross-Mirror failover
Three Versions
Hardware/software requirements
NSS Virtual Appliance
NSSVA supports the following VMware ESX Server platforms:
- VMware ESX Server 4.1
- VMware ESXi 4.1
- VMware ESX Server 4.0 Update 2
- VMware ESXi 4.0 Update 2
- VMware ESX Server 3.5 Update 5
- VMware ESXi 3.5 Update 5
All necessary critical patches for VMware ESX server platforms are available on
the VMware patch download site: http://support.vmware.com/selfsupport/download/.

FalconStor Management Console
A virtual or physical machine running any version of Microsoft Windows that
supports the Java 2 Runtime Environment (JRE).

VMware ESX Server hardware
FalconStor Virtual Appliances for VMware are supported only on VMware
compatibility-certified server hardware. To ensure system compatibility and
stability, refer to the online compatibility guide:
http://www.vmware.com/resources/compatibility/search.php?action=base&deviceCategory=server.

64-bit processor
For maximum virtualization and iSCSI SAN service, NSSVA uses a 64-bit system
architecture. To verify 64-bit virtual machine support, refer to the Processor Check
for 64-Bit Compatibility utility from the VMware Download Center.
General failover (cross-mirror)
- Two ESX servers hosting two instances of NSSVA. The failover pair should be
  installed with identical Linux operating system versions.
- Both servers must reside on the same network segment, so that the secondary
  server is reachable by the clients of the primary server during failover. The
  network segment must have another device able to generate a network ping
  (e.g., a router, switch, or server). (a)
- Each NIC in the primary server needs to be on its own subnet.
- Reserve an IP address for each network adapter in the primary failover server.
  The IP address must be on the same subnet as the secondary server. (b)
- Use static IP addresses for your failover configuration. It is also recommended
  that your server IP addresses be defined in a DNS server so they can be
  resolved.
- Enable iSCSI target mode on the primary and secondary servers before creating
  the failover configuration.
- The first time you set up a failover configuration, the secondary server must not
  have any Replica resources.
- At least one device must be reserved for a virtual device on each primary
  server, with enough space to hold the configuration repository that will be
  created. The main repository should be established on a RAID 5 or RAID 1 file
  system for maximum reliability.
- Each server must have identical internal storage.
- Each server must have at least two network ports (one for the crossover cable)
  and have network ports on the same subnet.
- Only one dedicated cross-mirror IP address is allowed for the mirror. The IP
  address must be 192.168.n.n.
- Only virtual devices can be mirrored. Service-enabled devices and system disks
  cannot be mirrored.
- The number of disks on each virtual machine must match, and disks must have
  matching ACSLs (adapter, channel, SCSI ID, LUN).
- When failover occurs, both servers may have partial storage. To prevent a
  possible dual-mount situation, it is recommended that you configure power
  control for a VMware ESX server.
- Prior to configuration, virtual resources can exist on the primary server as long
  as the identical ACSL is unassigned or unowned by the secondary server. After
  configuration, pre-existing virtual resources will not have a mirror. You will need
  to use the Verify & Repair option to create the mirror.
Note: One service IP is provided for iSCSI traffic when using cross-mirror.
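The 192.168.n.n restriction on the dedicated cross-mirror address can be sanity-checked before you configure failover. The following is an illustrative sketch only, not a product tool; the function name is made up for the example:

```shell
# Illustrative only: verify that a candidate dedicated cross-mirror address
# falls in the 192.168.n.n range that the failover configuration requires.
check_crossmirror_ip() {
  case "$1" in
    192.168.*.*) echo "ok" ;;
    *)           echo "rejected" ;;
  esac
}

check_crossmirror_ip 192.168.1.10   # prints: ok
check_crossmirror_ip 10.0.0.5       # prints: rejected
```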
General failover requirements for iSCSI clients (Windows iSCSI clients)
The Microsoft iSCSI initiator has a default retry period of 60 seconds. You must
change it to 300 seconds in order to sustain the disk for five minutes during
failover so that applications will not be disrupted by temporary network problems.
This setting is changed through the registry:
1. Go to Start --> Run and type regedit.
2. Find the following registry key:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4D36E97B-
xxxxxxxxx}\<iscsi adapter interface>\Parameters
where <iscsi adapter interface> corresponds to the adapter instance, such as
0000, 0001, and so on.
3. Right-click Parameters and select Export to create a backup of the parameter
values.
4. Double-click MaxRequestHoldTime.
5. Select Decimal and change the Value data to 300.
6. Click OK.
7. Reboot Windows for the change to take effect.
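For unattended rollout to many Windows iSCSI clients, the same change can be captured in a .reg file. This is an illustrative sketch, not from the product documentation: the full class GUID ({4D36E97B-E325-11CE-BFC1-08002BE10318}) and the 0000 adapter instance below are assumptions that you should verify against your own registry before importing (300 decimal = 0x12C):

```
Windows Registry Editor Version 5.00

; Assumed class GUID and adapter instance -- confirm in regedit first.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4D36E97B-E325-11CE-BFC1-08002BE10318}\0000\Parameters]
"MaxRequestHoldTime"=dword:0000012c
```

Import the file with regedit /s <file>.reg and reboot, as in the manual steps above.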
BIOS VT Support
The VMware ESX server must be able to support hardware virtualization for the
64-bit virtual machine. To verify BIOS VT support, refer to VMware Knowledge
Base article 1011712, "Determining if Intel Virtualization Technology or AMD
Virtualization is enabled in the BIOS without rebooting".
2000 MHz CPU resource reservation
NSSVA reserves 2000 MHz of CPU resources for storage virtualization, iSCSI
service, Snapshot, and replication processes, to ensure sufficient resources for
the VMware ESX server and multiple virtual machines.

2 GB memory resource reservation
NSSVA reserves 2 GB of memory resources for storage virtualization, iSCSI
service, Snapshot, and replication processes, to ensure sufficient resources for
the VMware ESX server and multiple virtual machines. The ESX server memory
requirements are:
- 500 MB for the VMware ESX server system
- 2 GB for the FalconStor NSS Virtual Appliance
- Additional memory for the other virtual machines running on the same ESX
  server
Storage
NSSVA supports up to four (4) Storage Capacity licenses for ESX 3.5/4.0/4.1
using SAS drives (direct-attached storage configuration) connected to a RAID
controller with battery-backed cache, and up to ten (10) Storage Capacity licenses
for ESX 4.0/4.1 for use with external shared storage only.
Additional storage can be added in 1 TB increments.
Storage is allocated from a standard VMware virtual disk on local storage or from
a raw device disk on SAN storage.
NSSVA also supports Storage Pools, into which you can add virtual disks of
different sizes. The system allocates resources for storage provisioning or
snapshots on demand.
Network Adapter
NSSVA is pre-configured with two virtual network adapters that manage your
multipath iSCSI connection or dedicated cross-mirror link. For the best network
performance, the ESX server needs two physical network adapters for one-to-one
mapping to the independent virtual switches and the virtual network adapters of
NSSVA. In addition, the ESX server may need extra physical network adapters for
Virtual Infrastructure management, VMware VMotion, or physical network
redundancy:
- Two physical network adapters for one-to-one virtual network mapping to
  FalconStor NSSVA.
- Optional physical network adapters linked to one virtual switch for physical
  network adapter redundancy.
- Optional physical network adapters for VirtualCenter management through an
  independent network.
- Optional physical network adapters for the VMotion process through an
  independent network.
a. This allows the secondary server to detect the network in the event of a failure.
b. The IP address is used by the secondary server to monitor the health of the primary server. The health
monitoring IP address remains with the server in the event of failure so that the server's health can be
continually monitored. (The NSSVA clients and the console cannot use the health monitoring IP address to
connect to a NSSVA.)
Local Disks
Format using VMFS (or use an existing VMFS volume). Create a .vmdk file to
provision to NSSVA. Virtualize the disk and create an NSSVA SAN resource (do
not use SED).
Once the ESX servers detect the NSSVA disk over iSCSI, you can use it as a
Raw Device Mapping (RDM) disk (virtual or physical) or as a VMFS volume
(recommended).

SAN Disks*
Format the SAN disk using VMFS (or use an existing VMFS volume on the SAN).
Create a .vmdk file to provision to NSSVA. Virtualize the disk and create an
NSSVA SAN resource (do not use SED).
Once the ESX servers detect the NSSVA disk over iSCSI, you can use it as a
Raw Device Mapping (RDM) disk (virtual or physical) or as a VMFS volume
(recommended).

Raw SAN Disks
Option 1: Create a Raw Device Mapping (RDM) in virtual mode to provision to
NSSVA. Virtualize the disk and create an NSS SAN resource (do not use SED).
Once the ESX servers detect the NSSVA disk over iSCSI, you can use it as a
Raw Device Mapping (RDM) disk (virtual or physical) or as a VMFS volume
(recommended).
Option 2: Create a Raw Device Mapping (RDM) in virtual mode to provision to
NSSVA. Reserve the disk for Service-enabled use and create an NSSVA SED
resource. Do not preserve the device Inquiry String, so that the disk later displays
as a FalconStor disk instead of a VMware virtual disk.
Once the ESX servers detect the NSS disk, you must use it as a raw disk RDM
(virtual or physical). Do not use VMFS format in this configuration.
NSSVA Configuration
ESX server deployment planning
The FalconStor NSS Virtual Appliance is a pre-configured, ready-to-run solution
installed on a dedicated ESX server in order to function as a storage server. NSSVA
can also be installed on an ESX server that runs other virtual machines. To deliver
high-availability storage service, NSSVA can be installed on a second VMware ESX
server that functions as a standby storage server with redundant cross-mirror
storage.
Dedicated NSSVA
When NSSVA is installed on a dedicated ESX server, no other virtual machine runs
on the system.
Dedicated High Availability NSSVA
When NSSVA is installed on two dedicated ESX servers, they can be configured for
Active/Passive high availability.
Shared NSSVA
When NSSVA is installed on an ESX server on which other virtual machines are
installed or will be installed, NSSVA shares CPU and memory resources with the
other virtual machines while still offering storage services for the other virtual
machines on the same or other ESX servers.
Shared HA NSSVA
When NSSVA is installed on two ESX servers on which other virtual machines are
installed or will be installed, NSSVA shares CPU and memory resources with the
other virtual machines. The two NSSVAs can be configured for Active/Passive high
availability.
Knowledge requirements
Individuals deploying NSSVA should have administrator level experience with
VMware ESX and will need to know how to perform the following tasks:
Create a new virtual machine from an existing disk
Add new disks to an existing virtual machine as Virtual Disks or Mapped
Raw Disks
Troubleshoot virtual machine networks and adapters
Although not required, it is also helpful to have knowledge about the technologies
listed below:
Linux
iSCSI
TCP/IP
Note: By default, the ESX server does not allow root to use SSH to log into the
server. To enable SSH for root login, refer to VMware KB article #8375637 -
Enabling root SSH login on an ESX host.
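The KB procedure amounts to one setting in the SSH daemon configuration on the ESX service console; as a sketch (the standard sshd_config location is assumed here, not taken from this guide):

```
# /etc/ssh/sshd_config  (assumed location on the ESX service console)
PermitRootLogin yes
```

After saving the change, restart the SSH daemon (for example, service sshd restart) for it to take effect.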
Note: These items are not checked when using OVF import to install the NSSVA.
4. Enter the number of the VMFS volume where you will be installing the NSS
Virtual Appliance system.
The installation script copies the system image source and extracts it to the
specified volume. The NSS Virtual Appliance is then registered on the ESX
system.
Note: For NSSVA Lite: While extracting the NSS virtual appliance system, you
will be asked to enter your login credentials for the target (for example: Please
enter login information for target vi://127.0.0.1).
Installing NSSVA via Virtual Appliance Import from a downloaded zip file
1. On the client machine, unzip the NSSVA.zip file and extract the package to any
folder. For example, create a folder called FalconStor-NSSVA.
2. If not already active, launch the VMware vSphere Client/VMware Infrastructure
Client and connect to the ESX server with root privileges.
3. For the VMware vSphere Client, select File --> Deploy OVF Template.
For the VMware Infrastructure Client, select File --> Virtual Appliance --> Import.
4. For the Import Location step of the Import Virtual Appliance wizard, click the
Browse button for the Import from file option. Then navigate to the folder to which
you extracted the package (i.e., the FalconStor-NSSVA folder), expand it, and
select the file FalconStor-NSSVA.ovf in the FalconStor-VA folder.
The Virtual Appliance Details page displays the virtual appliance information for
FalconStor NSSVA.
5. Click Next to continue the import.
The Name and Location page displays the default appliance name: FalconStor-
NSSVA. You can change the name of the virtual machine; this change is not
applied to the actual appliance name.
6. On the Datastore list, select a datastore containing at least 26 GB of space for
the NSSVA system import.
7. For Network Mapping, select the virtual machine network of the ESX server to
which the NSSVA virtual Ethernet adapter will link.
8. On the Ready to Complete screen, review all settings and click Finish to start the
virtual appliance import task.
The virtual appliance import status window displays the completion percentage.
This task usually takes five to ten minutes to complete.
9. Click Close when the completion percentage reaches 100% and the import
window displays Completed Successfully.
10. Only for import operations: Once NSSVA has been installed, it is recommended
that you convert the system and patch disks to eagerzeroedthick type disks.
Use SSH to log into the ESX server. If you cannot use SSH, refer to Installing
NSSVA via the installation script on page 15.
Convert the system and patch disks to eagerzeroedthick using the following
commands:
To convert the system disk:
# vmkfstools -k /vmfs/volumes/<datastore>/<vm_name>/<vm_name>.vmdk
To convert the patch disk:
# vmkfstools -k /vmfs/volumes/<datastore>/<vm_name>/<vm_name>_1.vmdk
11. Only for import operations: Disable logging on the NSS virtual appliance as
described below:
Navigate to the directory in which your NSSVA VMFS volumes reside on the
ESX server.
Change the following settings in the .vmx file of the FalconStor NSSVA VM:
logging = "FALSE"
#log.fileName = "/dev/null"
statslog.fileName = "/dev/null"
stats.enabled = "FALSE"
monitor.nmistats = "FALSE"
monitor.callstack = "FALSE"
Note: When using OVF import for installing the NSSVA Lite version, you will
need to manually add a 100 GB data disk in order to launch the Basic
environment configuration.
Note: The Snapshot Director is not available in the NSSVA Lite or Trial version.
Notes:
If you are running Windows Server 2003 SP2 on the virtual machine with the
firewall enabled, open TCP ports 11576, 11582, and 11762 for the SAN Client.
The SAN Client is not available in the NSSVA Lite or Trial version.
Note: The Snapshot Agent is not available for the NSSVA Lite or Trial version.
The first time you log into the NSSVA console, the FalconStor Virtual Appliance
Setup utility starts automatically and displays the basic environment configuration.
If you want to configure the system after the initial setup, you can run the utility by
executing the vaconfig command on the NSSVA virtual appliance console. When
you run the vaconfig utility, the system checks whether VMware Tools should be
updated.
1. Launch the VMware vSphere Client/VMware Infrastructure Client and connect to
the ESX server using an account with root privileges.
2. Right-click the installed FalconStor-NSSVA and click Open Console. If the
NSSVA has not been powered on, click VM on the top menu and then click Power
On.
3. On the NSSVA console, log in as the root user. The default password is IPStor101
(case sensitive).
The FalconStor Virtual Appliance Setup utility launches.
4. Move the cursor to <Configure> and scroll to select the item you want to change.
5. Highlight Host Name and press Enter to configure the host name of the virtual
appliance.
6. Highlight Time Zone and press Enter to configure the time zone. Select whether
you want to set the system clock to UTC (the default is No). Scroll up and down
to find the correct time zone for your location.
7. Highlight Root Password and press Enter to set a new root password for the
virtual appliance. You will need to enter the new password again in the
confirmation window.
8. Highlight Network Configuration and press Enter to modify your network
configuration. Select eth0 or eth1 to change the IP address setting. Answer
No to using DHCP and then set the IP address of the selected virtual network
adapter. If you want to set the IP subnet mask, press the down arrow to move
the cursor to the netmask setting.
The default IP addresses are listed below:
eth0: 169.254.254.1/255.255.255.0
eth1: 169.254.254.2/255.255.255.0
9. Repeat the network configuration to set the IP address of the other virtual
network adapter.
10. Highlight Default Gateway and press Enter to set the default gateway of the
virtual appliance.
11. Highlight Name Server and press Enter to modify the name server settings.
You can add up to four DNS server records to the virtual appliance settings.
12. Highlight NTP Server configuration and press Enter to add up to four NTP
server records to the virtual appliance settings.
13. After making all configuration changes, tab over to Finish and press Enter.
The utility lists the configuration changes you made.
14. Select Yes to accept and apply the settings on the virtual appliance.
The Update Complete screen displays, prompting you to continue checking and
upgrading VMware Tools.
Note: Replication and High Availability features are not available in the NSSVA
Lite or Trial version of NSSVA.
1. Unzip the FalconStor NSS Virtual Appliance package, and then run the setup
program.
3. Read the License Agreement and click Yes if you agree to the terms.
4. Enter the User Name and Company Name on the Customer Information screen.
5. On the Choose Destination Location screen, change the installation folder or
click Next to accept the default destination: "C:\Program Files\FalconStor\IPStor".
6. On the Select Program Folder screen, click Next to accept the default program
folder: FalconStor\IPStor.
7. Review the settings on the Start Copying Files screen and click Next to begin
installing the program files.
3. Enter the IP address of NSSVA eth0. Use the default administrator account
"root" and enter the default administrator password "IPStor101".
The connected NSS Virtual Appliance is listed in the FalconStor Management
Console. The default host name is "FalconStor-NSSVA".
2. Click Add.
Register keycodes
If your computer has Internet access, the console registers a keycode
automatically after you enter it; otherwise, the registration fails.
You have a 60-day grace period to use the product without a registered keycode
(or a 30-day grace period for a trial). If this machine cannot connect to the Internet,
you can perform offline registration.
To register a keycode:
3. On the Select the method to register this license page, indicate if you want to
perform Online registration via the Internet or Offline registration.
4. For offline registration, enter a file name to export the license information to local
disk, then e-mail the file from a computer with Internet access to:
activate.keycode@falconstor.com
It is not necessary to write anything in the subject or body of the e-mail.
If your e-mail is working correctly, you should receive a reply within a few
minutes.
5. When you receive the reply, save the attached signature file to the same local
disk.
6. Enter the path to the file saved in step 5 and click Send to import the registration
signature file.
7. A message displays stating that the license was registered successfully.
Index
C
Console
  Register keycodes 25
console 20

D
Datastore 16

F
Failover
  Requirements
    Clients 7
FalconStor Management Console 2
FalconStor Virtual Appliance Setup utility 20
firewall 18
fsadmin 19
fsuser 19

H
high availability (HA) 1

I
Installation
  Snapshot Agent 18
iSCSI Client
  Failover 7

K
Keycodes
  Register 25
Knowledge requirements 13

M
Microsoft iSCSI initiator 7

N
Network Mapping 16
NSS Virtual Appliance 2

R
root user 19

S
SAN Disk Manager 2
Snapshot Agents 2
Snapshot Director 17

T
TCP ports 18
Thin Provisioning 1
Thin Replication 1

V
virtual iSCSI SAN 1
VMware Infrastructure Client 20
VMware Site Recovery Manager (SRM) 3