
Legato NetWorker

technical overview

RV NRW Seminar Storage


December 2001

Reinhold Danner
Compaq Technical
Customer Support Center
Agenda

§ Overview
§ Features
§ High availability
§ SAN and Library Consolidation
§ NAS
§ SAN and Serverless backups

2
What is Legato NetWorker

Legato NetWorker is a powerful and highly scalable family of
distributed storage management software components for
heterogeneous environments of all sizes. NetWorker provides
complete centralized protection and life cycle management of
corporate data assets.

4
NetWorker product development
1990 2001

Unix File System backup (Networker)

Windows NT File System backup

UNIX and NT RDBMS Backup (Networker Modules)

UNIX and NT ERP & Messaging backup (NW Modules)

Storage Frameworks (GEMS)

Media Manager (SmartMedia)

LAN Free backup

Server-Less backup
5
NetWorker Server Platforms

§ Solaris
§ HP-UX
§ Tru64 UNIX
§ AIX
§ IRIX
§ Linux (Redhat, SuSE, Caldera, TurboLinux)
§ Windows NT and 2000
§ NetWare
§ See Compatibility Guide
6
NetWorker supports
§ Application and Database support
– Oracle
– Informix
– Sybase
– DB2
– SAP R/3 on Oracle
– Microsoft SQL Server
– MS Exchange
– Lotus Notes
§ Framework integration
– Tivoli
– HP OpenView
– Unicenter TNG
– Generic SNMP module
See compatibility list for more
7
3-tier any-to-any storage architecture
for Client/Server Environments

(diagram: a NetWorker Server on Tru64 UNIX holds the metadata
indices for one Data Zone; Storage Nodes, one on HP-UX, hold
devices; clients run on Linux, Solaris, NT and AIX)
8
GEMS
Global Enterprise Management of Storage

§ Control Zone(s)
– Command and Control Functions
§ Data Zones
– Data Protection
– Local Administration
– Performance Optimisation

(diagram: Control Zones manage multiple Data Zones across the
Corporate Intranet)
9
Core NetWorker Technology
Metadata (Backup Database)
Relational, Fast
Granular (Separate File and Media DB)
Point-in-Time, True Image Recovery
Enables Rapid NetWorker Server Recovery
Easy to Replicate/Fail Over

Tape Format
Parallel (Backup and Recovery)
Platform Independent
Self Describing
Resilient
Enables Rapid File Location

§ Mature, proven technology that has spanned a decade of
computing
10
NetWorker Open Tape Format

(diagram: a volume header - label=vol.001, blocksize=32KB,
media_type=4mm - followed by data blocks with block headers,
file-number markers, record numbers, EOF marks and an
end-of-data mark)

§ Legato's standard tape format which enables tapes to be written to
and read from ANY platform
§ Parallel backup with interleaving capabilities for high performance,
and parallel recoveries
§ Platform and Media independent
– Same format for all supported types of media
§ Self-describing
§ Resilient
– Skips soft errors
§ Enables rapid file location
11
Modular scalable Architecture with
Storage Node

• Backup Device location flexibility (local or remote)
• Availability: Reduce network traffic bottlenecks
• Manageability: Single system view for devices, media and index
• Failover: Dynamic allocation of devices across storage nodes
12
Agenda

§ Overview
§ Features
§ High availability
§ SAN and Library Consolidation
§ NAS
§ SAN and Serverless backups

13
Comprehensive Multi-Platform
Data Protection
§ Central administration through GUI and
scriptable commands from server/client
§ Policy based scheduling of enterprise wide
backup operations
§ Flexible grouping of clients
§ Full, incremental, differential and consolidated
backup schedule options
§ Optional multi-tier staging of backup operations
§ Ad-hoc user backup
§ Directed recovery
§ Parallel restore capability
14
Comprehensive Multi-Platform
Data Protection (cont)
§ Pre- and Post processing options accommodate
special requirements (offline DB backup)
§ Automatic and on-demand cloning for added
protection (Saveset duplication)
§ Open file management options (NT)
§ Firewall support
§ Archiving
§ Hierarchical Storage Management (HSM)
§ Can use a dedicated network for backup and
recovery
15
Browse and Retention Policies

§ Browse Policy for purging unwanted file version information
– backup, archive and migration data visible via User
Administrator interface
– point in time recovery
– one per saveset

§ Retention Policy for grooming and recycling media
– backup, archive and migration data visible via System
Administration interface
– Browse Policy depends on/is interrelated to Retention Policy
– tracks media usage
16
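The two policies above boil down to date arithmetic per saveset: the browse window controls index visibility, the retention window controls when media may be recycled, and browse must not outlast retention. A minimal Python sketch (function names and day counts are illustrative assumptions, not NetWorker's actual API):

```python
from datetime import date, timedelta

# Hypothetical sketch of browse vs. retention policy evaluation.
def is_browsable(save_date, today, browse_days):
    """Is the saveset's file information still visible for recovery?"""
    return today <= save_date + timedelta(days=browse_days)

def is_recyclable(save_date, today, retention_days):
    """May media holding only such expired savesets be relabeled?"""
    return today > save_date + timedelta(days=retention_days)

def valid_policy(browse_days, retention_days):
    # Browse period must not exceed retention: you cannot browse
    # entries whose media may already have been recycled.
    return browse_days <= retention_days

saved = date(2001, 11, 1)
today = date(2001, 12, 15)
print(is_browsable(saved, today, 30))    # browse policy: ~1 month
print(is_recyclable(saved, today, 90))   # retention policy: ~3 months
```

Here the saveset is past its browse window (no longer visible in the user interface) but still inside its retention window, so its media cannot yet be recycled.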
Indexing (catalog) Technology

§ Excellent performance and scaling
– Index performance scales linearly with # of savesets
– Fast browsing
– Huge index sizes supported
§ Fault resilient
– No compression
– Index files are write-once-read-many

17
NetWorker Index Database Model

(diagram: the /nsr directory contains the media database - mm,
with mmvolume6 tracking savesets and volumes - and a per-client
file index - index/client1 ... index/clientN, each with a db6
directory describing the saved files)
18
NetWorker Index features
§ Saveset based policies
§ Directed recovery within same platform
§ Cross-platform browsing
§ 64 bit format data streams
– No need to split larger savesets into 2GB chunks
§ Index files are check-summed
§ <saveset>.rec files are write once, read many
§ No compression needed
§ Indices are saved automatically during scheduled
server initiated backups
§ Index backups are non-blocking
19
Client File Index: 6.x

(diagram of one client's index directory)
– S1.rec, S2.rec, ...: one file per save time; all records from a
given save time (save set) are in that file
– V6hdr: contains an entry for each save set file (each save time)
– S1.K0: keys, by name, that point to the records in S1.rec
– S1.K1: keys, by file id, that point to the records in S1.rec

§ The *.rec files are the ones that are backed up. Anything else in this
directory can be rebuilt from the *.rec files by means of nsrck(8).
20
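The record/key split above can be sketched in a few lines: full records live in one append-only file, and the key files hold only pointers into it, which is why the key files are rebuildable. This is an illustrative model, not NetWorker's on-disk format:

```python
# Hypothetical sketch of the .rec / .K* split.
records = []          # stands in for S1.rec (write-once, read-many)
key_by_name = {}      # stands in for S1.K0 (keyed by name)
key_by_fileid = {}    # stands in for S1.K1 (keyed by file id)

def add_record(name, file_id, meta):
    offset = len(records)              # position within the .rec file
    records.append((name, file_id, meta))
    key_by_name[name] = offset         # both key files point at it
    key_by_fileid[file_id] = offset

def lookup(name):
    return records[key_by_name[name]]

add_record("/etc/hosts", 17, "mode=0644")
print(lookup("/etc/hosts"))

# If the key files are lost they can be regenerated from the records
# alone; that is why only the *.rec files need backing up (nsrck(8)
# rebuilds the rest).
rebuilt = {n: i for i, (n, f, m) in enumerate(records)}
```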
NetWorker Parallelism and Interleaving

§ Parallelism
– Multiple data streams from multiple clients to a
NW Server or Storage Node
§ Multiplexing
– Multiple data streams written to one or multiple tape drives
– Provides maximum throughput

(diagram: streams pass through a memory buffer or shared
memory* on the NW Server; multiple savesets end up intermixed
at the block level on self-describing tape)
21
* (Power Edition only)
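Block-level interleaving can be sketched as a round-robin merge of client streams, with each block tagged by its saveset so a single saveset can be pulled back out on recovery. A toy model, not the actual tape layout:

```python
from itertools import zip_longest

# Sketch of multiplexing: several save streams interleaved onto one
# tape, every block tagged with its saveset id (self-describing).
def multiplex(streams):
    tape = []
    for blocks in zip_longest(*streams.values()):
        for ssid, block in zip(streams.keys(), blocks):
            if block is not None:
                tape.append((ssid, block))
    return tape

def demultiplex(tape, ssid):
    # Recovery of one saveset: keep only its blocks, in order.
    return [block for sid, block in tape if sid == ssid]

streams = {"client1": ["a1", "a2", "a3"], "client2": ["b1", "b2"]}
tape = multiplex(streams)
print(tape)
print(demultiplex(tape, "client2"))  # ['b1', 'b2']
```

Because every block carries its saveset id, interleaving trades a little recovery-time filtering for keeping all drives streaming during backup.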
Immediate technology

(diagram: control and data streams between NetWorker clients
and servers; a remote client sends its data to the server over
TCP/IP and UDP, while data on the NetWorker server itself goes
directly to the SCSI tape device)
22
Full and Differential backup
Deltas grow over time, relative to a Full

(diagram: day 1 Full Backup; days 2-7 Differential backups, each
backing up all files that have changed since the last full; then a
new Full Backup)

§ Full backup for every system every night is too slow
§ So 'DIFF' or 'INCR' must be used.
§ 'Level' backups grow over time so there is a practical limit to
how many can be performed
23
Incremental backups

§ Incrementals use less media space because they only
backup changes made on that day
§ Incrementals consume less network bandwidth as there
is less data to transmit
§ Recovery from incrementals is slower because all
incremental backups must be used in the recovery.
24
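The recovery cost described above comes from the restore chain: the full plus every subsequent incremental must be replayed in order. A minimal sketch (file deletions are ignored for brevity; the dict-of-files model is illustrative):

```python
# Why restoring from incrementals is slower: the whole chain is needed.
def backup_incremental(current, previous):
    """Only files changed since the previous state."""
    return {f: d for f, d in current.items() if previous.get(f) != d}

def restore(full, incrementals):
    state = dict(full)
    for inc in incrementals:      # every tape in the chain is required
        state.update(inc)
    return state

day1 = {"a": 1, "b": 1}
day2 = {"a": 1, "b": 2, "c": 1}
day3 = {"a": 2, "b": 2, "c": 1}

inc2 = backup_incremental(day2, day1)   # {'b': 2, 'c': 1}
inc3 = backup_incremental(day3, day2)   # {'a': 2}
print(restore(day1, [inc2, inc3]))      # reproduces day3
```

Each incremental is small (less media, less bandwidth), but losing any link in the chain, or mounting its volume, delays the whole restore.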
Saveset Consolidation

(diagram: Day 1 Full; the Day 2 and Day 3 Level 1 backups are
each merged with the previous full to produce a Day 2 Full and a
Day 3 Full)

§ Merges a Level 1 backup with the full backup saveset
to create a new full backup
§ Less traffic on network during backups
§ Faster recovery times
§ Consolidation impacts NetWorker Server only
25
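Consolidation as described above is a server-side merge: the changed files in the level 1 replace their old versions in the previous full, producing a new full without re-reading the client. A toy sketch of the merge:

```python
# Sketch of saveset consolidation: level 1 merged into the prior full
# yields a new full; the client is never read again.
def consolidate(full, level1):
    merged = dict(full)
    merged.update(level1)    # changed files replace their old versions
    return merged

full_day1 = {"a": "v1", "b": "v1"}
level1_day2 = {"b": "v2", "c": "v1"}     # changes since the full

full_day2 = consolidate(full_day1, level1_day2)
print(full_day2)   # restore now needs only this one saveset
```

This is why consolidation needs at least two drives (read the old full, write the new one) and loads only the NetWorker Server, while restores regain single-saveset speed.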
Comparison of backup levels

Backup Level | Pros | Cons
Full | Faster restore | Slower backup; higher load on server; higher load on client/network; takes up more volume space
Incremental | Faster backup; lower load on client/network; lower load on server; takes up least volume space | Slower restore; data can spread across multiple volumes
Consolidate | Faster backup; faster restore; lower load on client/network; smaller backup window | Longest high load on server; needs at least two drives; takes up the most volume space
26
Archive Module

(diagram: a group of data - June_Numbers.xls, Fin_Data.doc,
Fin_Graphs.html, Proposal.txt - is archived as "June_Financials"
with an annotation of up to 1K, or 1024 characters)

§ Data 'moved' from online storage to offline media
§ Automatic 'grooming' releases disk space
§ Separate media pool
§ Server or User initiated on demand
§ Recall has annotated text search capabilities
§ Long term retention
28
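The annotation-based recall above can be sketched as a substring search over per-archive annotations; the archive names, fields and search semantics here are illustrative assumptions, not the Archive Module's actual interface:

```python
# Sketch of annotated archive recall: each archived group of data
# carries a free-text annotation (up to 1 KB) that can be searched.
archives = [
    {"name": "June_Financials",
     "annotation": "June Financials: numbers, graphs, proposal",
     "files": ["June_Numbers.xls", "Fin_Data.doc",
               "Fin_Graphs.html", "Proposal.txt"]},
    {"name": "Q2_Contracts",
     "annotation": "signed customer contracts Q2",
     "files": ["acme.pdf"]},
]

def recall_candidates(term):
    """Archives whose annotation mentions the search term."""
    term = term.lower()
    return [a["name"] for a in archives
            if term in a["annotation"].lower()]

print(recall_candidates("financials"))   # ['June_Financials']
```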
NetWorker HSM

(diagram: disk utilization over time cycling between a Low Water
Mark and a High Water Mark, between empty and 100% full)

§ Migration based on
– Last access time
– Minimum file size
– File owner
– File group
– File type
§ HSM on UNIX products
– HSM on Tru64 UNIX based on symbolic links
– HSM on Solaris based on DMAPI/XDSM standard
§ HSM on NT
§ Transparent to user / application
§ Recall happens automatically by accessing migrated files
29
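The watermark mechanism above can be sketched as: once utilisation passes the high water mark, migrate files (oldest access first, honouring a minimum size) until it drops below the low water mark. The selection order and parameters are illustrative assumptions:

```python
# Sketch of watermark-driven HSM migration candidate selection.
def pick_migration_victims(files, used, capacity, high, low, min_size=0):
    """files: list of (name, size, last_access); returns names to migrate."""
    if used / capacity <= high:
        return []                 # below the high water mark: nothing to do
    victims = []
    for name, size, _ in sorted(files, key=lambda f: f[2]):  # oldest first
        if size < min_size:
            continue              # too small to be worth migrating
        victims.append(name)
        used -= size              # migrated file leaves only a stub behind
        if used / capacity <= low:
            break                 # reached the low water mark
    return victims

files = [("old.dat", 30, 1), ("new.dat", 30, 9), ("mid.dat", 30, 5)]
print(pick_migration_victims(files, used=90, capacity=100,
                             high=0.8, low=0.5))
```

Recall stays transparent because the stub left on disk triggers an automatic fetch when the file is accessed.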
Saveset Staging

(diagram: clients back up over the network to a high speed disk
storage pool on the NetWorker Server; data is later staged from
disk to removable media)

§ Initial backup to intermediate high speed devices
§ When the stage area reaches a preset threshold, NW
can automatically stage data to removable media
§ Interim recovery operations occur at disk speed
30
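The staging behaviour above can be sketched as a disk pool that spills its oldest savesets to tape once a utilisation threshold is crossed; the class shape and eviction order are illustrative assumptions:

```python
# Sketch of saveset staging: back up to fast disk first, then move
# the oldest savesets to tape once the staging area fills up.
class StagingPool:
    def __init__(self, capacity, threshold):
        self.capacity, self.threshold = capacity, threshold
        self.disk, self.tape = [], []      # savesets, oldest first

    def backup(self, name, size):
        self.disk.append((name, size))
        while self._used() > self.threshold * self.capacity:
            self.tape.append(self.disk.pop(0))   # stage oldest to tape

    def _used(self):
        return sum(size for _, size in self.disk)

pool = StagingPool(capacity=100, threshold=0.7)
for i, size in enumerate([30, 30, 30]):
    pool.backup(f"ss{i}", size)
print([n for n, _ in pool.disk])   # recent savesets: disk-speed recovery
print([n for n, _ in pool.tape])   # older savesets staged off
```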
Saveset and Volume Cloning

(diagram: clients back up over the network to a multi-device tape
library on the NetWorker Server; savesets are cloned to a second
volume)

§ Duplicate saveset for offsite vaulting
§ Backup operation reads filesystem once
§ Recover can use original or clone
§ Automatic scheduled or user initiated on demand
31
Agenda

§ Overview
§ Features
§ High availability
§ SAN and Library Consolidation
§ NAS
§ SAN and Serverless backups

32
NetWorker Server - NT MS ClusterServer

(diagram: clients on the LAN; cluster nodes ping and pong form
the virtual server peng; \nsr resides on shared storage; devices
rd=ping:\\.\Tape0 and rd=pong:\\.\Tape0)

• Clients save and recover files from the virtual IP name
• Shared copper SCSI tape drives are not supported
• Devices attached to the system where the NetWorker Server
is running are considered local
• NetWorker supports NT V4.0 MSCS & W2k Clusters
33
NetWorker Server - Tru64 UNIX TruCluster 5

(diagram: clients on the LAN; cluster nodes ping and pong behind
ClusterAlias peng, joined by the cluster interconnect and attached
via SCSI/FC to dual HSG80 controllers; /nsr is shared; devices
rd=ping:/dev/ntape/tape0_d0 and /dev/ntape/tape1c, tape2c,
tape3c)

• supported on TruCluster V5.0a, V5.1 and V5.1a
• assumes name of default CLUA
• runs as a CAA application (resource)
• only one Index (default ClusterAlias) stores file entries of all filesystems
• shared /nsr -> /cluster/members/{memb}/nsr -> /<ClusterAlias>/nsr
34
Agenda

§ Overview
§ Features
§ High availability
§ SAN and Library Consolidation
§ NAS
§ SAN and Serverless backups

35
Benefits of Library / Drive Sharing

§ Tape robots are very expensive
– Sharing this cost between multiple backup servers/storage
nodes increases ROI
– Media management is centralised; administration costs
are reduced and ROI is improved
– Most products perform library sharing but not drive sharing
§ Library sharing allows you to allocate drives to systems
§ Drive sharing allows you to dynamically change this
allocation. Sometimes known as Dynamic Drive Sharing
(DDS)
– Library sharing is important in traditional backup
environments
– Drive sharing is required in most SAN solutions
36
Library Sharing with ACSLS

(diagram: an ACSLS Server controls the library robotics; a
Mainframe and several NetWorker Servers/Storage Nodes have
SCSI-attached drives and request mounts from the ACSLS
server over TCP/IP)
37
NetWorker Library Sharing (static)

(diagram: one jukebox shared within a NetWorker Data Zone)

§ Jukebox is shared among one NetWorker Server
and/or multiple Storage Nodes
§ Drives are used by one Server/Storage Node only
38
NetWorker dynamic device sharing (DDS)
within one Data Zone

(diagram: NW Server 1 (A) and Storage Nodes B and C on the
LAN share a library with robotic control and two drives)

§ Drive 1 (/dev1): rd=snB:/dev1, rd=snC:/dev1
§ Drive 2 (/dev2): rd=snB:/dev2, rd=snC:/dev2

Terminology:
– Drive: a physical tape drive
– Device: an access path to a physical drive; all devices sharing
a drive have the same HW id
– Drive number: the physical drive's position in the jukebox
– Device number: the device resource's position in the jukebox
– Hardware ID: unique IDs for physical drives
39
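The bookkeeping behind DDS can be sketched as a map from device paths to shared hardware IDs, with at most one path holding a given physical drive at a time; the reservation logic is an illustrative model, not NetWorker's scheduler:

```python
# Sketch of DDS: several device paths (one per storage node) map onto
# the same physical drive via a shared hardware ID.
devices = {
    "rd=snB:/dev1": "HWID-1", "rd=snC:/dev1": "HWID-1",
    "rd=snB:/dev2": "HWID-2", "rd=snC:/dev2": "HWID-2",
}
in_use = {}   # hardware ID -> device path currently holding the drive

def reserve(path):
    hwid = devices[path]
    if in_use.get(hwid) not in (None, path):
        return False            # drive busy via another node's path
    in_use[hwid] = path
    return True

print(reserve("rd=snB:/dev1"))   # True: drive 1 allocated to node B
print(reserve("rd=snC:/dev1"))   # False: same physical drive
print(reserve("rd=snC:/dev2"))   # True: drive 2 is still free
```

The hardware ID is what turns static library sharing into dynamic sharing: allocation follows demand instead of a fixed drive-to-node assignment.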
GEMS SmartMedia (dynamic)

(diagram: a SmartMedia Server controls the library via an LCP
path and gives NetWorker Servers, Storage Nodes and
application nodes access to the drives via DCPs)

§ GEMS SmartMedia
– Central Jukebox Data Management
– Library Consolidation over Data Zones
– Dynamic drive sharing (DDS)

i.e. for each tape drive one DCP is configured for each system:
4 tape drives => configure 4x DCP for the application node
40
Open management framework

41
AlphaStor - key functional areas

§ Device & Library Mgmt.: AlphaStor/Robot Manager
§ Media Life Cycle Management: AlphaStor/Media Manager
42
True Device Sharing

(diagram: mixed DLT7000 and SuperDLT drives shared on a SAN)

• AlphaStor creates logical separation.
• NetWorker sees devices as locally attached.
• Mixed device types possible.
43
Device and Library Functionality
§ Device sharing across backup servers (data zones)
§ Dynamic device allocation (automated drive selection
– maximizes usage)
§ Mixed media support (multiple drive types in same
library, form factors, etc.)
§ Support for broad range of SCSI libraries plus
controller-based libraries in SAN environment
(StorageTek ACSLS/LibStation, IBM 3494, ADIC
AML)

44
Media Life Cycle Management
§ Automates tape and other media life cycle
management
– Onsite/offsite tape rotation policy definition
§ Allows media from multiple NetWorker servers
(data zones) to reside in the same library
§ Consolidated media reporting and management
across backup servers (data zones)
§ Available ‘scratch’ tape reporting for individual
or combined applications
§ A point and click web interface for Operators
45
Compaq EBS for Data Centers - Legato NetWorker

(diagram: NT, Windows 2000, Alpha Tru64 UNIX and Sun Solaris
servers attached through Fibre Channel SAN Switch 8/16
(cascaded) switches and an MDR to EMA12000 storage and an
ESL9326DX library)

§ Integrated SAN of Tape and Disk on a high-speed, independent
Fibre Channel (100MB/sec) storage backbone
§ Leverages current DLT Library technology and shares the tape
device to all servers on the fabrics
§ Provides a centralized, automated backup solution for multiple
Compaq ProLiant and/or Compaq Alpha and/or SUN Servers
§ Provides large management savings and tape standardization
for all servers
§ Storage consolidation through Heterogeneous OS & Platform
Support - NT, W2K, Tru64 UNIX / TruCluster, Sun Solaris
46
Agenda

§ Overview
§ Features
§ High availability
§ SAN and Library Consolidation
§ NAS
§ Serverless backups

47
Network Attached Storage Support

§ NDMP local backup

(diagram: the NetWorker backup server sends NDMP control
over the LAN; the backup data flow stays isolated, going directly
from the NAS filer to its locally attached tape library - the NAS
filer is a storage node!)
48
Network Attached Storage Support (cont)

§ Remote Backup

(diagram: the NetWorker backup server sends NDMP control
over the LAN to the NAS filer; backup data flows through the
network to the tape library attached to the backup server)
49
Network Attached Storage Support (cont)

§ NDMP 3-party backup

(diagram: the NetWorker backup server sends NDMP control to
two NAS filers; the backup data flow stays isolated, going from
one filer directly to the tape library attached to the other)

§ Filers with low capacity do not need a library
50
SnapImage backup

§ NetWorker SnapImage Backup
– Block Level Image Backup/Restore
– Available on Solaris 2.6/2.7 and HP-UX 11.0
– Live, low impact block level backups
– Full image and File level recovery
– NDMP backup to locally attached tape device
or across the LAN to a tape device attached to
another NDMP server
51
Agenda

§ Overview
§ Features
§ High availability
§ SAN and Library Consolidation
§ NAS
§ Serverless backups

52
Serverless backup on a SAN

(diagram: workstations and application servers sit on the
messaging network; a storage server, disks and tape sit on the
Storage Area Network)

§ Direct disk-to-tape and disk-to-disk storage
§ No impact to LAN or application servers
§ Enables continuous
– Backup
– Restore
§ Legato solutions
– NetWorker, Celestra and NDMP
53
Reengineering Backup

(diagram: a monolithic backup application (100%) is reengineered
into a control process (~1% of the work) and a data moving
process (~99%), joined by an inter-process protocol)
54
Celestra 3-Tier Architecture

(diagram: three tiers - NetWorker management on the LAN,
Celestra Power data control on the application servers, and a
data mover on the SAN streaming data to the tape library; NDMP
is the standard interface between management and control, and
the Third Party Extended Block Copy interface is the standard
interface to the data mover; software complexity decreases and
workload increases toward the lower tiers)
55
Celestra in a SAN Environment

#1 The application turns on backup: NetWorker with NDMP
over the LAN
#2 Celestra sends a block list to the data mover (Celestra Power
Agents run on the application servers)
#3 The data mover sends the blocks to tape (across the SAN to
the tape library)
56
Celestra Live Backup

(diagram: a Write Interceptor in Celestra Power on the application
server, a cache, and a Celestra device or DataMover streaming
the filesystem to the tape library)

Backup Events
• Backup starts at 11:00pm
• Snapshot taken, cache is opened
• Begin saving blocks to tape
• At 11:05pm a write to the file system is done
• Write Interceptor first saves the original block to cache
• Write Interceptor then writes the new block
• Resume saving blocks to tape
• When we arrive at a changed block, save the original from
cache instead
• Resume saving from filesystem
• Backup of filesystem
- snapshot of 11:00pm -
is complete
57
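The event sequence above is a copy-on-write scheme: the first write to any block during the backup squirrels the original away in the cache, so the saved image stays consistent with the 11:00pm snapshot. A minimal sketch (the class and block model are illustrative, not Celestra's implementation):

```python
# Sketch of the write-interceptor's copy-on-write live backup.
class LiveBackup:
    def __init__(self, blocks):
        self.blocks = blocks    # live filesystem: block number -> data
        self.cache = {}         # original versions of changed blocks

    def write(self, blockno, data):
        # Save the original block before overwriting it (first write only).
        self.cache.setdefault(blockno, self.blocks[blockno])
        self.blocks[blockno] = data

    def backup(self):
        # Changed blocks come from the cache, the rest from the filesystem.
        return {n: self.cache.get(n, d) for n, d in self.blocks.items()}

fs = LiveBackup({0: "A0", 1: "B0", 2: "C0"})
fs.write(1, "B1")            # the 11:05pm write during the backup
print(fs.backup())           # the saved image still shows B0
print(fs.blocks[1])          # the live filesystem already has B1
```

`setdefault` is what makes only the first overwrite matter: later writes to the same block leave the cached 11:00pm original untouched.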
Celestra Power Delivers

§ Offloads LAN, CPU and I/O
§ Moves data directly from disk to tape
§ Server and network performance not impacted
§ Safe, reliable live backup of active file systems
§ High speed data transfer - exceeds 1TB/hour
§ Full, incremental, sparse backup
§ Image, directory, and file restore
§ Proven technology (100+ sites over 5 years)

58
