
NetApp Manageability SDK

DataFabric Manager server APIs Overview Sept 2009

Purpose and Scope


This document provides an overview of the major APIs and use cases that can be achieved using the DataFabric Manager server APIs bundled in the NetApp Manageability SDK. It assumes that readers are familiar with Operations Manager, Performance Advisor, Protection Manager, and Provisioning Manager; the NM SDK bundles the APIs of these products. Screenshots are taken from Operations Manager and the NetApp Management Console.

What's coming up
API Classification
DataFabric Manager Server Infrastructure APIs
Operations Manager APIs
Performance Advisor APIs
Protection Manager APIs
Provisioning Manager APIs


DataFabric Manager Server Infrastructure APIs

What is DataFabric Manager Server?


DataFabric Manager software provides infrastructure services such as discovery, monitoring, role-based access control, auditing, and logging for products in the Storage and Data Suites. The DataFabric Manager server supports the following products:
Operations Manager & Performance Advisor
Protection Manager
Provisioning Manager

DataFabric Manager Server is an independent application that runs outside the NetApp storage system. It can have any of the following applications as its clients:
NetApp Management Console (Performance Advisor, Protection Manager)
Operations Manager web UI
Applications built using the interfaces provided by the DataFabric Manager Server


api-proxy API
This API uses the DataFabric Manager Server as a proxy to the storage systems, MultiStore units, or host agents being managed. Some of the use cases for using the DataFabric Manager Server as a proxy for storage systems are:
Single user ID/password for all NetApp systems
All applications that use the DataFabric Manager Server as a proxy need only the DataFabric Manager Server login credentials and don't need to track the login credentials of the individual storage systems they manage. The user must have the privileges required to run the APIs.

Auto-discovery of NetApp systems


DataFabric Manager Server automatically discovers all the NetApp storage systems in a given network. This reduces the discovery burden on the application.

Faster access and a single portal to data
DataFabric Manager Server collects and stores detailed information about storage systems in its database. Applications can query DataFabric Manager for quick access to this data. For any other purpose, the application can use DataFabric Manager as a proxy to talk to the storage system (see the sketch below).
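For example, here is a minimal Perl sketch that runs an ONTAP API against a managed storage system through the DFM proxy, using the NM SDK's NaServer bindings. The host name, credentials, port, and target system name are illustrative, and the target/request element layout should be verified against the SDK documentation:

use NaServer;
use NaElement;

# Connect to the DFM server itself, not to an individual storage system.
# Port 8088 is the usual DFM HTTP API port; adjust for your installation.
my $dfm = NaServer->new("dfm.example.com", 1, 0);
$dfm->set_server_type("DFM");
$dfm->set_transport_type("HTTP");
$dfm->set_port(8088);
$dfm->set_style("LOGIN");
$dfm->set_admin_user("admin", "password");   # DFM credentials only

# Wrap an ONTAP API call inside api-proxy; DFM forwards it to the
# managed system named in <target>.
my $proxy = NaElement->new("api-proxy");
$proxy->child_add_string("target", "filer1");
my $request = NaElement->new("request");
$request->child_add_string("name", "system-get-version");
$proxy->child_add($request);

my $out = $dfm->invoke_elem($proxy);
die $out->results_reason() . "\n" if $out->results_status() eq "failed";
print $out->sprintf();   # dump the proxied response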


client-registry-* APIs:
These APIs store name-value pairs in the DataFabric Manager Server on behalf of applications. This feature is useful for storing any persistent application data.
Use Case: User settings for an application
A user customizes the application settings on first use, such as window size, custom views, and other application settings.
Later, when the user revisits the application, the settings can be retrieved from the DataFabric Manager Server and the customization is found intact (a sketch follows).
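A sketch of how an application might persist a setting, assuming set/get-style client-registry APIs; the API names used here (client-registry-set, client-registry-get) and their argument names are hypothetical, so check the SDK documentation for the exact signatures:

use NaServer;

my $dfm = NaServer->new("dfm.example.com", 1, 0);
$dfm->set_server_type("DFM");
$dfm->set_style("LOGIN");
$dfm->set_admin_user("admin", "password");

# Hypothetical API and argument names: persist a window-size preference.
my $out = $dfm->invoke("client-registry-set",
                       "application", "MyStorageApp",
                       "name",        "window-size",
                       "value",       "1280x800");
die $out->results_reason() . "\n" if $out->results_status() eq "failed";

# ...and read it back when the user next opens the application.
my $pref = $dfm->invoke("client-registry-get",
                        "application", "MyStorageApp",
                        "name",        "window-size");
print $pref->child_get_string("value"), "\n";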


dfm-* APIs:
These APIs provide information about the DataFabric Manager Server settings and the status of DataFabric Manager objects.
Some of the use cases for these APIs:
Use Case 1: Related objects list
Suppose an object (e.g. a volume) needs to be removed or deleted. Determine all objects related to it using dfm-related-objects-list-info, then analyze the impact on the related objects (e.g. whether the volume belongs to a backup resource pool) before removing or deleting it.

Use Case 2: Check which features are enabled on the DataFabric Manager Server
An application wants to know whether a particular feature is licensed on the DataFabric Manager Server. Get the details of the server using the dfm-about API; the output contains a licensed-features field that lists the licensed plug-ins on the DataFabric Manager Server (a sketch follows).
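A minimal sketch of this check in Perl; the connection details are assumptions, and since the exact shape of the licensed-features output may vary by DFM release, the sketch simply dumps it:

use NaServer;

my $dfm = NaServer->new("dfm.example.com", 1, 0);
$dfm->set_server_type("DFM");
$dfm->set_style("LOGIN");
$dfm->set_admin_user("admin", "password");

# dfm-about takes no inputs; its output includes licensed-features.
my $out = $dfm->invoke("dfm-about");
die $out->results_reason() . "\n" if $out->results_status() eq "failed";

my $features = $out->child_get("licensed-features");
print $features ? $features->sprintf() : "no licensed-features field\n";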

Operations Manager APIs

Operations Manager API Overview


Operations Manager APIs provide the following features:
Discovery: detect storage systems, volumes, qtrees, LUNs, disks, and quotas; filter the discovered objects by RBAC and user-defined grouping
Monitoring & Reporting: monitor the health, capacity utilization, and performance of storage systems and their components; manage schedules for data protection and reports; manage user-favorite and recently viewed reports
Events and Alarms: configure alarms and thresholds for event management. The DataFabric Manager Server raises alarms for the configured events and sends the alerts through e-mail, pager, or SNMP traps to other monitoring applications
Active Management: create groups of storage systems, MultiStore units, host agents, volumes, qtrees, and LUNs (based on geography, department, function, and so on); configure RBAC settings; manage local users, user groups, domain users, and user roles on storage systems


aggregate-*, volumes-*, lun-*, qtree-*, disks-*


List details of all aggregates, volumes, LUNs, qtrees, and disks from all managed hosts




group-* : create, list, destroy, or manage DFM groups
group-member-* : list group members

Logically group discovered hosts


event-* : generate, acknowledge, list, or delete events
eventclass-* : create, list, or destroy custom event classes

Add custom events or list canned events


alarm-* : create, destroy, modify, list & test alarms

Add alarms for custom events or canned events


rbac-* : Role-Based Access Control (RBAC) management APIs

Add/modify/delete user capabilities using rbac-* APIs


dfm-schedule-* : create, list, destroy, or modify schedules
report-* : manage report schedules and report outputs; list reports and corresponding graphs
List or define custom daily/weekly/monthly schedules using dfm-schedule-*

Use report-* APIs to list/run/schedule reports


Use Case 1
Capacity Monitoring and Chargeback


Use case

Scenario: Capacity monitoring for chargeback


Create a group with the storage elements (storage system, aggregate, volume, qtree, etc.) that need to be monitored for capacity usage
group-create, group-add-member
Set the chargeback rates
group-set-options
Create a monthly schedule that runs on the last day of the month using dfm-schedule-create
Create a report schedule to generate the monthly chargeback report based on the above schedule
report-schedule-add

Use case contd..
Enable the report schedule
report-schedule-enable
To run the scheduled report, use report-schedule-run
Create custom events to be triggered for conditions such as crossing space-usage thresholds
eventclass-add-custom, event-generate
Set alarms for the registered events (see the sketch below)
alarm-create (e-mail IDs can be specified to mail the events, or background scripts can be run by specifying an alarm script)
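A hedged Perl sketch of part of this flow is below. The group, event, and e-mail names are illustrative, and the argument names passed to each API are assumptions rather than confirmed signatures; check the SDK API documentation before use:

use NaServer;

my $dfm = NaServer->new("dfm.example.com", 1, 0);
$dfm->set_server_type("DFM");
$dfm->set_style("LOGIN");
$dfm->set_admin_user("admin", "password");

# Helper: invoke a DFM API and die with the reason on failure.
sub call {
    my ($api, @args) = @_;
    my $out = $dfm->invoke($api, @args);
    die "$api failed: " . $out->results_reason() . "\n"
        if $out->results_status() eq "failed";
    return $out;
}

# Group the storage elements to monitor (argument names assumed).
call("group-create", "group-name", "Chargeback_Grp");
call("group-add-member",
     "group-name-or-id",  "Chargeback_Grp",
     "member-name-or-id", "filer1:/vol1");

# Alarm on space-usage events; the event name is illustrative.
call("alarm-create",
     "event-name",    "volume-full",
     "email-address", "storage-admins@example.com");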


Use Case 2
Automate group member addition


Use case..
Scenario: You want to automatically add newly created volumes to a group and generate an informational event accordingly (a sketch of the full flow follows).
Create a custom event to be generated whenever a volume is added to a group
eventclass-add-custom
Use the proxy mechanism to create volumes on NetApp storage
api-proxy, volume-create
Add the newly created volume object to the group
group-add-member
Generate the custom event
event-generate
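A sketch of the flow in Perl. The volume-create arguments (volume, containing-aggr-name, size) follow the classic ONTAP API; the api-proxy layout with name and args children, and the group-add-member / event-generate argument names, are assumptions to verify against the SDK documentation:

use NaServer;
use NaElement;

my $dfm = NaServer->new("dfm.example.com", 1, 0);
$dfm->set_server_type("DFM");
$dfm->set_style("LOGIN");
$dfm->set_admin_user("admin", "password");

# 1. Create the volume on the storage system through the DFM proxy.
my $proxy = NaElement->new("api-proxy");
$proxy->child_add_string("target", "filer1");
my $req = NaElement->new("request");
$req->child_add_string("name", "volume-create");
my $args = NaElement->new("args");
$args->child_add_string("volume", "newvol");
$args->child_add_string("containing-aggr-name", "aggr0");
$args->child_add_string("size", "20g");
$req->child_add($args);
$proxy->child_add($req);
my $out = $dfm->invoke_elem($proxy);
die $out->results_reason() . "\n" if $out->results_status() eq "failed";

# 2. Add the new volume to the group, then raise the custom event
#    (argument names below are assumed).
$dfm->invoke("group-add-member",
             "group-name-or-id",  "MyGroup",
             "member-name-or-id", "filer1:/newvol");
$dfm->invoke("event-generate",
             "event-name", "volume-added-to-group",
             "source",     "filer1:/newvol");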


Performance Advisor APIs

Performance Advisor API Overview


Performance Advisor provides APIs to:
Manage performance counter groups
Create, list, destroy, modify, enable/disable, get data, get top-n counter data, etc.
Manage performance views
Create, list, destroy, modify, get data, associate a new object, etc.
Manage thresholds and alarms
Create, delete, list, and modify thresholds, and set or destroy alarms for threshold breaches
Collect performance data of storage systems or MultiStore units and their objects from the DataFabric Manager Server. There are several objectives you can have when you use this information:
Monitoring storage systems or MultiStore units for usage levels and optimal functioning, by giving access to historical and real-time data for all monitored devices recognized by the DataFabric Manager server
Identifying bottlenecks, and potential bottlenecks, in the data infrastructure
Performing short-term trend analysis for the data infrastructure


perf-countergroup-* : manage counter groups
perf-view-* : manage performance views and gather view data

Use perf-countergroup-* APIs to manage counter groups

Use perf-view-* APIs to manage performance views



perf-threshold-* : Set threshold values on one or more objects based on a performance counter
Make use of alarm-* APIs to set alarms for threshold events

Use perf-threshold-* APIs to set, destroy, list or modify threshold settings


Use Case 1
Performance Monitoring


Use case..
Scenario: You want to monitor the performance of different protocols such as CIFS, NFS, and FC for all the storage systems in the group QA_Group, and be alerted when certain performance threshold limits are breached (a sketch of the flow follows).
Create a counter group with all the performance objects of interest for the resource group QA_Group
E.g.: create the counter group IO_counters using the perf-countergroup-create API
Create a performance view using the perf-view-create API
E.g.: create a view showing the I/O rate and I/O amount for the different protocols
Set thresholds on the performance counters of the objects of interest using perf-threshold-create
E.g.: set a maximum I/O rate threshold for CIFS/NFS/FC I/O
If needed, register a custom event/event class to be triggered when the threshold is crossed
eventclass-add-custom, event-generate
Set alarms for the registered events using the alarm-create API
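A hedged Perl sketch of the flow; the API names come from this use case, but every argument name below is an assumption, and the counter and event names are illustrative, so verify the exact inputs in the SDK documentation:

use NaServer;

my $dfm = NaServer->new("dfm.example.com", 1, 0);
$dfm->set_server_type("DFM");
$dfm->set_style("LOGIN");
$dfm->set_admin_user("admin", "password");

# 1. Counter group over the I/O counters of interest for QA_Group.
$dfm->invoke("perf-countergroup-create",
             "countergroup-name", "IO_counters",
             "group-name-or-id",  "QA_Group");

# 2. View over those counters.
$dfm->invoke("perf-view-create", "view-name", "Protocol_IO_View");

# 3. Threshold on a counter, e.g. a CIFS operations-per-second ceiling.
$dfm->invoke("perf-threshold-create",
             "counter-name",    "cifs:cifs_ops",
             "threshold-value", "5000");

# 4. Alarm for the threshold-breach event.
$dfm->invoke("alarm-create",
             "event-name",    "perf-threshold-breached",
             "email-address", "perf-team@example.com");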

Protection Manager APIs

Protection Manager API Overview


Protection Manager APIs provide the following features:
Grouping of primary data with identical data protection requirements
Grouping of secondary storage for provisioning the storage where data will be backed up
Manage/define schedules that specify when data needs to be protected
Manage/define protection policies that define how, where, when, and which data needs to be protected, and how long that data needs to be retained
Manage data protection jobs
Manage timing and throughput of backups
Operations to restore data from a backup

dataset-*, resourcepool-*
Manage datasets: a collection of user data (volumes, qtrees, etc. in which data resides) that needs to be backed up

Manage resource pools: a logical collection of unused storage used to provision new volumes and LUNs for backing up data


dp-schedules-*, dp-policy-*
dp-schedules-* APIs list/define at what times data should be protected

dp-policy-* APIs list/define how, where, and at what times to transfer data


Use Case 1
Data Protection Work Flow


Use Case
Use Case: You want to implement a data protection workflow: create datasets, protection schedules, a protection policy, and resource pools, and apply the protection policy.
Step 1: Group resources together
Organize the resources into groups based on geography
Ex: QAData, NY_BCK, CA_MRR
Use APIs group-create, group-add-member
Step 2: Configure hosts
Ensure that the appropriate Data ONTAP licenses are enabled on each host according to its purpose in the protection strategy.
Ex:
QAData: this group of storage systems stores the QA data to be protected, so enable the SnapVault ONTAP Primary license on these systems
NY_BCK: enable the SnapVault Secondary license on the systems that will store backups of the QA data. Also enable the SnapMirror license on these systems, because the backups they store will be mirrored to CA_MRR
CA_MRR: enable the SnapMirror license on the storage systems that will store mirrors of the NY_BCK storage systems holding the QA data backups
Use the ONTAP APIs license-add, license-list-info


Use Case contd..
Step 3: Create resource pools
Create two resource pools, NY_BCK_RP and CA_MRR_RP
NY_BCK_RP is used to provision the storage for backups of the QA data
CA_MRR_RP is used to provision the storage for mirrors of the backups on the NY_BCK_RP systems
Use APIs resourcepool-create, resourcepool-add-member
Step 4: Create schedules
Create two schedules for the protection of the data
Backup schedule: used for backing up the data of the QADATA storage systems
Mirror schedule: used to mirror the backup data of the NY_BCK_RP systems
Use API dfm-schedule-create
Step 5: Choose a protection policy
Based on the protection analysis for the QA data, choose a protection policy
Ex.: choose the "Back up, then mirror" policy
Use APIs dp-policy-list-iter-*, dp-policy-copy


Use Case contd..
Step 6: Attach schedules to the policy
The remote backup schedule is attached to the backup connection between the primary data node and the backup node of the policy
The mirror schedule is attached to the mirror connection between the backup node and the mirror node of the policy
Use APIs dp-policy-edit-*, dp-policy-modify
Step 7: Create a dataset
Identify the data to be part of a dataset; QA data residing on different volumes/qtrees can form one dataset
Create a new dataset QADATA_DS
Use APIs dataset-create, dataset-add-member
Step 8: Apply the policy to the dataset
Apply the "Back up, then mirror" policy to the dataset QADATA_DS
Use APIs dataset-edit-*, dataset-modify


Use Case contd..
Step 9: Associate resource pools with the destination nodes
Associate the NY_BCK_RP resource pool with the backup node in the QADATA_DS dataset
Associate the CA_MRR_RP resource pool with the mirror node in the QADATA_DS dataset
Use APIs dataset-edit-*, dataset-modify-node
Step 10: Configure alarms
Configure alarms to trigger actions on conformance events
Use API alarm-create
Step 11: Run a conformance check (see the sketch below)
Run the conformance check to make sure that the protection policy associated with the dataset is working correctly
Use API dataset-conform-begin
Note: conformance checks should not be scheduled; use them only when needed.
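A sketch of steps 5 and 11 in Perl. The tag/records/maximum elements follow the common DFM iterator convention, and the dataset-conform-begin argument name is an assumption; verify both in the SDK documentation:

use NaServer;

my $dfm = NaServer->new("dfm.example.com", 1, 0);
$dfm->set_server_type("DFM");
$dfm->set_style("LOGIN");
$dfm->set_admin_user("admin", "password");

# List the available protection policies (iterator start/next/end).
my $start = $dfm->invoke("dp-policy-list-iter-start");
die $start->results_reason() . "\n" if $start->results_status() eq "failed";
my $tag     = $start->child_get_string("tag");
my $records = $start->child_get_string("records");

my $next = $dfm->invoke("dp-policy-list-iter-next",
                        "tag", $tag, "maximum", $records);
print $next->sprintf();   # inspect the returned policies
$dfm->invoke("dp-policy-list-iter-end", "tag", $tag);

# Kick off the conformance check on the dataset when needed.
my $conform = $dfm->invoke("dataset-conform-begin",
                           "dataset-name-or-id", "QADATA_DS");
die $conform->results_reason() . "\n"
    if $conform->results_status() eq "failed";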


Provisioning Manager APIs

Provisioning Manager API Overview


The Provisioning Manager APIs provide the following capabilities:
Define policies to automate storage provisioning and configure default settings for exporting storage
Provision new and existing storage
Resize space and capacity of existing storage
Periodic conformance checking to ensure that the provisioned storage conforms to the provisioning policy


provisioning-policy-*
These APIs are used to manage provisioning policies.
Possible operations are creation, deletion, modification and listing of provisioning policies
Create, modify & delete provisioning policies using provisioning-policy-* set of APIs


vfiler-*
These APIs are used to provision and manage vFilers and vFiler templates
Possible operations: create, modify, destroy, and list vFilers, and also create, modify, destroy, and list vFiler template objects
Create, modify, and destroy vFilers using the vfiler-* set of APIs


dataset-provision-member / dataset-resize-member / dataset-member-delete-snapshots
Provision a new member into the effective primary node of a dataset
Resize, change the maximum capacity, and change the snap reserve of a dataset member on the effective primary node of the dataset
Selectively delete snapshot copies from a dataset member

Provision storage using dataset-provision-member

Resize storage using dataset-resize-member

Delete snapshot copies from a dataset member
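A sketch of the edit-lock workflow around dataset-provision-member in Perl. It assumes that dataset-edit-begin returns an edit-lock-id consumed by the later calls and that the member attributes travel in a provision-member-request-info element; all element names and the member name/size are assumptions to verify against the SDK documentation:

use NaServer;
use NaElement;

my $dfm = NaServer->new("dfm.example.com", 1, 0);
$dfm->set_server_type("DFM");
$dfm->set_style("LOGIN");
$dfm->set_admin_user("admin", "password");

# Open an edit session on the dataset to obtain the lock.
my $begin = $dfm->invoke("dataset-edit-begin",
                         "dataset-name-or-id", "HOME_DIRS_DS");
die $begin->results_reason() . "\n" if $begin->results_status() eq "failed";
my $lock = $begin->child_get_string("edit-lock-id");

# Provision one new member into the dataset's effective primary node.
my $req  = NaElement->new("dataset-provision-member");
$req->child_add_string("edit-lock-id", $lock);
my $info = NaElement->new("provision-member-request-info");
$info->child_add_string("name", "user_share_01");
$info->child_add_string("size", "10g");
$req->child_add($info);
my $out = $dfm->invoke_elem($req);
die $out->results_reason() . "\n" if $out->results_status() eq "failed";

# Commit the edit session to apply the change.
$dfm->invoke("dataset-edit-commit", "edit-lock-id", $lock);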


Use Case 1
NAS Provisioning Work Flow


NAS Provisioning Work Flow


Create a provisioning workflow to provision home directories as CIFS shares using a NAS provisioning policy. To achieve this, follow the steps below. First, design the policy, identify resource pools, and create the datasets from which storage will be provisioned; this is a one-time activity.
Step 1: Create a provisioning policy
Create a NAS-type provisioning policy using the provisioning-policy-create API. Set attributes such as the storage reliability level, quota settings, and space guarantee settings, along with space thresholds, as required.
Step 2: Create a resource pool
Group the desired aggregates or storage systems into a resource pool from which storage will be provisioned. Use the resourcepool-create and resourcepool-add-member APIs.
Step 3: Create a dataset and attach the provisioning policy
Create a dataset using the dataset-create API. Associate the NAS provisioning policy and the resource pool with the newly created dataset using the dataset-modify / dataset-modify-node APIs. Make sure to set the appropriate export settings for NAS (CIFS in this case; NFS or both is another option) using the dataset-modify-node API.
Once the above steps are complete, you can provision storage from the created dataset, with the attributes set in the provisioning policy, by invoking the APIs below (a sketch of the one-time setup follows).
Step 4: Provision storage (shares) from the dataset
Use the dataset-edit-* APIs to obtain a lock on the dataset
Use the dataset-provision-member API to provision storage from the newly created dataset
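A sketch of the one-time setup (steps 1-3) in Perl; the policy, pool, aggregate, and dataset names are illustrative, and every argument name is an assumption to check against the SDK documentation:

use NaServer;

my $dfm = NaServer->new("dfm.example.com", 1, 0);
$dfm->set_server_type("DFM");
$dfm->set_style("LOGIN");
$dfm->set_admin_user("admin", "password");

# Step 1: NAS provisioning policy (argument names assumed).
$dfm->invoke("provisioning-policy-create",
             "provisioning-policy-name", "HomeDirs_NAS_Policy",
             "provisioning-policy-type", "nas");

# Step 2: resource pool to provision from.
$dfm->invoke("resourcepool-create", "resourcepool-name", "HomeDirs_RP");
$dfm->invoke("resourcepool-add-member",
             "resourcepool-name-or-id", "HomeDirs_RP",
             "member-name-or-id",       "filer1:aggr1");

# Step 3: dataset wired to the policy and the pool.
$dfm->invoke("dataset-create", "dataset-name", "HOME_DIRS_DS");
$dfm->invoke("dataset-modify",
             "dataset-name-or-id",       "HOME_DIRS_DS",
             "provisioning-policy-name", "HomeDirs_NAS_Policy");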

Use Case 2
LUN Batch Provisioning


LUN Batch Provisioning


Create a LUN provisioning workflow to enable batch LUN provisioning (e.g. 20 LUNs) using a SAN provisioning policy. To achieve this, follow the steps below. First, design a policy, identify resource pools, and create the datasets from which storage will be provisioned; this is a one-time activity.
Step 1: Create a SAN provisioning policy
Create a SAN-type provisioning policy using the provisioning-policy-create API. Set the storage reliability level and space guarantee settings, along with space thresholds, as required.
Step 2: Create a resource pool
Group the desired aggregates or storage systems into a resource pool from which storage will be provisioned. Use the resourcepool-create and resourcepool-add-member APIs.
Step 3: Create a dataset and attach the provisioning policy
Create a dataset using the dataset-create API. Associate the SAN provisioning policy and the resource pool with the newly created dataset using the dataset-modify / dataset-modify-node APIs. Make sure to set the appropriate export settings for SAN (FCP or iSCSI) using the dataset-modify-node API.
To batch-provision LUNs, write a script that takes as input how many LUNs to create and which dataset or provisioning policy to use, and invoke the API below in a loop to create the required number of LUNs from a dataset (see the sketch after these steps).
Step 4: Provision storage (LUNs)
Use the dataset-edit-* APIs to obtain a lock on the dataset
Use the dataset-provision-member API to provision storage from the dataset
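A sketch of such a batch script in Perl, reusing the edit-lock pattern from the NAS workflow; the dataset name, LUN naming scheme, sizes, and all element names are assumptions to verify against the SDK documentation:

use NaServer;
use NaElement;

my $dfm = NaServer->new("dfm.example.com", 1, 0);
$dfm->set_server_type("DFM");
$dfm->set_style("LOGIN");
$dfm->set_admin_user("admin", "password");

my $lun_count = 20;   # how many LUNs to batch-provision

# One edit session around the whole batch.
my $begin = $dfm->invoke("dataset-edit-begin",
                         "dataset-name-or-id", "SAN_LUNS_DS");
die $begin->results_reason() . "\n" if $begin->results_status() eq "failed";
my $lock = $begin->child_get_string("edit-lock-id");

for my $i (1 .. $lun_count) {
    my $req  = NaElement->new("dataset-provision-member");
    $req->child_add_string("edit-lock-id", $lock);
    my $info = NaElement->new("provision-member-request-info");
    $info->child_add_string("name", sprintf("lun_%02d", $i));
    $info->child_add_string("size", "5g");
    $req->child_add($info);
    my $out = $dfm->invoke_elem($req);
    warn "LUN $i failed: " . $out->results_reason() . "\n"
        if $out->results_status() eq "failed";
}

$dfm->invoke("dataset-edit-commit", "edit-lock-id", $lock);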


Use Case 3
MultiStore (vFiler) Provisioning


MultiStore (vFiler) Provisioning


Step 1: Create a vFiler template (optional)
A vFiler template contains configuration information that is used during vFiler setup. Use the vfiler-template-create API.
Step 2: Create a new vFiler on a storage system
Create a new vFiler by specifying either a storage system or a resource pool on which it is to be created. Use the vfiler-create API.
Step 3: Configure and set up the vFiler based on a specified template
Specify the allowed protocols, IP address, password, etc. Depending on the input, CIFS setup is also performed on the vFiler. Use the vfiler-setup API (a sketch follows).
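A hedged Perl sketch of the three steps; the template, vFiler, IP, and pool names are illustrative, and all argument names are assumptions to check against the SDK documentation:

use NaServer;

my $dfm = NaServer->new("dfm.example.com", 1, 0);
$dfm->set_server_type("DFM");
$dfm->set_style("LOGIN");
$dfm->set_admin_user("admin", "password");

# Step 1 (optional): template holding common vFiler configuration.
$dfm->invoke("vfiler-template-create",
             "vfiler-template-name", "cifs_vfiler_template");

# Step 2: create the vFiler, drawing storage from a resource pool.
$dfm->invoke("vfiler-create",
             "vfiler-name",             "vfiler1",
             "ip-address",              "10.0.0.50",
             "resourcepool-name-or-id", "HomeDirs_RP");

# Step 3: run setup from the template (protocols, password, CIFS setup).
$dfm->invoke("vfiler-setup",
             "vfiler-name",          "vfiler1",
             "vfiler-template-name", "cifs_vfiler_template");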


For more info


This presentation does not cover the entire API set available in the DFM server. For the latest list of APIs and for more details, refer to the NetApp Manageability SDK on NTN. For details on how to use the APIs, refer to the sample code provided in the NM SDK bundle.
Sample code is available in C#, Perl, and Java


Thank You

