What's coming up
API Classification
DataFabric Manager Server Infrastructure APIs
Operations Manager APIs
Performance Advisor APIs
Protection Manager APIs
Provisioning Manager APIs
DataFabric Manager Server is an independent application that runs outside the NetApp storage system. It can have any of the following applications as its clients:
NetApp Management Console (Performance Advisor, Protection Manager)
Operations Manager web UI
Applications built using the interfaces provided by the DataFabric Manager Server
api-proxy API
This API uses the DataFabric Manager Server as a proxy to the storage systems, MultiStore units, or host agents being managed. Some of the use cases for using DataFabric Manager Server as a proxy for storage systems are:
Single user ID/password for all NetApp systems
Applications that use DataFabric Manager Server as a proxy need only the DataFabric Manager Server login credentials and do not need to track the login credentials of the individual storage systems being managed. The user must have the privileges required to run the APIs.
Faster access and a single portal to data
DataFabric Manager Server collects and stores detailed information about storage systems in its database. Applications can query DataFabric Manager Server for quick access to this data. For any other purpose, the application can use DataFabric Manager Server as a proxy to talk to the storage system.
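The DFM APIs are XML (ZAPI-style) calls sent to the DataFabric Manager Server, typically through the NetApp Manageability SDK. As a minimal sketch of the proxying idea, the snippet below only builds a candidate api-proxy request payload with the Python standard library; the child element names (target, request, name) and the example hostname are assumptions for illustration, not the verified schema, and no transport is shown.

```python
import xml.etree.ElementTree as ET

def build_api_proxy_request(target: str, wrapped_api: str) -> str:
    # Assumed element names -- consult the DFM API reference for the real schema.
    root = ET.Element("api-proxy")
    ET.SubElement(root, "target").text = target          # storage system managed by DFM
    request = ET.SubElement(root, "request")
    ET.SubElement(request, "name").text = wrapped_api    # ONTAP API to run on the target
    return ET.tostring(root, encoding="unicode")

payload = build_api_proxy_request("filer01.example.com", "system-get-version")
print(payload)
```

The application authenticates once against the DFM server; the server forwards the wrapped call to the named storage system on its behalf.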
client-registry-* APIs:
These APIs store name-value pairs in the DataFabric Manager Server on behalf of applications. This feature is useful for storing any persistent application data.
Use case: user settings for an application
Users customize application settings, such as window size and custom views, when they first use the application. When a user later revisits the application, the settings can be retrieved from the DataFabric Manager Server and the customization is intact.
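A sketch of what storing one such name-value pair might look like: the API name client-registry-option-set and its child element names are assumptions modeled on the client-registry-* naming above, so check the API reference for the actual schema.

```python
import xml.etree.ElementTree as ET

def build_registry_set(option: str, value: str) -> str:
    # Hypothetical client-registry-* "set" payload; element names assumed.
    root = ET.Element("client-registry-option-set")
    ET.SubElement(root, "option-name").text = option
    ET.SubElement(root, "option-value").text = value
    return ET.tostring(root, encoding="unicode")

# Persist a (hypothetical) window-size setting for an application.
payload = build_registry_set("myapp.window-size", "1024x768")
```

A matching "get" call on the next visit would return the stored value, restoring the user's customization.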
dfm-* APIs:
These APIs provide information about DataFabric Manager Server settings and the status of DataFabric Manager objects.
Some of the use cases for these APIs:
Use Case 1: Related objects list
Suppose an object (e.g. a volume) needs to be removed or deleted. Determine all objects related to it using dfm-related-objectslist-info, then analyze the impact on the related objects (e.g. whether the volume belongs to a backup resource pool) before removing or deleting it.
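The lookup step above could be sketched as follows; the child element name ("object-name-or-id") and the example volume path are assumptions for illustration.

```python
import xml.etree.ElementTree as ET

def build_related_objects_request(object_name: str) -> str:
    # Payload for dfm-related-objectslist-info; child element name assumed.
    root = ET.Element("dfm-related-objectslist-info")
    ET.SubElement(root, "object-name-or-id").text = object_name
    return ET.tostring(root, encoding="unicode")

# Ask DFM which objects are related to this volume before deleting it.
payload = build_related_objects_request("filer01:/vol/vol1")
```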
group-* : create, list, destroy or manage DFM groups
group-member-* : list group members
event-* : generate, acknowledge, list, delete events
eventclass-* : create, list or destroy custom events
dfm-schedule-* : create, list, destroy or modify schedules
report-* : manage report schedules and report outputs; list reports and corresponding graphs
List custom-defined daily/weekly/monthly schedules using the dfm-schedule-* APIs
Use Case 1
Capacity Monitoring and Chargeback
Use case
Enable the report schedule using report-schedule-enable
To run the scheduled report, use report-schedule-run
Create custom events to be triggered when usage space thresholds are crossed: eventclass-add-custom, event-generate
Set alarms for the registered events: alarm-create (you can specify email IDs to mail the events, or run background scripts by specifying an alarm script)
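The three steps above could be sketched as request payloads like the following. The schedule name, event name, and all field names are hypothetical placeholders; only the API names come from the text above.

```python
import xml.etree.ElementTree as ET

def zapi(api: str, **fields: str) -> str:
    # Generic flat ZAPI-style builder; field names below are illustrative only.
    root = ET.Element(api)
    for name, value in fields.items():
        ET.SubElement(root, name.replace("_", "-")).text = value
    return ET.tostring(root, encoding="unicode")

# Run the scheduled capacity report now (schedule name is hypothetical).
run_req = zapi("report-schedule-run", report_schedule_name="weekly-capacity")

# Alarm that emails admins when the custom space-threshold event fires.
alarm_req = zapi("alarm-create",
                 event_name="volume-space-threshold-crossed",
                 email_addresses="storage-admins@example.com")
```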
Use Case 2
Automate Group Member Addition
Use case
Scenario: You want to automate adding a volume to a group whenever a new volume is created, and generate an information event accordingly.
Create a custom event to be generated whenever a volume is added to a group: eventclass-add-custom
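Registering such a custom event class might look like the sketch below; the event class name, severity value, and child element names are assumptions, with only the API name taken from the use case.

```python
import xml.etree.ElementTree as ET

def build_custom_eventclass(name: str, severity: str) -> str:
    # Hypothetical eventclass-add-custom payload; element names assumed.
    root = ET.Element("eventclass-add-custom")
    ET.SubElement(root, "event-class-name").text = name
    ET.SubElement(root, "event-severity").text = severity
    return ET.tostring(root, encoding="unicode")

# Event class fired by the automation when a new volume joins a group.
payload = build_custom_eventclass("volume-added-to-group", "information")
```

The automation script would then call group-add-member for the new volume and event-generate with this event class.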
Collect performance data for storage systems or MultiStore units and their objects from the DataFabric Manager Server. There are several objectives you can pursue with this information:
Monitoring storage systems or MultiStore units for usage levels and optimal functioning, with access to historical and real-time data for all monitored devices recognized by the DataFabric Manager Server
Identifying bottlenecks, and potential bottlenecks, in the data infrastructure
Performing short-term trend analysis for the data infrastructure
Defining thresholds on performance counters
perf-counter-* : manage counter groups
perf-view-* : manage performance views and gather view data
perf-threshold-* : set threshold values on one or more objects based on a performance counter
Use the alarm-* APIs to set alarms for threshold events
Use Case 1
Performance Monitoring
Use case
Scenario: You want to monitor the performance of protocols such as CIFS, NFS, and FC for all storage systems in the group QA_Group, and be alerted when certain performance threshold limits are breached.
Create a counter group with all the performance objects of interest for the resource group QA_Group, e.g. create the counter group IO_counters using the API perf-countergroup-create
Create a performance view using the API perf-view-create, e.g. a view showing the I/O rate and I/O amount for the different protocols
Set thresholds on performance counters of the objects of interest using perf-threshold-create, e.g. set a maximum I/O rate threshold for CIFS/NFS/FC I/O
If needed, register a custom event class to be triggered when a threshold is crossed: eventclass-add-custom, event-generate
Set alarms for the registered events using the API alarm-create
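The counter-group and threshold steps above could be sketched as payloads like these; the counter name (cifs_ops), threshold value, and all field names are illustrative assumptions rather than verified schema.

```python
import xml.etree.ElementTree as ET

def zapi(api: str, **fields: str) -> str:
    # Generic flat ZAPI-style builder; field names are illustrative only.
    root = ET.Element(api)
    for name, value in fields.items():
        ET.SubElement(root, name.replace("_", "-")).text = value
    return ET.tostring(root, encoding="unicode")

# Counter group for protocol I/O on the QA_Group resource group.
group_req = zapi("perf-countergroup-create",
                 countergroup_name="IO_counters",
                 group_name="QA_Group")

# Threshold on an (assumed) CIFS ops counter; units not verified.
threshold_req = zapi("perf-threshold-create",
                     counter_name="cifs_ops",
                     threshold_value="10000",
                     object_name="QA_Group")
```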
2010 NetApp. All rights reserved.
dataset-*, resourcepool-*
Manage datasets: a collection of user data (volumes, qtrees, etc., in which data resides) that needs to be backed up
Manage resource pools: a logical collection of unused storage used to provision new volumes and LUNs for backing up data
dp-schedules-*, dp-policy-*
dp-schedules-* APIs list/define at what times data should be protected
dp-policy-* APIs list/define how and where data is transferred, and at what times
Use Case 1
Data Protection Workflow
Use Case
Use Case: You want to implement a data protection workflow: create datasets, protection schedules, a protection policy, and resource pools, and apply the protection policy.
Step 1: Group resources together
Organize the resources into groups based on geography, e.g. QAData, NY_BCK, CA_MRR
Use the APIs group-create, group-add-member
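Step 1 could be sketched as one group-create request per geographic group, plus group-add-member calls; the member hostname and field names below are hypothetical.

```python
import xml.etree.ElementTree as ET

def zapi(api: str, **fields: str) -> str:
    # Generic flat ZAPI-style builder; field names are illustrative only.
    root = ET.Element(api)
    for name, value in fields.items():
        ET.SubElement(root, name.replace("_", "-")).text = value
    return ET.tostring(root, encoding="unicode")

# One group-create request per geographic group from the example.
requests = [zapi("group-create", group_name=g)
            for g in ("QAData", "NY_BCK", "CA_MRR")]

# Add a primary storage system to QAData (hostname is hypothetical).
member_req = zapi("group-add-member",
                  group_name_or_id="QAData",
                  member="qa-filer01.example.com")
```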
Step 2: Enable licenses
Ensure that the appropriate Data ONTAP licenses are enabled on each host according to its purpose in the protection strategy. For example:
QAData: this group of storage systems stores the QA data to be protected, so enable the SnapVault Primary license on these systems
NY_BCK: enable the SnapVault Secondary license on these systems, which will be used to store backups of the QA data. Also enable the SnapMirror license, because the backups they store will be mirrored to CA_MRR
CA_MRR: enable the SnapMirror license on these storage systems, which will be used to store mirrors of the NY_BCK storage systems that hold the QA data backups
Use Case
Step 3: Create Resource Pools
Create two resource pools, NY_BCK_RP and CA_MRR_RP:
NY_BCK_RP is used for provisioning the storage for the backups of the QA data
CA_MRR_RP is used for provisioning the storage for mirrors of the backups on the NY_BCK_RP systems
Use the APIs resourcepool-create, resourcepool-add-member
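Step 3 could be sketched the same way; the aggregate member name and field names are hypothetical placeholders, with only the API names taken from the text.

```python
import xml.etree.ElementTree as ET

def zapi(api: str, **fields: str) -> str:
    # Generic flat ZAPI-style builder; field names are illustrative only.
    root = ET.Element(api)
    for name, value in fields.items():
        ET.SubElement(root, name.replace("_", "-")).text = value
    return ET.tostring(root, encoding="unicode")

# Create the two resource pools from the example.
pool_reqs = [zapi("resourcepool-create", resourcepool_name=p)
             for p in ("NY_BCK_RP", "CA_MRR_RP")]

# Add an aggregate on a backup filer to a pool (member name hypothetical).
member_req = zapi("resourcepool-add-member",
                  resourcepool_name_or_id="NY_BCK_RP",
                  member="ny-filer01:aggr1")
```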
Use Case
Step 6: Attach Schedules to the Policy
The remote backup schedule is attached to the backup connection between the primary data node and the backup node of the policy. The mirror schedule is attached to the mirror connection between the backup node and the mirror node of the policy.
Use the APIs dp-policy-edit-*, dp-policy-modify
Use Case
Step 9: Associate Resource Pools with the Destination Nodes
Associate the NY_BCK_RP resource pool with the backup node in the QADATA_DS dataset
Associate the CA_MRR_RP resource pool with the mirror node in the QADATA_DS dataset
Use the APIs dataset-edit-*, dataset-modify-node
provisioning-policy-*
These APIs are used to manage provisioning policies.
Possible operations are creation, modification, deletion and listing of provisioning policies, using the provisioning-policy-* set of APIs
vfiler-*
These APIs are used to provision and manage vFilers and vFiler templates.
Possible operations are: create, modify, destroy and list vFilers, and also create, modify, destroy and list vFiler template objects, using the vfiler-* set of APIs
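A vFiler creation request might be sketched as below; the vFiler name, IP address, template name, and all child element names are hypothetical, with only the vfiler-create API name inferred from the vfiler-* family above.

```python
import xml.etree.ElementTree as ET

def build_vfiler_create(name: str, ip: str, template: str) -> str:
    # Hypothetical vfiler-create payload; element names assumed.
    root = ET.Element("vfiler-create")
    ET.SubElement(root, "name").text = name
    ET.SubElement(root, "ip-address").text = ip
    ET.SubElement(root, "vfiler-template-name").text = template
    return ET.tostring(root, encoding="unicode")

# Provision a vFiler unit from a (hypothetical) template.
payload = build_vfiler_create("vf_finance", "10.0.0.42", "default-template")
```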
dataset-provision-member : provision a new member into the effective primary node of a dataset
dataset-resize-member : resize, change the maximum capacity and change the snap reserve for a dataset member on the effective primary node of the dataset
dataset-member-delete-snapshots : selectively delete Snapshot copies from a dataset member
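A provisioning request for a dataset member might be sketched as follows; the nested element names, member name, and size field are assumptions for illustration, with only the dataset-provision-member API name taken from the list above.

```python
import xml.etree.ElementTree as ET

def build_provision_request(dataset: str, member: str, size_mb: int) -> str:
    # Hypothetical dataset-provision-member payload; element names assumed.
    root = ET.Element("dataset-provision-member")
    ET.SubElement(root, "dataset-name-or-id").text = dataset
    info = ET.SubElement(root, "provision-member-request-info")
    ET.SubElement(info, "name").text = member
    ET.SubElement(info, "size").text = str(size_mb)
    return ET.tostring(root, encoding="unicode")

# Provision a member into the dataset's effective primary node.
payload = build_provision_request("QADATA_DS", "qa_share1", 10240)
```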
Use Case 1
NAS Provisioning Workflow
Once the above steps are complete, you can easily provision storage from the created dataset, with the attributes set in the provisioning policy, by invoking the APIs below.
Step 4: Provision storage (shares) from the dataset
Use the dataset-edit-* APIs to obtain a lock on the dataset
Use the dataset-provision-member API to provision storage from the newly created dataset
Use Case 2
LUN Batch Provisioning
To batch-provision LUNs, write a script that takes as input how many LUNs to create and which dataset or provisioning policy to use, then invokes the API below in a loop to create the required number of LUNs from a dataset.
Step 4: Provision storage (LUNs)
Use the dataset-edit-* APIs to obtain a lock on the dataset
Use the dataset-provision-member API to provision storage from the dataset
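Such a batch script could generate one provisioning request per LUN, as in the sketch below. The payload element names, LUN naming scheme, and sizes are hypothetical; in a real script each request would be sent to the DFM server after the dataset lock is obtained via the dataset-edit-* APIs.

```python
import xml.etree.ElementTree as ET

def build_provision_request(dataset: str, lun_name: str, size_mb: int) -> str:
    # Hypothetical dataset-provision-member payload; element names assumed.
    root = ET.Element("dataset-provision-member")
    ET.SubElement(root, "dataset-name-or-id").text = dataset
    info = ET.SubElement(root, "provision-member-request-info")
    ET.SubElement(info, "name").text = lun_name
    ET.SubElement(info, "size").text = str(size_mb)
    return ET.tostring(root, encoding="unicode")

def batch_lun_requests(dataset: str, count: int, size_mb: int) -> list:
    # One request per LUN, named lun_000, lun_001, ... (scheme is illustrative).
    return [build_provision_request(dataset, "lun_%03d" % i, size_mb)
            for i in range(count)]

reqs = batch_lun_requests("SAN_DS", 5, 2048)
```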
Use Case 3
MultiStore (vFiler) Provisioning
Thank You