EMC Storage Analytics Version 4.4
Product Guide P/N 302-001-532
REV 14
Copyright 2014-2017 Dell Inc. or its subsidiaries. All rights reserved.
Published October 2017
Dell believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED AS-IS. DELL MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND
WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. USE, COPYING, AND DISTRIBUTION OF ANY DELL SOFTWARE DESCRIBED
IN THIS PUBLICATION REQUIRES AN APPLICABLE SOFTWARE LICENSE.
Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be the property of their respective owners.
Published in the USA.
EMC Corporation, Hopkinton, Massachusetts 01748-9103. 1-508-435-1000 (in North America, 1-866-464-7381). www.EMC.com
CONTENTS

Chapter 1: Introduction
  Product overview
  Terminology

Chapter 2: Installation and Licensing
  Prerequisites
  Installing the EMC Adapter
  Installing Navisphere CLI
  Adapter instances
    Adding an EMC Adapter instance for vCenter
    Configuring the vCenter Adapter
    Adding EMC Adapter instances for EMC resources
    Editing EMC Adapter instances
  Uninstalling ESA

Chapter 3: Monitoring your Environment
  EMC dashboards
    EMC overview dashboards
    Topology dashboards
    Metrics dashboards
    Top-N dashboards
    Dashboard XChange
  Using badges to monitor resources
  Adjusting default tolerances
  Monitoring storage
  Checking capacity and performance
  Troubleshooting with inventory trees

Chapter 4: Managing Alerts and Recommendations
  Viewing alerts and alert settings
  Finding resource alerts
  Enabling XtremIO alerts
  Understanding event correlation

Chapter 5: Performing Recommended Actions
  Actions menu overview
  Managing policies
    Changing the service level objective (SLO) for a VMAX3 or VMAX All Flash storage group
    Changing the tier policy for a File System
    Changing the tier policy for a LUN
  Managing capacity
    Extending file system capacity
    Extending volumes on EMC XtremIO storage systems
    Expanding LUN capacity
  Managing VNX storage resources
    Migrating a VNX LUN to another storage pool
    Rebooting a Data Mover on VNX storage
    Rebooting a VNX storage processor
    Enabling performance statistics for VNX Block
    Enabling FAST Cache on a VNX Block storage pool
  Enabling FAST Cache on Unity and VNXe storage pools
  Managing VPLEX data migrations

Chapter 6: Troubleshooting
  Launching Unisphere
  Finding adapter instance IDs
  Managing log files
    Installation logs
    Log Insight overview
    Error handling and event logging
    Viewing error logs
    Creating and downloading a support bundle
    Log file sizes and rollover counts
  Managing the collection of XtremIO snapshots
  Editing the Collection Interval for a resource
  Configuring the thread count for an adapter instance
  Using SSH to connect to vRealize Operations Manager
  Troubleshooting metrics and scoreboards
  Understanding error messages
  Understanding resources and relationships
  References

Appendix A: List of Alerts
  Avamar alerts
  Isilon alerts
  RecoverPoint alerts
  ScaleIO alerts
  Unity, UnityVSA, and VNXe alerts
  VMAX alerts
  VNX Block alerts
  VNX Block notifications
  VNX File alerts
  VNX File notifications
  VPLEX alerts
  XtremIO alerts

Appendix B: Dashboards and Metric Tolerances
  EMC Avamar Overview dashboard
  Isilon Overview dashboard
  Top-N Isilon Nodes dashboard
  RecoverPoint for VMs Overview dashboard
  RecoverPoint for VMs Performance dashboard
  Top-N RecoverPoint for VMs Objects dashboard
  ScaleIO Overview dashboard
  Unity Overview dashboard
  Top-N Unity LUNs, File Systems and VVols dashboard
  VMAX Overview dashboard
  Top-N VNX File Systems dashboard
  Top-N VNX LUNs dashboard
  VNX Overview dashboard
  VPLEX Communication dashboard
  VPLEX Overview dashboard
  VPLEX Performance dashboard
  XtremIO Overview dashboard
  XtremIO Top-N dashboard
  XtremIO Performance dashboard

Appendix C: Metrics
  Avamar metrics
  Isilon metrics
  ScaleIO metrics
  RecoverPoint for Virtual Machines metrics
  Unity and UnityVSA metrics
  VMAX metrics
  VNX Block metrics
  VNX File/eNAS metrics
  VNXe metrics
  VPLEX metrics
  XtremIO metrics

Appendix D: Views and Reports
  Avamar views and reports
  eNAS views and reports
  Isilon views and reports
  ScaleIO views and reports
  VMAX views and reports
  VNX, VNXe, and Unity/UnityVSA views and reports
  XtremIO views and reports

Appendix E: Topology Diagrams
  Topology mapping
  Avamar topology
  Isilon topology
  RecoverPoint for Virtual Machines topology
  ScaleIO topology
  Unity topology
  UnityVSA topology
  VMAX3 and VMAX All Flash topology
    VMAX3 and VMAX All Flash topology rules
  VMAX VVol topology
  VNX Block topology
  VNX File/eNAS topology
  VNXe topology
  VPLEX Local topology
  VPLEX Metro topology
  XtremIO topology
CHAPTER 1
Introduction
This chapter contains the following topics:
- Product overview
- Terminology
Product overview

EMC Storage Analytics (ESA) is a management pack for VMware vRealize Operations Manager that enables the collection of analytical data from EMC resources.
ESA complies with VMware management pack certification requirements and has received the VMware Ready certification.
The collector types are shown in the following figure. Refer to the EMC Simple Support Matrix for a list of supported product models.
Figure 1 EMC Adapter architecture

[Figure: The EMC Adapter / Management Pack (dashboards, reports, views, and icons) runs on a vRealize Operations Manager node and connects through the Merge/Proxy (Moxy) framework to collectors in the vRealize Operations Manager cluster. The collectors provide common services (resource discovery, event processing, metric collection, actions, alerts, recommendations, topology mapping, traversal specs, and capacity models) and communicate with vCenter, VNX Block, VNX File/eNAS, VNXe, Unity/UnityVSA, VMAX3, VMAX All Flash, VPLEX, XtremIO, ScaleIO, Isilon, Avamar, and RecoverPoint for VMs.]
Terminology

Familiarize yourself with commonly used terms.
adapter
A vRealize Operations Manager component that collects performance metrics from an external source such as VMware vCenter or a storage system. Third-party adapters such as the EMC Adapter are installed on the vRealize Operations Manager server to enable creation of adapter instances within vRealize Operations Manager.
adapter instance
A specific external source of performance metrics, such as a specific storage system. An adapter instance resource is an instance of an adapter that has a one-to-one relationship with an external source of data, such as an EMC VNX storage system.
dashboard
A tab on the home page of the vRealize Operations Manager graphical user interface (GUI). vRealize Operations Manager ships with default dashboards. Dashboards are fully customizable by the end user.
health rating
An overview of the current state of any resource, from an individual operation to an entire enterprise. vRealize Operations Manager checks internal metrics for the resource and uses its proprietary analytics formulas to calculate an overall health score on a scale of 0 to 100.
icon
A pictorial element in a widget that enables a user to perform a specific function. Hover over an icon to display a tooltip that describes the function.
metric
A category of data collected for a resource. For example, the number of read operations per second is one of the metrics collected for each LUN resource.
resource
Any entity in the environment for which vRealize Operations Manager can collect data. For example, LUN 27 is a resource.
resource kind
A general type of a resource, such as LUN or DISK. The resource kind dictates the type of metrics that are collected.
widget
An area of the ESA GUI that displays metrics-related information. You can customize widgets for your own environment.
CHAPTER 2
Installation and Licensing
This chapter contains the following topics:
- Prerequisites
- Installing the EMC Adapter
- Installing Navisphere CLI
- Adapter instances
- Uninstalling ESA
Prerequisites

Before you install ESA, verify that you have configured your environment according to the requirements in this section.
Software requirements

The following software is required:
- A supported version of VMware vRealize Operations Manager Advanced or Enterprise edition, as listed in the EMC Simple Support Matrix. Obtain the OVA installation package for vRealize Operations Manager from VMware. Refer to the vRealize Operations Manager vApp Deployment and Configuration Guide on the VMware support page to install the software. ESA does not support the Foundation and Standard editions.
- Supported versions of EMC systems and minimum operating environment requirements as listed in the EMC Simple Support Matrix.
- A supported web browser as listed in the release notes for your version of vRealize Operations Manager.
License requirements

You must purchase the following licenses:

- VMware license for vRealize Operations Manager (Advanced or Enterprise).
- EMC Storage Analytics electronic or physical license. If you purchase an electronic license for ESA, you receive a letter that directs you to an electronic licensing system to activate the software to which you are entitled. Otherwise, you receive a physical license key.
- EMC product licenses. A 90-day trial for all supported products is available with ESA. The 90-day trial provides the same features as licensed products, but after 90 days of use, the adapter stops collecting data. You can add a license at any time. To install software for trial, leave the license field blank.
  - Unity and UnityVSA adapter instances do not require you to provide a license in the configuration wizard. The ESA license for the Unity and UnityVSA collector is tracked on the array. In EMC Unisphere, select Settings > Software and Licenses > License Information to ensure that the ESA license is valid and current.
  - Only one EMC Adapter instance is required for VPLEX Local or VPLEX Metro systems. You can monitor both clusters in a VPLEX Metro system by adding a single EMC Adapter instance for one of the clusters. Adding an EMC Adapter instance for each cluster in a VPLEX Metro system introduces unnecessary stress on the system.
System configuration
User accounts
- Storage: To create an EMC Adapter instance for a storage array, you must have a user account that allows you to connect to the storage array or EMC SMI-S Provider. For example, to add an EMC Adapter for a VNX array, use a global account with an operator or administrator role (a local account does not work).
- vCenter: To create an EMC Adapter instance for vCenter (where Adapter Type = EMC Adapter and Connection Type = VMware vSphere), you must have an account that allows you access to vCenter and the objects it
monitors. In this case, vCenter (not the EMC Adapter) enforces access credentials. To create an EMC Adapter instance for vCenter, use an account assigned with a minimum role of Read-Only at the vCenter root and enable propagation of permissions to descendant objects. Depending on the size of the vCenter, wait approximately 30 seconds before testing the EMC Adapter. More information about user accounts and access rights is available in the vSphere API/SDK documentation (see information about authentication and authorization for VMware ESXi and vCenter Server). Ensure that the adapter points to the vCenter server that vRealize Operations Manager monitors.
DNS configuration
To use the EMC Adapter, the vRealize Operations Manager vApp requires network connectivity to the storage systems to be monitored. DNS must be correctly configured on the vRealize Operations Manager server to enable hostname resolution by the EMC Adapter.
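As a quick sanity check, you can confirm name resolution from the vRealize Operations Manager vApp shell before adding adapter instances. A minimal sketch; the storage-system hostname is a placeholder for one of your own:

```shell
#!/bin/bash
# Check that the vApp's configured DNS can resolve a storage system's
# hostname. getent consults the appliance's standard resolver configuration.
resolve_host() {
  getent hosts "$1" > /dev/null
}

if resolve_host "vnx-spa.example.com"; then   # placeholder hostname
  echo "DNS resolution OK"
else
  echo "DNS lookup failed; check /etc/resolv.conf on the vApp" >&2
fi
```

If the lookup fails, fix the appliance's DNS settings before testing the adapter instance connection.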
Time zone and synchronization settings
Ensure time synchronization for all ESA resources by using Network Time Protocol (NTP). Also, set correct time zones for ESA resources. Failure to observe these practices might affect the collection of performance metrics and topology updates.
EMC system configuration requirements
Use the port assignments and IP addresses in the following table to configure the environment for EMC systems.
Table 1 EMC system configuration requirements

EMC Avamar
  Data source: MCSDK API. Protocol: HTTP SOAP. Default port: 9443.

Isilon
  Data source: REST API. Protocol: HTTPS. Default port: 8080. Use the Isilon Storage Administration web interface IP address.

EMC RecoverPoint for Virtual Machines
  Data source: REST API. Protocol: HTTPS. Default port: 443.

EMC ScaleIO
  Data source: REST API. Protocol: HTTPS. Default port: 443. Use the IP address and port of the ScaleIO Gateway.

Unity/UnityVSA
  Data source: REST API. Protocol: HTTPS. Default port: 443. Use the Unisphere Management IP address and a user credential that has the array Administrator role.

VMAX
  Data source: Unisphere for VMAX REST API. Protocol: HTTPS. Default port: 8443.
  - Unisphere must be available on the network and accessible through a port specified at the end of the IP address (for example, 10.10.10.10:8443).
  - All VMAX systems must be registered for performance data collection to work with ESA.
  - For data collection only, the Unisphere user credentials for ESA must have PERF_MONITOR permissions.
  - For the ability to use actions, the user must have STORAGE_ADMIN permissions.

VMware vSphere
  Data source: vCenter Web Services SDK. Protocol: HTTPS. Default port: 443.

VNX Block
  Data source: Navisphere CLI (naviseccli). Protocol: TCP/SSL. Default port: 443 or 2163. Storage processors require IP addresses that are reachable from the vRealize Operations Manager server. Bidirectional traffic for this connection flows through port 443 (HTTPS). Statistics logging must be enabled on each storage processor (SP) for metric collection (System > System Properties > Statistics Logging in Unisphere).

VNX File/eNAS
  Data source: Control Station CLI. Protocol: SSH. Default port: 22. Use a Control Station IP address that is reachable from the vRealize Operations Manager server. Bidirectional Ethernet traffic flows through port 22 using Secure Shell (SSH). If you are using the EMC VNX nas_stig script for security (/nas/tools/nas_stig), do not use root in the password credentials. Setting nas_stig to On limits direct access for root accounts, preventing the adapter instance from collecting metrics for VNX File and eNAS.

VNXe
  Data source: REST API. Protocol: HTTPS. Default port: 443. Use the Unisphere Management IP address and a user credential that has the array's Administrator role.

VPLEX
  Data sources: REST API (topology) and VPlexcli (metrics). Protocols: HTTPS and SSH. Default ports: 443 and 22.

XtremIO
  Data source: REST API. Protocol: HTTPS. Default port: 443. Use the IP address of the XtremIO Management Server (XMS) and the serial number of the XtremIO Cluster to monitor. If enhanced performance is required, administrators can configure the thread count for the XtremIO adapter instance. See Configuring the thread count for an adapter instance on page 52.
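Before configuring an adapter instance, you can verify that the relevant default port from Table 1 is reachable from the vApp. A hedged sketch using bash's built-in /dev/tcp; the addresses below are placeholders for your own systems:

```shell
#!/bin/bash
# Return success if a TCP connection to host:port opens within 5 seconds.
check_port() {
  local host=$1 port=$2
  timeout 5 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null
}

# Example checks against Table 1 defaults (placeholder addresses):
check_port 10.10.10.10 8443 && echo "Unisphere for VMAX reachable on 8443"
check_port 10.10.10.20 443 && echo "Unity REST API reachable on 443"
check_port 10.10.10.30 22 && echo "VNX Control Station reachable on 22"
true  # the checks above are best-effort; a failed check simply prints nothing
```

A check that prints nothing usually points to a firewall, routing, or DNS problem between the vApp and the storage system.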
Installing the EMC Adapter

The ESA installation procedure installs the EMC Adapter and the dashboards.
Before you begin
Obtain the PAK file for the EMC Adapter from EMC Online Support.
Note
If you use Internet Explorer, the installation file downloads as a ZIP file but functions the same way as the PAK file.
WARNING
When you upgrade EMC Storage Analytics, the standard EMC dashboards are overwritten. To customize a standard EMC dashboard, clone it, rename it, and then customize the clone.
Procedure
1. Save the PAK file in a temporary folder.
2. Start the vRealize Operations Manager administrative user interface in your web browser and log in as an administrator.
For example, enter https://vROps_ip_address.
3. Select Administration > Solutions and then click Add (+) to upload the PAK file.
4. When the message appears that the PAK file is ready to install, complete the wizard.
Depending on your system's performance, the installation can take from 5 to 15 minutes.
5. When the installation completes, click Finish.
The EMC Adapter appears in the list of installed solutions.
Installing Navisphere CLI

For vRealize Operations Manager 6.1 or later, the Navisphere CLI (naviseccli) is automatically installed on all Data Nodes that are available during the initial installation. If you add more nodes to the vRealize Operations Manager cluster after ESA is installed, or if you are using vRealize Operations Manager 6.0 or earlier, use this procedure to manually install naviseccli.
- Install naviseccli before you add the EMC Adapter instance to vRealize Operations Manager. If naviseccli is not installed, errors might occur in scaled-out vCenter environments that consist of a Master Node and multiple Data Nodes. Naviseccli is automatically installed on the Master Node. However, because the Data Node collects metrics, the EMC Adapter might report errors if naviseccli is not installed there.
- For VNX Block systems, install naviseccli on the Data Node that you assign to collect metrics for VNX systems.
The naviseccli-bin-xxx-rpm file is included in the ESA package.
Procedure
1. Enable SSH for both master and data nodes.
Refer to Using SSH to connect to vRealize Operations Manager on page 53 for instructions.
2. Extract the PAK file by using decompression software such as WinZip.
3. Copy the naviseccli-bin-version.rpm file (for example, naviseccli-bin-7.33.1.0.33-x64.rpm) to a target directory on the data node.
If you are using Windows, you can use WinSCP for the copy operation.
4. Establish a secure connection to the data node and change to the target directory.
5. Run this command: rpm -i naviseccli-bin-version.rpm where version is the appropriate version of the naviseccli utility for the node.
6. Repeat this procedure to install naviseccli in other nodes, as required.
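Steps 2 through 5 can be collected into a short script run from a machine that holds the extracted PAK file. The node name, SSH user, and RPM version below are placeholders; substitute the values for your environment:

```shell
#!/bin/bash
# Copy the naviseccli RPM shipped in the ESA package to a data node and
# install it there over SSH. All three values below are examples.
RPM="naviseccli-bin-7.33.1.0.33-x64.rpm"
NODE="vrops-data-node.example.com"
SSH_USER="root"

copy_and_install() {
  scp "$RPM" "${SSH_USER}@${NODE}:/tmp/" &&
  ssh "${SSH_USER}@${NODE}" "rpm -i /tmp/${RPM}"
}

# Uncomment after enabling SSH on the node (step 1):
# copy_and_install
```

Run the function once per data node that needs naviseccli, changing NODE each time.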
Adapter instances

Adapter instances specify the adapter type and the information that is needed for vRealize Operations Manager to identify and access resources.
The vCenter adapter instance enables other adapter instances to display visible connections between the VMware objects and the array objects.
EMC Adapter instances provide access to EMC resources.
Note
After adapter instances are created, the vRealize Operations Manager Collector requires several minutes to collect statistics, depending on the size of the storage array. Large storage array configurations require up to 45 minutes to collect metrics and resources and update dashboards. Once the initial data is collected, subsequent statistical collections run quickly.
Adding an EMC Adapter instance for vCenter

To view health trees for the storage environment from the virtual environment, install an EMC Adapter instance for vCenter before installing other EMC resource adapter instances.
Install a separate instance for each vCenter that the vRealize Operations Manager environment monitors.
Procedure
1. In a web browser, type: https://vROps_ip_address/vcops-web-ent to start the vRealize Operations Manager custom user interface and log in as an administrator.
2. Select Administration > Solutions > EMC Adapter, and then click the Configure icon.
The Manage Solution dialog box appears.
3. Click the Add icon to add a new adapter instance.
4. Configure the following Adapter Settings and Basic Settings:
- Display Name: Any descriptive name, for example: My vCenter
- Description: Optional
- Connection Type: VMware vSphere
- License (optional): Leave blank for an EMC Adapter instance for vCenter
- Management IP: IP address of the vCenter server
- Array ID (optional): Leave blank for the VMware vSphere connection type
5. In the Credential field, select any previously defined credentials for this storage system; otherwise, click the Add New icon (+) and configure these settings:
- Credential name: Any descriptive name, for example: My VMware Credentials
- Username: Username that ESA uses to connect to the VMware vRealize system

Note

If a domain user is used, the format for the username is DOMAIN\USERNAME.

- Password: Password for the ESA username
6. Click OK.
7. Configure the Advanced Settings, if they are required:
- Collector: vRealize Operations Manager Collector
- Log Level: Configure log levels for each adapter instance. The levels for logging information are ERROR, WARN, INFO, DEBUG, and TRACE.
The Manage Solution dialog box appears.
8. To test the adapter instance, click Test Connection.
If the connection is correctly configured, a confirmation box appears.
9. Click OK.
The new adapter instance polls for data every five minutes by default. At every interval, the adapter instance collects information about the VMware vSphere datastore and virtual machines with Raw Device Mapping (RDM). Consumers of the registered VMware service can access the mapping information.
Note
To edit the polling interval, select Administration > Environment Overview > EMC Adapter Instance. Select the EMC Adapter instance you want to edit, and click the Edit Object icon.
Configuring the vCenter Adapter

After the vCenter Adapter is installed, configure it manually.
Procedure
1. Start the vRealize Operations Manager custom user interface and log in as administrator.
In a web browser, type https://vROps_ip_address/vcops-web-ent and type the password.
2. Select Administration > Solutions.
3. In the solutions list, select VMware vSphere > vCenter Adapter, and click the Configure icon.
The Manage Solution dialog box appears.
4. Click the Add icon.
5. In the Manage Solution dialog box, provide values for the following parameters:
a. Under Adapter Settings, type a name and optional description.
b. Under Basic Settings:
- For vCenter Server, type the vCenter IP address.
- For Credential, either select a previously defined credential or click the Add icon to add a new credential. For a new credential, in the Manage Credential dialog box, type a descriptive name and the username and password for the vRealize system. If you use a domain username, the format is DOMAIN\USERNAME. Optionally, you can edit the credential using the Manage Credential dialog box. Click OK to close the dialog box.
c. (Optional) Configure the Advanced Settings:
- Collector: The vRealize Operations Manager Collector
- Auto Discovery: True or False
- Process Change Events: True or False
- Registration user: The registration username used to collect data from vCenter Server
- Registration password: The registration password used to collect data from vCenter Server
6. Click Test Connection.
7. Click OK in the confirmation dialog box.
8. Click Save Settings to save the adapter.
9. Click Yes to force the registration.
10. Click Next to go through a list of questions to create a new default policy, if required.
Adding EMC Adapter instances for EMC resources
Each EMC resource requires an adapter instance.
Before you begin
- Install the EMC Adapter instance for vCenter. All other EMC resource adapter instances require it to be installed first.
- Obtain the adapter license key (if required) for your EMC product.
Adapter instances are licensed by product. Observe these exceptions and requirements:
- eNAS adapter instance: No license required.
- Unity adapter instance: The license is automatically verified through the array.
- VNX Unified array: Uses the same license for VNX File and VNX Block.
- VNX File adapter instance: A license is required for VNX File.
- VNX Block: To avoid a certificate error when the primary storage processor is down, run the connection test against both storage processors so that both certificates are accepted. Global Scope is required for VNX Block access.
- VPLEX Metro: Add an adapter instance for only one of the clusters (either one); a single adapter instance then monitors both clusters.
- EMC RecoverPoint for Virtual Machines: Ensure that your EMC RecoverPoint model matches your license.
Procedure
1. In a web browser, type: https://vROps_ip_address/vcops-web-ent to start the vRealize Operations Manager custom user interface and log in as an administrator.
2. Select Administration > Solutions > EMC Adapter and click the Configure icon.
The Manage Solution dialog box appears.
3. Click the Add icon to add a new adapter instance.
4. Configure the following Adapter Settings and Basic Settings:
- Display Name: A descriptive name, such as My Storage System or the array ID
- Description: (Optional) A description with more details
- License: The license key (if required) for the array that you want to monitor. The license key appears on the Right to Use Certificate that is delivered to you or through electronic licensing.
Note
If you leave the license field blank, the adapter instance runs under a 90-day trial. When the 90-day trial expires, ESA stops collecting metrics until you add a valid license to the adapter instance.
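The 90-day trial window described above is simple date arithmetic. A minimal sketch (the adapter's real license bookkeeping is internal to ESA; these function names are illustrative):

```python
from datetime import date, timedelta

TRIAL_DAYS = 90  # unlicensed adapter instances collect metrics for 90 days

def trial_expiry(install_date: date) -> date:
    """Return the date on which an unlicensed adapter instance stops collecting."""
    return install_date + timedelta(days=TRIAL_DAYS)

def trial_active(install_date: date, today: date) -> bool:
    """True while the trial window is still open."""
    return today < trial_expiry(install_date)
```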
5. Configure these settings based on the adapter instance for your product:
Table 2 Adapter configuration settings
- Avamar: Connection Type "Avamar". Management IP: the IP address of the Avamar server where MCS is running. Array ID: not applicable.
- eNAS: Connection Type "eNAS". Management IP: the IP address of the primary Control Station (CS). Array ID: not applicable.
- Isilon arrays: Connection Type "Isilon". Management IP: if a SmartConnect Zone is configured, the SmartConnect zone name or IP address; otherwise, any node IP address. Array ID: not applicable.
- EMC RecoverPoint for Virtual Machines: Connection Type "RecoverPoint for Virtual Machines". Management IP: the IP address of the virtual EMC RecoverPoint appliance. Array ID: not applicable.
- ScaleIO arrays: Connection Type "ScaleIO". Management IP: the IP address and port of the ScaleIO Gateway. Array ID: not applicable.
- Unity: Connection Type "Unity". Management IP: the IP address of the management server. Array ID: not applicable.
- UnityVSA: Connection Type "UnityVSA". Management IP: the IP address of the management server. Array ID: not applicable.
- VMAX3 and VMAX All Flash: Connection Type "VMAX". Management IP: the IPv4 or IPv6 address and port number of the configured EMC Unisphere for VMAX. Array ID: required.
- VNX Block arrays: Connection Type "VNX Block". Management IP: the IP address of one SP in a single array; do not add an adapter instance for each SP. Array ID: required for multi-node.
- VNX File and Unified models, VG2 and VG8 gateway models: Connection Type "VNX File". Management IP: the IP address of the primary CS. Array ID: not applicable.
- VNXe3200: Connection Type "VNXe". Management IP: the IP address of the management server. Array ID: not applicable.
- VPLEX Local or VPLEX Metro: Connection Type "VPLEX". Management IP: the IP address of the management server. For a Metro cluster, use the IP address of either management server, but not both. Array ID: not applicable.
- XtremIO: Connection Type "XtremIO". Management IP: the IP address of the XMS that manages the XtremIO target cluster. Array ID: the serial number of the XtremIO target cluster.
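Table 2 can be condensed into a small lookup structure, which is handy when scripting adapter configuration checks. A sketch (the keys and field names here are illustrative, not ESA configuration keys):

```python
# Connection settings per supported product, condensed from Table 2.
# Keys and field names are illustrative, not ESA configuration keys.
ADAPTER_SETTINGS = {
    "Avamar":                   {"connection_type": "Avamar", "array_id_required": False},
    "eNAS":                     {"connection_type": "eNAS", "array_id_required": False},
    "Isilon":                   {"connection_type": "Isilon", "array_id_required": False},
    "RecoverPoint for Virtual Machines":
                                {"connection_type": "RecoverPoint for Virtual Machines",
                                 "array_id_required": False},
    "ScaleIO":                  {"connection_type": "ScaleIO", "array_id_required": False},
    "Unity":                    {"connection_type": "Unity", "array_id_required": False},
    "UnityVSA":                 {"connection_type": "UnityVSA", "array_id_required": False},
    "VMAX3 and VMAX All Flash": {"connection_type": "VMAX", "array_id_required": True},
    "VNX Block":                {"connection_type": "VNX Block", "array_id_required": True},
    "VNX File":                 {"connection_type": "VNX File", "array_id_required": False},
    "VNXe3200":                 {"connection_type": "VNXe", "array_id_required": False},
    "VPLEX":                    {"connection_type": "VPLEX", "array_id_required": False},
    "XtremIO":                  {"connection_type": "XtremIO", "array_id_required": True},
}

def needs_array_id(product: str) -> bool:
    """Return whether the Array ID field must be filled in for this product."""
    return ADAPTER_SETTINGS[product]["array_id_required"]
```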
6. In the Credential field, select any previously defined credentials for this product; otherwise, click the Add New icon and configure these settings:
- Credential name: A name for the credentials information
- Username: The username that EMC Storage Analytics uses to connect to the EMC product:
  - Avamar: MCUser account, or another Avamar Administrator user
  - Isilon: OneFS storage administration server
  - ScaleIO: ScaleIO Gateway
  - RecoverPoint for Virtual Machines: Virtual EMC RecoverPoint appliance
  - Unity and UnityVSA: Management server
  - VMAX: Unisphere user. For data collection only, the Unisphere user credentials for ESA must have PERF_MONITOR permissions; to use actions, the user must have STORAGE_ADMIN permissions.
  - VNX File or eNAS: CS username
  - VNX Block: SP username
  - VNXe: Management server
  - VPLEX: Management server (for example, the service user). The default credentials are service/Mi@Dim7T.
  - XtremIO: XMS username
- Password: The EMC product management password
7. Click OK.
The Manage Solution dialog reappears.
8. If required, configure the following Advanced Settings:
- Collector: Automatically select collector
- Log Level: Configure the log level for each adapter instance. The available levels are ERROR, WARN, INFO, DEBUG, and TRACE.
The Manage Solution dialog box appears.
9. Click Test Connection to validate the values you entered.
If the adapter instance is correctly configured, a confirmation box appears.
Note
Testing an adapter instance validates the values you entered. If you skip this step and the values are invalid, the adapter instance changes to the (red) warning state. If the connection test fails, verify that all fields contain the correct information and remove any trailing white space from the values.
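The trailing-whitespace pitfall in the note above is easy to guard against when scripting configuration. A generic sketch (the field names are illustrative):

```python
def clean_fields(settings: dict) -> dict:
    """Return adapter settings with surrounding whitespace removed from every value.
    Stray trailing spaces are a common cause of Test Connection failures."""
    return {key: value.strip() for key, value in settings.items()}
```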
10. To finish adding the adapter instance, click OK.
Editing EMC Adapter instances
You can edit installed EMC Adapter instances.
Before you begin
- Install the EMC Adapter.
- Configure the EMC Adapter instance for your EMC product.
- Obtain an adapter license key for your product.
Adapter instances are licensed per product. For details, refer to License requirements on page 12.
Procedure
1. Start the vRealize Operations Manager custom user interface and log in as administrator.
For example, in a web browser, type: https://vROps_ip_address/vcops-web-ent.
2. Select Administration > Solutions > EMC Adapter.
3. Select the EMC adapter you want to edit and click the Configure icon.
The Manage Solution dialog box appears.
4. Edit the fields you need to change. See Table 2 for field descriptions.
5. Click Test Connection to verify the connection.
6. To finish editing the adapter instance, click Save Settings.
Uninstalling ESA
Remove ESA objects to uninstall ESA.
Procedure
1. Select Administration > Adapter Instances > EMC Adapter.
2. For each adapter instance whose Adapter Type is EMC Adapter, click Uninstall solution (X).
CHAPTER 3
Monitoring your Environment
This chapter contains the following topics:
- EMC dashboards
- Using badges to monitor resources
- Adjusting default tolerances
- Monitoring storage
- Checking capacity and performance
- Troubleshooting with inventory trees
EMC dashboards
Dashboards provide a graphic representation of the status and relationships of selected objects.
The standard dashboards are delivered as templates. If a dashboard is accidentally deleted or changed, you can generate a new one. Use the standard vRealize Operations Manager dashboard customization features to create additional dashboards if required (some restrictions might apply).
From the vRealize Operations Manager main menu, select Dashboards > All Dashboards > EMC. The available dashboards are listed in the navigation panel on the left.
Dashboards include various widgets, depending on the type of dashboard.
- Resource Tree: Shows the end-to-end topology and health of resources across vSphere and storage domains. Configure the hierarchy that is shown by changing the widget settings; changing these settings does not alter the underlying object relationships in the database. Select any resource in this widget to view related resources in the stack.
- Health Tree: Provides a navigable visualization of resources that have parent or child relationships to the resource you select in the Resource Tree widget. Single-click to select resources, or double-click to change the navigation focus.
- Sparkline Chart: Shows sparklines for the metrics of the resource you select in the Resource Tree widget.
- Metric Picker: Lists all the metrics that are collected for the resource you select in the Resource Tree widget. Double-click a metric to create a graph of it in the Metric Chart widget.
- Metric Chart: Graphs the metrics you select in the Metric Picker widget. Display multiple metrics simultaneously in a single graph or in multiple graphs.
- Resource Events (VNX/VNXe only): Shows a graph that illustrates the health of the selected object over a period of time. Object events are labeled on the graph. Hover over or click a label to display event details, including event ID, start time, cancel time, trigger, resource name, and event details.
Note
The VMware documentation provides instructions for modifying or deleting dashboards to suit your environment.
- Be sure to rename any dashboards you modify so that they are not overwritten during an upgrade.
- If you attempt to modify a component that does not exist, such as a dashboard for a storage system that does not exist in your environment, vRealize Operations Manager generates a generic error message indicating that the task failed.
The following table lists the default dashboards for each EMC resource.
Table 3 Default dashboards

Dashboard name    Avamar  Isilon  ScaleIO  VNX  Unity  VMAX  VPLEX  XtremIO  RecoverPoint for VMs
Storage Topology  ---     X       X        X    X      X     X      X        X
Storage Metrics   ---     X       X        X    X      X     X      X        X
Overview          X       X       X        X    X      X     X      X        X
Topology          X       X       X        X    X      X     X      ---      ---
Metrics           X       X       X        X    X      X     ---    X        X
Top-N             ---     X       ---      X    X      ---   ---    X        X
Performance       ---     ---     ---      ---  ---    ---   X      X        X
Communication     ---     ---     ---      ---  ---    ---   X      ---      ---
Properties        X       ---     ---      ---  ---    ---   ---    ---      ---
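The availability matrix in Table 3 can also be expressed as a lookup, which is convenient when checking what to expect for a given adapter. A sketch (the platform labels are shortened forms of the table's column headings):

```python
# Default dashboard availability per platform, condensed from Table 3.
# Platform labels are shortened forms of the table's column headings.
DASHBOARDS = {
    "Storage Topology": {"Isilon", "ScaleIO", "VNX", "Unity", "VMAX", "VPLEX", "XtremIO", "RecoverPoint"},
    "Storage Metrics":  {"Isilon", "ScaleIO", "VNX", "Unity", "VMAX", "VPLEX", "XtremIO", "RecoverPoint"},
    "Overview":  {"Avamar", "Isilon", "ScaleIO", "VNX", "Unity", "VMAX", "VPLEX", "XtremIO", "RecoverPoint"},
    "Topology":  {"Avamar", "Isilon", "ScaleIO", "VNX", "Unity", "VMAX", "VPLEX"},
    "Metrics":   {"Avamar", "Isilon", "ScaleIO", "VNX", "Unity", "VMAX", "XtremIO", "RecoverPoint"},
    "Top-N":     {"Isilon", "VNX", "Unity", "XtremIO", "RecoverPoint"},
    "Performance":   {"VPLEX", "XtremIO", "RecoverPoint"},
    "Communication": {"VPLEX"},
    "Properties":    {"Avamar"},
}

def dashboards_for(platform: str) -> list:
    """List (sorted) the default dashboards available for a given platform."""
    return sorted(name for name, platforms in DASHBOARDS.items() if platform in platforms)
```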
Note
eNAS dashboards are available on the Dashboard XChange.
EMC overview dashboards
Overview tabs for EMC resources display a single view of performance and capacity metrics for selected resources that have configured adapter instances. Scoreboards and heatmaps group the contents by adapter instance.
Heatmaps and scoreboards use color to provide a high-level view of performance and capacity metrics for selected devices. Tolerances are displayed in the key at the bottom of each heatmap. Hover your mouse over specific areas of a graph or heatmap to see more details.
- For measurable metrics, colors range from green through shades of yellow and orange to red.
- Metrics with varied values that cannot be assigned a fixed range show relative values from lowest (light blue or light green) to highest (dark blue or dark green). Because the range of values for relative metrics has no lower or upper limit, the numerical difference between light and dark blue or green might be minimal.
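The relative (blue or green) scale described above amounts to normalizing each value between the observed minimum and maximum. A sketch of that shading logic (the actual heatmap rendering is vRealize's own):

```python
def relative_shade(values):
    """Map each value to a 0.0 (lightest) .. 1.0 (darkest) shade,
    scaled between the observed minimum and maximum."""
    lo, hi = min(values), max(values)
    if hi == lo:                 # all values equal: no contrast to show
        return [0.0] * len(values)
    return [(v - lo) / (hi - lo) for v in values]
```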
Note
It is normal for white boxes to appear in the heatmap:
- While the metrics are being gathered for an adapter instance.
- When the adapter itself or an individual resource has been deleted and the resources have not been removed from the Environment Overview page.
The following figures show examples of Overview dashboards.
Figure 2 VMAX Overview dashboard
Figure 3 VPLEX Overview dashboard
Topology dashboards
The topology dashboards provide an entry point for viewing resources and relationships between storage and virtual infrastructure objects for supported adapter instances.
Details for every object in every widget are available by selecting the object and clicking the Resource Detail icon at the top of each widget.
The default topology dashboards contain the Resource Tree, Health Tree, and Sparkline Chart widgets.
The following figure shows the Isilon Topology dashboard with a node selected in the Resource Tree widget. The Sparkline Chart reflects the information for the selected node.
Figure 4 Isilon Topology dashboard
Metrics dashboards
The metrics dashboards display resources and metrics for storage systems and enable you to view graphs of resource metrics.
The default Metrics dashboards contain the Resource Tree, Metric Picker, Metric Chart, and Resource Events (VNX/VNXe only) widgets.
Note
Performance metrics are not supported for user LUNs on vault drives. Place user LUNs on drives other than vault drives.
The following figure shows the VNX Metrics dashboard with a VNX storage pool selected in the Resource Tree.
Figure 5 VNX Metrics dashboard
Top-N dashboards
The Top-N dashboards enable you to view your top performing devices at a glance.
The Top-N dashboards are available for:
- Isilon Nodes
- EMC RecoverPoint for Virtual Machines Objects
- Unity LUNs, File Systems, and VVols
- VNX File Systems
- VNX LUNs
- XtremIO Volumes
Top performing devices are selected based on the current value of the associated metric that you configured for each widget. You can change the time period and the number of objects in your top performer list.
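Selection for a Top-N widget reduces to ranking devices by the configured metric's current value and keeping the first N. A minimal sketch (the device records here are illustrative, not ESA's internal data model):

```python
import heapq

def top_n(devices, metric, n=10):
    """Return the n devices with the highest current value for the given metric.
    `devices` is a list of dicts such as {"name": "LUN_1", "iops": 850.0}."""
    return heapq.nlargest(n, devices, key=lambda d: d[metric])
```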
The following figure shows the Top-N VNX LUNs dashboard.
Figure 6 Top-N VNX LUNs dashboard
Dashboard XChange
The Dashboard XChange is a community page where users exchange EMC Storage Analytics custom dashboards.
ESA provides a set of default dashboards that provide you with a variety of functional views into your storage environment. You can also create custom dashboards to visualize collected data according to your own requirements. The Dashboard XChange on ESA Community is an extension of that feature that enables you to:
- Export custom dashboards to the Dashboard XChange to benefit the wider EMC Storage Analytics community
- Import custom dashboards from the Dashboard XChange to add value to your own environment
The Dashboard XChange, hosted on the Dell EMC Community Network, also hosts dashboards designed by EMC to showcase widget functions that might satisfy a particular use case in your environment. Import these dashboards into your existing environment to enhance the functionality offered by EMC Storage Analytics and edit imported dashboards to meet the specific requirements of your own storage environment.
The Dashboard XChange provides these resources to help you create custom dashboards:
- A how-to video that shows how to create custom dashboards
- A best practices guide that provides detailed guidelines for dashboard creation
- A slide show that demonstrates how to import dashboards from or export them to the Dashboard XChange
Note that there are XChange Zones for supported platforms.
Using badges to monitor resources
vRealize Operations Manager provides badges that enable you to analyze capacity, workload, and stress of supported resource objects.
The badges are based on a default policy that is defined in vRealize Operations Manager for each resource type:
- Workload badge: Defines the current workload of a monitored resource. It displays a breakdown of the workload based on supported metrics.
- Stress badge: Defines the workload over a period of time. It displays one-hour time slices over the period of a week. The color of each slice reflects the stress status of the resource.
- Capacity badge: Displays the percentage of a resource that is currently consumed and the remaining capacity for the resource.
Note
Depending on the resource and supported metrics, full capacity is sometimes defined as 100% (for example, Busy %); it can also be defined by the maximum observed value (for example, Total Operations IO/s).
- Time Remaining badge: Calculated from the Capacity badge; estimates when the resource will reach full capacity.
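The Time Remaining calculation above can be pictured as a projection: given current consumption and a growth rate, estimate when full capacity is reached. vRealize's actual model is more sophisticated; this is an illustrative linear sketch:

```python
def days_remaining(used_pct: float, growth_pct_per_day: float) -> float:
    """Linear estimate of days until a resource reaches full (100%) capacity.
    Illustrative only; vRealize Operations Manager uses its own model."""
    if growth_pct_per_day <= 0:
        return float("inf")     # consumption is not growing: never fills
    return (100.0 - used_pct) / growth_pct_per_day
```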
Adjusting default tolerances
Change the values for metric tolerance levels to suit your environment.
ESA contains default tolerance ranges for metrics that are appropriate for the majority of users. The ranges are displayed at the bottom of each heatmap. Change them to suit your needs. Be sure to note the default values in case you want to revert to the original tolerance levels. The VMware documentation provides detailed instructions for modifying heatmap widget configurations.
Monitoring storage
The Storage Topology dashboard provides an entry point for viewing resources and relationships between storage and virtual infrastructure objects.
Procedure
1. Under Dashboards, select Storage Topology.
2. In the Storage System Selector widget, select an object to display its topology in the Storage Topology and Health widget.
3. Select an object in the Storage Topology and Health widget to display its Parent and Child resources, as shown in the following figure.
Figure 7 Storage Topology dashboard
4. (Optional) Double-click an object to change the navigation focus.
5. To view more details, select an object and click Object Detail, as shown in the previous figure.
The tabs shown in the following figure provide more details for the selected object.
Figure 8 Viewing storage details
Checking capacity and performance
Monitor the capacity and performance of your system using the Storage Metrics dashboard.
Monitoring helps you plan ahead and avoid congestion on your system.
Procedure
1. Under Dashboards, select Storage Metrics.
2. In the Storage System Selector, select a storage array to populate the Resource Selector.
3. In the Resource Selector, select an object to populate the Metric Picker widget with all metrics collected for the selected resource.
4. Double-click a metric to create a graph of the metric in the Metric Graph widget.
The following figure shows an example of the Storage Metrics dashboard with all widgets populated.
Figure 9 Storage Metrics dashboard
Troubleshooting with inventory trees
Inventory trees in vRealize Operations Manager help troubleshoot problems you encounter with EMC resources by filtering out irrelevant data.
vRealize Operations Manager inventory trees are available for these EMC resources: VNX Block, VNX File, Unity, and VMAX.
Procedure
1. Select Environment > EMC Adapter.
2. Select the device to view its nodes and expand the list to view objects under the selected node.
3. Use the menu tabs to find the details you need. A sample is shown in the following figure.
Figure 10 Device details
CHAPTER 4
Managing Alerts and Recommendations
This chapter contains the following topics:
- Viewing alerts and alert settings
- Finding resource alerts
- Enabling XtremIO alerts
- Understanding event correlation
Viewing alerts and alert settings
View symptoms, alerts, and recommendations for EMC Adapter instances through the vRealize Operations Manager GUI. ESA defines the alerts, symptoms, and recommendations for resources that the EMC Adapter instance monitors.
Alerts
Select Alerts > All Alerts to display all alerts, including ESA symptoms, alerts, and recommendations.
To refine your search, use the tools in the menu bar. For example, use predefined grouping, predefined filters, or enter a search string. The following figure shows the All Alerts window.
Figure 11 All Alerts window
Select an alert to view its detailed properties, including the symptoms that triggered the alert and recommendations for responding to it, as shown in the following figure.
Figure 12 Alert Details window
Select the embedded links on the page to view more details.
Alert Settings
Select Alert Settings to view Alert Definitions, Symptom Definitions, Recommendations, Actions, and Notification Settings for alerts that ESA generates. The following figure shows a sample Symptom Definitions page.
Figure 13 Symptom Definitions page
Finding resource alerts
An alert generated by ESA is associated with a specific resource.
Procedure
1. Select the resource from one of the dashboard views.
2. Click the Show Alerts icon on the menu bar to view the list of alerts for the resource.
3. Click the alert link to view details and recommendations for the alert.
Enabling XtremIO alerts
To align with XMS default settings, the following alerts for out-of-range XtremIO Volume and Snapshot metrics are disabled by default:
- Average Small Reads (IO/s)
- Average Small Writes (IO/s)
- Average Unaligned Reads (IO/s)
- Average Unaligned Writes (IO/s)
Use the following procedure to enable alerts.
Procedure
1. Select Administration > Policies > Policy Library.
2. Select Default Policy and click the Edit Selected Policy button (pencil icon).
3. Select 6. Alert/Symptom Definitions.
4. For each alert that you want to enable, select the alert, and under State, select Local to enable it.
5. Click Save.
Understanding event correlation
Understanding how events, alerts, and resources are related helps with troubleshooting. Event correlation is available for VNX Block and VNX File.
EMC Adapter instances monitor events on certain resources, which appear as alerts in vRealize Operations Manager.
vRealize Operations Manager manages the life cycle of an alert and cancels an active alert based on its rules. For example, vRealize Operations Manager might cancel an alert if EMC Storage Analytics no longer reports it.
Events that vRealize Operations Manager generates influence the health score calculation for certain resources. For example, in the Details pane for the selected resource, events that contribute to the health score appear as alerts.
vRealize Operations Manager generates events and associates them only with the resources that triggered them. vRealize Operations Manager determines how the alerts appear and how they affect the health scores of the related resources.
Note
When you remove a resource, vRealize Operations Manager automatically removes existing alerts associated with the resource, and the alerts no longer appear in the GUI.
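The resource-to-alert relationship described in this section behaves like a simple ownership map: alerts attach to the resource that triggered them and are removed with it. A toy model of that life cycle (illustrative only, not vROps internals):

```python
class AlertStore:
    """Toy model of alert life cycle: alerts are keyed by the resource
    that triggered them and are cancelled when that resource is removed."""

    def __init__(self):
        self.alerts = {}  # resource name -> list of alert messages

    def raise_alert(self, resource: str, message: str):
        self.alerts.setdefault(resource, []).append(message)

    def remove_resource(self, resource: str):
        # Removing a resource cancels its alerts, as vROps does automatically.
        self.alerts.pop(resource, None)

    def active_alerts(self, resource: str):
        return self.alerts.get(resource, [])
```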
CHAPTER 5
Performing Recommended Actions
This chapter contains the following topics:
- Actions menu overview
- Managing policies
- Managing capacity
- Managing VNX storage resources
- Enabling FAST Cache on Unity and VNXe storage pools
- Managing VPLEX data migrations
Actions menu overview
As an administrator, you can perform certain actions on EMC storage resources. These actions are available from the Actions menu on the storage system's resource page and, in some cases, from the details page for the alert.
For these actions to be available, the following requirements must be met:
- ESA must be installed and the EMC Adapter instances configured.
- The EMC Adapter instances require the use of administrator credentials on the storage array.
- The vRealize Operations Manager user must have an administrator role that can access the Actions menu.
Managing policies
Change service level objectives and tier policies.
Changing the service level objective (SLO) for a VMAX3 or VMAX All Flash storage group
This action is available from the Actions menu when you select a VMAX3 or VMAX All Flash storage group.
Procedure
1. From the summary page of a VMAX3 or VMAX All Flash storage group, click Actions > Change SLO.
2. In the Change SLO dialog box, provide the following information for the storage group:
- New SLO
- New Workload type
3. Click OK.
Results
The SLO for the storage group is changed.
Changing the tier policy for a File System
This action is available in the Actions menu when you select a File System on the Summary tab.
Procedure
1. From the File System's Summary page, click Actions > Change File System Tiering Policy.
2. In the dialog box, select a tiering policy and click Begin Action.
Results
The policy is changed. You can check the status under Recent Tasks.
Changing the tier policy for a LUN
This action is available from the Actions menu when you select a Unity, UnityVSA, VNX, or VNXe LUN on the Summary tab.
Procedure
1. From the Summary tab of a supported storage system LUN, click Actions > Change Tiering Policy.
2. In the Change Tiering Policy dialog box, select a tiering policy and click Begin Action.
Results
The policy is changed. Check the status under Recent Tasks.
Managing capacity
Extend storage on file systems, LUNs, and volumes.
Extending file system capacity
This action is available from the Actions menu when you select a file system, or as a recommended action when a file system's used capacity is high.
Procedure
1. Do one of the following:
- Select a file system and click Actions > Extend File System.
- From the alert details window for a file system, click Extend File System.
2. In the Extend File System dialog box, type a number in the New Size text box, and then click OK.
3. Click OK in the status dialog box.
Results
The file system size is increased and the alert (if present) is cancelled.
Extending volumes on EMC XtremIO storage systems
Extend XtremIO volumes manually or configure a policy to extend them automatically when used capacity is high.
- To extend a volume manually if you have not configured an automated policy, refer to Extending XtremIO volumes manually.
- To configure a policy that automatically extends an XtremIO volume when capacity becomes high, refer to Configuring an extend-volume policy for XtremIO.
Extending XtremIO volumes manually
Extend XtremIO volumes manually if you have not configured an automated policy.
This action is available from the Actions menu when you select an XtremIO volume or as a recommended action when a volume's used capacity is high.
Procedure
1. Do one of the following:
- Select an XtremIO volume and click Actions > Extend Volume.
- From the alert details window for an XtremIO volume, click Extend Volume.
2. In the Extend Volume dialog box, type a number in the New Size text box, and then click OK.
3. Click OK in the status dialog box.
Results
The volume size is increased and the alert (if present) is cancelled.
Configuring an extend-volume policy for XtremIO
Set a policy that automatically extends an XtremIO volume when capacity becomes high.
Procedure
1. In the vRealize Operations Manager main menu, click Administration > Policies > Policy Library.
2. Select Default Policy and click Edit (pencil icon).
3. In the left panel, select Alert/Symptom Definitions.
4. Under Alert Definitions, use the filter to find and select Capacity used in the XtremIO volume is high.
5. In the Automate column, select Local, and then click Save.
Results
When the Capacity used in the XtremIO volume is high alert is triggered, the volume is extended automatically.
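The automated policy above amounts to a threshold rule: when used capacity crosses a trigger level, grow the volume. A hedged sketch of that decision (the trigger threshold and growth factor here are illustrative, not ESA's actual values):

```python
def plan_extension(used_gb: float, size_gb: float,
                   trigger_pct: float = 90.0, growth_factor: float = 1.25):
    """Return a new volume size if used capacity crosses the trigger, else None.
    trigger_pct and growth_factor are illustrative defaults, not ESA settings."""
    if size_gb > 0 and (used_gb / size_gb) * 100.0 >= trigger_pct:
        return round(size_gb * growth_factor, 2)
    return None
```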
Expanding LUN capacity
This action is available from the Actions menu when you select a Unity, UnityVSA, VNX, or VNXe LUN.
Procedure
1. Select a LUN for a supported storage system.
2. Under Actions, click Expand.
3. Type the new size and select the size qualifier.
4. Click Begin Action.
Results
The LUN is expanded. You can check the status under Recent Tasks.
Managing VNX storage resources
This section includes instructions for migrating LUNs, rebooting a Data Mover or storage processor, enabling performance statistics, and enabling FAST Cache.
Migrating a VNX LUN to another storage pool
This action is available from the vRealize Operations Manager Actions menu.
Procedure
1. From the Summary page of the VNX LUN, click Actions > Migrate.
2. In the Migrate dialog box, provide the following information:
- Storage Pool Type: Select Pool or RAID Group.
- Storage Pool Name: Type the name of the pool to migrate to.
- Migration Rate: Select Low, Medium, High, or ASAP.
3. Click OK.
Results
The LUN is migrated.
Rebooting a Data Mover on VNX storage
This action is available from the Actions menu when a VNX Data Mover is selected, or as a recommended action when the health state of the Data Mover shows an error.
Procedure
1. Do one of the following:
- Select a VNX Data Mover and click Actions > Reboot Data Mover.
- From the alert details window for a VNX Data Mover, click Reboot Data Mover.
2. In the Reboot Data Mover dialog box, click OK.
Results
The Data Mover is restarted and the alert is cancelled.
Rebooting a VNX storage processor
This action is available from the Actions menu on the Summary tab for the storage processor, or as a recommendation when the storage processor cannot be accessed.
Procedure
1. Do one of the following:
- On the Summary tab for the storage processor, click Actions > Reboot Storage Processor.
- Under Recommendations, click Reboot Storage Processor.
2. In the Reboot Storage Processor dialog box, click Begin Action.
Results
The storage processor is restarted. The restart can take several minutes. Check the status under Recent Tasks.
Enabling performance statistics for VNX Block
This action is available only as a recommended action when an error or warning occurs on a VNX Block array. It is not available from the vRealize Operations Manager Actions menu.
Procedure
1. From the Summary page of the VNX Block array that reports an error or warning, click Enable Statistics.
2. In the Enable Statistics dialog box, click OK.
3. Confirm the action by checking the Message column under Recent Tasks.
Enabling FAST Cache on a VNX Block storage pool

This action is available from the Actions menu when you select a VNX Block storage pool, or as a recommended action when FAST Cache is configured and available.
Procedure
1. Select the Summary tab for a VNX Block storage pool.
2. Do one of the following:
- From the Actions menu, select Enable FAST Cache.
- Under Recommendations, click Configure FAST Cache.
3. In the Configure FAST Cache dialog box, click OK.
4. Check the status under Recent Tasks.
Enabling FAST Cache on Unity and VNXe storage pools

This action is available from the Actions menu when you select a Unity or VNXe storage pool and FAST Cache is enabled and configured.
Procedure
1. Under Details for the storage pool, select Actions > Configure FAST Cache.
2. In the Configure FAST Cache dialog box, click Begin Action.
3. Check the status under Recent Tasks.
Managing VPLEX data migrations

EMC VPLEX systems are commonly used to perform non-disruptive data migrations. Analytics for storage system performance and trends for the entire VPLEX storage environment are impacted when you swap a back-end storage system on a VPLEX system. Therefore, EMC recommends that you start a new ESA baseline for the VPLEX system after data migration.
Optionally, you can stop the VPLEX adapter instance collections during the migration cycle. When collections are restarted after the migration, orphaned VPLEX resources appear in EMC Storage Analytics, but those resources are unavailable. Remove the orphaned resources manually.
Use the following procedure to start a new baseline.
Procedure
1. Before you begin data migration, delete all resources associated with the existing ESA VPLEX adapter instance.
2. Remove the existing ESA VPLEX adapter instance by using the Manage Adapter Instances dialog.
3. Perform the data migration.
4. Create a new ESA VPLEX adapter instance to monitor the updated VPLEX system.
CHAPTER 6
Troubleshooting
This chapter contains the following topics:
Note
EMC Storage Analytics Release Notes contains a list of known problems and limitations that address many issues not included here.
- Launching Unisphere (page 46)
- Finding adapter instance IDs (page 46)
- Managing log files (page 46)
- Managing the collection of XtremIO snapshots (page 51)
- Editing the Collection Interval for a resource (page 52)
- Configuring the thread count for an adapter instance (page 52)
- Using SSH to connect to vRealize Operations Manager (page 53)
- Troubleshooting metrics and scoreboards (page 53)
- Understanding error messages (page 54)
- Understanding resources and relationships (page 55)
- References (page 57)
Launching Unisphere

EMC Storage Analytics provides metrics that enable you to assess the health of monitored resources. If the resource metrics indicate that you need to troubleshoot those resources, EMC Storage Analytics provides a way to launch Unisphere on the array.
The capability to launch Unisphere on the array is available for:
- VNX Block
- VNX File
- Unity
To launch Unisphere on the array, select the resource and click the Link and Launch icon. The Link and Launch icon is available on most widgets (hovering over an icon displays a tooltip that describes its function).
Note
This feature requires a fresh installation of the EMC Adapter (not an upgrade). You must select the object to launch Unisphere. Unisphere launch capability does not exist for VMAX or VPLEX objects.
Finding adapter instance IDs

Find the ID for an EMC Adapter instance.
Procedure
1. In vRealize Operations Manager, select Administration > Environment > Adapter Types > EMC Adapter.
2. In the Internal ID column, view the IDs for adapter instances.
Managing log files

Find information about installation logs, VMware vRealize Log Insight, support bundles, error logs, and log file sizes and rollover counts.
Installation logs

Find error logs.
Errors in the ESA installation are written to log files in the following directory in vRealize Operations Manager:
/var/log/emc

Log files in this directory follow the naming convention: install-2012-12-11-10:54:19.log.
Use a text editor to view the installation log files.
Log Insight overview

VMware vRealize Log Insight provides log management for VMware environments. Log Insight includes dashboards for visual display of log information. Content packs extend this capability by providing dashboard views, alerts, and saved queries.
For information about working with Log Insight, refer to the Log Insight documentation.
Log Insight configuration

Send the ESA logs stored on the vRealize Operations Manager virtual machine to the Log Insight instance to facilitate performance analysis and perform root cause analysis of problems.
The adapter logs in vRealize Operations Manager are stored in a subdirectory of the /storage/vcops/log/adapters/EmcAdapter directory. The directory name and the log file are created by concatenating the adapter instance name with the adapter instance ID.
Note that the adapter name parsing changes dots and spaces into underscores. The adapter instance ID is concatenated to create the subdirectory name as well as the log file name.
Multiple instances of each of the adapter types appear because ESA creates a new directory and log file for the Test Connection part of discovery as well as for the analytics log file.
The Test Connection logs have a null name associated with the adapter ID.
You can forward any logs of interest to Log Insight, but remember that forwarding logs consumes bandwidth.
Sending logs to Log Insight

Set up syslog-ng to send ESA logs to Log Insight.
Before you begin
Import the vRealize Operations Manager content pack into Log Insight. This context-aware content pack includes content for supported EMC Adapter instances.
VMware uses syslog-ng for sending logs to Log Insight. Search online for syslog-ng documentation. Refer to the EMC Simple Support Matrix for the EMC products that support Log Insight. The steps that follow represent an example of sending VNX and VMAX logs to Log Insight.
Procedure
1. Change to the directory that contains syslog-ng.conf:

cd /etc/syslog-ng

2. Save a backup copy of the file:

cp syslog-ng.conf syslog-ng.conf.noli
3. Save another copy to modify:
cp syslog-ng.conf syslog-ng.conf.tmp
4. Edit the temporary (.tmp) file by adding the following to the end of the file:
# LogInsight log forwarding for ESA
source esa_logs {
    internal();   # internal syslog-ng events; required
    # Path to a log file to monitor and forward. Repeat the file() entry
    # for each adapter log file of interest.
    file("/storage/vcops/log/adapters/EmcAdapter/ESA3_0_VNX_Adapter-1624/ESA3_0_VNX_Adapter-1624.log"
        follow_freq(1)    # how often to check the file, in seconds
        flags(no-parse)   # do not do any processing on the file
    );
    file("/storage/vcops/log/adapters/EmcAdapter/ESA3_0_Adapter_VMAX-1134065754/ESA3_0_Adapter_VMAX-1134065754.log"
        follow_freq(1) flags(no-parse));
    file("/storage/vcops/log/adapters/EmcAdapter/ESA3_0_Adapter_VMAX-1001/ESA3_0_Adapter_VMAX-1001.log"
        follow_freq(1) flags(no-parse));
};

# Protocol, destination IP address, and port
destination loginsight { udp("10.110.44.18" port(514)); };

# Connect the source and destination to start logging
log { source(esa_logs); destination(loginsight); };
5. Copy the .tmp file over the .conf file:

cp syslog-ng.conf.tmp syslog-ng.conf
6. Stop and restart logging:
Note
Use syslog, not syslog-ng, in this command.
service syslog restart
7. Log in to Log Insight to ensure that the logs are being sent.
Error handling and event logging

Errors in the EMC Storage Analytics operation are written to log files available through vRealize Operations Manager.
Error logs are available in the /data/vcops/log directory. This directory contains the vRealize Operations Manager logs.
Adapter logs (including adapters other than the EMC Adapter) are in /data/vcops/log/adapters.
View logs relating to EMC Storage Analytics operation in the vRealize Operations Manager GUI. Create and download a support bundle used for troubleshooting.
Viewing error logs

ESA enables you to view error log files for each adapter instance.
Procedure
1. Start the vRealize Operations Manager custom user interface and log in as administrator.
For example, in a web browser, type: http://vROPs_ip_address/vcops-web-ent
2. Select Administration > Support > Logs.
3. Expand the Collector folder, then the adapters folder, then the EmcAdapter folder. Log files appear under the EmcAdapter folder. Double-click a log entry in the log tree.
Entries appear in the Log Content pane.
Creating and downloading a support bundle

Procedure
1. Select Administration > Support > Support Bundles.
The bundle encapsulates all necessary logs.
2. Click the Create Support Bundle icon (+).
3. Select the bundle checkbox and click OK.
4. Click OK to generate the support bundle.
The bundle appears in the Support Bundles window.
5. When the bundle Status indicates Succeeded, select the bundle and click the Download Support Bundle icon (down arrow).
The bundle is downloaded to your local drive.
Log file sizes and rollover counts

Logs for each EMC Adapter instance are in folders under /data/vcops/log/adapters/EmcAdapter, one folder for each adapter instance.
For example, if you have five EMC Adapter instances, a directory (folder) appears for each of them.
Log files in this directory follow this naming convention:
EMC_adapter_name-adapter_instance_ID.log.rollover_count

For example: VNX_File-131.log.9

The log filename begins with the name of the EMC Adapter instance. Filenames beginning with EmcAdapter are common to all connectors.
The number that follows the EMC Adapter instance name is the adapter instance ID, which corresponds to a VMware internal ID.
The last number in the filename indicates the rollover increment. When the default log file size is reached, the system starts a new log file with a new increment. The lowest-numbered increment represents the most recent log. Each rollover is 10 MB (default value, recommended). Ten rollovers (default value) are allowed; the system deletes the oldest log files.
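As an illustration, the sample filename above can be split into its parts with standard shell parameter expansion:

```shell
# Split a sample adapter log filename (from the guide) into its parts.
f="VNX_File-131.log.9"
base="${f%%.log*}"          # strip ".log" and the rollover suffix -> VNX_File-131
adapter="${base%-*}"        # adapter instance name -> VNX_File
instance_id="${base##*-}"   # vRealize internal adapter instance ID -> 131
rollover="${f##*.}"         # rollover increment -> 9 (lowest number = most recent)
echo "$adapter $instance_id $rollover"   # prints: VNX_File 131 9
```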
Configuring log file sizes and rollover counts

Change the default values for all adapter instances or for a specific adapter instance.
Before you begin
CAUTION
EMC recommends that you not increase the 10 MB default value for the log file size. Increasing this value makes the log file more difficult to load and process as it grows in size. If more retention is necessary, increase the rollover count instead.
Procedure
1. On the vRealize Operations Manager virtual machine, find and edit the /usr/lib/vmware-vcops/user/plugins/inbound/emc-vcops-adapter/conf/adapter.properties file.
2. Locate these EMC Adapter instance properties:
com.emc.vcops.adapter.log.size=10MB
com.emc.vcops.adapter.log.count=10
3. To change the properties for all EMC Adapter instances, edit only the log size or log count values. For example:
com.emc.vcops.adapter.log.size=12MB
com.emc.vcops.adapter.log.count=15
4. To change the properties for a specific EMC Adapter instance, insert the EMC Adapter instance ID as shown in this example:
com.emc.vcops.adapter.356.log.size=8MB
com.emc.vcops.adapter.356.log.count=15
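Edits like these can also be scripted. The sketch below uses sed on a throwaway copy under /tmp rather than on the live adapter.properties file; the property names follow the examples in this section, and instance ID 356 is only illustrative:

```shell
# Create a demo copy containing the default properties from the guide.
cat > /tmp/adapter.properties.demo <<'EOF'
com.emc.vcops.adapter.log.size=10MB
com.emc.vcops.adapter.log.count=10
EOF

# Raise the rollover count for all adapter instances to 15.
sed -i 's/^com\.emc\.vcops\.adapter\.log\.count=.*/com.emc.vcops.adapter.log.count=15/' \
    /tmp/adapter.properties.demo

# Add an override for one adapter instance (ID 356, illustrative).
echo 'com.emc.vcops.adapter.356.log.size=8MB' >> /tmp/adapter.properties.demo

cat /tmp/adapter.properties.demo
```

On the appliance you would apply the same edits to the real adapter.properties path and then activate the change as described in Activating configuration changes.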
Activating configuration changes

Activate changes you made to the log file size or rollover count for an EMC Adapter instance.
Procedure
1. In vRealize Operations Manager, select Administration > Configuration > Inventory Explorer > Adapter Instances > EMC Adapter Instance.
2. Under List, select a resource from the list and click the Edit Object icon.
The Edit Object dialog box opens.
3. Click OK.
This step activates the changes you made to the log file size or rollover count for the EMC Adapter instance.
Verifying configuration changes

Verify the changes you made to the log file size or rollover counts of an EMC Adapter instance.
Procedure
1. Log into vRealize Operations Manager.
2. Change directories to /data/vcops/log/adapters/EmcAdapter.
3. Verify the changes you made to the size of the log files or the number of saved rollover backups.
If you changed:
- Only the default properties for log file size and rollover count, all adapter instance logs reflect the changes.
- Properties for a specific adapter instance, only the logs for that adapter instance reflect the changes.
- Log file size or rollover count to higher values, you do not see the resulting changes until those thresholds are crossed.
Managing the collection of XtremIO snapshots

XtremIO snapshots are collected by default. In some environments, an excessive number of snapshots in the system can cause performance issues for the vRealize Operations server. To avoid an excess of snapshots, turn off collection of XtremIO snapshots.
Before you begin
For multiple XtremIO adapter instances, use the instructions in Finding adapter instance IDs on page 46 to find the IDs for the adapters you want to modify.
Procedure
1. Log in to the vRealize Operations server using SSH.
Using SSH to connect to vRealize Operations Manager on page 53 provides instructions.
2. Open /usr/lib/vmware-vcops/user/plugins/inbound/emc-vcops-adapter/conf/adapter.properties.
3. Change: com.emc.vcops.adapter.xtremio.skip.snapshots=false to com.emc.vcops.adapter.xtremio.skip.snapshots=true.
For multiple XtremIO adapter instances, you can specify the adapter ID in the key so that the change applies only to the corresponding adapter instance. For example, to skip collecting snapshots for the XtremIO adapter instance with ID 623, modify the entry to: com.emc.vcops.adapter.623.xtremio.skip.snapshots=true
4. Follow the steps in Activating configuration changes on page 50 to save your changes.
Results

Snapshot collection is turned off.
Note
If the vRealize Operations environment is a multi-node cluster setup, change the configuration for each node.
Editing the Collection Interval for a resource

From the vRealize Operations Manager user interface, edit the Collection Interval for a resource.
The default interval time is five minutes. Changing this time affects the frequency of collection times for metrics, but the EMC Adapter recognizes the change only if the resource is the EMC Adapter instance. This is normal vRealize Operations Manager behavior.
Note
For Unity systems, the maximum collection interval is five minutes.
The vRealize Operations Manager online help provides instructions for configuring Resource Management settings.
Configuring the thread count for an adapter instance

Configure the thread count for an adapter instance for best performance.
EMC recommends that only administrative personnel perform this procedure. If the thread count is not specified in adapter.properties, the thread count defaults to the vCPU count + 2. The maximum allowed thread count is 20.
Procedure
1. Access the /usr/lib/vmware-vcops/user/plugins/inbound/emc-vcops-adapter/conf/adapter.properties file.
2. Open and edit the thread count property for all adapter instances or for a specific adapter instance.
- If you want to edit the thread count property for all adapter instances, change the com.emc.vcops.adapter.threadcount property.
- If you want to edit the thread count property for a specific adapter instance, insert the adapter instance ID after adapter and change the property value. For example: com.emc.vcops.adapter.7472.threadcount.
Note
To find an adapter instance ID, refer to Finding adapter instance IDs on page 46.
3. To activate the property change, restart the adapter instance in the vRealize Operations Manager.
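The default sizing rule described above (thread count = vCPU count + 2, capped at 20) can be sketched as:

```shell
# Compute the default adapter thread count per the rule above.
# vcpus is a sample value; on the vROps node it would come from `nproc`.
vcpus=4
threads=$(( vcpus + 2 ))
if [ "$threads" -gt 20 ]; then threads=20; fi
echo "$threads"   # prints: 6
```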
Using SSH to connect to vRealize Operations Manager

Use SSH to log in to vRealize Operations Manager as root.
Procedure
1. Open the VM console for the vRealize Operations Manager.
2. Press Alt-F1 to open the command prompt.
3. Enter root for the login and leave the password field blank.
You are prompted for a password.
4. Set the root password.
You are logged in.
5. Use this command to enable SSH:
service sshd start
You can use SSH to log in successfully.
Troubleshooting metrics and scoreboards
Table 4 Metrics and scoreboard questions
Symptom Problem or question Resolution
Unisphere Analyzer: Must the Unisphere Analyzer for VNX be running to collect metrics?
No. VNX Block metrics are gathered through naviseccli commands. VNX File metrics are gathered through CLI commands. However, statistics logging must be enabled on each SP on VNX Block, and statistics logging has a performance impact on the array. No additional services are required for VNX File.
Primary SP or CS down: Will ESA continue to collect VNX statistics if the primary SP or CS goes down?
Yes. ESA automatically collects metrics from the secondary CS if the CS fails over. The credentials on the secondary CS must match the credentials on the primary CS.
Resources and metrics per node: How many resources and metrics are supported per node in vRealize Operations Manager?
- Small node: 4 vCPU, 16 GB memory. Supports 2,000 objects and 1,000,000 metrics.
- Medium node: 8 vCPU, 32 GB memory. Supports 6,000 objects and 3,000,000 metrics.
- Large node: 16 vCPU, 64 GB memory. Supports 10,000 objects and 5,000,000 metrics.
Health score is 0: What does it mean when a resource has a health score of 0?
The resource is either down or not available.
Blue question mark: What does the blue question mark in the health score indicate?
A blue question mark indicates that vRealize Operations Manager was unable to poll that resource. It will retry during the next polling interval.
Health scores: How do health scores work?
Health scores measure a resource's behavior and grade it on a scale of 0-100. A health score of 100 indicates normal behavior, while a lower health score indicates that the resource is acting abnormally. The resource might not be in an unhealthy state, but there is an abnormality. Health scores are calculated by a proprietary algorithm that accounts for several factors, including thresholds and historical statistics. vRealize Operations Manager might take up to 30 days to gather enough information to determine what is considered normal in your environment. Until then, you might not see any changes in your health scores.
FAST Cache heat map: How does the FAST Cache heat map work?
The FAST Cache heat map is based on the FAST Cache read and write hit ratios. The heat map turns red when these ratios are low, indicating that FAST Cache is not being used efficiently, and turns green when FAST Cache is servicing a high percentage of I/O.
VMAX metrics: A VMAX device is not visible, and metrics are not collected, on a multi-node vRealize Operations cluster for a virtual machine -> VMAX device relationship.
The virtual machine -> VMAX device cross-adapter relationship is supported only on a single vRealize Operations node because of technical constraints. Metrics for a VMAX device are displayed only if it has a corresponding consumer.
Workaround: Create an extra VMware adapter instance on the node where the VMAX adapter instance is running.
Understanding error messages

Learn the meaning of resource down and license errors.
Table 5 Error messages
Symptom Problem or question Resolution
Invalid license error: Why do I receive an invalid license error message when I configure the adapter instance for VNX File, even though I purchased the license for the model of the VNX array that I plan to monitor?
The CS might not be reporting the correct model of the array. Log in to the CS and check the array model with the command /nas/sbin/model. Verify that the returned array model matches the model on the Right to Use certificate.
Resource down: Why are multiple EMC Adapter instances for my storage systems marked as down, even though I have added license keys for each of them?
License keys are specific to the model for which the license was purchased.
- Verify that you are using the correct license key for the adapter instance.
- After adding a license, click the Test button to test the configuration and validate the license key.
- If you saved the configuration without performing a test and the license is invalid, the adapter instance is marked as Resource down.
- To verify that a valid license exists, select Environment Overview. The list that appears shows the license status.
Resource down after upgrade: Why are my EMC Adapter instances marked down after upgrading to the latest version of the EMC Adapter?
EMC Adapter instances require a license to operate. Edit your EMC Adapter instances to add license keys obtained from EMC. Select Environment Overview > Configuration > Adapter Instances.
Resource down after CS failover: Why is the VNX File adapter instance marked as down and metric collection stopped after a CS failover?
The failover might have been successful, but the new CS might not be reporting the correct model of the array. This results in a failure to validate the license, and all data collection stops. Log in to the CS and check the array model with the command /nas/sbin/model. If the model returned does not match the actual model of the array, Primus case emc261291 in the EMC Knowledgebase provides possible solutions.
Understanding resources and relationships

Frequently asked questions about resources and relationships within vCenter are answered here.
Table 6 Questions about resources and relationships
Symptom Problem or question Resolution
vCenter resource details: How is the detailed view of vCenter resources affected in ESA?
Any changes in the disk system affect the health of vCenter resources such as virtual machines, but ESA does not show changes in other subsystems. Metrics for other subsystems show either No Data or ?.
Relationships: Can I see relationships between my vCenter and EMC storage resources?
Yes. Relationships between resources are not affected and you can see a top to bottom view of the virtual and storage infrastructures if the two are connected.
Deleted resource still appears: I deleted a resource. Why does it still appear in vRealize Operations Manager?
vRealize Operations Manager does not delete any resources automatically because it retains historical statistics and topology information that might be important to the user. The resource enters an unknown state (blue). To remove the resource, delete it on the Inventory Explorer page.
Nodes per cluster: How many nodes are supported per vRealize Operations Manager cluster?
vRealize Operations Manager clusters consist of a master node and data nodes. A total of eight nodes are supported: the master node (required) and up to seven data nodes.
References

Read these documents for more information.
VMware vRealize Operations Manager documentation
- vRealize Operations Manager Release Notes contains descriptions of known issues and workarounds.
- vRealize Operations Manager vApp Deployment and Configuration Guide explains installation, deployment, and management of vRealize Operations Manager.
- vRealize Operations Manager User Guide explains basic features and use of vRealize Operations Manager.
- vRealize Operations Manager Customization and Administration Guide describes how to configure and manage the vRealize Operations Manager custom interface.
VMware documentation is available at http://www.vmware.com/support/pubs.
EMC documentation
- EMC Storage Analytics Release Notes provides a list of the latest supported features, licensing information, and known issues.
- EMC Storage Analytics Product Guide (this document) provides installation and licensing instructions, a list of resource kinds and their metrics, and information about storage topologies and dashboards.
Note
The EMC Storage Analytics Community provides more information about installing and configuring ESA.
APPENDIX A
List of Alerts
ESA generates the listed events when the resources are queried. This appendix contains the following topics:
- Avamar alerts (page 60)
- Isilon alerts (page 61)
- RecoverPoint alerts (page 62)
- ScaleIO alerts (page 63)
- Unity, UnityVSA, and VNXe alerts (page 66)
- VMAX alerts (page 68)
- VNX Block alerts (page 68)
- VNX Block notifications (page 73)
- VNX File alerts (page 75)
- VNX File notifications (page 78)
- VPLEX alerts (page 82)
- XtremIO alerts (page 85)
Avamar alerts

ESA provides alerts for Avamar DPN, DDR, and Client resources.
Table 7 Avamar DPN alert messages
Alert message Badge Severity Condition Description/Recommendation
DPN used capacity (%) is high Risk Critical >= 90% Avamar system is almost full and may become read-only soon. Reclaim space or increase capacity.
Warning >= 80% Reclaim space or increase capacity.
Info >= 70% Monitor space usage and plan for growth accordingly.
The DPN has experienced a problem. State: Offline
Health Critical Offline If ConnectEMC has been enabled, a Service Request (SR) is logged. Go to EMC Online Support to view existing SRs. Search the knowledgebase for Avamar Data Node offline solution esg112792.
Avamar server has experienced a disk failure on one or more nodes. State: Degraded
Warning Degraded All operations are allowed, but immediate action should be taken to fix the problem.
Avamar Administrator was able to communicate with the Avamar server, but normal operations have been temporarily suspended. State: Suspended
Warning Suspended Restart or enable scheduler to resume backups and restores.
MCS could not communicate with this node. State: Time-Out
Health Critical Time-Out Refer to Avamar Administrator guide, Troubleshooting guide and KB articles for assistance.
Node status cannot be determined. State: Unknown
Critical Unknown
One or more Avamar server nodes are in an offline state. State: Node Offline
Warning Node Offline
Avamar Administrator was unable to communicate with the Avamar server. State: Inactive
Warning Inactive
Successful backups (%) in the last 24 hours is low
Risk Info <= 90% Investigate backup failures and remediate.
Warning <= 80% The system's ability to restore data may be compromised. Investigate backup failures and remediate.
Table 8 Avamar DDR alert messages
Alert message Badge Severity Condition Description/Recommendation
DDR used capacity (%) is high. Risk Critical >= 90% Data Domain system is almost full and may become read-only soon. Reclaim space or increase capacity
Warning >= 80% Data Domain system is becoming full. Reclaim space or increase capacity.
Info >= 70% Monitor space and plan for growth accordingly.
The file system has experienced a problem.
Health Critical Disabled Data Domain file system disabled. Contact administrator to enable. No backups or restores can be performed.
Critical Unknown Data Domain file system in an unknown state. Contact administrator to resolve. Backups and restores may fail.
Table 9 Avamar Client alert messages
Alert message Badge Severity Condition Description/Recommendation
The latest backup operation for this client has failed.
Risk Warning Failed Remediate failure.
The backup elapsed time for this client is high.
Efficiency Warning >= 24 hours Backups are running longer than expected. Investigate and remediate.
The change rate between backups exceeds 20%.
Efficiency Info Job Bytes Scanned >= 20%
Change rate exceeds 20%. Change Block Tracking may have been disabled.
Isilon alerts

Cluster and Node alerts are available for Isilon 8.0 and later. Alert messages are collected from the REST API.
Table 10 Isilon Cluster alert messages
Alert Message Badge Severity Type/ID
Allocation error detected. Risk Warning 800010002
System is running out of file descriptors. 800010006
File system problems detected. 899990001
Table 11 Isilon Node alert messages
Alert Message Badge Severity Type/ID
Clock failure has occurred. Health Info 900010001
CPU 0 about to throttle due to temperature. Risk Warning 900020026
CPU 1 about to throttle due to temperature. 900020027
Isilon System temperature out of spec. 900080030
CPU throttling Health 900020035
There are fan issues in Isilon System. Risk 999910003
There are temperature issues in Isilon System. 999910004
There are voltage issues in Isilon System. 999910005
There are storage transport issues in Isilon System. 999910007
Internal network link down. Efficiency 200020003
External network link down. 200020005
Node is offline. Health Critical 200010001
The snapshot reserve space is nearly full. Risk Info 600010005
Disk Errors detected. Health Immediate 199990001
RecoverPoint alerts

ESA provides RecoverPoint alerts based on events for Consistency Group, Copy, and vRPA, and alerts based on metrics for vRPA, Consistency Group, System, Cluster, and Splitter. The Cancel cycle and Wait cycle for these alerts is 1.
Table 12 RecoverPoint for Virtual Machines alerts based on message event symptoms
Resource kind Message summary Badge Severity Event message Recommendation
Consistency group
Problem with RecoverPoint consistency group.
Health Critical RecoverPoint consistency group state is unknown.
Check the status of the consistency group.
Warning RecoverPoint consistency group is disabled.
Copy Problem with RecoverPoint copy.
Health Critical RecoverPoint copy state is unknown.
Check the status of the copy.
Warning RecoverPoint copy state is disabled.
vRPA Problem with vRPA. Health Critical vRPA status is down. Check the status of the vRPA.
Warning vRPA status is removed for maintenance.
Immediate vRPA status is unknown.
Table 13 RecoverPoint for Virtual Machines alerts based on metrics
Resource kind
Message summary Metric and criteria Badge Severity Recommendation
vRPA Problem with vRPA. vRPA | CPU Utilization (%) > 95
Health Warning Check the status of the vRPA.
Consistency group
Consistency group protection window limit has been exceeded.
Consistency group protection window ratio < 1
Protection window limit has been exceeded.
Lag limit has been exceeded.
Link | Lag (%) > 95 Lag limit has been exceeded.
RecoverPoint for Virtual Machines system
Number of splitters is reaching upper limit. (Version 4.3.1)
RecoverPoint System | Number of splitters > 30
Risk Info Consider adding another RecoverPoint for Virtual Machines system.
Number of splitters is reaching upper limit. (Version 5.0)
RecoverPoint System | Number of splitters > 60
Cluster Number of consistency groups per cluster is reaching upper limit.
RecoverPoint cluster | number of consistency groups > 122
Consider adding another RecoverPoint cluster.
Number of vRPAs per cluster is reaching upper limit.
RecoverPoint cluster | number of vRPAs > 8
Consider adding another RecoverPoint cluster.
Number of protected virtual machines per cluster is reaching upper limit.
RecoverPoint cluster | number of protected virtual machines > 486
Consider adding another RecoverPoint cluster.
Number of protected volumes per cluster is reaching upper limit.
RecoverPoint cluster | number of protected VMDKs > 1946
The maximum number of protected volumes per vRPA cluster is 2K.
Splitter Number of attached volumes per splitter is reaching upper limit.
Splitter | number of volumes attached > 3890
The maximum number of attached volumes per splitter is 4K.
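The cluster limits in Table 13 are plain numeric guards, so the evaluation can be sketched in a few lines (the function and parameter names are illustrative, not part of ESA; the thresholds are copied from the table):

```python
# Scale-limit checks from Table 13; thresholds copied verbatim from the rows above.
# Function and argument names are illustrative only.
def recoverpoint_cluster_warnings(num_consistency_groups: int,
                                  num_vrpas: int,
                                  num_protected_vms: int,
                                  num_protected_vmdks: int) -> list:
    """Return the Risk messages that would fire for one vRPA cluster."""
    warnings = []
    if num_consistency_groups > 122:
        warnings.append("Number of consistency groups per cluster is reaching upper limit.")
    if num_vrpas > 8:
        warnings.append("Number of vRPAs per cluster is reaching upper limit.")
    if num_protected_vms > 486:
        warnings.append("Number of protected virtual machines per cluster is reaching upper limit.")
    if num_protected_vmdks > 1946:
        warnings.append("Number of protected volumes per cluster is reaching upper limit.")
    return warnings
```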
ScaleIO alerts ESA provides ScaleIO alerts for System, Protection Domain, Device Disk, SDS, Storage pool, SDC, and MDM.
Table 14 ScaleIO System alerts
Metric Badge Severity Condition
Used Capacity Risk Critical: > 95; Warning: > 85
Thick Used Capacity Critical: > 95; Warning: > 85
Thin Used Capacity Critical: > 95; Warning: > 85
Snap Used Capacity Critical: > 95; Warning: > 85
Table 15 ScaleIO Protection Domain alerts
Metric Badge Severity Condition
Status Health Critical Not Active
Used Capacity Risk Critical: > 95; Warning: > 85
Thick Used Capacity Critical: > 95; Warning: > 85
Thin Used Capacity Critical: > 95; Warning: > 85
Snap Used Capacity Critical: > 95; Warning: > 85
Table 16 ScaleIO Device/Disk alerts
Metric Badge Severity Condition
Status Health Critical: Error; Info: Remove, Pending
Used Capacity Risk Critical: > 95; Warning: > 85
Spare Capacity Allocated Critical: > 95; Warning: > 85
Thick Used Capacity Critical: > 95; Warning: > 85
Thin Used Capacity Critical: > 95; Warning: > 85
Protected Capacity Critical: > 95; Warning: > 85
Table 17 ScaleIO SDS alerts
Metric Badge Severity Condition
Status Health Critical Disconnected
Used Capacity Risk Critical: > 95; Warning: > 85
Thick Used Capacity Critical: > 95; Warning: > 85
Thin Used Capacity Critical: > 95; Warning: > 85
Protected Capacity (Note: not available from REST API) Critical: > 95; Warning: > 85
Snap Used Capacity Critical: > 95; Warning: > 85
Table 18 ScaleIO Storage Pool alerts
Metric Badge Severity Condition
Status (Note: not available from REST API) Health Critical: Degraded capacity; Warning: Unreachable capacity, Unavailable unused capacity, Extremely unbalanced, Unbalanced
Used Capacity Risk Critical: > 95; Warning: > 85
Thick Used Capacity Critical: > 95; Warning: > 85
Thin Used Capacity Critical: > 95; Warning: > 85
Protected Capacity Critical: > 95; Warning: > 85
Snap Used Capacity Critical: > 95; Warning: > 85
Table 19 ScaleIO SDC alerts
Metric Badge Severity Condition
State Health Critical Disconnected
Table 20 ScaleIO MDM Cluster alerts
Metric Badge Severity Condition
State Health Critical: Not clustered; Clustered degraded; Clustered tie breaker down; Clustered degraded tie breaker down
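Every ScaleIO capacity alert in Tables 14 through 18 uses the same two thresholds, so a single rule covers them all. A sketch under that reading (the function name is illustrative, not part of ESA):

```python
# Used-capacity severity shared by the ScaleIO tables above:
# Critical above 95%, Warning above 85%, otherwise no alert.
# The function name is illustrative only.
def scaleio_capacity_severity(used_percent: float):
    """Return 'Critical', 'Warning', or None for a used-capacity percentage."""
    if used_percent > 95:
        return "Critical"
    if used_percent > 85:
        return "Warning"
    return None
```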
Unity, UnityVSA, and VNXe alerts ESA provides alerts for the following resources on Unity, UnityVSA, and VNXe: Disk, Tier, Storage Pool, Storage Processor, LUN, File System, and NAS Server.
Table 21 Unity, UnityVSA, and VNXe alerts
Resource kind
Metric Badge Severity Condition Message summary
Disk Total Latency (ms)
Risk Critical > 75 Disk total latency (ms) is high.
Immediate > 50
Warning > 25
State Health Critical Includes "critical"
This disk is reporting a problem.
Immediate
Warning
Info
Tier Full (%) Risk Info > 95 Consumed capacity (%) of this tier is high.
Storage Pool Full (%) Risk Critical > 90 Consumed capacity (%) of this storage pool is high.
Immediate > 85
Efficiency Info < 5 Consumed capacity (%) of this storage pool is low.
State Health Critical Includes "critical"
This storage pool is reporting a problem.
Immediate
Warning
Info
SP (Storage Processor)
CIFS SMBv1 Read Response (ms)
Risk Critical > 75 CIFS SMBv1 average read response time (ms) is high.
Immediate > 50
Warning > 25
CIFS SMBv1 Write Response (ms)
Risk Critical > 75
Immediate > 50
Warning > 25
CIFS SMBv2 Read Response (ms)
Risk Critical > 75 CIFS SMBv2 average read response time (ms) is high.
Immediate > 50
Warning > 25
CIFS SMBv2 Write Response (ms)
Risk Critical > 75
Immediate > 50
Warning > 25
NFS v3 Read Response (ms)
Risk Critical > 75 NFSv3 average read response time (ms) is high.
Immediate > 50
Warning > 25
NFS v3 Write Response (ms)
Risk Critical > 75
Immediate > 50
Warning > 25
State Health Critical Includes "critical"
This storage processor is reporting a problem.
Immediate
Warning
Info
LUN State Health Critical Condition includes critical
This LUN is reporting a problem.
Immediate
Warning
Info
File System State Health Critical Condition includes critical
This file system is reporting a problem.
Immediate
Warning
Info
NAS Server State Health Critical Condition includes critical
This NAS Server is reporting a problem.
Immediate
Warning
Info
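The response-time rows in Table 21 all share one tiering: Critical above 75 ms, Immediate above 50 ms, Warning above 25 ms. That tiering can be sketched as follows (names are illustrative, not part of ESA):

```python
# Response-time tiers used throughout Table 21 for CIFS and NFS metrics:
# > 75 ms Critical, > 50 ms Immediate, > 25 ms Warning.
# The function name is illustrative only.
def unity_response_severity(response_ms: float):
    """Return the Risk severity for an average response time in ms."""
    if response_ms > 75:
        return "Critical"
    if response_ms > 50:
        return "Immediate"
    if response_ms > 25:
        return "Warning"
    return None
```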
VMAX alerts ESA provides alerts for VMAX Device, Storage Resource Pool, and SLO resources. The Wait Cycle is 1 for all these VMAX alerts.
Table 22 VMAX alerts
Resource kind
Symptom Badge Severity Condition Message
Device VmaxDevice_percent_full98.0 Risk Critical > 98 Device available capacity is low.
VmaxDevice_percent_full95.0 Risk Immediate > 95 Device available capacity is low.
SRP (VMAX3 Storage Resource Pool)
VmaxSRPStoragePool_percent_full98.0 Risk Critical > 98 Storage resource pool available capacity is low.
VmaxSRPStoragePool_percent_full95.0 Risk Immediate > 95 Storage resource pool available capacity is low.
SLO Compliance Risk Warning is MARGINAL SLO compliance status needs attention.
Critical is CRITICAL SLO compliance status needs attention.
VNX Block alerts ESA provides alerts for the following resources on VNX Block: Storage Pool, FAST Cache, Tier, Storage Processor, RAID Group, Disk, LUN, Port, Fan and Power Supply, and Array.
Table 23 VNX Block alerts
Resource type
Metric Badge Severity Condition Message summary
Storage Pool
Full (%) Risk Critical > 90 Capacity used in this storage pool is very high.
Immediate > 85 Capacity used in this storage pool is very high.
Efficiency Info < 5 Capacity used in this storage pool is low.
Subscribed (%) Risk Info > 100 This storage pool is oversubscribed.
State Health Critical Offline This storage pool is offline.
Faulted This storage pool is faulted.
Expansion Failed This storage pool's expansion failed.
Cancel Expansion Failed The cancellation of this storage pool's expansion failed.
Verification Failed The verification of this storage pool failed.
Initialize Failed The initialization of this storage pool failed.
Destroy Failed The destruction of this storage pool failed.
Warning Offline and Recovering This storage pool is offline and recovering.
Critical Offline and Recovery Failed
The recovery of this offline storage pool failed.
Warning Offline and Verifying This storage pool is offline and verifying.
Critical Offline and Verification Failed
This storage pool is offline and verification failed.
Faulted and Expanding This storage pool is faulted and expanding.
Faulted and Expansion Failed
The expansion of this storage pool failed.
Faulted and Cancelling Expansion
This storage pool is faulted and is cancelling an expansion.
Faulted and Cancel Expansion Failed
This storage pool is faulted and the cancellation of the expansion failed.
Faulted and Verifying This storage pool is faulted and verifying.
Faulted and Verification Failed
This storage pool is faulted and verification failed.
Unknown The status of this storage pool is unknown.
FAST Cache State Health Info Enabling FAST Cache is enabling.
Warning Enabled_Degraded FAST Cache is enabled but degraded.
Info Disabling FAST Cache is disabling.
Warning Disabled FAST Cache is created but disabled.
Critical Disabled_Faulted FAST Cache is faulted.
Critical Unknown The state of FAST Cache is unknown.
Tier Subscribed (%) Risk Info > 95 Consumed capacity (%) of this tier is high.
Storage Processor
Busy (%) Risk Warning > 90 Storage processor utilization is high.
Info > 80 Storage processor utilization is high.
Read Cache Hit Ratio (%)
Efficiency Info < 50 Storage processor read cache hit ratio is low.
Dirty Cache Pages (%)
Efficiency Critical > 95 Storage processor dirty cache pages is high.
Info < 10 Storage processor dirty cache pages is low.
Write Cache Hit Ratio (%)
Efficiency Warning < 20 Storage processor write cache hit ratio is low.
Info < 25 Storage processor write cache hit ratio is low.
N/A Health Critical N/A Storage processor could not be reached by CLI.
RAID Group Full (%) Risk Info > 90 RAID group capacity used is high.
Efficiency Info < 5 RAID group capacity used is low.
State Health Critical Invalid The status of this RAID group is invalid.
Info Explicit_Remove This RAID group is being explicitly removed.
Info Expanding This RAID group is expanding.
Info Defragmenting This RAID group is defragmenting.
Critical Halted This RAID group is halted.
Info Busy This RAID group is busy.
Critical Unknown This RAID group is unknown.
Disk Busy (%) Risk Critical > 95 Disk utilization is high.
Immediate > 90 Disk utilization is high.
Warning > 85
Info > 75
Hard Read Error (count)
Health Critical > 10 Disk has read error.
Immediate > 5 Disk has read error.
Warning > 0 Disk has read error.
Hard Write Error (count)
Health Critical > 10 Disk has write error.
Immediate > 5 Disk has write error.
Warning > 0 Disk has write error.
Response Time (ms) Risk Critical > 75 And Total IO/s > 1 Disk average response time (ms) is in range; disk is not idle.
Immediate 75 >= x > 50 And Total IO/s > 1 Disk average response time (ms) is in range; disk is not idle.
Warning 50 >= x > 25 And Total IO/s > 1 Disk average response time (ms) is in range; disk is not idle.
State Health Critical Removed This disk is removed.
Faulted The disk is faulted.
Unsupported The disk is unsupported.
Unknown The disk is unknown.
Info Powering up The disk is powering up.
Unbound The disk is unbound.
Warning Rebuilding The disk is rebuilding.
Info Binding The disk is binding.
Info Formatting The disk is formatting.
Warning Equalizing The disk is equalizing.
Info Unformatted The disk is unformatted.
Probation The disk is in probation.
Warning Copying to Hot Spare The disk is copying to hot spare.
N/A Critical N/A Disk failure occurred.
LUN Service Time (ms)
Risk Critical > 25 LUN service time (ms) is in range.
And Total IO/s > 1 LUN is not idle.
Immediate > 25 And Total IO/s > 1 LUN service time (ms) is in range; LUN is not idle.
Warning > 25 And Total IO/s > 1 LUN service time (ms) is in range; LUN is not idle.
Latency (ms) Risk Critical > 75 And Total IO/s > 1 LUN total latency (ms) is in range; LUN is not idle.
Immediate 75 >= x > 50 And Total IO/s > 1 LUN total latency (ms) is in range; LUN is not idle.
Warning 50 >= x > 25 And Total IO/s > 1 LUN total latency (ms) is in range; LUN is not idle.
State Health Critical Device Map Corrupt This LUN's device map is corrupt.
Faulted This LUN is faulted.
Unsupported This LUN is unsupported.
Unknown This LUN is unknown.
Info Binding This LUN is binding.
Warning Degraded This LUN is degraded.
Info Transitioning This LUN is transitioning.
Info Queued This LUN is queued.
Critical Offline This LUN is offline.
Port N/A Health Info N/A Link down occurred.
N/A The port is not in use.
Warning N/A Link down occurred.
Info N/A The port is not in use.
Fan and Power Supply
N/A Health Critical N/A Device (FAN or Power Supply) is having a problem. Device state is "empty."
Warning N/A Device (FAN or Power Supply) is having a problem. Device state is "unknown."
Critical N/A Device (FAN or Power Supply) is having a problem. Device state is "removed."
N/A Device (FAN or Power Supply) is having a problem. Device state is "faulted."
N/A Device (FAN or Power Supply) is having a problem. Device state is "missing."
Array N/A Health Warning N/A Statistics logging is disabled. Performance data won't be available until it is enabled.
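Several VNX Block rows pair a latency threshold with "Total IO/s > 1," so an alert never fires on an idle device. A sketch of that compound rule, using the Disk Response Time thresholds from Table 23 (names are illustrative, not part of ESA):

```python
# Compound condition from the VNX Block Disk Response Time rows:
# a severity fires only when the disk is not idle (Total IO/s > 1).
# The function name is illustrative only.
def vnx_disk_response_severity(response_ms: float, total_ios: float):
    """Return the Risk severity, or None for idle or healthy disks."""
    if total_ios <= 1:
        return None  # idle disk: no alert regardless of response time
    if response_ms > 75:
        return "Critical"
    if response_ms > 50:
        return "Immediate"
    if response_ms > 25:
        return "Warning"
    return None
```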
VNX Block notifications ESA provides the following notifications for the VNX Block resources listed in the table in this section.
Table 24 VNX Block notifications
Category Resource kind Message
Failures Disk Disk failure occurred.
SP Front-end Port Link down occurred.
Background Event Disk Disk rebuilding started. Disk rebuilding completed. Disk zeroing started.
Note: This alert is not available for 1st generation models.
Disk zeroing completed.
Note: This alert is not available for 1st generation models.
LUN LUN migration queued. LUN migration completed.
LUN migration halted. LUN migration started.
EMC Adapter Instance Fast VP relocation resumed.
Note: This alert is not available for 1st generation models.
Fast VP relocation paused.
Note: This alert is not available for 1st generation models.
Storage Pool Fast VP relocation started. Fast VP relocation stopped. Fast VP relocation completed.
Storage Processor SP boot up. SP is down.
Note: This alert is not available for 1st generation models.
FAST Cache FAST Cache started.
Configuration Storage Pool Storage Pool background initialization started.
Storage Pool background initialization completed.
LUN LUN creation started.
LUN creation completed.
Snapshot snapshot name creation completed.
EMC Adapter Instance SP Write Cache was disabled. SP Write Cache was enabled.
Note: This alert is not available for 1st generation models.
Non-Disruptive upgrading started. Non-Disruptive upgrading completed.
LUN Deduplication on LUN was disabled.
Note: This alert is not available for 1st generation models.
Deduplication on LUN was enabled.
Note: This alert is not available for 1st generation models.
Storage Pool Deduplication on Storage Pool paused.
Note: This alert is not available for 1st generation models.
Deduplication on Storage Pool resumed.
Note: This alert is not available for 1st generation models.
LUN Compression on LUN started. Compression on LUN completed. Compression on LUN was turned off.
VNX File alerts ESA provides alerts for File Pool, Disk Volume, File System, and Data Mover resources for VNX File.
Table 25 VNX File alerts
Resource kind Metric Badge Severity Condition Message summary
File Pool Full (%) Risk Critical > 90 Capacity consumed of the file pool is high.
Immediate > 85
Efficiency Info < 5 Capacity consumed of the file pool is low.
Disk Volume Request Comp. Time (s)
Risk Critical > 25,000 dVol's average request completion time is high.
Immediate > 15,000
Warning > 10,000
Service Comp. Time (s)
Risk Critical > 25,000
Immediate > 15,000
Warning > 10,000
File System Full (%) Risk Critical > 90 Capacity consumed of this file system is high.
Immediate > 85
Efficiency Info < 5
Data Mover NFS v2 Read Response (ms)
Risk Critical > 75 NFS v2 average read response time is high.
Immediate > 50
Warning > 25
NFS v2 Write Response (ms)
Risk Critical > 75 NFS v2 Average write response time is high.
Immediate > 50
Warning > 25
NFS v3 Read Response (ms)
Risk Critical > 75 NFS v3 average read response time is high.
Immediate > 50
Warning > 25
NFS v3 Write Response (ms)
Risk Critical > 75 NFS v3 average write response time is high.
Immediate > 50
Warning > 25
NFS v4 Read Response (ms)
Risk Critical > 75 NFS v4 average read response time is high.
Immediate > 50
Warning > 25
NFS v4 Write Response (ms)
Risk Critical > 75 NFS v4 average write response time is high.
Immediate > 50
Warning > 25
CIFS SMBv1 Read Response (ms)
Risk Critical > 75 CIFS SMB v1 average read response time is high.
Immediate > 50
Warning > 25
CIFS SMBv1 Write Response (ms)
Risk Critical > 75 CIFS SMB v1 average write response time is high.
Immediate > 50
Warning > 25
CIFS SMBv2 Read Response (ms)
Risk Critical > 75 CIFS SMB v2 average read response time is high.
Immediate > 50
Warning > 25
CIFS SMBv2 Write Response (ms)
Risk Critical > 75 CIFS SMB v2 average write response time is high.
Immediate > 50
Warning > 25
State Health Info Offline Data Mover is powered off.
Error Disabled Data Mover will not reboot.
Out_of_service Data Mover cannot provide service. (For example, taken over by its standby)
Warning Boot_level=0 Data Mover is powered up.
Data Mover is booted to BIOS.
Data Mover is booted to DOS.
DART is loaded and initializing.
DART is initialized.
Info Data Mover is controlled by control station.
Error Fault/Panic Data Mover has faulted.
Online Data Mover is inserted and has power, but not active or ready.
Slot_empty There is no Data Mover in the slot.
Unknown Cannot determine the Data Mover state.
Hardware misconfigured Data Mover hardware is misconfigured.
Hardware error Data Mover hardware has an error.
Firmware error Data Mover firmware has an error.
Data Mover firmware is updating.
VNX File notifications ESA provides notifications for the VNX File resources listed in the table in this section.
Table 26 VNX File notifications
Category Resource type Message
Control Station Events
Array The NAS Command Service daemon is shutting down abnormally. (MessageID:ID)
The NAS Command Service daemon is shutting down abnormally. (MessageID:ID)
The NAS Command Service daemon is shut down completely.
The NAS Command Service daemon is forced to shut down. (MessageID:ID)
Data Mover Warm reboot is about to start on this data mover.
Unable to warm reboot this data mover.
Cold reboot has been performed.
EMC Adapter instance AC power has been lost. VNX storage system will be powered down in timeout_wait seconds. (MessageID:ID)
AC power is restored and back on.
File system Automatic extension failed. Reason: Internal error. COMMAND:COMMAND, ERROR:ERROR, STAMP:DM_EVENT_STAMP (MessageID:ID)
Automatic extension started.
Automatic extension failed. Reason: File system has reached the maximum size. STAMP:DM_EVENT_STAMP (MessageID:ID)
Automatic extension failed. Reason: Percentage used could not be determined. STAMP:DM_EVENT_STAMP (MessageID:ID)
Automatic extension failed. Reason: Filesystem size could not be determined. STAMP:DM_EVENT_STAMP (MessageID:ID)
Automatic extension failed. Reason: Available space could not be determined. STAMP:DM_EVENT_STAMP (MessageID:ID)
Automatic extension failed. Reason: File system is not RW mounted. STAMP:DM_EVENT_STAMP (MessageID:ID)
Automatic extension failed. Reason: Insufficient available space. STAMP:DM_EVENT_STAMP (MessageID:ID)
Automatic extension failed. Reason: Available pool size could not be determined. STAMP:DM_EVENT_STAMP (MessageID:ID)
Automatic extension failed. Reason: Slice flag could not be determined. STAMP:DM_EVENT_STAMP (MessageID:ID)
Automatic extension failed. Reason: Available space is not sufficient for minimum size extension. STAMP:DM_EVENT_STAMP (MessageID:ID)
Automatic extension failed. Reason: Maximum filesystem size could not be determined. STAMP:DM_EVENT_STAMP (MessageID:ID)
Automatic extension failed. Reason: High Water Mark (HWM) could not be determined. STAMP:DM_EVENT_STAMP (MessageID:ID)
Forced automatic extension started.
Automatic extension ended.
Automatic extension ended. The filesystem is now at its maximum size limit.
Forced automatic extension is cancelled. The requested extension size is less than the high water mark (HWM) set for the filesystem. The filesystem's available storage pool size will be used as the extension size instead of the requested size.
Automatic extension completed.
Forced automatic extension completed. The file system is at the maximum size.
Automatic extension failed. Reason: Volume ID could not be determined. STAMP:DM_EVENT_STAMP (MessageID:ID)
Automatic extension failed. Reason: Storage system ID could not be determined. STAMP:DM_EVENT_STAMP (MessageID:ID)
Automatic extension failed. Reason: Filesystem is spread across multiple storage systems. STAMP:DM_EVENT_STAMP (MessageID:ID)
Automatic extension failed. STAMP:DM_EVENT_STAMP (MessageID:ID)
EMC Adapter instance The JServer is not able to start. VNX File System statistics will be impacted. (MessageID:ID)
File system Filesystem is using condition of its cap_setting prop_name capacity.
Filesystem has condition of its cap_setting prop_name capacity available.
File pool Storage pool is using condition of its value cap_setting capacity.
Storage pool has condition of its cap_setting capacity available.
File system Filesystem is using condition of the maximum allowable file system size (16 TB).
Filesystem has condition of the maximum allowable file system size (16 TB).
Filesystem is using condition of the maximum storage pool capacity available.
Filesystem has condition of the maximum storage pool capacity available.
Filesystem will fill its value cap_setting capacity on sdate.
File pool Storage pool will fill its cap_setting capacity on sdate.
File system Filesystem will reach the 16 TB file system size limit on sdate.
Filesystem will fill its storage pool's maximum capacity on sdate.
Data Mover Data Mover is using stat_value of its stat_name capacity.
File pool Storage usage has crossed threshold value threshold and has reached to pool_usage_percentage.
Storage usage has crossed threshold threshold pool_usage_percentage and has reached to value.
File system Filesystem has filled its cap_setting prop_name capacity.
File pool Storage pool has filled its cap_setting capacity.
File system Filesystem has almost filled its cap_setting prop_name capacity.
File pool Storage pool has almost filled its cap_setting capacity.
File system Filesystem is using condition of its current node capacity.
Dart Events Data Mover The SCSI HBA hbano is operating normally.
The SCSI HBA hbano has failed. (MessageID:ID)
The SCSI HBA hbano is inaccessible. (MessageID:ID)
File system Filesystem has encountered a critical fault and is being unmounted internally. (MessageID:ID)
Filesystem has encountered corrupted metadata and filesystem operation is being fenced. (MessageID:ID)
Filesystem usage rate currentUsage% crossed the high water mark threshold usageHWM%. Its size will be automatically extended.
Filesystem is full.
EMC Adapter instance Power Supply A in Data Mover Enclosure was removed.
Power Supply A in Data Mover Enclosure is OK.
Power Supply A in Data Mover Enclosure failed: details (MessageID:ID)
Power Supply B in Data Mover Enclosure was installed.
Power Supply B in Data Mover Enclosure was removed.
Power Supply B in Data Mover Enclosure is OK.
Power Supply B in Data Mover Enclosure failed: details (MessageID:ID)
One or more fans in Fan Module 1 in Data Mover Enclosure failed. (MessageID:ID)
One or more fans in Fan Module 2 in Data Mover Enclosure failed. (MessageID:ID)
One or more fans in Fan Module 3 in Data Mover Enclosure failed. (MessageID:ID)
Multiple fans in Data Mover Enclosure failed. (MessageID:ID)
All Fan Modules in Data Mover Enclosure are in OK status.
Power Supply A in Data Mover Enclosure is going to shut down due to overheating. (MessageID:ID)
Power Supply B in Data Mover Enclosure is going to shut down due to overheating. (MessageID:ID)
Both Power Supplies in Data Mover Enclosure are going to shut down due to overheating. (MessageID:ID)
Power Supply A in Data Mover Enclosure was installed.
Data Mover DNS server serverAddr is not responding. Reason: reason (MessageID:ID)
Network device deviceName is down. (MessageID:ID)
File system Automatic fsck is started via Data Mover DATA_MOVER_NAME. Filesystem may be corrupted. (MessageID:ID)
Manual fsck is started via Data Mover DATA_MOVER_NAME.
Automatic fsck succeeded via Data Mover DATA_MOVER_NAME.
Manual fsck succeeded via Data Mover DATA_MOVER_NAME.
Automatic fsck failed via Data Mover DATA_MOVER_NAME.
Manual fsck failed via Data Mover DATA_MOVER_NAME.
VPLEX alerts ESA provides alerts for the following VPLEX resources: Cluster, FC Port, Ethernet, Local Device, Storage View, Storage Volume, Virtual Volume, VPLEX Metro, Distributed Device, Engine, Director, and Extent.
Table 27 VPLEX alerts
Resource kind
Message Badge Recommendation Severity Condition
Cluster VPLEX cluster is having a problem.
Health Check the health state of your VPLEX cluster. Ignore this alert if the health state is expected.
Critical VPLEX cluster health state is major-failure.
VPLEX cluster health state is critical-failure.
Immediate VPLEX cluster health state is unknown.
Warning VPLEX cluster health state is minor-failure.
VPLEX cluster health state is degraded.
FC Port FC port is having a problem.
Health Check the operational status of your FC port. Ignore this alert if the operational status is expected.
Critical FC port operational status is error.
FC port operational status is lost-communication.
Immediate FC port operational status is unknown.
Warning FC port operational status is degraded.
FC port operational status is stopped.
Ethernet Port Ethernet port is having a problem.
Health Check the operational status of your Ethernet port. Ignore this alert if the operational status is expected.
Critical Ethernet port operational status is error.
Ethernet port operational status is lost-communication.
Immediate Ethernet port operational status is unknown.
Warning Ethernet port operational status is degraded.
Ethernet port operational status is stopped.
Local Device Local device is having a problem.
Health Check the health state of your local device. Ignore this alert if the health state is expected.
Critical Local device health state is major-failure.
Local device health state is critical-failure.
Immediate Local device health state is unknown.
Warning Local device health state is minor-failure.
Local device health state is degraded.
Storage View Storage view is having a problem.
Health Check the operational status of your storage view. Ignore this alert if the operational status is expected.
Critical Storage view operational status is error.
Warning Storage view operational status is degraded.
Storage view operational status is stopped.
Storage Volume Storage volume is having a problem.
Health Check the health state of your storage volume. Ignore this alert if the health state is expected.
Critical Storage volume health state is critical-failure.
Immediate Storage volume health state is unknown.
Warning Storage volume health state is non-recoverable-error.
Storage volume health state is degraded.
Virtual Volume Virtual volume is having a problem.
Health Check the health state of your virtual volume. Ignore this alert if the health state is expected.
Critical Virtual volume health state is critical-failure.
Virtual volume health state is major-failure.
Immediate Virtual volume health state is unknown.
Warning Virtual volume health state is minor-failure.
Virtual volume health state is degraded.
VPLEX Metro VPLEX metro is having a problem.
Health Check the health state of your VPLEX metro. Ignore this alert if the health state is expected.
Critical VPLEX metro health state is critical-failure.
VPLEX metro health state is major-failure.
Immediate VPLEX metro health state is unknown.
Warning VPLEX metro health state is minor-failure.
VPLEX metro health state is degraded.
Distributed Device
Distributed device is having a problem.
Health Check the health state of your distributed device. Ignore this alert if the health state is expected.
Critical Distributed device health state is critical-failure.
Distributed device health state is major-failure.
Immediate Distributed device health state is unknown.
Warning Distributed device health state is minor-failure.
Distributed device health state is non-recoverable-error.
Distributed device health state is degraded.
Engine Engine is having a problem.
Health Check the operational status of your engine. Ignore this alert if the operational status is expected.
Critical Engine operational status is error.
Engine operational status is lost- communication.
Immediate Engine operational status is unknown.
Warning Engine operational status is degraded.
Director Director is having a problem.
Health Check the operational status of your director. Ignore this alert if the health state is expected.
Critical Director operational status is critical-failure.
Director operational status is major-failure.
Immediate Director operational status is unknown.
Warning Director operational status is minor-failure.
Director operational status is degraded.
Extent Extent is having a problem.
Health Check the health state of your extent. Ignore this alert if the health state is expected.
Critical Extent health state is critical- failure.
Immediate Extent health state is unknown.
Warning Extent health state is non- recoverable-error.
Extent health state is degraded.
XtremIO alerts ESA provides alerts for XtremIO Cluster, Storage Controller, Disk Array Enclosure (DAE), DAE Row Controller, and NVRAM resources, and alerts based on metrics for Cluster, SSD, Volume, and Snapshot resources. The Wait Cycle is 1 for all these XtremIO alerts.
Table 28 XtremIO alerts based on external events
Resource kind
Message Badge Recommendation Severity Condition
Cluster XtremIO cluster is having a problem.
Health Check the state of your XtremIO cluster. Ignore this alert if the state is expected.
Critical XtremIO cluster health state is "failed."
Warning XtremIO cluster health state is "degraded."
XtremIO cluster health state is "partial fault."
Storage Controller
Storage controller is having a problem.
Health Check the state of your storage controller. Ignore this alert if the state is expected.
Critical Storage controller health state is "failed."
Warning Storage controller health state is "degraded."
Storage controller health state is "partial fault."
DAE XtremIO DAE is having a problem.
Health Check the health state of the DAE.
Warning DAE health state is "minor failure."
Immediate DAE health state is "major failure."
Critical DAE health state is "critical failure."
Warning DAE health state is "initializing."
Warning DAE health state is "uninitialized."
Critical DAE health state is "failed."
Critical DAE health state is "disconnected."
DAE Row Controller
XtremIO DAE Row Controller is having a problem.
Health Check the health state of the DAE Row Controller.
Warning DAE Row Controller health state is "minor failure."
Immediate DAE Row Controller health state is "major failure."
Critical DAE Row Controller health state is "critical failure."
Warning DAE Row Controller health state is "initializing."
Warning DAE Row Controller health state is "uninitialized."
Critical DAE Row Controller health state is "failed."
Critical DAE Row Controller health state is "disconnected."
XtremIO NVRAM
XtremIO NVRAM is having a problem.
Health Check the health state of the NVRAM.
Warning NVRAM health state is "minor failure."
Immediate NVRAM health state is "major failure."
Critical NVRAM health state is "critical failure."
Warning NVRAM health state is "initializing."
Warning NVRAM health state is "uninitialized."
Critical NVRAM health state is "failed."
Critical NVRAM health state is "disconnected."
Table 29 XtremIO alerts based on metrics
Resource kind
Message Badge Severity Condition Recommendation
Cluster SSD Consumed Capacity Ratio (%) is high.
Health Warning Consumed Capacity Ratio (%) >= 60
1. Free capacity from cluster
2. Extend capacity of cluster
Subscription Ratio is high.
Subscription Ratio >= 5
1. Unsubscribe capacity from cluster
2. Extend capacity of cluster
Physical capacity used in the cluster is high.
Risk Consumed capacity >= 90%
Migrate the volume to another cluster.
Physical capacity used in the cluster is low.
Efficiency Consumed capacity <= 5%
Cluster is not fully utilized. Possible waste.
Endurance Remaining (%) is low.
Health Endurance Remaining (%) <= 10
Replace SSD
Volume Average Small Reads (IO/s) is out of normal range.*
Health Warning Average Small Read Ratio >= 20
Check the status of the volume.
Average Small Writes (IO/s) is out of normal range.*
Average Small Write Ratio >= 20
Check the status of the volume.
Average Unaligned Reads (IO/s) is out of normal range.*
Average Unaligned Read Ratio >= 20
Check the status of the volume.
Average Unaligned Writes (IO/s) is out of normal range.*
Average Unaligned Write Ratio >= 20
Check the status of the volume.
Capacity used in the volume is high.
Risk Consumed capacity >= 90%
Extend the capacity of the volume.
Capacity used in the volume is low.
Efficiency Consumed capacity <= 5%
Volume is not fully utilized. Possible waste.
Snapshot Average Small Reads (IO/s) is out of normal range.*
Health Warning Average Small Read Ratio >= 20
Check the status of the snapshot.
Average Small Writes (IO/s) is out of normal range.*
Average Small Write Ratio >= 20
Check the status of the snapshot.
Average Unaligned Reads (IO/s) is out of normal range.*
Average Unaligned Read Ratio >= 20
Check the status of the snapshot.
Average Unaligned Writes (IO/s) is out of normal range.*
Average Unaligned Write Ratio >= 20
Check the status of the snapshot.
* Alerts for these metrics are disabled by default to align with the XMS defaults. You can enable them using the procedure in Enabling XtremIO alert settings.
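The metric-based conditions in Table 29 amount to simple threshold checks. A minimal sketch for the cluster-level rows follows; the function and parameter names are illustrative assumptions, not ESA code:

```python
def cluster_badges(consumed_ratio_pct, subscription_ratio, consumed_pct, endurance_pct):
    """Evaluate the Table 29 cluster/SSD threshold conditions.

    Returns a list of (badge, message) pairs for every condition that fires.
    Names are illustrative; this is not part of EMC Storage Analytics.
    """
    alerts = []
    if consumed_ratio_pct >= 60:
        alerts.append(("Health", "Consumed Capacity Ratio (%) is high."))
    if subscription_ratio >= 5:
        alerts.append(("Health", "Subscription Ratio is high."))
    if consumed_pct >= 90:
        alerts.append(("Risk", "Physical capacity used in the cluster is high."))
    if consumed_pct <= 5:
        alerts.append(("Efficiency", "Physical capacity used in the cluster is low."))
    if endurance_pct <= 10:
        alerts.append(("Health", "Endurance Remaining (%) is low."))
    return alerts
```

Note that a healthy cluster (moderate consumption, low subscription, high endurance) fires no alerts at all.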
APPENDIX B
Dashboards and Metric Tolerances
This appendix includes the following topics:
l EMC Avamar Overview dashboard .... 90
l Isilon Overview dashboard .... 91
l Top-N Isilon Nodes dashboard .... 91
l RecoverPoint for VMs Overview dashboard .... 92
l RecoverPoint for VMs Performance dashboard .... 92
l Top-N RecoverPoint for VMs Objects dashboard .... 93
l ScaleIO Overview dashboard .... 93
l Unity Overview dashboard .... 94
l Top-N Unity LUNs, File Systems and VVols dashboard .... 94
l VMAX Overview dashboard .... 96
l Top-N VNX File Systems dashboard .... 97
l Top-N VNX LUNs dashboard .... 97
l VNX Overview dashboard .... 98
l VPLEX Communication dashboard .... 99
l VPLEX Overview dashboard .... 100
l VPLEX Performance dashboard .... 101
l XtremIO Overview dashboard .... 102
l XtremIO Top-N dashboard .... 103
l XtremIO Performance dashboard .... 103
EMC Avamar Overview dashboard This dashboard displays heat maps for Client and Policy, and scoreboards for DPN and DDR.
The following tables describe the dashboard items available for EMC Avamar.
Table 30 Avamar heat maps
Heat map Metric
Client Last changed (GB)
Unintentionally Skipped Files
Last Backup Date
Last Backup Status
Last Elapsed Time
Overhead (GB)
Policy Policy Client Count
DDR Used Capacity
Table 31 Avamar scoreboards
Metric group Scoreboard Metric Yellow Orange Red
DPN Status State Active Sessions (count)
HFS Address
License Expiration
Scheduler Enabled
Capacity Data Used Capacity (%) Protected Capacity (%)
Total Capacity (GB)
70 80 90
Success History (24 hrs)
Backup failures (Count) 1 2 3
Restore failures (Count) 1 2 3
Garbage Collection Status Result
Passes (Count)
End Time
Recovered (GB)
Chunks Deleted (Count)
Performance History (24 hrs)
Average Files Changed (Count) Average Files Unintentionally Skipped (Count)
Average Overhead (GB)
DDR Status File System Status Monitoring Status
Default Replication Storage System
Capacity Data Used Capacity (%) Protected Capacity (%)
Total Capacity (GB)
70 80 90
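The 70/80/90 capacity thresholds used for both the DPN and DDR scoreboards above imply a simple banding rule. A hedged sketch (a hypothetical helper, not part of ESA):

```python
def capacity_badge(used_pct, yellow=70, orange=80, red=90):
    """Band Used Capacity (%) per the 70/80/90 thresholds in Table 31.

    Values below the yellow threshold are treated as green (good);
    the threshold defaults are illustrative and adjustable.
    """
    if used_pct >= red:
        return "red"
    if used_pct >= orange:
        return "orange"
    if used_pct >= yellow:
        return "yellow"
    return "green"
```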
Isilon Overview dashboard The Isilon dashboard displays scoreboards for the resources listed in this section.
For each scoreboard and selected metric, the configured Isilon adapter is shown.
Table 32 Isilon Overview dashboard
Scoreboard Green Yellow Red
CPU Performance (% used) 0% in use 100% in use
Overall Cache Hit Rate
Remaining Capacity (%) > 20% available 10–20% available 0–10% available
Disk Operations Latency 0–20 ms 20–50 ms > 50 ms
Number of Active Clients 0 1,500
Top-N Isilon Nodes dashboard By default, the Top-N Isilon Nodes dashboard shows the top 10 devices in these categories across your Isilon system.
l Top-10 Active Nodes (24h) by number of active clients
l Top-10 CPU % Usage
l Top-10 Disk Throughput Rate In by Write (MB/s)
l Top-10 Disk Throughput Rate Out by Read (MB/s)
l Top-10 Overall Cache Hit Rate (24 hr) (Bytes/s)
l Top-10 L1 Cache Hit Rate (24 hr) (MB/s)
l Top-10 L2 Cache Hit Rate (24 hr) (MB/s)
l Top-10 L3 Cache Hit Rate (24 hr) (MB/s)
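A Top-N view like the one above is just a selection of the highest values for one metric across the discovered resources. A minimal sketch with sample data (the dict fields are illustrative, not the Isilon adapter's schema):

```python
import heapq

def top_n(resources, metric, n=10):
    """Return the n resources with the highest value for `metric`.

    `resources` is a list of dicts used as sample data; resources
    missing the metric sort as 0.
    """
    return heapq.nlargest(n, resources, key=lambda r: r.get(metric, 0))

# Hypothetical node records for illustration only.
nodes = [{"name": "node-1", "cpu_pct": 40},
         {"name": "node-2", "cpu_pct": 85},
         {"name": "node-3", "cpu_pct": 62}]
busiest = top_n(nodes, "cpu_pct", n=2)
```

The same selection applies to any of the categories listed, only the metric key changes.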
RecoverPoint for VMs Overview dashboard The table in this section describes the dashboard items available for RecoverPoint for Virtual Machines.
Table 33 RecoverPoint for VMs Overview dashboard
Heat map Metric Yellow Orange Red
RecoverPoint for VMs System
Number of RecoverPoint clusters n/a
RecoverPoint Cluster Number of splitters (version 5.1) 96 108 120
Number of consistency groups 192 218 243
Number of protected Virtual Machine Disks (VMDKs) 3000 3400 1900
Number of protected virtual machines for each RecoverPoint system
750 850 950
Number of virtual RecoverPoint Appliances (vRPAs) for each cluster (version 5.1)
8 1 n/a
Number of registered ESXi clusters 6 7 8
Consistency Group Displays all RecoverPoint for Virtual Machines consistency groups Enabled Disabled Unknown
Splitter Number of vSphere ESX Clusters connected to a given splitter n/a
Number of attached volumes 11,250 12,750 14,250
RecoverPoint for VMs Performance dashboard The RecoverPoint for VMs Performance dashboard provides a single view of the most important performance metrics for the resources.
The Performance dashboard displays two types of heat maps:
l Metrics with definitive measurements such as CPU usage (0–100%) are assigned color ranges from lowest (green) to highest (red).
l Metrics with varied values that cannot be assigned a range show relative values from lowest (light blue) to highest (dark blue).
Table 34 RecoverPoint for VMs Performance dashboard
Heat map Description Yellow Orange Red
Link | Lag (%) Percent of the current lag for the link and for protection 90% 100%
Consistency Group | Protection Window
Current Protection Window (Hrs) shows the earliest point in hours for which RecoverPoint can roll back the consistency group's replica copy.
Current Protection Window Ratio shows the ratio of the current protection window compared with the required protection window for the Consistency Group.
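The Current Protection Window Ratio described above is a plain division of the current window by the required one. A minimal sketch, with hypothetical parameter names:

```python
def protection_window_ratio(current_hrs, required_hrs):
    """Ratio of the current protection window to the required one.

    A ratio below 1.0 means the consistency group cannot yet roll back
    as far as its required protection window. Names are illustrative.
    """
    if required_hrs <= 0:
        raise ValueError("required protection window must be positive")
    return current_hrs / required_hrs
```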
vRPA | CPU Utilization (%)
Percent utilization of virtual RecoverPoint Appliance (vRPA) CPUs 75% 85% 95%
Cluster Performance for incoming writes (IOPS and MB/s) to clusters
Consistency Group Performance for incoming writes (IOPS and MB/s) to consistency groups
vRPA Performance for incoming writes (IOPS and MB/s) to vRPAs 75% 85% 95%
Top-N RecoverPoint for VMs Objects dashboard By default, the Top-N RecoverPoint for VMs Objects dashboard shows the top 10 devices in these categories across RecoverPoint for Virtual Machine systems.
l Top-10 vRPAs by Incoming Writes (IO/s) (24h)
l Top-10 vRPAs by Incoming Writes (KB/s) (24h)
l Top-10 Clusters by Incoming Writes (IO/s) (24h)
l Top-10 Clusters by Incoming Writes (KB/s) (24h)
l Top-10 Consistency Groups by Incoming Writes (IO/s) (24h)
l Top-10 Consistency Groups by Incoming Writes (KB/s) (24h)
ScaleIO Overview dashboard The ScaleIO dashboard displays the heat maps listed in this section.
For each heat map and selected metric, the configured ScaleIO adapter is shown.
Table 35 ScaleIO heat maps for System, Storage Pool, and Device
Heat map Description Green Yellow Red
System Displays the In Use Capacity metric 0 GB allocated
500 GB allocated
1000 GB allocated
Storage Pool Displays the In Use Capacity metric for each ScaleIO Storage Pool grouped by ScaleIO System
0 GB allocated
500 GB allocated
1000 GB allocated
Device Displays the In Use Capacity metric for each ScaleIO Device grouped by ScaleIO System and SDS associated with
0 GB allocated
500 GB allocated
1000 GB allocated
Table 36 ScaleIO heat maps for Protection Domain, SDS, and Fault Set
Heat map Description Light blue
Dark blue
Protection Domain
Displays the In Use Capacity metric for each ScaleIO Protection Domain grouped by ScaleIO System
0 GB allocated
>=1000 GB allocated
SDS Displays the In Use Capacity metric for each SDS grouped by ScaleIO System and Protection Domain
0 GB allocated
>=1000 GB allocated
Fault Set Displays the Health (%) metric for each Fault Set 0% 100%
Unity Overview dashboard The Unity Overview dashboard displays heat maps for Unity, UnityVSA, and VNXe.
Table 37 Unity Overview dashboard
Heat map Metric Green Red
CPU Performance Storage Processor Utilization 0% busy 100% busy
Pool capacity Storage Pool Capacity Utilization 0% full 100% full
Storage Pool Available Capacity Largest available capacity 0 GB available
LUN, File System, and VVol Performance
LUN Read IOPS Dark green = highest Light green = lowest
n/a
LUN Write IOPS
LUN Read Bandwidth
LUN Write Bandwidth
LUN Total Latency
File System Read IOPS
File System Write IOPS
File System Read Bandwidth
File System Write Bandwidth
VVol Read IOPS
VVol Write IOPS
VVol Read Bandwidth
VVol Write Bandwidth
VVol Total Latency
Top-N Unity LUNs, File Systems and VVols dashboard By default, the Top-N Unity LUNs, File Systems and VVols dashboard shows the top ten devices in these categories across your Unity systems.
Unity LUNs
l Top-10 by Read (IOPS)
l Top-10 by Write (IOPS)
l Top-10 by Read (MB/s)
l Top-10 by Write (MB/s)
l Top-10 by Consumed Capacity
Unity File System Top-10 by Consumed Capacity
Unity VVols
l Total Latency (ms)
l Top-10 by Read (IOPS)
l Top-10 by Write (IOPS)
l Top-10 by Read (MB/s)
l Top-10 by Write (MB/s)
l Top-10 by Consumed Capacity (GB)
VMAX Overview dashboard The table in this section describes the heat maps displayed on the VMAX Overview tab.
Note
Latency scales are based on average customer requirements. If they do not meet your particular requirements for latency, EMC recommends that you adjust the scale appropriately.
Table 38 VMAX Overview dashboard
Heat map Metric Description Green Yellow Red
Storage Resource Pool Capacity
Total Managed Space (GB) Dark blue = highest
Light blue = lowest
Used Capacity (GB)
Full (%) 0 50 100
Storage Group Capacity
Total Capacity (GB) Dark blue = highest
Light blue = lowest
Used Capacity (GB)
Storage Group Performance
Total Reads (IO/s) Aggregate reads for all LUNs in the storage group
Dark blue = highest
Light blue = lowest
Total Writes (IO/s) Aggregate writes for all LUNs in the storage group
Read Latency (ms) Average read latency of all LUNs in the storage group
0 ms 20 ms 40 ms
Write Latency (ms) Average write latency of all LUNs in the storage group
0 ms 20 ms 40 ms
Hit (%) 100 50 0
Miss (%) 0 50 100
Storage Resource Pool Performance
Total Reads (IO/s) Dark blue = highest
Light blue = lowest
Total Writes (IO/s)
Total Latency (ms) 0 ms 20 ms 40 ms
Front End Director Performance
Total Bandwidth (MB/s) Cumulative amount of data transferred over all ports of the front-end director
Dark blue = highest
Light blue = lowest
Total Operations (IO/s) Total number of operations taking place over all ports of a front-end director
Dark blue = highest
Light blue = lowest
Busy (%) 0 50 100
Back End Director Performance
Total Bandwidth (MB/s) Cumulative amount of data transferred over all ports of the back-end director
Dark blue = highest
Light blue = lowest
Busy (%) 0 50 100
Read (IO/s) Dark blue = highest
Light blue = lowest
Write (IO/s)
SRDF Director Performance
Total Bandwidth (MB/s) Cumulative amount of data transferred over an SRDF director
Dark blue = highest
Light blue = lowest
Total Writes (IO/s) Total number of writes over an SRDF director
VVol Storage Container Capacity
Subscribed Free (GB) Dark blue = highest
Light blue = lowest
Subscribed Limit (GB)
Subscribed Used (GB)
VVol Storage Resource Capacity
Subscribed Free (GB) Dark blue = highest
Light blue = lowest
Subscribed Limit (GB)
Subscribed Used (GB)
Top-N VNX File Systems dashboard By default, the Top-N VNX File Systems dashboard shows the top ten devices in these categories across your VNX File system.
l Top-10 by Read (IOPS)
l Top-10 by Write (IOPS)
l Top-10 by Read (MB/s)
l Top-10 by Write (MB/s)
l Top-10 by Consumed Capacity
Top-N VNX LUNs dashboard By default, the Top-N VNX LUNs dashboard shows the top ten devices in these categories across your VNX system.
l Total Latency (ms)
l Top-10 by Read (IOPS)
l Top-10 by Write (IOPS)
l Top-10 by Read (MB/s)
l Top-10 by Write (MB/s)
l Top-10 by Consumed Capacity (GB)
VNX Overview dashboard The VNX Overview dashboard displays the heat maps listed in this section.
Table 39 VNX Overview dashboard
Heat map Metric Description Green Red
CPU performance The CPU utilization of each Storage Processor and Data Mover on each configured adapter instance
0% busy 100% busy
FAST cache performance
Read Cache Hit Ratio (%)
Number of FAST Cache read hits divided by the total number of read or write I/Os across all RG LUNs and Pools configured to use FAST Cache
High ratio Low ratio
Write Cache Hit Ratio (%)
Number of FAST Cache write hits divided by the total number of read or write I/Os across all RG LUNs and Pools configured to use FAST Cache
High ratio Low ratio
Pool capacity RAID Group Available Capacity
Largest available capacity
0 GB available
Storage Pool Capacity Utilization
0% full 100% full
Storage Pool Available Capacity
Largest available capacity
0 GB available
File Pool Available Capacity
Largest available capacity
0 GB available
LUN and file system performance
LUN Utilization (%) Percentage busy for all LUNs grouped by adapter instance
0% busy 100% busy
LUN Latency (ms) Latency values appear for RAID Group LUNs. Pool LUNs appear in white with no latency values reported.
0 ms latency >= 20 ms latency
LUN Read IO/s Relative number of read I/O operations per second serviced by the LUN
Dark green = highest
Light green = lowest
LUN Write IO/s Relative number of write I/O operations per second serviced by the LUN
Dark green = highest
Light green = lowest
File System Read IO/s
Relative number of read I/O operations per second serviced by the file system
Dark green = highest
Light green = lowest
File System Write IO/s
Relative number of write I/O operations per second serviced by the file system
Dark green = highest
Light green = lowest
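The FAST cache hit ratios in Table 39 are plain percentages: hits divided by the total read/write I/Os across the RG LUNs and pools using FAST Cache. A minimal sketch, assuming per-interval counters (illustrative names, not the VNX API):

```python
def fast_cache_hit_ratio(fast_cache_hits, total_ios):
    """Hit ratio (%) per Table 39: FAST Cache hits over total I/Os.

    `fast_cache_hits` and `total_ios` are hypothetical interval counters;
    a zero-I/O interval yields 0.0 rather than a division error.
    """
    if total_ios == 0:
        return 0.0
    return 100.0 * fast_cache_hits / total_ios
```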
VPLEX Communication dashboard Click the VPLEX Communication tab to view a collection of heat maps that provide a single view of the performance of the communication links for a VPLEX configuration.
The EMC VPLEX Communication dashboard displays two types of heat maps:
l Metrics with definitive measurements such as intra-cluster local COM latency (0–15 ms) are assigned color ranges from lowest (green) to highest (red).
l Metrics with varied values that cannot be assigned a range show relative values from lowest (light blue) to highest (dark blue).
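The two coloring schemes above can be sketched as follows; the cut-points and palette are illustrative assumptions, not ESA's actual rendering:

```python
def definitive_color(value, low, high):
    """Map a bounded metric (e.g., latency on a 0-15 ms scale) to green/yellow/red.

    Assumes high > low; values outside the range are clamped. The 0.5/0.8
    band boundaries are illustrative, not ESA's internal thresholds.
    """
    frac = max(0.0, min(1.0, (value - low) / (high - low)))
    if frac < 0.5:
        return "green"
    if frac < 0.8:
        return "yellow"
    return "red"

def relative_shade(value, observed):
    """Shade an unbounded metric light/dark blue relative to its peers."""
    lo, hi = min(observed), max(observed)
    if hi == lo:
        return "light blue"
    return "dark blue" if (value - lo) / (hi - lo) > 0.5 else "light blue"
```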
Note
Latency scales are based on average customer requirements. If they do not meet your particular requirements for latency, EMC recommends that you adjust the scale appropriately. For VPLEX Metro, EMC recommends adjusting the scale based on your discovered WAN round-trip time.
Table 40 VPLEX Communication dashboard
Heat map Metric Description Green Red
Cluster-1 COM Latency Average Latency (ms) Intra-cluster local COM latency, which occurs within the rack and is typically fast (less than 1 msec)
0 ms 15 ms
Cluster-2 COM Latency
WAN Link Usage (VPLEX Metro only)
Distributed Device Bytes Received (MB/s)
Total amount of traffic received for all distributed devices on a director
Light blue = lowest Dark blue = highest
Distributed Device Bytes Sent (MB/s)
Total amount of traffic sent for all distributed devices on a director
Distributed Device Rebuild Bytes Received (MB/s)
Total amount of rebuild/migration traffic received for all distributed devices on a director
Distributed Device Rebuild Bytes Sent (MB/s)
Total amount of rebuild/migration traffic sent for all distributed devices on a director
VPLEX Overview dashboard The EMC VPLEX Overview dashboard displays the widgets listed in this section.
Note
Red, yellow, and orange colors correlate with the Health State or Operational Status of the object. Any Health State or Operational Status other than those listed in the table show green (good). Because vRealize Operations Manager expects numeric values, you cannot modify these widgets.
Table 41 VPLEX Overview dashboard
Widget Description Green Yellow Orange Red
CPU Health Displays the CPU usage, as a percentage, for each director on the VPLEX system
Note
Generally, a director should stay below 75% CPU usage. Correct an imbalance of CPU usage across directors by adjusting the amount of I/O to the busier directors; make this adjustment by modifying existing storage view configurations. Identify busier volumes and hosts and move them to less busy directors. Alternately, add more director ports to a storage view to create a better load balance across the available directors.
0–75% usage
75–85% usage
85–95% usage
95–100% usage
Cluster Health Health State Normal Degraded Major failure
Critical failure
Operational Status Normal Degraded Major failure
Critical failure
Memory Health
Displays the memory usage, as a percentage, of each director on the VPLEX system
0–70% usage
70–80% usage
80–90% usage
90–100% usage
Director Health
Operational Status Normal Degraded Major failure
Critical failure
Extent Health
Storage Volume Health
VPLEX Performance dashboard Click the VPLEX Metrics tab to view a collection of heat maps that provide a single view of the most important performance metrics for VPLEX resources.
The EMC VPLEX Performance dashboard displays two types of heat maps:
l Metrics with definitive measurements such as CPU usage (0–100%), response time latency (0–15 ms), or errors (0–5) are assigned color ranges from lowest (green) to highest (red).
l Metrics with varied values that cannot be assigned a range show relative values from lowest (light blue) to highest (dark blue).
Note
Latency scales are based on average customer requirements. If they do not meet your particular requirements for latency, EMC recommends that you adjust the scale appropriately.
Table 42 VPLEX Performance dashboard
Heatmap Metric Description Metric value
Front-end Bandwidth
Reads (MB/s) Total reads for the storage volumes across the front-end ports on a director
Light blue = lowest Dark blue = highest
Writes (MB/s) Total writes for the storage volumes across the front-end ports on a director
Active Operations (Counts/s)
Number of active, outstanding I/O operations on the director's front-end ports
Back-end Bandwidth
Reads (MB/s) Total reads for the storage volumes across the back-end ports on a director
Writes (MB/s) Total writes for the storage volumes across the back-end ports on a director
Active Operations (Counts/s)
Number of I/O operations per second through the director's back-end ports
Back-end Errors
Resets (count/s) LUN resets sent by VPLEX to a storage array LUN when it does not respond to I/O operations for over 20 seconds
Green = 0 errors Red = 5 or more errors
Timeouts (count/s) An I/O from VPLEX to a storage array LUN takes longer than 10 seconds to complete
Aborts (count/s) An I/O from VPLEX to a storage array LUN is cancelled in transit. Resets indicate more serious problems than timeouts and aborts
Front-end Latency
Read Latency (ms) Average read latency for all virtual volumes across all front-end ports on a director
Green = 0 ms Red = 15 ms
Write Latency (ms) Average write latency for all virtual volumes across all front-end ports on a director
Note
For VPLEX Metro systems consisting primarily of distributed devices, the WAN round-trip time greatly affects the front-end write latency. See the COM Latency widgets and the WAN Link Usage widget in the VPLEX Communication dashboard.
Queued Operations (Counts/s)
Number of operations in the queue
Virtual Volumes Latency
Read Latency (ms) Average read latency for all virtual volumes on a director Green = 0 ms Red = 15 ms
Write Latency (ms) Average write latency for all virtual volumes on a director
Total Reads & Writes (Counts/s)
Virtual volume total reads and writes per director
Storage Volumes Latency
Read Latency (ms) Average read latency for all storage volumes on a director Green = 0 ms Red = 15 ms
Write Latency (ms) Average write latency for all storage volumes on a director
XtremIO Overview dashboard The XtremIO Overview dashboard displays the heat maps listed in this section.
Table 43 XtremIO Overview dashboard
Heatmap Description Green Yellow Orange Red
Cluster Data Reduction
Deduplication Ratio >= 3.0 < 3.0
Compression Ratio
Note
Compression Ratio shows as blue if XtremIO version 2.4.1 is running.
>= 1.5 < 1.5
Data Reduction Ratio >= 3.5 < 3.5
Cluster Efficiency
Thin Provisioning Savings (%)
Total Efficiency
Cluster Memory Usage
Total Memory In Use (%) 0–90 90–95 95–99 99–100
Volume Total Capacity (GB)
Consumed Capacity (GB)
Volume Capacity
Total Capacity (GB)
Consumed Capacity (GB)
Cluster Total Physical Capacity (TB)
Total Logical Capacity (TB)
Available Physical Capacity (TB)
Available Logical Capacity (TB)
Consumed Physical Capacity (TB)
Consumed Logical Capacity (TB)
Snapshot Total Capacity (GB)
Consumed Capacity (GB)
Snapshot Capacity
Total Capacity (GB)
Consumed Capacity (GB)
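The Cluster Data Reduction bands in Table 43 can be sketched as below. The overall ratio is computed here as deduplication times compression, which is an assumption to verify against your XMS reporting:

```python
def data_reduction_colors(dedup_ratio, compression_ratio):
    """Band the Table 43 data-reduction heat map values.

    Thresholds come from Table 43 (dedup >= 3.0, compression >= 1.5,
    data reduction >= 3.5). Treating data reduction as dedup x compression
    is an assumption, not a documented ESA formula.
    """
    reduction = dedup_ratio * compression_ratio
    return {
        "Deduplication Ratio": "green" if dedup_ratio >= 3.0 else "red",
        "Compression Ratio": "green" if compression_ratio >= 1.5 else "red",
        "Data Reduction Ratio": "green" if reduction >= 3.5 else "red",
    }
```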
XtremIO Top-N dashboard By default, the XtremIO Top-N dashboard shows the top 10 devices in these categories across your XtremIO system.
l Top-10 by Read (IOPS)
l Top-10 by Write (IOPS)
l Top-10 by Read Latency (usec)
l Top-10 by Write Latency (usec)
l Top-10 by Read Block Size (KB)
l Top-10 by Write Block Size (KB)
l Top-10 by Total Capacity (GB)
XtremIO Performance dashboard The XtremIO Performance dashboard provides the percent utilization of the Storage Controller CPUs, along with key Volume and SSD metrics and sparklines.
The XtremIO Performance dashboard displays two types of heat maps:
l Metrics with definitive measurements such as CPU usage (0–100%) are assigned color ranges from lowest (green) to highest (red).
l Metrics with varied values that cannot be assigned a range show relative values from lowest (light blue) to highest (dark blue).
Table 44 XtremIO Performance dashboard
Heatmap Metric Notes
Storage Controllers CPU 1 Utilization (%)
CPU 2 Utilization (%)
Volume Total Operations Select a volume from this widget to display sparklines for it.
Total Bandwidth
Total Latency
Unaligned (%)
Average Block Size
SSD Endurance Remaining Select an SSD from this widget to display sparklines for it.
Disk Utilization
APPENDIX C
Metrics
This appendix includes the following topics:
l Avamar metrics .... 106
l Isilon metrics .... 110
l ScaleIO metrics .... 114
l RecoverPoint for Virtual Machines metrics .... 116
l Unity and UnityVSA metrics .... 119
l VMAX metrics .... 125
l VNX Block metrics .... 128
l VNX File/eNAS metrics .... 132
l VNXe metrics .... 136
l VPLEX metrics .... 141
l XtremIO metrics .... 152
Avamar metrics EMC Storage Analytics provides Avamar metrics for DPN, DDR, Domain, Policy, and Client.
The following tables show the metrics available for each resource.
Note
ESA does not monitor replication domain or client resources.
Table 45 Avamar DPN metrics
Metric Group
Metric Description
General HFS Address (String) (Hash File System address) The hostname or IP address that backup clients use to connect to this Avamar server
License Expiration (String) Calendar date on which this server's licensing expires
Scheduler Enabled (String) True or False
Active Sessions (Count) Number of active Avamar sessions
Status State Status of the node. One of the following values:
l Online: The node is functioning correctly.
l Read-Only: This status occurs normally as background operations are performed and when backups have been suspended.
l Time-Out: MCS could not communicate with this node.
l Unknown: Node status cannot be determined.
l Offline: The node has experienced a problem. If ConnectEMC has been enabled, a Service Request (SR) is logged. Go to EMC Online Support to view existing SRs. Search the knowledgebase for Avamar Data Node offline solution esg112792.
l Full Access: Normal operational state for an Avamar server. All operations are allowed.
l Admin: The Avamar server is in an administrative state in which the Avamar server and root user can read and write data; other users are only allowed to read data.
l Admin Only: The Avamar server is in an administrative state in which the Avamar server or root user can read or write data; other users are not allowed access.
l Admin Read-Only: The Avamar server is in an administrative read-only state in which the Avamar server or root user can read data; other users are not allowed access.
l Degraded: The Avamar server has experienced a disk failure on one or more nodes. All operations are allowed, but immediate action should be taken to fix the problem.
l Inactive: Avamar Administrator was unable to communicate with the Avamar server.
l Node Offline: One or more Avamar server nodes are in an OFFLINE state.
l Suspended: Avamar Administrator was able to communicate with the Avamar server, but normal operations have been temporarily suspended.
l Synchronizing: The Avamar server is in a transitional state. It is normal for the server to be in this state during startup and for short periods of time during maintenance operations.
Garbage Collection
Status Idle or Processing
Result OK or Error code
Start Time Time format is "January 1, 1970, 00:00:00 GMT".
End Time Time format is "January 1, 1970, 00:00:00 GMT".
Passes
Recovered (GB)
Chunks Deleted
Index Stripes
Index Stripes Processed
Capacity Total Capacity (GB)
Used Capacity (GB)
Used Capacity (%) This value is derived from the largest Disk Utilization value on the Avamar tab in the Server Monitor, and therefore represents the absolute maximum Avamar server storage utilization. Actual utilization across all modules, nodes, and drives might be slightly lower.
Protected Capacity (GB)
Protected Capacity (%) Percent of client data in proportion to total capacity that has been backed up (protected) on this server
Free Capacity (GB)
Free Capacity (%)
Success history (Over Last 24 Hours)
Backup Failures (Count)
Backup Success (%)
Backup Successes (Count)
Restore Failures (Count)
Restores Success (%)
Restores Successes (Count)
Performance History Averages (Over Last 24 Hours)
Backup Average Elapsed Time
Average Scanned (GB)
Average Changed (GB)
Average Files Changed (Count)
Average Files Skipped (Count)
Average Sent (GB)
Average Excluded (GB)
Average Skipped (GB)
Average Modified & Sent (GB)
Average Modified & Not Sent (GB)
Average Overhead (GB)
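As Table 45 notes, Used Capacity (%) reflects the largest per-disk utilization value on the server rather than an average. A minimal sketch of that derivation (illustrative only; the function and inputs are assumptions, not part of any Avamar API):

```python
def used_capacity_percent(disk_utilizations):
    """Server-level Used Capacity (%) as the Server Monitor reports it:
    the maximum per-disk utilization, not the average."""
    if not disk_utilizations:
        raise ValueError("at least one disk utilization value is required")
    return max(disk_utilizations)

# Actual utilization across all modules, nodes, and drives may be slightly lower.
print(used_capacity_percent([61.2, 58.9, 64.7]))  # 64.7
```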
Table 46 Avamar DDR metrics
Metric Group Metric Description
Capacity Total Capacity (GB)
Used Capacity (%)
Used Capacity (GB)
Free Capacity (GB)
Free Capacity (%)
Protected Capacity (GB)
Protected Capacity (%)
General Hostname IP or FQDN of the DDR
DDOS Version Data Domain Operating System version
Serial Number Disk serial number
Target for Avamar Checkpoint Backups
Model Number
Default replication storage system
Maximum Streams The maximum number of Data Domain system streams that Avamar can use at any one time to perform backups and restores. This number is configured for the Data Domain system when you add the system to the Avamar configuration.
Maximum Streams Limit
User Name
SNMP Community
SNMP Trap Port
Status File System Status
Monitoring Status
Table 47 Avamar Domain metrics
Metric group Metric
General Description
Contact
Directory
Location
Phone
Table 48 Avamar Policy metrics
Metric group Metric
General Encryption Method
Override Schedule
Auto Proxy Mapping
Client Count
Enabled
Domain
Dataset
Schedule Recurrence
Days of Week
Hours of Day
Next Run Time
Terminate Date
Retention Name
Expiration Date
Duration
Table 49 Avamar Client metrics
Metric group Metric
General Description
Latest operation Start Time
End Time
Status
Elapsed Time
Type
Description
Expiration Time
Retention Tag
Size (GB)
Scanned (GB)
Changed (GB)
Number
Excluded (GB)
Modified & Sent (GB)
Modified & Not Sent (GB)
Skipped (GB)
Overhead (GB)
Files Changed (Count)
Files Skipped (Count)
Change Rate (%)
Isilon metrics

EMC Storage Analytics provides metrics for Isilon clusters and nodes.
Note
Only the resource kinds with associated metrics are shown. Performance metrics that cannot be calculated are not displayed.
Table 50 Isilon Cluster metrics
Metric group Metric Description
Summary Cluster Name Name
Share Count Total count of SMB shares and NFS exports
CPU % Use Average CPU usage for all nodes in the monitored cluster
Number of Total Jobs Total number of active and inactive jobs on the cluster
Number of Active Jobs Total number of active jobs on the cluster
Capacity Total Capacity (TB) Total cluster capacity in terabytes
Used Capacity (%) Percent of total cluster capacity that has been used
Remaining Capacity (TB) Total unused cluster capacity in terabytes
Remaining Capacity (%) Total unused cluster capacity in percent
User Data Including Protection (TB) Amount of storage capacity that is occupied by user data and protection for that user data
Snapshots Usage (TB) Amount of data occupied by snapshots on the cluster
Deduplication Deduplicated Data > Physical (GB) Amount of data that has been deduplicated on the physical cluster
Deduplicated Data > Logical (GB) Amount of data that has been deduplicated on the logical cluster
Space Saved > Physical (GB) Amount of physical space that deduplication has saved on the cluster
Space Saved > Logical (GB) Amount of logical space that deduplication has saved on the cluster
Performance Disk Operations Rate > Read Operations
Average rate at which the disks in the cluster are servicing data read change requests
Disk Operations Rate > Write Operations
Average rate at which the disks in the cluster are servicing data write change requests
Pending Disk Operations Latency (ms)
Average amount of time disk operations spend in the input/output scheduler
Disk Throughput Rate > Read Throughput (MB/s)
Total amount of data being read from the disks in the cluster
Disk Throughput Rate > Write Throughput (MB/s)
Total amount of data being written to the disks in the cluster
Cache L1 Cache Hits (MB/s) Amount of requested data that was available from the L1 cache
L2 Cache Hits (MB/s) Amount of requested data that was available from the L2 cache
L3 Cache Hits (MB/s) Amount of requested data that was available from the L3 cache
Overall Cache Hit Rate (MB/s) Amount of data requests that returned hits
Quotas Directory|Total Soft Quota (GB) Amount of total capacity allocated in all directory soft quotas
Directory|Total Hard Quota (GB) Amount of total capacity allocated in all directory hard quotas
Directory|Total Hard Quota Subscribed (%)
Percent of total capacity allocated in all directory hard quotas
Group|Total Soft Quota (GB) Amount of total capacity allocated in all group soft quotas
Group|Total Hard Quota (GB) Amount of total capacity allocated in all group hard quotas
Group|Total Hard Quota Subscribed (%)
Percent of total capacity allocated in all group hard quotas
User|Total Soft Quota (GB) Amount of total capacity allocated in all user soft quotas
User|Total Hard Quota (GB) Amount of total capacity allocated in all user hard quotas
User|Total Hard Quota Subscribed (%)
Percent of total capacity allocated in all user hard quotas
Note
The Isilon Quota Management white paper provides details about Isilon Smart Quotas.
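The Total Hard Quota Subscribed (%) metrics above express the sum of allocated quota limits as a share of total cluster capacity. A hedged sketch of that calculation (names and inputs are illustrative, not the OneFS API):

```python
def hard_quota_subscribed_percent(quota_limits_gb, total_capacity_gb):
    """Percent of total cluster capacity allocated across all hard quotas.
    Values above 100 indicate the cluster is oversubscribed."""
    if total_capacity_gb <= 0:
        raise ValueError("total capacity must be positive")
    return sum(quota_limits_gb) / total_capacity_gb * 100.0

print(hard_quota_subscribed_percent([250.0, 250.0, 125.0], 1000.0))  # 62.5
```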
Table 51 Isilon Node metrics
Metric group Metric Description
Summary CPU % Use Average percentage of the total available node CPU capacity used for this node
Number of Active Clients Number of unique client addresses generating protocol traffic on the monitored node
Number of Connected Clients Number of unique client addresses with established TCP connections to the node
Number of Total Job Workers Number of active and assigned workers on the node
Performance Deadlock File System Event Rate Number of file system deadlock events that the file system is processing per second
Locked File System Event Rate Number of file lock operations occurring in the file system per second
Blocking File System Event Rate Number of file blocking events occurring in the file system per second
Average Operations Size (MB) Average size of the operations or transfers that the disks in the node are servicing
Contended File System Event Rate Number of file contention events, such as lock contention or read/write contention, occurring in the file system per second
File System Event Rate Number of file system events, or operations, (such as read, write, lookup, or rename) that the file system is servicing per second
Disk Operations Rate > Read Operations
Average rate at which the disks in the node are servicing data read requests
Disk Operations Rate > Write Operations
Average rate at which the disks in the node are servicing data write requests
Average Pending Disk Operations Count
Average number of operations or transfers that are in the processing queue for each disk in the node
Disk Throughput Rate > Read Operations
Total amount of data being read from the disks in the node
Disk Throughput Rate > Write Operations
Total amount of data being written to the disks in the node
Pending Disk Operation Latency (ms) Average amount of time that disk operations spend in the input/output scheduler
Disk Activity (%) Average percentage of time that disks in the node spend performing operations instead of sitting idle
Protocol Operations Rate Total number of requests that were originated by clients for all file data access protocols
Slow Disk Access Rate Rate at which slow (long-latency) disk operations occur
External Network External Network Errors > In Number of incoming errors generated for the external network interfaces
External Network Errors > Out Number of outgoing errors generated for the external network interfaces
External Network Packets Rate > In Total number of packets that came in through the external network interfaces in the monitored node
External Network Packets Rate > Out Total number of packets that went out through the external network interfaces in the monitored node
External Network Throughput Rate > In (MB/s)
Total amount of data that came in through the external network interfaces in the monitored node
External Network Throughput Rate > Out (MB/s)
Total amount of data that went out through the external network interfaces in the monitored node
Cache Average Cache Data Age Average amount of time data has been in the cache
L1 Data Prefetch Starts (Bytes/s) Amount of data that was requested from the L1 prefetch
L1 Data Prefetch Hits (Bytes/s) Amount of requested data that was available in the L1 prefetch
L1 Data Prefetch Misses (Bytes/s) Amount of requested data that did not exist in the L1 prefetch
L1 Cache Starts (Bytes/s) Amount of data that was requested from the L1 cache
L1 Cache Hits (Bytes/s) Amount of requested data that was available in the L1 cache
L1 Cache Misses (Bytes/s) Amount of requested data that did not exist in the L1 cache
L1 Cache Waits (Bytes/s) Amount of requested data that existed in the L1 cache but was not available because the data was in use
L2 Data Prefetch Starts (Bytes/s) Amount of data that was requested from the L2 prefetch
L2 Data Prefetch Hits (Bytes/s) Amount of requested data that was available in the L2 prefetch
L2 Data Prefetch Misses (Bytes/s) Amount of requested data that did not exist in the L2 prefetch
L2 Cache Starts (Bytes/s) Amount of data that was requested from the L2 cache
L2 Cache Hits (Bytes/s) Amount of requested data that was available in the L2 cache
L2 Cache Misses (Bytes/s) Amount of requested data that did not exist in the L2 cache
L2 Cache Waits (Bytes/s) Amount of requested data that existed in the L2 cache but was not available because the data was in use
L3 Cache Starts (Bytes/s) The amount of data that was requested from the L3 cache
L3 Cache Hits (Bytes/s) Amount of requested data that was available in the L3 cache
L3 Cache Misses (Bytes/s) Amount of requested data that did not exist in the L3 cache
L3 Cache Waits (Bytes/s) Amount of requested data that existed in the L3 cache but was not available because the data was in use
Overall Cache Hit Rate (Bytes/s) Amount of data requests that returned hits
Overall Cache Throughput Rate (Bytes/s)
Amount of data that was requested from cache
ScaleIO metrics

EMC Storage Analytics provides ScaleIO metrics for System, Protection Domain, Device, SDS, Storage pool, Snapshot, MDM cluster, MDM, SDC, Fault Set, and Volume.
Note
Only the resource kinds with associated metrics are shown. Most performance metrics with values of zero are not displayed.
The following table shows the metrics available for each resource kind.
Table 52 ScaleIO metrics
Metric System Protection Domain Device SDS Storage pool Snapshot MDM cluster MDM SDC Fault Set Volume
Maximum Capacity (GB)
X X X X X
Used Capacity (GB)
X X X X X
Spare Capacity Allocated (GB)
X X X X X
Thin Used Capacity (GB)
X X X X X
Thick Used Capacity (GB)
X X X X X
Protected Capacity (GB)
X X X X X
Snap Used Capacity (GB)
X X X X X
Unused Capacity (GB)
X X X X X
Used Capacity (%)
X X X X X
Thin Used Capacity (%)
X X X X X
Thick Used Capacity (%)
X X X X X
Protected Capacity (%)
X X X X X
Snap Used Capacity (%)
X X X X X
Total Reads (MB/s)
X X X X X X X X
Total Writes (MB/s)
X X X X X X X X
Average Read IO size (MB)
X X X X X X X
Average Write IO Size (MB)
X X X X X X X
Size (GB) X
Total Read IO/s
X X X
Total Write IO/s
X X X
MDM Mode (String)
X
State (String)
X
Name (String)
X X
RecoverPoint for Virtual Machines metrics

EMC Storage Analytics provides RecoverPoint for Virtual Machines metrics for Cluster, Consistency Group, Copy, Journal Volume, Link, Virtual RecoverPoint Appliance (vRPA), RecoverPoint for Virtual Machines System, Replication Set, Repository Volume, Splitter, and User Volume.
The following tables show the metrics available for each resource kind.
Table 53 RecoverPoint metrics for Cluster
Metric Group Metric Additional Information
Performance Incoming Writes (IO/s) Sum of incoming cluster writes from all child vRPAs
Incoming Writes (MB/s) Sum of incoming cluster throughput from all child vRPAs
Summary Number of Consistency Groups Sum of all child vRPA consistency groups
Number of Protected VMDKs Sum of user volumes that the cluster protects on all virtual machines, including replica virtual machines
Number of Protected VMs Sum of virtual machines, including replica virtual machines, that the cluster protects
Number of vRPAs Sum of all child vRPAs
Number of Splitters Sum of all the splitters in the RecoverPoint cluster
Number of Registered ESXi Clusters
Sum of all ESXi Clusters that are registered
Table 54 RecoverPoint metrics for Consistency Group
Metric Group Metric Additional Information
Performance Incoming Writes (IO/s) Sum of incoming consistency group writes per second
Incoming Writes (MB/s) Sum of incoming consistency group writes throughput
Status Enabled Boolean value that indicates the consistency group is enabled
Protection Current Protection Window (Hrs) The farthest time in hours for which RecoverPoint can roll back the consistency group's replica copy
Current Protection Window Ratio Ratio of the current protection window for the consistency group's replica copy as compared with your required protection window
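The Current Protection Window Ratio compares the rollback window RecoverPoint can actually provide with the window you require. A small illustrative calculation (inputs are assumptions for the example, not the RecoverPoint API):

```python
def protection_window_ratio(current_hours, required_hours):
    """Ratio of the current protection window to the required window.
    A value of at least 1.0 means the replica copy can be rolled back
    as far as the required protection window demands."""
    if required_hours <= 0:
        raise ValueError("required protection window must be positive")
    return current_hours / required_hours

print(protection_window_ratio(current_hours=36.0, required_hours=24.0))  # 1.5
```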
Table 55 RecoverPoint metrics for Copy
Metric Group Metric Additional Information
Protection Current Protection Window (Hrs) The farthest time in hours for which RecoverPoint can roll back the replica copy
Current Protection Window Ratio Ratio of current protection window for the replica copy as compared with your required protection window
Status Active Boolean value indicates if the copy is active
Enabled Boolean value indicates if the copy is enabled
Regulated Boolean value indicates if the copy is regulated
Removable Boolean value indicates if the copy is removable
Role Role of the copy, which is retrieved from the role of the consistency group copy settings
Suspended Boolean value indicates if the copy is suspended
Table 56 RecoverPoint metrics for Journal Volume
Metric Group Metric Additional Information
Capacity Capacity (GB) Size of journal volume in GB
Table 57 RecoverPoint metrics for Link
Metric Group Metric Additional Information
Configuration RPO The maximum allowed lag time for consistency group copies
RPO Type The set type of RPOs to measure
Status Current Compression Ratio The compression ratio through the link
Current Lag Current lag time between the copy and production
Current Lag Type The type set to measure the current lag time
Is In Compliance Exists only with consistency groups in asynchronous replication mode; a yes-no value that indicates if the current lag is in compliance with the RPO
Protection Current Lag (%) Exists only with consistency groups in asynchronous replication mode; indicates current lag ratio as compared with RPO
Table 58 RecoverPoint metrics for virtual RecoverPoint Appliance (vRPA)
Metric Group Metric Additional Information
Performance CPU Utilization (%) CPU usage of vRPAs
Note
Utilization values appear as decimals (not percentages). Values can range from 0.0 to 1.0, with a value of 1.0 indicating 100%.
Incoming Writes (IO/s) Incoming application writes per second
Incoming Writes (MB/s) Incoming application writes for throughput
Summary Number of Consistency Groups
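As the note in Table 58 states, the vRPA CPU Utilization metric is reported as a decimal between 0.0 and 1.0 rather than a percentage. A one-line conversion for display purposes (an illustrative helper, not part of ESA):

```python
def vrpa_cpu_percent(utilization):
    """Convert the vRPA CPU Utilization metric, reported as a decimal
    between 0.0 and 1.0, into a percentage for display."""
    if not 0.0 <= utilization <= 1.0:
        raise ValueError("utilization must be between 0.0 and 1.0")
    return utilization * 100.0

print(vrpa_cpu_percent(0.5))  # 50.0
```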
Table 59 RecoverPoint metrics for RecoverPoint for Virtual Machines System
Metric Group Metric Additional Information
Summary Number of RecoverPoint Clusters Sum of all the clusters in the RecoverPoint system
Number of Splitters Sum of the splitters in the RecoverPoint system
Table 60 RecoverPoint metrics for Replication Set
Metric Group Metric Additional Information
Capacity Capacity (GB) Size of the user volume in GB that the replication set is protecting
Table 61 RecoverPoint metrics for Repository Volume
Metric Group Metric Additional Information
Capacity Capacity (GB) Size of repository volume in GB
Table 62 RecoverPoint metrics for Splitter
Metric Group Metric Additional Information
Summary Number of Volumes Attached Number of volumes attached to the splitter
Number of ESX Clusters Connected
Number of clusters connecting to the splitter
Table 63 RecoverPoint metrics for User Volume
Metric Group Metric Additional Information
Capacity Capacity (GB) Size of user volume
Status Role Role of the copy to which the user volume belongs
Unity and UnityVSA metrics

EMC Storage Analytics provides Unity and UnityVSA metrics for Array, Disk, FAST Cache, File System, LUN, Storage Container, Storage Pool, Tier, VVol, Virtual Disk, and Storage Processor. Only the resource kinds with associated metrics are shown.
Unity and UnityVSA metrics for EMC Adapter Instance (array)
l Elapsed collect time (ms)
l New metrics in each collect call
l New resources in each collect call
l Number of down resources
l Number of metrics collected
l Number of resources collected
Table 64 Unity and UnityVSA metrics for Disk, FAST Cache, File System, LUN, Storage Pool, Tier, VVol, Virtual Disk
Metric group Metric Disk FAST Cache (a) File System LUN Storage Container Storage Pool Tier VVol Virtual Disk (b)
Capacity Size (GB) X X
Available Capacity (GB)
X X X X X
Capacity/Total capacity (GB)
X X X X
Consumed Capacity (GB)
X X X X X X
Full (%) X X X
Max Capacity (GB)
Thin Provisioning X
Subscribed (%) X
User Capacity (GB) X X
Compression Percent (%)
X
Compression Ratio X
Compression Size Saved (GB)
X
Configuration State X X
RAID type X X
FAST Cache X
Disk Count X
Burst Frequency (hours)
X
Burst Rate % X
Burst Time (minutes)
X
Description X
Max MB/s X
Max IO/s X
Max MB/s per GB X
Max IO/s per GB X
Performance Busy (%) X X X
Reads (IO/s) X X X X X
Reads (MB/s) X X X X X
Total Latency (ms) X X X X
Writes (IO/s) X X X X X
Writes (MB/s) X X X X X
Queue Length X X
Total (IO/s) X X
Total (MB/s) X X
Data to Move Down (GB)
X
Data to Move Up (GB)
X
Data to Move Within (GB)
X
Property Compression Enabled
X
a. Applies to Unity only.
b. Applies to UnityVSA only.
Table 65 Unity and UnityVSA metrics for Storage Processor
Metric group Metric
Cache Dirty Cache Pages (MB)
Read Cache Hit Ratio (%)
Write Cache Hit Ratio (%)
Network CIFS Reads (IOPS)
CIFS Reads (MB/s)
CIFS Writes (IOPS)
CIFS Writes (MB/s)
Network In Bandwidth (MB/s)
Network Out Bandwidth (MB/s)
NFS Reads (IOPS)
NFS Reads (MB/s)
NFS Writes (IOPS)
NFS Writes (MB/s)
Network > NFSv2 Read Calls/s
Read Errors/s
Read Response Time (ms)
Reads (IOPS)
Write Calls/s
Write Errors/s
Write Response Time (ms)
Writes (IOPS)
Network > NFSv3 Access Calls/s
Network > NFSv4 Access Errors/s
Access Response Time (ms)
GetAttr Calls/s
GetAttr Errors/s
GetAttr Response Time (ms)
Lookup Calls/s
Lookup Errors/s
Lookup Response Time (ms)
Read Calls/s
Read Errors/s
Read Response Time (ms)
Reads (IOPS)
SetAttr Calls/s
SetAttr Errors/s
SetAtt Response Time (ms)
Write Calls/s
Write Errors/s
Write Response Time (ms)
Writes (IOPS)
Network > SMB1 Close Average Response Time (ms)
Close Calls/s
Close Max Response Time (ms)
NTCreateX Average Response Time (ms)
NTCreateX Calls/s
NTCreateX Max Response Time (ms)
Reads (IOPS)
Reads (MB/s)
ReadX Average Response Time (ms)
ReadX Calls/s
ReadX Max Response Time (ms)
Trans2Prim Average Response Time (ms)
Trans2Prim Calls/s
Trans2Prim Max Response Time (ms)
Writes (IOPS)
Writes (MB/s)
WriteX Average Response Time (ms)
WriteX Calls/s
WriteX Max Response Time (ms)
Network > SMB2 Close Average Response Time (ms)
Close Calls/s
Close Max Response Time (ms)
Create Average Response Time (ms)
Create Calls/s
Create Max Response Time (ms)
Flush Average Response Time (ms)
Flush Calls/s
Flush Max Response Time (ms)
Ioctl Average Response Time (ms)
Ioctl Calls/s
Ioctl Max Response Time (ms)
Queryinfo Average Response Time (ms)
Queryinfo Calls/s
Queryinfo Max Response Time (ms)
Read Average Response Time (ms)
Read Calls/s
Read Max Response Time (ms)
Reads (IOPS)
Reads (MB/s)
Write Average Response Time (ms)
Write Calls/s
Write Max Response Time (ms)
Writes (IOPS)
Writes (MB/s)
Performance Busy (%)
Reads (IOPS)
Reads (MB/s)
Writes (IOPS)
Writes (MB/s)
VMAX metrics

EMC Storage Analytics provides metrics for Device, Front-End Director, Front-End Port, Back-end Director, Back-end Port, Remote Replica Group, SRDF Director, Storage Group, Storage Resource Pool (SRP), SLO, VVol Protocol Endpoint (VVol PE), and SRDF Port.
Table 66 VMAX Performance metrics
Metric Front-end director Front-end port Back-end director Back-end port Remote replica group SRDF director Storage group SRP
Read Latency (ms) X
Reads (IO/s) X X X X X X
Reads (MB/s) X X X X X
Total Bandwidth (MB/s) X X X X X X X
Total Operations (IO/s) X X X X X X X
Write Latency (ms) X
Writes (IO/s) X X X X X X X X
Writes (MB/s) X X X X X X
Total Hits (IO/s) X
Total Latency (ms) X X
Busy (%) X X X X X
Average Cycle Time (s) X
Minimum Cycle Time (s)
Delta Set Extension Threshold X
HA Repeat Writes (counts/s) X
Devices in Session (count) X
SRDFA Writes (IO/s) X
SRDFA Writes (MB/s) X
SRDFS Writes (IO/s) X
SRDFS Writes (MB/s) X
Response Time (ms) X X X X
Host Reads/sec X
Host Writes/sec X
Hit (%) X
Miss (%) X
Queue Depth Utilization X
Read Reqs/sec X
Write Reqs/sec X
IOPS X
MBs/sec X
Host IOPS X
Host MB/s X
Table 67 VMAX Capacity metrics
Metric Device Storage group SRP VVol Storage Container VVol Storage Resource
Total Capacity (GB) X X
Used Capacity (GB) X X X
EMC VP Space Saved (%) X
EMC Compression Ratio X
EMC Full (%) X X
EMC Snapshot space (GB) X
EMC Total Managed Space (GB) X
EMC Remaining Managed Space (GB)
X
Subscribed Limit (GB) X X
Subscribed Free (GB) X X
Subscribed Used (GB) X X
Note
The VMAX storage group capacity metrics related to compression are only valid for VMAX All Flash arrays running HYPERMAX OS 5977 2016 Q3 SR and later. Because VMAX3 arrays do not support compression, non-zero values for VMAX3 arrays are irrelevant and should be ignored.
Table 68 VMAX Configuration metrics
Metric Remote replica group VVol Protocol Endpoint Description
Number of Masking Views
X
Number of Storage Groups
X
Modes X
Type X
Metro X
Async X
Witness X RDF group is configured as Physical Witness (Yes, No)
Witness Array or Name X
Table 69 VMAX Status metrics
Metric Remote replica group
Witness Configured X
Witness Effective X
Bias Configured X
Bias Effective X
Witness Degraded X
Table 70 VMAX Summary metrics
Metric VVol Protocol Endpoint
Reserved X
Status X
Table 71 VMAX Default metrics
Metric SLO
Compliance X
VNX Block metrics

EMC Storage Analytics provides VNX Block metrics for Array, Disk, FAST Cache, Pool LUN, RAID Group, RAID Group LUN, SP Front-end Port, Storage Pool, Storage Processor, and Tier.
The following table shows the metrics available for each resource kind.
Table 72 VNX Block metrics
Metric Array Disk FAST Cache Pool LUN RAID group RAID group LUN SP Front-end port Storage pool Storage processor Tier
Elapsed collect time (ms)
X
New metrics in each collect call (count)
X
New resources in each collect call (count)
X
Number of down resources
X
Number of metrics collected
X
Number of resources collected
X
Busy (%) X X X X
Capacity (GB) X
Hard Read Errors (Count)
X
Hard Write Errors (Count)
X
LUN Count X
Queue Length X X X
Read Size (MB) X X X X
Reads (IOPS) X X X X X
Reads (MB/s) X X X X X
Total Latency (ms) X X X
Total Operations (IOPS)
X X X X X
Total Bandwidth (MB/s)
X X X X X
Write Size (MB) X X X X
Writes (IOPS) X X X X X
Writes (MB/s) X X X X X
Current Operation X X
Current Operation Status
X X
Current Operation Complete (%)
X
Dirty (%) X
Flushed (MB) X
Mode X
RAID Type X X
Read Cache Hit Ratio (%)
X X X
Read Cache Hits (Hits/s)
X
Read Cache Misses (Misses/s)
X
Size (GB) X
Write Cache Hit Ratio (%)
X X
Write Cache Hits (Hits/s)
X
Write Cache Misses (Misses/s)
X
Average Busy Queue Length
X X
Capacity Tier Distribution (%)
X
Consumed Capacity (GB)
X X X
Explicit trespasses (Count)
X
Extreme Performance Tier Distribution (%)
X
Implicit trespasses (Count)
X
Initial Tier X
Performance Tier Distribution (%)
X
Read Cache State X X X
Service Time (ms) X X
Tiering Policy X
User Capacity (GB) X X X X
Write Cache State X X X
Available Capacity (GB)
X X X
Defragmented (%) X
Disk Count X X
Free Continuous Group of Unbound Segments (GB) X
Full (%) X
LUN Count X
Max Disks X
Max LUNs X
Raw Capacity (GB) X
Queue Full Count X X
Auto Tiering X
Auto-Tiering State X
Data Movement Completed (GB)
X
Data to Move Down (GB)
X
Data to Move Up (GB) X
Data to Move Within (GB)
X
Deduplicated LUNs Shared Capacity (GBs) X
Deduplication and Snapshot Savings (GBs) X
Deduplication Rate X
Dirty Cache Pages (%)
X
Dirty Cache Pages (MB)
X
Read Cache Size (MB)
X
Write Cache Flushes (MB/s)
X
Write Cache Size (MB)
X
Higher Tier (GB) X
Lower Tier (GB) X
Subscribed (%) X
VNX File/eNAS metrics

EMC Storage Analytics provides VNX File metrics for Array, Data Mover (includes Virtual Data Mover), dVol, File Pool, and File System.
VNX File/eNAS metrics for Array
l Elapsed collect time (ms)
l New metrics in each collect call
l New resources in each collect call
l Number of down resources
l Number of metrics collected
l Number of resources collected
VNX File/eNAS metrics for Data Mover
Table 73 VNX File/eNAS metrics for Data Mover
Metric Group Metric
Cache Buffer Cache Hit Ratio (%)
DNLC Hit Ratio (%)
Open File Cache Hit Ratio (%)
Configuration Type
CPU Busy (%)
Disk Reads (MB/s)
Total Bandwidth (MB/s)
Writes (MB/s)
Network CIFS Average Read Size (KB)
CIFS Average Write Size (KB)
CIFS Reads (IOPS)
CIFS Reads (MB/s)
CIFS Total Operations (IOPS)
CIFS Total Bandwidth (MB/s)
CIFS Writes (IOPS)
CIFS Writes (MB/s)
NFS Average Read Size (Bytes)
NFS Average Write Size (Bytes)
NFS Reads (IOPS)
NFS Reads (MB/s)
NFS Total Bandwidth (MB/s)
NFS Total Operations (IOPS)
NFS Writes (IOPS)
NFS Writes (MB/s)
Network In Bandwidth (MB/s)
Network Out Bandwidth (MB/s)
Total Network Bandwidth (MB/s)
Network > NFSv2, NFSv3, and NFSv4
Read Calls/s
Read Errors/s
Read Response Time (ms)
Write Calls/s
Write Errors/s
Write Response Time (ms)
Network > NFSv3 Access Calls/s
Access Errors/s
Access Response Time (ms)
GetAttr Calls/s
GetAttr Errors/s
GetAttr Response Time (ms)
Lookup Calls/s
Lookup Errors/s
Lookup Response Time (ms)
SetAttr Calls/s
SetAttr Errors/s
SetAttr Response Time (ms)
Network > NFSv4 Close Calls/s
Close Errors/s
Close Response Time (ms)
Compound Calls/s
Compound Errors/s
Compound Response Time (ms)
Open Calls/s
Open Errors/s
Open Response Time (ms)
Network > SMB1 Close Average Response Time (ms)
Close Calls/s
Close Max Response Time (ms)
NTCreateX Average Response Time (ms)
NTCreateX Calls/s
NTCreateX Max Response Time (ms)
ReadX Average Response Time (ms)
ReadX Calls/s
ReadX Max Response Time (ms)
Trans2Prim Average Response Time (ms)
Trans2Prim Calls/s
Trans2Prim Max Response Time (ms)
WriteX Average Response Time (ms)
WriteX Calls/s
WriteX Max Response Time (ms)
Network > SMB2 Close Average Response Time (ms)
Close Calls/s
Close Max Response Time (ms)
Flush Average Response Time (ms)
Flush Calls/s
Flush Max Response Time (ms)
Create Average Response Time (ms)
Create Calls/s
Create Max Response Time (ms)
IOCTL Average Response Time (ms)
IOCTL Calls/s
IOCTL Max Response Time (ms)
Queryinfo Average Response Time (ms)
Queryinfo Calls/s
Queryinfo Max Response Time (ms)
Read Average Response Time (ms)
Read Calls/s
Read Max Response Time (ms)
Write Average Response Time (ms)
Write Calls/s
Write Max Response Time (ms)
VNX File/eNAS metrics for dVol, File pool, and File system
Table 74 VNX File/eNAS metrics for dVol, File pool, and File system
Metric dVol File pool File system Note
Average Read Size (Bytes) X X
Average Write Size (Bytes) X X
Average Completion Time (ms/call) X
Average Service Time (ms/call) X
Available Capacity (GB) X X
Capacity (GB) X X X
Consumed Capacity (GB) X X
Max Capacity (GB) X If automatic extension is enabled, the file system automatically extends to this maximum size when the high water mark is reached. The default value for the high water mark is 90 percent.
Full (%) X
IO Retries (IO/s) X
Queue Length X
Reads (IO/s) X X
Reads (MB/s) X X
Total Operations (IO/s) X
Total Bandwidth (MB/s) X X
Utilization (%) X
Writes (IO/s) X X
Writes (MB/s) X X
Thin Provisioning X True indicates that the file system is enabled for virtual provisioning, an option that can only be used with automatic file system extension. Combining automatic file system extension with virtual provisioning allows the file system to grow gradually, as needed. When virtual provisioning is enabled, NFS and CIFS clients receive reports for
either the virtual maximum file system size or real file system size, whichever is larger.
Read IO Ratio (%) X
Write IO Ratio (%) X
Read Requests (Requests/s) X
Write Requests (Requests/s) X
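The automatic-extension and virtual-provisioning behavior described in the Max Capacity and Thin Provisioning notes can be sketched as two small checks. This is an illustrative sketch only, assuming the 90 percent default high water mark; function and parameter names are hypothetical, not part of the product:

```python
def reported_size_gb(real_size_gb, virtual_max_gb, thin_provisioned):
    """Size reported to NFS/CIFS clients: with virtual provisioning
    enabled, clients see the larger of the virtual maximum size and
    the real file system size."""
    if thin_provisioned:
        return max(virtual_max_gb, real_size_gb)
    return real_size_gb

def should_auto_extend(used_gb, size_gb, max_capacity_gb, high_water_mark_pct=90.0):
    """Automatic extension triggers once usage crosses the high water
    mark (90% by default) and the file system is still below its
    configured maximum capacity."""
    used_pct = used_gb / size_gb * 100.0
    return used_pct >= high_water_mark_pct and size_gb < max_capacity_gb

# A 100 GB file system at 92% usage with a 500 GB maximum should extend:
print(should_auto_extend(92, 100, 500))                   # True
print(reported_size_gb(100, 500, thin_provisioned=True))  # 500
```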
VNXe metrics
EMC Storage Analytics provides VNXe metrics for Array, Disk, FAST Cache, File System, LUN, Storage Pool, Tier, VVol, Virtual Disk, and Storage Processor. Only the resource kinds with associated metrics are shown.
The following metrics are available:
l Elapsed collect time (ms)
l New metrics in each collect call
l New resources in each collect call
l Number of down resources
l Number of metrics collected
l Number of resources collected
Table 75 VNXe metrics for Disk, FAST Cache, File System, LUN, Storage Pool, Tier, Virtual Disk
Metric group | Metric | Disk | FAST Cache | File System | LUN | Storage Pool | Tier
Capacity Size (GB) X
Available Capacity (GB) X X X X
Capacity/Total capacity (GB) X X
Consumed Capacity (GB) X X X X
Full (%) X X X
Thin Provisioning X
Subscribed (%) X
User Capacity (GB) X X
Configuration State X
RAID type X X
FAST Cache X
Disk Count X
Performance Busy (%) X
Reads (IO/s) X X
Reads (MB/s) X X
Total Latency (ms) X
Writes (IO/s) X X
Writes (MB/s) X X
Queue Length X
Data to move Down (GB) X
Data to move Up (GB) X
Data to move Within (GB) X
Disk Count X
Table 76 VNXe metrics for Storage Processor
Metric Group Metric
Cache Dirty Cache Pages (MB)
Read Cache Hit Ratio (%)
Write Cache Hit Ratio (%)
Network CIFS Reads (IOPS)
CIFS Reads (MB/s)
CIFS Writes (IOPS)
CIFS Writes (MB/s)
Network In Bandwidth (MB/s)
Network Out Bandwidth (MB/s)
NFS Reads (IOPS)
NFS Reads (MB/s)
NFS Writes (IOPS)
NFS Writes (MB/s)
Network > NFSv2 Read Calls/s
Read Errors/s
Read Response Time (ms)
Reads (IOPS)
Write Calls/s
Write Errors/s
Write Response Time (ms)
Writes (IOPS)
Network > NFSv3 Access Calls/s
Access Errors/s
Access Response Time (ms)
GetAttr Calls/s
GetAttr Errors/s
GetAttr Response Time (ms)
Lookup Calls/s
Lookup Errors/s
Lookup Response Time (ms)
Read Calls/s
Read Errors/s
Read Response Time (ms)
Reads (IOPS)
SetAttr Calls/s
SetAttr Errors/s
SetAttr Response Time (ms)
Write Calls/s
Write Errors/s
Write Response Time (ms)
Writes (IOPS)
Network > SMB1 Close Average Response Time (ms)
Close Calls/s
Close Max Response Time (ms)
NTCreateX Average Response Time (ms)
NTCreateX Calls/s
NTCreateX Max Response Time (ms)
Reads (IOPS)
Reads (MB/s)
ReadX Average Response Time (ms)
ReadX Calls/s
ReadX Max Response Time (ms)
Trans2Prim Average Response Time (ms)
Trans2Prim Calls/s
Trans2Prim Max Response Time (ms)
Writes (IOPS)
Writes (MB/s)
WriteX Average Response Time (ms)
WriteX Calls/s
WriteX Max Response Time (ms)
Network > SMB2 Close Average Response Time (ms)
Close Calls/s
Close Max Response Time (ms)
Create Average Response Time (ms)
Create Calls/s
Create Max Response Time (ms)
Flush Average Response Time (ms)
Flush Calls/s
Flush Max Response Time (ms)
Ioctl Average Response Time (ms)
Ioctl Calls/s
Ioctl Max Response Time (ms)
Queryinfo Average Response Time (ms)
Queryinfo Calls/s
Queryinfo Max Response Time (ms)
Read Average Response Time (ms)
Read Calls/s
Read Max Response Time (ms)
Reads (IOPS)
Reads (MB/s)
Write Average Response Time (ms)
Write Calls/s
Write Max Response Time (ms)
Writes (IOPS)
Writes (MB/s)
Performance Busy (%)
Reads (IOPS)
Reads (MB/s)
Writes (IOPS)
Writes (MB/s)
VPLEX metrics
EMC Storage Analytics provides VPLEX metrics for Cluster, Director, Distributed Device, Engine, Ethernet Port, Extent, FC Port, Local Device, Storage Array, Storage View, Storage Volume, Virtual Volume, and VPLEX Metro.
Table 77 VPLEX metrics for Cluster
Metric group Metric Description
Status Cluster Type Local or Metro.
Status Health State Possible values include:
l OK - Cluster is functioning normally.
l Degraded - Cluster is not functioning at an optimal level. This may indicate non-functioning remote virtual volumes, unhealthy devices or storage volumes, suspended devices, conflicting director count configuration values, or out-of-date devices.
l Unknown - VPLEX cannot determine the cluster's health state, or the state is invalid.
l Major failure - Cluster is failing and some functionality may be degraded or unavailable. This may indicate complete loss of back-end connectivity.
l Minor failure - Cluster is functioning, but some functionality may be degraded. This may indicate one or more unreachable storage volumes.
l Critical failure - Cluster is not functioning and may have failed completely. This may indicate a complete loss of back-end connectivity.
Status Operational Status
During transition periods, the cluster moves from one operational state to another. Possible values include:
l OK - Cluster is operating normally.
l Cluster departure - One or more of the clusters cannot be contacted. Commands affecting distributed storage are refused.
l Degraded - Cluster is not functioning at an optimal level. This may indicate non-functioning remote virtual volumes, unhealthy devices or storage volumes, suspended devices, conflicting director count configuration values, or out-of-date devices.
l Device initializing - If clusters cannot communicate with each other, then the distributed-device will be unable to initialize.
l Device out of date - Child devices are being marked fully out of date. Sometimes this occurs after a link outage.
l Expelled - Cluster has been isolated from the island either manually (by an administrator) or automatically (by a system configuration setting).
l Shutdown - Cluster's directors are shutting down.
l Suspended exports - Some I/O is suspended. This could be the result of a link failure or the loss of a director. Other states might indicate the true problem. The VPLEX might be waiting for you to confirm the resumption of I/O.
l Transitioning - Components of the software are recovering from a previous incident (for example, the loss of a director or the loss of an inter-cluster link).
Capacity Exported Virtual Volumes
Number of exported virtual volumes.
Exported Virtual Volumes (GB)
Gigabytes of exported virtual volumes.
Used Storage Volumes
Number of used storage volumes.
Used Storage Volumes (GB)
Gigabytes of used storage volumes.
Unused Storage Volumes
Number of unused storage volumes.
Unused Storage Volumes (GB)
Gigabytes of unused storage volumes.
Table 78 VPLEX metrics for Director
Metric Group
Metric Description
CPU Busy (%) Percentage of director CPU usage
Status Operational Status Possible values include:
l OK - Functioning normally
l Degraded - May be out-of-date compared to its mirror
l Unknown - Cannot determine the health state, or the state is invalid
l Error - VPLEX has marked the object as hardware-dead
l Starting - Not yet ready
l Lost-communication - Object is unreachable
Storage Volumes
Read Latency (ms) Average read latency in milliseconds
Write Latency (ms) Average write latency in milliseconds
Virtual Volumes
Read Latency (ms) Average read latency in milliseconds
Reads (MB/s) Number of bytes read per second
Total Reads and Writes (counts/s)
Total number of reads and writes per second
Write Latency (ms) Average write latency in milliseconds
Writes (MB/s) Number of bytes written per second
Memory Memory Used (%) Percentage of memory heap usage by the firmware for its accounting on the director. This value is not the percentage of cache pages in use for user data
Front-end Director
Aborts (counts/s) Number of aborted I/O operations per second through the director's front-end ports
Active Operations (counts)
Number of active, outstanding I/O operations on the director's front-end ports
Compare and Write Latency (ms)
Average time, in milliseconds, that it takes for a VAAI CompareAndWrite request to complete on the director's front-end ports
Operations (counts/s) Number of I/O operations per second through the director's front-end ports
Queued Operations (counts)
Number of queued, outstanding I/O operations on the director's front-end ports
Read Latency (ms) Average time, in milliseconds, that it takes for read requests to complete on the director's front-end ports. Total time it takes VPLEX to complete a read request
Reads (counts/s) Number of read operations per second on the director's front-end ports
Reads (MB/s) Number of bytes per second read from the director's front-end ports
Write Latency (ms) Average time, in milliseconds, that it takes for write requests to complete on the director's front-end ports. Total time it takes VPLEX to complete a write request
Writes (counts/s) Number of write operations per second on the director's front-end ports
Writes (MB/s) Number of bytes per second written to the director's front-end ports
Back-end Director
Aborts (counts/s) Number of aborted I/O operations per second on the director's back-end ports
Operations (counts/s) Number of I/O operations per second through the director's back-end ports
Reads (counts/s) Number of read operations per second by the director's back-end ports
Reads (MB/s) Number of bytes read per second by the director's back-end ports
Resets (counts/s) Number of LUN resets issued per second through the director's back-end ports. LUN resets are issued after 20 seconds of LUN unresponsiveness to outstanding operations.
Timeouts (counts/s) Number of timed out I/O operations per second on the director's back-end ports. Operations time out after 10 seconds
Writes (MB/s) Number of bytes written per second by the director's back-end ports
COM Latency
Average Latency (ms) Average time, in milliseconds, that it took for inter-director WAN messages to complete on this director to the specified cluster in the last 5-second interval
Maximum Latency (ms) Maximum time, in milliseconds, that it took for an inter-director WAN message to complete on this director to the specified cluster in the last 5-second interval
Minimum Latency (ms) Minimum time, in milliseconds, that it took for an inter-director WAN message to complete on this director to the specified cluster in the last 5-second interval
WAN Link Usage
Distributed Device Bytes Received (MB/s)
Number of bytes of distributed-device traffic per second received on the director's WAN ports
Distributed Device Bytes Sent (MB/s)
Number of bytes of distributed-device traffic per second sent on the director's WAN ports
Distributed Device Rebuild Bytes Received (MB/s)
Number of bytes of distributed-device, rebuild/migration traffic per second received on the director's WAN ports
Distributed Device Rebuild Bytes Sent (MB/s)
Number of bytes of distributed-device rebuild/migration per second traffic sent on the director's WAN ports
FC WAN COM
Bytes Received (MB/s) Number of bytes of WAN traffic per second received on this director's Fibre Channel port
Bytes Sent (MB/s) Number of bytes of WAN traffic per second sent on this director's Fibre Channel port
Packets Received (counts/s)
Number of packets of WAN traffic per second received on this director's Fibre Channel port
Packets Sent (counts/s) Number of packets of WAN traffic per second sent on this director's Fibre Channel port
IP WAN COM
Average Latency (ms) Average time, in milliseconds, that it took for inter-director WAN messages to complete on this director's IP port in the last 5-second interval
Bytes Received (MB/s) Number of bytes of WAN traffic per second received on this director's IP port
Bytes Sent (MB/s) Number of bytes of WAN traffic per second sent on this director's IP port
Maximum Latency (ms) Maximum time, in milliseconds, that it took for an inter-director WAN message to complete on this director's IP port in the last 5-second interval
Minimum Latency (ms) Minimum time, in milliseconds, that it took for an inter-director WAN message to complete on this director's IP port in the last 5-second interval
Packets Received (counts/s)
Number of packets of WAN traffic per second received on this director's IP port
Packets Resent (counts/s)
Number of WAN traffic packets re-transmitted per second that were sent on this director's IP port
Packets Sent (counts/s) Number of packets of WAN traffic per second sent on this director's IP port
Received Packets Dropped (counts/s)
Number of WAN traffic packets dropped per second that were received on this director's IP port
Sent Packets Dropped (counts/s)
Number of WAN traffic packets dropped per second that were sent on this director's IP port
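The back-end Timeouts and Resets counters above follow two fixed thresholds: an operation times out after 10 seconds, and a LUN reset is issued after 20 seconds of LUN unresponsiveness. A minimal sketch of that escalation policy (illustrative only, not VPLEX firmware logic; names are hypothetical):

```python
OP_TIMEOUT_S = 10.0   # back-end operations time out after 10 seconds
LUN_RESET_S = 20.0    # LUN resets issued after 20 seconds of unresponsiveness

def classify_outstanding_op(age_s):
    """Classify an outstanding back-end I/O by how long it has been pending."""
    if age_s >= LUN_RESET_S:
        return "reset-lun"   # would be counted in Resets (counts/s)
    if age_s >= OP_TIMEOUT_S:
        return "timed-out"   # would be counted in Timeouts (counts/s)
    return "pending"

print(classify_outstanding_op(5.0))   # pending
print(classify_outstanding_op(12.0))  # timed-out
print(classify_outstanding_op(25.0))  # reset-lun
```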
Table 79 VPLEX metrics for Distributed Device
Metric Group
Metric Description
Capacity Capacity (GB) Capacity in gigabytes
Status Health State Possible values include:
l OK - Functioning normally
l Degraded - May be out-of-date compared to its mirror
l Unknown - Cannot determine the health state, or the state is invalid
l Non-recoverable error - May be out-of-date compared to its mirror, or VPLEX cannot determine the health state
l Critical failure - VPLEX has marked the object as hardware-dead
Operational Status
Possible values include:
l OK - Functioning normally
l Degraded - May be out-of-date compared to its mirror
l Unknown - Cannot determine the health state, or the state is invalid
l Error - VPLEX has marked the object as hardware-dead
l Starting - Not yet ready
l Lost-communication - Object is unreachable
Service Status Possible values include:
l Cluster unreachable - VPLEX cannot reach the cluster; the status is unknown
l Need resume - The other cluster detached the distributed device while it was unreachable. Distributed device needs to be manually resumed for I/O to resume at this cluster.
l Need winner - All clusters are reachable again, but both clusters had detached this distributed device and resumed I/O. You must pick a winner cluster whose data will overwrite the other cluster's data for this distributed device.
l Potential conflict - Clusters have detached each other resulting in a potential for detach conflict.
l Running - Distributed device is accepting I/O
l Suspended - Distributed device is not accepting new I/O; pending I/O requests are frozen.
l Winner-running - This cluster detached the distributed device while the other cluster was unreachable, and is now sending I/O to the device.
Table 80 VPLEX metrics for Engine
Metric Group
Metric Description
Status Health State Possible values include:
l OK - Functioning normally
l Degraded - May be out-of-date compared to its mirror
l Unknown - Cannot determine the health state, or the state is invalid
l Non-recoverable error - May be out-of-date compared to its mirror, or VPLEX cannot determine the health state
l Critical failure - VPLEX has marked the object as hardware-dead
Operational Status
Possible values include:
l OK - Functioning normally
l Degraded - May be out-of-date compared to its mirror
l Unknown - Cannot determine the health state, or the state is invalid
l Error - VPLEX has marked the object as hardware-dead
l Starting - Not yet ready
l Lost-communication - Object is unreachable
Table 81 VPLEX metrics for Ethernet Port
Metric Group
Metric Description
Status Operational Status
Possible values include:
l OK - Functioning normally
l Degraded - May be out-of-date compared to its mirror
l Unknown - Cannot determine the health state, or the state is invalid
l Error - VPLEX has marked the object as hardware-dead
l Starting - Not yet ready
l Lost-communication - Object is unreachable
Table 82 VPLEX metrics for Extent Device
Metric Group
Metric Description
Capacity Capacity (GB) Capacity in gigabytes
Status Health State Possible values include:
l OK - The extent is functioning normally
l Degraded - The extent may be out-of-date compared to its mirror (applies only to extents that are part of a RAID 1 device)
l Unknown - VPLEX cannot determine the extent's operational state, or the state is invalid
l Non-recoverable error - The extent may be out-of-date compared to its mirror (applies only to extents that are part of a RAID 1 device), and/or the health state cannot be determined
Operational Status
Possible values include:
l OK - The extent is functioning normally
l Degraded - The extent may be out-of-date compared to its mirror (applies only to extents that are part of a RAID 1 device)
l Unknown - VPLEX cannot determine the extent's operational state, or the state is invalid
l Starting - The extent is not yet ready
Table 83 VPLEX metrics for Fibre Channel Port
Metric Group
Metric Description
Status Operational Status
Possible values include:
l OK - Functioning normally
l Degraded - May be out-of-date compared to its mirror
l Unknown - Cannot determine the health state, or the state is invalid
l Error - VPLEX has marked the object as hardware-dead
l Starting - Not yet ready
l Lost-communication - Object is unreachable
Table 84 VPLEX metrics for Local Device
Metric Group
Metric Description
Capacity Capacity (GB) Capacity in gigabytes
Status Health State Possible values include:
l OK - Functioning normally
l Degraded - May be out-of-date compared to its mirror
l Unknown - Cannot determine the health state, or the state is invalid
l Non-recoverable error - May be out-of-date compared to its mirror, or VPLEX cannot determine the health state
l Critical failure - VPLEX has marked the object as hardware-dead
Operational Status
Possible values include:
l OK - Functioning normally
l Degraded - May be out-of-date compared to its mirror
l Unknown - Cannot determine the health state, or the state is invalid
l Error - VPLEX has marked the object as hardware-dead
l Starting - Not yet ready
l Lost-communication - Object is unreachable
Service Status Possible values include:
l Cluster unreachable - VPLEX cannot reach the cluster; the status is unknown
l Need resume - The other cluster detached the distributed device while it was unreachable. Distributed device needs to be manually resumed for I/O to resume at this cluster.
l Need winner - All clusters are reachable again, but both clusters had detached this distributed device and resumed I/O. You must pick a winner cluster whose data will overwrite the other cluster's data for this distributed device.
l Potential conflict - Clusters have detached each other resulting in a potential for detach conflict.
l Running - Distributed device is accepting I/O
l Suspended - Distributed device is not accepting new I/O; pending I/O requests are frozen
l Winner-running - This cluster detached the distributed device while the other cluster was unreachable, and is now sending I/O to the device.
Table 85 VPLEX metrics for Storage Array
Metric Group Metric Description
Capacity Allocated Storage Volumes Number of allocated storage volumes
Allocated Storage Volumes (GB) Gigabytes of allocated storage volumes
Used Storage Volumes Number of used storage volumes
Used Storage Volumes (GB) Gigabytes of used storage volumes
Table 86 VPLEX metrics for Storage View
Metric Group Metric Description
Capacity Virtual Volumes (GB)
Gigabytes of virtual volumes
Status Operational Status Possible values include:
l OK - Functioning normally
l Degraded - May be out-of-date compared to its mirror
l Unknown - Cannot determine the health state, or the state is invalid
l Error - VPLEX has marked the object as hardware-dead
l Starting - Not yet ready
l Lost-communication - Object is unreachable
Performance Read Latency (ms) Average read latency for child virtual volumes in milliseconds
Reads (MB/s) Sum of bytes read per second for child virtual volumes
Total Reads and Writes (counts/s)
Sum of reads and writes per second for child virtual volumes
Write Latency (ms) Average write latency for child virtual volumes in milliseconds
Writes (MB/s) Sum of bytes written per second for child virtual volumes
Table 87 VPLEX metrics for Storage Volume
Metric Group Metric Description
Capacity Capacity (GB) Capacity in gigabytes
Status Health State Possible values include:
l OK - The storage volume is functioning normally
l Degraded - The storage volume may be out-of-date compared to its mirror
l Unknown - Cannot determine the health state, or the state is invalid
l Non-recoverable error - May be out-of-date compared to its mirror, or VPLEX cannot determine the health state
l Critical failure - VPLEX has marked the object as hardware-dead
Operational Status
Possible values include:
l OK - Functioning normally
l Degraded - May be out-of-date compared to its mirror (This state applies only to a storage volume that is part of a RAID 1 Metadata Volume)
l Unknown - Cannot determine the health state, or the state is invalid
l Error - VPLEX has marked the object as hardware-dead
l Starting - Not yet ready
l Lost-communication - Object is unreachable
Table 88 VPLEX metrics for Virtual Volume
Metric Group Metric Description
Capacity Capacity (GB) Capacity in gigabytes
Locality Locality Possible values include:
l Local - The volume is local to the enclosing cluster
l Remote - The volume is made available by a different cluster than the enclosing cluster, and is accessed remotely
l Distributed - The virtual volume has or is capable of having legs at more than one cluster
Status Health State Possible values include:
l OK - Functioning normally
l Unknown - Cannot determine the health state, or the state is invalid
l Major failure - One or more of the virtual volume's underlying devices is out-of-date, but will never rebuild
l Minor failure - One or more of the virtual volume's underlying devices is out-of-date, but will rebuild
Operational Status
Possible values include:
l OK - Functioning normally
l Degraded - The virtual volume may have one or more out-of-date devices that will eventually rebuild
l Unknown - VPLEX cannot determine the virtual volume's operational state, or the state is invalid
l Error - One or more of the virtual volume's underlying devices is hardware-dead
l Starting - Not yet ready
l Stressed - One or more of the virtual volume's underlying devices is out-of-date and will never rebuild
Service Status Possible values include:
l Running - I/O is running
l Inactive - The volume is part of an inactive storage-view and is not visible from the host
l Unexported - The volume is unexported
l Suspended - I/O is suspended for the volume
l Cluster-unreachable - Cluster is unreachable at this time
l Need-resume - Issue re-attach to resume after link has returned
Performance Read Latency (ms)
Average read latency for virtual volume in milliseconds
Reads (MB/s) Bytes read per second for virtual volume
Total Reads and Writes (counts/s)
Reads and writes per second for virtual volume
Write Latency (ms)
Average write latency for virtual volume in milliseconds
Writes (MB/s) Bytes written per second for virtual volume
Table 89 VPLEX metrics for VPLEX Metro
Metric Group Metric Description
Status Health State Possible values include:
l OK - Cluster is functioning normally
l Degraded - Cluster is not functioning at an optimal level. This may indicate non-functioning remote virtual volumes, unhealthy devices or storage volumes, suspended devices, conflicting director count configuration values, or out-of-date devices.
l Unknown - VPLEX cannot determine the cluster's health state, or the state is invalid
l Major failure - Cluster is failing and some functionality may be degraded or unavailable. This may indicate complete loss of back-end connectivity.
l Minor failure - Cluster is functioning, but some functionality may be degraded. This may indicate one or more unreachable storage volumes.
l Critical failure - Cluster is not functioning and may have failed completely. This may indicate a complete loss of back-end connectivity.
Operational Status
During transition periods, the cluster moves from one operational state to another. Possible values include:
l OK - Cluster is operating normally
l Cluster departure - One or more of the clusters cannot be contacted. Commands affecting distributed storage are refused.
l Degraded - Cluster is not functioning at an optimal level. This may indicate non-functioning remote virtual volumes, unhealthy devices or storage volumes, suspended devices, conflicting director count configuration values, or out-of-date devices.
l Device initializing - If clusters cannot communicate with each other, then the distributed-device will be unable to initialize.
l Device out of date - Child devices are being marked fully out of date. Sometimes this occurs after a link outage.
l Expelled - Cluster has been isolated from the island either manually (by an administrator) or automatically (by a system configuration setting).
l Shutdown - Cluster's directors are shutting down.
l Suspended exports - Some I/O is suspended. This could be the result of a link failure or the loss of a director. Other states might indicate the true problem. The VPLEX might be waiting for you to confirm the resumption of I/O.
l Transitioning - Components of the software are recovering from a previous incident (for example, the loss of a director or the loss of an inter-cluster link).
XtremIO metrics
EMC Storage Analytics provides XtremIO metrics for Cluster, Data Protection Group, Snapshot, SSD, Storage Controller, Volume, Disk array enclosure (DAE), and X-Brick.
Table 90 XtremIO metrics for Cluster
Metric Group Metric
Capacity Deduplication Ratio
Compression Ratio
Total Efficiency
Thin Provision Savings (%)
Data Reduction Ratio
Capacity > Physical Available Capacity (TB)
Remaining Capacity (%)
Used Capacity (%)
Consumed Capacity (TB)
Total Capacity (TB)
Capacity > Volume Available Capacity (TB)
Consumed Capacity (TB)
Total Capacity (TB)
Performance Total Bandwidth (MB/s)
Total Latency (ms)
Total Operations (IO/s)
Performance > Read Operations
Read Bandwidth (MB/s)
Read Latency (ms)
Reads (IO/s)
Performance > Write Operations
Writes (IO/s)
Write Bandwidth (MB/s)
Write Latency (ms)
Status Health State: Green = Normal, Yellow = Free space <= 90%, Orange = Free space <= 95%, Red = Free space <= 99%
Total Memory In Use (%)
Configuration Encrypted
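The capacity ratios reported for the cluster (Table 90) are related. As a rough interpretation — the exact formulas are not defined in this guide, so treat the relationships below as assumptions — the data reduction ratio combines deduplication and compression, and total efficiency additionally folds in thin-provisioning savings:

```python
def data_reduction_ratio(dedup_ratio, compression_ratio):
    """Assumed relationship: data reduction is the product of the
    deduplication and compression ratios."""
    return dedup_ratio * compression_ratio

def total_efficiency(dedup_ratio, compression_ratio, thin_savings_pct):
    """Assumed relationship: thin-provisioning savings amplify the
    data reduction ratio (50% savings doubles the effective ratio)."""
    reduction = data_reduction_ratio(dedup_ratio, compression_ratio)
    return reduction / (1.0 - thin_savings_pct / 100.0)

# 2:1 dedup and 1.5:1 compression -> 3:1 data reduction;
# 50% thin-provision savings lifts total efficiency to 6:1.
print(data_reduction_ratio(2.0, 1.5))    # 3.0
print(total_efficiency(2.0, 1.5, 50.0))  # 6.0
```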
Table 91 XtremIO metrics for Data Protection Group
Metric Group
Metric
Performance Average SSD Utilization (%)
Table 92 XtremIO metrics for Snapshot
Metric Group
Metric
Capacity Consumed Capacity in XtremIO (GB) - Consumed capacity in gigabytes without "zeroed" space
Consumed Capacity in VMware (GB) - Consumed capacity in gigabytes, including "zeroed" space
Note
This metric is available only when a datastore is built on top of the snapshot. The value of the metric is the consumed datastore capacity, which might not be the same as the consumed snapshot capacity.
Total Capacity (GB)
Performance Average Block Size (KB)
Total Bandwidth (MB/s)
Total Latency (usec)
Total Operations (IOPS)
Unaligned (%)
Performance > Read Operations
Average Block Size (KB)
Average Small Reads (IOPS)
Average Unaligned Reads (IOPS)
Read Bandwidth (MB/s)
Read Latency (usec)
Reads (IOPS)
Performance > Write Operations
Average Block Size (KB)
Average Small Writes (IOPS)
Average Unaligned Writes (IOPS)
Write Bandwidth (MB/s)
Write Latency (usec)
Writes (IOPS)
Configuration Tag
Table 93 XtremIO metrics for SSD
Metric Group
Metric
Capacity Disk Utilization (%)
Endurance Endurance Remaining (%)
Table 94 XtremIO metrics for Storage Controller
Metric Group
Metric
Performance CPU 1 Utilization (%)
CPU 2 Utilization (%)
Status Health State
Table 95 XtremIO metrics for Volume
Metric Group Metric
Capacity Consumed Capacity in XtremIO (GB)
Consumed Capacity in VMware (GB)
Total Capacity (GB)
Performance Average Block Size (KB)
Total Bandwidth (MB/s)
Total Latency (ms)
Total Operations (IOPS)
Unaligned (%)
Performance > Read Operations
Average Block Size (KB)
Average Small Reads (IOPS)
Average Unaligned Reads (IOPS)
Read Bandwidth (MB/s)
Read Latency (ms)
Reads (IOPS)
Performance > Write Operations
Average Block Size (KB)
Average Small Writes (IOPS)
Average Unaligned Writes (IOPS)
Write Bandwidth (MB/s)
Write Latency (ms)
Writes (IOPS)
Table 95 XtremIO metrics for Volume (continued)
Metric Group Metric
Configuration Tag
Table 96 XtremIO metrics for Disk Array Enclosure (DAE)
Metric Group Metric
Status Health State
Table 97 XtremIO metrics for DAE Row Controller
Metric Group Metric
Status Health State
Table 98 XtremIO metrics for NVRAM
Metric Group Metric
Status Health State
Table 99 XtremIO metrics for X-Brick
Metric Group Metric
X-Brick Reporting
APPENDIX D
Views and Reports
This appendix contains the following topics:
l Avamar views and reports.................................................................................158
l eNAS views and reports................................................................................... 159
l Isilon views and reports..................................................................................... 161
l ScaleIO views and reports................................................................................ 163
l VMAX views and reports.................................................................................. 164
l VNX, VNXe, and Unity/UnityVSA views and reports........................................ 166
l XtremIO views and reports............................................................................... 176
Views and Reports 157
Avamar views and reports
The Avamar report includes all views and can be exported to CSV and PDF formats.
You can create Avamar reports for the following metrics:
Table 100 Avamar views and reports
View Metric
DPN Status Summary General | HFS Address
General | Active Sessions (Count)
Status | State
Garbage Collection | Status
Garbage Collection | Result
DPN Capacity Summary Capacity | Total Capacity (GB)
Capacity | Used Capacity (GB)
Capacity | Used Capacity (%)
Capacity | Protected Capacity (GB)
Capacity | Protected Capacity (%)
Capacity | Free Capacity (GB)
Capacity | Free Capacity (%)
DPN Backup Summary (last 24 hours)
Success History (last 24 hours) | Successful Backups (Count)
Success History (last 24 hours) | Successful Backups (%)
Success History (last 24 hours) | Failed Backups (Count)
Success History (last 24 hours) | Successful Restores (Count)
Success History (last 24 hours) | Successful Restores (%)
Success History (last 24 hours) | Failed Restores (Count)
DPN Backup Performance (last 24 hours)
Job Performance History (last 24 hours) | Backup Average Elapsed Time
Job Performance History (last 24 hours) | Average Scanned (GB)
Job Performance History (last 24 hours) | Average Changed (GB)
Job Performance History (last 24 hours) | Average Files Changed (Count)
Job Performance History (last 24 hours) | Average Files Skipped (Count)
Job Performance History (last 24 hours) | Average Sent (GB)
Job Performance History (last 24 hours) | Average Excluded (GB)
Job Performance History (last 24 hours) | Average Skipped (GB)
Job Performance History (last 24 hours) | Average Modified & Sent (GB)
Job Performance History (last 24 hours) | Average Modified & Not Sent (GB)
Table 100 Avamar views and reports (continued)
View Metric
Job Performance History (last 24 hours) | Average Overhead (GB)
DDR Status Summary General | Hostname
General | Model Number
Status | File System Status
Status | Monitoring Status
DDR Capacity Summary Capacity | Total Capacity (GB)
Capacity | Used Capacity (GB)
Capacity | Used Capacity (%)
Capacity | Free Capacity (GB)
Capacity | Free Capacity (%)
Capacity | Protected Capacity (GB)
Capacity | Protected Capacity (%)
eNAS views and reports
The eNAS report includes all views and can be exported in CSV and PDF formats.
You can create views and reports for the following eNAS components.
Table 101 eNAS views and reports
Component Metric
Data Mover (In Use) Avg. CPU Busy (%)
Max CPU Busy (%)
Avg. Total Network Bandwidth (MB/s)
Max Total Network Bandwidth (MB/s)
Type (String)
dVol (In Use) Capacity (GB)
Avg. Average Service Time (ms/call)
Max Average Service Time (ms/call)
Avg. Utilization (%)
Max Utilization (%)
Avg. Total Operations (IO/s)
Max Total Operations (IO/s)
Avg. Total Bandwidth (MB/s)
Max Total Bandwidth (MB/s)
File Pool (In Use) Consumed Capacity (GB)
Table 101 eNAS views and reports (continued)
Component Metric
Available Capacity (GB)
Total Capacity (GB)
File system Total Capacity (GB)
Allocated Capacity (GB)
Consumed Capacity (GB)
Available Capacity (GB)
Avg. Total Operations (IO/s)
Max Total Operations (IO/s)
Avg. Total Bandwidth (MB/s)
Max Total Bandwidth (MB/s)
Isilon views and reports
You can create views and reports for Isilon components. The report name is Isilon Report, which contains all the following views:
Table 102 Isilon views and reports
Component Metric group Metric
Isilon Cluster Performance
Summary CPU Usage (%)
Number of Active Jobs
Node | External Network
External Throughput Rate (In, MB/s)
External Throughput Rate (Out, MB/s)
Node | Performance Protocol Operations Rate
Node | Summary Connected Clients
Cluster | Summary Active Jobs
Inactive Jobs
Node | Summary Job Workers
Isilon Cache Performance
Node | Cache Overall Cache Hit Rate (MB/s)
Overall Cache Throughput Rate (MB/s)
Average Cache Data Age (s)
L1 Cache Starts (MB/s)
L1 Cache Hits (MB/s)
L1 Cache Misses (MB/s)
L1 Cache Waits (MB/s)
L1 Cache Prefetch Starts (MB/s)
L1 Cache Prefetch Hits (MB/s)
L1 Cache Prefetch Misses (MB/s)
Isilon Cluster Capacity
Cluster | Capacity Total Capacity (TB)
Remaining Capacity (TB)
Remaining Capacity (%)
User Data Including Protection (TB)
Snapshot Usage (TB)
Isilon Cluster Deduplication
Cluster | Deduplication Deduplicated Data (Logical, GB)
Deduplicated Data (Physical, GB)
Saved Data (Logical, GB)
Saved Data (Physical, GB)
Table 102 Isilon views and reports (continued)
Component Metric group Metric
Isilon Disk Performance
Node | Performance Protocol Operations Rate
Disk Activity (%)
Disk Operations Rate (Read)
Disk Operations Rate (Write)
Average Disk Operation Size (MB)
Average Pending Disk Operations Count
Slow Disk Access Rate
Isilon File System Performance
Node | Performance File System Events Rate
Deadlock File System Events Rate
Locked File System Events Rate
Contended File System Events Rate
Blocking File System Events Rate
Isilon Network Performance
Node | External Network
External Network Throughput Rate (In, MB/s)
External Network Throughput Rate (Out, MB/s)
External Network Packets Rate (In, MB/s)
External Network Packets Rate (Out, MB/s)
External Network Errors (In, MB/s)
External Network Errors (Out, MB/s)
Isilon Node Performance
Node | Summary CPU Usage (%)
Node | External Network
External Throughput Rate (In, MB/s)
External Throughput Rate (Out, MB/s)
Node | Performance Disk Activity (%)
Disk Throughput Rate (Read)
Disk Throughput Rate (Write)
Disk Operations Rate (Read)
Disk Operations Rate (Write)
Protocol Operations Rate
Slow Disk Access Rate
Node | Summary Active Clients
Connected Clients
Pending Disk Operations Latency (ms)
ScaleIO views and reports
You can create views and reports for the following ScaleIO components:
Table 103 ScaleIO views and reports
Component Metric
ScaleIO Volume Number of Child Volumes (Count)
Number of Descendant Volumes (Count)
Number of Mapped SDCs (Count)
Volume Size (GB)
Average Read I/O Size (MB)
Average Write I/O Size (MB)
Total Read IO/s
Total Write IO/s
Total Reads (MB/s)
Total Writes (MB/s)
ScaleIO Protection Domain Maximum Capacity (GB)
Protected Capacity (GB)
Snap Used Capacity (GB)
Thick Used Capacity (GB)
Thin Used Capacity (GB)
Unused Capacity (GB)
Used Capacity (GB)
Average Read I/O Size (MB)
Average Write I/O Size (MB)
Total Read IO/s
Total Write IO/s
Total Reads (MB/s)
Total Writes (MB/s)
ScaleIO SDC Number of Mapped Volumes (Count)
Total Mapped Capacity (GB)
Average Read I/O Size (MB)
Average Write I/O Size (MB)
Total Read IO/s
Total Write IO/s
Total Read (MB/s)
Table 103 ScaleIO views and reports (continued)
Component Metric
Total Write (MB/s)
ScaleIO SDS Maximum Capacity (GB)
Snap Used Capacity (GB)
Thick Used Capacity (GB)
Thin Used Capacity (GB)
Unused Capacity (GB)
Used Capacity (GB)
Average Read IO Size (MB)
Average Write IO Size (MB)
Total Read IO/s
Total Write IO/s
Total Read (MB/s)
Total Write (MB/s)
Note
The MDM list view does not contain component-specific metrics.
VMAX views and reports
VMAX reports consist of multiple component list views with the supported VMAX metrics. The reports can be exported in CSV and PDF formats.
You can create the following views and reports:
Table 104 VMAX views and reports
Metric SRDF Report VMAX Report
Device X X
Front-End Director X
Front-End Port X
Back-End Director X
Back-End Port X
Remote Replica Group X
SRDF Director X
SRDF Port X
SLO X
Storage Group X
Table 104 VMAX views and reports (continued)
Metric SRDF Report VMAX Report
Storage Resource Pool X
The metrics available for each component are listed in the following table.
Table 105 VMAX available metrics
Metric | Storage Group | Device | Front-End Director | Front-End Port | Back-End Director | Back-End Port | SRDF Director | Remote Replica Group | Storage Resource Pool
Total Capacity (GB) X X X
Current Size (GB) X X X
Used Capacity (GB) X X X
Usable Capacity (GB) X X X
Workload (%) X X X
Under Used (%) X X X
Reads IO/s X X X X X
Reads MB/s X X X X X
Writes IO/s X X X X X X X
Writes MB/s X X X X X X
Total Operations IO/s X X X X X X X
Total Bandwidth MB/s X X X X X
Full (%) X X
Total Bandwidth IO/s X
Total Hits IO/s X
Busy (%) X X X X X
SRDFA Writes IO/s X
SRDFA Writes MB/s X
SRDFS Writes IO/s X
SRDFS Writes MB/s X
Avg. Cycle Time (seconds) X
Delta Set Extension Threshold (integer) X
Devices in Session (count) X
Table 105 VMAX available metrics (continued)
Metric | Storage Group | Device | Front-End Director | Front-End Port | Back-End Director | Back-End Port | SRDF Director | Remote Replica Group | Storage Resource Pool
HA Repeat Writes (count/s) X
Response Time (ms) X X X X
Hit (%) X
Miss (%) X
Note
The current list views of SRDF Port and SLO do not contain any component-specific metrics.
VNX, VNXe, and Unity/UnityVSA views and reports
You can create views and reports for VNX, VNXe, and Unity resources. Several predefined views and templates are also available.
Report templates
Note
VNXe storage objects are contained in Unity views and reports.
The predefined report templates consist of several list views under the adapter instance, as shown in the following table.
Table 106 VNX, VNXe, and Unity/UnityVSA views and reports
Metric VNX Block Report VNX File Report VNXe Report Unity/UnityVSA
Alerts X X X X
Storage Pool (In Use) X X X
RAID Group (In Use) X
LUN X X X
Disk (In Use) X X X
SP Front-End Port X
Data Mover (In Use) X
File Pool (In Use) X
File System X X X
dVol (In Use) X
VVol (In Use) X
Predefined views
The following sections describe the available predefined views:
l Alerts
l VNX Data Mover
l VNX File System
l VNX File Pool
l VNX dVol
l VNX LUN
l VNX Tier
l VNX FAST Cache
l VNX Storage Pool
l VNX Disk
l VNX Storage Processor
l VNX Storage Processor Front End Port
l VNX RAID Group
l Unity File System
l Unity LUN
l Unity Tier
l Unity Storage Pool
l Unity Disk
l Unity Storage Processor
l Unity VVol (In Use) on page 175
Alerts
Alert definitions apply to all resources.
Table 107 Alerts
Metric Description
Criticality level The criticality level of the alert: Warning, Immediate, or Critical
Object name Name of the impacted object
Object kind Resource kind of the impacted object
Alert impact Impacted badge (Risk, Health, or Efficiency) of the alert
Start time Start time of the alert
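For readers who script against an exported Alerts view, the five columns above map naturally onto a small record type. The sketch below is purely illustrative; the field names, validation helper, and sample row are assumptions based on Table 107, not an ESA or vRealize Operations API:

```python
from dataclasses import dataclass

# One row of the Alerts view (Table 107). Field names mirror the table's
# metric column; the sample values are hypothetical.
@dataclass
class AlertRow:
    criticality: str   # "Warning", "Immediate", or "Critical"
    object_name: str   # name of the impacted object
    object_kind: str   # resource kind of the impacted object
    alert_impact: str  # impacted badge: "Risk", "Health", or "Efficiency"
    start_time: str    # start time of the alert

VALID_CRITICALITY = {"Warning", "Immediate", "Critical"}
VALID_IMPACT = {"Risk", "Health", "Efficiency"}

def is_well_formed(row: AlertRow) -> bool:
    """Check a row against the value sets that Table 107 defines."""
    return (row.criticality in VALID_CRITICALITY
            and row.alert_impact in VALID_IMPACT)

row = AlertRow("Critical", "LUN_042", "VNX LUN", "Health", "2017-10-01 03:12")
print(is_well_formed(row))  # True
```

A filter like this is handy when post-processing a CSV export of the view, since the criticality and badge columns take only the enumerated values shown in the table.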
VNX Data Mover
Table 108 VNX Data Mover
Metric group Metric Description
CPU Busy (%) VNX Data Mover CPU busy trend
Network NFS Reads (MB/s) VNX Data Mover NFS bandwidth trend
NFS Writes (MB/s)
Table 108 VNX Data Mover (continued)
Metric group Metric Description
NFS Total Bandwidth (MB/s)
In Bandwidth (MB/s) VNX Data Mover network bandwidth trend
Out Bandwidth (MB/s)
Total Bandwidth (MB/s)
NFS Reads (IO/s) VNX Data Mover NFS IOPS trend
NFS Writes (IO/s)
NFS Total Operations (IO/s)
CPU % Busy - Average VNX Data Mover (in use)
% Busy - Max
Network Total Network Bandwidth - Average (MB/s)
Total Network Bandwidth - Max (MB/s)
Configuration Data Mover Type
VNX File System
Table 109 VNX File System
Metric group Metric Description
Performance Total Operations (IO/s) VNX file system IOPS trend
Reads (IO/s)
Writes (IO/s)
Total Bandwidth (MB/s) VNX file system bandwidth trend
Reads (MB/s)
Writes (MB/s)
Capacity Consumed Capacity (GB) VNX file system capacity trend
Total Capacity (GB)
Capacity Total Capacity (GB) VNX file system List
Allocated Capacity (GB)
Consumed Capacity (GB)
Available Capacity (GB)
Performance Avg. Total Operations (IO/s)
Max Total Operations (IO/s)
Table 109 VNX File System (continued)
Metric group Metric Description
Avg. Total Bandwidth (MB/s)
Max Total Bandwidth (MB/s)
VNX File Pool
Table 110 VNX File Pool
Metric group Metric Description
Capacity Consumed Capacity (GB) VNX file pool capacity trend
Total Capacity (GB)
Capacity Available Capacity (GB) VNX file pool (in use) list
Consumed Capacity (GB)
Total Capacity (GB)
VNX dVol
Table 111 VNX dVol
Metric group Metric Description
Performance Utilization (%) VNX dVol utilization trend
Performance Total Operations (IO/s) VNX dVol IOPS trend
Reads (IO/s)
Writes (IO/s)
Performance Total Bandwidth (MB/s) VNX dVol bandwidth trend
Reads (MB/s)
Writes (MB/s)
Capacity Capacity (GB) VNX dVol (in use) list
Performance Avg. Average Service Time (uSec/call)
Max Average Service Time (uSec/call)
Avg. Utilization (%)
Max Utilization (%)
Avg. Total Operations (IO/s)
Max Total Operations (IO/s)
Avg. Total Bandwidth (MB/s)
Table 111 VNX dVol (continued)
Metric group Metric Description
Max Total Bandwidth (MB/s)
VNX LUN
Table 112 VNX LUN
Metric group Metric Description
Performance Total Operations (IO/s) VNX LUN IOPS trend
Reads (IO/s)
Writes (IO/s)
Performance Total Bandwidth (MB/s) VNX LUN bandwidth trend
Reads (MB/s)
Writes (MB/s)
Performance Total Latency (ms) VNX LUN total latency trend
Performance Avg. Total Operations (IO/s) VNX LUN list
Max Total Operations (IO/s)
Avg. Total Bandwidth (MB/s)
Max Total Bandwidth (MB/s)
Avg. Total Latency (ms)
Max Total Latency (ms)
Capacity Total Capacity (GB)
VNX Tier
Table 113 VNX Tier
Metric group Metric Description
Capacity Consumed Capacity (GB) VNX Tier capacity trend
Total Capacity (GB)
VNX FAST Cache
Table 114 VNX FAST Cache
Metric group Metric Description
Performance Read Cache Hit Ratio (%) VNX FAST Cache hit ratio trend
Write Cache Hit Ratio (%)
VNX Storage Pool
Table 115 VNX Storage Pool
Metric group Metric Description
Capacity Consumed Capacity (GB) VNX storage pool capacity trend
Total Capacity (GB)
Capacity Available Capacity (GB) VNX storage pool (in use) List
Consumed Capacity (GB)
Full (%)
Subscribed (%)
Configuration LUN Count
VNX Disk
Table 116 VNX Disk
Metric group Metric Description
Performance Total Operations (IO/s) VNX disk IOPS trend
Reads (IO/s)
Writes (IO/s)
Performance Total Bandwidth (MB/s) VNX disk bandwidth (MB/s) trend
Reads (MB/s)
Writes (MB/s)
Performance Total Latency (ms) VNX disk Total Latency (ms) trend
Performance Busy (%) VNX disk busy (%) trend
Capacity Capacity (GB) VNX disk (in use) List
Performance Avg. Total Operations (IO/s)
Max Total Operations (IO/s)
Avg. Total Bandwidth (MB/s)
Max Total Bandwidth (MB/s)
Avg. Total Latency (ms)
Max Total Latency (ms)
Avg. Busy (%)
Max Busy (%)
Configuration Type
VNX Storage Processor
Table 117 VNX Storage Processor
Metric group Metric Description
CPU CPU Busy (%) VNX storage processor CPU busy trend
Disk Disk Total Operations (IO/s) VNX storage processor disk IOPS trend
Disk Reads (IO/s)
Disk Writes (IO/s)
Disk Disk Total Bandwidth (MB/s)
VNX storage processor disk bandwidth trend
Disk Reads (MB/s)
Disk Writes (MB/s)
VNX Storage Processor Front End Port
Table 118 VNX Storage Processor Front End Port
Metric group Metric Description
Performance Total Operations (IO/s) VNX SP front end port IOPS trend
Reads (IO/s)
Writes (IO/s)
Performance Total Bandwidth (MB/s) VNX SP front end port bandwidth trend
Reads (MB/s)
Writes (MB/s)
Performance Avg. Total Operations (IO/s) VNX SP front end port List
Max Total Operations (IO/s)
Avg. Total Bandwidth (MB/s)
Max Total Bandwidth (MB/s)
VNX RAID Group
Table 119 VNX RAID Group
Metric group Metric Description
Capacity Available Capacity (GB) VNX RAID group (in use) list
Total Capacity (GB)
Full (%)
Configuration Disk Count
LUN Count
Table 119 VNX RAID Group (continued)
Metric group Metric Description
Max Disks
Max LUNs
Unity File System
Table 120 Unity File System
Metric group Metric Description
Capacity Consumed Capacity (GB) Unity file system capacity trend
Total Capacity (GB)
Capacity Total Capacity (GB) Unity file system List
Allocated Capacity (GB)
Consumed Capacity (GB)
Available Capacity (GB)
Unity LUN
Table 121 Unity LUN
Metric group Metric Description
Performance Reads (IO/s) Unity LUN IOPS trend
Writes (IO/s)
Performance Reads (MB/s) Unity LUN bandwidth trend
Writes (MB/s)
Capacity Total Capacity (GB) Unity LUN List
Performance Avg. Reads (IO/s)
Max Reads (IO/s)
Avg. Writes (IO/s)
Max Writes (IO/s)
Avg. Reads (MB/s)
Max Reads (MB/s)
Avg. Writes (MB/s)
Max Writes (MB/s)
Unity Tier
Table 122 Unity Tier
Metric group Metric Description
Capacity Consumed Capacity (GB) Unity tier capacity trend
Total Capacity (GB)
Unity Storage Pool
Table 123 Unity Storage Pool
Metric group Metric Description
Capacity Consumed Capacity (GB) Unity storage pool capacity trend
Total Capacity (GB)
Capacity Consumed Capacity (GB) Unity storage pool (in use) List
Total Capacity (GB)
Full (%)
Subscribed (%)
Unity Disk
Table 124 Unity Disk
Metric group Metric Description
Performance Reads (IO/s) Unity disk IOPS trend
Writes (IO/s)
Performance Reads (MB/s) Unity disk bandwidth
Writes (MB/s)
Performance Busy (%) Unity disk busy trend
Capacity Size (GB) Unity disk (in use) list
Performance Avg. Reads (IO/s)
Max Reads (IO/s)
Avg. Writes (IO/s)
Max Writes (IO/s)
Avg. Reads (MB/s)
Max Reads (MB/s)
Avg. Writes (MB/s)
Max Writes (MB/s)
Avg. Busy (%)
Max Busy (%)
Table 124 Unity Disk (continued)
Metric group Metric Description
Configuration Type
Unity Storage Processor
Table 125 Unity Storage Processor
Metric group Metric Description
Performance Busy (%) Unity storage processor busy trend
Performance Reads (IO/s) Unity storage processor IOPS trend
Writes (IO/s)
Performance Reads (MB/s) Unity storage processor bandwidth trend
Writes (MB/s)
Network NFS Reads (IO/s) Unity storage processor NFS IOPS trend
NFS Writes (IO/s)
Network NFS Reads (MB/s) Unity storage processor NFS bandwidth trend
NFS Writes (MB/s)
Unity VVol (In Use)
Table 126 Unity VVol (In Use)
Metric group Metric Description
Unity VVol Bandwidth Trend
Reads (MB/s)
Writes (MB/s)
Total (MB/s)
Reads (MB/s) (5 days forecast)
Writes (MB/s) (5 days forecast)
Total (MB/s) (5 days forecast)
Unity VVol Capacity Trend
Consumed Capacity (GB)
Consumed Capacity (GB) (5 days forecast)
Total Capacity
Total Capacity (5 days forecast)
Unity VVol IO Trend Reads (IO/s)
Writes (IO/s)
Table 126 Unity VVol (In Use) (continued)
Metric group Metric Description
Total (IO/s)
Reads (IO/s) (5 days forecast)
Writes (IO/s) (5 days forecast)
Total (IO/s) (5 days forecast)
Unity VVol (In Use) List Available Capacity (GB)
Reads (IO/s)
Writes (IO/s)
Total (IO/s)
Reads (MB/s)
Writes (MB/s)
Total (MB/s)
Latency (ms)
XtremIO views and reports
The XtremIO report includes all views and can be exported in CSV and PDF formats.
You can create views and reports for the following XtremIO components:
Table 127 XtremIO views and reports
Component Metric group Metric
XtremIO cluster capacity consumption
n/a Available Capacity (TB, physical)
Consumed Capacity (TB, physical)
Total Capacity (TB, physical)
Available Capacity (TB, volume)
Consumed Capacity (TB, volume)
Total Capacity (TB, volume)
XtremIO health state n/a Cluster health state
Storage Controller Health State
XtremIO LUN Volume|Performance:Read Operations|Read Bandwidth
Read Bandwidth (MB/s)
Volume|Performance:Read Operations|Read Latency Read Latency (ms)
Volume|Performance:Read Operations|Reads Reads (IO/s)
Table 127 XtremIO views and reports (continued)
Component Metric group Metric
Volume|Performance:Write Operations|Write Bandwidth
Write Bandwidth (MB/s)
Volume|Performance:Write Operations|Write Latency Write Latency (ms)
Volume|Performance:Write Operations|Writes Writes (IO/s)
Volume|Performance |Total Bandwidth Total Bandwidth (MB/s)
Volume|Performance |Total Latency Total Latency (ms)
Volume|Performance|Total Operations Total Operations (IO/s)
Volume|Capacity| Consumed Capacity in VMware Consumed Capacity in VMware (GB)
Volume|Capacity| Consumed Capacity in XtremIO Consumed Capacity in XtremIO (GB)
Volume|Capacity|Total Capacity Total Capacity (GB)
Summary (Min, Max, Average)
XtremIO performance Cluster|Performance:Read Operations|Read Bandwidth
Read Bandwidth (MB/s)
Cluster|Performance:Read Operations|Read Latency Read Latency (ms)
Cluster|Performance:Read Operations|Reads Reads (IO/s)
Cluster|Performance:Write Operations|Write Bandwidth
Write Bandwidth (MB/s)
Cluster|Performance:Write Operations|Write Latency Write Latency (ms)
Cluster|Performance:Write Operations|Writes Writes (IO/s)
Cluster|Performance |Total Bandwidth Total Bandwidth (MB/s)
Cluster|Performance |Total Latency Total Latency (ms)
Cluster|Performance|Total Operations Total Operations (IO/s)
Storage Controller | Performance | CPU 1 Utilization CPU 1 Utilization (%)
Storage Controller | Performance | CPU 2 Utilization CPU 2 Utilization (%)
Summary (Max, Min, Average)
XtremIO storage efficiency
Cluster|Capacity|Deduplication Ratio Deduplication Ratio
Cluster|Capacity|Compression Ratio Compression Ratio
Cluster|Capacity|Thin Provision Savings Thin Provision Savings (%)
SSD|Endurance|Endurance Remaining SSD Endurance Remaining (%)
SSD|Capacity|Disk Utilization Disk Utilization (%)
Summary (Average)
APPENDIX E
Topology Diagrams
This appendix includes the following topics:
l Topology mapping............................................................................................ 180
l Avamar topology...............................................................................................180
l Isilon topology................................................................................................... 181
l RecoverPoint for Virtual Machines topology.....................................................182
l ScaleIO topology.............................................................................................. 183
l Unity topology.................................................................................................. 184
l UnityVSA topology........................................................................................... 185
l VMAX3 and VMAX All Flash topology............................................................... 186
l VMAX VVol topology.........................................................................................187
l VNX Block topology..........................................................................................188
l VNX File/eNAS topology.................................................................................. 189
l VNXe topology................................................................................................. 190
l VPLEX Local topology....................................................................................... 191
l VPLEX Metro topology..................................................................................... 192
l XtremIO topology............................................................................................. 193
Topology Diagrams 179
Topology mapping
You can view graphic representations of topology mapping using vRealize Operations Manager health trees. The ESA dashboards use topology mapping to display resources and metrics.
ESA establishes mappings between:
l Storage system components
l Storage system objects and vCenter objects
Topology mapping enables health scores and alerts from storage system components, such as storage processors and disks, to appear on affected vCenter objects, such as LUNs, datastores, and virtual machines. Topology mapping between storage system objects and vCenter objects uses a vCenter adapter instance.
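The propagation described above can be pictured as a walk over a parent-child graph: an issue on a low-level storage component surfaces on every vCenter object mapped above it. The sketch below is illustrative only; the object names, scores, and worst-case scoring rule are assumptions for the example, not the ESA or vRealize Operations implementation:

```python
# Illustrative sketch of topology-based health propagation: the effective
# health of a vCenter object (e.g., a datastore) is bounded by the worst
# health score of the storage components mapped beneath it.

# child -> list of parents (in the topology drawings, arrowheads point
# to parents)
TOPOLOGY = {
    "disk-0": ["storage-pool-A"],
    "storage-pool-A": ["lun-7"],
    "lun-7": ["datastore-1"],
    "datastore-1": ["vm-web01"],
}

# Raw per-object health scores, 0 (bad) to 100 (good); hypothetical values
HEALTH = {"disk-0": 25, "storage-pool-A": 90, "lun-7": 100,
          "datastore-1": 100, "vm-web01": 100}

def effective_health(obj: str) -> int:
    """Worst-case health of obj and everything mapped beneath it."""
    children = [c for c, parents in TOPOLOGY.items() if obj in parents]
    scores = [HEALTH[obj]] + [effective_health(c) for c in children]
    return min(scores)

# A failing disk surfaces on the virtual machine at the top of the tree
print(effective_health("vm-web01"))  # 25
```

This is why a degraded storage processor or disk can pull down the health badge of a LUN, datastore, or virtual machine that, viewed in isolation, reports no problem of its own.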
Avamar topology
The drawing in this section shows the components of the Avamar topology.
Figure 14 Avamar components
(The drawing shows the relationships among the Avamar DPN, Domain, Client, Policy/Group, DDR, and VMware VM objects; arrowheads point to parents, and cascaded entities can repeat.)
Isilon topology
The drawing in this section shows the components of the Isilon topology.
Figure 15 Isilon components
(The drawing shows the relationships among the adapter instance, Cluster, Tier, Node pool, Node, Access zone, NFS export, SMB share, and VMware datastore objects; arrowheads point to parents, and cascaded entities can repeat.)
RecoverPoint for Virtual Machines topology
The drawing in this section shows the components of the RecoverPoint for Virtual Machines topology.
Figure 16 RecoverPoint for Virtual Machines components
(The drawing shows the relationships among the RecoverPoint System, Cluster, vRPA, Splitter, Consistency Group, Copy, Link, Replication Set, Journal Volume, Repository Volume, User Volume, and the VMware Compute Resource and Virtual Machine objects.)
ScaleIO topology
The drawing in this section shows the components of the ScaleIO topology.
Figure 17 ScaleIO components
(The drawing shows the relationships among the MDM Cluster, MDM, System, Protection Domain, Fault Set, SDS, SDC, Device, Storage Pool, Volume, Snapshot, and VMware Datastore objects; arrowheads point to parents, and cascaded entities can repeat.)
Unity topology
The drawing in this section shows the components of the Unity topology.
Figure 18 Unity components
(The drawing shows the relationships among the EMC adapter instance, Storage Processor, NAS Server, File System, NFS Export, LUN, Consistency Group, Storage Pool, Tier, Disk, FAST Cache, Storage Container, VVol, and the VMware NFS Datastore, VMFS Datastore, VVol Datastore, and VM objects; arrowheads point to parents, and cascaded entities can repeat.)
UnityVSA topology
The drawing in this section shows the components of the UnityVSA topology.
Figure 19 UnityVSA components
VMAX3 and VMAX All Flash topology
The drawing in this section shows the components of the VMAX topology.
Figure 20 VMAX3 and VMAX All Flash components
(The drawing shows the relationships among the VMAX3 Array, Storage Resource Pool, Storage Group, Service Level Objectives, Device, Front-End Director and Port, Back-End Director and Port, SRDF Director and Port, Remote Replica Group, eNAS Disk Volume, and the VMware Datastore and Virtual Machine objects; arrowheads point to parents, and cascaded entities can repeat.)
VMAX3 and VMAX All Flash topology rules
The rules in this section govern how objects are displayed in the VMAX topology dashboard and which metrics are collected for them.
l vRealize Operations Manager does not display devices that are unmapped and unbound.
l vRealize Operations Manager does not display devices that are mapped and bound but unused by VMware, VNX, eNAS, or VPLEX.
l If the corresponding EMC vSphere adapter instance is running on the same vRealize Operations Manager appliance, vRealize Operations Manager displays devices that are mapped, bound, and used by VMware RDMs.
l A VMAX device is displayed when the corresponding VPLEX adapter instance is added.
l vRealize Operations Manager does not display Storage Groups with unmapped and unbound devices.
l vRealize Operations Manager displays Storage Groups that contain mapped and bound devices.
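Taken together, the bullets above amount to a filter on device state. The following sketch restates them as a predicate; the attribute names and the shape of the data are illustrative assumptions, not vRealize Operations or ESA API fields:

```python
# Restating the VMAX topology display rules as a predicate.
# The attribute names (mapped, bound, used_by) are illustrative only.

def device_is_displayed(mapped: bool, bound: bool, used_by: set) -> bool:
    """Apply the VMAX device display rules.

    used_by holds the device's consumers, e.g. {"vmware-rdm", "vplex",
    "vnx", "enas"}; an empty set means mapped/bound but unused. (The
    VMware-RDM and VPLEX cases additionally require the corresponding
    adapter instance to be present, which this sketch does not model.)
    """
    if not (mapped and bound):
        return False   # unmapped and unbound devices are not displayed
    if not used_by:
        return False   # mapped and bound but unused: also not displayed
    return True        # mapped, bound, and in use: displayed

def storage_group_is_displayed(devices: list) -> bool:
    """Storage Groups appear when they contain mapped and bound devices."""
    return any(mapped and bound for mapped, bound, _ in devices)

print(device_is_displayed(True, True, {"vmware-rdm"}))  # True
print(device_is_displayed(True, True, set()))           # False
```

The point of the sketch is simply that both conditions (mapped/bound state and an actual consumer) must hold before a device appears in the dashboard.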
VMAX VVol topology
The drawing in this section shows the components of the VMAX VVol topology.
Note
Because of the limitations of both vRealize Operations and the VMAX VVol architecture, it is not possible to show the relationship between virtual machines, VVols, and the VMAX VVol Storage Resource.
Figure 21 VMAX VVol components
(The drawing shows the relationships among the VMAX3/VMAX All Flash Array, Storage Resource Pool, Storage Group, Service Level Objectives, Device, Front-End Director and Port, SRDF Director and Port, Remote Replica Group, eNAS Disk Volume, VMAX VVol Protocol Endpoint, VMAX VVol Storage Resource, VMAX VVol Storage Container, and the VMware Datastore and Virtual Machine objects; arrowheads point to parents, and cascaded entities can repeat.)
VNX Block topology
The drawing in this section shows the components of the VNX Block topology.
Figure 22 VNX Block components
(The drawing shows the relationships among the array instance, SP A or B, SP Front End Port, FAST Cache, RAID Group, Storage Pool, Tier, Disk, LUN, and the Datastore, Virtual Machine, Physical Host, Hyper-V VM, and non-ESX host and VM objects; arrowheads point to parents, and cascaded entities can repeat.)
VNX File/eNAS topology
The drawing in this section shows the components of the VNX File and eNAS topologies.
Figure 23 VNX File/eNAS topology
(The drawing shows the relationships among the array instance, Data Mover, standby Data Mover, VDM, File System, NFS Export, File Pool, Disk Volume, the underlying VNX Block LUNs or VMAX3 Devices, and the VMware Datastore object; arrowheads point to parents, and cascaded entities can repeat.)
VNXe topology
The drawing in this section shows the components of the VNXe topology.
Figure 24 VNXe components
(The drawing shows the relationships among the EMC adapter instance, Storage Processor, NAS Server, File System, NFS Export, LUN, LUN Group, Storage Pool, Tier, Disk, FAST Cache, and the VMware NFS and VMFS Datastore objects; arrowheads point to parents, and cascaded entities can repeat.)
VPLEX Local topology
The drawing in this section shows the components of the VPLEX Local topology.
Figure 25 VPLEX Local components
(The drawing shows the relationships among the Cluster, Engine, Director, FC Port, Ethernet Port, Storage View, Virtual Volume, Device, Extent, Storage Volume, Storage Array, and the VMware Datastore and Virtual Machine objects. Back-end block devices, such as VNX Block or Unity LUNs, XtremIO volumes or snapshots, and VMAX devices, map to the corresponding VNX, XtremIO, or VMAX adapter instance; arrowheads point to parents.)
VPLEX Metro topology

The drawing in this section shows the components of the VPLEX Metro topology.
Figure 26 VPLEX Metro components
(Diagram components: each VPLEX Metro cluster, Cluster-1 and Cluster-2, contains an Engine, Director, FC Ports, Ethernet Ports, a Storage View, Virtual Volumes, Devices, Extents, and Storage Volumes backed by a Storage Array. A Distributed Volume is built on a Distributed Device with a local Device leg on each cluster, and local Virtual Machines and VMware Datastores attach on either side. Back-end block devices are supplied on each side through a VNX, XtremIO, or VMAX adapter instance or an XtremIO Cluster: VNX Block or Unity LUNs, XtremIO volumes or snapshots, or VMAX devices. Arrowheads point to the parent object.)
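The Metro diagram's key structural point is that a distributed device has one local-device leg in each cluster, while everything else mirrors the Local topology per side. A minimal sketch of that relationship, using illustrative names only (the `LocalDevice`/`DistributedDevice` types and the `cluster-1`/`cluster-2` labels are assumptions for this sketch, not VPLEX CLI objects):

```python
# Illustrative sketch: a VPLEX Metro distributed device is backed by one
# local device leg in each cluster. Names are illustrative, not VPLEX output.
from dataclasses import dataclass

@dataclass(frozen=True)
class LocalDevice:
    cluster: str   # e.g. "cluster-1" or "cluster-2"
    extent: str    # extent carved from a back-end storage volume

@dataclass(frozen=True)
class DistributedDevice:
    name: str
    legs: tuple    # exactly one LocalDevice per cluster

dd = DistributedDevice(
    name="dd_example",
    legs=(LocalDevice("cluster-1", "extent_A"),
          LocalDevice("cluster-2", "extent_B")),
)

# A distributed virtual volume built on this device spans both clusters:
clusters = sorted({leg.cluster for leg in dd.legs})
print(clusters)   # ['cluster-1', 'cluster-2']
```

Modeling the legs as a tuple keyed by cluster mirrors the diagram: remove either leg and the device degenerates to the Local topology on the surviving side.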
XtremIO topology

The drawing in this section shows the components of the XtremIO topology.
Figure 27 XtremIO components
(Diagram components include VMware Datastore, Adapter Instance, and X-Brick.)