
EMC Storage Analytics Version 4.1

Installation and User Guide P/N 302-001-532

REV 11

Copyright 2014-2016 EMC Corporation. All rights reserved. Published in the USA.

Published September 2016

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

The information in this publication is provided as is. EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

EMC², EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners.

For the most up-to-date regulatory document for your product line, go to EMC Online Support (https://support.emc.com).

EMC Corporation Hopkinton, Massachusetts 01748-9103 1-508-435-1000 In North America 1-866-464-7381 www.EMC.com


CONTENTS

Chapter 1    Introduction 7
    Overview 8
    References 10
    Terminology 10

Chapter 2    Installation and Licensing 13
    Installation overview 14
    Installation and operating requirements 15
    Installing vRealize Operations Manager 18
    Installing the EMC Adapter and dashboards 19
    Installing Navisphere CLI 20
    Adapter instances 21
        Adding an EMC Adapter instance for vCenter 21
        Configuring the vCenter Adapter 23
        Adding an EMC Adapter instance for SCOM 24
        Adding an EMC Adapter instance for OpenStack 24
        Adding EMC Adapter instances for EMC products 26
        Editing EMC Adapter instances 28

Chapter 3    EMC Storage Analytics Dashboards 31
    Topology mapping 32
        Avamar topology 32
        Isilon topology 33
        RecoverPoint for Virtual Machines topology 34
        ScaleIO topology 35
        Unity topology 36
        UnityVSA topology 37
        VMAX3 and VMAX All Flash topology 38
        VMAX VVol topology 39
        VNX Block topology 40
        VNX File/eNAS topology 41
        VNXe topology 42
        VPLEX Local topology 43
        VPLEX Metro topology 44
        XtremIO topology 45
    EMC dashboards 46
        Storage Topology dashboard 46
        Storage Metrics dashboard 47
        EMC overview dashboards 47
        VPLEX Communication dashboard 56
        VPLEX Performance dashboard 57
        XtremIO Performance dashboard 58
        RecoverPoint for VMs Performance dashboard 59
        Topology dashboards 60
        Metrics dashboards 60
        Top-N dashboards 61
        Dashboard XChange 62

Chapter 4    Resource Kinds and Metrics 65
    Avamar metrics 66
    Isilon metrics 71
    ScaleIO metrics 74
    RecoverPoint for Virtual Machines metrics 76
    Unity and UnityVSA metrics 79
    VMAX metrics 84
    VNX Block metrics 87
    VNX File/eNAS metrics 91
    VNXe metrics 95
    VPLEX metrics 100
    XtremIO metrics 110

Chapter 5    Views and Reports 115
    Avamar views and reports 116
    eNAS views and reports 117
    Isilon views and reports 119
    ScaleIO views and reports 121
    VMAX views and reports 122
    VNX, VNXe, and Unity/UnityVSA views and reports 124
    XtremIO views and reports 134

Chapter 6    Remedial Actions on EMC Storage Systems 137
    Remedial actions overview 138
    Changing the service level objective (SLO) for a VMAX3 storage group 138
    Changing the tier policy for a File System 138
    Changing the tier policy for a LUN 139
    Extending file system capacity 139
    Enabling performance statistics for VNX Block 139
    Enabling FAST Cache on Unity and VNXe storage pools 140
    Enabling FAST Cache on a VNX Block storage pool 140
    Expanding LUN capacity 140
    Migrating a VNX LUN to another storage pool 140
    Rebooting a Data Mover on VNX storage 141
    Rebooting a VNX storage processor 141
    Extending volumes on EMC XtremIO storage systems 141
        Configuring an extend volume policy for XtremIO 142
        Extending XtremIO volumes manually 142

Chapter 7    Troubleshooting 143
    Badges for monitoring resources 144
    Navigating inventory trees 144
    Symptoms, alerts, and recommendations for EMC Adapter instances 145
    Event correlation 146
        Viewing all alerts 146
        Enabling XtremIO alerts 147
        Finding resource alerts 147
        Locating alerts that affect the health score for a resource 147
    Launching Unisphere 148
    Installation logs 148
    Log Insight overview 148
        Log Insight configuration 148
        Sending logs to Log Insight 149
    Error handling and event logging 151
        Viewing error logs 151
        Creating and downloading a support bundle 151
    Log file sizes and rollover counts 152
        Finding adapter instance IDs 152
        Configuring log file sizes and rollover counts 152
        Activating configuration changes 153
        Verifying configuration changes 153
    Editing the Collection Interval for a resource 154
    Configuring the thread count for an adapter instance 154
    Connecting to vRealize Operations Manager by using SSH 155
    Frequently asked questions 155

Appendix A    List of alerts 159
    Avamar alerts 160
    Isilon alerts 161
    RecoverPoint alerts 162
    ScaleIO alerts 163
    Unity, UnityVSA, and VNXe alerts 166
    VMAX alerts 168
    VNX Block alerts 168
    VNX Block notifications 173
    VNX File alerts 174
    VNX File notifications 177
    VPLEX alerts 181
    XtremIO alerts 184

CHAPTER 1

Introduction

This chapter contains the following topics:

l Overview................................................................................................................. 8
l References............................................................................................................ 10
l Terminology.......................................................................................................... 10


Overview

VMware vRealize Operations Manager is a software product that collects performance and capacity data from monitored software and hardware resources. It provides users with real-time information about potential problems in the enterprise.

vRealize Operations Manager presents data and analysis information in several ways:

l Through alerts that warn of potential or occurring problems

l In configurable dashboards and predefined pages that show commonly needed information

l In predefined reports

EMC Storage Analytics links vRealize Operations Manager with the EMC Adapter. The EMC Adapter is bundled with a connector that enables vRealize Operations Manager to collect performance metrics. The adapter is installed with the vRealize Operations Manager user interface.

The collector types are shown in EMC Adapter architecture on page 9.

EMC Storage Analytics uses the power of existing vCenter features to aggregate data from multiple sources and process the data with proprietary analytic algorithms.

EMC Storage Analytics complies with VMware management pack certification requirements and has received the VMware Ready certification.

This version of EMC Storage Analytics supports the following EMC products:

l EMC Avamar

l EMC Isilon

l EMC RecoverPoint for Virtual Machines

l EMC ScaleIO

l EMC Unity

l EMC UnityVSA

l EMC VMAX All Flash

l EMC VMAX3

l EMC VMAX eNAS

l EMC VNX

l EMC VNXe3200

l EMC VPLEX

l EMC XtremIO

Note

Unity dashboards, views, and reports are compatible with VNXe. If you upgrade from ESA 3.5, VNXe dashboards are visible. For new installations, use the Unity versions of these items for VNXe.


Figure 1 EMC Adapter architecture

Note

Refer to the EMC Simple Support Matrix for a list of supported product models.


References

This topic provides a list of documentation for reference.

VMware vRealize Operations Manager documentation

l vRealize Operations Manager Release Notes contains descriptions of known issues and workarounds.

l vRealize Operations Manager vApp Deployment and Configuration Guide explains installation, deployment, and management of vRealize Operations Manager.

l vRealize Operations Manager User Guide explains basic features and use of vRealize Operations Manager.

l vRealize Operations Manager Customization and Administration Guide describes how to configure and manage the vRealize Operations Manager custom interface.

VMware documentation is available at http://www.vmware.com/support/pubs.

EMC documentation

l EMC Storage Analytics Release Notes provides a list of the latest supported features, licensing information, and known issues.

l EMC Storage Analytics Installation and User Guide (this document) provides installation and licensing instructions, a list of resource kinds and their metrics, and information about storage topologies and dashboards.

Terminology

This topic contains a list of commonly used terms.

adapter

A vRealize Operations Manager component that collects performance metrics from an external source like a vCenter or storage system. Third-party adapters such as the EMC Adapter are installed on the vRealize Operations Manager server to enable creation of adapter instances within vRealize Operations Manager.

adapter instance

A specific external source of performance metrics, such as a specific storage system. An adapter instance resource is an instance of an adapter that has a one-to-one relationship with an external source of data, such as a VNX storage system.

dashboard

A tab on the home page of the vRealize Operations Manager GUI. vRealize Operations Manager ships with default dashboards. Dashboards are also fully customizable by the end user.

health rating

An overview of the current state of any resource, from an individual operation to an entire enterprise. vRealize Operations Manager checks internal metrics for the resource and uses its proprietary analytics formulas to calculate an overall health score on a scale of 0 to 100.

icon

A pictorial element in a widget that enables a user to perform a specific function. Hovering over an icon displays a tooltip that describes the function.


metric

A category of data collected for a resource. For example, the number of read operations per second is one of the metrics collected for each LUN resource.

resource

Any entity in the environment for which vRealize Operations Manager can collect data. For example, LUN 27 is a resource.

resource kind

A general type of a resource, such as LUN or DISK. The resource kind dictates the type of metrics collected.

widget

An area of the EMC Storage Analytics graphical user interface (GUI) that displays metrics-related information. A user can customize widgets to their own environments.


CHAPTER 2

Installation and Licensing

This chapter contains the following topics:

l Installation overview............................................................................................. 14
l Installation and operating requirements................................................................15
l Installing vRealize Operations Manager.................................................................18
l Installing the EMC Adapter and dashboards.......................................................... 19
l Installing Navisphere CLI....................................................................................... 20
l Adapter instances................................................................................................. 21


Installation overview

Learn about installation options and license requirements.

EMC Storage Analytics consists of the following installation packages:

l vRealize Operations Manager: Provides a view of all resources managed by vCenter, including EMC storage arrays

l EMC Adapter: Enables the collection of metrics from EMC resources. The adapter installation includes instructions for:

n Installing the EMC Adapter and dashboards

n Adding one or more EMC Adapter instances and applying license keys from EMC

Installation and upgrade options

Review the Installation and operating requirements on page 15, and then refer to the instructions for one of the following options to install or upgrade your system.

Option 1: Install a supported version of VMware vRealize Operations Manager and the latest release of EMC Storage Analytics.

l Installing vRealize Operations Manager on page 18

l Installing the EMC Adapter and dashboards on page 19

l Adding EMC Adapter instances for EMC products on page 26

Option 2: Install EMC Storage Analytics on a system running a supported version of VMware vRealize Operations Manager.

l Installing the EMC Adapter and dashboards on page 19

l Adding EMC Adapter instances for EMC products on page 26

Option 3: Upgrade from a previous version to the latest release of EMC Storage Analytics on a system running a supported version of VMware vRealize Operations Manager.

1. Install a new instance of vRealize Operations Manager. See Installing vRealize Operations Manager on page 18.

2. Install EMC Storage Analytics on vRealize Operations Manager. See Installing the EMC Adapter and dashboards on page 19 and Adding EMC Adapter instances for EMC products on page 26.

3. If you are using vCenter Operations Manager 5.8.x, migrate the data to the new vRealize Operations Manager system.

Note

Refer to the vRealize Operations Manager vApp Deployment and Configuration Guide for information about migration-based upgrades to vRealize Operations Manager.

License requirements

The following table lists the licensing requirements.


Note

EMC products are available for a 90-day trial from the installation date. To install software for trial, leave the license field blank.

Table 1 Required software licenses

vRealize Operations Manager (Advanced or Enterprise): Requires a VMware license for vRealize Operations Manager (Advanced or Enterprise).

EMC Storage Analytics: Requires an EMC Storage Analytics electronic or physical license. If you purchase an electronic license for EMC Storage Analytics, you will receive a letter that directs you to an electronic licensing system to activate the software to which you are entitled. Otherwise, you will receive a physical license key.

EMC storage arrays: Require an EMC license for your storage array.

A 90-day trial for all supported products is available with EMC Storage Analytics. The 90-day trial provides the same features as licensed products, but after 90 days of usage, the adapter stops collecting data. You can add a license at any time.

Installation and operating requirements

Before installing the EMC Adapter, verify that these installation and operating requirements are satisfied.

Note

The ESA space on the EMC Community Network provides more information about installing and configuring EMC Storage Analytics.

EMC Adapter port assignments

Table 2 Port assignments

Connection type Data source Protocol Default port

Avamar MCSDK API HTTP SOAP 9443

Isilon REST API HTTPS 8080

Microsoft SCOM SQL TCP 1433

OpenStack OpenStack Endpoint HTTP 5000

RecoverPoint for Virtual Machines REST API HTTPS 443

ScaleIO REST API HTTPS 443

Unity/UnityVSA REST API HTTPS 443

VMAX REST API HTTPS 8443

VMware vSphere vCenter Web Services SDK HTTPS 443


VNX Block Naviseccli TCP/SSL 443 or 2163

VNX File/eNAS Control Station CLI SSH 22

VNXe REST API HTTPS 443

VPLEX REST API (topology) HTTPS 443

VPLEX VPlexcli (metrics) SSH 22

XtremIO REST API HTTPS 443
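
Before adding adapter instances, it can help to confirm that the vRealize Operations Manager node can reach each data source on the default port listed above. The commands below are only a sketch: the host names are placeholders, the availability of curl and nc on the appliance is an assumption, and the ports should be changed if your environment uses non-default values.

  # HTTPS-based sources (Unity/UnityVSA, VNXe, ScaleIO, XtremIO, RecoverPoint for VMs) default to port 443
  curl -kIs --connect-timeout 10 https://unity01.example.com:443/ >/dev/null && echo "port 443 reachable"

  # SSH-based sources (VNX File/eNAS Control Station, VPLEX metrics) use port 22
  nc -zvw5 cs0.example.com 22

  # Isilon administration interface (8080) and Unisphere for VMAX (8443)
  nc -zvw5 isilon01.example.com 8080
  nc -zvw5 unisphere-vmax.example.com 8443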

Supported vRealize Operations Manager versions

vRealize Operations Manager Advanced or Enterprise editions from VMware

Note

EMC Storage Analytics does not support vRealize Operations Manager Foundation and Standard editions.

Deploy the vApp for vRealize Operations Manager before installing the EMC Adapter. Check the vRealize Operations Manager vApp Deployment and Configuration Guide at http://www.vmware.com/support/pubs for system requirements pertaining to your version of vRealize Operations Manager.

Supported product models

See the EMC Simple Support Matrix for a complete list of supported product models.

Supported web browser

See the latest vRealize Operations Manager release notes for a list of supported browsers.

Isilon systems

EMC Storage Analytics uses REST APIs to interact with Isilon systems. Specify the Isilon Storage Administration web interface IP address (and port if you are not using the default port, 8080) to configure the Isilon collector.

ScaleIO systems

EMC Storage Analytics uses REST APIs to interact with ScaleIO systems. Specify the IP address and port of the ScaleIO Gateway to configure the ScaleIO collector.

Unity and UnityVSA systems

The EMC Adapter uses the REST API to collect configuration and metrics from Unity and UnityVSA systems. To configure a Unity or UnityVSA adapter instance, specify the Unisphere Management IP address and a user credential that has the array Administrator role. Unity and UnityVSA adapter instances do not require that you input a license in the configuration wizard. The ESA license for the Unity and UnityVSA collector is tracked on the array side. In Unisphere, select Settings > Software and Licenses > License Information to ensure that the EMC Storage Analytics license is valid and current.
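
As an optional sanity check of the Unisphere Management IP address before you configure the adapter instance, you can query the Unity REST API directly with curl. This is a sketch only: the management address is a placeholder, and it assumes your Unity OE exposes the basicSystemInfo resource, which normally answers without authentication.

  # Returns the array name, model, and software version if the REST API is reachable
  curl -k -H "X-EMC-REST-CLIENT: true" "https://<unisphere_mgmt_ip>/api/types/basicSystemInfo/instances?fields=name,model,softwareVersion"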

EMC Unisphere for VMAX

The EMC Adapter uses the Unisphere for VMAX REST API. Unisphere must be available on the network and accessible through a port specified at the end of the IP address (for example, 10.10.10.10:8443). In addition, all VMAX systems must be registered for performance data collection to work with ESA. For data collection only, the Unisphere user credentials for ESA must have PERF_MONITOR permissions and, for the ability to use actions, the user must have STORAGE_ADMIN permissions.
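
A quick way to confirm that Unisphere for VMAX is listening on the port you append to the Management IP (8443 in the example above) is to request the REST API version. Treat this as a sketch: the credentials are placeholders, and the path shown is the one commonly used by Unisphere for VMAX 8.x releases and may differ in yours.

  # Returns the Unisphere for VMAX REST API version if the port and credentials are correct
  curl -k -u <unisphere_user>:<password> https://10.10.10.10:8443/univmax/restapi/system/version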

VNX Block systems

The EMC Adapter uses naviseccli to collect metrics from VNX Block systems. It is bundled into the EMC Adapter install file and is automatically installed along with the adapter. Storage processors require IP addresses that are reachable from the vRealize Operations Manager server. Bidirectional traffic for this connection flows through port 443 (HTTPS). Statistics logging must be enabled on each storage processor (SP) for metric collection (System > System Properties > Statistics Logging in Unisphere).
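
Because metric collection for VNX Block goes through naviseccli, you can verify reachability and credentials from the node that will collect the metrics. The command below is a sketch; the storage processor address and account are placeholders, and -Scope 0 selects the global scope that VNX Block access requires.

  # Basic connectivity and credential check against one storage processor
  naviseccli -h <sp_a_ip> -User <global_user> -Password <password> -Scope 0 getagent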

VNX File/eNAS systems

CLI commands issued on the Control Station direct the EMC Adapter to collect metrics from VNX File and eNAS systems. The Control Station requires an IP address that is reachable from the vRealize Operations Manager server. Bidirectional Ethernet traffic flows through port 22 using Secure Shell (SSH). If you are using the EMC VNX nas_stig script for security (/nas/tools/nas_stig), do not use the root account in the adapter credentials. Setting nas_stig to On limits direct access for root accounts, preventing the adapter instance from collecting metrics for VNX File and eNAS.
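
You can confirm that the Control Station accepts SSH logins from a non-root account before adding the adapter instance. The example below is a sketch; the Control Station address and the nasadmin account are placeholders, and it assumes the account is allowed to run the standard nas_server command.

  # Log in over port 22 with a non-root account and list the Data Movers
  ssh nasadmin@<control_station_ip> nas_server -list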

VNXe systems

The EMC Adapter uses the REST API to collect configuration and metrics from VNXe systems. To configure a VNXe adapter instance, specify the Unisphere Management IP address and a user credential that has the array Administrator role.

VPLEX Adapter instance

Only one EMC Adapter instance is required for VPLEX Local or VPLEX Metro. You can monitor both clusters in a VPLEX Metro by adding a single EMC Adapter instance for one of the clusters. Adding an EMC Adapter instance for each cluster in a VPLEX Metro system introduces unnecessary stress on the system.

VPLEX data migrations

EMC VPLEX systems are commonly used to perform non-disruptive data migrations. When monitoring a VPLEX system with EMC Storage Analytics, a primary function is to analyze trends on the storage system. When a back-end storage system is swapped on a VPLEX system, the performance and trends for the entire VPLEX storage environment are affected. Therefore, EMC recommends that you start a new EMC Storage Analytics baseline for the VPLEX system after data migration. To start a new baseline:

1. Before you begin data migration, delete all resources associated with the existing EMC Storage Analytics VPLEX adapter instance.

2. Remove the existing EMC Storage Analytics VPLEX adapter instance by using the Manage Adapter Instances dialog.

3. Perform the data migration.

4. Create a new EMC Storage Analytics VPLEX adapter instance to monitor the updated VPLEX system.

Optionally, you can stop the VPLEX adapter instance collections during the migration cycle. When collections are restarted after the migration, orphaned VPLEX resources will appear in EMC Storage Analytics, but those resources will be unavailable. Remove the resources manually.

XtremIO

EMC Storage Analytics uses REST APIs to interact with XtremIO arrays. When adding an EMC Adapter instance for XtremIO, specify the IP address of the XtremIO Management Server (XMS) and the serial number of the XtremIO cluster to monitor.
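
If you are unsure which XMS IP address and cluster serial number to enter, you can list the clusters that the XMS manages through its REST API. This is a sketch under the assumption that your XMS exposes the v2 JSON API; the address and credentials are placeholders, and the attribute that carries the serial number can vary by XMS release.

  # Lists the clusters managed by this XMS; follow the returned links for cluster details such as the serial number
  curl -k -u admin:<password> https://<xms_ip>/api/json/v2/types/clusters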

If enhanced performance is required, administrators can configure the thread count for the XtremIO adapter instance. See Configuring the thread count for an adapter instance on page 154.

Minimum OE requirements

See the EMC Simple Support Matrix for a complete list of minimum Operating Environment (OE) requirements for supported product models.

User accounts

To create an EMC Adapter instance for a storage array, you must have a user account that allows you to connect to the storage array or EMC SMI-S Provider. For example, to add an EMC Adapter for a VNX array, use a global account with an operator or administrator role (a local account will not work).

To create an EMC Adapter instance for vCenter (where Adapter Kind = EMC Adapter and Connection Type = VMware vSphere), you must have an account that allows you access to vCenter and the objects it monitors. In this case, vCenter enforces access credentials (not the EMC Adapter). To create an EMC Adapter instance for vCenter, use, at minimum, an account assigned to the Read-Only role at the root of vCenter, and enable propagation of permissions to descendant objects. Depending on the size of the vCenter, wait approximately 30 seconds before testing the EMC Adapter. More information on user accounts and access rights is available in the vSphere API/SDK documentation (see information about authentication and authorization for ESXi and vCenter Server). Ensure that the adapter points to the vCenter server that is monitored by vRealize Operations Manager.

DNS configuration

To use the EMC Adapter, the vRealize Operations Manager vApp requires network connectivity to the storage systems to be monitored. DNS must be correctly configured on the vRealize Operations Manager server to enable hostname resolution by the EMC Adapter.
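
A simple way to confirm hostname resolution from the vRealize Operations Manager server is to test it from the appliance shell; the hostname below is a placeholder for one of your monitored systems.

  # Verify that the array's hostname resolves from the vRealize Operations Manager server
  nslookup unity01.example.com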

Time zone and synchronization settings

Ensure time synchronization for all EMC Storage Analytics resources by using Network Time Protocol (NTP). Also, set correct time zones for EMC Storage Analytics resources. Failure to observe these practices may affect the collection of performance metrics and topology updates.
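
On the vRealize Operations Manager appliance, the time zone and NTP state can be checked from the shell. This is a sketch and assumes the appliance provides the standard date and ntpq utilities.

  # Show the current system time and time zone
  date
  # Show NTP peers; an asterisk marks the peer the appliance is synchronized with
  ntpq -p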

Installing vRealize Operations Manager

Learn about prerequisites and where to find installation instructions.

Before you begin

l Obtain the OVA installation package for vRealize Operations Manager from VMware or download it from EMC Online Support.

l Obtain a copy of the vRealize Operations Manager vApp Deployment and Configuration Guide at http://www.vmware.com/support/pubs.


Refer to the vRealize Operations Manager vApp Deployment and Configuration Guide to deploy the vApp for vRealize Operations Manager.

Procedure

1. Review the system requirements.

2. Follow the instructions to install vRealize Operations Manager and use the VMware license that you received when prompted to assign the vRealize Operations Manager license.

3. Conclude the installation by following instructions to verify the vRealize Operations Manager installation.

Installing the EMC Adapter and dashboards

Learn how to install ESA.

Before you begin

Obtain the PAK file for the EMC Adapter.

Note

If using Internet Explorer, the installation file downloads as a ZIP file but functions the same way as the PAK file.

WARNING

When you upgrade EMC Storage Analytics, the standard EMC dashboards are overwritten. To customize a standard EMC dashboard, clone it, rename it, and then customize the clone.

To install the EMC Adapter and dashboards:

Procedure

1. Save the PAK file in a temporary folder.

2. Start the vRealize Operations Manager administrative user interface in your web browser and log in as an administrator.

For example, enter https://<vROps_IP_address>.

3. Select Administration > Solutions and then click the Add (plus) sign to upload the PAK file.

A message similar to this one is displayed in the Add Solution window:

The .pak file has been uploaded and is ready to install.

PAK file details:
Name: EMC Adapter
Description: Manages EMC systems such as VNX, VMAX...
Version: 4.1

4. Click Next, read the license agreement, and select the check box to indicate agreement. Click Next again.

Installation begins. Depending on your system's performance, the installation can take from 5 to 15 minutes to complete.

5. When the installation completes, click the Finish button.

The EMC Adapter appears in the list of installed solutions.


Installing Navisphere CLI

For VNX Block systems, the Navisphere CLI (naviseccli) must be installed on the data node that you assign to collect metrics for VNX. The naviseccli-bin-xxx.rpm file is included in the ESA package.

Note

For vRealize Operations Manager 6.1 or later, the Navisphere CLI is automatically installed on all Data Nodes that are available during the initial installation. If you add more nodes to the vRealize Operations Manager cluster after ESA is installed, or if you are using vRealize Operations Manager 6.0 or earlier, use the following procedure to manually install the Navisphere CLI.

Install the CLI before you add the EMC Adapter instance to vRealize Operations Manager. If the CLI is not installed, errors could occur in scaled-out vCenter environments that consist of a Master Node and multiple Data Nodes. The CLI is automatically installed on the Master Node. However, because the Data Node collects metrics, the EMC Adapter might report errors if naviseccli is not installed.

Procedure

1. Enable Secure Shell (SSH) for both master and data nodes.

Refer to Connecting to vRealize Operations Manager by using SSH on page 155 for instructions.

2. Extract the pak file by using decompression software such as WinZip.

3. Copy the naviseccli-bin-<version>.rpm file (for example, naviseccli-bin-7.33.1.0.33-x64.rpm) to a target directory on the data node. If you are using Windows, you can use WinSCP for the copy operation.

4. Establish a secure connection to the data node and change to the target directory.

5. Run this command: rpm -i naviseccli-bin-<version>.rpm, where <version> is the appropriate version of the naviseccli utility for the node.

6. Repeat this procedure to install naviseccli in other nodes, as required.
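
Run from a Linux workstation, the procedure above might look like the following session; the data node address and credentials are placeholders, and the rpm file name should match the one shipped in your ESA package.

  # Copy the package to the data node (on Windows, WinSCP can be used instead)
  scp naviseccli-bin-7.33.1.0.33-x64.rpm root@<data_node_ip>:/tmp

  # Install it on the data node and confirm that the package is registered
  ssh root@<data_node_ip>
  cd /tmp
  rpm -i naviseccli-bin-7.33.1.0.33-x64.rpm
  rpm -qa | grep naviseccli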


Adapter instances

The vRealize Operations Manager requires an adapter instance for each resource to be monitored. The instance specifies the type of adapter to use and the information needed to identify and access the resource.

With EMC Storage Analytics, the vRealize Operations Manager uses EMC Adapter instances to identify and access the resources. Supported adapter instances include:

l vCenter (prerequisite for other adapter instances)

l Avamar

l eNAS

l Isilon

l OpenStack

l RecoverPoint for Virtual Machines

l ScaleIO

l System Center Operations Manager (SCOM)

l Unity

l UnityVSA

l VMAX

l VNX File

l VNX Block

l VNXe

l VPLEX

l XtremIO

See the EMC Simple Support Matrix for a list of the supported models for each adapter instance and related Operating Environments (OEs).

If the vCenter adapter instance is not configured, other adapter instances will function normally but will not display visible connections between the VMware objects and the array objects.

Note

The ESA space on the EMC Community Network provides more information about installing and configuring EMC Storage Analytics.

After adapter instances are created, the vRealize Operations Manager Collector requires several minutes to collect statistics, depending on the size of the storage array. Large storage array configurations require up to 45 minutes to collect metrics and resources and update dashboards. This is a one-time event; future statistical collections run quickly.

Adding an EMC Adapter instance for vCenter

To view health trees for the storage environment from the virtual environment, you must install an EMC Adapter instance for vCenter.

All storage system adapter instances require the EMC Adapter instance for vCenter, which you must add first. A separate instance is required for each vCenter monitored by the vRealize Operations Manager environment.


Procedure

1. In a web browser, type https://<vROps_IP_address>/vcops-web-ent to start the vRealize Operations Manager custom user interface, and log in as an administrator.

2. Select Administration > Solutions > EMC Adapter, and then click the Configure icon.

The Manage Solution dialog box appears.

3. Click the Add icon to add a new adapter instance.

4. Configure the following Adapter Settings and Basic Settings:

Option Description

Display Name Any descriptive name, for example: My vCenter

Description Optional description

Connection Type VMware vSphere

License (optional) Not applicable (must be blank) for EMC Adapter instance for vCenter

Management IP IP address of the vCenter server

Array ID (optional) Not applicable (must be blank) for VMware vSphere connection type

5. In the Credential field, select any previously defined credentials for this storage system; otherwise, click the Add New icon (+) and configure these settings:

Option Description

Credential name Any descriptive name, for example: My VMware Credentials

Username Username that EMC Storage Analytics uses to connect to the VMware vRealize system

Note

If a domain user is used, the format for the username is DOMAIN\USERNAME.

Password Password for the EMC Storage Analytics username

6. Click OK.

7. Configure the Advanced Settings, if they are required:

Collector vRealize Operations Manager Collector

Log Level Configure log levels for each adapter instance. The levels for logging information are ERROR, WARN, INFO, DEBUG, and TRACE.

The Manage Solution dialog box appears.

8. To test the adapter instance, click Test Connection.

If the connection is correctly configured, a confirmation box appears.

9. Click OK.


The new adapter instance polls for data every 5 minutes by default. At every interval, the adapter instance will collect information about the VMware vSphere datastore and virtual machines with Raw Device Mapping (RDM). Consumers of the registered VMware service can access the mapping information.

Note

To edit the polling interval, select Administration > Environment Overview > EMC Adapter Instance. Select the EMC Adapter instance you want to edit, and click the Edit Object icon.

Configuring the vCenter Adapter

After the vCenter Adapter is installed, use the following procedure to configure it manually.

Procedure

1. Start the vRealize Operations Manager custom user interface and log in as administrator.

In a web browser, type https://vROps_ip_address/vcops-web-ent and type the password.

2. Select Administration > Solutions.

3. In the solutions list, select VMware vSphere > vCenter Adapter, and click the Configure icon.

The Manage Solution dialog box appears.

4. Click the Add icon.

5. In the Manage Solution dialog box, provide values for the following parameters:

l Under Adapter Settings, type a name and optional description.

l Under Basic Settings:

n For vCenter Server, type the vCenter IP address.

n For Credential, either select a previously defined credential or click the Add icon to add a new credential. For a new credential, in the Manage Credential dialog box, type a descriptive name and the username and password for the vRealize system. If you use a domain username, the format is DOMAIN\USERNAME. Optionally, you can edit the credential using the Manage Credential dialog box. Click OK to close the dialog box.

l Optionally, configure the Advanced Settings:

n Collector: The vRealize Operations Manager Collector

n Auto Discovery: True or False

n Process Change Events: True or False

n Registration user: The registration username used to collect data from vCenter Server.

n Registration password: The registration password used to collect data from vCenter Server

6. Click Test Connection.

7. Click OK in the confirmation dialog box.


8. Click Save Settings to save the adapter.

9. Click Yes to force the registration.

10. Click Next to go through a list of questions to create a new default policy if required.

Adding an EMC Adapter instance for SCOM

The SCOM adapter collects the resource and topology information for the computers and virtual machines in a Microsoft Hyper-V environment. To view relationships between VNX resources and resources that are collected by the SCOM adapter, configure an EMC Adapter instance for SCOM.

Before you begin

l Install Hyper-V Management Pack Extensions 2012/2012 R2 in SCOM. The installation binary is available at https://hypervmpe2012.codeplex.com.

l Install the Management Pack for EMC storage systems (EMC Adapter) on vRealize Operations Manager.

l Install the Management Pack for SCOM on vRealize Operations Manager.

l Download the Hyper-V-enabled VNX Topology Dashboard from ESA Dashboard Exchange. Import the dashboard into vRealize Operations Manager.

l Add your SCOM adapter instance in VMware SCOM MP.

Procedure

1. Open the EMC Adapter configuration dialog box.

2. For Management IP, type the IP address in this format: <SQL_Server_IP_address>:<port>/<database_name>.

For example, 10.0.0.1:1433/OperationsManager

3. For Collector Type, type Microsoft SCOM.

4. For Credential, type the username and password to connect to SQL Server.

For Windows authentication, provide the domain name, for example, <domain>\<username>.

5. Click Test Connection. If a "Driver not found" error appears, try again.

6. Verify that the connection is successful and click Save.

The SCOM adapter instance is added.

Adding an EMC Adapter instance for OpenStack

The OpenStack adapter collects the compute, storage, and network infrastructure information in the OpenStack environment. To view relationships between VNX resources and OpenStack resources, configure an EMC Adapter instance for OpenStack.

Before you begin

l Install the Management Pack for EMC storage systems (EMC Adapter) on vRealize Operations Manager.

l Install the vRealize Operations Manager pack for OpenStack.

l Configure a VMware OpenStack Adapter instance.


Procedure

1. In the configuration dialog box for the EMC Adapter, provide the following information:

l Management IP: Type the URL of the OpenStack Endpoint in this format: [protocol://][IP_address][:port]. Protocol can be http or https. The protocol defaults to http if omitted. The port defaults to 5000 if omitted. For example, 192.168.1.2 defaults to http://192.168.1.2:5000.

l Connection Type: Select OpenStack.

l Credential: Type the user name and password used to connect to the OpenStack Endpoint. The username format is tenant:username. Tenant defaults to admin if omitted.

2. Click Test Connection.

3. If a Review and Accept Certificate dialog box appears, review and click OK to accept the certificate.

4. Verify that the connection test is successful and click Save.

Results

The OpenStack adapter instance is added.
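
As a quick pre-check of the endpoint URL format described in step 1, you can request the OpenStack identity (Keystone) endpoint directly; it normally responds with a JSON list of supported API versions. The address below reuses the example from the procedure and assumes HTTP on the default port 5000.

  # The identity endpoint should return its supported API versions
  curl -s http://192.168.1.2:5000/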


Adding EMC Adapter instances for EMC products

Each EMC product requires an adapter instance.

Before you begin

l Install the EMC Adapter for vCenter.

l Obtain the adapter license key (if required) for your EMC product.

All EMC product adapter instances require the EMC Adapter instance for vCenter. Add the EMC Adapter instance for vCenter first. Then add the adapter instances for each EMC product. Adapter instances are licensed per product. Observe these exceptions and requirements:

l When adding an eNAS adapter instance, a license is not required.

l When adding a Unity adapter instance, the license is automatically checked through the array.

l A VNX Unified array can use the same license for VNX File and VNX Block.

l When adding a VNX File adapter instance, a license is required for the VNX File system.

l For VNX Block, test the connection to both storage processors so that both certificates are accepted; this avoids a certificate error if the main storage processor is down.

l Global Scope is required for VNX Block access.

l For VPLEX Metro, add an adapter instance for only one of the clusters (either one); this action enables you to monitor both clusters with a single adapter instance.

l For RecoverPoint for Virtual Machines, get the RecoverPoint model that is required for the license.

Procedure

1. In a web browser, type https://<vROps_IP_address>/vcops-web-ent to start the vRealize Operations Manager custom user interface and log in as an administrator.

2. Select Administration > Solutions > EMC Adapter and click the Configure icon.

The Manage Solution dialog box appears.

3. Click the Add icon to add a new adapter instance.

4. Configure the following Adapter Settings and Basic Settings:

Display Name A descriptive name, such as My Storage System or the array ID

Description Optional description with more details

License (optional)

License key (if required) for the array that you want to monitor (The license key for the adapter instance appears on the Right to Use Certificate that is delivered to you or through electronic licensing, depending on your order.)

Note

If the license field is left blank, the adapter instance will run under a 90-day trial. When the 90 days expires, ESA will stop collecting metrics until a valid license is added to the adapter instance.


5. Configure these settings based on the adapter instance for your product:

The Connection Type, Management IP, and Array ID (optional) fields depend on the product:

l Avamar: Connection Type Avamar. For Management IP, use the IP address of the Avamar server where MCS is running. Array ID: not applicable.

l eNAS: Connection Type eNAS. For Management IP, use the IP address of the primary Control Station. Array ID: not applicable.

l Isilon arrays: Connection Type Isilon. For Management IP, if a SmartConnect Zone is configured, use the SmartConnect zone name or IP address; otherwise, use any node IP address.

l RecoverPoint for Virtual Machines: Connection Type RecoverPoint for Virtual Machines. For Management IP, use the IP address of the virtual RecoverPoint appliance. Array ID: not applicable.

l ScaleIO arrays: Connection Type ScaleIO. For Management IP, use the IP address and port of the ScaleIO Gateway. Array ID: not applicable.

l Unity: Connection Type Unity. For Management IP, use the IP address of the management server. Array ID: not applicable.

l UnityVSA: Connection Type UnityVSA. For Management IP, use the IP address of the management server. Array ID: not applicable.

l VMAX3 and VMAX All Flash: Connection Type VMAX. For Management IP, use the IPv4 or IPv6 address and the port number of the configured EMC Unisphere for VMAX. Array ID: required.

l VNX Block arrays: Connection Type VNX Block. For Management IP, use the IP address of one Storage Processor (SP) in a single array. Do not add an adapter instance for each SP. Array ID: not applicable.

l VNX File and Unified models, VG2 and VG8 gateway models: Connection Type VNX File. For Management IP, use the IP address of the primary Control Station. Array ID: not applicable.

l VNXe3200: Connection Type VNXe. For Management IP, use the IP address of the management server. Array ID: not applicable.

l VPLEX Local or VPLEX Metro: Connection Type VPLEX. For Management IP, use the IP address of the management server. For a Metro cluster, use the IP address of either management server, but not both. Array ID: not applicable.

l XtremIO: Connection Type XtremIO. For Management IP, use the IP address of the XMS that manages the XtremIO target cluster. Array ID: use the serial number of the XtremIO target cluster.

6. In the Credential field, select any previously defined credentials for this product; otherwise, click the Add New icon and configure these settings:

Field Value to enter

Credential name

A name for the credentials information.

Username Username that EMC Storage Analytics uses to connect to the EMC product.

l For Avamar, use the credentials of the MCUser account, or another Avamar Administrator user.

l For Isilon, use the credentials of the OneFS storage administration server.

l For ScaleIO, use the credentials of the ScaleIO Gateway.


l For RecoverPoint for Virtual Machines, use the credentials of the virtual RecoverPoint appliance.

l For Unity and UnityVSA, use the credentials of the management server.

l For VMAX, use the Unisphere user credentials. For data collection only, the Unisphere user credentials for ESA must have PERF_MONITOR permissions and, for the ability to use actions, the user must have STORAGE_ADMIN permissions.

l For VNX File or eNAS, use the credentials of the Control Station.

l For VNX Block, use the credentials of the Storage Processor.

l For VNXe, use the credentials of the management server.

l For VPLEX, use the credentials of the management server (for example, the service user). The default credentials are service/Mi@Dim7T.

l For XtremIO, use the XMS username.

Password EMC product management password.

7. Click OK.

The Manage Solution dialog reappears.

8. If required, configure the following Advanced Settings:

Collector Automatically select collector

Log Level Configure log levels for each adapter instance. The levels for logging information are ERROR, WARN, INFO, DEBUG, and TRACE.

The Manage Solution dialog box appears.

9. Click Test Connection to validate the values you entered.

If the adapter instance is correctly configured, a confirmation box appears.

Note

Testing an adapter instance validates the values you entered. Failure to do this step causes the adapter instance to change to the (red) warning state if you enter invalid values and do not validate them.

10. To finish adding the adapter instance, click OK.

Editing EMC Adapter instances

You can edit installed EMC Adapter instances.

Before you begin

l Install the EMC Adapter.

l Configure the EMC Adapter instance for your EMC product.


l Obtain an adapter license key for your product.

Adapter instances are licensed per product. For details, refer to License requirements on page 14.

Procedure

1. Start the vRealize Operations Manager custom user interface and log in as administrator.

For example, in a web browser, type https://<vROps_IP_address>/vcops-web-ent.

2. Select Administration > Inventory Explorer > EMC Adapter Instance.

3. Select the EMC adapter you want to edit and click the Edit Object icon.

The Edit Object dialog appears.

4. Edit the fields you need to change. See Adding EMC Adapter instances for EMC products on page 26 for field descriptions.

5. Click Test Connection to verify the connection.

6. To finish editing the adapter instance, click OK.


CHAPTER 3

EMC Storage Analytics Dashboards

This chapter contains the following topics:

l Topology mapping.................................................................................................32
l EMC dashboards................................................................................................... 46


Topology mapping

Topology mapping is viewed and traversed graphically using vRealize Operations Manager health trees. The dashboards developed for EMC Storage Analytics utilize topology mapping to display resources and metrics.

EMC Storage Analytics establishes mappings between:

l Storage system components

l Storage system objects and vCenter objects

Topology mapping enables health scores and alerts from storage system components, such as storage processors and disks, to appear on affected vCenter objects, such as LUNs, datastores, and virtual machines. Topology mapping between storage system objects and vCenter objects uses a vCenter adapter instance.

Avamar topology

The drawing in this section shows the components of the Avamar topology.

Figure 2 Avamar components

(The diagram shows the Avamar DPN, Domain, Policy/Group, Client, and DDR objects and the related VMware VM. In the key, arrowheads point to the parent object, and entities that can be cascaded and relationships to EMC and VMware objects are marked.)

Isilon topology

The drawing in this section shows the components of the Isilon topology.

Figure 3 Isilon components

(The diagram shows the adapter instance, cluster, tier, node pool, node, access zone, NFS export, and SMB share objects and the related VMware datastore. In the key, arrowheads point to the parent object, and entities that can be cascaded and relationships to EMC and VMware objects are marked.)

RecoverPoint for Virtual Machines topology

The drawing in this section shows the components of the RecoverPoint for Virtual Machines topology.

Figure 4 RecoverPoint for Virtual Machines components

(The diagram shows the RecoverPoint system, cluster, vRPA, splitter, consistency group, copy, link, replication set, journal volume, repository volume, and user volume objects and the related VMware virtual machines and cluster compute resource.)

ScaleIO topology

The drawing in this section shows the components of the ScaleIO topology.

Figure 5 ScaleIO components

(The diagram shows the MDM cluster, MDM, system, protection domain, SDC, SDS, device, fault set, storage pool, volume, and snapshot objects and the related VMware datastore. In the key, arrowheads point to the parent object, and entities that can be cascaded and relationships to EMC and VMware objects are marked.)

Unity topology

The drawing in this section shows the components of the Unity topology.

Figure 6 Unity components

(The diagram shows the EMC adapter instance, storage processor, storage pool, tier, disk, FAST Cache, LUN, consistency group, NAS server, file system, NFS export, storage container, and VVol objects and the related VMware NFS datastore, VMware VMFS datastore, VVol datastore, and VMware VM. In the key, arrowheads point to the parent object, and entities that can be cascaded and relationships to EMC and VMware objects are marked.)

UnityVSA topology

The drawing in this section shows the components of the UnityVSA topology.

Figure 7 UnityVSA components

VMAX3 and VMAX All Flash topology

The drawing in this section shows the components of the VMAX topology.

Figure 8 VMAX3 and VMAX All Flash components

(The diagram shows the VMAX3 array, storage resource pool, device, storage group, service level objectives, front-end director, front-end port, SRDF director, SRDF port, remote replica group, and eNAS disk volume objects and the related VMware datastore and virtual machine. In the key, arrowheads point to the parent object, and entities that can be cascaded and relationships to VMware or eNAS objects are marked.)

VMAX3 and VMAX All Flash topology rules

The rules in this section govern how objects are displayed in the VMAX topology dashboard and which metrics are collected for them.

l vRealize Operations Manager does not display devices that are unmapped and unbound.

l vRealize Operations Manager does not display devices that are mapped and bound but unused by VMware, VNX, eNAS, or VPLEX.

l If the corresponding EMC vSphere adapter instance is running on the same vRealize Operations Manager appliance, then the vRealize Operations Manager displays devices that are mapped, bound, and used by VMware datastores or RDMs.

l A VMAX device is displayed when the corresponding VPLEX adapter instance is added.

l vRealize Operations Manager does not display Storage Groups with unmapped and unbound devices.

l vRealize Operations Manager displays Storage Groups that contain mapped and bound devices.


VMAX VVol topology

The drawing in this section shows the components of the VMAX VVol topology.

Note

Because of the limitations of both vRealize Operations and the VMAX VVol architecture, it is not possible to show the relationship between virtual machines, VVols, and the VMAX VVol Storage Resource.

Figure 9 VMAX VVol components

VMAX3/VMAX All Flash Array

Storage Resource

Pool

Device

Front-End Director

Front-End Port

Storage GroupService Level Objectives

Relationship to VMware Object or eNAS Object

Arrowhead points to parent

Key: Relationships to EMC Objects

Entity can be cascaded

VMware Datastore

SRDF Director

Remote Replica Group

SRDF Port

Virtual Machine

eNAS Disk

Volume

VMAX VVol Protocol Endpoint

VMAX VVol

Storage Resource

VMAX VVol

Storage Container


VNX Block topology

The drawing in this section shows the components of the VNX Block topology.

Figure 10 VNX Block components

[Topology diagram: the Array Instance, Storage Processor (SP A or B), SP Front End Port, FAST Cache, RAID Group, Storage Pool, Tier, Disk, and LUN objects, together with the related Virtual Machine, Datastore, Physical Host, Hyper-V VM, Non-ESX Host System Server, and Non-ESX VM objects and their parent/child relationships.]


VNX File/eNAS topology

The drawing in this section shows the components of the VNX File and eNAS topologies.

Figure 11 VNX File/eNAS components

[Topology diagram: the Array Instance, Data Mover, Data Mover (standby), VDM, File System, NFS Export, File Pool, and Disk Volume objects, the backing VNX Block LUNs, VMAX3 Devices, and XtremIO Volumes or Snapshots, together with the related VMware Datastore object and their parent/child relationships.]


VNXe topology

The drawing in this section shows the components of the VNXe topology.

Figure 12 VNXe components

[Topology diagram: the EMC adapter instance, Storage Processor, Storage Pool, Tier, Disk, FAST Cache, LUN, LUN Group, File System, NFS Export, and NAS Server objects, together with the related VMware VMFS Datastore and VMware NFS Datastore objects and their parent/child relationships.]


VPLEX Local topology

The drawing in this section shows the components of the VPLEX Local topology.

Figure 13 VPLEX Local components

[Topology diagram: the Cluster, Engine, Director, FC Port, Ethernet Port, Storage View, Virtual Volume, Device, Extent, Storage Volume, and Storage Array objects, the VNX, VNXe, or VMAX Adapter Instance, the XtremIO Cluster, and the related VMware Datastore and Virtual Machine objects and their parent/child relationships.]


VPLEX Metro topology

The drawing in this section shows the components of the VPLEX Metro topology.

Figure 14 VPLEX Metro components

[Topology diagram: a VPLEX Metro system in which a Distributed Device and Distributed Volume span Cluster-1 and Cluster-2. Each cluster contains an Engine, Director, FC Port, Ethernet Port, Storage View, Virtual Volume, Device, Extent, Storage Volume, and Storage Array, backed by a VNX, VNXe, or VMAX Adapter Instance or an XtremIO Cluster, with the related VMware Datastore and Virtual Machine objects and their parent/child relationships.]


XtremIO topology

The drawing in this section shows the components of the XtremIO topology.

Figure 15 XtremIO components

[Topology diagram: the Adapter Instance, Cluster, X-Brick, Storage Controller, Data Protection Group, SSD, Volume, and Snapshot objects, together with the related VMware Datastore and Virtual Machine objects and their parent/child relationships.]


EMC dashboards

Use dashboards to view metrics.

The standard dashboards are delivered as templates. If a dashboard is accidentally deleted or changed, you can generate a new one. Table 3 on page 46 lists the EMC dashboards available for each EMC product.

Note

Unity dashboards are used for UnityVSA and VNXe.

Table 3 Dashboard-to-product matrix

Dashboard name     Avamar  Isilon  ScaleIO  VNX  Unity  VMAX  VPLEX  XtremIO  RecoverPoint for Virtual Machines
Storage Topology   ---     X       X        X    X      X     X      X        X
Storage Metrics    ---     X       X        X    X      X     X      X        X
Overview           X       X       X        X    X      X     X      X        X
Topology           X       X       X        X    X      X     X      ---      ---
Metrics            X       X       X        X    X      X     ---    X        X
Top-N              ---     X       ---      X    X      ---   ---    X        X
Performance        ---     ---     ---      ---  ---    ---   X      X        X
Communication      ---     ---     ---      ---  ---    ---   X      ---      ---

You can use the standard vRealize Operations Manager dashboard customization features to create additional dashboards that are based on your site requirements (some restrictions may apply).

Note

eNAS dashboards are available on the Dashboard XChange. Dashboard XChange on page 62 has more information.

Storage Topology dashboard

The Storage Topology dashboard provides an entry point for viewing resources and relationships between storage and virtual infrastructure objects.

Click the Storage Topology tab. Details for every object in every widget are available by selecting the object and clicking the Object Detail icon at the top of the widget.

The Storage Topology dashboard contains the following widgets:


Storage System Selector

This Resource widget filters the EMC Adapter instances that are found in each storage system. To populate the Storage Topology and Health widget, select an instance name.

Storage Topology and Health

This Health Tree widget provides a navigable visualization of resources and virtual infrastructure resources. Single-click to select resources, or double-click to change the navigation focus. To populate the Parent Resources and Child Resources widgets, select a resource in this widget.

Parent resources

This widget lists the parent resources of the resource selected in the Storage Topology and Health widget.

Child resources

This widget lists the child resources of the resource selected in the Storage Topology and Health widget.

Storage Metrics dashboard

Click the Storage Metrics tab to display resources and metrics for storage systems and to view graphs of resource metrics.

The Storage Metrics dashboard contains the following widgets:

Storage System Selector

This Resource widget lists all configured EMC Adapter instances. Select an instance name to populate the Resource Selector widget.

Resource Selector

This Health Tree widget lists each resource associated with the adapter instance selected in the Storage System Selector. Select a resource to populate the Metric Picker widget.

Metric Picker

This widget lists all the metrics that are collected for the resource selected in the Resource Selector widget. You can use the search feature of this widget to locate specific objects. Double-click a metric to create a graph of the metric in the Metric Graph widget.

Metric Graph

This widget graphs the metrics selected in the Metric Picker widget. It enables you to display multiple metrics simultaneously in a single graph or in multiple graphs.

EMC overview dashboards

Click an EMC product Overview tab to display a single view of performance and capacity metrics for selected resources with configured adapter instances. Scoreboards and heat maps group the contents by adapter instance.

Overview dashboards use color to provide a high-level view of performance and capacity metrics for selected devices.

l For measurable metrics, colors range from green to shades of yellow and orange to red. You can change the tolerances mapped to these colors. Default values are listed in the following sections.


l Metrics with varied values that cannot be assigned a range show relative values from lowest (light blue or light green) to highest (dark blue or dark green). Because the range of values for relative metrics has no lower or upper limit, the numerical difference between light and dark blue or green can be very small.
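As a rough illustration of how a measurable metric value maps to a color, the sketch below applies a set of hypothetical yellow/orange/red tolerances; the real default thresholds are the ones listed in the sections that follow, and they can be changed in the widget configuration.

    # Minimal sketch of threshold-to-color mapping for a measurable metric.
    # The thresholds passed in are hypothetical examples, not product defaults.

    def color_for(value, yellow, orange, red):
        """Return the heat-map color for a metric where higher is worse."""
        if value >= red:
            return "red"
        if value >= orange:
            return "orange"
        if value >= yellow:
            return "yellow"
        return "green"

    print(color_for(72, yellow=70, orange=80, red=90))   # -> yellow
    print(color_for(95, yellow=70, orange=80, red=90))   # -> red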

EMC Avamar Overview dashboard

This dashboard displays heat maps for Client and Policy and scoreboards for DPN and DDR.

The following tables describe the dashboard items available for EMC Avamar.

Table 4 Avamar heat maps

Client: Last changed (GB), Unintentionally Skipped Files, Last Backup Date, Last Backup Status, Last Elapsed Time, Overhead (GB)
Policy: Policy Client Count
DDR: Used Capacity

Table 5 Avamar scoreboards

DPN scoreboards:
Status: State, Active Sessions (count), HFS Address, License Expiration, Scheduler Enabled.
Capacity Data: Used Capacity (%), Protected Capacity (%), Total Capacity (GB). Thresholds: Yellow = 70, Orange = 80, Red = 90.
Success History (24 hrs): Backup failures (Count) (Yellow = 1, Orange = 2, Red = 3); Restore failures (Count) (Yellow = 1, Orange = 2, Red = 3).
Garbage Collection: Status, Result, Passes (Count), End Time, Recovered (GB), Chunks Deleted (Count).
Performance History (24 hrs): Average Files Changed (Count), Average Files Unintentionally Skipped (Count), Average Overhead (GB).

DDR scoreboards:
Status: File System Status, Monitoring Status, Default Replication Storage System.
Capacity Data: Used Capacity (%), Protected Capacity (%), Total Capacity (GB). Thresholds: Yellow = 70, Orange = 80, Red = 90.

Isilon Overview dashboard

The Isilon dashboard displays scoreboards for the resources listed in this section. For each scoreboard and selected metric, the configured Isilon adapter is shown.

Table 6 Isilon Overview dashboard

CPU Performance (% used): Green = 0% in use, Red = 100% in use
Overall Cache Hit Rate
Remaining Capacity (%): Green = more than 20% available, Yellow = 10-20% available, Red = 0-10% available
Disk Operations Latency: Green = 0-20 ms, Yellow = 20-50 ms, Red = more than 50 ms
Number of Active Clients: 0 to 1,500

RecoverPoint for VMs Overview dashboard

The table in this section describes the dashboard items available for RecoverPoint for Virtual Machines.

Table 7 RecoverPoint for VMs Overview dashboard

RecoverPoint for VMs System: Number of clusters (n/a); Number of splitters (Yellow = 24, Orange = 27, Red = 30).
RecoverPoint Cluster: Number of consistency groups (Yellow = 96, Orange = 109, Red = 122); Number of clusters (n/a); Number of protected Virtual Machine Disks (VMDKs) (n/a); Number of protected user volumes (Yellow = 1536, Orange = 1741, Red = 1946); Number of protected virtual machines for each RecoverPoint system (Yellow = 384, Orange = 435, Red = 486); Number of virtual RecoverPoint Appliances (vRPAs) for each cluster (Yellow = 8, Orange = 1, Red = n/a).
Consistency Group: Displays all RecoverPoint for Virtual Machines consistency groups, with states Enabled, Disabled, and Unknown.
Splitter: Number of vSphere ESX Clusters connected to a given splitter (n/a); Number of attached volumes (Yellow = 1536, Orange = 1741, Red = 1946).

ScaleIO Overview dashboard

The ScaleIO dashboard displays the heat maps listed in this section. For each heat map and selected metric, the configured ScaleIO adapter is shown.

Table 8 ScaleIO heat maps for System, Storage Pool, and Device

System: Displays the In Use Capacity metric. Green = 0 GB allocated, Yellow = 500 GB allocated, Red = 1000 GB allocated.
Storage Pool: Displays the In Use Capacity metric for each ScaleIO Storage Pool grouped by ScaleIO System. Green = 0 GB allocated, Yellow = 500 GB allocated, Red = 1000 GB allocated.
Device: Displays the In Use Capacity metric for each ScaleIO Device grouped by ScaleIO System and the SDS it is associated with. Green = 0 GB allocated, Yellow = 500 GB allocated, Red = 1000 GB allocated.

Table 9 ScaleIO heat maps for Protection Domain, SDS, and Fault Set

Protection Domain: Displays the In Use Capacity metric for each ScaleIO Protection Domain grouped by ScaleIO System. Light blue = 0 GB allocated, Dark blue = 1000 GB or more allocated.
SDS: Displays the In Use Capacity metric for each SDS grouped by ScaleIO System and Protection Domain. Light blue = 0 GB allocated, Dark blue = 1000 GB or more allocated.
Fault Set: Displays the Health (%) metric for each Fault Set. Light blue = 0%, Dark blue = 100%.


Unity Overview dashboard

The Unity Overview dashboard displays heat maps for Unity, UnityVSA, and VNXe.

Table 10 Unity Overview dashboard

CPU Performance: Storage Processor Utilization. Green = 0% busy, Red = 100% busy.
Pool capacity: Storage Pool Capacity Utilization (Green = 0% full, Red = 100% full); Storage Pool Available Capacity (Green = largest available capacity, Red = 0 GB available).
LUN, File System, and VVol Performance: LUN Read IOPS, LUN Write IOPS, LUN Read Bandwidth, LUN Write Bandwidth, LUN Total Latency, File System Read IOPS, File System Write IOPS, File System Read Bandwidth, File System Write Bandwidth, VVol Read IOPS, VVol Write IOPS, VVol Read Bandwidth, VVol Write Bandwidth, and VVol Total Latency. These metrics show relative values: dark green = highest, light green = lowest; Red = n/a.


VMAX Overview dashboard

The table in this section describes the heat maps displayed on the VMAX Overview tab.

Note: Latency scales are based on average customer requirements. If they do not meet your particular requirements for latency, EMC recommends that you adjust the scale appropriately.

Table 11 VMAX Overview dashboard

Storage Resource Pool Capacity: Total Managed Space (GB) and Used Capacity (GB) show relative values (dark blue = highest, light blue = lowest). Full (%): Green = 0, Yellow = 50, Red = 100.
Storage Group Performance: Total Reads (IO/s), the aggregate reads for all LUNs in the storage group, and Total Writes (IO/s), the aggregate writes for all LUNs in the storage group, show relative values (dark blue = highest, light blue = lowest). Read Latency (ms), the average read latency of all LUNs in the storage group, and Write Latency (ms), the average write latency of all LUNs in the storage group: Green = 0 ms, Yellow = 20 ms, Red = 40 ms.
Storage Resource Pool Performance: Total Reads (IO/s) and Total Writes (IO/s) show relative values (dark blue = highest, light blue = lowest). Total Latency (ms): Green = 0 ms, Yellow = 20 ms, Red = 40 ms.
Front End Director Performance: Total Bandwidth (MB/s), the cumulative amount of data transferred over all ports of the front-end director, and Total Operations (IO/s), the total number of operations taking place over all ports of a front-end director, show relative values (dark blue = highest, light blue = lowest).
SRDF Director Performance: Total Bandwidth (MB/s), the cumulative amount of data transferred over an SRDF director, and Total Writes (IO/s), the total number of writes over an SRDF director, show relative values (dark blue = highest, light blue = lowest).
SRDF Groups Performance: Writes (IO/s), the number of writes per second on the devices in the SRDF group, and Writes (MB/s), the number of megabytes per second sent from the SRDF group, show relative values (dark blue = highest, light blue = lowest).
VVol Storage Container Capacity: Subscribed Free (GB), Subscribed Limit (GB), and Subscribed Used (GB) show relative values (dark blue = highest, light blue = lowest).
VVol Storage Resource Capacity: Subscribed Free (GB), Subscribed Limit (GB), and Subscribed Used (GB) show relative values (dark blue = highest, light blue = lowest).

VNX Overview dashboard

The VNX Overview dashboard displays the heat maps listed in this section.

Table 12 VNX Overview dashboard

CPU performance: The CPU utilization of each Storage Processor and Data Mover on each configured adapter instance. Green = 0% busy, Red = 100% busy.
FAST cache performance: Read Cache Hit Ratio (%), the number of FAST Cache read hits divided by the total number of read or write I/Os across all RG LUNs and Pools configured to use FAST Cache, and Write Cache Hit Ratio (%), the number of FAST Cache write hits divided by the total number of read or write I/Os across all RG LUNs and Pools configured to use FAST Cache. Green = high ratio, Red = low ratio. (A worked example of the hit-ratio calculation follows this table.)
Pool capacity: RAID Group Available Capacity (Green = largest available capacity, Red = 0 GB available); Storage Pool Capacity Utilization (Green = 0% full, Red = 100% full); Storage Pool Available Capacity (Green = largest available capacity, Red = 0 GB available); File Pool Available Capacity (Green = largest available capacity, Red = 0 GB available).
LUN and file system performance: LUN Utilization (%), the percentage busy for all LUNs grouped by adapter instance (Green = 0% busy, Red = 100% busy); LUN Latency (ms), where latency values appear for RAID Group LUNs and Pool LUNs appear in white with no latency values reported (Green = 0 ms latency, Red = 20 ms latency or more); LUN Read IO/s and LUN Write IO/s, the relative number of read or write I/O operations per second serviced by the LUN (dark green = highest, light green = lowest); File System Read IO/s and File System Write IO/s, the relative number of read or write I/O operations per second serviced by the file system (dark green = highest, light green = lowest).
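The following is a minimal sketch of the FAST Cache hit-ratio calculation described in the table above, using made-up counters; it only illustrates that the denominator is the combined read and write I/O count on the FAST Cache-enabled LUNs and pools.

    # Illustration only: FAST Cache read hit ratio from hypothetical counters.
    fast_cache_read_hits = 8_000
    total_read_write_ios = 10_000   # all reads + writes on FAST Cache-enabled LUNs/pools

    read_cache_hit_ratio = 100.0 * fast_cache_read_hits / total_read_write_ios
    print(f"Read Cache Hit Ratio: {read_cache_hit_ratio:.1f}%")   # 80.0%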


VPLEX Overview dashboard

The EMC VPLEX Overview dashboard displays the widgets listed in this section.

Note

Red, yellow, and orange colors correlate with the Health State or Operational Status of the object. Any Health State or Operational Status other than those listed below will show green (good). Also note that because vRealize Operations Manager expects numeric values, you cannot modify these widgets.

Table 13 VPLEX Overview dashboard

CPU Health: Displays the CPU usage, as a percentage, for each director on the VPLEX system. Green = 0-75% usage, Yellow = 75-85% usage, Orange = 85-95% usage, Red = 95-100% usage.
Note: Generally, a director should stay below 75% CPU usage. Correct an imbalance of CPU usage across directors by adjusting the amount of I/O to the busier directors; make this adjustment by modifying existing storage view configurations. Identify busier volumes and hosts and move them to less busy directors. Alternatively, add more director ports to a storage view to create a better load balance across the available directors.
Memory Health: Displays the memory usage, as a percentage, of each director on the VPLEX system. Green = 0-70% usage, Yellow = 70-80% usage, Orange = 80-90% usage, Red = 90-100% usage.
Front-End Latency - Read/Write: Displays read and write latency in ms for each front-end director. Green = 0-7 ms, Yellow = 7-11 ms, Orange = 11-15 ms, Red = more than 15 ms.
Front-End Operations: Displays the active and total operations (in counts/s) for each front-end director. Thresholds: n/a.

XtremIO Overview dashboard

The XtremIO Overview dashboard displays the heat maps listed in this section.

Table 14 XtremIO Overview dashboard

Cluster Data Reduction: Displays the Data Deduplication Ratio and Compression Ratio of each cluster and the Data Reduction Ratio, which is the result of the combined Data Deduplication and Compression reduction on each cluster. Note: Compression Ratio shows as blue if XtremIO version 2.4.1 is running.
Cluster Efficiency: Displays the Thin Provisioning Savings (%) and the Total Efficiency of each cluster.
Volume: Displays volumes in one of two modes: Total Capacity or Consumed Capacity. Select a volume to display its sparkline charts.
Cluster: Displays, for each cluster, the Total Physical and Logical Capacity, Available Physical and Logical Capacity, and Consumed Physical and Logical Capacity.
Snapshot: Displays snapshots in one of two modes: Total Capacity or Consumed Capacity. Select a snapshot to display its sparkline charts.
Data Reduction Ratio: As data enters the XtremIO system, in-line deduplication and compression reduce the amount of space needed to store the data. This widget provides a ratio showing the overall data reduction savings from both the data deduplication and data compression processes combined.
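As a worked illustration of combined savings (the figures are made up, and expressing the overall ratio as the product of the two individual ratios is a common convention rather than a statement of the array's exact formula):

    # Hypothetical cluster ratios; illustrates how deduplication and compression
    # savings combine into an overall data reduction figure.
    dedup_ratio = 3.0          # 3:1 deduplication
    compression_ratio = 2.0    # 2:1 compression

    data_reduction_ratio = dedup_ratio * compression_ratio
    print(f"Data Reduction Ratio: {data_reduction_ratio:.1f}:1")   # 6.0:1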

VPLEX Communication dashboard

Click the VPLEX Communication tab to view a collection of heat maps that provide a single view of the performance of the communication links for a VPLEX configuration.

The EMC VPLEX Communication dashboard displays two types of heat maps:

l Metrics with definitive measurements such as intra-cluster local COM latency (0-15 ms) are assigned color ranges from lowest (green) to highest (red).

l Metrics with varied values that cannot be assigned a range show relative values from lowest (light blue) to highest (dark blue).

Note

Latency scales are based on average customer requirements. If they do not meet your particular requirements for latency, EMC recommends that you adjust the scale appropriately. For VPLEX Metro, EMC recommends adjusting the scale based on your discovered WAN round-trip time.

Table 15 VPLEX Communication dashboard

Cluster-1 COM Latency and Cluster-2 COM Latency: Average Latency (ms), the intra-cluster local COM latency, which occurs within the rack and is typically fast (less than 1 msec). Green = 0 ms, Red = 15 ms.
WAN Link Usage (VPLEX Metro only): Distributed Device Bytes Received (MB/s), the total amount of traffic received for all distributed devices on a director; Distributed Device Bytes Sent (MB/s), the total amount of traffic sent for all distributed devices on a director; Distributed Device Rebuild Bytes Received (MB/s), the total amount of rebuild/migration traffic received for all distributed devices on a director; Distributed Device Rebuild Bytes Sent (MB/s), the total amount of rebuild/migration traffic sent for all distributed devices on a director. Light blue = lowest, dark blue = highest.


VPLEX Performance dashboard

Click the VPLEX Metrics tab to view a collection of heat maps that provide a single view of the most important performance metrics for VPLEX resources.

The EMC VPLEX Performance dashboard displays two types of heat maps:

l Metrics with definitive measurements such as CPU usage (0-100%), response time latency (0-15 ms), or errors (0-5) are assigned color ranges from lowest (green) to highest (red).

l Metrics with varied values that cannot be assigned a range show relative values from lowest (light blue) to highest (dark blue).

Note

Latency scales are based on average customer requirements. If they do not meet your particular requirements for latency, EMC recommends that you adjust the scale appropriately.

Table 16 VPLEX Performance dashboard

Front-end Bandwidth: Reads (MB/s), total reads for the storage volumes across the front-end ports on a director; Writes (MB/s), total writes for the storage volumes across the front-end ports on a director; Active Operations (Counts/s), the number of active, outstanding I/O operations on the director's front-end ports. Light blue = lowest, dark blue = highest.
Back-end Bandwidth: Reads (MB/s), total reads for the storage volumes across the back-end ports on a director; Writes (MB/s), total writes for the storage volumes across the back-end ports on a director; Active Operations (Counts/s), the number of I/O operations per second through the director's back-end ports. Light blue = lowest, dark blue = highest.
Back-end Errors: Resets (count/s), LUN resets sent by VPLEX to a storage array LUN when it does not respond to I/O operations for over 20 seconds; Timeouts (count/s), an I/O from VPLEX to a storage array LUN takes longer than 10 seconds to complete; Aborts (count/s), an I/O from VPLEX to a storage array LUN is cancelled in transit. Resets indicate more serious problems than timeouts and aborts. Green = 0 errors, Red = 5 or more errors.
Front-end Latency: Read Latency (ms), the average read latency for all virtual volumes across all front-end ports on a director; Write Latency (ms), the average write latency for all virtual volumes across all front-end ports on a director; Queued Operations (Counts/s), the number of operations in the queue. Green = 0 ms, Red = 15 ms.
Note: For VPLEX Metro systems consisting primarily of distributed devices, the WAN round-trip time greatly affects the front-end write latency. See the COM Latency widgets and the WAN Link Usage widget in the VPLEX Communication dashboard.
Virtual Volumes Latency: Read Latency (ms), the average read latency for all virtual volumes on a director; Write Latency (ms), the average write latency for all virtual volumes on a director; Total Reads & Writes (Counts/s), the virtual volume total reads and writes per director. Green = 0 ms, Red = 15 ms.
Storage Volumes Latency: Read Latency (ms), the average read latency for all storage volumes on a director; Write Latency (ms), the average write latency for all storage volumes on a director. Green = 0 ms, Red = 15 ms.

XtremIO Performance dashboard

The XtremIO Performance dashboard provides percent utilization of the Storage Controller CPUs, key volume and SSD metrics, and sparklines.

The XtremIO Performance dashboard displays two types of heat maps:

l Metrics with definitive measurements such as CPU usage (0-100%) are assigned color ranges from lowest (green) to highest (red).

l Metrics with varied values that cannot be assigned a range show relative values from lowest (light blue) to highest (dark blue).

Table 17 XtremIO Performance dashboard

Storage Controllers: CPU 1 Utilization (%) and CPU 2 Utilization (%).
Volume: Total Operations, Total Bandwidth, Total Latency, Unaligned (%), and Average Block Size. Select a volume from this widget to display sparklines for it.
SSD: Endurance Remaining and Disk Utilization. Select an SSD from this widget to display sparklines for it.

RecoverPoint for VMs Performance dashboard

The RecoverPoint for VMs Performance dashboard provides a single view of the most important performance metrics for the resources.

The Performance dashboard displays two types of heat maps:

l Metrics with definitive measurements such as CPU usage (0-100%) are assigned color ranges from lowest (green) to highest (red).

l Metrics with varied values that cannot be assigned a range show relative values from lowest (light blue) to highest (dark blue).

Table 18 RecoverPoint for VMs Performance dashboard

Link | Lag (%): Percent of the current lag for the link and for protection. Thresholds at 90% and 100%.
Consistency Group | Protection Window: Current Protection Window (Hrs) shows the earliest point in hours for which RecoverPoint can roll back the consistency group's replica copy. Current Protection Window Ratio shows the ratio of the current protection window compared with the required protection window for the consistency group. (A worked example follows this table.)
vRPA | CPU Utilization (%): Percent utilization of virtual RecoverPoint Appliance (vRPA) CPUs. Yellow = 75%, Orange = 85%, Red = 95%.
Cluster: Performance for incoming writes (IOPS and KB/s) to clusters.
Consistency Group: Performance for incoming writes (IOPS and KB/s) to consistency groups.
vRPA: Performance for incoming writes (IOPS and KB/s) to vRPAs. Yellow = 75%, Orange = 85%, Red = 95%.
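To illustrate the Current Protection Window Ratio described in the table above, assume a consistency group that can currently roll back 20 hours against a required protection window of 24 hours (the numbers are hypothetical):

    # Illustration of the protection-window ratio for a consistency group.
    current_protection_window_hrs = 20.0
    required_protection_window_hrs = 24.0

    ratio = current_protection_window_hrs / required_protection_window_hrs
    print(f"Current Protection Window Ratio: {ratio:.2f}")   # 0.83 -> below the required window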


Topology dashboards

The topology dashboards provide an entry point for viewing resources and relationships between storage and virtual infrastructure objects for supported adapter instances.

Click the Topology tab for the EMC Adapter instance you want to view.

Details for every object in every widget are available by selecting the object and clicking the Resource Detail icon at the top of each widget.

The topology dashboards contain the following widgets:

Resource Tree

This widget shows the end-to-end topology and health of resources across vSphere and storage domains. You can configure the hierarchy that is shown by changing the widget settings; changing these settings does not alter the underlying object relationships in the database. Select any resource in this widget to view related resources in the stack.

Health Tree

The Health Tree widget provides a navigable visualization of resources that have parent or child relationships to the resource you select in the Resource Tree widget. Single-click to select resources, or double-click to change the navigation focus.

Metric Sparklines

This widget shows sparklines for the metrics of the resource you select in the Resource Tree widget.

Metrics dashboards

The metrics dashboards display resources and metrics for storage systems and allow the user to view graphs of resource metrics.

Click the Metrics tab for the EMC Adapter instance you want to view.

Available widgets for the metrics dashboards are as follows:

Resource Tree/Environment Overview

This widget shows the end-to-end topology and health of resources across vSphere and storage domains. You can configure the hierarchy that is shown by changing the widget settings; changing these settings does not alter the underlying object relationships in the database. Select any resource in this widget to view related resources in the stack.

Metric Selector/Metric Picker

This widget lists all the metrics that are collected for the resource you select in the Resource Tree/Environment Overview widget. Double-click a metric to create a graph of the metric in the Metric Graph/Metric Chart widget.

Metric Graph/Metric Chart

This widget graphs the metrics you select in the Metric Selector/Metric Picker widget. You can display multiple metrics simultaneously in a single graph or in multiple graphs.


Resource Events (VNX/VNXe only)

The resource event widget shows a graph that illustrates the health of the selected object over a period of time. Object events are labeled on the graph. When you hover over or click a label, event details appear in a message box:

Id: 460
Start Time: May 23, 2014 4:30:52 AM
Cancel Time: May 23, 2014 4:38:28 AM
Trigger: Notification
Resource: Pool 0 (Storage Pool)
Details: FAST VP relocation completed.

The message box includes the event ID, start time, cancel time, trigger, resource name, and event details.

Top-N dashboards

Click a Top-N dashboard to view your top performing devices at a glance.

The Top-N dashboards are available for:

l Isilon

l RecoverPoint for Virtual Machines

l VNX

l VNXe

l Unity

l UnityVSA

l Unity VVols

l XtremIO

Top performing devices are selected based on the current value of the associated metric that you configured for each widget. You can change the time period.

You can also change the number of objects in your top performer list.
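Conceptually, a Top-N widget ranks the relevant devices by the current value of its configured metric and keeps the first N. The following is a minimal sketch with hypothetical device data, not a representation of the product's internal implementation:

    # Minimal sketch of Top-N selection over hypothetical device metrics.
    devices = {
        "lun-01": 1450.0,   # e.g. current Read IOPS per device
        "lun-02": 2210.0,
        "lun-03": 380.0,
        "lun-04": 1975.0,
    }

    n = 3
    top_n = sorted(devices.items(), key=lambda kv: kv[1], reverse=True)[:n]
    for name, value in top_n:
        print(f"{name}: {value}")   # lun-02, lun-04, lun-01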

Isilon

By default, a Top-N dashboard shows the top 10 devices in the following categories across your Isilon system.

l Top-10 Active Nodes (24h) by number of active clients

l Top-10 CPU % Usage

l Top-10 Disk Throughput Rate In by Write (MB/s)

l Top-10 Disk Throughput Rate Out by Read (MB/s)

l Top-10 Overall Cache Hit Rate (24 hr) (Bytes/s)

l Top-10 L1 Cache Hit Rate (24 hr) (MB/s)

l Top-10 L2 Cache Hit Rate (24 hr) (MB/s)

l Top-10 L3 Cache Hit Rate (24 hr) (MB/s)

RecoverPoint for Virtual Machines

By default, a Top-N dashboard shows the top 10 devices in the following categories across RecoverPoint for Virtual Machines systems:

l Top-10 vRPAs by Incoming Writes (IO/s) (24h)


l Top-10 vRPAs by Incoming Writes (KB/s) (24h)

l Top-10 Clusters by Incoming Writes (IO/s) (24h)

l Top-10 Clusters by Incoming Writes (KB/s) (24h)

l Top-10 Consistency Groups by Incoming Writes (IO/s) (24h)

l Top-10 Consistency Groups by Incoming Writes (KB/s) (24h)

Unity, UnityVSA, VNX, and VNXe

By default, a Top-N dashboard shows the top five devices in the following categories across your VNX, Unity, UnityVSA, or VNXe systems:

l Top-5 by Read (IOPS)

l Top-5 by Write (IOPS)

l Top-5 by Read (MB/s)

l Top-5 by Write (MB/s)

l Top-5 by Consumed Capacity

Unity VVols

By default, a Top-N dashboard shows the top 10 devices in the following categories across Unity VVol systems.

l Consumed Capacity (GB)

l Total Latency (ms)

l Top-10 by Read (IOPS)

l Top-10 by Write (IOPS)

l Top-10 by Read (MB/s)

l Top-10 by Write (MB/s)

XtremIO

By default, a Top-N dashboard shows the top 10 devices in the following categories across your XtremIO system.

l Top-10 by Read (IOPS)

l Top-10 by Write (IOPS)

l Top-10 by Read Latency (usec)

l Top-10 by Write Latency (usec)

l Top-10 by Read Block Size (KB)

l Top-10 by Write Block Size (KB)

l Top-10 by Total Capacity (GB)

Dashboard XChange

The Dashboard XChange is a community page where users exchange EMC Storage Analytics custom dashboards.

EMC Storage Analytics provides a set of default dashboards that provide you with a variety of functional views into your storage environment. EMC Storage Analytics also enables you to create custom dashboards to visualize collected data according to your own requirements. The Dashboard XChange is an extension of that feature that enables you to:

l Export custom dashboards to the Dashboard XChange to benefit a wider EMC Storage Analytics community


l Import custom dashboards from the Dashboard XChange to add value to your own environment

The Dashboard XChange, hosted on the EMC Community Network, will also host dashboards designed by EMC to showcase widget functions that may satisfy a particular use-case in your environment. You can import these dashboards into your existing environment to enhance the functionality offered by EMC Storage Analytics. You can also edit imported dashboards to meet the specific requirements of your own storage environment.

The Dashboard XChange provides these resources to assist you in creating custom dashboards:

l How-to video that shows how to create custom dashboards

l Best practices guide that provides detailed guidelines for dashboard creation

l Slide show that demonstrates how to import dashboards from or export them to the Dashboard XChange

The EMC Storage Analytics Dashboard XChange is available at https://community.emc.com/community/products/storage-analytics. Note that there are XChange Zones for supported platforms.


CHAPTER 4

Resource Kinds and Metrics

This chapter contains the following topics:

l Avamar metrics..................................................................................................... 66
l Isilon metrics........................................................................................................ 71
l ScaleIO metrics..................................................................................................... 74
l RecoverPoint for Virtual Machines metrics.............................................................76
l Unity and UnityVSA metrics................................................................................... 79
l VMAX metrics........................................................................................................84
l VNX Block metrics................................................................................................. 87
l VNX File/eNAS metrics.......................................................................................... 91
l VNXe metrics.........................................................................................................95
l VPLEX metrics..................................................................................................... 100
l XtremIO metrics.................................................................................................. 110


Avamar metrics

EMC Storage Analytics provides Avamar metrics for DPN, DDR, Domain, Policy, and Client.

The following tables show the metrics available for each resource.

Note

ESA does not monitor replication domain or client resources.

Table 19 Avamar DPN metrics

Metric Group Metric Description

General HFS Address (String) (Hash File System address) The hostname or IP address that backup clients use to connect to this Avamar server

License Expiration (String)

Calendar date on which this server's licensing expires

Scheduler Enabled (String)

True or False

Active Sessions (Count) Number of active Avamar sessions

Status State Status of the node. One of the following values:

l Online: The node is functioning correctly.

l Read-Only: This status occurs normally as background operations are performed and when backups have been suspended.

l Time-Out: MCS could not communicate with this node.

l Unknown: Node status cannot be determined.

l Offline: The node has experienced a problem. If ConnectEMC has been enabled, a Service Request (SR) is logged. Go to EMC Online Support to view existing SRs. Search the knowledgebase for the Avamar Data Node offline solution esg112792.

l Full Access: Normal operational state for an Avamar server. All operations are allowed.

l Admin: The Avamar server is in an administrative state in which the Avamar server and root user can read and write data; other users are only allowed to read data.

l Admin Only: The Avamar server is in an administrative state in which the Avamar server or root user can read or write data; other users are not allowed access.

l Admin Read-Only: The Avamar server is in an administrative read-only state in which the Avamar server or root user can read data; other users are not allowed access.

l Degraded: The Avamar server has experienced a disk failure on one or more nodes. All operations are allowed, but immediate action should be taken to fix the problem.

l Inactive: Avamar Administrator was unable to communicate with the Avamar server.

l Node Offline: One or more Avamar server nodes are in an OFFLINE state.


l Suspended: Avamar Administrator was able to communicate with the Avamar server, but normal operations have been temporarily suspended.

l Synchronizing: The Avamar server is in a transitional state. It is normal for the server to be in this state during startup and for short periods of time during maintenance operations.

Garbage Collection

Status Idle or Processing

Result OK or Error code

Start Time Time format is: "January 1, 1970, 00:00:00 GMT."

End Time Time format is: "January 1, 1970, 00:00:00 GMT."

Passes

Recovered (GB)

Chunks Deleted

Index Stripes

Index Stripes Processed

Capacity Total Capacity (GB)

Used Capacity (GB)

Used Capacity (%) This value is derived from the largest Disk Utilization value on the Avamar tab in the Server Monitor, and therefore represents the absolute maximum Avamar server storage utilization. Actual utilization across all modules, nodes, and drives might be slightly lower.

Protected Capacity (GB)

Protected Capacity (%) Percent of client data in proportion to total capacity that has been backed up (protected) on this server

Free Capacity (GB)

Free Capacity (%)

Success history (Over Last 24 Hours)

Backup Failures (Count)

Backup Success (%)

Backup Successes (Count)

Restore Failures (Count)

Restores Success (%)

Restores Successes (Count)

Performance History Averages (Over Last 24 Hours)

Backup Average Elapsed Time

Average Scanned (GB)

Average Changed (GB)

Average Files Changed (Count)

Average Files Skipped (Count)

Average Sent (GB)

Average Excluded (GB)

Average Skipped (GB)

Average Modified & Sent (GB)

Average Modified & Not Sent (GB)

Average Overhead (GB)
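Before moving on to the DDR metrics, here is a small illustration of how the DPN Capacity values in Table 19 relate to one another. The figures are hypothetical, and the percentage formulas are assumptions drawn from the metric descriptions above (Protected Capacity (%) as a share of total capacity, Used Capacity (%) as the largest per-disk utilization).

    # Hypothetical Avamar DPN capacity figures (illustration only).
    total_capacity_gb = 4000.0
    protected_capacity_gb = 2600.0
    per_disk_utilization_pct = [61.0, 64.5, 63.2]   # per-node/disk utilization samples

    protected_capacity_pct = 100.0 * protected_capacity_gb / total_capacity_gb
    used_capacity_pct = max(per_disk_utilization_pct)   # "absolute maximum" utilization

    print(f"Protected Capacity: {protected_capacity_pct:.1f}%")   # 65.0%
    print(f"Used Capacity: {used_capacity_pct:.1f}%")             # 64.5%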

Table 20 Avamar DDR metrics

Metric Group Metric Description

Capacity Total Capacity (GB)

Used Capacity (%)

Used Capacity (GB)

Free Capacity (GB)

Free Capacity (%)

Protected Capacity (GB)

Protected Capacity (%)

General Hostname IP or FQDN of the DDR

DDOS Version Data Domain Operating System version

Serial Number Disk serial number

Target for Avamar Checkpoint Backups

Model Number

Default replication storage system

Maximum Streams The maximum number of Data Domain system streams that Avamar can use at any one time to perform backups and restores. This number is configured for the Data Domain system when you add the system to the Avamar configuration.

Maximum Streams Limit

User Name

SNMP Community

SNMP Trap Port

Status File System Status

Monitoring Status

Table 21 Avamar Domain metrics

Metric group Metric

General Description

Contact

Directory

Email

Location

Phone

Table 22 Avamar Policy metrics

Metric group Metric

General Encryption Method

Override Schedule

Auto Proxy Mapping

Client Count

Enabled

Domain

Dataset

Schedule Recurrence

Days of Week

Hours of Day

Next Run Time


Terminate Date

Retention Name

Expiration Date

Duration

Table 23 Avamar Client metrics

Metric group Metric

General Description

Latest operation Start Time

End Time

Status

Elapsed Time

Type

Description

Expiration Time

Retention Tag

Size (GB)

Scanned (GB)

Changed (GB)

Number

Excluded (GB)

Modified & Sent (GB)

Modified & Not Sent (GB)

Skipped (GB)

Overhead (GB)

Files Changed (Count)

Files Skipped (Count)

Change Rate (%)


Isilon metrics

EMC Storage Analytics provides metrics for Isilon clusters and nodes.

Note

Only the resource kinds with associated metrics are shown. Performance metrics that cannot be calculated are not displayed.

Table 24 Isilon Cluster metrics

Metric group Metric Description

Summary CPU % Use Average CPU usage for all nodes in the monitored cluster

Number of Total Jobs Total number of active and inactive jobs on the cluster

Number of Active Jobs Total number of active jobs on the cluster

Capacity Total Capacity (TB) Total cluster capacity in terabytes

Remaining Capacity (TB) Total unused cluster capacity in terabytes

Remaining Capacity (%) Total unused cluster capacity in percent

User Data Including Protection (TB) Amount of storage capacity that is occupied by user data and protection for that user data

Snapshots Usage (TB) Amount of data occupied by snapshots on the cluster

Deduplication Deduplicated Data > Physical (GB) Amount of data that has been deduplicated on the physical cluster

Deduplicated Data > Logical (GB) Amount of data that has been deduplicated on the logical cluster

Space Saved > Physical (GB) Amount of physical space that deduplication has saved on the cluster

Space Saved > Logical (GB) Amount of logical space that deduplication has saved on the cluster

Performance Disk Operations Rate > Read Operations Average rate at which the disks in the cluster are servicing data read change requests

Disk Operations Rate > Write Operations Average rate at which the disks in the cluster are servicing data write change requests

Pending Disk Operations Latency (ms) Average amount of time disk operations spend in the input output scheduler

Disk Throughput Rate > Read Throughput (MB/s)

Total amount of data being read from the disks in the cluster

Disk Throughput Rate > Write Throughput (MB/s)

Total amount of data being written to the disks in the cluster

Cache L1 Cache Hits (MB/s) Amount of requested data that was available from the L1 cache

L2 Cache Hits (MB/s) Amount of requested data that was available from the L2 cache

L3 Cache Hits (MB/s) Amount of requested data that was available from the L3 cache

Overall Cache Hit Rate (MB/s) Amount of data requests that returned hits


Table 25 Isilon Node metrics

Metric group Metric Description

Summary CPU % Use Average percentage of the total available node CPU capacity used for this node

Number of Active Clients Number of unique client addresses generating protocol traffic on the monitored node

Number of Connected Clients Number of unique client addresses with established TCP connections to the node

Number of Total Job Workers Number of active and assigned workers on the node

Performance Deadlock File System Event Rate Number of file system deadlock events that the file system is processing per second

Locked File System Event Rate Number of file lock operations occurring in the file system per second

Blocking File System Event Rate Number of file blocking events occurring in the file system per second

Average Operations Size (MB) Average size of the operations or transfers that the disks in the node are servicing

Contended File System Event Rate Number of file contention events, such as lock contention or read/write contention, occurring in the file system per second

File System Event Rate Number of file system events, or operations, (such as read, write, lookup, or rename) that the file system is servicing per second

Disk Operations Rate > Read Operations

Average rate at which the disks in the node are servicing data read requests

Disk Operations Rate > Write Operations

Average rate at which the disks in the node are servicing data write requests

Average Pending Disk Operations Count

Average number of operations or transfers that are in the processing queue for each disk in the node

Disk Throughput Rate > Read Operations

Total amount of data being read from the disks in the node

Disk Throughput Rate > Write Operations

Total amount of data being written to the disks in the node

Pending Disk Operation Latency (ms) Average amount of time that disk operations spend in the input/output scheduler

Disk Activity (%) Average percentage of time that disks in the node spend performing operations instead of sitting idle

Protocol Operations Rate Total number of requests that were originated by clients for all file data access protocols

Slow Disk Access Rate Rate at which slow (long-latency) disk operations occur

External Network External Network Errors > In Number of incoming errors generated for the external network interfaces

External Network Errors > Out Number of outgoing errors generated for the external network interfaces

External Network Packets Rate > In Total number of packets that came in through the external network interfaces in the monitored node


External Network Packets Rate > Out Total number of packets that went out through the external network interfaces in the monitored node

External Network Throughput Rate > In (MB/s)

Total amount of data that came in through the external network interfaces in the monitored node

External Network Throughput Rate > Out (MB/s)

Total amount of data that went out through the external network interfaces in the monitored node

Cache Average Cache Data Age Average amount of time data has been in the cache

L1 Data Prefetch Starts (Bytes/s) Amount of data that was requested from the L1 prefetch

L1 Data Prefetch Hits (Bytes/s) Amount of requested data that was available in the L1 prefetch

L1 Data Prefetch Misses (Bytes/s) Amount of requested data that did not exist in the L1 prefetch

L1 Cache Starts (Bytes/s) Amount of data that was requested from the L1 cache

L1 Cache Hits (Bytes/s) Amount of requested data that was available in the L1 cache

L1 Cache Misses (Bytes/s) Amount of requested data that did not exist in the L1 cache

L1 Cache Waits (Bytes/s) Amount of requested data that existed in the L1 cache but was not available because the data was in use

L2 Data Prefetch Starts (Bytes/s) Amount of data that was requested from the L2 prefetch

L2 Data Prefetch Hits (Bytes/s) Amount of requested data that was available in the L2 prefetch

L2 Data Prefetch Misses (Bytes/s) Amount of requested data that did not exist in the L2 prefetch

L2 Cache Starts (Bytes/s) Amount of data that was requested from the L2 cache

L2 Cache Hits (Bytes/s) Amount of requested data that was available in the L2 cache

L2 Cache Misses (Bytes/s) Amount of requested data that did not exist in the L2 cache

L2 Cache Waits (Bytes/s) Amount of requested data that existed in the L2 cache but was not available because the data was in use

L3 Cache Starts (Bytes/s) The amount of data that was requested from the L3 cache

L3 Cache Hits (Bytes/s) Amount of requested data that was available in the L3 cache

L3 Cache Misses (Bytes/s) Amount of requested data that did not exist in the L3 cache

L3 Cache Waits (Bytes/s) Amount of requested data that existed in the L3 cache but was not available because the data was in use

Overall Cache Hit Rate (Bytes/s) Amount of data requests that returned hits

Overall Cache Throughput Rate (Bytes/s)

Amount of data that was requested from cache


ScaleIO metrics

EMC Storage Analytics provides ScaleIO metrics for System, Protection Domain, Device, SDS, Storage pool, Snapshot, MDM cluster, MDM, SDC, Fault Set, and Volume.

Note

Only the resource kinds with associated metrics are shown. Most performance metrics with values of zero are not displayed.

The following table shows the metrics available for each resource kind.

Table 26 ScaleIO metrics

Metric System Protection Domain

Device SDS Storage pool

Snapshot MDM cluster

MDM SDC Fault Set

Volume

Maximum Capacity (GB)

X X X X X

Used Capacity (GB) X X X X X

Spare Capacity Allocated (GB)

X X X X X

Thin Used Capacity (GB)

X X X X X

Thick Used Capacity(GB)

X X X X X

Protected Capacity(GB)

X X X X X

Snap Used Capacity(GB)

X X X X X

Unused Capacity (GB)

X X X X X

Used Capacity (%) X X X X X

Thin Used Capacity (%)

X X X X X

Thick Used Capacity (%)

X X X X X

Protected Capacity (%)

X X X X X

Snap Used Capacity (%)

X X X X X

Total Reads (MB/s) X X X X X X X X

Total Writes (MB/s) X X X X X X X X

Average Read IO size (MB)

X X X X X X X



Average Write IO Size (MB)

X X X X X X X

Size (GB) X

Total Read IO/s X X X

Total Write IO/s X X X

MDM Mode (String) X

State (String) X

Name (String) X X


RecoverPoint for Virtual Machines metrics

EMC Storage Analytics provides RecoverPoint for Virtual Machines metrics for Cluster, Consistency Group, Copy, Journal Volume, Link, Virtual RecoverPoint Appliance (vRPA), RecoverPoint for Virtual Machines System, Replication Set, Repository Volume, Splitter, and User Volume.

This section contains RecoverPoint for Virtual Machines metrics for the following resource kinds:

Table 27 RecoverPoint metrics for Cluster

Metric Group Metric Additional Information

Performance Incoming Writes (IO/s) Sum of incoming cluster writes from all child vRPAs

Incoming Writes (MB/s) Sum of incoming cluster throughput from all child vRPAs

Summary Number of Consistency Groups Sum of all child vRPA consistency groups

Number of Protected VMDKs Sum of user volumes that the cluster protects on all virtual machines, including replica virtual machines

Number of Protected VMs Sum of virtual machines, including replica virtual machines, that the cluster protects

Number of vRPAs Sum of all child vRPAs

Table 28 RecoverPoint metrics for Consistency Group

Metric Group Metric Additional Information

Performance Incoming Writes (IO/s) Sum of incoming consistency group writes per second

Incoming Writes (MB/s) Sum of incoming consistency group writes throughput

Status Enabled Boolean value that indicates the consistency group is enabled

Protection Current Protection Window (Hrs) The farthest time in hours for which RecoverPoint can roll back the consistency group's replica copy

Current Protection Window Ratio Ratio of the current protection window for the consistency group's replica copy as compared with your required protection window

Table 29 RecoverPoint metrics for Copy

Metric Group Metric Additional Information

Protection Current Protection Window (Hrs) The farthest time in hours for which RecoverPoint can roll back the replica copy

Current Protection Window Ratio Ratio of current protection window for the replica copy as compared with your required protection window

Status Active Boolean value indicates if the copy is active

Enabled Boolean value indicates if the copy is enabled


Regulated Boolean value indicates if the copy is regulated

Removable Boolean value indicates if the copy is removable

Role Role of the copy, which is retrieved from the role of the consistency group copy settings

Suspended Boolean value indicates if the copy is suspended

Table 30 RecoverPoint metrics for Journal Volume

Metric Group Metric Additional Information

Capacity Capacity (GB) Size of journal volume in GB

Table 31 RecoverPoint metrics for Link

Metric Group Metric Additional Information

Configuration RPO The allowed maximum for lag times of consistency group copies

RPO Type The set type of RPOs to measure

Status Current Compression Ratio The compression ratio through the link

Current Lag Current lag time between the copy and production

Current Lag Type The type set to measure the current lag time

Is In Compliance Exists only with consistency groups in asynchronous replication mode; a yes-no value that indicates if the current lag is in compliance with the RPO

Protection Current Lag (%) Exists only with consistency groups in asynchronous replication mode; indicates current lag ratio as compared with RPO
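For example, a link with an RPO of 60 seconds and a current lag of 45 seconds would compare against its RPO as follows (hypothetical values; the calculation is shown only to illustrate the comparison the Current Lag (%) and Is In Compliance metrics describe).

    # Illustration of comparing current lag against the configured RPO.
    rpo_seconds = 60.0
    current_lag_seconds = 45.0

    current_lag_pct = 100.0 * current_lag_seconds / rpo_seconds
    in_compliance = current_lag_seconds <= rpo_seconds
    print(f"Current Lag: {current_lag_pct:.0f}% of RPO, in compliance: {in_compliance}")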

Table 32 RecoverPoint metrics for virtual RecoverPoint Appliance (vRPA)

Metric Group Metric Additional Information

Performance CPU Utilization (%) CPU usage of vRPAs

Note

Utilization values appear as decimals (not percentages). Values can range from 0.0 to 1.0, with a value of 1.0 indicating 100%.

Incoming Writes (IO/s) Incoming application writes per second

Incoming Writes (MB/s) Incoming application writes for throughput

Summary Number of Consistency Groups


Table 33 RecoverPoint metrics for RecoverPoint for Virtual Machines System

Metric Group Metric Additional Information

Summary Number of RecoverPoint Clusters Sum of all the clusters in the RecoverPoint system

Number of Splitters Sum of all the splitters in the RecoverPoint system

Table 34 RecoverPoint metrics for Replication Set

Metric Group Metric Additional Information

Capacity Capacity (GB) Size of the user volume in GB that the replication set is protecting

Table 35 RecoverPoint metrics for Repository Volume

Metric Group Metric Additional Information

Capacity Capacity (GB) Size of repository volume in GB

Table 36 RecoverPoint metrics for Splitter

Metric Group Metric Additional Information

Summary Number of Volumes Attached Number of volumes attached to the splitter

Number of ESX Clusters Connected

Number of clusters connecting to the splitter

Table 37 RecoverPoint metrics for User Volume

Metric Group Metric Additional Information

Capacity Capacity (GB) Size of user volume

Status Role Role of the copy to which the user volume belongs


Unity and UnityVSA metrics

EMC Storage Analytics provides Unity and UnityVSA metrics for Array, Disk, FAST Cache, File System, LUN, Storage Pool, Tier, VVol, Virtual Disk, and Storage Processor. Only the resource kinds with associated metrics are shown.

Unity and UnityVSA metrics for EMC Adapter Instance (array)

l Elapsed collect time (ms)

l New metrics in each collect call

l New resources in each collect call

l Number of down resources

l Number of metrics collected

l Number of resources collected

Table 38 Unity and UnityVSA metrics for Disk, FAST Cache, File System, LUN, Storage Pool, Tier, VVol, Virtual Disk

Metric group | Metric | Disk | FAST Cache | File System | LUN | Storage Pool | Tier | VVol | Virtual Disk

Capacity Size (GB) X X

Available Capacity (GB)

X X X X X

Capacity/Total capacity (GB)

X X X

Consumed Capacity (GB)

X X X X X

Full (%) X X X

Max Capacity (GB)

Thin Provisioning

X

Subscribed (%) X

User Capacity (GB)

X X

Configuration State X X

RAID type X X

FAST Cache X

Disk Count X

Performance Busy (%) X X X

Reads (IO/s) X X X X

Reads (MB/s) X X X X

Table 38 Unity and UnityVSA metrics for Disk, FAST Cache, File System, LUN, Storage Pool, Tier, VVol, Virtual Disk (continued)

Metric group | Metric | Disk | FAST Cache | File System | LUN | Storage Pool | Tier | VVol | Virtual Disk

Total Latency (ms)

X X X

Writes (IO/s) X X X X

Writes (MB/s) X X X X

Queue Length X X

Total (IO/s) X

Total (MB/s) X

Data to Move Down (GB)

X

Data to Move Up (GB)

X

Data to Move Within (GB)

X

Applies to Unity only

Applies to UnityVSA only

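The capacity percentages listed in Table 38 can be read against the gigabyte values reported for the same object. The sketch below is a minimal illustration under stated assumptions, not a definition taken from this guide: it assumes Full (%) compares consumed capacity with total usable capacity, and Subscribed (%) compares the capacity promised to thin-provisioned objects with total usable capacity. The function and field names are hypothetical.

```python
def pool_capacity_summary(total_gb: float, consumed_gb: float,
                          subscribed_gb: float) -> dict:
    """Illustrative only: derive percentage metrics for a storage pool.

    Assumes Full (%) = consumed / total and Subscribed (%) = subscribed / total.
    A Subscribed (%) value above 100 would indicate oversubscription of the pool.
    """
    return {
        "Available Capacity (GB)": total_gb - consumed_gb,
        "Full (%)": 100.0 * consumed_gb / total_gb,
        "Subscribed (%)": 100.0 * subscribed_gb / total_gb,
    }

# Example: a 1000 GB pool with 400 GB consumed and 1500 GB subscribed
print(pool_capacity_summary(1000, 400, 1500))
# -> Available 600.0 GB, Full 40.0 %, Subscribed 150.0 %
```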
Table 39 Unity and UnityVSA metrics for Storage Processor

Metric group Metric

Cache Dirty Cache Pages (MB)

Read Cache Hit Ratio (%)

Write Cache Hit Ratio (%)

Network CIFS Reads (IOPS)

CIFS Reads (MB/s)

CIFS Writes (IOPS)

CIFS Writes (MB/s)

Network In Bandwidth (MB/s)

Network Out Bandwidth (MB/s)

NFS Reads (IOPS)

NFS Reads (MB/s)

NFS Writes (IOPS)

NFS Writes (MB/s)

Network > NFSv2 Read Calls/s

Read Errors/s

Table 39 Unity and UnityVSA metrics for Storage Processor (continued)

Metric group Metric

Read Response Time (ms)

Reads (IOPS)

Write Calls/s

Write Errors/s

Write Response Time (ms)

Writes (IOPS)

Network > NFSv3 Network > NFSv4

Access Calls/s

Access Errors/s

Access Response Time (ms)

GetAttr Calls/s

GetAttr Errors/s

GetAttr Response Time (ms)

Lookup Calls/s

Lookup Errors/s

Lookup Response Time (ms)

Read Calls/s

Read Errors/s

Read Response Time (ms)

Reads (IOPS)

SetAttr Calls/s

SetAttr Errors/s

SetAttr Response Time (ms)

Write Calls/s

Write Errors/s

Write Response Time (ms)

Writes (IOPS)

Network > SMB1 Close Average Response Time (ms)

Close Calls/s

Close Max Response Time (ms)

NTCreateX Average Response Time (ms)

NTCreateX Calls/s

NTCreateX Max Response Time (ms)

Reads (IOPS)

Table 39 Unity and UnityVSA metrics for Storage Processor (continued)

Metric group Metric

Reads (MB/s)

ReadX Average Response Time (ms)

ReadX Calls/s

ReadX Max Response Time (ms)

Trans2Prim Average Response Time (ms)

Trans2Prim Calls/s

Trans2Prim Max Response Time (ms)

Writes (IOPS)

Writes (MB/s)

WriteX Average Response Time (ms)

WriteX Calls/s

WriteX Max Response Time (ms)

Network > SMB2 Close Average Response Time (ms)

Close Calls/s

Close Max Response Time (ms)

Create Average Response Time (ms)

Create Calls/s

Create Max Response Time (ms)

Flush Average Response Time (ms)

Flush Calls/s

Flush Max Response Time (ms)

Ioctl Average Response Time (ms)

Ioctl Calls/s

Ioctl Max Response Time

Queryinfo Average Response Time (ms)

Queryinfo Calls/s

Queryinfo Max Response Time (ms)

Read Average Response Time (ms)

Read Calls/s

Read Max Response Time (ms)

Reads (IOPS)

Reads (MB/s)

Write Average Response Time (ms)

Table 39 Unity and UnityVSA metrics for Storage Processor (continued)

Metric group Metric

Write Calls/s

Write Max Response Time (ms)

Writes (IOPS)

Writes (MB/s)

Performance Busy (%)

Reads (IOPS)

Reads (MB/s)

Writes (IOPS)

Writes (MB/s)

VMAX metrics

EMC Storage Analytics provides metrics for Device, Front-End Director, Front-End Port, Remote Replica Group, SRDF Director, Storage Group, Storage Resource Pool (SRP), SLO, VVol Protocol Endpoint (VVol PE), and SRDF Port.

Table 40 VMAX Performance metrics

Metric | Front-end director | Front-end port | Remote replica group | SRDF director | Storage group | SRP

Read Latency (ms) X

Reads (IO/s) X X X

Reads (MB/s) X X

Total Bandwidth (MB/s) X X X X X

Total Operations (IO/s) X X X X X

Write Latency (ms) X

Writes (IO/s) X X X X X

Writes (MB/s) X X X

Total Hits (IO/s) X X

Total Latency (ms) X X

Busy (%) X

Average Cycle Time (s) X

Minimum Cycle Time (s)

Delta Set Extension Threshold X

HA Repeat Writes (counts/s) X

Devices in Session (count) X

SRDFA Writes (IO/s) X

SRDFA Writes (MB/s) X

SRDFS Writes (IO/s) X

SRDFS Writes (MB/s) X

Table 41 VMAX Capacity metrics

Metric | Device | Storage group | SRP | VVol Storage Container | VVol Storage Resource

Total Capacity (GB) X X

Used Capacity (GB) X X X

EMC VP Space Saved (%) X

Table 41 VMAX Capacity metrics (continued)

Metric | Device | Storage group | SRP | VVol Storage Container | VVol Storage Resource

EMC Compression Ratio X

EMC Full (%) X X

Snapshot space (GB) X

Total Managed Space (GB) X

EMC Remaining Managed Space (GB)

X

Subscribed Limit (GB) X X

Subscribed Free (GB) X X

Subscribed Used (GB) X X

Note

The VMAX storage group capacity metrics related to compression are only valid for VMAX All Flash arrays running HYPERMAX OS 5977 2016 Q3 SR and later. Because VMAX3 arrays do not support compression, non-zero values for VMAX3 arrays are irrelevant and should be ignored.

Table 42 VMAX Configuration metrics

Metric | Remote replica group | VVol Protocol Endpoint | Description

Number of Masking Views

X

Number of Storage Groups

X

Modes X

Type X

Metro X

Async X

Witness X RDF group is configured as Physical Witness (Yes, No)

Witness Array or Name

X

Table 43 VMAX Status metrics

Metric Remote replica group

Witness Configured X

Witness Effective X

Bias Configured X

Bias Effective X

Witness Degraded X

Table 44 VMAX Summary metrics

Metric VVol Protocol Endpoint

Reserved X

Status X

Table 45 VMAX Default metrics

Metric Storage group SLO

Compliance X

Reporting X

VNX Block metrics

EMC Storage Analytics provides VNX Block metrics for Array, Disk, FAST Cache, Pool LUN, RAID Group, RAID Group LUN, SP Front-end Port, Storage Pool, Storage Processor, and Tier.

The following table shows the metrics available for each resource kind.

Table 46 VNX Block metrics

Metric | Array | Disk | FAST Cache | Pool LUN | RAID group | RAID group LUN | SP Front-end port | Storage pool | Storage processor | Tier

Elapsed collect time (ms)

X

New metrics in each collect call (count)

X

New resources in each collect call (count)

X

Number of down resources

X

Number of metrics collected

X

Number of resources collected

X

Busy (%) X X X X

Capacity (GB) X

Hard Read Errors (Count)

X

Hard Write Errors (Count)

X

LUN Count X

Queue Length X X X

Read Size (MB) X X X X

Reads (IOPS) X X X X X

Reads (MB/s) X X X X X

Total Latency (ms) X X X

Total Operations (IOPS)

X X X X X

Total Bandwidth (MB/s)

X X X X X

Table 46 VNX Block metrics (continued)

Metric | Array | Disk | FAST Cache | Pool LUN | RAID group | RAID group LUN | SP Front-end port | Storage pool | Storage processor | Tier

Write Size (MB) X X X X

Writes (IOPS) X X X X X

Writes (MB/s) X X X X X

Current Operation X X

Current Operation Status

X X

Current Operation Complete (%)

X

Dirty (%) X

Flushed (MB) X

Mode X

RAID Type X X

Read Cache Hit Ratio (%)

X X X

Read Cache Hits (Hits/s)

X

Read Cache Misses (Misses/s)

X

Size (GB) X

Write Cache Hit Ratio (%)

X X

Write Cache Hits (Hits/s)

X

Write Cache Misses (Misses/s)

X

Average Busy Queue Length

X X

Capacity Tier Distribution (%)

X

Consumed Capacity (GB)

X X X

Explicit trespasses (Count)

X

Extreme Performance Tier Distribution (%)

X

Table 46 VNX Block metrics (continued)

Metric | Array | Disk | FAST Cache | Pool LUN | RAID group | RAID group LUN | SP Front-end port | Storage pool | Storage processor | Tier

Implicit trespasses (Count)

X

Initial Tier X

Performance Tier Distribution (%)

X

Read Cache State X X X

Service Time (ms) X X

Tiering Policy X

User Capacity (GB) X X X X

Write Cache State X X X

Available Capacity (GB)

X X X

Defragmented (%) X

Disk Count X X

Free Continuous Group of Unbound Segments (GB)

X

Full (%) X

LUN Count X

Max Disks X

Max LUNs X

Raw Capacity (GB) X

Queue Full Count X X

Auto Tiering X

Auto-Tiering State X

Data Movement Completed (GB)

X

Data to Move Down (GB)

X

Data to Move Up (GB)

X

Data to Move Within (GB)

X

Table 46 VNX Block metrics (continued)

Metric | Array | Disk | FAST Cache | Pool LUN | RAID group | RAID group LUN | SP Front-end port | Storage pool | Storage processor | Tier

Deduplicated LUNs Shared Capacity (GBs)

X

Deduplication and Snapshot Savings (GBs)

X

Deduplication Rate X

Dirty Cache Pages (%)

X

Dirty Cache Pages (MB)

X

Read Cache Size (MB)

X

Write Cache Flushes (MB/s)

X

Write Cache Size (MB)

X

Higher Tier (GB) X

Lower Tier (GB) X

Subscribed (%) X

VNX File/eNAS metrics

EMC Storage Analytics provides VNX File metrics for Array, Data Mover (includes Virtual Data Mover), dVol, File Pool, and File System.

VNX File/eNAS metrics for Array

l Elapsed collect time (ms)

l New metrics in each collect call

l New resources in each collect call

l Number of down resources

l Number of metrics collected

l Number of resources collected

VNX File/eNAS metrics for Data Mover

Table 47 VNX File/eNAS metrics for Data Mover

Metric Group Metric

Cache Buffer Cache Hit Ratio (%)

DNLC Hit Ratio (%)

Open File Cache Hit Ratio (%)

Configuration Type

CPU Busy (%)

Disk Reads (MB/s)

Total Bandwidth (MB/s)

Writes (MB/s)

Network CIFS Average Read Size (KB)

CIFS Average Write Size (KB)

CIFS Reads (IOPS)

CIFS Reads (MB/s)

CIFS Total Operations (IOPS)

CIFS Total Bandwidth (MB/s)

CIFS Writes (IOPS)

CIFS Writes (MB/s)

NFS Average Read Size (Bytes)

NFS Average Write Size (Bytes)

NFS Reads (IOPS)

NFS Reads (MB/s)

NFS Total Bandwidth (MB/s)

NFS Total Operations (IOPS)

Table 47 VNX File/eNAS metrics for Data Mover (continued)

Metric Group Metric

NFS Writes (IOPS)

NFS Writes (MB/s)

Network In Bandwidth (MB/s)

Network Out Bandwidth (MB/s)

Total Network Bandwidth (MB/s)

Network > NFSv2, NFSv3, and NFSv4 Read Calls/s

Read Errors/s

Read Response Time (ms)

Write Calls/s

Write Errors/s

Write Response Time (ms)

Network > NFSv3 Access Calls/s

Access Errors/s

Access Response Time (ms)

GetAttr Calls/s

GetAttr Errors/s

GetAttr Response Time (ms)

Lookup Calls/s

Lookup Errors/s

Lookup Response Time (ms)

SetAttr Calls/s

SetAttr Errors/s

SetAttr Response Time (ms)

Network > NFSv4 Close Calls/s

Close Errors/s

Close Response Time (ms)

Compound Calls/s

Compound Errors/s

Compound Response Time (ms)

Open Calls/s

Open Errors/s

Open Response Time (ms)

Network > SMB1 Close Average Response Time (ms)

Table 47 VNX File/eNAS metrics for Data Mover (continued)

Metric Group Metric

Close Calls/s

Close Max Response Time (ms)

NTCreateX Average Response Time (ms)

NTCreateX Calls/s

NTCreateX Max Response Time (ms)

ReadX Average Response Time (ms)

ReadX Calls/s

ReadX Max Response Time (ms)

Trans2Prim Average Response Time (ms)

Trans2Prim Calls/s

Trans2Prim Max Response Time (ms)

WriteX Average Response Time (ms)

WriteX Calls/s

WriteX Max Response Time (ms)

Network > SMB2 Close Average Response Time (ms)

Close Calls/s

Close Max Response Time (ms)

Flush Average Response Time (ms)

Flush Calls/s

Flush Max Response Time (ms)

Create Average Response Time (ms)

Create Calls/s

Create Max Response Time (ms)

IOCTL Average Response Time (ms)

IOCTL Calls/s

IOCTL Max Response Time (ms)

Queryinfo Average Response Time (ms)

Queryinfo Calls/s

Queryinfo Max Response Time (ms)

Read Average Response Time (ms)

Read Calls/s

Read Max Response Time (ms)

Write Average Response Time (ms)

Table 47 VNX File/eNAS metrics for Data Mover (continued)

Metric Group Metric

Write Calls/s

Write Max Response Time (ms)

VNX File/eNAS metrics for dVol, File pool, and File system

Table 48 VNX File/eNAS metrics for dVol, File pool, and File system

Metric dVol File pool File system Note

Average Read Size (Bytes) X X

Average Write Size (Bytes) X X

Average Completion Time (ms/call)

X

Average Service Time (ms/call) X

Available Capacity (GB) X X

Capacity (GB) X X X

Consumed Capacity (GB) X X

Max Capacity (GB) X If automatic extension is enabled, the file system automatically extends to this maximum size when the high water mark is reached. The default value for the high water mark is 90 percent (see the sketch following this table).

Full (%) X

IO Retries (IO/s) X

Queue Length X

Reads (IO/s) X X

Reads (MB/s) X X

Total Operations (IO/s) X

Total Bandwidth (MB/s) X X

Utilization (%) X

Writes (IO/s) X X

Writes (MB/s) X X

Thin Provisioning X True indicates that the file system is enabled for virtual provisioning, an option that can only be used with automatic file system extension. Combining automatic file system extension with virtual provisioning allows the file system to grow gradually and as needed. When virtual provisioning is enabled, NFS and CIFS clients receive reports for either the virtual maximum file system size or the real file system size, whichever is larger.

Table 48 VNX File/eNAS metrics for dVol, File pool, and File system (continued)

Metric dVol File pool File system Note

Read IO Ratio (%) X

Write IO Ratio (%) X

Read Requests (Requests/s) X

Write Requests (Requests/s) X

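The automatic extension behavior described in the Max Capacity note for Table 48 can be pictured with a short sketch. This is a simplified illustration and not the actual Control Station logic: it only assumes that extension is considered once consumed space crosses the high water mark (90 percent by default) and that the file system never grows past its configured maximum size. The function name is hypothetical.

```python
def should_auto_extend(consumed_gb: float, current_size_gb: float,
                       max_capacity_gb: float,
                       high_water_mark: float = 0.90) -> bool:
    """Illustrative only: decide whether an auto-extending file system
    would grow, given current usage and its configured maximum size."""
    if current_size_gb >= max_capacity_gb:
        return False                     # already at the configured maximum
    return consumed_gb / current_size_gb >= high_water_mark

# Example: 92 GB consumed on a 100 GB file system capped at 200 GB -> True
print(should_auto_extend(92, 100, 200))
```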
VNXe metrics

EMC Storage Analytics provides VNXe metrics for Array, Disk, FAST Cache, File System, LUN, Storage Pool, Tier, VVol, Virtual Disk, and Storage Processor. Only the resource kinds with associated metrics are shown.

The following metrics are available:

l Elapsed collect time (ms)

l New metrics in each collect call

l New resources in each collect call

l Number of down resources

l Number of metrics collected

l Number of resources collected

Table 49 VNXe metrics for Disk, FAST Cache, File System, LUN, Storage Pool, Tier, Virtual Disk

Metric group | Metric | Disk | FAST Cache | File System | LUN | Storage Pool | Tier

Capacity Size (GB) X

Available Capacity (GB) X X X X

Capacity/Total capacity (GB)

X X

Consumed Capacity (GB)

X X X X

Full (%) X X X

Thin Provisioning X

Subscribed (%) X

User Capacity (GB) X X

Configuration State X

RAID type X X

FAST Cache X

Table 49 VNXe metrics for Disk, FAST Cache, File System, LUN, Storage Pool, Tier, Virtual Disk (continued)

Metric group | Metric | Disk | FAST Cache | File System | LUN | Storage Pool | Tier

Disk Count X

Performance Busy (%) X

Reads (IO/s) X X

Reads (MB/s) X X

Total Latency (ms) X

Writes (IO/s) X X

Writes (MB/s) X X

Queue Length X

Data to move Down (GB) X

Data to move Up (GB) X

Data to move Within (GB)

X

Disk Count X

Table 50 VNXe metrics for Storage Processor

Metric Group Metric

Cache Dirty Cache Pages (MB)

Read Cache Hit Ratio (%)

Write Cache Hit Ratio (%)

Network CIFS Reads (IOPS)

CIFS Reads (MB/s)

CIFS Writes (IOPS)

CIFS Writes (MB/s)

Network In Bandwidth (MB/s)

Network Out Bandwidth (MB/s)

NFS Reads (IOPS)

NFS Reads (MB/s)

NFS Writes (IOPS)

NFS Writes (MB/s)

Network > NFSv2 Read Calls/s

Read Errors/s

Read Response Time (ms)

Table 50 VNXe metrics for Storage Processor (continued)

Metric Group Metric

Reads (IOPS)

Write Calls/s

Write Errors/s

Write Response Time (ms)

Writes (IOPS)

Network > NFSv3 Access Calls/s

Access Errors/s

Access Response Time (ms)

GetAttr Calls/s

GetAttr Errors/s

GetAttr Response Time (ms)

Lookup Calls/s

Lookup Errors/s

Lookup Response Time (ms)

Read Calls/s

Read Errors/s

Read Response Time (ms)

Reads (IOPS)

SetAttr Calls/s

SetAttr Errors/s

SetAttr Response Time (ms)

Write Calls/s

Write Errors/s

Write Response Time (ms)

Writes (IOPS)

Network > SMB1 Close Average Response Time (ms)

Close Calls/s

Close Max Response Time (ms)

NTCreateX Average Response Time (ms)

NTCreateX Calls/s

NTCreateX Max Response Time (ms)

Reads (IOPS)

Reads (MB/s)

Table 50 VNXe metrics for Storage Processor (continued)

Metric Group Metric

ReadX Average Response Time (ms)

ReadX Calls/s

ReadX Max Response Time (ms)

Trans2Prim Average Response Time (ms)

Trans2Prim Calls/s

Trans2Prim Max Response Time (ms)

Writes (IOPS)

Writes (MB/s)

WriteX Average Response Time (ms)

WriteX Calls/s

WriteX Max Response Time (ms)

Network > SMB2 Close Average Response Time (ms)

Close Calls/s

Close Max Response Time (ms)

Create Average Response Time (ms)

Create Calls/s

Create Max Response Time (ms)

Flush Average Response Time (ms)

Flush Calls/s

Flush Max Response Time (ms)

Ioctl Average Response Time (ms)

Ioctl Calls/s

Ioctl Max Response Time

Queryinfo Average Response Time (ms)

Queryinfo Calls/s

Queryinfo Max Response Time (ms)

Read Average Response Time (ms)

Read Calls/s

Read Max Response Time (ms)

Reads (IOPS)

Reads (MB/s)

Write Average Response Time (ms)

Write Calls/s

Table 50 VNXe metrics for Storage Processor (continued)

Metric Group Metric

Write Max Response Time (ms)

Writes (IOPS)

Writes (MB/s)

Performance Busy (%)

Reads (IOPS)

Reads (MB/s)

Writes (IOPS)

Writes (MB/s)

VPLEX metrics

EMC Storage Analytics provides VPLEX metrics for Cluster, Director, Distributed Device, Engine, Ethernet Port, Extent, FC Port, Local Device, Storage Array, Storage View, Storage Volume, Virtual Volume, and VPLEX Metro.

Table 51 VPLEX metrics for Cluster

Metric group Metric Description

Status Cluster Type Local or Metro.

Status Health State Possible values include:

l OK - Cluster is functioning normally.

l Degraded - Cluster is not functioning at an optimal level. This may indicate non-functioning remote virtual volumes, unhealthy devices or storage volumes, suspended devices, conflicting director count configuration values, or out-of-date devices.

l Unknown - VPLEX cannot determine the cluster's health state, or the state is invalid.

l Major failure - Cluster is failing and some functionality may be degraded or unavailable. This may indicate complete loss of back-end connectivity.

l Minor failure - Cluster is functioning, but some functionality may be degraded. This may indicate one or more unreachable storage volumes.

l Critical failure - Cluster is not functioning and may have failed completely. This may indicate a complete loss of back-end connectivity.

Status Operational Status

During transition periods, the cluster moves from one operational state to another. Possible values include:

l OK - Cluster is operating normally.

l Cluster departure - One or more of the clusters cannot be contacted. Commands affecting distributed storage are refused.

l Degraded - Cluster is not functioning at an optimal level. This may indicate non-functioning remote virtual volumes, unhealthy devices or storage volumes, suspended devices, conflicting director count configuration values, or out-of-date devices.

l Device initializing - If clusters cannot communicate with each other, then the distributed-device will be unable to initialize.

l Device out of date - Child devices are being marked fully out of date. Sometimes this occurs after a link outage.

l Expelled - Cluster has been isolated from the island either manually (by an administrator) or automatically (by a system configuration setting).

l Shutdown - Cluster's directors are shutting down.

l Suspended exports - Some I/O is suspended. This could be the result of a link failure or the loss of a director. Other states might indicate the true problem. The VPLEX might be waiting for you to confirm the resumption of I/O.

l Transitioning - Components of the software are recovering from a previous incident (for example, the loss of a director or the loss of an inter-cluster link).

Table 51 VPLEX metrics for Cluster (continued)

Metric group Metric Description

Capacity Exported Virtual Volumes

Number of exported virtual volumes.

Exported Virtual Volumes (GB)

Gigabytes of exported virtual volumes.

Used Storage Volumes

Number of used storage volumes.

Used Storage Volumes (GB)

Gigabytes of used storage volumes.

Unused Storage Volumes

Number of unused storage volumes.

Unused Storage Volumes (GB)

Gigabytes of unused storage volumes.

Table 52 VPLEX metrics for Director

Metric Group

Metric Description

CPU Busy (%) Percentage of director CPU usage

Status Operational Status Possible values include:

l OK - Functioning normally

l Degraded - May be out-of-date compared to its mirror

l Unknown - Cannot determine the health state, or the state is invalid

l Error - VPLEX has marked the object as hardware-dead

l Starting - Not yet ready

l Lost-communication - Object is unreachable

Storage Volumes

Read Latency (ms) Average read latency in milliseconds

Write Latency (ms) Average write latency in milliseconds

Virtual Volumes

Read Latency (ms) Average read latency in milliseconds

Reads (MB/s) Number of bytes read per second

Total Reads and Writes (counts/s)

Total number of reads and writes per second

Write Latency (ms) Average write latency in milliseconds

Writes (MB/s) Number of bytes written per second

Memory Memory Used (%) Percentage of memory heap usage by the firmware for its accounting on the director. This value is not the percentage of cache pages in use for user data

Front-end Director

Aborts (counts/s) Number of aborted I/O operations per second through the director's front-end ports

Active Operations (counts) Number of active, outstanding I/O operations on the director's front-end ports

Table 52 VPLEX metrics for Director (continued)

Metric Group

Metric Description

Compare and Write Latency (ms)

Average time, in milliseconds, that it takes for VAAI CompareAndWrite request to complete on the director's front-end ports

Operations (counts/s) Number of I/O operations per second through the director's front-end ports

Queued Operations (counts)

Number of queued, outstanding I/O operations on the director's front-end ports

Read Latency (ms) Average time, in milliseconds, that it takes for read requests to complete on the director's front-end ports. Total time it takes VPLEX to complete a read request

Reads (counts/s) Number of read operations per second on the director's front-end ports

Reads (MB/s) Number of bytes per second read from the director's front-end ports

Write Latency (ms) Average time, in milliseconds, that it takes for write requests to complete on the director's front-end ports. Total time it takes VPLEX to complete a write request

Writes (counts/s) Number of write operations per second on the director's front-end ports

Writes (MB/s) Number of bytes per second written to the director's front-end ports

Back-end Director

Aborts (counts/s) Number of aborted I/O operations per second on the director's back-end ports

Operations (counts/s) Number of I/O operations per second through the director's back-end ports

Reads (counts/s) Number of read operations per second by the director's back-end ports

Reads (MB/s) Number of bytes read per second by the director's back-end ports

Resets (counts/s) Number of LUN resets issued per second through the director's back-end ports. LUN resets are issued after 20 seconds of LUN unresponsiveness to outstanding operations.

Timeouts (counts/s) Number of timed out I/O operations per second on the director's back-end ports. Operations time out after 10 seconds

Writes (MB/s) Number of bytes written per second by the director's back-end ports

COM Latency Average Latency (ms) Average time, in milliseconds, that it took for inter-director WAN messages to complete on this director to the specified cluster in the last 5-second interval

Maximum Latency (ms) Maximum time, in milliseconds, that it took for an inter-director WAN message to complete on this director to the specified cluster in the last 5-second interval

Minimum Latency (ms) Minimum time, in milliseconds, that it took for an inter-director WAN message to complete on this director to the specified cluster in the last five-second interval

WAN Link Usage

Distributed Device Bytes Received (MB/s)

Number of bytes of distributed-device traffic per second received on the director's WAN ports

Distributed Device Bytes Sent (MB/s)

Number of bytes of distributed-device traffic per second sent on the director's WAN ports

Distributed Device Rebuild Bytes Received (MB/s)

Number of bytes of distributed-device, rebuild/migration traffic per second received on the director's WAN ports

Distributed Device Rebuild Bytes Sent (MB/s)

Number of bytes of distributed-device rebuild/migration per second traffic sent on the director's WAN ports

Table 52 VPLEX metrics for Director (continued)

Metric Group

Metric Description

FC WAN COM Bytes Received (MB/s) Number of bytes of WAN traffic per second received on this director's Fibre Channel port

Bytes Sent (MB/s) Number of bytes of WAN traffic per second sent on this director's Fibre Channel port

Packets Received (counts/s)

Number of packets of WAN traffic per second received on this director's Fibre Channel port

Packets Sent (counts/s) Number of packets of WAN traffic per second sent on this director's Fibre Channel port

IP WAN COM Average Latency (ms) Average time, in milliseconds, that it took for inter-director WAN messages to complete on this director's IP port in the last 5-second interval

Bytes Received (MB/s) Number of bytes of WAN traffic per second received on this director's IP port

Bytes Sent (MB/s) Number of bytes of WAN traffic per second sent on this director's IP port

Maximum Latency (ms) Maximum time, in milliseconds, that it took for an inter-director WAN message to complete on this director's IP port in the last five-second interval

Minimum Latency (ms) Minimum time, in milliseconds, that it takes for an inter-director WAN message to complete on this director's IP port in the last five-second interval

Packets Received (counts/s)

Number of packets of WAN traffic per second received on this director's IP port

Packets Resent (counts/s) Number of WAN traffic packets re-transmitted per second that were sent on this director's IP port

Packets Sent (counts/s) Number of packets of WAN traffic per second sent on this director's IP port

Received Packets Dropped (counts/s)

Number of WAN traffic packets dropped per second that were received on this director's IP port

Sent Packets Dropped (counts/s)

Number of WAN traffic packets dropped per second that were sent on this director's IP port

Table 53 VPLEX metrics for Distributed Device

Metric Group

Metric Description

Capacity Capacity (GB) Capacity in gigabytes

Status Health State Possible values include:

l OK - Functioning normally

l Degraded - May be out-of-date compared to its mirror

l Unknown - Cannot determine the health state, or the state is invalid

l Non-recoverable error - May be out-of-date compared to its mirror, or VPLEX cannot determine the health state

l Critical failure - VPLEX has marked the object as hardware-dead

Table 53 VPLEX metrics for Distributed Device (continued)

Metric Group

Metric Description

Operational Status

Possible values include:

l OK - Functioning normally

l Degraded - May be out-of-date compared to its mirror

l Unknown - Cannot determine the health state, or the state is invalid

l Error - VPLEX has marked the object as hardware-dead

l Starting - Not yet ready

l Lost-communication - Object is unreachable

Service Status Possible values include:

l Cluster unreachable - VPLEX cannot reach the cluster; the status is unknown

l Need resume - The other cluster detached the distributed device while it was unreachable. Distributed device needs to be manually resumed for I/O to resume at this cluster.

l Need winner - All clusters are reachable again, but both clusters had detached this distributed device and resumed I/O. You must pick a winner cluster whose data will overwrite the other cluster's data for this distributed device.

l Potential conflict - Clusters have detached each other resulting in a potential for detach conflict.

l Running - Distributed device is accepting I/O

l Suspended - Distributed device is not accepting new I/O; pending I/O requests are frozen.

l Winner-running - This cluster detached the distributed device while the other cluster was unreachable, and is now sending I/O to the device.

Table 54 VPLEX metrics for Engine

Metric Group

Metric Description

Status Health State Possible values include:

l OK - Functioning normally

l Degraded - May be out-of-date compared to its mirror

l Unknown - Cannot determine the health state, or the state is invalid

l Non-recoverable error - May be out-of-date compared to its mirror, or VPLEX cannot determine the health state

l Critical failure - VPLEX has marked the object as hardware-dead

Operational Status

Possible values include:

l OK - Functioning normally

l Degraded - May be out-of-date compared to its mirror

l Unknown - Cannot determine the health state, or the state is invalid

Table 54 VPLEX metrics for Engine (continued)

Metric Group

Metric Description

l Error - VPLEX has marked the object as hardware-dead

l Starting - Not yet ready

l Lost-communication - Object is unreachable

Table 55 VPLEX metrics for Ethernet Port

Metric Group

Metric Description

Status Operational Status

Possible values include:

l OK - Functioning normally

l Degraded - May be out-of-date compared to its mirror

l Unknown - Cannot determine the health state, or the state is invalid

l Error - VPLEX has marked the object as hardware-dead

l Starting - Not yet ready

l Lost-communication - Object is unreachable

Table 56 VPLEX metrics for Extent Device

Metric Group

Metric Description

Capacity Capacity (GB) Capacity in gigabytes

Status Health State Possible values include:

l OK - The extent is functioning normally

l Degraded - The extent may be out-of-date compared to its mirror (applies only to extents that are part of a RAID 1 device)

l Unknown - VPLEX cannot determine the extent's operational state, or the state is invalid

l Non-recoverable error - The extent may be out-of-date compared to its mirror (applies only to extents that are part of a RAID 1 device), and/or the health state cannot be determined

Operational Status

Possible values include:

l OK - The extent is functioning normally

l Degraded - The extent may be out-of-date compared to its mirror (applies only to extents that are part of a RAID 1 device)

l Unknown - VPLEX cannot determine the extent's operational state, or the state is invalid

l Starting - The extent is not yet ready

Table 57 VPLEX metrics for Fibre Channel Port

Metric Group

Metric Description

Status Operational Status

Possible values include:

l OK - Functioning normally

l Degraded - May be out-of-date compared to its mirror

l Unknown - Cannot determine the health state, or the state is invalid

l Error - VPLEX has marked the object as hardware-dead

l Starting - Not yet ready

l Lost-communication - Object is unreachable

Table 58 VPLEX metrics for Local Device

Metric Group

Metric Description

Capacity Capacity (GB) Capacity in gigabytes

Status Health State Possible values include:

l OK - Functioning normally

l Degraded - May be out-of-date compared to its mirror

l Unknown - Cannot determine the health state, or the state is invalid

l Non-recoverable error - May be out-of-date compared to its mirror, or VPLEX cannot determine the health state

l Critical failure - VPLEX has marked the object as hardware-dead

Operational Status

Possible values include:

l OK - Functioning normally

l Degraded - May be out-of-date compared to its mirror

l Unknown - Cannot determine the health state, or the state is invalid

l Error - VPLEX has marked the object as hardware-dead

l Starting - Not yet ready

l Lost-communication - Object is unreachable

Service Status Possible values include:

l Cluster unreachable - VPLEX cannot reach the cluster; the status is unknown

l Need resume - The other cluster detached the distributed device while it was unreachable. Distributed device needs to be manually resumed for I/O to resume at this cluster.

l Need winner - All clusters are reachable again, but both clusters had detached this distributed device and resumed I/O. You must pick a winner cluster whose data will overwrite the other cluster's data for this distributed device.

Table 58 VPLEX metrics for Local Device (continued)

Metric Group

Metric Description

l Potential conflict - Clusters have detached each other resulting in a potential for detach conflict.

l Running - Distributed device is accepting I/O

l Suspended - Distributed device is not accepting new I/O; pending I/O requests are frozen

l Winner-running - This cluster detached the distributed device while the other cluster was unreachable, and is now sending I/O to the device.

Table 59 VPLEX metrics for Storage Array

Metric Group Metric Description

Capacity Allocated Storage Volumes Number of allocated storage volumes

Allocated Storage Volumes (GB) Gigabytes of allocated storage volumes

Used Storage Volumes Number of used storage volumes

Used Storage Volumes (GB) Gigabytes of used storage volumes

Table 60 VPLEX metrics for Storage View

Metric Group Metric Description

Capacity Virtual Volumes (GB)

Gigabytes of virtual volumes

Status Operational Status Possible values include:

l OK - Functioning normally

l Degraded - May be out-of-date compared to its mirror

l Unknown - Cannot determine the health state, or the state is invalid

l Error - VPLEX has marked the object as hardware-dead

l Starting - Not yet ready

l Lost-communication - Object is unreachable

Table 61 VPLEX metrics for Storage Volume

Metric Group Metric Description

Capacity Capacity (GB) Capacity in gigabytes

Status Health State Possible values include:

l OK - The storage volume is functioning normally

l Degraded - The storage volume may be out-of-date compared to its mirror

Table 61 VPLEX metrics for Storage Volume (continued)

Metric Group Metric Description

l Unknown - Cannot determine the health state, or the state is invalid

l Non-recoverable error - May be out-of-date compared to its mirror, or VPLEX cannot determine the health state

l Critical failure - VPLEX has marked the object as hardware-dead

Operational Status

Possible values include:

l OK - Functioning normally

l Degraded - May be out-of-date compared to its mirror (This state applies only to a storage volume that is part of a RAID 1 Metadata Volume)

l Unknown - Cannot determine the health state, or the state is invalid

l Error - VPLEX has marked the object as hardware-dead

l Starting - Not yet ready

l Lost-communication - Object is unreachable

Table 62 VPLEX metrics for Virtual Volume

Metric Group Metric Description

Capacity Capacity (GB) Capacity in gigabytes

Locality Locality Possible values include:

l Local - The volume is local to the enclosing cluster

l Remote - The volume is made available by a different cluster than the enclosing cluster, and is accessed remotely

l Distributed - The virtual volume has or is capable of having legs at more than one cluster

Status Health State Possible values include:

l OK - Functioning normally

l Unknown - Cannot determine the health state, or the state is invalid

l Major failure - One or more of the virtual volume's underlying devices is out-of-date, but will never rebuild

l Minor failure - One or more of the virtual volume's underlying devices is out-of-date, but will rebuild

Operational Status

Possible values include:

l OK - Functioning normally

l Degraded - The virtual volume may have one or more out-of-date devices that will eventually rebuild

l Unknown - VPLEX cannot determine the virtual volume's operational state, or the state is invalid

Table 62 VPLEX metrics for Virtual Volume (continued)

Metric Group Metric Description

l Error - One or more of the virtual volume's underlying devices is hardware-dead

l Starting - Not yet ready

l Stressed - One or more of the virtual volume's underlying devices is out-of-date and will never rebuild

Service Status Possible values include:

l Running - I/O is running

l Inactive - The volume is part of an inactive storage-view and is not visible from the host

l Unexported - The volume is unexported

l Suspended - I/O is suspended for the volume

l Cluster-unreachable - Cluster is unreachable at this time

l Need-resume - Issue re-attach to resume after link has returned

Table 63 VPLEX metrics for VPLEX Metro

Metric Group Metric Description

Status Health State Possible values include:

l OK - Cluster is functioning normally

l Degraded - Cluster is not functioning at an optimal level. This may indicate non-functioning remote virtual volumes, unhealthy devices or storage volumes, suspended devices, conflicting director count configuration values, or out-of-date devices.

l Unknown - VPLEX cannot determine the cluster's health state, or the state is invalid

l Major failure - Cluster is failing and some functionality may be degraded or unavailable. This may indicate complete loss of back-end connectivity.

l Minor failure - Cluster is functioning, but some functionality may be degraded. This may indicate one or more unreachable storage volumes.

l Critical failure - Cluster is not functioning and may have failed completely. This may indicate a complete loss of back-end connectivity.

Operational Status

During transition periods, the cluster moves from one operational state to another. Possible values include:

l OK - Cluster is operating normally

l Cluster departure - One or more of the clusters cannot be contacted. Commands affecting distributed storage are refused.

l Degraded - Cluster is not functioning at an optimal level. This may indicate non-functioning remote virtual volumes, unhealthy devices or storage volumes, suspended devices, conflicting director count configuration values, or out-of-date devices.

l Device initializing - If clusters cannot communicate with each other, then the distributed-device will be unable to initialize.

Table 63 VPLEX metrics for VPLEX Metro (continued)

Metric Group Metric Description

l Device out of date - Child devices are being marked fully out of date. Sometimes this occurs after a link outage.

l Expelled - Cluster has been isolated from the island either manually (by an administrator) or automatically (by a system configuration setting).

l Shutdown - Cluster's directors are shutting down.

l Suspended exports - Some I/O is suspended. This could be the result of a link failure or the loss of a director. Other states might indicate the true problem. The VPLEX might be waiting for you to confirm the resumption of I/O.

l Transitioning - Components of the software are recovering from a previous incident (for example, the loss of a director or the loss of an inter-cluster link).

XtremIO metrics

EMC Storage Analytics provides XtremIO metrics for Cluster, Data Protection Group, Snapshot, SSD, Storage Controller, Volume, and X-Brick.

Table 64 XtremIO metrics for Cluster

Metric Group Metric

Capacity Deduplication Ratio

Compression Ratio

Total Efficiency

Thin Provision Savings (%)

Data Reduction Ratio

Capacity > Physical Available Capacity (TB)

Remaining Capacity (%)

Used Capacity (%)

Consumed Capacity (TB)

Total Capacity (TB)

Capacity > Volume Available Capacity (TB)

Consumed Capacity (TB)

Total Capacity (TB)

Performance Total Bandwidth (MB/s)

Total Latency (ms)

Total Operations (IO/s)

Performance > Read Operations

Read Bandwidth (MB/s)

Table 64 XtremIO metrics for Cluster (continued)

Metric Group Metric

Read Latency (ms)

Reads (IO/s)

Performance > Write Operations

Writes (IO/s)

Write Bandwidth (MB/s)

Write Latency (ms)

Status Health State

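The ratio metrics in the Capacity group of Table 64 are related to one another. The sketch below is a hedged illustration that assumes the relationships commonly used for all-flash data reduction reporting (data reduction combines deduplication and compression, and total efficiency additionally credits thin-provisioning savings); these formulas are assumptions for orientation, not definitions taken from this guide, and the function is hypothetical.

```python
def cluster_efficiency(dedup_ratio: float, compression_ratio: float,
                       thin_savings_pct: float) -> dict:
    """Illustrative only: combine the XtremIO cluster ratio metrics.

    Assumes Data Reduction Ratio = Deduplication Ratio * Compression Ratio,
    and that Total Efficiency additionally reflects thin-provision savings
    (75% savings means only a quarter of the provisioned space is written).
    """
    data_reduction = dedup_ratio * compression_ratio
    thin_factor = 1.0 / (1.0 - thin_savings_pct / 100.0)
    return {
        "Data Reduction Ratio": data_reduction,
        "Total Efficiency": data_reduction * thin_factor,
    }

# Example: 3:1 deduplication, 2:1 compression, 75% thin savings -> 6:1 and 24:1
print(cluster_efficiency(3.0, 2.0, 75.0))
```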
Table 65 XtremIO metrics for Data Protection Group

Metric Group Metric

Performance Average SSD Utilization (%)

Table 66 XtremIO metrics for Snapshot

Metric Group Metric

Capacity Consumed Capacity in XtremIO (GB) Consumed capacity in gigabytes without "zeroed" space

Consumed Capacity in VMware (GB) Consumed capacity in gigabytes, including "zeroed" space

Note

This metric is available only when a datastore is built on top of the snapshot. The value of the metric is the consumed datastore capacity, which might not be the same as the consumed snapshot capacity.

Total Capacity (GB)

Performance Average Block Size (KB)

Total Bandwidth (MB/s)

Total Latency (usec)

Total Operations (IOPS)

Unaligned (%)

Performance > Read Operations Average Block Size (KB)

Average Small Reads (IOPS)

Average Unaligned Reads (IOPS)

Table 66 XtremIO metrics for Snapshot (continued)

Metric Group Metric

Read Bandwidth (MB/s)

Read Latency (usec)

Reads (IOPS)

Performance > Write Operations Average Block Size (KB)

Average Small Writes (IOPS)

Average Unaligned Writes (IOPS)

Write Bandwidth (MB/s)

Write Latency (usec)

Writes (IOPS)

Tag Configuration

Table 67 XtremIO metrics for SSD

Metric Group Metric

Capacity Disk Utilization (%)

Endurance Endurance Remaining (%)

Table 68 XtremIO metrics for Storage Controller

Metric Group Metric

Configuration Encrypted

Performance CPU 1 Utilization (%)

CPU 2 Utilization (%)

Status Health State

Table 69 XtremIO metrics for Volume

Metric Group Metric

Capacity Consumed Capacity in XtremIO (GB)

Consumed Capacity in VMware (GB)

Total Capacity (GB)

Performance Average Block Size (KB)

Total Bandwidth (MB/s)

Total Latency (usec)

Total Operations (IOPS)

Table 69 XtremIO metrics for Volume (continued)

Metric Group Metric

Unaligned (%)

Performance > Read Operations Average Block Size (KB)

Average Small Reads (IOPS)

Average Unaligned Reads (IOPS)

Read Bandwidth (MB/s)

Read Latency (usec)

Reads (IOPS)

Performance > Write Operations Average Block Size (KB)

Average Small Writes (IOPS)

Average Unaligned Writes (IOPS)

Write Bandwidth (MB/s)

Write Latency (usec)

Writes (IOPS)

Tag Configuration

Table 70 XtremIO metrics for X-Brick

Metric Group Metric

X-Brick Reporting

CHAPTER 5

Views and Reports

This chapter contains the following topics:

l Avamar views and reports

l eNAS views and reports

l Isilon views and reports

l ScaleIO views and reports

l VMAX views and reports

l VNX, VNXe, and Unity/UnityVSA views and reports

l XtremIO views and reports

Avamar views and reports

The Avamar report includes all views and can be exported to CSV and PDF formats.

You can create Avamar reports for the following metrics:

Table 71 Avamar views and reports

View Metric

DPN Status Summary General | HFS Address

General | Active Sessions (Count)

Status | State

Garbage Collection | Status

Garbage Collection | Result

DPN Capacity Summary Capacity | Total Capacity (GB)

Capacity | Used Capacity (GB)

Capacity | Used Capacity (%)

Capacity | Protected Capacity (GB)

Capacity | Protected Capacity (%)

Capacity | Free Capacity (GB)

Capacity | Free Capacity (%)

DPN Backup Summary (last 24 hours)

Success History (last 24 hours) | Successful Backups (Count)

Success History (last 24 hours) | Successful Backups (%)

Success History (last 24 hours) | Failed Backups (Count)

Success History (last 24 hours) | Successful Restores (Count)

Success History (last 24 hours) | Successful Restores (%)

Success History (last 24 hours) | Failed Restores (Count)

DPN Backup Performance (last 24 hours)

Job Performance History (last 24 hours) | Backup Average Elapsed Time

Job Performance History (last 24 hours) | Average Scanned (GB)

Job Performance History (last 24 hours) | Average Changed (GB)

Job Performance History (last 24 hours) | Average Files Changed (Count)

Job Performance History (last 24 hours) | Average Files Skipped (Count)

Job Performance History (last 24 hours) | Average Sent (GB)

Job Performance History (last 24 hours) | Average Excluded (GB)

Job Performance History (last 24 hours) | Average Skipped (GB)

Job Performance History (last 24 hours) | Average Modified & Sent (GB)

Job Performance History (last 24 hours) | Average Modified & Not Sent (GB)

Table 71 Avamar views and reports (continued)

View Metric

Job Performance History (last 24 hours) | Average Overhead (GB)

DDR Status Summary General | Hostname

General | Model Number

Status | File System Status

Status | Monitoring Status

DDR Capacity Summary Capacity | Total Capacity (GB)

Capacity | Used Capacity (GB)

Capacity | Used Capacity (%)

Capacity | Free Capacity (GB)

Capacity | Free Capacity (%)

Capacity | Protected Capacity (GB)

Capacity | Protected Capacity (%)

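The Success History percentages in Table 71 can be read against the corresponding counts. A minimal sketch follows, assuming the percentage is simply the number of successful jobs divided by all attempted jobs in the 24-hour window; the denominator is an assumption (the guide does not spell it out), and the function name is hypothetical.

```python
def success_rate_pct(successful: int, failed: int) -> float:
    """Illustrative only: percentage of successful jobs in a window,
    assuming attempted jobs = successful + failed."""
    attempted = successful + failed
    return 100.0 * successful / attempted if attempted else 0.0

# Example: 47 successful and 3 failed backups in the last 24 hours -> 94.0
print(success_rate_pct(47, 3))
```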
eNAS views and reports

The eNAS report includes all views and can be exported in CSV and PDF formats.

You can create views and reports for the following eNAS components.

Table 72 eNAS views and reports

Component Metric

Data Mover (In Use) Avg. CPU Busy (%)

Max CPU Busy (%)

Avg. Total Network Bandwidth (MB/s)

Max Total Network Bandwidth (MB/s)

Type (String)

dVol (In Use) Capacity (GB)

Avg. Average Service Time (ms/call)

Max Average Service Time (ms/call)

Avg. Utilization (%)

Max Utilization (%)

Avg. Total Operations (IO/s)

Max Total Operations (IO/s)

Avg. Total Bandwidth (MB/s)

Max Total Bandwidth (MB/s)

Table 72 eNAS views and reports (continued)

Component Metric

File Pool (In Use) Consumed Capacity (GB)

Available Capacity (GB)

Total Capacity (GB)

File system Total Capacity (GB)

Allocated Capacity (GB)

Consumed Capacity (GB)

Available Capacity (GB)

Avg. Total Operations (IO/s)

Max Total Operations (IO/s)

Avg. Total Bandwidth (MB/s)

Max Total Bandwidth (MB/s)

Isilon views and reports

You can create views and reports for Isilon components. The report name is Isilon Report, which contains all the following views:

Table 73 Isilon views and reports

Component Metric group Metric

Isilon Cluster Performance

Summary CPU Usage (%)

Number of Active Jobs

Node | External Network External Throughput Rate (In, MB/s)

External Throughput Rate (Out, MB/s)

Node | Performance Protocol Operations Rate

Node | Summary Connected Clients

Cluster | Summary Active Jobs

Inactive Jobs

Node | Summary Job Workers

Isilon Cache Performance

Node | Cache Overall Cache Hit Rate (MB/s)

Overall Cache Throughput Rate (MB/s)

Average Cache Data Age (s)

L1 Cache Starts (MB/s)

L1 Cache Hits (MB/s)

L1 Cache Misses (MB/s)

L1 Cache Waits (MB/s)

L1 Cache Prefetch Starts (MB/s)

L1 Cache Prefetch Hits (MB/s)

L1 Cache Prefetch Misses (MB/s)

Isilon Cluster Capacity

Cluster | Capacity Total Capacity (TB)

Remaining Capacity (TB)

Remaining Capacity (%)

User Data Including Protection (TB)

Snapshot Usage (TB)

Isilon Cluster Deduplication

Cluster | Deduplication Deduplicated Data (Logical, GB)

Deduplicated Data (Physical, GB)

Saved Data (Logical, GB)

Saved Data (Physical, GB)

Table 73 Isilon views and reports (continued)

Component Metric group Metric

Isilon Disk Performance

Node | Performance Protocol Operations Rate

Disk Activity (%)

Disk Operations Rate (Read)

Disk Operations Rate (Write)

Average Disk Operation Size (MB)

Average Pending Disk Operations Count

Slow Disk Access Rate

Isilon File System Performance

Node | Performance File System Events Rate

Deadlock File System Events Rate

Locked File System Events Rate

Contended File System Events Rate

Blocking File System Events Rate

Isilon Network Performance

Node | External Network External Network Throughput Rate (In, MB/s)

External Network Throughput Rate (Out, MB/s)

External Network Packets Rate (In, MB/s)

External Network Packets Rate (Out, MB/s)

External Network Errors (In, MB/s)

External Network Errors (Out, MB/s)

Isilon Node Performance

Node | Summary CPU Usage (%)

Node | External Network External Throughput Rate (In, MB/s)

External Throughput Rate (Out, MB/s)

Node | Performance Disk Activity (%)

Disk Throughput Rate (Read)

Disk Throughput Rate (Write)

Disk Operations Rate (Read)

Disk Operations Rate (Write)

Protocol Operations Rate

Slow Disk Access Rate

Node | Summary Active Clients

Connected Clients

Pending Disk Operations Latency (ms)

ScaleIO views and reports

You can create views and reports for the following ScaleIO components:

Table 74 ScaleIO views and reports

Component Metric

ScaleIO Volume Number of Child Volumes (Count)

Number of Descendant Volumes (Count)

Number of Mapped SDCs (Count)

Volume Size (GB)

Average Read I/O Size (MB)

Average Write I/O Size (MB)

Total Read IO/s

Total Write IO/s

Total Reads (MB/s)

Total Writes (MB/s)

ScaleIO Protection Domain Maximum Capacity (GB)

Protected Capacity (GB)

Snap Used Capacity (GB)

Thick Used Capacity (GB)

Thin Used Capacity (GB)

Unused Capacity (GB)

Used Capacity (GB)

Average Read I/O Size (MB)

Average Write I/O Size (MB)

Total Read IO/s

Total Write IO/s

Total Reads (MB/s)

Total Writes (MB/s)

ScaleIO SDC Number of Mapped Volumes (Count)

Total Mapped Capacity (GB)

Average Read I/O Size (MB)

Average Write I/O Size (MB)

Total Read IO/s

Total Write IO/s

Total Read (MB/s)

Table 74 ScaleIO views and reports (continued)

Component Metric

Total Write (MB/s)

ScaleIO SDS Maximum Capacity (GB)

Snap Used Capacity (GB)

Thick Used Capacity (GB)

Thin Used Capacity (GB)

Unused Capacity (GB)

Used Capacity (GB)

Average Read IO Size (MB)

Average Write IO Size (MB)

Total Read IO/s

Total Write IO/s

Total Read (MB/s)

Total Write (MB/s)

Note

The MDM list view does not contain component-specific metrics.

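The ScaleIO average I/O size metrics can be related to the bandwidth and IOPS metrics reported for the same component. The sketch below is an illustration only, assuming the average I/O size equals bandwidth divided by the operation rate over the same collection interval; that relationship is an assumption, not a statement from this guide, and the function name is hypothetical.

```python
def average_io_size_mb(bandwidth_mb_per_s: float, ops_per_s: float) -> float:
    """Illustrative only: average I/O size in MB, assuming it equals
    bandwidth (MB/s) divided by the operation rate (IO/s)."""
    return bandwidth_mb_per_s / ops_per_s if ops_per_s else 0.0

# Example: 200 MB/s of reads at 12,800 IO/s -> 0.015625 MB (about 16 KB)
print(average_io_size_mb(200, 12_800))
```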
VMAX views and reports

VMAX reports consist of multiple component list views with the supported VMAX metrics. The reports can be exported in CSV and PDF formats.

You can create the following views and reports:

Table 75 VMAX views and reports

Metric SRDF Report VMAX3 Report

Device X

Front-End Director X

Front-End Port X

Remote Replica Group X

SRDF Director X

SRDF Port X

SLO X

Storage Group X

Storage Resource Pool X

The metrics available for each component are listed in the following table.

Table 76 VMAX available metrics

Metric | Storage Group | Device | Front-End Director | Front-End Port | SRDF Director | Remote Replica Group | Storage Resource Pool

Total Capacity (GB) X X X

Current Size (GB) X X X

Used Capacity (GB) X X X

Usable Capacity (GB) X X X

Workload (%) X X X

Under Used (%) X X X

Reads IO/s X X X

Reads MB/s X X

Writes IO/s X X X X X

Writes MB/s X X X

Total Operations IO/s X X X X X

Total Bandwidth MB/s X X X

Full (%) X X

Total Bandwidth IO/s X

Total Hits IO/s X

Tier1 Percent in Policy (%)

Tier2 Percent in Policy (%)

Tier3 Percent in Policy (%)

Tier4 Percent in Policy (%)

% Busy X

SRDFA Writes IO/s X

SRDFA Writes MB/s X

SRDFS Writes IO/s X

SRDFS Writes MB/s X

Avg. Cycle Time (seconds) X

Delta Set Extension Threshold (integer)

X

Devices in Session (count) X

HA Repeat Writes (count/s) X

Note

The current list views of SRDF Port and SLO do not contain any component-specific metrics.

VNX, VNXe, and Unity/UnityVSA views and reports

You can create views and reports for VNX, VNXe, and Unity resources. Several predefined views and templates are also available.

Report templates

Note

VNXe storage objects are contained in Unity views and reports.

The predefined report templates consist of several list views under the adapter instance, as shown in the following table.

Table 77 VNX, VNXe, and Unity/UnityVSA views and reports

Metric | VNX Block Report | VNX File Report | VNXe Report | Unity/UnityVSA

Alerts X X X X

Storage Pool (In Use)

X X X

RAID Group (In Use)

X

LUN X X X

Disk (In Use) X X X

SP Front-End Port

X

Data Mover (In Use)

X

File Pool (In Use)

X

File System X X X

dVol (In Use) X

VVol (In Use) X

Predefined views

The following sections describe the available predefined views:

l Alerts

l VNX Data Mover

l VNX File System

l VNX File Pool

l VNX dVol

l VNX LUN

l VNX Tier

l VNX FAST Cache

l VNX Storage Pool

l VNX Disk

l VNX Storage Processor

l VNX Storage Processor Front End Port

l VNX RAID Group

l Unity File System

l Unity LUN

l Unity Tier

l Unity Storage Pool

l Unity Disk

l Unity Storage Processor

l Unity VVol (In Use)

Alerts

Alert definitions apply to all resources.

Table 78 Alerts

Metric Description

Criticality level The criticality level of the alert: Warning, Immediate, or Critical

Object name Name of the impacted object

Object kind Resource kind of the impacted object

Alert impact Impacted badge (Risk, Health, or Efficiency) of the alert

Start time Start time of the alert

VNX Data Mover

Table 79 VNX Data Mover

Metric group Metric Description

CPU Busy (%) VNX Data Mover CPU busy trend

Network NFS Reads (MB/s) VNX Data Mover NFS bandwidth trend

NFS Writes (MB/s)

NFS Total Bandwidth (MB/s)

In Bandwidth (MB/s) VNX Data Mover network bandwidth trend

Out Bandwidth (MB/s)

Total Bandwidth (MB/s)

NFS Reads (IO/s) VNX Data Mover NFS IOPS trend

Table 79 VNX Data Mover (continued)

Metric group Metric Description

NFS Writes (IO/s)

NFS Total Operations (IO/s)

CPU % Busy - Average VNX Data Mover (in use)

% Busy - Max

Network Total Network Bandwidth - Average (MB/s)

Total Network Bandwidth - Max (MB/s)

Configuration Data Mover Type

VNX File System

Table 80 VNX File System

Metric group Metric Description

Performance Total Operations (IO/s) VNX file system IOPS trend

Reads (IO/s)

Writes (IO/s)

Total Bandwidth (MB/s) VNX file system bandwidth trend

Reads (MB/s)

Writes (MB/s)

Capacity Consumed Capacity (GB) VNX file system capacity trend

Total Capacity (GB)

Capacity Total Capacity (GB) VNX file system List

Allocated Capacity (GB)

Consumed Capacity (GB)

Available Capacity (GB)

Performance Avg. Total Operations (IO/s)

Max Total Operations (IO/s)

Avg. Total Bandwidth (MB/s)

Max Total Bandwidth (MB/s)


VNX File Pool

Table 81 VNX File Pool

Metric group Metric Description

Capacity Consumed Capacity (GB) VNX file pool capacity trend

Total Capacity (GB)

Capacity Available Capacity (GB) VNX file pool (in use) list

Consumed Capacity (GB)

Total Capacity (GB)

VNX dVol

Table 82 VNX dVol

Metric group Metric Description

Performance Utilization (%) VNX dVol utilization trend

Performance Total Operations (IO/s) VNX dVol IOPS trend

Reads (IO/s)

Writes (IO/s)

Performance Total Bandwidth (MB/s) VNX dVol bandwidth trend

Reads (MB/s)

Writes (MB/s)

Capacity Capacity (GB) VNX dVol (in use) list

Performance Avg. Average Service Time (uSec/call)

Max Average Service Time (uSec/call)

Avg. Utilization (%)

Max Utilization (%)

Avg. Total Operations (IO/s)

Max Total Operations (IO/s)

Avg. Total Bandwidth (MB/s)

Max Total Bandwidth (MB/s)

VNX LUN

Table 83 VNX LUN

Metric group Metric Description

Performance Total Operations (IO/s) VNX LUN IOPS trend

Reads (IO/s)

Writes (IO/s)


Performance Total Bandwidth (MB/s) VNX LUN bandwidth trend

Reads (MB/s)

Writes (MB/s)

Performance Total Latency (ms) VNX LUN total latency trend

Performance Avg. Total Operations (IO/s) VNX LUN list

Max Total Operations (IO/s)

Avg. Total Bandwidth (MB/s)

Max Total Bandwidth (MB/s)

Avg. Total Latency (ms)

Max Total Latency (ms)

Capacity Total Capacity (GB)

VNX Tier

Table 84 VNX Tier

Metric group Metric Description

Capacity Consumed Capacity (GB) VNX Tier capacity trend

Total Capacity (GB)

VNX FAST Cache

Table 85 VNX FAST Cache

Metric group Metric Description

Performance Read Cache Hit Ratio (%) VNX FAST Cache hit ratio trend

Write Cache Hit Ratio (%)

VNX Storage Pool

Table 86 VNX Storage Pool

Metric group Metric Description

Capacity Consumed Capacity (GB) VNX storage pool capacity trend

Total Capacity (GB)

Capacity Available Capacity (GB) VNX storage pool (in use) List

Consumed Capacity (GB)

Full (%)


Subscribed (%)

Configuration LUN Count

VNX Disk

Table 87 VNX Disk

Metric group Metric Description

Performance Total Operations (IO/s) VNX disk IOPS trend

Reads (IO/s)

Writes (IO/s)

Performance Total Bandwidth (MB/s) VNX disk bandwidth (MB/s) trend

Reads (MB/s)

Writes (MB/s)

Performance Total Latency (ms) VNX disk Total Latency (ms) trend

Performance Busy (%) VNX disk busy (%) trend

Capacity Capacity (GB) VNX disk (in use) List

Performance Avg. Total Operations (IO/s)

Max Total Operations (IO/s)

Avg. Total Bandwidth (MB/s)

Max Total Bandwidth (MB/s)

Avg. Total Latency (ms)

Max Total Latency (ms)

Avg. Busy (%)

Max Busy (%)

Configuration Type

VNX Storage Processor

Table 88 VNX Storage Processor

Metric group Metric Description

CPU CPU Busy (%) VNX storage processor CPU busy trend

Disk Disk Total Operations (IO/s) VNX storage processor disk IOPS trend

Disk Reads (IO/s)

Disk Writes (IO/s)

Disk Disk Total Bandwidth (MB/s) VNX storage processor disk bandwidth trend


Disk Reads (MB/s)

Disk Writes (MB/s)

VNX Storage Processor Front End Port

Table 89 VNX Storage Processor Front End Port

Metric group Metric Description

Performance Total Operations (IO/s) VNX SP front end port IOPS trend

Reads (IO/s)

Writes (IO/s)

Performance Total Bandwidth (MB/s) VNX SP front end port bandwidth trend

Reads (MB/s)

Writes (MB/s)

Performance Avg. Total Operations (IO/s) VNX SP front end port List

Max Total Operations (IO/s)

Avg. Total Bandwidth (MB/s)

Max Total Bandwidth (MB/s)

VNX RAID Group

Table 90 VNX RAID Group

Metric group Metric Description

Capacity Available Capacity (GB) VNX RAID group (in use) list

Total Capacity (GB)

Full (%)

Configuration Disk Count

LUN Count

Max Disks

Max LUNs

Unity File System

Table 91 Unity File System

Metric group Metric Description

Capacity Consumed Capacity (GB) Unity file system capacity trend


Total Capacity (GB)

Capacity Total Capacity (GB) Unity file system List

Allocated Capacity (GB)

Consumed Capacity (GB)

Available Capacity (GB)

Unity LUN

Table 92 Unity LUN

Metric group Metric Description

Performance Reads (IO/s) Unity LUN IOPS trend

Writes (IO/s)

Performance Reads (MB/s) Unity LUN bandwidth trend

Writes (MB/s)

Capacity Total Capacity (GB) Unity LUN List

Performance Avg. Reads (IO/s)

Max Reads (IO/s)

Avg. Writes (IO/s)

Max Writes (IO/s)

Avg. Reads (MB/s)

Max Reads (MB/s)

Avg. Writes (MB/s)

Max Writes (MB/s)

Unity Tier

Table 93 Unity Tier

Metric group Metric Description

Capacity Consumed Capacity (GB) Unity tier capacity trend

Total Capacity (GB)


Unity Storage Pool

Table 94 Unity Storage Pool

Metric group Metric Description

Capacity Consumed Capacity (GB) Unity storage pool capacity trend

Total Capacity (GB)

Capacity Consumed Capacity (GB) Unity storage pool (in use) List

Total Capacity (GB)

Full (%)

Subscribed (%)

Unity Disk

Table 95 Unity Disk

Metric group Metric Description

Performance Reads (IO/s) Unity disk IOPS trend

Writes (IO/s)

Performance Reads (MB/s) Unity disk bandwidth

Writes (MB/s)

Performance Busy (%) Unity disk busy trend

Capacity Size (GB) Unity disk (in use) list

Performance Avg. Reads (IO/s)

Max Reads (IO/s)

Avg. Writes (IO/s)

Max Writes (IO/s)

Avg. Reads (MB/s)

Max Reads (MB/s)

Avg. Writes (MB/s)

Max Writes (MB/s)

Avg. Busy (%)

Max Busy (%)

Configuration Type

Unity Storage Processor

Table 96 Unity Storage Processor

Metric group Metric Description

Performance Busy (%) Unity storage processor busy trend


Performance Reads (IO/s) Unity storage processor IOPS trend

Writes (IO/s)

Performance Reads (MB/s) Unity storage processor bandwidth trend

Writes (MB/s)

Network NFS Reads (IO/s) Unity storage processor NFS IOPS trend

NFS Writes (IO/s)

Network NFS Reads (MB/s) Unity storage processor NFS bandwidth trend

NFS Writes (MB/s)

Unity VVol (In Use)

Table 97 Unity VVol (In Use)

Metric group Metric Description

Unity VVol Bandwidth Trend Reads (MB/s)

Writes (MB/s)

Total (MB/s)

Reads (MB/s) (5 days forecast)

Writes (MB/s) (5 days forecast)

Total (MB/s) (5 days forecast)

Unity VVol Capacity Trend Consumed Capacity (GB)

Consumed Capacity (GB) (5 days forecast)

Total Capacity

Total Capacity (5 days forecast)

Unity VVol IO Trend Reads (IO/s)

Writes (IO/s)

Total (IO/s)

Reads (IO/s) (5 days forecast)

Writes (IO/s) (5 days forecast)

Total (IO/s) (5 days forecast)

Unity VVol (In Use) List Available Capacity (GB)

Reads (IO/s)

Writes (IO/s)

Total (IO/s)

Reads (MB/s)


Writes (MB/s)

Total (MB/s)

Latency (ms)

XtremIO views and reports

The XtremIO report includes all views and can be exported in CSV and PDF formats.

You can create views and reports for the following XtremIO components:

Table 98 XtremIO views and reports

Component Metric group Metric

XtremIO cluster capacity consumption

n/a Available Capacity (TB, physical)

Consumed Capacity (TB, physical)

Total Capacity (TB, physical)

Available Capacity (TB, volume)

Consumed Capacity (TB, volume)

Total Capacity (TB, volume)

XtremIO health state n/a Cluster health state

Storage Controller Health State

XtremIO LUN Volume|Performance:Read Operations|Read Bandwidth Read Bandwidth (MB/s)

Volume|Performance:Read Operations|Read Latency Read Latency (ms)

Volume|Performance:Read Operations|Reads Reads (IO/s)

Volume|Performance:Write Operations|Write Bandwidth Write Bandwidth (MB/s)

Volume|Performance:Write Operations|Write Latency Write Latency (ms)

Volume|Performance:Write Operations|Write Write (IO/s)

Volume|Performance |Total Bandwidth Total Bandwidth (MB/s)

Volume|Performance |Total Latency Total Latency (ms)

Volume|Performance|Total Operations Total operations (IO/s)

Volume|Capacity| Consumed Capacity in VMware Consumed Capacity in VMware (GB)

Volume|Capacity| Consumed Capacity in XtremIO Consumed Capacity in XtremIO (GB)

Volume|Capacity|Total Capacity Total Capacity (GB)

Summary (Min, Max, Average)

XtremIO performance Cluster|Performance:Read Operations|Read Bandwidth Read Bandwidth (MB/s)


Cluster|Performance:Read Operations|Read Latency Read Latency (ms)

Cluster|Performance:Read Operations|Reads Reads (IO/s)

Cluster|Performance:Write Operations|Write Bandwidth Write Bandwidth (MB/s)

Cluster|Performance:Write Operations|Write Latency Write Latency (ms)

Cluster|Performance:Write Operations|Write Write (IO/s)

Cluster|Performance |Total Bandwidth Total Bandwidth (MB/s)

Cluster|Performance |Total Latency Total Latency (ms)

Cluster|Performance|Total Operations Total Operations (IO/s)

Storage Controller | Performance | CPU 1 Utilization CPU 1 Utilization (%)

Storage Controller | Performance | CPU 2 Utilization CPU 2 Utilization (%)

Summary (Max, Min, Average )

XtremIO storage efficiency Cluster|Capacity|Deduplication Ratio Deduplication Ratio

Cluster|Capacity|Compression Ratio Compression Ratio

Cluster|Capacity|Thin Provision Savings Thin provision Savings (%)

SSD|Endurance|Endurance Remaining SSD endurance Remaining (%)

SSD|Capacity|Disk Utilization Disk Utilization (%)

Average Summary


CHAPTER 6

Remedial Actions on EMC Storage Systems

This chapter contains the following topics:

l Remedial actions overview on page 138
l Changing the service level objective (SLO) for a VMAX3 storage group on page 138
l Changing the tier policy for a File System on page 138
l Changing the tier policy for a LUN on page 139
l Extending file system capacity on page 139
l Enabling performance statistics for VNX Block on page 139
l Enabling FAST Cache on Unity and VNXe storage pools on page 140
l Enabling FAST Cache on a VNX Block storage pool on page 140
l Expanding LUN capacity on page 140
l Migrating a VNX LUN to another storage pool on page 140
l Rebooting a Data Mover on VNX storage on page 141
l Rebooting a VNX storage processor on page 141
l Extending volumes on EMC XtremIO storage systems on page 141


Remedial actions overview

Various remedial actions are available in vRealize Operations Manager, depending on the storage system. The Actions menu is available on the storage system's resource page, and remedial actions can also be initiated from the details page for an alert.

For these actions to be available, ensure that the Management Pack for EMC storage systems (EMC Adapter) is installed and the EMC Adapter instances are configured.

Other requirements:

l The EMC Adapter instances require the use of Admin credentials on the storage array.

l The vRealize Operations Manager user must have an Admin role that can access the Actions menu.

Changing the service level objective (SLO) for a VMAX3 storage group

This action is available from the Actions menu when a VMAX3 storage group is selected.

Procedure

1. From the summary page of a VMAX3 storage group, click Actions > Change SLO.

2. In the Change SLO dialog box, provide the following information:

Option Description

New SLO New SLO for the storage group

New Workload New workload type for the storage group

3. Click OK.

Results

The SLO for the storage group is changed.

Changing the tier policy for a File System

This action is available in the Actions menu when you select a File System on the Summary tab.

Procedure

1. From the File System's Summary page, click Actions > Change File System Tiering Policy.

2. In the dialog box, select a tiering policy and click Begin Action.

Results

The policy is changed. You can check the status under Recent Tasks.


Changing the tier policy for a LUN

This action is available from the Actions menu when a Unity, UnityVSA, VNX, or VNXe LUN is selected on the Summary tab.

Procedure

1. From the Summary tab of a supported storage system LUN, click Action > Change Tiering Policy.

2. In the Change Tiering Policy dialog box, select a tiering policy and click Begin Action.

Results

The policy is changed. You can check the status under Recent Tasks.

Extending file system capacity

This action is available from the Actions menu when a file system is selected or under a recommended action when a file system's used capacity is high.

Procedure

1. Do one of the following:

l Select a file system and click Actions > Extend File System.

l From the alert details window for a file system, click Extend File System.

2. In the Extend File System dialog box, type a number in the New Size text box, and then click OK.

3. Click OK in the status dialog box.

Results

The file system size is increased and the alert (if present) is cancelled.

Enabling performance statistics for VNX Block

This action is available only as a recommended action when an error or warning occurs on a VNX Block array. It is never available from the vRealize Operations Manager Actions menu.

Procedure

1. From the Summary page of the VNX Block array that reports an error or warning, click Enable Statistics.

2. In the Enable Statistics dialog box, click OK.

Results

You can confirm the action by checking the Message column under Recent Tasks.


Enabling FAST Cache on Unity and VNXe storage pools

This action is available from the Actions menu when a Unity or VNXe storage pool is selected and FAST Cache is enabled and configured.

Procedure

1. Under Details for the storage pool, select Actions > Configure FAST Cache.

2. In the Configure FAST Cache dialog box, click Begin Action.

Results

FAST Cache is enabled. You can check the status under Recent Tasks.

Enabling FAST Cache on a VNX Block storage pool

This action is available from the Actions menu when a VNX Block storage pool is selected or as a recommended action when FAST Cache is configured and available.

Procedure

1. Select the Summary tab for a VNX Block storage pool.

2. Do one of the following:

l From the Actions menu, select Enable FAST Cache.

l Under Recommendations, click Configure FAST Cache.

3. In the Configure FAST Cache dialog box, click OK.

Results

FAST Cache is enabled. You can check the status under Recent Tasks.

Expanding LUN capacity

This action is available from the Actions menu when a Unity, UnityVSA, VNX, or VNXe LUN is selected.

Procedure

1. Select a LUN for a supported storage system.

2. Under Actions, click Expand.

3. Type the new size and select the size qualifier.

4. Click Begin Action.

Results

The LUN is expanded. You can check the status under Recent Tasks.

Migrating a VNX LUN to another storage pool

This action is available from the vRealize Operations Manager Actions menu.

Procedure

1. From the Summary page of the VNX LUN, click Actions > Migrate.


2. In the Migrate dialog box, provide the following information:

l Storage Pool Type: Select Pool or RAID Group.

l Storage Pool Name: Type the name of the pool to migrate to.

l Migration Rate: Select Low, Medium, High, or ASAP.

3. Click OK.

Results

The LUN is migrated.

Rebooting a Data Mover on VNX storage

This action is available from the Actions menu when a VNX Data Mover is selected or under a recommended action when the health state of the Data Mover has an error.

Procedure

1. Do one of the following:

l Select a VNX Data Mover and click Actions > Reboot Data Mover.

l From the alert details window for a VNX Data Mover, click Reboot Data Mover.

2. In the Reboot Data Mover dialog box, click OK.

Results

The Data Mover is restarted and the alert is cancelled.

Rebooting a VNX storage processor

This action is available from the Actions menu on the Summary tab for the storage processor or as a recommendation when the storage processor cannot be accessed.

Procedure

1. Do one of the following:

l On the Summary tab for the storage processor, click Actions > Reboot Storage Processor.

l Under Recommendations, click Reboot Storage Processor.

2. In the Reboot Storage Processor dialog box, click Begin Action.

Results

The storage processor is restarted. This could take several minutes. You can check the status under Recent Tasks.

Extending volumes on EMC XtremIO storage systems

You can extend XtremIO volumes manually or configure a policy to extend them automatically when used capacity is high.

l If you have not configured an automated policy, you can extend a volume manually. Refer to Extending XtremIO volumes manually on page 142.

l To configure a policy that automatically extends an XtremIO volume when capacity becomes high, refer to Configuring an extend volume policy for XtremIO on page 142.


Configuring an extend volume policy for XtremIO

You can set a policy that automatically extends an XtremIO volume when capacity becomes high.

Procedure

1. In the vRealize Operations Manager main menu, click Administration > Policies.

Default Policy appears under Active Policies.

2. Select Policy Library, and then edit the Default Policy.

3. In the left panel, select Alert/System Definitions.

4. Under Alert Definitions, select Capacity used in the volume is high.

5. In the Automate column, select Local, and then click Save.

Results

When Capacity used in the volume is high is triggered, the volume will be extended automatically.

Extending XtremIO volumes manually

Use this procedure when you have not configured an automated policy.

This action is available from the Actions menu when an XtremIO volume is selected or under a recommended action when a volume's used capacity is high.

Procedure

1. Do one of the following:

l Select an XtremIO volume and click Actions > Extend Volume.

l From the alert details window for an XtremIO volume, click Extend Volume.

2. In the Extend Volume dialog box, type a number in the New Size text box, and then click OK.

3. Click OK in the status dialog box.

Results

The volume size is increased and the alert (if present) is cancelled.


CHAPTER 7

Troubleshooting

This chapter contains the following topics:

l Badges for monitoring resources on page 144
l Navigating inventory trees on page 144
l Symptoms, alerts, and recommendations for EMC Adapter instances on page 145
l Event correlation on page 146
l Launching Unisphere on page 148
l Installation logs on page 148
l Log Insight overview on page 148
l Error handling and event logging on page 151
l Log file sizes and rollover counts on page 152
l Editing the Collection Interval for a resource on page 154
l Configuring the thread count for an adapter instance on page 154
l Connecting to vRealize Operations Manager by using SSH on page 155
l Frequently asked questions on page 155


Badges for monitoring resources

This topic describes the use of vRealize Operations Manager badges to monitor EMC Storage Analytics resources.

vRealize Operations Manager enables you to analyze capacity, workload, and stress of supported resource objects.

The badges include:

Workload

The Workload badge defines the current workload of a monitored resource. It displays a breakdown of the workload based on supported metrics.

Stress

The Stress badge is similar to the Workload badge but defines the workload over a period of time. The Stress badge displays one-hour time slices over the period of a week. The color of each slice reflects the stress status of the resource.

Capacity

The Capacity badge displays the percentage of a resource that is currently consumed and the remaining capacity for the resource.

Note

Depending on the resource and supported metrics, full capacity is sometimes defined as 100% (for example, Busy %). Full capacity can also be defined by the maximum observed value (for example, Total Operations IO/s).

Time Remaining

This badge is calculated from the Capacity badge and estimates when the resource will reach full capacity.

The badges are based on a default policy that is defined in vRealize Operations Manager for each resource kind.

Navigating inventory trees

This topic describes how to navigate vRealize Operations Manager inventory trees for EMC resource objects.

Navigating inventory trees in vRealize Operations Manager can help you to troubleshoot problems you encounter with EMC resources.

Note

vRealize Operations Manager inventory trees are available for these EMC products: VNX Block, VNX File, Unity, and VMAX.

Procedure

1. Log into vRealize Operations Manager.

2. Open the Environment Overview.

3. Locate Inventory Trees.


4. Click the tree name to view its nodes. Click > to expand the list to view objects under the selected node.

Symptoms, alerts, and recommendations for EMC Adapter instances

This topic describes the symptoms, alerts, and recommendations that are displayed in vRealize Operations Manager for EMC Adapter instances.

Note

You can view symptoms, alerts, and recommendations in vRealize Operations Manager for these EMC products: RecoverPoint for Virtual Machines, Unity, UnityVSA, VMAX, VNX Block, VNX File, VNXe, VPLEX, and XtremIO.

You can view symptoms, alerts, and recommendations for EMC Adapter instances through the vRealize Operations Manager GUI. EMC Storage Analytics generates the alerts, which appear with other alerts that VMware generates. EMC Storage Analytics defines the alerts, symptoms, and recommendations for resources that the EMC Adapter instance monitors. You can view the symptoms, alerts, and recommendations in these vRealize Operations Manager windows.

Home dashboard

The vRealize Operations Manager home page dashboard displays EMC Storage Analytics symptoms, alerts, and recommendations along with VMware-generated alerts. You can view health, risk, and efficiency alerts, listed in order of severity.

Alerts Overview

You can view EMC Storage Analytics alerts along with VMware-generated alerts in the Alerts Overview window. In this view, vRealize Operations Manager groups the alerts in health, risk, and efficiency categories.

Alert Details

This vRealize Operations Manager view displays detailed properties of a selected alert. Properties include title, description, related resources, type, subtype, status, impact, criticality, and alert start time. This view also shows the symptoms that triggered the alert as well as recommendations for responding to the alert.

Summary

In the Summary view for resource details, vRealize Operations Manager displays the alerts for the selected resource. It also displays alerts for the children of the selected resource, which affect the badge color of the selected resource.

Symptom definition

You can find symptom definitions for EMC Storage Analytics-generated alerts in the Definitions Overview (configuration page). Each definition includes the resource kind and metric key, and lists EMC Adapter as the Adapter Kind.

Recommendations

You can find the recommendation descriptions for EMC Storage Analytics-generated alerts in the Recommendations Overview (configuration page).


Alert definition

You can find alert definitions for EMC Storage Analytics-generated alerts in the Alert Definitions Overview (configuration page). Each definition includes the resource kind, type of alert, criticality, and impact (health, risk, or efficiency alert).

Event correlation

Event correlation enables users to correlate alerts with the resources that generate them.

Event correlation is available for:

l VNX Block

l VNX File

EMC Adapter instances registered with the vRealize Operations Manager monitor events on select resources. These events appear as alerts in vRealize Operations Manager. The events are associated with the resources that generate them and aid the user in troubleshooting problems that may occur.

vRealize Operations Manager manages the life cycle of an alert and will cancel an active alert based on its rules. For example, vRealize Operations Manager may cancel an alert if EMC Storage Analytics no longer reports it.

vRealize Operations Manager-generated events influence the health score calculation for select resources. For example, in the RESOURCE:DETAILS pane for a selected resource, vRealize Operations Manager-generated events that contribute to the health score appear as alerts.

vRealize Operations Manager only generates events and associates them with the resources that triggered them. vRealize Operations Manager determines how the alerts appear and how they affect the health scores of the related resources.

Note

When a resource is removed, vRealize Operations Manager automatically removes existing alerts associated with the resource, and the alerts no longer appear in the user interface.

Viewing all alerts

This procedure shows you how to view a list of all the alerts in the vRealize Operations Manager system.

Procedure

1. Log into the vRealize Operations Manager user interface.

2. From the vRealize Operations Manager menu, select ALERTS > ALERTS OVERVIEW.

A list of alerts appears in the ALERTS OVERVIEW window.

3. (Optional) To refine your search, use the tools in the menu bar. For example, select a start and end date or enter a search string.

4. (Optional) To view a summary of information about a specific alert, select the alert and double-click it.

The ALERT SUMMARY window appears and provides reason, impact, and root cause information for the alert.


Enabling XtremIO alerts

The following alerts for XtremIO Volume and Snapshot metrics out of range are disabled by default to align with XMS default settings:

l Average Small Reads (IO/s)

l Average Small Writes (IO/s)

l Average Unaligned Reads (IO/s)

l Average Unaligned Writes (IO/s)

Use the following procedure to enable alerts.

Procedure

1. Select Administration > Policies > Policy Library > Default Policy

2. Select Edit > 6. Alert/Symptom Definitions.

3. For each alert that you want to enable, under State, select Enable Local.

4. Click Save.

Finding resource alerts

An alert generated by EMC Storage Analytics is associated with a resource. This procedure shows you how to find an alert for a specific resource.

Procedure

1. Log into the vRealize Operations Manager user interface.

2. Select the resource from one of the dashboard views.

The number that appears on the alert icon represents the number of alerts for this resource.

3. Click the Show Alerts icon on the menu bar to view the list of alerts for the resource.

Alert information for the resource appears in the popup window.

Locating alerts that affect the health score for a resource

This procedure shows how to locate an alert that affects the health score of a resource.

Different types of alerts can contribute to the health score of a resource, but a resource with an abnormal health score might not have triggered the alert. For example, the alert might be triggered by a parent resource. To locate an alert that affects the health score of a resource:

Procedure

1. Log into the vRealize Operations Manager user interface.

2. View the RESOURCE DETAIL window for a resource that shows an abnormal health score.

Events that contributed to the resource health score appear in the ROOT CAUSE RANKING pane.

3. Click an event to view the event details and investigate the underlying cause.


Launching Unisphere

EMC Storage Analytics provides metrics that enable you to assess the health of monitored resources. If the resource metrics indicate that you need to troubleshoot those resources, EMC Storage Analytics provides a way to launch Unisphere on the array.

The capability to launch Unisphere on the array is available for:

l VNX Block

l VNX File

l Unity

To launch Unisphere on the array, select the resource and click the Link and Launch icon. The Link and Launch icon is available on most widgets (hovering over an icon displays a tooltip that describes its function).

Note

This feature requires a fresh installation of the EMC Adapter (not an upgrade). You must select the object to launch Unisphere. Unisphere launch capability does not exist for VMAX or VPLEX objects.

Installation logs

This topic lists the log files to which errors in the EMC Storage Analytics installation are written.

Errors in the EMC Storage Analytics installation are written to log files in the following directory in vRealize Operations Manager:

/var/log/emc

Log files in this directory follow the naming convention install-<date>-<time>.log, for example: install-2012-12-11-10:54:19.log.

Use a text editor to view the installation log files.
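For example, from the vRealize Operations Manager command line you can list the installation logs and open the most recent one; the commands below are a sketch, and the filename shown (taken from the example above) will differ on your system:

ls -lt /var/log/emc

less /var/log/emc/install-2012-12-11-10:54:19.log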

Log Insight overview

This topic provides an overview of Log Insight and its use with EMC Storage Analytics.

VMware vRealize Log Insight provides log management for VMware environments. Log Insight includes dashboards for visual display of log information. Content packs extend this capability by providing dashboard views, alerts, and saved queries.

For information on working with Log Insight, refer to the Log Insight documentation: https://www.vmware.com/support/pubs/log-insight-pubs.html.

Log Insight configuration

This topic describes important background information about the integration of Log Insight with EMC Storage Analytics.

You can send the EMC Storage Analytics logs stored on the vRealize Operations Manager virtual machine to the Log Insight instance to facilitate performance analysis and perform root cause analysis of problems.

The adapter logs in vRealize Operations Manager are stored in a subdirectory of the /storage/vcops/log/adapters/EmcAdapter directory. The directory name and the log file are created by concatenating the adapter instance name with the adapter instance ID.

An example of the contents of EmcAdapter follows. Notice that the adapter name parsing changes dots and spaces into underscores. For example, the adapter instance named ESA3.0 Adapter VNX File is converted to ESA3_0_Adapter_VNX_File. The adapter instance ID of 455633441 is concatenated to create the subdirectory name as well as the log file name.

-rw-r--r-- 1 admin admin   27812 Sep 26 10:37 ./ESA3_0_Adapter_VNX_File-455633441/ESA3_0_Adapter_VNX_File-455633441.log
-rw-r--r-- 1 admin admin 1057782 Sep 26 15:51 ./ESA3_0_VNX_Adapter-1624/ESA3_0_VNX_Adapter-1624.log
-rw-r--r-- 1 admin admin   40712 Sep 23 11:58 ./ESA3_0_VNX_Adapter-616398625/ESA3_0_VNX_Adapter-616398625.log
-rw-r--r-- 1 admin admin   40712 Sep 23 11:58 ./ESA3_0_VNX_Adapter-725881978/ESA3_0_VNX_Adapter-725881978.log
-rw-r--r-- 1 admin admin   31268 Sep 10 11:33 ./ESA_3_0_Adapter-1324885475/ESA_3_0_Adapter-1324885475.log
-rw-r--r-- 1 admin admin  193195 Sep 26 10:48 ./EmcAdapter.log
-rw-r--r-- 1 admin admin   25251 Sep 26 10:48 ./My_VNXe-1024590653/My_VNXe-1024590653.log
-rw-r--r-- 1 admin admin   25251 Sep 26 10:48 ./My_VNXe-1557931636/My_VNXe-1557931636.log
-rw-r--r-- 1 admin admin    4853 Sep 26 10:48 ./My_VNXe-1679/My_VNXe-1679.log

In the vRealize Operations Manager Solution Details view, the corresponding adapter instance names appear as follows:

l ESA 3.0 Adapter VMAX

l My VNXe

l ESA 3.0 VNX Adapter

l ESA 3.0 Adapter VNX File

As seen in the example, multiple instances of each of the adapter types appear because EMC Storage Analytics creates a new directory and log file for the Test Connection part of discovery as well as for the analytics log file.

My_VNXe-1557931636 and My_VNXe-1024590653 are the Test Connection log locations, and My_VNXe-1679 is the analytics log file.

The Test Connection logs have a null name associated with the adapter ID, for example:

id=adapterId[id='1557931636',name='null']'

The same entry type from the analytics log shows:

id=adapterId[id='1679',name='My VNXe']'

You can forward any logs of interest to Log Insight, remembering that forwarding logs consumes bandwidth.

Sending logs to Log Insight

This topic lists the steps to set up syslog-ng to send EMC Storage Analytics logs to Log Insight.

Before you begin

Import the vRealize Operations Manager content pack into Log Insight. This context-aware content pack includes content for supported EMC Adapter instances.


VMware uses syslog-ng for sending logs to Log Insight. Documentation for syslog-ng is available online. The steps that follow represent an example of sending VNX and VMAX logs to Log Insight. Refer to the EMC Simple Support Matrix for the EMC products that support Log Insight.

Procedure

1. Access the syslog-ng.conf directory:

cd /etc/syslog-ng

2. Save a copy of the file:

cp syslog-ng.conf syslog-ng.conf.noli

3. Save another copy to modify:

cp syslog-ng.conf syslog-ng.conf.tmp

4. Edit the temporary (.tmp) file by adding the following to the end of the file:

#LogInsight Log forwarding for ESA
source esa_logs {
    internal();    # internal syslog-ng events required
    file("/storage/vcops/log/adapters/EmcAdapter/ESA3_0_VNX_Adapter-1624/ESA3_0_VNX_Adapter-1624.log"
        follow_freq(1)    # how often to check the file (1 second)
        flags(no-parse)   # do not do any processing on the file
    );                    # end of first entry; repeat as needed
    file("/storage/vcops/log/adapters/EmcAdapter/ESA3_0_Adapter_VMAX-1134065754/ESA3_0_Adapter_VMAX-1134065754.log"
        follow_freq(1) flags(no-parse));    # end of second entry
    file("/storage/vcops/log/adapters/EmcAdapter/ESA3_0_Adapter_VMAX-1001/ESA3_0_Adapter_VMAX-1001.log"
        follow_freq(1) flags(no-parse));    # end of third entry
};    # end of source entry

destination loginsight { udp("10.110.44.18" port(514)); };    # protocol, destination IP, and port

log {
    source(esa_logs);    # connect the source and destination to start logging
    destination(loginsight);
};

5. Copy the .tmp file to the .conf file:

cp syslog-ng.conf.tmp syslog-ng.conf

6. Stop and restart logging:

Note

Use syslog, not syslog-ng, in this command.

service syslog restart


Results

Log in to Log Insight to ensure the logs are being sent.
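As a quick local check before logging in to Log Insight, you can confirm that the syslog service restarted cleanly on the vRealize Operations Manager appliance; this is a sketch that assumes the same service wrapper used in the step above:

service syslog status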

Error handling and event logging

Errors in the EMC Storage Analytics operation are written to log files available through vRealize Operations Manager.

Error logs are available in the /data/vcops/log directory. This directory contains the vRealize Operations Manager logs.

Adapter logs (including adapters other than the EMC Adapter) are in /data/vcops/log/adapters.

You can view logs relating to EMC Storage Analytics operation in the vRealize Operations Manager GUI, and you can create and download a support bundle for troubleshooting.

Viewing error logs

EMC Storage Analytics enables you to view error log files for each adapter instance.

Procedure

1. Start the vRealize Operations Manager custom user interface and log in as administrator.

For example, in a web browser, type: http://<vRealize Operations Manager host>/vcops-web-ent

2. Select Admin > Support, and then select the Logs tab.

3. Expand the vCenter Operations Collector folder, then the adapter folder, then the EmcAdapter folder. Log files appear under the EmcAdapter folder. Double-click a log entry in the log tree.

Entries appear in the Log Content pane.

Creating and downloading a support bundle

Procedure

1. On the Logs tab, click the Create Support Bundle icon.

The bundle encapsulates all necessary logs.

2. Select the bundle name and click the Download Bundle icon.


Log file sizes and rollover counts

This topic describes the default log file size and rollover count for EMC Adapter instances.

Logs for each EMC Adapter instance are in folders under /data/vcops/log/adapters/EmcAdapter, one folder for each adapter instance. For example, if you have five EMC Adapter instances, a directory (folder) appears for each of them.

Log files in this directory follow this naming convention:

<adapter instance name>-<adapter instance ID>.log.<rollover increment>. For example: VNX_File-131.log.9. The log filename begins with the name of the EMC Adapter instance. Filenames beginning with EmcAdapter are common to all connectors.

The number that follows the EMC Adapter instance name is the adapter instance ID, which corresponds to a VMware internal ID.

The last number in the filename indicates the rollover increment. When the default log file size is reached, the system starts a new log file with a new increment. The lowest-numbered increment represents the most recent log. Each rollover is 10 MB (default value, recommended). Ten rollovers (default value) are allowed; the system deletes the oldest log files.
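For illustration, a rolled-over set of logs for the VNX_File-131 example above might look like the following listing, where VNX_File-131.log is the log currently being written and VNX_File-131.log.1 is the most recently rolled file (the directory name and listing are illustrative):

ls /data/vcops/log/adapters/EmcAdapter/VNX_File-131

VNX_File-131.log VNX_File-131.log.1 VNX_File-131.log.2 ... VNX_File-131.log.9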

Finding adapter instance IDs

This topic describes how to find the ID for an EMC Adapter instance.

Procedure

1. In vRealize Operations Manager, select Administration > Environment > Adapter Types > EMC Adapter.

2. In the Internal ID column, you can view the IDs for adapter instances.

Configuring log file sizes and rollover counts

This topic describes how to change the default values for all adapter instances or for a specific adapter instance.

Before you begin

CAUTION

EMC recommends that you not increase the 10 MB default value for the log file size. Increasing this value makes the log file more difficult to load and process as it grows in size. If more retention is necessary, increase the rollover count instead.

Procedure

1. On the vRealize Operations Manager virtual machine, find and edit the adapter.properties file:

/usr/lib/vmware-vcops/user/plugins/inbound/emc-vcops-adapter/conf/adapter.properties

2. Locate these EMC Adapter instance properties:

com.emc.vcops.adapter.log.size=10MB
com.emc.vcops.adapter.log.count=10


3. To change the properties for all EMC Adapter instances, edit only the log size or log count values. For example:

com.emc.vcops.adapter.log.size=12MB
com.emc.vcops.adapter.log.count=15

4. To change the properties for a specific EMC Adapter instance, insert the EMC Adapter instance ID as shown in this example:

com.emc.vcops.adapter.356.log.size=8MB
com.emc.vcops.adapter.356.log.count=15
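To double-check which values are currently in effect, you can list the log-related properties straight from adapter.properties; this is just a convenience sketch using standard shell tools and the file path from step 1:

grep -E 'log\.(size|count)' /usr/lib/vmware-vcops/user/plugins/inbound/emc-vcops-adapter/conf/adapter.properties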

Activating configuration changes

This topic describes how to activate changes you made to the log file size or rollover count for an EMC Adapter instance.

Procedure

1. In vRealize Operations Manager, select Environment > Environment Overview.

2. In the navigation pane, expand Adapter Kinds, then select EMC Adapter.

3. In the List tab, select a resource from the list and click the Edit Resource icon.

The Resource Management window for the EMC Adapter opens.

4. Click the OK button. No other changes are required.

This step activates the changes you made to the log file size or rollover count for the EMC Adapter instance.

Verifying configuration changes

This topic describes how to verify the changes you made to the log file size or rollover counts of an EMC Adapter instance.

Procedure

1. Log into vRealize Operations Manager.

2. Change directories to /data/vcops/log/adapters/EmcAdapter.

3. Verify the changes you made to the size of the log files or the number of saved rollover backups.

If you changed:

l Only the default properties for log file size and rollover count, all adapter instance logs will reflect the changes

l Properties for a specific adapter instance, only the logs for that adapter instance will reflect the changes

l Log file size or rollover count to higher values, you will not notice the resulting changes until those thresholds are crossed
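For example, a quick way to spot-check the file sizes and rollover counts described in this procedure from the command line (the instance folder name is illustrative):

cd /data/vcops/log/adapters/EmcAdapter

ls -lh VNX_File-131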


Editing the Collection Interval for a resource

From the vRealize Operations Manager user interface, you can edit the Collection Interval for a resource.

The interval time is five minutes by default. Changing this time will affect the frequency of collection times for metrics, but the EMC Adapter will only recognize the change if the resource is the EMC Adapter instance. This is normal vRealize Operations Manager behavior.

Note

For Unity, the maximum collection interval is 5 minutes.

Instructions on configuring Resource Management settings are provided in the vRealize Operations Manager online help.

Configuring the thread count for an adapter instance

This topic describes two ways to configure the thread count for an adapter instance.

Only administrative personnel should perform this procedure. Use this procedure to change the thread count for best performance. If the thread count is not specified in adapter.properties, thread count = vCPU count +2. The maximum allowed thread count is 20.

Procedure

1. Access the adapter.properties file. You can find this file at:

/usr/vmware-vcops/user/plugins/inbound/emc-vcops-adapter/conf/adapter.properties

2. Open and edit the thread count property for all adapter instances or for a specific adapter instance.

l If you want to edit the thread count property for all adapter instances, change the com.emc.vcops.adapter.threadcount property.

l If you want to edit the thread count property for a specific adapter instance, insert the adapter instance ID after adapter, for example: com.emc.vcops.adapter.7472.threadcount, and change the property value.

Note

To find an adapter instance ID, refer to Finding adapter instance IDs on page 152.

3. To activate the property change, restart the adapter instance in the vRealize Operations Manager.
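For illustration, an adapter.properties fragment that sets the thread count for all adapter instances and then overrides it for the instance with ID 7472 might look like the following; both values are hypothetical and must stay within the maximum of 20:

com.emc.vcops.adapter.threadcount=6

com.emc.vcops.adapter.7472.threadcount=10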


Connecting to vRealize Operations Manager by using SSH

This topic describes how to use SSH to log in to vRealize Operations Manager as root.

Procedure

1. Open the VM console for the vRealize Operations Manager.

2. Press Alt-F1 to open the command prompt.

3. Enter root for the login and leave the password field blank.

You are prompted for a password.

4. Set the root password.

You will be logged in.

5. Use this command to enable SSH:

service sshd start

You can now log in as root by using SSH.
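After sshd is started, you can connect from another system; replace the placeholder with the IP address or hostname of your vRealize Operations Manager node:

ssh root@<vRealize-Operations-Manager-node>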

Frequently asked questions

How many nodes are supported per vRealize Operations Manager cluster? vRealize Operations Manager clusters consist of a master node and data nodes. A total of eight nodes are supported: the master node (required) and up to seven data nodes.

How many resources and metrics are supported per node in vRealize Operations Manager?

l Small Node - 4vCPU, 16GB Memory - Supports 2,000 objects and 1,000,000 metrics

l Medium Node - 8vCPU, 32GB Memory - Supports 6,000 objects and 3,000,000 metrics

l Large Node - 16vCPU, 64GB Memory - Supports 10,000 objects and 5,000,000 metrics

How does a product trial work? A 90-day trial is provided for each platform that EMC Storage Analytics supports. The 90-day trial provides the same features as a licensed product, but after 90 days, the adapter stops collecting data. You can add a license at any time during or after the trial period.

How do health scores work? Health scores measure how normal a resource is, graded on a scale of 0-100. A health score of 100 indicates normal behavior, while a lower health score indicates that the resource is acting abnormally. The resource may not be in an unhealthy state, but there is an abnormality. Health scores are calculated by a proprietary algorithm that accounts for several factors, including thresholds and historical statistics. vRealize Operations Manager may take up to 30 days to gather enough information to determine what is considered normal in your environment. Until then, you may not see any changes in your health scores.

I deleted a resource. Why does it still appear in the vRealize Operations Manager? vRealize Operations Manager will not delete any resources automatically because it retains historical statistics and topology information that may be important to the user. The resource enters an unknown state (blue). To remove the resource, delete it on the Environment Overview page.


What does the blue question mark in the health score indicate? The blue question mark indicates that vRealize Operations Manager was unable to poll that resource. It will retry during the next polling interval.

What does it mean when a resource has a health score of 0? This indicates that the resource is either down or not available.

Why are my EMC Adapter instances marked down after upgrading to the latest version of the EMC Adapter? EMC Adapter instances require a license to operate. Edit your EMC Adapter instances to add license keys obtained from EMC. Select Environment Overview > Configuration > Adapter Instances.

I have multiple EMC Adapter instances for my storage systems, and I have added license keys for each of them. Why are they still marked down? License keys are specific to the model for which the license was purchased. Verify that you are using the correct license key for the adapter instance. After adding a license, click the Test button to test the configuration and validate the license key. If you saved the configuration without performing a test and the license is invalid, the adapter instance will be marked Resource down. To verify that a valid license exists, select Environment Overview. The list that appears shows the license status.

How is the detailed view of vCenter resources affected in EMC Storage Analytics? Any changes in the disk system affects the health of vCenter resources such as virtual machines, but EMC Storage Analytics does not show changes in other subsystems. Metrics for other subsystems will either show No Data or ?.

Can I see relationships between my vCenter and EMC storage resources? Yes. Relationships between resources are not affected and you can see a top to bottom view of the virtual and storage infrastructures if the two are connected.

How do I uninstall EMC Storage Analytics? No uninstall utility exists. However, to remove EMC Storage Analytics objects, remove adapter instances for which the Adapter Kind is EMC Adapter (Environment > Configuration > Adapter Instances). Then delete objects in the Environment Overview for which the Data Source is EMC (Environment > Environment Overview).

If I test a connection and it fails, how do I know which field is wrong? Unfortunately, the only field that produces a unique message when it is wrong is the license number field. If any other field is wrong, the only message is that the connection was not successful. To resolve the issue, verify all the other fields are correct. Remove any white spaces after the end of the values.

Can I modify or delete a dashboard? Yes, the environment can be customized to suit the needs of the user. Rename the dashboard so that it is not overwritten during an upgrade.

Why do some of the boxes appear white in the Overview dashboard? While the metrics are being gathered for an adapter instance, some of the heat maps in the dashboard may be white. This is normal. Another reason the boxes may appear white is that the adapter itself or an individual resource has been deleted, but the resources remain until they are removed from the Environment Overview page.

Which arrays does EMC Storage Analytics support? A complete list of the supported models for EMC storage arrays is available in the EMC Simple Support Matrix.


Will EMC Storage Analytics continue to collect VNX statistics if the primary SP or CS goes down? Storage Analytics will continue to collect statistics through the secondary Storage Processor if the primary Storage Processor goes down. EMC Storage Analytics will automatically collect metrics from the secondary Control Station in the event of a Control Station failover. Note that the credentials on the secondary Control Station must match the credentials on the primary Control Station.

Does the Unisphere Analyzer for VNX need to be running to collect metrics? No. VNX Block metrics are gathered through naviseccli commands and VNX File metrics are gathered through CLI commands. However, statistics logging must be enabled on each storage processor (SP) on VNX Block, and statistics logging will have a performance impact on the array. No additional services are required for VNX File.
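As an illustrative sketch, statistics logging is typically enabled per storage processor with a Navisphere CLI command of the following form, where the address is a placeholder for the SP management IP and credentials are supplied as described in the Navisphere CLI documentation:

naviseccli -h <SP-IP-address> setstats -on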

How does the FAST Cache heat map work? The FAST Cache heat maps are based on the FAST Cache read and write hit ratios. This heat map will turn red if these ratios are low because that indicates that FAST Cache is not being utilized efficiently. These heat maps will turn green when FAST Cache is servicing a high percentage of I/O.

I purchased a license for the model of the VNX array that I plan to monitor. When I configure the adapter instance for VNX File, why does an "invalid license" error message appear? Control Station may not be reporting the correct model or the array. Log into Control Station and check the array model with the command: /nas/sbin/model. Verify that the array model returned matches the model on the Right to Use certificate.

After a Control Station failover, why is the VNX File adapter instance marked down and why does metric collection stop? The failover may have been successful, but the new Control Station may not be reporting the correct model of the array. This results in a failure to validate the license and all data collection stops. Log into Control Station and check the array model with the command: /nas/sbin/model. If the model returned does not match the actual model of the array, Primus case emc261291 in the EMC Knowledgebase provides possible solutions.

The disk utilization metric is not visible for my VNX Block array. Why not? The disk utilization metric is not supported on VNX arrays running a VNX Block OE earlier than Release 32. Upgrade to VNX Block OE Release 32 or later to see this metric in vRealize Operations Manager.

I am unable to successfully configure an EMC Adapter instance for VNX File when using a user with read-only privileges. Why does this happen? A user with administrative privileges is required while configuring an EMC Adapter instance for VNX File arrays running an OE earlier than 7.1.56.2. Upgrade to VNX File OE 7.1.56.2 or later to be able to configure an adapter instance using a user with read-only privileges.

The user LUNs on my VNX Block vault drives are not reporting performance metrics. Why not? Performance metrics are not supported for user LUNs on vault drives. Place user LUNs on drives other than vault drives.


I received the following error when I attempted to modify the VNX Overview dashboard although I have only VMAX arrays. Is this a problem?

Error occurred

An error occurred on the page; please contact support. Error Message: org.hibernate.exception.SQLGrammerException: could not execute query

No, this is a generic error that VMware produces when you attempt to modify a component you do not have.


APPENDIX A

List of alerts

ESA generates the listed events when the resources are queried. This appendix contains the following topics:

l Avamar alerts on page 160
l Isilon alerts on page 161
l RecoverPoint alerts on page 162
l ScaleIO alerts on page 163
l Unity, UnityVSA, and VNXe alerts on page 166
l VMAX alerts on page 168
l VNX Block alerts on page 168
l VNX Block notifications on page 173
l VNX File alerts on page 174
l VNX File notifications on page 177
l VPLEX alerts on page 181
l XtremIO alerts on page 184


Avamar alerts

ESA provides alerts for Avamar DPN, DDR, and Client resources.

Table 99 Avamar DPN alert messages

Alert message | Badge | Severity | Condition | Description/Recommendation
DPN used capacity (%) is high | Risk | Critical | >= 90% | Avamar system is almost full and may become read-only soon. Reclaim space or increase capacity.
DPN used capacity (%) is high | Risk | Warning | >= 80% | Reclaim space or increase capacity.
DPN used capacity (%) is high | Risk | Info | >= 70% | Monitor space usage and plan for growth accordingly.
The DPN has experienced a problem. State: Offline | Health | Critical | Offline | If ConnectEMC has been enabled, a Service Request (SR) is logged. Go to EMC Online Support to view existing SRs. Search the knowledgebase for Avamar Data Node offline solution esg112792.
Avamar server has experienced a disk failure on one or more nodes. State: Degraded | Health | Warning | Degraded | All operations are allowed, but immediate action should be taken to fix the problem.
Avamar Administrator was able to communicate with the Avamar server, but normal operations have been temporarily suspended. State: Suspended | Health | Warning | Suspended | Restart or enable scheduler to resume backups and restores.
MCS could not communicate with this node. State: Time-Out | Health | Critical | Time-Out | Refer to Avamar Administrator guide, Troubleshooting guide and KB articles for assistance.
Node status cannot be determined. State: Unknown | Health | Critical | Unknown |
One or more Avamar server nodes are in an offline state. State: Node Offline | Health | Warning | Node Offline |
Avamar Administrator was unable to communicate with the Avamar server. State: Inactive | Health | Warning | Inactive |
Successful backups (%) in the last 24 hours is low | Risk | Info | <= 90% | Investigate backup failures and remediate.
Successful backups (%) in the last 24 hours is low | Risk | Warning | <= 80% | The system's ability to restore data may be compromised. Investigate backup failures and remediate.
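The DPN capacity row above is a simple threshold ladder. As a purely illustrative sketch (the function is not part of ESA or vRealize Operations Manager), the following shows how a used-capacity percentage maps to the badge and severity in Table 99.

def dpn_capacity_alert(used_pct):
    """Map DPN used capacity (%) to the badge/severity ladder in Table 99."""
    # Evaluate from most to least severe so that, for example, 92% reports Critical rather than Info.
    if used_pct >= 90:
        return ("Risk", "Critical", "Reclaim space or increase capacity; the system may become read-only.")
    if used_pct >= 80:
        return ("Risk", "Warning", "Reclaim space or increase capacity.")
    if used_pct >= 70:
        return ("Risk", "Info", "Monitor space usage and plan for growth.")
    return None  # below 70%, no capacity alert fires

print(dpn_capacity_alert(83))  # ('Risk', 'Warning', 'Reclaim space or increase capacity.')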

Table 100 Avamar DDR alert messages

Alert message | Badge | Severity | Condition | Description/Recommendation
DDR used capacity (%) is high. | Risk | Critical | >= 90% | Data Domain system is almost full and may become read-only soon. Reclaim space or increase capacity.
DDR used capacity (%) is high. | Risk | Warning | >= 80% | Data Domain system is becoming full. Reclaim space or increase capacity.
DDR used capacity (%) is high. | Risk | Info | >= 70% | Monitor space and plan for growth accordingly.
The file system has experienced a problem. | Health | Critical | Disabled | Data Domain file system disabled. Contact administrator to enable. No backups or restores can be performed.
The file system has experienced a problem. | Health | Critical | Unknown | Data Domain file system in an unknown state. Contact administrator to resolve. Backups and restores may fail.

Table 101 Avamar Client alert messages

Alert message | Badge | Severity | Condition | Description/Recommendation
The latest backup operation for this client has failed. | Risk | Warning | Failed | Remediate failure.
The backup elapsed time for this client is high. | Efficiency | Warning | >= 24 hours | Backups are running longer than expected. Investigate and remediate.
The change rate between backups exceeds 20%. | Efficiency | Info | Job Bytes Scanned >= 20% | Change rate exceeds 20%. Change Block Tracking may have been disabled.

Isilon alerts
Cluster and Node alerts are available for Isilon 8.0 and later. Alert messages are collected from the REST API.

Table 102 Isilon Cluster alert messages

Alert message | Badge | Severity | Type/ID
Allocation error detected. | Risk | Warning | 800010002
System is running out of file descriptors. | Risk | Warning | 800010006

Table 103 Isilon Node alert messages

Alert message | Badge | Severity | Type/ID
CPU 0 about to throttle due to temperature. | Risk | Warning | 900020026
CPU 1 about to throttle due to temperature. | Risk | Warning | 900020027
CPU throttling | Health | Warning | 900020035
Internal network interface link down. | Efficiency | Warning | 200020003
External network link down. | Efficiency | Warning | 200020005
Node integer offline. | Health | Critical | 200010001
The snapshot reserve space is nearly full (value % used). | Risk | Info | 600010005

RecoverPoint alerts
ESA provides RecoverPoint alerts based on events for Consistency Group, Copy, and vRPA resources, and alerts based on metrics for vRPA, Consistency Group, System, Cluster, and Splitter. The Cancel cycle and Wait cycle for these alerts is 1. A worked example of the protection window ratio condition follows Table 105.

Table 104 RecoverPoint for Virtual Machines alerts based on message event symptoms

Resource kind Message summary Badge Severity Event message Recommendation

Consistency group

Problem with RecoverPoint consistency group.

Health Critical RecoverPoint consistency group state is unknown.

Check the status of the consistency group.

Warning RecoverPoint consistency group is disabled.

Copy Problem with RecoverPoint copy.

Health Critical RecoverPoint copy state is unknown.

Check the status of the copy.

Warning RecoverPoint copy state is disabled.

vRPA Problem with vRPA. Health Critical vRPA status is down. Check the status of the vRPA.

Warning vRPA status is removed for maintenance.

Immediate vRPA status is unknown.

Table 105 RecoverPoint for Virtual Machines alerts based on metrics

Resource kind

Message summary Metric and criteria Badge Severity Recommendation

vRPA Problem with vRPA. vRPA | CPU Utilization (%) >95

Health Warning Check the status of the vRPA.

Consistency group

Consistency group protection window limit has been exceeded.

Consistency group protection window ratio < 1

Protection window limit has been exceeded.

Lag limit has been exceeded.

Link | Lag (%) > 95 Lag limit has been exceeded.


RecoverPoint for Virtual Machines system

Number of splitters is reaching upper limit.

RecoverPoint System | Number of splitters > 30

Risk Information Consider adding another RecoverPoint for Virtual Machines system.

Cluster Number of consistency groups per cluster is reaching upper limit.

RecoverPoint cluster | number of consistency groups > 122

Consider adding another RecoverPoint cluster.

Number of vRPAs per cluster is reaching upper limit.

RecoverPoint cluster | number of vRPAs > 8

Consider adding another RecoverPoint cluster.

Number of protected virtual machines per cluster is reaching upper limit.

RecoverPoint cluster | number of protected virtual machines > 486

Consider adding another RecoverPoint cluster.

Number of protected volumes per cluster is reaching upper limit.

RecoverPoint cluster | number of protected VMDKs > 1946

The maximum number of protected volumes per vRPA cluster is 2K.

Splitter Number of attached volumes per splitter is reaching upper limit.

Splitter | number of volumes attached > 3890

The maximum number of attached volumes per splitter is 4K.
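As noted above, the consistency group symptom in Table 105 fires when the protection window ratio drops below 1. Assuming the ratio is simply the currently achievable protection window divided by the required protection window (an assumption; confirm against your RecoverPoint for Virtual Machines documentation), a minimal check looks like this:

def protection_window_alert(actual_hours, required_hours):
    """Return True when the protection window ratio condition in Table 105 would fire."""
    ratio = actual_hours / required_hours  # assumed definition of the ratio
    return ratio < 1  # Table 105: alert when the ratio is less than 1

# Example: 18 hours of journal history against a 24-hour requirement gives a ratio of 0.75, so the alert fires.
print(protection_window_alert(18, 24))  # True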

ScaleIO alerts
ESA provides ScaleIO alerts for System, Protection Domain, Device/Disk, SDS, Storage Pool, SDC, and MDM Cluster resources.

Table 106 ScaleIO System alerts

Metric | Badge | Severity | Condition
Used Capacity | Risk | Critical | > 95
Used Capacity | Risk | Warning | > 85
Thick Used Capacity | Risk | Critical | > 95
Thick Used Capacity | Risk | Warning | > 85
Thin Used Capacity | Risk | Critical | > 95
Thin Used Capacity | Risk | Warning | > 85
Snap Used Capacity | Risk | Critical | > 95
Snap Used Capacity | Risk | Warning | > 85

Table 107 ScaleIO Protection Domain alerts

Metric | Badge | Severity | Condition
Status | Health | Critical | No Active
Used Capacity | Risk | Critical | > 95
Used Capacity | Risk | Warning | > 85
Thick Used Capacity | Risk | Critical | > 95
Thick Used Capacity | Risk | Warning | > 85
Thin Used Capacity | Risk | Critical | > 95
Thin Used Capacity | Risk | Warning | > 85
Snap Used Capacity | Risk | Critical | > 95
Snap Used Capacity | Risk | Warning | > 85

Table 108 ScaleIO Device/Disk alerts

Metric | Badge | Severity | Condition
Status | Health | Critical | Error
Status | Health | Info | Remove, Pending
Used Capacity | Risk | Critical | > 95
Used Capacity | Risk | Warning | > 85
Spare Capacity Allocated | Risk | Critical | > 95
Spare Capacity Allocated | Risk | Warning | > 85
Thick Used Capacity | Risk | Critical | > 95
Thick Used Capacity | Risk | Warning | > 85
Thin Used Capacity | Risk | Critical | > 95
Thin Used Capacity | Risk | Warning | > 85
Protected Capacity | Risk | Critical | > 95
Protected Capacity | Risk | Warning | > 85

Table 109 ScaleIO SDS alerts

Metric | Badge | Severity | Condition
Status | Health | Critical | Disconnected
Used Capacity | Risk | Critical | > 95
Used Capacity | Risk | Warning | > 85
Thick Used Capacity | Risk | Critical | > 95
Thick Used Capacity | Risk | Warning | > 85
Thin Used Capacity | Risk | Critical | > 95
Thin Used Capacity | Risk | Warning | > 85
Protected Capacity | Risk | Critical | > 95
Protected Capacity | Risk | Warning | > 85
Note: Not available from REST API
Snap Used Capacity | Risk | Critical | > 95
Snap Used Capacity | Risk | Warning | > 85

Table 110 ScaleIO Storage Pool alerts

Metric | Badge | Severity | Condition
Status (not available from REST API) | Health | Critical | Degraded capacity
Status (not available from REST API) | Health | Warning | Unreachable capacity
Status (not available from REST API) | Health | Warning | Unavailable unused capacity
Status (not available from REST API) | Health | Warning | Extremely unbalanced
Status (not available from REST API) | Health | Warning | Unbalanced
Used Capacity | Risk | Critical | > 95
Used Capacity | Risk | Warning | > 85
Thick Used Capacity | Risk | Critical | > 95
Thick Used Capacity | Risk | Warning | > 85
Thin Used Capacity | Risk | Critical | > 95
Thin Used Capacity | Risk | Warning | > 85
Protected Capacity | Risk | Critical | > 95
Protected Capacity | Risk | Warning | > 85
Snap Used Capacity | Risk | Critical | > 95
Snap Used Capacity | Risk | Warning | > 85

Table 111 ScaleIO SDC alerts

Metric | Badge | Severity | Condition
State | Health | Critical | Disconnected

Table 112 ScaleIO MDM Cluster alerts

Metric | Badge | Severity | Condition
State | Health | Critical | Not clustered
State | Health | Critical | Clustered degraded
State | Health | Critical | Clustered tie breaker down
State | Health | Critical | Clustered degraded tie breaker down


Unity, UnityVSA, and VNXe alerts
ESA provides alerts for the following resources on Unity, UnityVSA, and VNXe: Disk, Tier, Storage Pool, Storage Processor, LUN, File System, and NAS Server.

Table 113 Unity, UnityVSA, and VNXe alerts

Resource kind Metric Badge Severity Condition Message summary

Disk Total Latency (ms)

Risk Critical > 75 Disk total latency (ms) is high.

Immediate > 50

Warning > 25

State Health Critical Includes "critical"

This disk is reporting a problem.

Immediate

Warning

Info

Tier Full (%) Risk Info > 95 Consumed capacity (%) of this tier is high.

Storage Pool Full (%) Risk Critical > 90 Consumed capacity (%) of this storage pool is high.

Immediate > 85

Efficiency Info < 5 Consumed capacity (%) of this storage pool is low.

State Health Critical Includes "critical"

This storage pool is reporting a problem.

Immediate

Warning

Info

SP (Storage Processor)

CIFS SMBv1 Read Response (ms)

Risk Critical > 75 CIFS SMBv1 average read response time(ms) is high.

Immediate > 50

Warning > 25

CIFS SMBv1 Write Response (ms)

Risk Critical > 75

Immediate > 50

Warning > 25

CIFS SMBv2 Read Response (ms)

Risk Critical > 75 CIFS SMBv2 average read response time(ms) is high.

Immediate > 50

Warning > 25


CIFS SMBv2 Write Response (ms)

Risk Critical > 75

Immediate > 50

Warning > 25

NFS v3 Read Response (ms)

Risk Critical > 75 NFSv3 average read response time (ms) is high.

Immediate > 50

Warning > 25

NFS v3 Write Response (ms)

Risk Critical > 75

Immediate > 50

Warning > 25

State Health Critical Includes "critical"

This storage processor is reporting a problem.

Immediate

Warning

Info

LUN State Health Critical Condition includes critical

This LUN is reporting a problem.

Immediate

Warning

Info

File System State Health Critical Condition includes critical

This file system is reporting a problem.

Immediate

Warning

Info

NAS Server State Health Critical Condition includes critical

This NAS Server is reporting a problem.

Immediate

Warning

Info


VMAX alerts
ESA provides alerts for VMAX Device, Storage Resource Pool, and SLO resources. The Wait Cycle is 1 for all these VMAX alerts.

Table 114 VMAX alerts

Resource kind | Symptom | Badge | Severity | Condition | Message
Device | VmaxDevice_percent_full98.0 | Risk | Critical | > 98 | Device available capacity is low.
Device | VmaxDevice_percent_full95.0 | Risk | Immediate | > 95 | Device available capacity is low.
SRP (VMAX3 Storage Resource Pool) | VmaxSRPStoragePool_percent_full98.0 | Risk | Critical | > 98 | Storage resource pool available capacity is low.
SRP (VMAX3 Storage Resource Pool) | VmaxSRPStoragePool_percent_full95.0 | Risk | Immediate | > 95 | Storage resource pool available capacity is low.
SLO | Compliance | Risk | Warning | is MARGINAL | SLO compliance status needs attention.
SLO | Compliance | Risk | Critical | is CRITICAL | SLO compliance status needs attention.

VNX Block alerts
ESA provides alerts for the following resources on VNX Block: Storage Pool, FAST Cache, Tier, Storage Processor, RAID Group, Disk, LUN, Port, Fan and Power Supply, and Array.

Table 115 VNX Block alerts

Resource kind

Metric Badge Severity Condition Message summary

Storage Pool

Full (%) Risk Critical > 90 Capacity used in this storage pool is very high.

Immediate > 85 Capacity used in this storage pool is very high.

Efficiency Info < 5 Capacity used in this storage pool is low.

Subscribed (%) Risk Info >100 This storage pool is oversubscribed.

State Health Critical Offline This storage pool is offline.

Faulted This storage pool is faulted.

Expansion Failed This storage pool's expansion failed.

Cancel Expansion Failed The cancellation of this storage pool's expansion failed.


Verification Failed The verification of this storage pool failed.

Initialize Failed The initialization of this storage pool failed.

Destroy Failed The destruction of this storage pool failed.

Warning Offline and Recovering This storage pool is offline and recovering.

Critical Offline and Recovery Failed

The recovery of this offline storage pool failed.

Warning Offline and Verifying This storage pool is offline and verifying.

Critical Offline and Verification Failed

This storage pool is offline and verification failed.

Faulted and Expanding This storage pool is faulted and expanding.

Faulted and Expansion Failed

This storage pool is faulted and expansion failed.

Faulted and Cancelling Expansion

This storage pool is faulted and is cancelling an expansion.

Faulted and Cancel Expansion Failed

This storage pool is faulted and the cancellation of the expansion failed.

Faulted and Verifying This storage pool is faulted and verifying.

Faulted and Verification Failed

This storage pool is faulted and verification failed.

Unknown The status of this storage pool is unknown.

FAST Cache

State Health Info Enabling FAST Cache is enabling.

Warning Enabled_Degraded FAST Cache is enabled but degraded.

Info Disabling FAST Cache is disabling.

Warning Disabled FAST Cache is created but disabled.

Critical Disabled_Faulted FAST Cache is faulted.

Critical Unknown The state of FAST Cache is unknown.

Tier Subscribed (%) Risk Info > 95 Consumed capacity (%) of this tier is high.

Storage Processor

Busy (%) Risk Warning > 90 Storage processor utilization is high.

Info > 80 Storage processor utilization is high.


Read Cache Hit Ratio (%)

Efficiency Info < 50 Storage processor read cache hit ratio is low.

Dirty Cache Pages (%)

Efficiency Critical > 95 Storage processor dirty cache pages is high.

Info < 10 Storage processor dirty cache pages is low.

Write Cache Hit Ratio (%)

Efficiency Warning < 20 Storage processor write cache hit ratio is low.

Info < 25 Storage processor write cache hit ratio is low.

N/A Health Critical N/A Storage processor could not be reached by CLI.

RAID Group

Full (%) Risk Info > 90 RAID group capacity used is high.

Efficiency Info < 5 RAID group capacity used is low.

State Health Critical Invalid The status of this RAID group is invalid.

Info Explicit_Remove This RAID group is explicit remove.

Info Expanding This RAID group is expanding.

Info Defragmenting This RAID group is defragmenting.

Critical Halted This RAID group is halted.

Info Busy This RAID group is busy.

Critical Unknown This RAID group is unknown.

Disk Busy (%) Risk Critical > 95 Disk utilization is high.

Immediate > 90 Disk utilization is high.

Warning > 85

Info > 75

Hard Read Error (count)

Health Critical > 10 Disk has read error.

Immediate > 5 Disk has read error.

Warning > 0 Disk has read error.

Hard Write Error (count)

Health Critical > 75 Disk has write error.

Immediate And Disk has write error.

Warning Total IO/s > 1 Disk has write error.

Response Time (ms)

Risk Critical > 75 Disk average response time (ms) is in range.

And N/A

Total IO/s > 1 Disk is not idle.


Immediate 75 >= x > 50 Disk average response time (ms) is in range.

And N/A

Total IO/s > 1 Disk is not idle.

Warning 50 >= x > 25 Disk average response time (ms) is in range.

And N/A

Total IO/s > 1 Disk is not idle.

State Health Critical Removed This disk is removed.

Faulted The disk is faulted.

Unsupported The disk is unsupported.

Unknown The disk is unknown.

Info Powering up The disk is powering up.

Unbound The disk is unbound.

Warning Rebuilding The disk is rebuilding.

Info Binding The disk is binding.

Info Formatting The disk is formatting.

Warning Equalizing The disk is equalizing.

Info Unformatted The disk is unformatted.

Probation The disk is in probation

Warning Copying to Hot Spare The disk is copying to hot spare.

N/A Critical N/A Disk failure occurred.

LUN Service Time (ms)

Risk Critical > 25 LUN service time (ms) is in range.

And N/A

Total IO/s > 1 LUN is not idle.

Immediate > 25 LUN service time (ms) is in range.

And N/A

Total IO/s > 1 LUN is not idle.

Warning > 25 LUN service time (ms) is in range.

And N/A

Total IO/s > 1 LUN is not idle.

Latency (ms) Risk Critical 75 >= x > 50 LUN total latency (ms) is in range.

And N/A


Total IO/s > 1 LUN is not idle.

Immediate 75 >= x > 50 LUN total latency (ms) is in range.

And N/A

Total IO/s > 1 LUN is not idle.

Warning 50 >= x > 25 LUN total latency (ms) is in range.

And N/A

Total IO/s > 1 LUN is not idle.

State Health Critical Device Map Corrupt This LUN's device map is corrupt.

Faulted This LUN is faulted.

Unsupported This LUN is unsupported.

Unknown This LUN is unknown.

Info Binding This LUN is binding.

Warning Degraded This LUN is degraded.

Info Transitioning This LUN is transitioning.

Info Queued This LUN is queued.

Critical Offline This LUN is offline.

Port N/A Health Info N/A Link down occurred.

N/A The port is not in use.

Warning N/A Link down occurred.

Info N/A The port is not in use.

Fan and Power Supply

N/A Health Critical N/A Device (FAN or Power Supply) is having problem. Device state is "empty."

Warning N/A Device (FAN or Power Supply) is having problem. Device state is "unknown."

Critical N/A Device (FAN or Power Supply) is having problem. Device state is "removed."

N/A Device (FAN or Power Supply) is having problem. Device state is "faulted."

N/A Device (FAN or Power Supply) is having problem. Device state is "missing."

Array N/A Health Warning N/A Statistics logging is disabled.

N/A Performance data won't be available until it is enabled.


VNX Block notifications
ESA provides the following notifications for the VNX Block resources listed in the table in this section.

Table 116 VNX Block notifications

Category Resource kind Message

Failures Disk Disk failure occurred.

SP Front-end Port Link down occurred.

Background Event Disk Disk rebuilding started.

Disk rebuilding completed.

Disk zeroing started. Note: This alert is not available for 1st generation models.

Disk zeroing completed. Note: This alert is not available for 1st generation models.

LUN LUN migration queued.

LUN migration completed.

LUN migration halted.

LUN migration started.

EMC Adapter Instance Fast VP relocation resumed. Note: This alert is not available for 1st generation models.

Fast VP relocation paused. Note: This alert is not available for 1st generation models.

Storage Pool Fast VP relocation started.

Fast VP relocation stopped.

Fast VP relocation completed.

Storage Processor SP boot up.

SP is down. Note: This alert is not available for 1st generation models.

FAST Cache FAST Cache started.

Configuration Storage Pool Storage Pool background initialization started.

Storage Pool background initialization completed.

LUN LUN creation started.

LUN creation completed.

Snapshot creation completed.

EMC Adapter Instance SP Write Cache was disabled.

SP Write Cache was enabled. Note: This alert is not available for 1st generation models.

Non-Disruptive upgrading started.


Non-Disruptive upgrading completed.

LUN Deduplication on LUN was disabled. Note: This alert is not available for 1st generation models.

Deduplication on LUN was enabled. Note: This alert is not available for 1st generation models.

Storage Pool Deduplication on Storage Pool paused. Note: This alert is not available for 1st generation models.

Deduplication on Storage Pool resumed. Note: This alert is not available for 1st generation models.

LUN Compression on LUN started.

Compression on LUN completed.

Compression on LUN was turned off.

VNX File alerts
ESA provides alerts for File Pool, Disk Volume, File System, and Data Mover resources for VNX File.

Table 117 VNX File alerts

Resource kind Metric Badge Severity Condition Message summary

File Pool Full (%) Risk Critical > 90 Capacity consumed of the file pool is high.

Immediate > 85

Efficiency Info < 5 Capacity consumed of the file pool is low.

Disk Volume Request Comp. Time (s)

Risk Critical > 25,000 dVol's average request completion time is high.

Immediate > 15,000

Warning > 10,000

Service Comp. Time (s)

Risk Critical > 25,000

Immediate > 15,000

Warning > 10,000

File System Full (%) Risk Critical > 90 Capacity consumed of this file system is high.

Immediate > 85

Efficiency Info < 5


Data Mover NFS v2 Read Response (ms)

Risk Critical > 75 NFS v2 average read response time is high.

Immediate > 50

Warning > 25

NFS v2 Write Response (ms)

Risk Critical > 75 NFS v2 Average write response time is high.

Immediate > 50

Warning > 25

NFS v3 Read Response (ms)

Risk Critical > 75 NFS v3 average read response time is high.

Immediate > 50

Warning > 25

NFS v3 Write Response (ms)

Risk Critical > 75 NFS v3 average write response time is high.

Immediate > 50

Warning > 25

NFS v4 Read Response (ms)

Risk Critical > 75 NFS v4 average read response time is high.

Immediate > 50

Warning > 25

NFS v4 Write Response (ms)

Risk Critical > 75 NFS v4 average write response time is high.

Immediate > 50

Warning > 25

CIFS SMBv1 Read Response (ms)

Risk Critical > 75 CIFS SMB v1 average read response time is high.

Immediate > 50

Warning > 25

CIFS SMBv1 Write Response (ms)

Risk Critical > 75 CIFS SMB v1 average write response time is high.

Immediate > 50

Warning > 25

CIFS SMBv2 Read Response (ms)

Risk Critical > 75 CIFS SMB v2 average read response time is high.


Immediate > 50

Warning > 25

CIFS SMBv2 Write Response (ms)

Risk Critical > 75 CIFS SMB v2 average write response time is high.

Immediate > 50

Warning > 25

State Health Info Offline Data Mover is powered off.

Error Disabled Data Mover will not reboot.

Out_of_service Data Mover cannot provide service. (For example, taken over by its standby)

Warning Boot_level=0 Data Mover is powered up.

Data Mover is booted to BIOS.

Data Mover is booted to DOS.

DART is loaded and initializing.

DART is initialized.

Info Data Mover is controlled by control station.

Error Fault/Panic Data Mover has faulted.

Online Data Mover is inserted and has power, but not active or ready.

Slot_empty There is no Data Mover in the slot.

Unknown Cannot determine the Data Mover state.

Hardware misconfigured

Data Mover hardware is misconfigured.

Hardware error Data Mover hardware has error.


Firmware error Data Mover firmware has error.

Data Mover firmware is updating.

VNX File notifications
ESA provides notifications for the VNX File resources listed in the table in this section.

Table 118 VNX File notifications

Category Resource kind Message

Control Station Events

Array The NAS Command Service daemon is shutting down abnormally. (MessageID: )

The NAS Command Service daemon is shutting down abnormally. (MessageID: )

The NAS Command Service daemon is shut down completely.

The NAS Command Service daemon is forced to shut down. (MessageID: )

Data Mover Warm reboot is about to start on this data mover.

Unable to warm reboot this data mover. Cold reboot has been performed.

EMC Adapter instance AC power has been lost. VNX storage system will be powered down in seconds. (MessageID: )(timeout_wait)

AC power is restored and back on.

File system Automatic extension failed. Reason: Internal error. COMMAND: , ERROR: , STAMP: (MessageID: )(COMMAND, DM_EVENT_STAMP, ERROR)

Automatic extension started.

Automatic extension failed. Reason: File system has reached the maximum size. STAMP: (MessageID: ) (DM_EVENT_STAMP)

Automatic extension failed. Reason: Percentage used could not be determined. STAMP: (MessageID: ) (DM_EVENT_STAMP)

Automatic extension failed. Reason: Filesystem size could not be determined. STAMP: (MessageID: ) (DM_EVENT_STAMP)

Automatic extension failed. Reason: Available space could not be determined. STAMP: (MessageID: ) (DM_EVENT_STAMP)

Automatic extension failed. Reason: File system is not RW mounted. STAMP: (MessageID: ) (DM_EVENT_STAMP)

Automatic extension failed. Reason: Insufficient available space. STAMP: (MessageID: ) (DM_EVENT_STAMP)


Automatic extension failed. Reason: Available pool size could not be determined. STAMP: (MessageID: ) (DM_EVENT_STAMP)

Automatic extension failed. Reason: Slice flag could not be determined. STAMP: (MessageID: ) (DM_EVENT_STAMP)

Automatic extension failed. Reason: Available space is not sufficient for minimum size extension. STAMP: (MessageID: ) (DM_EVENT_STAMP)

Automatic extension failed. Reason: Maximum filesystem size could not be determined. STAMP: (MessageID: ) (DM_EVENT_STAMP)

Automatic extension failed. Reason: High Water Mark (HWM) could not be determined. STAMP: (MessageID: ) (DM_EVENT_STAMP)

Forced automatic extension started.

Automatic extension ended.

Automatic extension ended. The filesystem is now at its maximum size limit.

Forced automatic extension is cancelled. The requested extension size is less than the high water mark (HWM) set for the filesystem.

The filesystem's available storage pool size will be used as the extension size instead of the requested size.

Automatic extension completed.

Forced automatic extension completed. The file system is at the maximum size.

Automatic extension failed. Reason: Volume ID could not be determined. STAMP: (MessageID: ) (DM_EVENT_STAMP)

Automatic extension failed. Reason: Storage system ID could not be determined. STAMP: (MessageID: ) (DM_EVENT_STAMP)

Automatic extension failed. Reason: Filesystem is spread across multiple storage systems. STAMP: (MessageID: ) (DM_EVENT_STAMP)

Automatic extension failed. STAMP: (MessageID: ) (DM_EVENT_STAMP)

EMC Adapter instance The JServer is not able to start. VNX File System statistics will be impacted. (MessageID: )

File system Filesystem is using of its capacity. (condition, cap_setting, prop_name)

Filesystem has of its capacity available. (condition, cap_setting, prop_name)

File pool Storage pool is using of its capacity (condition, cap_setting)

Storage pool has of its capacity available. (condition, cap_setting)

File system Filesystem is using of the maximum allowable file system size (16 TB). (condition)


Filesystem has of the maximum allowable file system size (16 TB). (condition)

Filesystem is using of the maximum storage pool capacity available. (condition)

Filesystem has of the maximum storage pool capacity available. (condition)

Filesystem will fill its capacity on . (cap_setting, prop_name, sdate)

File pool Storage pool will fill its capacity on . (cap_setting, sdate)

File system Filesystem will reach the 16 TB file system size limit on . (sdate)

Filesystem will fill its storage pool's maximum capacity on . (sdate)

Data Mover Data Mover is using of its capacity. (stat_value, stat_name)

File pool Storage usage has crossed threshold value and has reached to . (threshold, pool_usage_percentage)

Storage usage has crossed threshold value and has reached to . (threshold, pool_usage_percentage)

File system Filesystem has filled its capacity. (cap_setting, prop_name)

File pool Storage pool has filled its capacity. (cap_setting)

File system Filesystem has almost filled its capacity. (cap_setting, prop_name)

File pool Storage pool has almost filled its capacity. (cap_setting)

File system Filesystem is using of its current node capacity. (condition)

Dart Events Data Mover The SCSI HBA is operating normally. (hbano)

The SCSI HBA has failed. (MessageID: ) (hbano)

The SCSI HBA is inaccessible. (MessageID: ) (hbano)

File system Filesystem has encountered a critical fault and is being unmounted internally. (MessageID: )

Filesystem has encountered a corrupted metadata and filesystem operation is being fenced. (MessageID: )

Filesystem usage rate % crossed the high water mark threshold %. Its size will be automatically extended. (currentUsage, usageHWM)

Filesystem is full.

EMC Adapter instance Power Supply A in Data Mover Enclosure was removed.

Power Supply A in Data Mover Enclosure is OK.

Power Supply A in Data Mover Enclosure failed: (MessageID: ) (details)

Power Supply B in Data Mover Enclosure was installed.

Power Supply B in Data Mover Enclosure was removed.


Power Supply B in Data Mover Enclosure is OK.

Power Supply B in Data Mover Enclosure failed: (MessageID: ) (details)

One or more fans in Fan Module 1 in Data Mover Enclosure failed. (MessageID: )

One or more fans in Fan Module 2 in Data Mover Enclosure failed. (MessageID: )

One or more fans in Fan Module 3 in Data Mover Enclosure failed. (MessageID: )

Multiple fans in Data Mover Enclosure failed. (MessageID: )

All Fan Modules in Data Mover Enclosure are in OK status.

Power Supply A in Data Mover Enclosure is going to shut down due to overheating. (MessageID: )

Power Supply B in Data Mover Enclosure is going to shut down due to overheating. (MessageID: )

Both Power Supplies in Data Mover Enclosure are going to shut down due to overheating. (MessageID: )

Power Supply A in Data Mover Enclosure was installed.

Data Mover DNS server is not responding. Reason: (MessageID: ) (serverAddr, reason)

Network device is down. (MessageID: ) (deviceName)

File system Automatic fsck is started via Data Mover . Filesystem may be corrupted. (MessageID: ) (DATA_MOVER_NAME)

Manual fsck is started via Data Mover . (DATA_MOVER_NAME)

Automatic fsck succeeded via Data mover . (DATA_MOVER_NAME)

Manual fsck succeeded via Data mover . (DATA_MOVER_NAME)

Automatic fsck failed via Data mover . (DATA_MOVER_NAME)

Manual fsck failed via Data mover . (DATA_MOVER_NAME)


VPLEX alerts
ESA provides alerts for the following VPLEX resources: Cluster, FC Port, Ethernet Port, Local Device, Storage View, Storage Volume, Virtual Volume, VPLEX Metro, Distributed Device, Engine, Director, and Extent.

Table 119 VPLEX alerts

Resource kind Message Badge Recommendation Severity Condition

Cluster VPLEX cluster is having a problem.

Health Check the health state of your VPLEX cluster. Ignore this alert if the health state is expected.

Critical VPLEX cluster health state is "major-failure."

VPLEX cluster health state is "critical-failure."

Immediate VPLEX cluster health state is "unknown."

Warning VPLEX cluster health state is "minor-failure."

VPLEX cluster health state is "degraded."

FC Port FC port is having a problem.

Health Check the operational status of your FC port. Ignore this alert if the operational status is expected.

Critical FC port operational status is "error."

FC port operational status is "lost- communication."

Immediate FC port operational status is "unknown."

Warning FC port operational status is "degraded."

FC port operational status is "stopped."

Ethernet Port Ethernet port is having a problem.

Health Check the operational status of your Ethernet port. Ignore this alert if the operational status is expected.

Critical Ethernet port operational status is "error."

Ethernet port operational status is "lost-communication."

Immediate Ethernet port operational status is "unknown."

Warning Ethernet port operational status is "degraded."

Ethernet port operational status is "stopped."

Local Device Local device is having a problem.

Health Check the health state of your local device. Ignore this alert if the health state is expected.

Critical Local device health state is "major- failure."

Local device health state is "critical-failure."


Immediate Local device health state is "unknown."

Warning Local device health state is "minor- failure."

Local device health state is "degraded."

Storage View Storage view is having a problem.

Health Check the operational status of your storage view. Ignore this alert if the operational status is expected.

Critical Storage view operational status is "error."

Warning Storage view operational status is "degraded."

Storage view operational status is "stopped."

Storage Volume Storage volume is having a problem.

Health Check the health state of your storage volume. Ignore this alert if the health state is expected.

Critical Storage volume health state is "critical-failure."

Immediate Storage volume health state is "unknown."

Warning Storage volume health state is "non-recoverable-error."

Storage volume health state is "degraded."

Virtual Volume Virtual volume is having a problem.

Health Check the health state of your virtual volume. Ignore this alert if the health state is expected.

Critical Virtual volume health state is "critical-failure."

Virtual volume health state is "major-failure."

Immediate Virtual volume health state is "unknown."

Warning Virtual volume health state is "minor-failure."

Virtual volume health state is "degraded."

VPLEX Metro VPLEX metro is having a problem.

Health Check the health state of your VPLEX metro. Ignore this alert if the health state is expected.

Critical VPLEX metro health state is "critical-failure."

VPLEX metro health state is "major-failure."

Immediate VPLEX metro health state is "unknown."

Warning VPLEX metro health state is "minor-failure."

VPLEX metro health state is "degraded."


Distributed Device

Distributed device is having a problem.

Health Check the health state of your distributed device. Ignore this alert if the health state is expected.

Critical Distributed device health state is "critical-failure."

Distributed device health state is "major-failure."

Immediate Distributed device health state is "unknown."

Warning Distributed device health state is "minor-failure."

Distributed device health state is "non-recoverable-error."

Distributed device health state is "degraded."

Engine Engine is having a problem.

Health Check the operational status of your engine. Ignore this alert if the health state is expected.

Critical Engine operational status is "error."

Engine operational status is "lost- communication."

Immediate Engine operational status is "unknown."

Warning Engine operational status is "degraded."

Director Director is having a problem.

Health Check the operational status of your director. Ignore this alert if the health state is expected.

Critical Director operational status is "critical-failure."

Director operational status is "major-failure."

Immediate Director operational status is "unknown."

Warning Director operational status is "minor-failure."

Director operational status is "degraded."

Extent Extent is having a problem.

Health Check the health state of your extent. Ignore this alert if the health state is expected.

Critical Extent health state is "critical- failure."

Immediate Extent health state is "unknown."

Warning Extent health state is "non- recoverable-error."

Extent health state is "degraded."


XtremIO alerts
ESA provides alerts for XtremIO Cluster and Storage Controller resources based on external events, and alerts based on metrics for Cluster, SSD, Volume, and Snapshot resources. The Wait Cycle is 1 for all these XtremIO alerts. A worked example of the cluster capacity conditions follows Table 121.

Table 120 XtremIO alerts based on external events

Resource kind | Message | Badge | Recommendation | Severity | Condition
Cluster | XtremIO cluster is having a problem. | Health | Check the state of your XtremIO cluster. Ignore this alert if the state is expected. | Critical | XtremIO cluster health state is "failed."
Cluster | XtremIO cluster is having a problem. | Health | Check the state of your XtremIO cluster. Ignore this alert if the state is expected. | Warning | XtremIO cluster health state is "degraded."
Cluster | XtremIO cluster is having a problem. | Health | Check the state of your XtremIO cluster. Ignore this alert if the state is expected. | Warning | XtremIO cluster health state is "partial fault."
Storage Controller | Storage controller is having a problem. | Health | Check the state of your storage controller. Ignore this alert if the state is expected. | Critical | Storage controller health state is "failed."
Storage Controller | Storage controller is having a problem. | Health | Check the state of your storage controller. Ignore this alert if the state is expected. | Warning | Storage controller health state is "degraded."
Storage Controller | Storage controller is having a problem. | Health | Check the state of your storage controller. Ignore this alert if the state is expected. | Warning | Storage controller health state is "partial fault."

Table 121 XtremIO alerts based on metrics

Resource kind | Message | Badge | Severity | Condition | Recommendation
Cluster | Consumed Capacity Ratio (%) is high. | Health | Warning | Consumed Capacity Ratio (%) >= 60 | 1. Free capacity from cluster. 2. Extend capacity of cluster.
Cluster | Subscription Ratio is high. | Health | Warning | Subscription Ratio >= 5 | 1. Unsubscribe capacity from cluster. 2. Extend capacity of cluster.
Cluster | Physical capacity used in the cluster is high. | Risk | Warning | Consumed capacity >= 90% | Migrate the volume to another cluster.
Cluster | Physical capacity used in the cluster is low. | Efficiency | Warning | Consumed capacity <= 5% | Cluster is not fully utilized. Possible waste.
SSD | Endurance Remaining (%) is low. | Health | Warning | Endurance Remaining (%) <= 10 | Replace SSD.
Volume | Average Small Reads (IO/s) is out of normal range.* | Health | Warning | Average Small Read Ratio >= 20 | Check the status of the volume.
Volume | Average Small Writes (IO/s) is out of normal range.* | Health | Warning | Average Small Write Ratio >= 20 | Check the status of the volume.
Volume | Average Unaligned Reads (IO/s) is out of normal range.* | Health | Warning | Average Unaligned Read Ratio >= 20 | Check the status of the volume.
Volume | Average Unaligned Writes (IO/s) is out of normal range.* | Health | Warning | Average Unaligned Write Ratio >= 20 | Check the status of the volume.
Volume | Capacity used in the volume is high. | Risk | Warning | Consumed capacity >= 90% | Extend the capacity of the volume.
Volume | Capacity used in the volume is low. | Efficiency | Warning | Consumed capacity <= 5% | Volume is not fully utilized. Possible waste.
Snapshot | Average Small Reads (IO/s) is out of normal range.* | Health | Warning | Average Small Read Ratio >= 20 | Check the status of the snapshot.
Snapshot | Average Small Writes (IO/s) is out of normal range.* | Health | Warning | Average Small Write Ratio >= 20 | Check the status of the snapshot.
Snapshot | Average Unaligned Reads (IO/s) is out of normal range.* | Health | Warning | Average Unaligned Read Ratio >= 20 | Check the status of the snapshot.
Snapshot | Average Unaligned Writes (IO/s) is out of normal range.* | Health | Warning | Average Unaligned Write Ratio >= 20 | Check the status of the snapshot.
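As referenced in the introduction above, the two cluster-level conditions at the top of Table 121 are ratios. Assuming Consumed Capacity Ratio is used physical capacity over total physical capacity expressed as a percentage, and Subscription Ratio is provisioned volume capacity over physical capacity (both assumptions; verify against your XtremIO reporting), the checks reduce to the following sketch:

def xtremio_cluster_alerts(used_tb, physical_tb, provisioned_tb):
    """Evaluate the two cluster-level conditions from Table 121 (illustrative only)."""
    alerts = []
    consumed_capacity_ratio = 100.0 * used_tb / physical_tb   # assumed: percent of physical capacity used
    subscription_ratio = provisioned_tb / physical_tb         # assumed: provisioned versus physical capacity
    if consumed_capacity_ratio >= 60:
        alerts.append("Consumed Capacity Ratio (%) is high")
    if subscription_ratio >= 5:
        alerts.append("Subscription Ratio is high")
    return alerts

# Example: 30 TB used and 250 TB provisioned on a 40 TB cluster trips both warnings.
print(xtremio_cluster_alerts(used_tb=30, physical_tb=40, provisioned_tb=250))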
