
Product Guide

Dell Unisphere for PowerMax 10.0.0

July 2022

Contents

Overview of Unisphere
Functionality supported by each OS type
Capacity information
Understanding resource usage
Login authentication
Understanding the system health score
Manage jobs
Unisphere for PowerMax alert monitoring recommendations
Alerts
Server alerts
Understanding licenses
Understanding user authorization
Individual and group roles
Roles
User IDs
Roles and associated permissions
Roles for performing local and remote replication actions
RBAC roles for SRDF local and remote replication actions
RBAC roles for TimeFinder SnapVX local and remote replication actions
Storage Management
Understanding storage groups
Understanding data reduction
Understanding service levels
Performance Impact
Understanding storage templates
Understanding Storage Resource Pools
Understanding volumes
Understanding disk groups
Host Management
Understanding hosts
Understanding masking views
Understanding port groups
Understanding initiators
Understanding PowerPath hosts
Understanding mainframe management
Data protection management
Understanding Snapshot policy
Understanding SRDF/Metro Smart DR
Manage remote replication sessions
Understanding SRDF groups
Understanding migration
Understanding Virtual Witness
Understanding Open Replicator
Open Replicator session options
Understanding device groups
Understanding TimeFinder/Mirror sessions
Understanding TimeFinder SnapVX
Understanding Performance Management
Understanding Unisphere support for VMware
Understanding eNAS
Understanding iSCSI
Understanding Cloud Mobility for Dell PowerMax
Understanding NVMe/TCP
Understanding PowerMax File for storage systems
Understanding serviceability
Understanding PowerMax software system profiles and compliance
Where to get help


Overview of Unisphere

Unisphere enables the user to configure and manage PowerMax, VMAX All Flash, and VMAX storage systems.

Unisphere is an HTML5 web-based application that enables you to configure and manage PowerMax, VMAX All Flash, and VMAX storage systems.

In an embedded deployment, Unisphere 10.0 runs embedded on PowerMax 2500 and PowerMax 8500 systems only. External deployment is required for storage systems running code levels below PowerMaxOS 10 (6079).

The side panel has the following items when the Overview view is selected:

- Overview: View an overview of the status of all storage systems being managed by Unisphere for PowerMax.
- System: View the home dashboard view for all storage systems.
- Performance: Monitor and manage storage system performance data (Dashboards, Charts, Analyze, Heatmap, Reports, Plan, Real-Time traces, and Performance Database management). See Understanding Performance Management for more information.
- VMware: View all the relevant storage-related objects at an ESXi server and troubleshoot storage performance-related issues at the ESXi server. See Understanding Unisphere support for VMware for more information.
- Configuration: View and configure system profiles and provisioning templates.
- Events: Includes Alerts and Job List.
- Support: Displays support information.

NOTE: For additional information about events and alerts, see the Events and Alerts for PowerMax and VMAX Users Guide.

The side panel has the following items when the storage system-specific view is selected:

- Overview: View an overview of the status of all storage systems being managed by Unisphere for PowerMax.
- Dashboard: View the following dashboards for a selected storage system: System Health, Storage Group (SG) Compliance, Capacity, Performance, and Protection.
- Storage: Manage storage (storage groups, service levels, storage resource pools, volumes, external storage, PowerMax File, and the vVol dashboard). See Storage Management for more information.
- Hosts: Manage hosts (hosts, masking views, port groups, initiators, PowerPath, mainframe, and CU images). See Host Management for more information.
- Data Protection: Manage data protection (snapshot policies, MetroDR, SRDF groups, migrations, virtual witness, Open Replicator, PowerMax File, and device groups). See Data protection management for more information.
- Performance: Monitor and manage storage system performance data (Dashboards, Charts, Analyze, Heatmap, Reports, and Plan). See Understanding Performance Management for more information.
- System: Includes Hardware, System Properties, File (eNAS), Cloud, iSCSI, iSCSI+NVMe, and PowerMax File.
- Events: Includes Alerts, Job List, and Audit Log.
- Support: Displays support information.

NOTE: For additional information about events and alerts, see the Events and Alerts for PowerMax and VMAX Users Guide.

The following options are available from the title bar:

- View and configure settings
- View newly added features
- Discover systems
- Refresh system information
- Search for objects
- View and manage jobs
- View and manage alerts
- View and manage settings
- View profile
- Sign out
- View online help

A Unisphere Representational State Transfer (REST) API is also available. The API enables you to access diagnostic, performance, and configuration data, and to perform provisioning operations on the storage system.
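For example, the following minimal Python sketch lists the storage systems known to a Unisphere server through the REST API. The hostname, credentials, and the "100" version segment (corresponding to Unisphere 10.0) are assumptions for illustration; consult the Unisphere REST API documentation for the exact resource paths supported by your deployment.

import requests

UNISPHERE = "https://unisphere.example.com:8443"  # hypothetical server
AUTH = ("smc_user", "smc_password")               # hypothetical credentials

# GET the list of arrays visible to this Unisphere instance.
response = requests.get(
    f"{UNISPHERE}/univmax/restapi/100/system/symmetrix",
    auth=AUTH,
    verify=False,  # lab setups often use self-signed certificates
)
response.raise_for_status()
for symmetrix_id in response.json().get("symmetrixId", []):
    print(symmetrix_id)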

Information on the installation of Unisphere for PowerMax can be found in the Unisphere for PowerMax Installation Guide at the Dell support website.

For information specific to this Unisphere product release, see the Unisphere for PowerMax Release Notes at the Dell support website.


Your suggestions help us continue to improve the accuracy, organization, and overall quality of the user publications. Send your feedback using the content feedback link.

Functionality supported by each OS type

Unisphere enables the user to configure and manage PowerMax, VMAX All Flash, and VMAX storage systems.

Unisphere is an HTML5 web-based application that allows you to configure and manage PowerMax, VMAX All Flash, and VMAX storage systems.

Table 1. Functionality supported at storage system level, by OS type (HYPERMAX OS 5977, PowerMaxOS 5978, and PowerMaxOS 10 (6079))

- Overview
- Dashboard
- Storage > Storage Groups
- Storage > Service Levels
- Storage > Storage Resource Pools
- Storage > Volumes
- Storage > External Storage
- Storage > vVol Dashboard
- Storage > File
- Hosts > Hosts
- Hosts > Masking Views
- Hosts > Port Groups
- Hosts > Initiators
- Hosts > PowerPath
- Hosts > Mainframe
- Data Protection > Snapshot Policies (systems running PowerMaxOS 5978 Q3 2020 and later)
- Data Protection > MetroDR (systems running PowerMaxOS 5978 Q3 2020 and later)
- Data Protection > SRDF Groups
- Data Protection > Migrations
- Data Protection > Virtual Witness
- Data Protection > File Protection
- Data Protection > Open Replicator
- Data Protection > Device Groups
- Performance
- System > Hardware
- System > System Properties
- System > iSCSI
- System > iSCSI + NVMe
- System > File (eNAS)
- System > File Configuration
- System > Cloud (systems running PowerMaxOS 5978 Q3 2020 and later)
- System
- Events
- Serviceability
- Support

Table 2. Functionality supported at overview and system level

- Overview
- Systems
- Performance
- Configuration > System Profiles (applies to local storage systems running PowerMaxOS 10 (6079))
- Configuration > Provisioning Templates
- VMware
- Events
- Support

Capacity information

Unisphere supports measurement of capacity using both the base 2 (binary) and base 10 (decimal) systems.

Storage capacity can be measured using two different systems: base 2 (binary) and base 10 (decimal). Organizations such as the International System of Units (SI) recommend using the base 10 measurement to describe storage capacity. In base 10 notation, one MB is equal to 1 million bytes, and one GB is equal to 1 billion bytes.

Operating systems generally measure storage capacity using the base 2 measurement system. Unisphere and Solutions Enabler use the base 2 measurement system to display storage capacity, with the TB notation, as it is more universally understood. In base 2 notation, one MB is equal to 1,048,576 bytes and one GB is equal to 1,073,741,824 bytes.

Name       Abbreviation   Binary Power   Binary Value (in Decimal)   Decimal Power   Decimal Equivalent
kilobyte   KB             2^10           1,024                       10^3            1,000
megabyte   MB             2^20           1,048,576                   10^6            1,000,000
gigabyte   GB             2^30           1,073,741,824               10^9            1,000,000,000
terabyte   TB             2^40           1,099,511,627,776           10^12           1,000,000,000,000
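The two measurement systems can be compared directly. This short Python sketch reproduces the values in the table above and shows why a decimal (base 10) terabyte is about 9% smaller than a binary (base 2) terabyte.

# Reproduce the binary and decimal unit values from the table.
UNITS = ["KB", "MB", "GB", "TB"]
for i, unit in enumerate(UNITS, start=1):
    binary = 2 ** (10 * i)    # what Unisphere and Solutions Enabler report
    decimal = 10 ** (3 * i)   # what SI-style figures use
    print(f"1 {unit}: binary={binary:,} bytes, decimal={decimal:,} bytes")

# A "1 TB" decimal capacity is roughly 0.909 binary TB:
print(10 ** 12 / 2 ** 40)  # ~0.9095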

Understanding resource usage

Unisphere supports a view of capacity utilization on a storage system throughout its life cycle.

The Unisphere capacity dashboard helps you understand the storage system resources being used in the following scenarios:

Day 1: The storage system is installed and is ready for production:

- Can I see the physical disk capacity of the system?
- If I have purchased different disk technology tiers, can I see the capacity of each tier?
- Can I see how much effective capacity the system was sized for?

Day N: The storage system is in production, and storage is being provisioned:

- How much more can I write?
- How much more effective capacity is available?
- How much physical capacity is consumed?
- How much storage is allocated?
- How much of the system is overprovisioned?
- What is the overall system Data Reduction Ratio (DRR)?
- How much capacity is not enabled for DRR?
- How much capacity is not reducible?
- How much capacity is being used by system resources that cannot be used for storage provisioning?
- How much system capacity is consumed by local replication?
- How much capacity is available for snapshots?

An overview of capacity terminology is displayed in the following figure.

A logical view in comparison with a disk consumption view is displayed in the following figure.

Login authentication

Unisphere authenticates users attempting to access the system.

When you log in, Unisphere checks the following authorities:

- Windows: The user has a Windows account on the server. (Log in to Unisphere with your Windows Domain\Username and Password.)
- LDAP-SSL: The user account is stored on an LDAP-SSL server. (Log in to Unisphere with your LDAP-SSL Username and Password.) The Unisphere Administrator or SecurityAdmin must set the LDAP-SSL server location in the LDAP-SSL Configuration dialog box.
- Local: The user has a local Unisphere account. Local user accounts are stored locally on the Unisphere server host. (Log in to Unisphere with your Username and Password.)
- RSA SecurID MFA: If RSA SecurID MFA authentication is enabled, the RSA token must be entered into the password field, immediately followed by the user's password.

For information about configuring authentication authorities, see the Unisphere online help.

User names are case-sensitive and can contain alphanumeric characters of either case, underscores, periods, and dashes: a-z, A-Z, 0-9, _, ., and -.
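As an illustration, that character rule maps onto a short regular expression. This sketch is illustrative only and is not Unisphere's own validation logic.

import re

# Alphanumerics of either case plus underscore, period, and dash.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_.-]+$")

for name in ("storage_admin", "j.doe-2", "bad name!"):
    print(name, bool(USERNAME_RE.fullmatch(name)))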

There are no restrictions on special characters when using passwords.

The Initial Setup User (ISU) is used to create user accounts or to add authorization rules for existing LDAP/AD or Windows OS users.

Understanding the system health score

The System Health dashboard provides a single place from which you can quickly determine the health of the system.

The system health score can help you spot where your most severe health issues are, based on five core factors: Configuration, Capacity, System Utilization, Service Level Compliance, and Storage Group (SG) Response Time. The area with the highest risk to your system health lowers the score until remedial actions are taken.

The System Health panel displays values for the following high-level health or performance metrics: Configuration, Capacity, System Utilization, Storage Group (SG) response time, and Service Level compliance. It also displays an overall health score that is based on the lowest health score of the five metrics. The health score is calculated every five minutes, and the overall value is always calculated from all metric values. If a health score category is stale or unknown, the overall health score is not updated; the previously calculated overall health score is displayed, but its value is denoted as stale by the menu item being set to gray.

The System Utilization, Capacity, Storage Group (SG) response time, and Service Level compliance are based on performance information.

The Capacity health score is based on % Effective Used Capacity. Capacity levels are checked at the SRP level and SRP Emulation level (where a mixed SRP emulation is involved).

The capacity health scores are calculated as follows:

- Fatal level: based on what is defined in the System Threshold and Alerts dialog. The default fatal threshold is 100%; crossing it deducts 30 points.
- Critical level: based on what is defined in the System Threshold and Alerts dialog. The default critical threshold is 80%; crossing it deducts 20 points.
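As a rough illustration of how these deductions combine with the overall rule described earlier (the overall score tracks the lowest category), here is a hedged Python sketch. The 100-point starting value per category and the aggregation details are assumptions; the deduction sizes and the lowest-category rule come from this section.

# Capacity deductions: 30 points at the fatal threshold, 20 at critical.
def capacity_score(effective_used_pct, fatal_pct=100.0, critical_pct=80.0):
    score = 100  # assumed per-category starting value
    if effective_used_pct >= fatal_pct:
        score -= 30
    elif effective_used_pct >= critical_pct:
        score -= 20
    return score

categories = {
    "Configuration": 90,
    "Capacity": capacity_score(85.0),  # 80 after the critical deduction
    "System Utilization": 95,
    "SG Response Time": 100,
    "Service Level Compliance": 100,
}
overall = min(categories.values())  # the lowest category drives the score
print(categories, overall)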

The System Utilization health score is calculated using the threshold limits of the following categories and metrics:

- FE Director: % busy, queue depth utilization (queue depth utilization is not checked for EF directors)
- FE Port: % busy
- BE Port: % busy (not applicable for storage systems running PowerMaxOS 10 (6079))
- BE Director (DA): % busy
- SRDF Port: % busy
- SRDF Director: % busy
- DX Port: % busy (not applicable for storage systems running PowerMaxOS 10 (6079))
- External Director: % busy (not applicable for storage systems running PowerMaxOS 10 (6079))
- EDS Director: % busy (not applicable for storage systems running PowerMaxOS 10 (6079))
- Cache Partition: % WP utilization (not applicable for storage systems running PowerMaxOS 10 (6079))
- EM Director: % WP utilization (applicable for storage systems running PowerMaxOS 10 (6079))

For each instance and metric of a particular category, the threshold information is looked up. If it is not set, the default thresholds are used. The default thresholds are:

- FE Port: % busy - Critical 70
- FE Director: % busy - Critical 70; Queue Depth Utilization - Critical 75
- BE Port: % busy - Critical 70
- BE Director (DA): % busy - Critical 70
- SRDF Port: % busy - Critical 70
- SRDF Director: % busy - Critical 70
- DX Port: % busy - Critical 70
- External Director: % busy - Critical 70
- EDS Director: % busy - Critical 70
- Cache Partition: % WP utilization - Critical 75
- EM Director: % busy - Critical 70

The system utilization score is calculated as follows:

- Critical level: deducts five points

The Storage Group Response health score is based on software category health scores. Certain key metrics are examined against threshold values and if they exceed a certain threshold, the health score is negatively affected.

The storage group response score is based on the following:

- Storage Group: Read Response Time, Write Response Time, and Response Time

For each instance and metric of a particular category, the threshold information is looked up. If it is not found, default thresholds are used.

The storage group response score is calculated as follows:

- Read Response Time: Critical deducts five points
- Write Response Time: Critical deducts five points
- Response Time: Critical deducts five points

The Response Time, Read Response Time, and Write Response Time metrics are excluded from the health score calculation if the corresponding alert is not enabled for the SG/metric.

The Service Level Compliance health score is based on the Workload Planner (WLP) workload state. The health score is reduced when storage groups that have a defined service level are not meeting the service level requirements.

If the Service Level Compliance alert is disabled for the SG, it is excluded from the health score calculation.

The Service Level compliance score is calculated as follows:

- Underperforming: deducts five points

The Configuration health score is calculated every five minutes and is based on the director and port alerts in the system at the time of calculation. Unisphere does not support alert correlation or auto clearing; you must manually delete alerts that have been dealt with or are no longer relevant, as they impact the hardware health score until they are removed from Unisphere.

The Configuration health score is calculated as follows:

- Director out of service: deducts 40 points
- Director offline: deducts 20 points
- Port offline: deducts 10 points

For embedded Unisphere systems that are running PowerMaxOS 10 (6079) and that support PowerMax File, the configuration health score calculation is also impacted by the following PowerMax File alerts:

- Recovery failed on the 'fsname' file system in NAS server 'nasserver' (fsid 'fsid'): deducts 30 points
- The 'fsname' file system in NAS server 'nasserver' (fsid 'fsid') is offline after discovering corruption: deducts 30 points
- The 'fsname' file system in NAS server 'nasserver' (fsid 'fsid') is offline due to receiving an I/O error: deducts 30 points
- NAS node 'node' is down: deducts 20 points
- NAS node 'node' is down and its automatic recovery has failed: deducts 20 points
- NAS server 'nasserver' is down: deducts 10 points
- NAS server 'nasserver' fault tolerance is degraded: deducts 10 points
- NAS server 'nasserver' is in maintenance mode: deducts 10 points
- The DNS client of the 'nasserver' NAS server is unable to connect to all configured DNS servers: deducts 20 points
- No LDAP servers configured for NAS server 'nasserver' are responding: deducts 20 points
- The LDAP service configuration of the NAS server 'nasserver' for domain 'domain' failed: deducts 20 points
- The NIS client is unable to connect to all configured NIS servers: deducts 20 points
- The NAS server 'nasserver' in the domain 'domain' cannot reach any Domain Controller: deducts 20 points
- No virus checker server is available: deducts 5 points
- LDAP client settings on NAS server 'nasserver' are not valid within domain 'domain': deducts 5 points
- The SMB server of the NAS server 'nasserver' is configured to be joined to the domain 'domain', but is currently not joined: deducts 5 points

For embedded Unisphere systems that are running PowerMaxOS 10 (6079) and that support PowerMax File, auto clearing of PowerMax File alerts is supported.

Auto clearing of alerts is a mechanism whereby an incoming alert of severity Normal clears the related alerts that are already in the system and have a higher severity.

Manage jobs

Certain configuration tasks that are performed on a storage system may not be immediately processed, but instead are kept in a job list for review and submission in batches.

About this task

One way to identify these tasks is from their dialog boxes, which have a button named Add to Job List.

Unisphere includes a job list view, from which you can view and manage the job list for a storage system.

Unisphere for PowerMax alert monitoring recommendations

This topic outlines the list of recommended alerts for you to monitor or consider monitoring (depending on your environment) when configuring alert policies using Unisphere for PowerMax.

NOTE: This relates to storage systems running HYPERMAX OS 5977, PowerMaxOS 5978, and PowerMaxOS 10 (6079).

Alert notifications should also be enabled for these alerts.

In addition, notifications should be configured for the default System Thresholds Alerts set and Notifications set.

It is recommended that you monitor the following:

- Array Component events
- Array Events
- Array - Deferred Service Threshold Alert
- Array - Director Status
- Array - Disk Status
- Array - Environmental Alert
- Array - Hot spare Invoked
- Array - Migration Complete Alert
- Array - Port Link status
- Array - Port status
- Array - RVA Spare Coverage
- Array - SP Alerts
- Array - SRDF Alerts
- Array - SRDF Job Flow Control Change
- Array - SRDF Link Status
- Array - SRDF/A No Cycle Switch Alert
- Array - SRDF/A Session
- Array - SRDF/A Session dropped, Transmit Idle state timeout
- Array - SRDF/A Session entering Transmit Idle state
- Array - SRDF/A Session recovered from a Transmit Idle state
- Array - Target Enginuity Warning

Consider monitoring the following, depending on the customer environment:

- Array - Device Config Change
- Array - Device Status
- Array - Thin Device Allocation
- Array - Thin Device Usage

Alerts

You can configure Unisphere to monitor storage systems for specific events or error conditions. When an event or error of interest occurs, Unisphere displays an alert and, if configured to do so, notifies you of the alert by way of email, SNMP, or Syslog.

Server alerts

Server alerts are alerts that are generated by Unisphere itself.

Unisphere generates server alerts under the conditions that are listed below:

Checks are run at 10-minute intervals, and alerts are raised at 24-hour intervals from the time the server was last started. These time intervals also apply to discover operations; that is, performing a discover operation does not force the delivery of these alerts.

NOTE: Runtime alerts are not storage system-specific. They can be deleted if the user has admin or storage admin rights on at least one storage system. A user with a monitor role cannot delete server alerts.

- Total memory on the Unisphere server: the minimum is 12 GB when managing 0-64,000 volumes, 16 GB for 64,000-128,000 volumes, and 20 GB for 128,000-256,000 volumes. Alert details: "System memory <# GB> is below the minimum requirement of <# GB>".
- Free disk space on the Unisphere installed directory: the minimum is 100 GB when managing 0-64,000 volumes, 140 GB for 64,000-128,000 volumes, and 180 GB for 128,000-256,000 volumes. Alert details: "Free disk space <# GB> is below the minimum requirement of <# GB>".
- Number of managed storage systems: the threshold is 20. Alert details: "Number of managed storage systems <#> is over the maximum supported number of <#>".
- Number of managed volumes: the threshold is 256,000. Alert details: "Number of managed volumes <#> is over the maximum supported number of <#>". Solutions Enabler may indicate a slightly different number of volumes than indicated in this alert.
- Number of gatekeepers: the minimum is 6. Alert details: "Number of gatekeepers <#> on storage system is below the minimum requirement of 6".
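A minimal Python sketch of the volume-count bands above follows. The band boundaries and minimums come from the list; the function name and data layout are illustrative.

import bisect

BANDS = [64_000, 128_000, 256_000]   # inclusive upper bound of each band
MIN_MEMORY_GB = [12, 16, 20]
MIN_FREE_DISK_GB = [100, 140, 180]

def server_minimums(managed_volumes):
    # bisect_left picks the band the managed-volume count falls into.
    i = min(bisect.bisect_left(BANDS, managed_volumes), len(BANDS) - 1)
    return MIN_MEMORY_GB[i], MIN_FREE_DISK_GB[i]

print(server_minimums(50_000))    # (12, 100)
print(server_minimums(200_000))   # (20, 180)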


Understanding licenses

Unisphere supports electronic licensing (e-Licensing). e-Licensing is a license management solution to help you track and comply with software license entitlement.

e-Licensing uses embedded locking functions and back-office IT systems and processes. It provides better visibility into software assets, easier upgrades, and capacity planning. It reduces the risk of noncompliance, while still adhering to a strict "do no harm" policy toward your operations.

When installing licenses with e-Licensing, you obtain license files from customer service, copy them to a Solutions Enabler or a Unisphere host, and load them onto storage systems.

Each license file fully defines the entitlements for a specific system, including its activation type (Individual or Enterprise), the licensed capacity, and the date the license was created. If you want to add a product title or increase the licensed capacity of an entitlement, obtain a new license file from online support and load it onto the storage system.

When managing licenses, Solutions Enabler, Unisphere, z/OS Storage Manager (EzSM), MF SCF native command line, TPF, and the IBM i platform console provide detailed usage reports that enable you to better manage capacity and compliance planning.

There are two types of e-Licenses: host-based and storage system-based. Host-based licenses, as the name implies, are installed on the host; storage system-based licenses are installed on the storage system. For information about the types of licenses and the features they activate, see the Solutions Enabler Installation Guide.

Unisphere enables you to add and view storage system-based licenses, and add, view, and remove host-based licenses.

Unisphere uses storage system-based e-Licensing.

For storage systems running PowerMaxOS 10 (6079) or later, only XML format licenses are supported. Installing the older format licenses that are used for storage systems running PowerMaxOS 5978 or earlier is not supported.

NOTE: For more information about e-Licensing, see the Solutions Enabler Installation Guide.

Understanding user authorization

User authorization is a tool for restricting the management operations that users can perform on a storage system.

By default, user authorization is enabled for Unisphere users, regardless of whether it is enabled on the storage system.

When configuring user authorization, an Administrator or SecurityAdmin maps individual users or groups of users to specific roles on storage systems and these roles determine the operations that the users can perform. These user-to-role-to-storage system mappings (known as authorization rules) are maintained in the symauth users list file that is stored on the host or storage system, depending on the storage operating environment.

NOTE: If the symauth file contains one or more users, users who are not listed in the file are unable to access or even see storage systems from the Unisphere console.

Individual and group roles

Users gain access to a storage system or component either directly through a role assignment or indirectly through membership in a user group that has a role assignment.

If a user has two different role assignments (one as an individual and one as a member of a group), the permissions that are assigned to the user are combined. For example, if a user is assigned a Monitor role and a StorageAdmin role through a group, the user is granted Monitor and StorageAdmin rights.
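For illustration, this combination rule can be modeled as a set union over each assigned role's permissions. The permission names below are hypothetical stand-ins, not Unisphere identifiers.

# Effective rights are the union of all individual and group role grants.
ROLE_PERMISSIONS = {
    "Monitor": {"view"},
    "StorageAdmin": {"view", "provision", "modify_gns_groups"},
}

def effective_permissions(individual_roles, group_roles):
    permissions = set()
    for role in individual_roles + group_roles:
        permissions |= ROLE_PERMISSIONS.get(role, set())
    return permissions

# A user with Monitor directly and StorageAdmin via a group gets both.
print(effective_permissions(["Monitor"], ["StorageAdmin"]))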

Roles

A Unisphere user can assume several roles, and tasks and associated permissions are associated with each role.

The following lists the available roles. You can assign up to four of these roles per authorization rule.


For a more detailed look at the permissions that go along with each role, see Roles and associated permissions.

- None: Provides no permissions.
- Monitor: Performs read-only (passive) operations on a storage system, excluding the ability to read the audit log or access control definitions.
- StorageAdmin: Performs all management (active or control) operations on a storage system and modifies GNS group definitions, in addition to all Monitor operations.
- Administrator: Performs all operations on a storage system, including security operations, in addition to all StorageAdmin and Monitor operations.
- SecurityAdmin: Performs security operations on a storage system, in addition to all Monitor operations.
- Auditor: Grants the ability to view, but not modify, security settings for a storage system (including reading the audit log, symacl list, and symauth), in addition to all Monitor operations. This is the minimum role that is required to view the storage system audit log.
- Local Replication: Performs local replication operations (SnapVX or legacy Snapshot, Clone, BCV). To create Secure SnapVX snapshots, a user must have StorageAdmin rights at the storage system level. This role also automatically includes Monitor rights.
- Remote Replication: Performs remote replication (SRDF) operations involving devices and pairs. Users can create, operate on, or delete SRDF device pairs but cannot create, modify, or delete SRDF groups. This role also automatically includes Monitor rights.
- Device Management: Grants user rights to perform control and configuration operations on devices. StorageAdmin rights are required to create, expand, or delete devices. This role also automatically includes Monitor rights.

In addition to these user roles, Unisphere includes an administrative role, the Initial Setup User. This user, defined during installation, has a temporary role that provides administrator-like permissions for the purpose of adding local users and roles to Unisphere.

User IDs

Users and user groups are mapped to their respective roles by IDs. These IDs consist of a three-part string in the form:

Type:Domain\Name

Where:

- Type: Specifies the type of security authority that is used to authenticate the user or group. Possible types are:
  - L: Indicates a user or group that LDAP authenticates. In this case, Domain specifies the domain controller on the LDAP server. For example, L:danube.com\Finance indicates that user group Finance logged in through the domain controller danube.com.
  - C: Indicates a user or group that the Unisphere server authenticates. For example, C:Boston\Legal indicates that user group Legal logged in through Unisphere server Boston.
  - H: Indicates a user or group that is authenticated by logging in to a local account on a Windows host. In this case, Domain specifies the hostname. For example, H:jupiter\mason indicates that user mason logged in on host jupiter.
  - D: Indicates a user or group that is authenticated by a Windows domain. In this case, Domain specifies the domain or realm name. For example, D:sales\putman indicates that user putman logged in through the Windows domain sales.
- Name: Specifies the username relative to that authority. It cannot be longer than 32 characters, and spaces are allowed if delimited with quotes. Usernames can be for individual users or user groups.

Within role definitions, IDs can be either fully qualified (as shown above), partially qualified, or unqualified. When the Domain portion of the ID string is an asterisk (*), the asterisk is treated as a wildcard, meaning any host or domain.

The Domain portion of the ID must be fully qualified when configuring group access.

For example:

- D:ENG\jones: Fully qualified path with a domain and username (for individual domain users)
- D:ENG.xyz.com\ExampleGroup: Fully qualified domain name and group name (for domain groups)
- D:*\jones: Partially qualified; matches username jones in any domain
- H:HOST\jones: Fully qualified path with a hostname and username
- H:*\jones: Partially qualified; matches username jones on any host
- jones: Unqualified username; matches any jones in any domain on any host

If a user is matched by more than one mapping, the user authorization mechanism uses the more specific mapping. If an exact match (for example, D:sales\putman) is found, that is used. If a partial match (for example, D:*\putman) is found, that is used. If an unqualified match (for example, putman) is found, that is used. Otherwise, the user is assigned a role of None.
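A minimal Python sketch of that precedence order follows; the rule table and role names are illustrative, not Unisphere's internal representation.

# Resolve a fully qualified ID against exact, partial (wildcard-domain),
# and unqualified mappings, in that order; fall back to the None role.
def resolve_role(user_id, rules):
    type_, _, rest = user_id.partition(":")
    domain, _, name = rest.partition("\\")
    for candidate in (user_id, f"{type_}:*\\{name}", name):
        if candidate in rules:
            return rules[candidate]
    return "None"

rules = {"D:*\\putman": "StorageAdmin", "jones": "Monitor"}
print(resolve_role("D:sales\\putman", rules))  # StorageAdmin (partial match)
print(resolve_role("H:host1\\jones", rules))   # Monitor (unqualified match)
print(resolve_role("D:eng\\smith", rules))     # None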

Roles and associated permissions

Users gain access to a storage system or component directly through a role assignment or indirectly through membership in a user group that has a role assignment.

The Role Based Access Control (RBAC) feature provides a method for restricting the management operations that individual users or groups of users may perform on storage systems.

The following diagram outlines the role hierarchy.

Roles are assigned as part of the user creation process.

The following tables detail the permissions that are associated with each role in Unisphere.

NOTE: The Unisphere Initial Setup User has all permissions on a storage system until an Administrator or SecurityAdmin is added to the storage system.

The roles (and the acronyms that are used for the roles) in these tables are:

- None: Provides no permissions.
- Monitor (MO): Performs read-only (passive) operations on a storage system, excluding the ability to read the audit log or access control definitions.
- StorageAdmin (SA): Performs all management (active or control) operations on a storage system and modifies GNS group definitions, in addition to all Monitor operations.
- Administrator (AD): Performs all operations on a storage system, including security operations, in addition to all StorageAdmin and Monitor operations.
- SecurityAdmin (SecA): Performs security operations on a storage system, in addition to all Monitor operations.
- Auditor (AUD): Grants the ability to view, but not modify, security settings for a storage system (including reading the audit log, symacl list, and symauth), in addition to all Monitor operations. It is the minimum role that is required to view the storage system audit log.
- Performance Monitor (PM): Includes Monitor role permissions and grants additional privileges within the performance component of the Unisphere application to set up various alerts and update thresholds to monitor storage system performance.
- Local Replication: Performs local replication operations (SnapVX or legacy Snapshot, Clone, BCV). To create Secure SnapVX snapshots, a user must have StorageAdmin rights at the storage system level. This role also automatically includes Monitor rights.
- Remote Replication: Performs remote replication (SRDF) operations involving devices and pairs. Users can create, operate upon, or delete SRDF device pairs but cannot create, modify, or delete SRDF groups. This role also automatically includes Monitor rights.
- Device Management: Grants user rights to perform control and configuration operations on devices. NOTE: StorageAdmin rights are required to create, expand, or delete devices. This role also automatically includes Monitor rights.

NOTE: The RBAC roles for performing local and remote replication actions are outlined in Roles for performing local and remote replication actions.

NOTE: The RBAC roles for SRDF local and remote replication actions are outlined in RBAC roles for SRDF local and remote replication actions.

NOTE: The RBAC roles for TimeFinder SnapVX local and remote replication actions are outlined in RBAC roles for TimeFinder SnapVX local and remote replication actions.

Table 3. User roles and associated permissions

Permissions                                    AD    SA    MO    SecA                  AUD   None   PM
Create/delete user accounts                    Yes   No    No    Yes                   No    No     No
Reset user password                            Yes   No    No    Yes                   No    No     No
Create roles                                   Yes   Yes   No    Yes (self-excluded)   No    No     No
Change own password                            Yes   Yes   Yes   Yes                   Yes   Yes    Yes
Manage storage systems                         Yes   Yes   No    No                    No    No     No
Discover storage systems                       Yes   No    No    Yes                   No    No     No
Add/show license keys                          Yes   Yes   No    No                    No    No     No
Set alerts and Optimizer monitoring options    Yes   Yes   No    No                    No    No     No
Release storage system locks                   Yes   Yes   No    No                    No    No     No
Set Access Controls                            Yes   Yes   No    No                    No    No     No
Set replication and reservation preferences    Yes   Yes   No    No                    No    No     No
View and export the storage system audit log   Yes   No    No    Yes                   Yes   No     No
Access performance data                        Yes   Yes   Yes   Yes                   Yes   No     Yes
Start data traces                              Yes   Yes   Yes   Yes                   Yes   No     Yes
Set performance thresholds/alerts              Yes   Yes   No    No                    No    No     Yes
Create and manage performance dashboards       Yes   Yes   Yes   Yes                   Yes   No     Yes

Table 4. Permissions for Local Replication, Remote Replication, and Device Management roles

Permissions                                                                   Local Replication   Remote Replication   Device Management
Create/delete user accounts                                                   No                  No                   No
Reset user password                                                           No                  No                   No
Create roles                                                                  No                  No                   No
Change own password                                                           Yes                 Yes                  Yes
Manage storage systems                                                        No                  No                   No
Discover storage systems                                                      No                  No                   No
Add/show license keys                                                         No                  No                   No
Set alerts and Optimizer monitoring options                                   No                  No                   No
Release storage system locks                                                  No                  No                   No
Set Access Controls                                                           No                  No                   No
Set replication and reservation preferences                                   No                  No                   No
View the storage system audit log                                             No                  No                   No
Access performance data                                                       Yes                 Yes                  Yes
Start data traces                                                             Yes                 Yes                  Yes
Set performance thresholds/alerts                                             No                  No                   No
Create and manage performance dashboards                                      Yes                 Yes                  Yes
Perform control and configuration operations on devices                       No                  No                   Yes
Create, expand, or delete devices                                             No                  No                   No
Perform local replication operations (SnapVX, legacy Snapshot, Clone, BCV)    Yes                 No                   No
Create Secure SnapVX snapshots                                                No                  No                   No
Create, operate upon, or delete SRDF device pairs                             No                  Yes                  No
Create, modify, or delete SRDF groups                                         No                  No                   No

Roles for performing local and remote replication actions

Users gain access to a storage system or component either directly through a role assignment or indirectly through membership in a user group that has a role assignment.

The table below details the roles that are required to perform local and remote replication actions.

Action                                       Local Replication   Remote Replication   Device Manager
Protection Wizard - Create SnapVX Snapshot   Yes (a)             -                    -
Create Snapshot                              Yes (a)             -                    -
Edit Snapshot                                Yes                 -                    -
Link Snapshot                                Yes (b)(c)          -                    Yes (d)
Relink Snapshot                              Yes (b)(c)          -                    Yes (d)
Restore Snapshot                             Yes (b)             -                    Yes (b)
Set Time To Live                             Yes                 -                    -
Set Mode                                     Yes (b)             -                    Yes (d)
Terminate Snapshot                           Yes                 -                    -
Unlink Snapshot                              Yes (b)             -                    Yes (d)
SRDF Delete                                  -                   Yes                  -
SRDF Establish                               -                   Yes                  -
SRDF Failback                                -                   Yes                  -
SRDF Failover                                -                   Yes                  -
SRDF Invalidate                              -                   Yes                  -
SRDF Move                                    -                   Yes                  -
SRDF Not Ready                               -                   Yes                  -
SRDF R1 Update                               -                   Yes                  -
SRDF Ready                                   -                   Yes                  -
SRDF Refresh                                 -                   Yes                  -
SRDF Restore                                 -                   Yes                  -
SRDF Resume                                  -                   Yes                  -
SRDF RW Disable R2                           -                   Yes                  -
SRDF RW Enable                               -                   Yes                  -
SRDF Set Bias                                -                   Yes                  -
SRDF Set Consistency                         -                   Yes                  -
SRDF Set Mode                                -                   Yes                  -
SRDF Set SRDF/A                              -                   Yes                  -
SRDF Split                                   -                   Yes                  -
SRDF Suspend                                 -                   Yes                  -
SRDF Swap                                    -                   Yes                  -
SRDF Write Disable                           -                   Yes                  -

(a) Set Secure is blocked for users who only have Local_REP rights.
(b) The user must have the specified rights on the source volumes.
(c) The user may only choose existing storage groups to link to. Creating a storage group requires StorageAdmin rights.
(d) The user must have the specified rights on the link volumes.


RBAC roles for SRDF local and remote replication actions

A user must be assigned the necessary roles to perform SRDF local and remote replication actions.

The following table details the roles that can perform SRDF local and remote replication actions:

NOTE: Unisphere for PowerMax does not support RBAC device group management.

Action               Local Replication   Remote Replication   Device Manager
SRDF Delete          -                   Yes                  -
SRDF Establish       -                   Yes                  -
SRDF Failback        -                   Yes                  -
SRDF Failover        -                   Yes                  -
SRDF Invalidate      -                   Yes                  -
SRDF Move            -                   Yes                  -
SRDF Not Ready       -                   Yes                  -
SRDF R1 Update       -                   Yes                  -
SRDF Ready           -                   Yes                  -
SRDF Refresh         -                   Yes                  -
SRDF Restore         -                   Yes                  -
SRDF Resume          -                   Yes                  -
SRDF RW Disable R2   -                   Yes                  -
SRDF RW Enable       -                   Yes                  -
SRDF Set Bias        -                   Yes                  -
SRDF Set Consistency -                   Yes                  -
SRDF Set Mode        -                   Yes                  -
SRDF Set SRDF/A      -                   Yes                  -
SRDF Split           -                   Yes                  -
SRDF Suspend         -                   Yes                  -
SRDF Swap            -                   Yes                  -
SRDF Write Disable   -                   Yes                  -

RBAC roles for TimeFinder SnapVX local and remote replication actions

A user must be assigned the necessary roles to perform TimeFinder SnapVX local and remote replication actions.

The following table details the roles that are required to perform TimeFinder SnapVX local and remote replication actions:

NOTE: Unisphere for PowerMax does not support RBAC device group management.

Action                                       Local Replication   Remote Replication   Device Manager
Protection Wizard - Create SnapVX Snapshot   Yes (a)             -                    -
Create Snapshot                              Yes (a)             -                    -
Edit Snapshot                                Yes                 -                    -
Link Snapshot                                Yes (b)(c)          -                    Yes (d)
Relink Snapshot                              Yes (b)(c)          -                    Yes (d)
Restore Snapshot                             Yes (b)             -                    Yes (b)
Set Time To Live                             Yes                 -                    -
Set Mode                                     Yes (b)             -                    Yes (d)
Terminate Snapshot                           Yes                 -                    -
Unlink Snapshot                              Yes (b)             -                    Yes (d)

(a) Set Secure is blocked for users who only have Local_REP rights.
(b) The user must have the specified rights on the source volumes.
(c) The user may only choose existing storage groups to link to. Creating a storage group requires StorageAdmin rights.
(d) The user must have the specified rights on the link volumes.

Storage Management

Storage consists of the following: storage groups, service levels, templates, storage resource pools, volumes, external storage, vVols, and disk groups.

Storage Management covers the following areas:

- Storage Group management: Storage groups are collections of devices on the array that are used by an application, a server, or a collection of servers.
- Service Level management: A service level is the response time target for a storage group. The service level sets the storage system with the required response time target for a storage group; the system automatically monitors and adapts to the workload to maintain the response time target. The service level includes an optional workload type so it can be optimized to meet performance levels.
- Template management: Using the configuration and performance characteristics of an existing storage group as a starting point, you can create templates that pre-populate fields in the provisioning wizard and create a more realistic performance reservation in your future provisioning requests.
- Storage Resource Pool management: SRP management provides automated management of storage system disk resources to achieve expected service levels. Disk groups can be configured to form a Storage Resource Pool (SRP) by creating thin pools according to each individual disk technology, capacity, and RAID type.
- Volume management: A storage volume is an identifiable unit of data storage. Storage groups are sets of volumes.
- External Storage management: Attaching external storage to storage systems directs workload movement to these external storage systems while retaining access to storage system features such as local replication, remote replication, storage tiering, data management, and data migration. It also simplifies multi-vendor or Dell storage system management.
- vVol management: VMware vVols enable data replication, snapshots, and encryption to be controlled at the VMDK level instead of the LUN level, where these data services are performed on a per-VM (application level) basis from the storage array.
- Disk Group management: A disk group is a collection of hard drives within the storage system that share the same performance characteristics.

Understanding storage groups

Storage groups are collections of devices on the array that are used by an application, a server, or a collection of servers. The maximum number of storage groups allowed on a storage system is 16,384. The maximum number of child storage groups allowed in a cascaded configuration is 64. A storage group can contain up to 4,096 volumes. A volume can belong to multiple storage groups as long as only one of the groups has an SRP. You cannot create a storage group containing both CKD volumes and FBA volumes.
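As an illustration of these limits, the following Python sketch validates a proposed storage group against the volume cap and the CKD/FBA mixing rule. The data layout is hypothetical; this is not Unisphere's own validation code.

MAX_VOLUMES_PER_SG = 4096

def validate_storage_group(volumes):
    # volumes: list of (volume_id, emulation) tuples, e.g. ("001A", "FBA")
    if len(volumes) > MAX_VOLUMES_PER_SG:
        raise ValueError("a storage group can contain at most 4,096 volumes")
    emulations = {emulation for _, emulation in volumes}
    if {"CKD", "FBA"} <= emulations:
        raise ValueError("CKD and FBA volumes cannot share a storage group")

validate_storage_group([("001A", "FBA"), ("001B", "FBA")])  # passes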


Understanding data reduction

Data reduction allows users to reduce user data on storage groups and storage resources.

Data reduction is enabled by default and can be turned on and off at storage group and storage resource level.

If a storage group is cascaded, enabling data reduction at this level enables data reduction for each of the child storage groups. The user can disable data reduction on one or more of the child storage groups.

To turn the feature off on a particular storage group or storage resource, clear the Enable Data Reduction check box in the Create Storage Group, Modify Storage Group, or Add Storage Resource To Storage Container dialogs or when using the Provision Storage or Create Storage Container wizards.

The following are the prerequisites for using data reduction:

- Data reduction is only allowed on All Flash systems running the HYPERMAX OS 5977 Q3 2016 Service Release, PowerMaxOS 5978, or higher.
- Data reduction is allowed on FBA and CKD devices.
- The user must have at least StorageAdmin rights.
- The storage group must have an associated SRP.
- The associated SRP cannot be composed, either fully or partially, of external storage.

Reporting

Users can see the current compression ratio on the device, the storage group, and the SRP. Efficiency ratios are reported in units of 1/10:1.

NOTE: External storage is not included in efficiency reports. For mixed SRPs with internal and external storage, only the internal storage is used in the efficiency ratio calculations.
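For example, a ratio at the stated 1/10:1 granularity can be derived as below; the inputs are illustrative.

# Round a data reduction ratio to one decimal place (1/10:1 granularity).
def data_reduction_ratio(logical_tb, physical_tb):
    return round(logical_tb / physical_tb, 1)

print(f"{data_reduction_ratio(120.0, 48.0)}:1")  # 2.5:1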

Understanding service levels

A service level is the response time target for the storage group. The service level enables you to set the storage system with the required response time target for the storage group.

The system automatically monitors and adapts to the workload to maintain (or meet) the response time target. The service level includes an optional workload type, which can be used to further tune expectations for the workload storage group and provide enough flash to meet your performance objective.

Performance Impact

The performance impact operation determines whether the storage system can handle the updated service level.

The performance impact check is only available when:

- The storage system is registered with the performance data processing option for statistics.
- The workloads have been processed.
- All the storage groups that are involved have a service level and SRP set.
- The target SRP does not contain only external disk groups.
- The storage system is local.
- The storage group is not in a masking view (only for the local provisioning wizard).

An issue with one of the selected ports can arise when provisioning storage such that a valid Front-End Suitability score cannot be derived. Examples of such issues are: a virtual port is selected, an offline port is selected, or a selected port has no negotiated speed. When an issue arises, 200.0% (not a real suitability score) is displayed. Excluding data has no impact on the displayed 200% value.

This message indicates whether the storage system can handle the updated service level. Results are indicated as either suitable or not suitable.

In both cases, results are displayed in a bar chart by component (Front End, Back End, Cache) along with a score from 0 to 100. The score can be viewed by hovering the cursor over the bar. It indicates the expected availability of the components on the target storage system after the change.

The current score for the component is shown in gray, with the additional load for the component shown in green or red indicating suitability. The additional score is red if the current and additional loads total more than 100.
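The coloring rule reduces to a simple comparison, sketched below with illustrative component loads.

# Red means the combined load exceeds the 100-point budget for a component.
def component_suitability(current, additional):
    return "red" if current + additional > 100 else "green"

for component, current, additional in [
    ("Front End", 60, 25),
    ("Back End", 85, 30),
    ("Cache", 40, 10),
]:
    print(component, component_suitability(current, additional))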

Understanding storage templates

Storage templates are a reusable set of storage requirements that simplify storage management for virtual data centers by eliminating many of the repetitive tasks that are required to create and make storage available to hosts or applications.

About this task

With this feature, Administrators and Storage Administrators can create templates for their common provisioning tasks and then invoke them later when performing tasks such as creating or provisioning storage groups.

The templates that are created on a particular Unisphere server can be used across all the arrays on that particular server.

A provisioning template contains configuration information and a performance reservation.

The performance reservation that is saved with a template is generated from a 2-week snapshot of the performance data of the source storage group. The total IOPS and MBPS, I/O mixture, and skew profile from this snapshot are used for array impact tests when the template is used to provision a new storage group.

Understanding Storage Resource Pools

A Storage Resource Pool is a collection of data pools used for capacity and performance management.

By default, a single default Storage Resource Pool is preconfigured at the factory. More Storage Resource Pools can be created with a service engagement.

Understanding volumes A storage volume is an identifiable unit of data storage. Storage groups are sets of volumes.

The Volumes view on the Unisphere user interface provides you with a single place from which to view and manage all the volume types on the system.

Understanding disk groups A disk group is a collection of hard drives within the storage system that share the same performance characteristics.

Disk groups can be viewed and managed from the Unisphere user interface.

Host Management Storage hosts are systems that use storage system LUN resources. Unisphere manages the hosts.

Host Management covers the following areas:

Management of hosts and host groups
Management of masking views - A masking view is a container of a storage group, a port group, and an initiator group, and makes the storage group visible to the host. Devices are masked and mapped automatically. The groups must contain device entries.
Management of port groups - Port groups contain director and port identification and belong to a masking view. Ports can be added to and removed from the port group. Port groups that are no longer associated with a masking view can be deleted.


Management of initiators and initiator groups - An initiator group is a container of one or more host initiators (Fibre or iSCSI). Each initiator group can contain up to 64 initiator addresses or 64 child IG names. Initiator groups cannot contain a mixture of host initiators and child IG names.

Management of PowerPath hosts
Management of mainframe configured splits, CU images, and CKD volumes

Understanding hosts Storage hosts are systems that use storage system LUN resources. A logical unit number (LUN) is an identifier that is used for labeling and designating subsystems of physical or virtual storage.

The maximum number of initiators that is allowed is 64.

Understanding masking views A masking view is a container of a storage group, a port group, and an initiator group, and makes the storage group visible to the host.

Masking views are manageable from the Unisphere user interface. Devices are masked and mapped automatically. The groups must contain device entries.
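As a rough illustration of how such a masking view can be created programmatically, the following Python sketch posts to the Unisphere REST API. The endpoint path, version segment, payload field names, array ID, and credentials are assumptions modeled on the public REST interface and may differ in your release; consult the Unisphere REST API documentation.

    import requests

    # Illustrative sketch only: combines an existing storage group, port group,
    # and initiator group (host) into a masking view. All names below are
    # hypothetical placeholders.
    BASE = "https://unisphere.example.com:8443/univmax/restapi/100/sloprovisioning"
    ARRAY_ID = "000197900123"  # hypothetical array serial number

    payload = {
        "maskingViewId": "app1_mv",
        "hostOrHostGroupSelection": {"useExistingHostParam": {"hostId": "app1_host"}},
        "portGroupSelection": {"useExistingPortGroupParam": {"portGroupId": "app1_pg"}},
        "storageGroupSelection": {"useExistingStorageGroupParam": {"storageGroupId": "app1_sg"}},
    }
    resp = requests.post(
        f"{BASE}/symmetrix/{ARRAY_ID}/maskingview",
        json=payload,
        auth=("smc_user", "smc_password"),  # hypothetical credentials
        verify=False,  # lab sketch only; validate certificates in production
    )
    resp.raise_for_status()
    print(resp.json())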

Understanding port groups Port groups contain director and port identification and belong to a masking view. Ports can be added to and removed from the port group. Port groups that are no longer associated with a masking view can be deleted.

Note the following recommendations:

Port groups should contain four or more ports.

Each port in a port group should be on a different director.

A port can belong to more than one port group. However, for storage systems running HYPERMAX OS 5977 or higher, you cannot mix different types of ports (physical FC ports, virtual ports, and iSCSI virtual ports) within a single port group.

Understanding initiators An initiator group is a container of one or more host initiators (Fibre or iSCSI or NVMe/TCP).

Each initiator group can contain up to 64 initiator addresses or 64 child IG names. Initiator groups cannot contain a mixture of host initiators and child IG names.
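These containment rules are easy to encode. The following Python sketch (illustrative only; not a Unisphere API) validates a proposed initiator group against the limits just described.

    # Rules from the text: at most 64 entries, and no mixing of host
    # initiators with child initiator-group names.
    def validate_initiator_group(initiators: list[str], child_igs: list[str]) -> None:
        if initiators and child_igs:
            raise ValueError("An initiator group cannot mix host initiators and child IGs")
        if len(initiators) > 64 or len(child_igs) > 64:
            raise ValueError("An initiator group is limited to 64 initiators or 64 child IGs")

    validate_initiator_group(["10000000c9abcdef"], [])          # OK: one Fibre initiator
    validate_initiator_group([], ["child_ig_a", "child_ig_b"])  # OK: child IGs only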

Understanding PowerPath hosts PowerPath is host-based software that provides automated data path management and load-balancing capabilities for heterogeneous server, network, and storage deployed in physical and virtual environments.

The following are the minimum requirements to perform this task:

A storage system running PowerMaxOS 5978 or higher
PowerPath 6.3


Understanding mainframe management Service level provisioning for mainframe simplifies storage system management by automating many of the tasks that are associated with provisioning storage.

The mainframe dashboard provides you with a single place to monitor and manage configured splits, CU images, and CKD volumes.

In 10.0, Solutions Enabler (SE) blocks the following operations when they are triggered from a Unisphere for PowerMax (U4P) server installed on a mainframe host (z/OS and z/Linux hosts):

Map/unmap to/from CU

Create/delete CU on all z/OS hosts

Create/delete CU on all z/Linux hosts

One of the CKD volume's properties is the physical name (also known as the volume serial number (VOLSER)). VOLSER information is displayed if running a SMAS client/server connection to z/OS, as described in Appendix C - Configuring SMAS to work in z/OS of the Unisphere for PowerMax Installation Guide.

The mainframe dashboard is organized into the following panels:

CKD Compliance - Displays how well CKD storage groups are complying with their respective service level policies, if applicable.

CKD Storage Groups - Displays the mainframe storage groups on the storage system. Double-click a storage group to see more details and information about its compliance and volumes.

Actions - Allows the user to provision storage and create CKD volumes.

Summary - Displays the mainframe summary information in terms of splits, CU images, and CKD volumes.

With the release of HYPERMAX OS 5977 Q1 2016, Unisphere introduces support for service level provisioning for mainframe. Service level provisioning simplifies storage system management by automating many of the tasks that are associated with provisioning storage.

Service level provisioning eliminates the need for storage administrators to manually assign physical resources to their applications. Instead, storage administrators specify the service level and capacity that is required for the application and the system provisions the storage group appropriately.

You can provision CKD storage to a mainframe host using the Provision Storage wizard.

The storage system must be running HYPERMAX OS 5977 Q1 2016, or higher, and have at least one FICON director configured.

You can map CKD devices to front-end EA/EF directors. Addressing on EA and EF directors is divided into Logical Control Unit images, also known as CU images. Each CU image has its own unique SSID and contains a maximum of 256 devices (numbered 0x00 through 0xFF). When mapped to an EA or EF port, a group of devices becomes part of a CU image.
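The fixed size of a CU image makes the addressing arithmetic straightforward. The following Python sketch (an illustrative helper, not a Unisphere function) maps a zero-based device index to a CU image number and a unit address within that image.

    # Each CU image holds up to 256 devices, addressed 0x00 through 0xFF.
    def cu_address(device_index: int) -> tuple[int, int]:
        cu_image = device_index // 256     # which Logical Control Unit image
        unit_address = device_index % 256  # 0x00 through 0xFF within the image
        return cu_image, unit_address

    cu, unit = cu_address(700)
    print(f"CU image {cu}, unit address 0x{unit:02X}")  # CU image 2, unit address 0xBC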

With the release of HYPERMAX OS 5977 Q2 2017, Unisphere introduces support for All Flash Mixed FBA/CKD storage systems. NOTE: This feature is only available for All Flash 450F/850F/950F storage systems that are:

Purchased as a mixed All Flash system

Installed at HYPERMAX OS 5977 Q2 2017 or later

Configured with two Storage Resource Pools - one FBA Storage Resource Pool and one CKD Storage Resource Pool

You can provision FBA/CKD storage to a mainframe host using the Provision Storage wizard.

NOTE:

1. A CKD SG can only provision from a CKD SRP.

2. An FBA SG can only provision from an FBA SRP.

3. FBA volumes cannot reside in a CKD SRP.

4. CKD volumes cannot reside in an FBA SRP.

5. Compression is only for FBA volumes.

You can map FBA devices to front-end EA/EF directors. Addressing on EA and EF directors is divided into Logical Control Unit images (CU images). Each CU image has its own unique SSID and contains a maximum of 256 devices (numbered 0x00 through 0xFF). When mapped to an EA or EF port, a group of devices becomes part of a CU image.


Data protection management Data Protection management ensures that data is protected and remains available.

Data Protection Management covers the following areas:

Management of snapshot policies
Management of MetroDR
Management of SRDF groups - SRDF groups provide a collective data transfer path linking volumes of two separate storage systems. These communication and transfer paths are used to synchronize data between the R1 and R2 volume pairs that are associated with the SRDF group. At least one physical connection must exist between the two storage systems within the fabric topology. See Dell SRDF Introduction for an overview of SRDF.
Migrations management - Enables the migration of storage group (application) data from migration-capable source arrays to migration-capable target arrays.
Management of Virtual Witness - The Witness feature supports a third party that the two storage systems consult when they lose connectivity with each other, that is, when their SRDF links go out of service. When SRDF links go out of service, the Witness helps to determine, for each SRDF/Metro session, which of the storage systems should remain active (volumes continue to be read/write to hosts) and which goes inactive (volumes not accessible).
Management of Open Replicator - Open Replicator is a software application that is used to migrate data from third-party arrays to PowerMax arrays.
Management of device groups - A device group is a user-defined group consisting of devices that belong to a locally attached array. Control operations can be performed on the group as a whole, or on the individual device pairs in the group. By default, a device can belong to more than one device group.

Understanding Snapshot policy The Snapshot policy feature provides snapshot orchestration at scale (1,024 snaps per storage group). The feature simplifies snapshot management for standard and cloud snapshots.

(Figure: basic workflow of using snapshot policies.)

Snapshots can be used to recover from data corruption, accidental deletion, or other damage, offering continuous data protection. A large number of snapshots can be difficult to manage. The Snapshot policy feature provides an end-to-end solution to create, schedule, and manage standard (local) and cloud snapshots.

The snapshot policy (Recovery Point Objective (RPO)) specifies how often the snapshot should be taken and how many of the snapshots should be retained. The snapshot may also be specified to be secure (these snapshots cannot be terminated by users before their time to live (TTL), derived from the snapshot policy's interval and maximum count, has expired). Up to four policies can be associated with a storage group, and a snapshot policy may be associated with many storage groups. Unisphere provides views and dialogs to view and manage the snapshot policies. Unisphere also calculates and reports on the compliance of each storage group to its snapshot policies.

The following rules apply to snapshot policies:

The maximum number of snapshot policies (local and cloud) that can be created on a storage system is 20.
Multiple storage groups can be associated with a snapshot policy.
A maximum of four snapshot policies can be associated with an individual storage group.
A storage group or device can have a maximum of 256 manual snapshots.
A storage group or device can have a maximum of 4000 snapshots.
When there are 4000 snapshots in existence and another snapshot is taken, the oldest unused snapshot that is associated with the snapshot policy is removed.
When devices are added to a snapshot policy storage group, snapshot policies that apply to the storage group are applied to the added devices.
When devices are removed from a snapshot policy storage group, snapshot policies that apply to the storage group are no longer applied to the removed devices.
If overlapping snapshot policies are applied to storage groups, they run and take snapshots independently.


Unisphere provides compliance information for each snapshot policy that is directly associated with a storage group. Snapshot policy compliance is measured against the count and intervals of the existing snapshots. Snapshots must be valid (must still exist, must be in a non-failed state, and must be at the expected scheduled time). A snapshot could be missing due to it being manually terminated or due to a failure in the snapshot operation.

Snapshot compliance for a storage group is taken as the lowest compliance value for any of the snapshot policies that are directly associated with the storage group.

Compliance for a snapshot policy that is associated with a storage group is based on the number of valid snapshots within the retention count. The retention count is translated to a retention period for compliance calculation. The retention period is the snapshot interval multiplied by the snapshot maximum count. For example, a one hour interval with a 30 snapshot count means a 30-hour retention period.

The compliance threshold value for green to yellow is stored in the snapshot policy definition. Once the number of valid snapshots falls below this value, compliance turns yellow.

The compliance threshold value for yellow to red is stored in the snapshot policy definition. Once the number of valid snapshots falls below this value, compliance turns red.
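The retention and threshold arithmetic described above can be summarized in a short sketch. The following Python fragment is illustrative only; the function names and sample values are assumptions, but the rules follow the text.

    # Retention period = interval x maximum count (e.g., 1 hour x 30 = 30 hours).
    def retention_hours(interval_hours: float, max_count: int) -> float:
        return interval_hours * max_count

    # Compliance turns yellow when valid snapshots fall below the green-to-yellow
    # threshold, and red when they fall below the yellow-to-red threshold.
    def compliance_color(valid_snapshots: int, yellow_threshold: int, red_threshold: int) -> str:
        if valid_snapshots < red_threshold:
            return "red"
        if valid_snapshots < yellow_threshold:
            return "yellow"
        return "green"

    print(retention_hours(1, 30))        # 30.0
    print(compliance_color(28, 29, 25))  # yellow: below 29 but not below 25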

In addition to performance level compliance, snapshot compliance is also calculated by polling the storage system once an hour for SnapVX related information for storage groups that have snapshot policies that are associated with them. The returned snapshot information is summarized into the required information for the database compliance entries.

When the maximum count of snapshots for a snapshot policy is changed, this changes the compliance for the storage group or service level combination. Compliance values are updated accordingly.

If compliance calculation is performed during the creation of a snapshot, an establish-in-progress state may be detected. This is acceptable for the most recent snapshot but is considered failed for any older snapshot.

When a storage group and service level have only recently been associated and the full maximum count of snapshots has not yet been reached, Unisphere scales the calculation to the number of snapshots that are available and represents compliance accordingly until the full maximum count of snapshots has been reached. If a snapshot failed to be taken for any reason (for example, the storage group or service level was suspended, or a snapshot was manually terminated before the maximum snapshot count was reached), the storage group is reported as out of compliance appropriately.

When the service level interval is changed, the compliance window changes, and the required number of snapshots may not exist for correct compliance.

If a service level is suspended or a storage group or service level combination is suspended, snapshots are not created. Older snapshots fall outside the compliance window, and the maximum count of required snapshots is not found.

Manual termination of snapshots inside the compliance window results in the storage group or service level combination falling out of compliance.

Configuration of alerts related to snapshot policies is available from Settings > Alerts > Alert Policies on the Unisphere user interface.

NOTE: Snapshot policy offsets (the execution time within the RPO interval) and snapshot time stamps are both mapped to be relative to the clock (including time zone) of the local management host. If times are not synchronized across hosts, these appear different to users on those hosts. Even if they are synchronized, rounding that occurs during time conversion may result in the times being slightly different.

Unisphere supports the following snapshot policy management tasks:

Create snapshot policies.
View and modify snapshot policies.
Associate a snapshot policy and a storage group with each other.
Disassociate a snapshot policy and a storage group from each other.
View snapshot policy compliance.
Suspend or resume snapshot policies.
Suspend or resume snapshot policies that are associated with one, more than one, or all storage groups.
Set a snapshot policy snapshot to be persistent.
Bulk terminate snapshots (not specific to snapshots associated with a snapshot policy).
Delete snapshot policies.
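As a rough illustration of automating the first two tasks in the list above, the following Python sketch creates a snapshot policy and associates it with a storage group through the Unisphere REST API. The resource paths, body fields, array ID, and credentials are assumptions modeled on the public REST interface; verify them against the REST API documentation for your release.

    import requests

    # Illustrative sketch only; all names below are hypothetical placeholders.
    BASE = "https://unisphere.example.com:8443/univmax/restapi/100/replication"
    ARRAY_ID = "000197900123"  # hypothetical array serial number
    AUTH = ("smc_user", "smc_password")  # hypothetical credentials

    create_body = {
        "snapshot_policy_name": "hourly_policy",
        "interval": "1 Hour",   # RPO: how often a snapshot is taken
        "snapshot_count": 30,   # how many snapshots are retained
        "offset_mins": 0,       # execution time within the RPO interval
    }
    requests.post(f"{BASE}/symmetrix/{ARRAY_ID}/snapshot_policy",
                  json=create_body, auth=AUTH, verify=False).raise_for_status()

    # Associate the new policy with an existing storage group.
    modify_body = {"action": "AssociateToStorageGroups",
                   "associate_to_storage_group_param": {"storage_group_name": ["app1_sg"]}}
    requests.put(f"{BASE}/symmetrix/{ARRAY_ID}/snapshot_policy/hourly_policy",
                 json=modify_body, auth=AUTH, verify=False).raise_for_status()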


Understanding SRDF/Metro Smart DR SRDF/Metro Smart DR is a two-region, highly available (HA) disaster recovery (DR) solution. It integrates SRDF/Metro and SRDF/A, enabling HA DR for a Metro session.

A session or environment name uniquely identifies each Smart DR environment. It is composed of three arrays (MetroR1 array, MetroR2 array, DR array). All arrays contain the same number of devices, and all device pairings form a triangle.

The MetroR1 array contains:

One Metro SRDF Group that is configured to the MetroR2 array (MetroR1_Metro_RDFG)
One DR SRDF Group that is configured to the DR array (MetroR1_DR_RDFG)
Devices that are concurrent SRDF and are paired using MetroR1_Metro_RDFG and MetroR1_DR_RDFG

The MetroR2 array contains:

One Metro SRDF Group that is configured to the MetroR1 array (MetroR2_Metro_RDFG)
One DR SRDF Group that is configured to the DR array (MetroR2_DR_RDFG)
Devices that are concurrent SRDF and are paired using MetroR2_Metro_RDFG and MetroR2_DR_RDFG

The DR array contains one DR SRDF Group that is configured to the MetroR1 array (DR_MetroR1_RDFG).

Unisphere supports the setup, monitoring, and management of a smart DR configuration using both UI and REST API.

Unisphere blocks attempts at using smart DR SRDF groups for other replication sessions, and also blocks certain active management on smart DR SRDF groups, including device expansion and adding new devices. This limitation can be overcome by temporarily deleting the Smart DR environment to perform these operations. Replication is never suspended so Recovery Point Objective (RPO) is not affected.

Unisphere blocks attempts at SRDF active management of storage groups that are part of a smart DR environment.

Manage remote replication sessions Unisphere supports the monitoring and management of SRDF replication on storage groups directly without having to map to a device group.

The SRDF dashboard provides a single place to monitor and manage SRDF sessions on a storage system, including device group types R1, R2, and R21.

See Dell SRDF Introduction for an overview of SRDF.

Unisphere allows you to monitor and manage SRDF/Metro from the SRDF dashboard. SRDF/Metro delivers active/active high availability for non-stop data access and workload mobility within a data center and across metro distance. It provides array clustering for storage systems enabling even more resiliency, agility, and data mobility. SRDF/Metro enables hosts and host clusters to directly access a LUN or storage group on the primary SRDF array and secondary SRDF array (sites A and B). This level of flexibility delivers the highest availability and best agility for rapidly changing business environments.

In a SRDF/Metro configuration, SRDF/Metro uses the SRDF link between the two sides of the SRDF device pair to ensure consistency of the data on the two sides. If the SRDF device pair becomes Not Ready (NR) on the SRDF link, SRDF/Metro must respond by choosing one side of the SRDF device pair to remain accessible to the hosts, while making the other side of the SRDF device pair inaccessible. There are two options which enable this choice: Bias and Witness.

The first option, Bias, is a function of the two storage systems taking part in the SRDF/Metro configuration and is a required and integral component of the configuration. The second option, Witness, is an optional component of SRDF/Metro which allows a third storage system to act as an external arbitrator to avoid an inconsistent result in cases where the bias functionality alone may not result in continued host availability of a surviving nonbiased array.

Understanding SRDF groups SRDF groups provide a collective data transfer path linking volumes of two separate storage systems. These communication and transfer paths are used to synchronize data between the R1 and R2 volume pairs that are associated with the SRDF group. At least one physical connection must exist between the two storage systems within the fabric topology.


See Dell SRDF Introduction for an overview of SRDF.

The maximum number of supported SRDF groups differs by version:

OS              Per storage system   Per director   Per port   Group numbers
5977 or 5978    250                  250            250        1-250
10 (6079)       2000                 2000           2000       1-2048

NOTE: If both arrays are running PowerMaxOS 10 (6079), up to 2000 SRDF groups can be defined across all the ports on a specific SRDF director, or up to 2000 SRDF groups can be defined on one port on a specific SRDF director. A port on an array running PowerMaxOS 10 (6079) connected to an array running HYPERMAX OS 5977 or PowerMaxOS 5978 supports a maximum of 250 SRDF groups.
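The note reduces to a simple rule: the effective limit is governed by the older side of the connection. The following Python sketch (illustrative only, not a Solutions Enabler function) encodes it.

    # Effective SRDF group limit for a connection between two arrays,
    # per the note above: 2000 only when both sides run PowerMaxOS 10 (6079).
    def max_srdf_groups(local_os: str, remote_os: str) -> int:
        newer = {"PowerMaxOS 10 (6079)"}
        if local_os in newer and remote_os in newer:
            return 2000
        return 250  # HYPERMAX OS 5977 / PowerMaxOS 5978 on either side

    print(max_srdf_groups("PowerMaxOS 10 (6079)", "PowerMaxOS 5978"))  # 250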

When specifying a local or remote director for a storage system, you can select one or more SRDF ports.

SRDF group modes

SRDF groups provide a collective data transfer path linking volumes of two separate storage systems.

The following values can be set for SRDF groups:

Synchronous - This setting provides the host access to the source (R1) volume on a write operation only after the storage system containing the target (R2) volume acknowledges that it has received and checked the data.

Asynchronous - The storage system acknowledges all writes to the source (R1) volumes as if they were local volumes. Host writes accumulate on the source (R1) side until the cycle time is reached and are transferred to the target (R2) volume in one delta set. Write operations to the target volume can be confirmed when the current SRDF/A cycle commits the data to disk by successfully de-staging it to the R2 storage volumes.

Semi Synchronous - The storage system containing the source (R1) volume informs the host of successful completion of the write operation when it receives the data. The SRDF (RA) director transfers each write to the target (R2) volume as the SRDF links become available. The storage system containing the target (R2) volume checks and acknowledges receipt of each write.

AC WP Mode On (adaptive copy write pending) - The storage system acknowledges all writes to the source (R1) volume as if it was a local volume. The new data accumulates in cache until it is successfully written to the source (R1) volume and the remote director has transferred the write to the target (R2) volume.

AC Disk Mode On - This setting is used for situations requiring the transfer of large amounts of data without loss of performance. Use this mode to temporarily transfer the bulk of your data to target (R2) volumes, then switch to synchronous or semi synchronous mode.

Domino Mode On - Ensures that the data on the source (R1) and target (R2) volumes is always synchronized. The storage system forces the source (R1) volume to a Not Ready state to the host whenever it detects that one side in a remotely mirrored pair is unavailable.

Domino Mode Off - The remotely mirrored volume continues processing I/O operations with its host, even when an SRDF volume or link failure occurs.

AC Mode Off - Turns off the AC disk mode.

AC Change Skew - Modifies the adaptive copy skew threshold. When the skew threshold is exceeded, the remotely mirrored pair operates in the predetermined SRDF state (synchronous or semi-synchronous). When the number of invalid tracks drops below this value, the remotely mirrored pair reverts to the adaptive copy mode.

(R2 NR If Invalid) On - Sets the R2 device to Not Ready when there are invalid tracks.

(R2 NR If Invalid) Off - Turns off the (R2 NR If Invalid) On mode.

SRDF session modes

SRDF transparently remotely mirrors production or primary (source) site data to a secondary (target) site to users, applications, databases, and host processors.

Adaptive Copy - This mode allows the source (R1) volume and target (R2) volume to be out of synchronization by a number of I/O operations that is defined by a skew value.

Adaptive Copy Disk Mode - Data is read from the disk, and the unit of transfer across the SRDF link is the entire track. While less global memory is consumed, it is typically slower to read data from disk than from global memory. More bandwidth is also used because the unit of transfer is the entire track, and because it is slower to read data from disk than from global memory, device resynchronization time increases.

Adaptive Copy WP Mode - The unit of transfer across the SRDF link is the updated blocks rather than an entire track, resulting in more efficient use of SRDF link bandwidth. Data is read from global memory instead of disk, thus improving overall system performance. However, the global memory is temporarily consumed by the data until it is transferred across the link.

Synchronous - This mode provides the host access to the source (R1) volume on a write operation only after the storage system containing the target (R2) volume acknowledges that it has received and checked the data.

Asynchronous - The storage system acknowledges all writes to the source (R1) volumes as if they were local devices. Host writes accumulate on the source (R1) side until the cycle time is reached and are transferred to the target (R2) volume in one delta set. Write operations to the target device can be confirmed when the current SRDF/A cycle commits the data to disk by successfully de-staging it to the R2 storage volumes.

AC Skew - Adaptive copy skew sets the number of tracks per volume that the source volume can be ahead of the target volume. Values are 0-65535.

SRDF session options

SRDF transparently remotely mirrors production or primary (source) site data to a secondary (target) site to users, applications, databases, and host processors.

The following session options are available. Each entry gives the option's description, followed by the actions the option is available with.

Bypass - This option bypasses the exclusive locks for the local or remote storage system during SRDF operations. Use this option only if you are sure that no other SRDF operation is in progress on the local or remote storage systems. Available with: Establish, Failback, Failover, Restore, Incremental Restore, Split, Suspend, Swap, Write Disable R1, Ready R1, Ready R2, RWDisableR2, Enable, Disable.

Consistent - This option allows only a consistent transition from async to sync mode. Available with: Activate.

Consistency Exempt - This option allows you to add or remove volumes from an SRDF group that is in Async mode without requiring other volumes in the group to be suspended. Available with: Half Move, Move, Suspend.

Establish - This option fails over the volume pairs, performs a dynamic swap, and incrementally establishes the pairs. This option is not supported when volumes operating in Asynchronous mode are read/write on the SRDF link. To perform a failover operation on such volumes, specify the Restore option. Available with: Failover.

Force - This option overrides any restrictions and forces the operation, even though one or more paired volumes may not be in the expected state. Use caution when checking this option because improper use may result in data loss. Available with: Establish, Incremental Establish, Restore, Incremental Restore, Write Disable R1, Ready R1, Ready R2, RWDisableR2, Enable, Disable, Swap.

Immediate - This option causes the suspend, split, and failover actions on asynchronous volumes to happen immediately. Available with: Suspend, Split, Failover.

NoWD - The no write disable option bypasses the check to ensure that the target of the operation is write disabled to the host. This option applies to the source (R1) volumes when used with the Invalidate R1 option and to the target (R2) volumes when used with the Invalidate R2 option.

SymForce - This option forces an operation on the volume pair, including pairs that would be rejected. Use caution when checking this option because improper use may result in data loss. Available with: Restore, Incremental Restore, Write Disable R1, Ready R1, Ready R2, RWDisableR2, Enable, Disable, Swap.

Refresh R1 - This option marks any changed tracks on the source (R1) volume to be refreshed from the target (R2) side. Available with: Swap.

Refresh R2 - This option marks any changed tracks on the target (R2) volume to be refreshed from the source (R1) side. Available with: Swap.

Remote - This option is used when performing a restore or failback action with the concurrent link up; data that is copied from the R2 to the R1 is also copied to the concurrent R2. These actions require this option. Available with: Restore, Incremental Restore, Failback.

Restore - When the failover swap completes, invalid tracks on the new R2 side (formerly the R1 side) are restored to the new R1 side (formerly the R2 side). When used together with the Immediate option, the failover operation immediately deactivates the SRDF/A session without waiting two cycle switches for the session to terminate. Available with: Failover.

SRDF/A control actions

SRDF transparently remotely mirrors production or primary (source) site data to a secondary (target) site to users, applications, databases, and host processors.

Activate DSE - Activates the SRDF/A Delta Set Extension feature. This feature extends the available cache space by using device SAVE pools.

Activate Write Pacing - Extends the availability of SRDF/A by preventing conditions that result in cache overflow on both the R1 and R2 sides. Write pacing can be activated at three levels:

Group Write Pacing - Activates SRDF/A write pacing at the group level.
Group and Volume Write Pacing - Activates SRDF/A write pacing at the group level and the volume level.
Volume Write Pacing - Activates SRDF/A write pacing at the volume level.

Activate Write Pacing Exempt - Activates write pacing exempt, which allows you to remove a volume from write pacing.

SRDF group SRDF/A flags

SRDF groups provide a collective data transfer path linking volumes of two separate storage systems.

Flag - Status values

(C) Consistency - X = Enabled, . = Disabled, - = N/A
(S) Status - A = Active, I = Inactive, - = N/A
(R) RDFA Mode - S = Single-session, M = MSC, - = N/A
(M) MSC Cleanup - C = MSC Cleanup required, - = N/A
(T) Transmit Idle - X = Enabled, . = Disabled, - = N/A
(D) DSE Status - A = Active, I = Inactive, - = N/A
(A) DSE Autostart - X = Enabled, . = Disabled, - = N/A

Understanding migration Unisphere supports a migration application to provide a method for migrating data from a source storage system to a target storage system.

Non-Disruptive Migration (NDM) provides a method for migrating data from a source storage system to a target storage system without application host downtime. NDM enables you to migrate storage group (application) data non-disruptively; the storage groups must have masking views.

Minimally disruptive migration enables migrations on the same supported platforms as non-disruptive migration but requires a short application outage. The outage is required because the non-disruptive nature of migration depends heavily on the behavior of multi-pathing software to detect and enable or disable paths, which is not in the control of Dell (except for PowerPath).

PowerMax data mobility is a migration tool (leveraging the NDM interface) that streamlines data mobility from VMAX, PowerMax and competitive arrays to arrays running PowerMaxOS 10 (6079).

Additional migration information is available in the Solutions Enabler Array Controls and Management CLI User Guide and the Non-Disruptive Migration Best Practices and Operational Guide. Source side service levels are automatically mapped to target side service levels.

Non-Disruptive Migration applies to open systems or FBA devices only.

Non-Disruptive Migration supports the ability to reduce data on all-flash storage systems while migrating.

A Non-Disruptive Migration session can be created on a storage group containing session target volumes (R2s) where the SRDF mode is synchronous. The target volumes of a Non-Disruptive Migration session may also have a SRDF/Synchronous session that is added after the Non-Disruptive Migration session is in the cutover sync state.

Suggested best practices

Try to migrate during slow processing times; QoS can be used to throttle the copy rate.
Use more SRDF links, if possible, to minimize impact: two is the minimum number of SRDF links allowed, and Non-Disruptive Migration can use up to eight SRDF links. More links = more IOPS, lower response time.
Use dedicated links as they yield more predictable performance than shared links.

You can migrate masked storage groups where the devices can also be in other storage groups. Examples of overlapping storage devices include:

Storage groups with the exact same devices, for example, SG-A has devices X, Y, Z; SG-B has devices X, Y, Z.
Devices that overlap, for example, SG-A has devices X, Y, Z; SG-B has devices X, Y.
Storage groups where there is overlap with one other migrated SG, for example, SG-A has devices X, Y, Z; SG-B has devices W, X, Y; SG-C has devices U, V, W.


The following migration tasks can be performed from Unisphere:

Setting up a migration environment - Configures source and target storage system infrastructure for the migration process.
Viewing migration environments
Creating a migration session - Duplicates the application storage environment from the source storage system to the target array.
Viewing migration sessions
Viewing migration session details
Cutting over a migration session - Switches the application data access from the source storage system to the target storage system and duplicates the application data on the source storage system to the target storage system.
Optional: Stopping or starting data synchronization after migration cutover - Stops or starts the synchronization of writes to the target storage system back to the source array. When stopped, the application runs on the target storage system only.
Canceling a migration session - Cancels a migration that has not yet been committed.
Committing a migration session - Removes application resources from the source storage system and releases the resources that are used for migration. The application permanently runs on the target array.
Recovering a migration session - Recovers a migration process following an error.
Removing a migration environment - Removes the migration infrastructure.

Understanding Virtual Witness The Virtual Witness feature supports a third party that the two storage systems consult when they lose connectivity with each other, that is, their SRDF links go out of service.

When SRDF links go out of service, the Witness helps to determine, for each SRDF/Metro session, which of the storage systems should remain active (volumes continue to be read/write to hosts) and which goes inactive (volumes not accessible).

For additional information about vWitness, see the Dell SRDF/Metro vWitness Configuration Guide.

The following vWitness tasks can be performed from Unisphere:

Adding a Virtual Witness
Viewing Virtual Witness instances
Viewing Virtual Witness instance details
Enabling a Virtual Witness
Disabling a Virtual Witness
Removing a Virtual Witness

Understanding Open Replicator Open Replicator is a non-disruptive migration and data mobility application.

When the Open Replicator control volumes are on a storage system running HYPERMAX OS 5977 or higher, the following session options cannot be used:

Push
Differential
Precopy

There are many rules and limitations for running Open Replicator sessions. Refer to the Solutions Enabler Migration CLI Product Guide before creating a session. For a quick reference, see Open Replicator session options.

Open Replicator session options Open Replicator is a non-disruptive migration and data mobility application.

Depending on the operation that you are performing, some of the following options may not apply.

Each entry below shows the session option, the UI operation it is used with, and a description.

Consistent (Activate) - This option causes the volume pairs to be consistently activated.

Consistent (Donor Update Off) - Consistently stops the donor update portion of a session and maintains the consistency of data on the remote volumes.

Copy (Create) - Volume copy takes place in the background. This behavior is the default for both pull and push sessions.

Cold (Create) - The control volume is write-disabled to the host while the copy operation is in progress. A cold copy session can be created provided one or more directors discovers the remote device.

Differential (Create) - This option creates a one-time full volume copy. Only sessions that are created with the differential option can be recreated. For push operations, this option is selected by default. For pull operations, this option is cleared by default (no differential session).

Donor Update (Create) - This option causes data that is written to the control volume during a hot pull to also be written to the remote volume.

Donor Update (Incremental Restore) - Maintains a remote copy of any newly written data while the Open Replicator session is restoring.

Force (Terminate, Restore, Donor Update Off) - Select the Force option if the copy session is in progress. This option allows the session to continue to copy in its current mode without donor update.

Force Copy (Activate) - This option overrides any volume restrictions and allows a data copy. For a push operation, the remote capacity must be equal to or larger than the control volume extents, and conversely for a pull operation. The exception to this rule is when you have pushed data to a remote volume that is larger than the control volume and you want to pull the data back; in this case, you can use the Force Copy option.

Front-End Zero Detection (Create) - This option enables front-end zero detection for thin control volumes in the session. Front-end zero detection looks for incoming zero patterns from the remote volume and, instead of writing the incoming data of all zeros to the thin control volume, deallocates the group on the thin volume.

Hot (Create) - Hot copying allows the control device to be read/write online to the host while the copy operation is in progress. All directors that have the local devices mapped are required to participate in the session. A hot copy session cannot be created unless all directors can discover the remote device.

Nocopy (Activate) - Temporarily stops the background copying for a session by changing the state from CopyInProg to CopyOnAccess or CopyOnWrite.

Pull (Create) - A pull operation copies data to the control device from the remote device.

Push (Create) - A push operation copies data from the control volume to the remote volume.

Precopy (Create, Recreate) - For hot push sessions only, begins immediately copying data in the background before the session is activated.

SymForce (Terminate) - Forces an operation on the volume pair, including pairs that would be rejected. Use caution when checking this option because improper use may result in data loss.

Understanding device groups A device group is a user-defined group that consists of devices that belong to a locally attached array. Control operations can be performed on the group as a whole, or on the individual device pairs in the group. By default, a device can belong to more than one device group.

The user can create a legacy TF emulation from source devices with a SnapVX snapshot. The prerequisites are:

A SnapVX storage group with a snapshot must exist.
A device group must already have been created from this storage group.
The device group must also have enough candidate target devices to create the required TF emulation session.

Understanding TimeFinder/Mirror sessions TimeFinder/Mirror is a business continuity solution that enables the use of special business continuance volume (BCV) devices. Copies of data from a standard device (which are online for regular I/O operations from the host) are sent and stored on BCV devices to mirror the primary data. Uses for the BCV copies can include backup, restore, decision support, and applications testing. Each BCV device has its own host address, and is configured as a stand-alone device.

On storage systems running HYPERMAX OS 5977 or PowerMaxOS 5978, TimeFinder/Mirror operations are mapped to their TimeFinder/SnapVX equivalents.

TimeFinder operations are not supported on Open Replicator control volumes on storage systems running HYPERMAX OS 5977 or PowerMaxOS 5978.

The TimeFinder/Mirror dashboard provides a single place to monitor and manage TimeFinder/Mirror sessions on a storage system.

Understanding TimeFinder SnapVX TimeFinder SnapVX is a local replication solution that is designed to nondisruptively create point-in-time copies (snapshots) of critical data.

TimeFinder SnapVX creates snapshots by storing changed tracks (deltas) directly in the Storage Resource Pool of the source volume. With TimeFinder SnapVX, you do not need to specify a target volume and source/target pairs when you create a snapshot. If the application ever needs to use the point-in-time data, you can create links from the snapshot to one or more target volumes. If there are multiple snapshots and the application needs to find a particular point-in-time copy for host access, you can link and relink until the correct snapshot is located.
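As a rough illustration of the create-then-link flow, the following Python sketch takes a SnapVX snapshot of a storage group and later links it to a target storage group through the Unisphere REST API. The resource paths, field names, snapshot ID, array ID, and credentials are assumptions modeled on the public REST interface; verify them against the documentation for your release.

    import requests

    # Illustrative sketch only; all names below are hypothetical placeholders.
    BASE = "https://unisphere.example.com:8443/univmax/restapi/100/replication"
    ARRAY_ID = "000197900123"  # hypothetical array serial number
    AUTH = ("smc_user", "smc_password")  # hypothetical credentials

    # 1. Create the snapshot; no target volumes are needed at this point.
    requests.post(
        f"{BASE}/symmetrix/{ARRAY_ID}/storagegroup/app1_sg/snapshot",
        json={"snapshotName": "nightly"},
        auth=AUTH, verify=False,
    ).raise_for_status()

    # 2. Later, link the point-in-time copy to a target storage group for host access.
    requests.put(
        f"{BASE}/symmetrix/{ARRAY_ID}/storagegroup/app1_sg/snapshot/nightly/snapid/0",
        json={"action": "Link", "link": {"linkStorageGroupName": "app1_sg_restore"}},
        auth=AUTH, verify=False,
    ).raise_for_status()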


Understanding Performance Management The Unisphere Performance Management application enables the user to gather, view, and analyze performance data to troubleshoot and optimize the storage systems.

Performance Management covers the following areas:

Dashboards - Display predefined dashboards, user-defined custom dashboards, and templates.
Charts - Create custom charts across multiple categories, metrics, time, or intervals.
Analyze - Provide in-depth analysis on storage system data for various collection ranges.
Heatmap - Display hardware instances represented as colored squares, with the color indicating utilization levels.
Reports - Create, manage, and run performance reports.
Real-Time Traces - Create, manage, and run performance real-time traces.
Databases - Manage performance database tasks, for example, back up, restore, and delete, and also individual performance database information.
Plan - Provide performance projection capacity dashboards displaying predicted future data that is based on linear projection.

Understanding Unisphere support for VMware Unisphere supports the discovery of vCenters or ESXi servers (using a read-only user) and integrates the information into Unisphere. VMware information is connected to its storage extents, and this enables seamless investigation of any storage-related issues.

Unisphere support for VMware provides the storage admin with access to all the storage-related objects relevant to an ESXi server and also helps in troubleshooting storage performance-related issues on the ESXi server.

You can, as a read-only user, discover at the vCenter level or discover an individual ESXi server. If a vCenter is discovered, all ESXi servers under that vCenter are discovered. All ESXi servers that do not have local storage on the Unisphere server performing the discovery are filtered out.

Once the user adds VMware information, all other users of Unisphere can access this information.

The minimum supported vCenter version is 5.5.

Understanding eNAS Embedded NAS (eNAS) integrates the file-based storage capabilities of VNX arrays into storage systems running HYPERMAX OS 5977 or PowerMaxOS 5978.

With this integrated storage solution, the Unisphere StorageAdmin provisions storage to eNAS data movers, which trigger the creation of storage pools in VNX. Then, users of Unisphere for VNX can use the storage pools for file-level provisioning, for example, creating file systems and file shares.

Unisphere provides the following features to support eNAS:

File System dashboard - Provides a central location from which to monitor and manage integrated VNX file services.

Provision Storage for File wizard - Allows you to provision storage to eNAS data movers.

Launch Unisphere for VNX - Allows you to link and launch Unisphere for VNX.

Understanding iSCSI Unisphere provides monitoring and management for Internet Small Computer Systems Interface (iSCSI) directors, iSCSI ports, iSCSI endpoints, IP interfaces, and IP routes.

iSCSI is a protocol that uses TCP to transport SCSI commands, enabling the use of the existing TCP/IP networking infrastructure as a SAN. As with SCSI over Fibre Channel (FC), iSCSI presents SCSI endpoints and devices to iSCSI initiators (requesters). Unlike NAS, which presents devices at the file level, iSCSI makes block devices available from the network. Block devices are presented across an IP network to your local system, and can be consumed in the same way as any other block storage device.

The iSCSI changes address the market needs originating from the cloud and service provider space, where a slice of infrastructure, for example, compute, network, and storage, is assigned to different users (tenants). Control and isolation of resources in this environment is achieved by the iSCSI changes. More traditional IT enterprise environments also benefit from this functionality. The changes also provide greater scalability and security.

Understanding Cloud Mobility for Dell PowerMax Unisphere uses PowerMax Cloud Mobility functionality to enable you to move snapshots off the storage system and on to the cloud. The snapshots can also be restored back to the original storage system.

Cloud Mobility is available only on Embedded Management instances of Unisphere for PowerMax.

The Unisphere UI supports the following operations:

Set up, resolve, and remove a cloud system.
Configure and view cloud-related network configuration (interfaces, teams, routes, and DNS servers).
Configure and view cloud providers.
View active cloud jobs.
Configure and view scheduled snapshots of a storage group using the snapshot policy functionality.
View and manage snapshots of a storage group that are archived or are being archived to the cloud.
Create a snapshot for a storage group and archive this snapshot to a selected cloud provider.
View cloud snapshots for a selected storage group.
View and delete array cloud snapshots.
Back up a cloud configuration.
Restore a cloud snapshot to the same storage array from which it was taken.
Restore a cloud snapshot to a different storage array.
Manage the cloud system certificates.
Set bandwidth limits.
View cloud alerts.
View cloud statistics.

For more information about Cloud Mobility for Dell Storage, see the following resources:

Cloud Mobility for Dell PowerMax White Paper
Cloud Mobility for Dell PowerMax Overview on YouTube
Dell Cloud Mobility for Storage Guide, available on the VMware Marketplace and AWS Marketplace

Understanding NVMe/TCP Unisphere provides monitoring and management for NVMe/TCP directors, NVMe/TCP ports, NVMe/TCP endpoints, IP interfaces, and IP routes.

NVMe/TCP is an NVMe-oF technology that supports high performance with lower deployment costs and reduced design complexity. NVMe/TCP defines a storage networking fabric for the NVMe block storage protocol and provides the capability of extending NVMe across the entire data center.

The Unisphere UI supports the following operations (on storage systems running PowerMaxOS 10 (6079)):

View the NVMe dashboard.
Configure NVMe/TCP using a wizard.
View NVMe/TCP directors.
View and configure NVMe/TCP endpoints.
View NVMe/TCP ports.
View and configure IP routes.
View and configure IP interfaces.


Understanding PowerMax File for storage systems Network-attached storage is a file-level storage architecture that makes stored data more accessible to networked devices.

A storage system running PowerMaxOS 10 (6079) with embedded Unisphere supports software-defined network-attached storage (PowerMax File).

PowerMax File supports a file services architecture that provides a reliable, high performance, highly available, and highly scalable system. PowerMax File runs as a container instance inside each file guest, offered based on the customer configuration.

Unisphere in the Embedded Element Manager (EEM) manages a single instance of PowerMax File that is present locally. Unisphere installed on an external host does not manage any PowerMax File instances.

PowerMax File uses virtualized Network-Attached Storage (NAS) servers that use the SMB, NFS, and FTP protocols to catalog, organize, and transfer files within file system shares and exports.

A NAS server, the basis for multi-tenancy, must be created before you can create file-level storage resources. A NAS server is responsible for the configuration parameters on the set of file systems that it serves.

Network File System (NFS) is an access protocol that enables users to access files and folders on a network. You can create an NFS export to make file system paths on your storage system available for mounting by NFS clients.

Server Message Block (SMB) is an access protocol that allows remote file data access from clients to hosts on a network. An SMB share, also known as an SMB file share, is a shared resource on an SMB server.

VLAN and Jumbo frames are not supported for File services.

Services such as anti-virus, scheduled snapshots, and Network Data Management Protocol (NDMP) backups ensure that the data on the file systems is well protected.

The high-level sequence of tasks to configure PowerMax File for storage systems running PowerMaxOS 10 (6079) is as follows:

1. Configure subnets (subnet configuration is required in order to create a NAS server).
2. Create a NAS server (a NAS server is required in order to create a file system).
3. Create file systems (file systems enable you to partition data for your users).
4. Protect file systems (optionally protect file system data using snapshots or replication).
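As a rough illustration of the first three steps performed programmatically, the following Python sketch creates a NAS server, a file system, and an NFS export through REST calls. The file resource paths and body fields are assumptions, not confirmed by this guide; the Unisphere UI steps that follow are the documented route.

    import requests

    # Illustrative sketch only; all names below are hypothetical placeholders.
    BASE = "https://unisphere.example.com:8443/univmax/restapi/100/file"
    ARRAY_ID = "000197900123"  # hypothetical array serial number
    AUTH = ("smc_user", "smc_password")  # hypothetical credentials

    def post(path: str, body: dict) -> dict:
        resp = requests.post(f"{BASE}/symmetrix/{ARRAY_ID}/{path}",
                             json=body, auth=AUTH, verify=False)
        resp.raise_for_status()
        return resp.json()

    post("nas_server", {"name": "nas01"})                    # step 2: NAS server
    post("file_system", {"name": "fs01", "nas_server": "nas01",
                         "size_total": 100 * 2**30})         # step 3: 100 GiB file system
    post("nfs_export", {"name": "export01", "file_system": "fs01",
                        "path": "/fs01"})                    # make fs01 mountable by NFS clients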

The detailed sequence of tasks to configure PowerMax File is as follows:

1. View the node inventory (under System > File Configuration > Node Inventory tab).

2. Configure subnets. Create and configure bond devices (under System > File Configuration > Network Devices tab).

3. Create and configure NAS servers (under Storage > File). Other properties and controls that are associated with NAS servers are in the following tabs:

Details
Network > File Interface
Network > Routes to external services
Sharing Protocols > SMB Server
Sharing Protocols > NFS Server
Sharing Protocols > FTP
Sharing Protocols > User Mapping
Naming services > DNS
Naming services > UDS
Naming services > Local Files
Security > Antivirus
Security > Kerberos
Data protection > Snapshot policy
Data protection > Replication
Backup & Events > NDMP - You can configure standard backup for the NAS servers using Network Data Management Protocol (NDMP). NDMP provides a standard for backing up file servers on a network.
Backup & Events > DHSM - Distributed Hierarchical Storage Management (DHSM) supports file-based archiving.
Backup & Events > Events pool
Nodes

4. Create and configure file systems (under Storage > File). Other properties and controls that are associated with file systems are in the following tabs:

Details
Snapshots - Snapshots can be used for restoring individual files or the entire file system back to a previous point in time.
User Quotas - Quota management allows you to place limits on the amount of space that can be consumed in order to regulate file system storage consumption. User quotas are set at a file system level and limit the amount of space a user may consume on a file system.
Tree Quotas - Quota trees limit the maximum size of a directory on a file system. Unlike user quotas, which are applied and tracked on a user-by-user basis, quota trees are applied to directories within the file system.
File Level Retention - When a snapshot is created, it can be configured to have no automatic deletion or retention until a specific date and time. If retention is set, the snapshot is automatically deleted upon reaching the retention date. This does not prevent the snapshot from being deleted before the retention date.

5. Create and configure SMB shares (under Storage > File > SMB Shares).

6. Create and configure NFS exports (under Storage > File > NFS Exports). You can create an NFS export to make file system paths on your storage system available for mounting by NFS clients.

7. Create global namespaces (under Storage > File > Global Namespaces).

8. Create and configure snapshot policies (under Data Protection > File Protection > Snapshot Policies).

9. Create and configure replication (under Data Protection > File Protection > Replication). Properties and controls that are associated with replication are:

File control network
Remote connection
Replication sessions

Understanding serviceability Unisphere provides a serviceability application that supports the deployment of update packages for PowerMax embedded applications.

About this task

The Unisphere for PowerMax Serviceability application integrates functionality that was previously supported by the stand-alone vApp Manager application. The functionality enables the user to:

View the version information and status of applications and products.
Configure the NTP server.
Modify external NAT IP addresses.
Download logs that record the changes that are made by the Serviceability application.
Configure Solutions Enabler base and service settings.
Set symavoid entries on both instances on an embedded system.
Control service access.
Perform operations on the VASA container.
Configure VASA Provider settings for VASA containers.
Manage certificates for SE/vWitness.
Configure nethost entries.
Manage certificates for Unisphere.
Download performance reports, performance databases, and system databases.

Understanding PowerMax software system profiles and compliance A PowerMax software system profile is a subset of system settings that are associated with at least one storage system (array). The settings are defined once as part of a system profile, and these settings can be applied to one or more storage systems.

This feature supports the creation of a software profile that can be applied to one or more storage systems running PowerMaxOS 10 (6079) that are local to the managing Unisphere for PowerMax. The profile covers a number of alert and performance settings. The compliance part of the feature relates to monitoring of changes that occur, within a certain user-specified time window, to any local storage system (running PowerMaxOS 10 (6079)) that is part of a profile.

The Unisphere UI supports the following operations (on storage systems running PowerMaxOS 10 (6079)):

View the system profiles dashboard.
Create, modify, and delete system profiles.
Apply a system profile.
Add a storage system to a profile.
View compliance status.
Create, modify, and delete a change control window.
View audit log records for changes outside the change control window.
Import, export, and clone a change control window.

Where to get help The Dell Technologies Support site (https://www.dell.com/support) contains important information about products and services including drivers, installation packages, product documentation, knowledge base articles, and advisories.

A valid support contract and account might be required to access all the available information about a specific Dell Technologies product or service.

Your comments

Your suggestions help us continue to improve the accuracy, organization, and overall quality of the user publications. Send your feedback to content feedback.


Notes, cautions, and warnings

NOTE: A NOTE indicates important information that helps you make better use of your product.
