Dell EMC Unisphere for PowerMax Product Guide 9.2.0

September 2020

Contents

Overview of Unisphere........................................................................................................................................................................................... 3

Capacity information................................................................................................................................................................................................4

Login authentication................................................................................................................................................................................................ 4

Functionality supported by each OS type..........................................................................................................................................................5

Unisphere dashboards overview...........................................................................................................................................................................6

Understanding the system health score.............................................................................................................................................................8

Manage jobs............................................................................................................................................................................................................. 10

Server alerts.............................................................................................................................................................................................................10

Understanding settings......................................................................................................................................................................................... 10

Understanding licenses.......................................................................................................................................................................................... 11

Understanding user authorization....................................................................................................................................................................... 11

Individual and group roles...................................................................................................................................................................................... 11

Roles...........................................................................................................................................................................................................................12

User IDs..................................................................................................................................................................................................................... 12

Roles and associated permissions.......................................................................................................................................................................13

RBAC roles for TimeFinder SnapVX local and remote replication actions............................................................................................... 16

RBAC roles for SRDF local and remote replication actions......................................................................................................................... 16

Understanding access controls for volumes.................................................................................................................................................... 17

Storage Management.............................................................................................................................................................................................17

Understanding storage provisioning...................................................................................................................................................................18

Understanding storage groups........................................................................................................................................................................... 20

Understanding data reduction............................................................................................................................................................................ 20

Understanding service levels.............................................................................................................................................................................. 20

Suitability Check restrictions............................................................................................................................................................................... 21

Understanding storage templates...................................................................................................................................................................... 21

Understanding Storage Resource Pools........................................................................................................................................................... 21

Understanding volumes.........................................................................................................................................................................................21

Understanding Federated Tiered Storage .......................................................................................................................................................21

Understanding FAST ............................................................................................................................................................................................ 22

Understanding Workload Planner.......................................................................................................................................................................22

Understanding time windows..............................................................................................................................................................................22

Understanding FAST.X......................................................................................................................................................................................... 23

Overview of external LUN virtualization.......................................................................................................................................................... 23

Understanding tiers............................................................................................................................................................................................... 24

Understanding thin pools..................................................................................................................................................................................... 24

Understanding disk groups.................................................................................................................................................................................. 24

Understanding Virtual LUN Migration...............................................................................................................................................................25

Understanding vVols............................................................................................................................................................................................. 25

Host Management................................................................................................................................................................................................. 25

Understanding hosts............................................................................................................................................................................................. 26

Understanding masking views............................................................................................................................................................................ 26

Understanding port groups..................................................................................................................................................................................26

Understanding initiators....................................................................................................................................................................................... 26

Understanding PowerPath hosts....................................................................................................................................................................... 26

Understanding mainframe management.......................................................................................................................................................... 26

Data protection management............................................................................................................................................................................. 27

Manage remote replication sessions................................................................................................................................................................. 28

Understanding Snapshot policy..........................................................................................................................................................................28

Understanding SRDF/Metro Smart DR............................................................................................................................................................30

Understanding non-disruptive migration..........................................................................................................................................................30

Understanding Virtual Witness ...........................................................................................................................................................................31

Understanding SRDF Delta Set Extension (DSE) pools...............................................................................................................................32

Understanding TimeFinder/Snap operations.................................................................................................................................................. 32

Understanding Open Replicator......................................................................................................................................................................... 32

Open Replicator session options........................................................................................................................................................................ 33

Understanding device groups............................................................................................................................................................................. 34

Understanding SRDF groups...............................................................................................................................................................................34

SRDF session modes............................................................................................................................................................................................. 35

SRDF session options........................................................................................................................................................................................... 35

SRDF/A control actions....................................................................................................................................................................................... 38

SRDF group modes................................................................................................................................................................................................38

SRDF group SRDF/A flags.................................................................................................................................................................................. 39

Understanding TimeFinder/Clone operations.................................................................................................................................................39

Understanding TimeFinder/Mirror sessions....................................................................................................................................................40

Understanding TimeFinder SnapVX.................................................................................................................................................................. 40

Understanding RecoverPoint.............................................................................................................................................................................. 40

Understanding Performance Management..................................................................................................................................................... 40

Database Storage Analyzer (DSA) Management............................................................................................................................................41

Understanding Unisphere support for VMware.............................................................................................................................................. 41

Understanding eNAS..............................................................................................................................................................................................41

Understanding iSCSI..............................................................................................................................................................................................42

Understanding Cloud Mobility for Dell EMC PowerMax.............................................................................................................................. 42

Understanding dynamic cache partitioning..................................................................................................................................................... 42


Overview of Unisphere

Unisphere enables the user to configure and manage PowerMax, VMAX All Flash, and VMAX storage systems.

Unisphere is an HTML5 web-based application that enables you to configure and manage PowerMax, VMAX All Flash, and VMAX storage systems. The term Unisphere incorporates "Unisphere for PowerMax" for the management of PowerMax and All Flash storage systems running PowerMaxOS 5978, and "Unisphere for VMAX" for the management of VMAX All Flash and VMAX storage systems running HYPERMAX OS 5977 and Enginuity OS 5876.

Blog posts and videos on Unisphere functionality are available online.

The side panel has the following items when the All Systems view is selected:

HOME: View the system view dashboard of all storage systems being managed.
PERFORMANCE: Monitor and manage storage system performance data (Dashboards, Charts, Analyze, Heatmap, Reports, Plan, Real-Time traces, and Performance Database management). Refer to Understanding Performance Management on page 40 for more information.
VMWARE: View all the relevant storage-related objects at an ESXi server and troubleshoot storage performance-related issues at the ESXi server. Refer to Understanding Unisphere support for VMware on page 41 for more information.
DATABASES: Monitor and troubleshoot database performance issues. Refer to Database Storage Analyzer (DSA) Management on page 41 for more information.
EVENTS: Includes Alerts and Job List.
SUPPORT: Displays support information.

You can set preferences by clicking the settings icon.

You can hide the side panel by clicking the panel toggle, and display it again by clicking the toggle again.

You can return to the All Systems view by clicking HOME.

The side panel has the following items when the storage system-specific view is selected:

HOME: View the system view dashboard of all storage systems being managed.
DASHBOARD: View the following dashboards for a selected storage system: Capacity and Performance, System Health, Storage Group Compliance, Capacity, and Replication.
STORAGE: Manage storage (storage groups, service levels, templates, Storage Resource Pools, volumes, external storage, vVols, FAST policies, tiers, thin pools, disk groups, and VLUN migration). Refer to Storage Management on page 17 for more information.
HOSTS: Manage hosts (hosts, masking views, port groups, initiators, XtremSW Cache Adapters, PowerPath hosts, mainframe, and CU images). Refer to Host Management on page 25 for more information.
DATA PROTECTION: Manage data protection (storage groups, device groups, SRDF groups, migrations, virtual witness, Snapshot Policies, MetroDR, Open Replicator, SRDF/A DSE pools, TimeFinder SnapVX pools, and RecoverPoint systems). Refer to Data protection management on page 27 for more information.
PERFORMANCE: Monitor and manage storage system performance data (Dashboards, Charts, Analyze, Heatmap, Reports, Plan, Real-Time traces, and Performance Database management). Refer to Understanding Performance Management on page 40 for more information.
SYSTEM: Includes Hardware, Properties, File (eNAS), Cloud, and iSCSI.
EVENTS: Includes Alerts, Job List, and Audit log.
SUPPORT: Displays support information.

The following options are available from the title bar:

Discover systems.
Refresh system information.
Search for objects.
View newly added features.
View and manage alerts.
View and manage jobs.
View online help.
Exit the console.

A Unisphere Representational State Transfer (REST) API is also available. The API enables you to access diagnostic, performance and configuration data, and also enables you to perform provisioning operations on the storage system.

Supporting documentation


Perform the following steps to access REST API documentation:

Point the browser to: https://{UNIVMAX_IP}:{UNIVMAX_PORT}/univmax/restapi/docs where UNIVMAX_IP is the IP address and UNIVMAX_PORT is the port of the host running Unisphere.

Copy the .zip file (restapi-docs.zip) locally, extract the file, and go to target/docs/index.html. To access the documented resources, open the index.html file.
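The documentation URL pattern above can be sketched as a small helper. This is an illustrative snippet, not part of the product; the function name and the example host values are placeholders.

```python
def restapi_docs_url(univmax_ip: str, univmax_port: int) -> str:
    """Build the Unisphere REST API documentation URL from the host IP and port."""
    return f"https://{univmax_ip}:{univmax_port}/univmax/restapi/docs"

# Example with placeholder values:
print(restapi_docs_url("10.0.0.1", 8443))
# https://10.0.0.1:8443/univmax/restapi/docs
```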

Information on the installation of Unisphere for PowerMax can be found in the Unisphere for PowerMax Installation Guide at the Dell EMC support website or the technical documentation page.

For information specific to this Unisphere product release, see the Unisphere for PowerMax Release Notes at the Dell EMC support website.

Your comments

Your suggestions help to improve the accuracy, organization, and overall quality of the user publications. Send your feedback using the Dell EMC content feedback link.

Capacity information

Unisphere supports measurement of capacity using both the base 2 (binary) and base 10 (decimal) systems.

Storage capacity can be measured using two different systems: base 2 (binary) and base 10 (decimal). Standards such as the International System of Units (SI) recommend using base 10 measurements to describe storage capacity. In base 10 notation, one MB is equal to 1 million bytes, and one GB is equal to 1 billion bytes.

Operating systems generally measure storage capacity using the base 2 measurement system. Unisphere and Solutions Enabler use the base 2 measurement system to display storage capacity with the TB notation as it is more universally understood. In base 2 notation, one MB is equal to 1,048,576 bytes and one GB is equal to 1,073,741,824 bytes.

Name      Abbreviation   Binary Power   Binary Value (in Decimal)   Decimal Power   Decimal Equivalent
kilobyte  KB             2^10           1,024                       10^3            1,000
megabyte  MB             2^20           1,048,576                   10^6            1,000,000
gigabyte  GB             2^30           1,073,741,824               10^9            1,000,000,000
terabyte  TB             2^40           1,099,511,627,776           10^12           1,000,000,000,000
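The difference between the two measurement systems can be sketched with a small conversion helper; the function and table names are illustrative.

```python
# Unit multipliers for the base 2 (binary) and base 10 (decimal) systems.
BINARY_UNITS = {"KB": 2**10, "MB": 2**20, "GB": 2**30, "TB": 2**40}
DECIMAL_UNITS = {"KB": 10**3, "MB": 10**6, "GB": 10**9, "TB": 10**12}

def to_unit(num_bytes: int, unit: str, binary: bool = True) -> float:
    """Convert a byte count to the given unit, using base 2 or base 10."""
    table = BINARY_UNITS if binary else DECIMAL_UNITS
    return num_bytes / table[unit]

# One binary GB holds about 7.4% more bytes than one decimal GB:
print(to_unit(1_073_741_824, "GB"))                # 1.0
print(to_unit(1_073_741_824, "GB", binary=False))  # about 1.0737
```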

Login authentication

Unisphere authenticates users attempting to access the system.

When you log in, Unisphere checks the following locations for validation:

Windows: The user has a Windows account on the server. (Log in to Unisphere with your Windows Domain\Username and Password.)

LDAP-SSL: The user account is stored on an LDAP-SSL server. (Log in to Unisphere with your LDAP-SSL Username and Password.)

The Unisphere Administrator or SecurityAdmin must set the LDAP-SSL server location in the LDAP-SSL Configuration dialog box.

Local: The user has a local Unisphere account. Local user accounts are stored locally on the Unisphere server host. (Log in to Unisphere with your Username and Password.)

User names are case-sensitive and allow alphanumeric characters of either case, an underscore, a dash, or a period:

a-z A-Z 0-9 _ . -

Passwords cannot exceed 16 characters. There are no restrictions on special characters when using passwords.
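The character and length rules above can be captured in a small validator. This is a sketch, not part of the product; the function names are illustrative, and the non-empty password check is an assumption the guide does not state.

```python
import re

# Usernames allow letters of either case, digits, underscore, period, and dash.
_USERNAME_RE = re.compile(r"^[A-Za-z0-9_.\-]+$")

def valid_username(name: str) -> bool:
    """Check a username against the allowed character set."""
    return bool(_USERNAME_RE.match(name))

def valid_password(password: str) -> bool:
    """Passwords may not exceed 16 characters; special characters are unrestricted.

    Requiring at least one character is an assumption, not a documented rule.
    """
    return 0 < len(password) <= 16
```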

The Initial Setup User, Administrator, or SecurityAdmin must create a local Unisphere user account for each user.


Functionality supported by each OS type

Unisphere enables the user to configure and manage PowerMax, VMAX All Flash, and VMAX storage systems.

Unisphere is an HTML5 web-based application that allows you to configure and manage PowerMax, VMAX All Flash, and VMAX storage systems. The term Unisphere incorporates "Unisphere for PowerMax" for the management of PowerMax and All Flash storage systems running PowerMaxOS 5978, and "Unisphere for VMAX" for the management of VMAX All Flash and VMAX storage systems running HYPERMAX OS 5977 and Enginuity OS 5876. See Overview of Unisphere on page 3 for an overview of Unisphere's applications and functionality.

Table 1. Functionality supported by each OS type

Functionality (support is indicated per OS: Enginuity OS 5876, HYPERMAX OS 5977, PowerMaxOS 5978):

Storage > Storage Groups
Storage > Service Levels
Storage > Templates
Storage > Storage Resource Pools
Storage > Volumes
Storage > External Storage
Storage > vVol Dashboard
Storage > FAST Policies
Storage > Tiers
Storage > Thin Pools
Storage > Disk Groups
Storage > VLUN Migration
Hosts > Hosts
Hosts > Masking Views
Hosts > Port Groups
Hosts > Initiators
Hosts > Mainframe
Hosts > PowerPath
Hosts > XtremSW Cache Adapters
Data Protection > Snapshot Policies (systems running PowerMaxOS 5978 Q3 2020)
Data Protection > MetroDR (systems running PowerMaxOS 5978 Q3 2020)
Data Protection > SRDF Groups
Data Protection > Migrations
Data Protection > Virtual Witness
Data Protection > Open Replicator
Data Protection > Device Groups
Data Protection > SRDF/A DSE Pools
Data Protection > TimeFinder Snap Pools
Data Protection > RecoverPoint Systems
System > Hardware
System > System Properties
System > iSCSI
System > File
System > Cloud (systems running PowerMaxOS 5978 Q3 2020)

Unisphere dashboards overview

Unisphere dashboards display overall status information.

Home dashboard view for all storage systems


The home dashboard view for all storage systems (the default view after logging in) provides an overall view of the status of the storage systems that Unisphere manages in terms of the following:

Compliance: Service level compliance data in the form of storage group counts for each compliance state (Critical, Marginal, Stable), total storage group count, and the number of storage groups with no service level assigned.

Capacity: Percentage of allocated capacity for the storage system.

Health score: An overall health score based on the lowest health score out of the five metrics (see Understanding the system health score on page 8 for more information).

Throughput: Current throughput for the system, in MB per second.

IOPS: Current IOPS for the system.

Efficiency: The overall efficiency ratio for the array. It represents the ratio of the sum of all TDEVs plus snapshot sizes (calculated based on the 128K track size) to the physical used storage (calculated based on the compressed pool track size). The ratio is displayed using the k notation to represent 1,000; for example, 361900:1 is displayed as 361.9k:1.

CloudIQ: If CLOUDIQ has been selected and the Secure Remote Services (SRS) gateway has not been registered, an option to register is displayed on each storage system card. Clicking REGISTER enables you to register the SRS gateway. If the SRS gateway has already been registered, the enabled or disabled data collection status is displayed within each storage system card. Each card also displays the last time data was sent to CloudIQ for that storage system. Clicking the Enabled/Disabled link opens the Settings dialog on the CloudIQ tab, where you can enable or disable data collection on the storage system.
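The k notation described above can be sketched as a small formatting helper; the function name is illustrative and not part of the product.

```python
def format_efficiency_ratio(ratio: float) -> str:
    """Display an efficiency ratio, using k notation for values of 1,000 or more."""
    if ratio >= 1000:
        return f"{ratio / 1000:.1f}k:1"
    return f"{ratio:.1f}:1"

# The guide's own example: 361900:1 is displayed as 361.9k:1.
print(format_efficiency_ratio(361900))  # 361.9k:1
print(format_efficiency_ratio(3.2))     # 3.2:1
```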

Unisphere release 9.1 enables you to tag arrays and storage groups for PowerMaxOS 5978 storage systems running release 9.1. You can add a tag to an object for identification or to give other information. An association between objects can be inferred by adding the same tag to each object. You can search by tag to retrieve related objects. Tags can be added to and removed from objects. When a tag is no longer associated with an object, it no longer exists.

Home dashboard view for a specific storage system

The home dashboard view for a specific storage system provides a view of the status of a specific storage system. The default view is the view of Performance and Capacity information. The following panels are displayed:

SYSTEM HEALTH, SG COMPLIANCE, CAPACITY, REPLICATION

System Health dashboard view for a specific storage system

The System Health dashboard provides a single place from which you can quickly determine the health of the system. You can also access hardware information.

The System Health section displays values for the following five high-level health or performance metrics: System Utilization, Configuration, Capacity, SG Response Time, and Service Level Compliance. It also displays an overall health score, which is based on the lowest score out of these five categories. See Understanding the system health score on page 8 for details on how these scores are calculated. These five categories apply to systems running HYPERMAX OS 5977 or later. For systems running Enginuity 5876, the health score is based on the Hardware, Configuration, Capacity, and SG Response Time scores. The health score is calculated every five minutes.

NOTE: The health score values for Hardware, SG Response Time, and service level compliance are not real time; they are based on values within the last hour.

The Hardware section shows the director count for Front End, Back End, and SRDF Directors and the available port count on the system. An alert status is indicated through a colored bell beside the title of the highest level alert in that category. If no alerts are present, then a green tick is displayed.

Replication dashboard view for a specific storage system

The Replication Dashboard provides storage group summary protection information, summarizing the worst states of various replication technologies and counts of management objects participating in these technologies. For systems running HYPERMAX OS 5977 and higher, summary information for SRDF, SRDF/Metro, and SnapVX (including zDP snapshots) is displayed. For systems running Enginuity OS 5876, summary information for SRDF and device groups is displayed.

The Replication Dashboard has an SRDF topology view that visually describes the layout of the SRDF connectivity of the selected storage system in Unisphere.

The Replication Dashboard provides a Migrations Environments topology view that visually describes the layout of the migration environments of the selected storage system.

Storage Group Compliance dashboard view for a specific storage system


The Storage Group Compliance dashboard displays how well the workload of the storage system is complying with the overall service level. Storage group compliance information displays for storage systems that are registered with the Performance component. The total number of storage groups is listed, along with information about the number of storage groups performing in accordance with service level targets. A list view of the storage groups is also provided and this can be filtered.

Capacity dashboard view for a specific storage system

The storage system capacity dashboard enables you to see the amount of capacity your storage system is subscribed for, and the amount of that subscribed capacity that has been allocated. You can also see how efficient the storage system is in using data reduction technologies.

The SRP capacity dashboard reports the capacity and efficiency breakdown of an SRP. For PowerMaxOS 5978 storage systems running Unisphere 9.1, FBA and CKD devices can be configured in a single SRP. This reduces the cost of storage array ownership for a mixed system and enables the efficient management of drive slot consumption in the array. Where the SRP is of mixed emulation, you can select by emulation to examine the breakdown.

Performance and Capacity dashboard view for a specific storage system

The performance and capacity dashboard for a specific storage system provides a view of key performance and capacity indicators.

A Capacity panel displays the following:

Subscribed Capacity: A graphical representation of the subscribed capacity of the system (used = blue, free = gray) and the percentage used.

Usable Capacity: A graphical representation of the usable capacity of the system (used = blue, free = gray) and the percentage used.

Subscribed Usable Capacity: The percentage of subscribed usable capacity.

Overall Efficiency: The overall efficiency ratio.

Trend: A panel displaying usable capacity and subscribed capacity in terabytes.

A Performance panel displays the following graphs over a four-hour, one-week, or two-week period:

Host IOs/sec in terms of read and write operations over time. Latency in terms of read and write operations over time. Throughput in terms of read and write operations over time.

To the right of each graph, a list of the top five active storage groups for that graph is displayed. Zooming in to a timeframe on a graph automatically updates the top five storage groups lists for that timeframe. Clicking a particular point in time on one graph automatically updates the top five storage group lists for that particular time.

Understanding the system health score

The System Health dashboard provides a single place from which you can quickly determine the health of the system.

The System Health panel displays values for the following high-level health or performance metrics: Configuration, Capacity, System Utilization, Storage Group (SG) response time, and Service Level compliance. It also displays an overall health score that is based on the lowest health score out of the five metrics. These five categories are for storage systems running HYPERMAX OS 5977 or PowerMaxOS 5978. For storage systems running Enginuity OS 5876, the health score is based on four categories: Configuration, Capacity, System Utilization, and Storage Group (SG) response time. The health score is calculated every five minutes. The overall value is always calculated from all metric values. If a health score category is seen as stale or unknown, then the overall health score is not updated. The previously calculated overall health score is displayed but its value is denoted as stale by setting the menu item to grey.

The System Utilization, Capacity, Storage Group response time, and Service Level compliance are based on performance information.

The Configuration health score is calculated every five minutes and is based on the director and port alerts in the system at the time of calculation. Unisphere does not support alert correlation or auto-clearing, so you must manually delete alerts that have been dealt with or are no longer relevant, as they continue to lower the Configuration health score until they are removed from Unisphere.

The Configuration health score is calculated as follows:

Director out of service - 40 points
Director Offline - 20 points
Port Offline - 10 points
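The deduction scheme above can be sketched as follows. This is a hypothetical illustration of the listed point values; the starting value of 100 and the clamp at zero are assumptions, not stated in the guide:

```python
# Hypothetical sketch of the Configuration health score deductions listed
# above: start from 100 and subtract points for each open director/port alert.
DEDUCTIONS = {
    "director_out_of_service": 40,
    "director_offline": 20,
    "port_offline": 10,
}

def configuration_health_score(open_alerts):
    """open_alerts: iterable of condition names, one entry per open alert."""
    score = 100
    for condition in open_alerts:
        score -= DEDUCTIONS.get(condition, 0)
    return max(score, 0)  # assumed clamp; a floor is not stated in the guide

print(configuration_health_score(["director_offline", "port_offline"]))  # 70
```

Because undeleted alerts stay in the list, this also shows why stale alerts keep depressing the score until they are removed.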

The Capacity health score is based on % Effective Used Capacity. For storage systems running HYPERMAX OS 5977, capacity levels are checked at the Storage Resource Pool (SRP) level. For storage systems running PowerMaxOS 5978, capacity levels are checked at the SRP level or SRP Emulation level (where a mixed SRP emulation is involved). For storage systems running Enginuity OS 5876, capacity levels are checked at thin pool level.

The capacity health scores are calculated as follows:

Fatal level - based on the threshold defined in the System Threshold and Alerts dialog (default 100%) - 30 points

Critical level - based on the threshold defined in the System Threshold and Alerts dialog (default 80%) - 20 points
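A minimal sketch of the capacity deduction, under stated assumptions: the guide does not say whether the fatal and critical deductions stack, so this hypothetical version applies only the larger one:

```python
# Hypothetical sketch of the Capacity health score: deduct points when the
# % Effective Used Capacity crosses the critical or fatal threshold.
# Assumption: the deductions do not stack; only the larger applies.
def capacity_health_score(pct_effective_used, fatal=100, critical=80):
    score = 100
    if pct_effective_used >= fatal:
        score -= 30
    elif pct_effective_used >= critical:
        score -= 20
    return score

print(capacity_health_score(85))   # 80: above the default critical threshold
print(capacity_health_score(100))  # 70: at the default fatal threshold
```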

The System Utilization health score is calculated using the threshold limits of the following categories and metrics:

FE Director: % busy, queue depth utilization
FE port: % busy
BE port: % busy
BE Director (DA): % busy
SRDF port: % busy
SRDF Director: % busy
DX port: % busy
External Director: % busy
EDS Director: % busy
Cache Partition: %WP utilization

For each instance and metric in a particular category, the threshold information is looked up. If no threshold is set, the default thresholds are used. The default thresholds are:

FE Port: % busy - Critical 70, Warning 50

FE Director: % busy - Critical 70, Warning 50; Queue Depth Utilization - Critical 75, Warning 60

BE Port: % busy - Critical 70, Warning 55

BE Director (DA): % busy - Critical 70, Warning 55

SRDF Port: % busy - Critical 70, Warning 50

SRDF Director: % busy - Critical 70, Warning 50

DX Port: - % busy - Critical 70, Warning 55

External Director: % busy- Critical 70, Warning 55

EDS Director: % busy - Critical 70, Warning 55

Cache Partition: %WP utilization - Critical 75, Warning 55

The system utilization score is calculated as follows:

Critical level - five points
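The threshold-lookup-with-fallback pattern above can be sketched as follows. This is a hypothetical illustration: only a few of the default thresholds listed above are included, and the per-breach five-point deduction is applied at or above the critical value:

```python
# Hypothetical sketch of the System Utilization deduction: for each instance
# and metric, use the configured threshold if one is set, otherwise the
# default; a value at or above the critical threshold deducts five points.
DEFAULT_CRITICAL = {
    ("FE Director", "% busy"): 70,
    ("FE Director", "Queue Depth Utilization"): 75,
    ("FE Port", "% busy"): 70,
    ("Cache Partition", "%WP utilization"): 75,
    # ... remaining categories follow the defaults listed above
}

def utilization_deduction(category, metric, value, configured=None):
    critical = (configured or {}).get((category, metric)) \
        or DEFAULT_CRITICAL[(category, metric)]
    return 5 if value >= critical else 0

print(utilization_deduction("FE Port", "% busy", 72))  # 5
```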

The Storage Group Response health score is based on software category health scores. Certain key metrics are examined against threshold values and if they exceed a certain threshold, the health score is negatively affected.

The storage group response score is based on the following:

Storage Group: Read Response Time, Write Response Time, Response Time
Database: Read Response Time, Write Response Time, Response Time

For each instance and metric in a particular category, the threshold information is looked up. If none is found, the default thresholds are used.

The storage group response score is calculated as follows:

Read Response Time: Critical - five points
Write Response Time: Critical - five points
Response Time: Critical - five points

Storage systems running HYPERMAX OS 5977 or PowerMaxOS 5978: The Service Level Compliance health score is based on the Workload Planner (WLP) workload state. The health score is reduced when storage groups that have a defined service level are not meeting the service level requirements.

The Service Level compliance score is calculated as follows:


Underperforming - five points

Manage jobs

Certain configuration tasks performed on a storage system may not be immediately processed, but instead are kept in a job list for review and submission in batches.

About this task

One way to identify these tasks is from the dialog boxes. They have a button that is named Add to Job List.

Unisphere includes a job list view, from which you can view and manage the job list for a storage system.

Server alerts

Server alerts are alerts that are generated by Unisphere itself.

Unisphere generates server alerts under the conditions that are listed in the following table:

Checks are run at 10-minute intervals, and alerts are raised at 24-hour intervals from the time the server was last started. These time intervals also apply to discover operations; that is, performing a discover operation does not force the delivery of these alerts.

NOTE: Runtime alerts are not storage system-specific. They can be deleted if the user has admin or storage admin rights on at least one storage system. A user with a monitor role cannot delete the server alerts.

Server alert: Total memory on the Unisphere server
Threshold: 12 GB (0-64,000 volumes), 16 GB (64,000-128,000 volumes), 20 GB (128,000-256,000 volumes)
Alert details: System memory <# GB> is below the minimum requirement of <# GB>

Server alert: Free disk space on the Unisphere installed directory
Threshold: 100 GB (0-64,000 volumes), 140 GB (64,000-128,000 volumes), 180 GB (128,000-256,000 volumes)
Alert details: Free disk space <# GB> is below the minimum requirement of <# GB>

Server alert: Number of managed storage systems
Threshold: 20
Alert details: Number of managed arrays <#> is over the maximum supported number of <#>

Server alert: Number of managed volumes
Threshold: 256,000
Alert details: Number of managed volumes <#> is over the maximum supported number of <#>. Solutions Enabler may indicate a slightly different number of volumes than indicated in this alert.

Server alert: Number of gatekeepers
Threshold: 6
Alert details: Number of gatekeepers <#> on storage system is below the minimum requirement of 6.
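The volume-count-scaled memory and disk checks above can be sketched as a simple lookup. This is a hypothetical illustration; the table's bands share boundary values, so the inclusive-upper-bound handling here is an assumption, as is the alert message formatting:

```python
# Hypothetical sketch of the memory/disk-space checks in the table above:
# the required minimums scale with the number of managed volumes.
def minimum_requirements(volume_count):
    """Returns (min_memory_gb, min_free_disk_gb) for a managed-volume count."""
    if volume_count <= 64_000:       # assumed inclusive upper bounds
        return 12, 100
    if volume_count <= 128_000:
        return 16, 140
    if volume_count <= 256_000:
        return 20, 180
    raise ValueError("over the maximum supported number of managed volumes")

def check_memory(actual_gb, volume_count):
    """Returns the alert text when memory is under the minimum, else None."""
    required = minimum_requirements(volume_count)[0]
    if actual_gb < required:
        return (f"System memory {actual_gb} GB is below the "
                f"minimum requirement of {required} GB")
    return None

print(check_memory(8, 50_000))  # the 12 GB alert text for this band
```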

Understanding settings

System settings are managed from a central point.

The following categories of settings can be modified:

Preferences - General and Performance settings
System and Licenses - License Usage, Solutions Enabler, and System Entitlements settings
Users and Groups - Authentication, Local Users, User Sessions, and Authorized Users and Groups settings
System Access Control - Access Control entries, Access groups, and Access Pools settings
Management - System Attributes, Link and Launch, Secure Remote Services, and CloudIQ settings
Data Protection - Data Protection settings
Performance - System Registrations, Dashboard Catalog, Real-Time Traces, Metrics, and Export PV settings
Unisphere Databases - Performance Databases and System Database settings
DSA Environment - Database Storage Analyzer (DSA) settings
Alerts - Alerts Policies, Compliance Alert Policies, Performance Thresholds and Alerts, System Thresholds and Alerts, and Notifications settings

Unisphere 9.1 provides the ability to save specific settings on one array so that these settings can be applied to other arrays of the same family and PowerMax version. Settings can be cloned, imported, and exported.

Understanding licenses

Unisphere supports electronic licensing (eLicensing). eLicensing is a license management solution to help you track and comply with software license entitlement.

eLicensing uses embedded locking functions and back-office IT systems and processes. It provides you with better visibility into software assets, easier upgrade and capacity planning, and reduced risk of non-compliance, while still adhering to a strict do-no-harm policy toward your operations.

When installing licenses with eLicensing, you obtain license files from customer service, copy them to a Solutions Enabler or a Unisphere host, and load them onto storage systems.

Each license file fully defines the entitlements for a specific system, including its activation type (Individual or Enterprise), the licensed capacity, and the date the license was created. If you want to add a product title or increase the licensed capacity of an entitlement, obtain a new license file from online support and load it onto the storage system.

When managing licenses, Solutions Enabler, Unisphere, z/OS Storage Manager (EzSM), MF SCF native command line, TPF, and the IBM i platform console provide detailed usage reports that enable you to better manage capacity and compliance planning.

There are two types of eLicenses: host-based and array-based. Host-based licenses, as the name implies, are installed on the host; array-based licenses are installed on the storage system. For information about the types of licenses and the features they activate, see the Solutions Enabler Installation Guide.

Unisphere enables you to add and view array-based licenses, and add, view, and remove host-based licenses.

Unisphere uses array-based eLicensing.

NOTE: For more information about eLicensing, see the Solutions Enabler Installation Guide.

Understanding user authorization

User authorization is a tool for restricting the management operations users can perform on a storage system or with the Database Storage Analyzer application.

By default, user authorization is enabled for Unisphere users, regardless of whether it is enabled on the storage system.

When configuring user authorization, an Administrator or SecurityAdmin maps individual users or groups of users to specific roles on storage systems or Database Storage Analyzer, which determine the operations the users can perform. These user-to-role-to-storage system/Database Storage Analyzer mappings (known as authorization rules) are maintained in the symauth users list file, which is located on either a host or storage system, depending on the storage operating environment.

NOTE: If one or more users are listed in the symauth file, users not listed in the file are unable to access or even see storage systems from the Unisphere console.

Individual and group roles

Users gain access to a storage system or component either directly through a role assignment or indirectly through membership in a user group that has a role assignment.

If a user has two different role assignments (one as an individual and one as a member of a group), the permissions that are assigned to the user are combined. For example, if a user is assigned a Monitor role and a StorageAdmin role through a group, the user is granted Monitor and StorageAdmin rights.
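The combining rule is a straightforward union of rights. A minimal sketch, using role names as plain strings (the representation is hypothetical, not a Unisphere API):

```python
# Minimal sketch of the combined-permissions rule above: rights granted
# directly to a user and rights inherited from group membership are unioned.
def effective_roles(individual_roles, group_roles):
    return set(individual_roles) | set(group_roles)

combined = effective_roles({"Monitor"}, {"StorageAdmin"})
print(sorted(combined))  # ['Monitor', 'StorageAdmin']
```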

11

Roles

A Unisphere user can assume a number of roles. Each role has an associated set of tasks and permissions.

The following lists the available roles. Note that you can assign up to four of these roles per authorization rule. For a more detailed look at the permissions that go along with each role, see Roles and associated permissions on page 13.

None - Provides no permissions.
Monitor - Performs read-only (passive) operations on a storage system, excluding the ability to read the audit log or access control definitions.
StorageAdmin - Performs all management (active or control) operations on a storage system and modifies GNS group definitions, in addition to all Monitor operations.
Administrator - Performs all operations on a storage system, including security operations, in addition to all StorageAdmin and Monitor operations.
SecurityAdmin - Performs security operations on a storage system, in addition to all Monitor operations.
Auditor - Grants the ability to view, but not modify, security settings for a storage system (including reading the audit log, symacl list, and symauth), in addition to all Monitor operations. This is the minimum role required to view the storage system audit log.
DSA Admin - Collects and analyzes database activity with Database Storage Analyzer.

A user cannot change their own role so as to remove Administrator or SecurityAdmin privileges from themselves.

Local Replication - Performs local replication operations (SnapVX or legacy Snapshot, Clone, BCV). To create Secure SnapVX snapshots, a user needs Storage Admin rights at the array level. This role also automatically includes Monitor rights.
Remote Replication - Performs remote replication (SRDF) operations involving devices and pairs. Users can create, operate upon, or delete SRDF device pairs but cannot create, modify, or delete SRDF groups. This role also automatically includes Monitor rights.
Device Management - Grants user rights to perform control and configuration operations on devices. Note that Storage Admin rights are required to create, expand, or delete devices. This role also automatically includes Monitor rights.

In addition to these user roles, Unisphere includes an administrative role, the Initial Setup User. This user, defined during installation, is a temporary role that provides administrator-like permissions for the purpose of adding local users and roles to Unisphere.

User IDs

Users and user groups are mapped to their respective roles by IDs. These IDs consist of a three-part string in the form:

Type:Domain\Name

Where:

Type - Specifies the type of security authority that is used to authenticate the user or group. Possible types are:

L - Indicates a user or group that LDAP authenticates. In this case, Domain specifies the domain controller on the LDAP server. For example, L:danube.com\Finance indicates that user group Finance logged in through the domain controller danube.com.

C - Indicates a user or group that the Unisphere server authenticates. For example, C:Boston\Legal indicates that user group Legal logged in through Unisphere server Boston.

H - Indicates a user or group that is authenticated by logging in to a local account on a Windows host. In this case, Domain specifies the hostname. For example, H:jupiter\mason indicates that user mason logged in on host jupiter.

D - Indicates a user or group that is authenticated by a Windows domain. In this case, Domain specifies the domain or realm name. For example, D:sales\putman indicates that user putman logged in through the Windows domain sales.

Name - Specifies the username relative to that authority. It cannot be longer than 32 characters, and spaces are allowed if delimited with quotes. Usernames can be for individual users or user groups.

Within role definitions, IDs can be either fully qualified (as shown above), partially qualified, or unqualified. When the Domain portion of the ID string is an asterisk (*), the asterisk is treated as a wildcard, meaning any host or domain.

When configuring group access, the Domain portion of the ID must be fully qualified.

For example:

D:ENG\jones - Fully qualified path with a domain and username (for individual domain users)
D:ENG.xyz.com\ExampleGroup - Fully qualified domain name and group name (for domain groups)
D:*\jones - Partially qualified; matches username jones in any domain
H:HOST\jones - Fully qualified path with a hostname and username
H:*\jones - Partially qualified; matches username jones on any host
jones - Unqualified username; matches any jones in any domain on any host

If a user is matched by more than one mapping, the user authorization mechanism uses the more specific mapping. If an exact match (for example, D:sales\putman) is found, that is used. If a partial match (for example, D:*\putman) is found, that is used. If an unqualified match (for example, putman) is found, that is used. Otherwise, the user is assigned a role of None.
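The exact-then-partial-then-unqualified precedence can be sketched as an ordered lookup. This is a simplified, hypothetical illustration: it only generates the wildcard-domain candidate for the user's own authority type, whereas real matching may consider more patterns:

```python
# Hypothetical sketch of the matching precedence above: an exact match wins,
# then a wildcard-domain (partial) match, then an unqualified username,
# otherwise the user is assigned the role None.
def resolve_role(user_id, rules):
    """user_id: fully qualified 'Type:Domain\\Name'; rules: ID pattern -> role."""
    authority, rest = user_id.split(":", 1)
    _domain, name = rest.split("\\", 1)
    for candidate in (
        user_id,                     # exact, e.g. D:sales\putman
        f"{authority}:*\\{name}",    # partial, e.g. D:*\putman
        name,                        # unqualified, e.g. putman
    ):
        if candidate in rules:
            return rules[candidate]
    return "None"

rules = {"D:sales\\putman": "StorageAdmin", "putman": "Monitor"}
print(resolve_role("D:sales\\putman", rules))  # StorageAdmin (exact match)
print(resolve_role("D:eng\\putman", rules))    # Monitor (unqualified match)
```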

Roles and associated permissions

Users gain access to a storage system or component either directly through a role assignment or indirectly through membership in a user group that has a role assignment.

The Role Based Access Control (RBAC) feature provides a method for restricting the management operations that individual users or groups of users may perform on storage systems. See https://www.youtube.com/watch?v=2V7KidifeA4 for more information.

The following diagram outlines the role hierarchy.

Roles are assigned as part of the user creation process.

The following tables detail the permissions associated with each role in Unisphere.

NOTE: The Unisphere Initial Setup User has all permissions on a storage system until an Administrator or SecurityAdmin is

added to the storage system.

The roles (and the acronyms used for the roles) in these tables are:

None - Provides no permissions.
Monitor (MO) - Performs read-only (passive) operations on a storage system, excluding the ability to read the audit log or access control definitions.
StorageAdmin (SA) - Performs all management (active or control) operations on a storage system and modifies GNS group definitions, in addition to all Monitor operations.
Administrator (AD) - Performs all operations on a storage system, including security operations, in addition to all StorageAdmin and Monitor operations.
SecurityAdmin (SecA) - Performs security operations on a storage system, in addition to all Monitor operations.
Auditor (AUD) - Grants the ability to view, but not modify, security settings for a storage system (including reading the audit log, symacl list, and symauth), in addition to all Monitor operations. This is the minimum role required to view the storage system audit log.
Performance Monitor (PM) - Includes Monitor role permissions and grants additional privileges within the performance component of the Unisphere application to set up various alerts and update thresholds to monitor array performance.
DSA - Collects and analyzes database activity with Database Storage Analyzer.

A user cannot change their own role so as to remove Administrator or SecurityAdmin privileges from themselves.

Local Replication - Performs local replication operations (SnapVX or legacy Snapshot, Clone, BCV). To create Secure SnapVX snapshots, a user needs Storage Admin rights at the array level. This role also automatically includes Monitor rights.
Remote Replication - Performs remote replication (SRDF) operations involving devices and pairs. Users can create, operate upon, or delete SRDF device pairs but cannot create, modify, or delete SRDF groups. This role also automatically includes Monitor rights.
Device Management - Grants user rights to perform control and configuration operations on devices. Note that Storage Admin rights are required to create, expand, or delete devices. This role also automatically includes Monitor rights.

NOTE: The RBAC roles for SRDF local and remote replication actions are outlined in RBAC roles for SRDF local and remote replication actions on page 16.

NOTE: The RBAC roles for TimeFinder SnapVX local and remote replication actions are outlined in RBAC roles for TimeFinder SnapVX local and remote replication actions on page 16.

Table 2. User roles and associated permissions

Permissions                                     AD   SA   MO   SecA                 AUD  None PM   DSA
Create/delete user accounts                     Yes  No   No   Yes                  No   No   No   No
Reset user password                             Yes  No   No   Yes                  No   No   No   No
Create roles                                    Yes  Yes  No   Yes (self-excluded)  No   No   No   No
Change own password                             Yes  Yes  Yes  Yes                  Yes  Yes  Yes  Yes
Manage storage systems                          Yes  Yes  No   No                   No   No   No   No
Discover storage systems                        Yes  No   No   Yes                  No   No   No   No
Add/show license keys                           Yes  Yes  No   No                   No   No   No   No
Set alerts and Optimizer monitoring options     Yes  Yes  No   No                   No   No   No   No
Release storage system locks                    Yes  Yes  No   No                   No   No   No   No
Set Access Controls                             Yes  Yes  No   No                   No   No   No   No
Set replication and reservation preferences     Yes  Yes  No   No                   No   No   No   No
View and export the storage system audit log    Yes  No   No   Yes                  Yes  No   No   No
Access performance data                         Yes  Yes  Yes  Yes                  Yes  No   Yes  No
Start data traces                               Yes  Yes  Yes  Yes                  Yes  No   Yes  No
Set performance thresholds/alerts               Yes  Yes  No   No                   No   No   Yes  No
Create and manage performance dashboards        Yes  Yes  Yes  Yes                  Yes  No   Yes  No
Collect and analyze database activity with DSA  No   No   No   No                   No   No   No   Yes

Table 3. Permissions for Local Replication, Remote Replication and Device Management roles

Permissions                                                                 Local Replication  Remote Replication  Device Management
Create/delete user accounts                                                 No                 No                  No
Reset user password                                                         No                 No                  No
Create roles                                                                No                 No                  No
Change own password                                                         Yes                Yes                 Yes
Manage storage systems                                                      No                 No                  No
Discover storage systems                                                    No                 No                  No
Add/show license keys                                                       No                 No                  No
Set alerts and Optimizer monitoring options                                 No                 No                  No
Release storage system locks                                                No                 No                  No
Set Access Controls                                                         No                 No                  No
Set replication and reservation preferences                                 No                 No                  No
View the storage system audit log                                           No                 No                  No
Access performance data                                                     Yes                Yes                 Yes
Start data traces                                                           Yes                Yes                 Yes
Set performance thresholds/alerts                                           No                 No                  No
Create and manage performance dashboards                                    Yes                Yes                 Yes
Collect and analyze database activity with Database Storage Analyzer        No                 No                  No
Perform control and configuration operations on devices                     No                 No                  Yes
Create, expand or delete devices                                            No                 No                  No
Perform local replication operations (SnapVX, legacy Snapshot, Clone, BCV)  Yes                No                  No
Create Secure SnapVX snapshots                                              No                 No                  No
Create, operate upon or delete SRDF device pairs                            No                 Yes                 No
Create, modify or delete SRDF groups                                        No                 No                  No

RBAC roles for TimeFinder SnapVX local and remote replication actions

A user needs to be assigned the necessary roles to perform TimeFinder SnapVX local and remote replication actions.

The following table details the roles that are needed to perform TimeFinder SnapVX local and remote replication actions:

NOTE: Unisphere for PowerMax does not support RBAC device group management.

Action                                      Local Replication  Remote Replication  Device Manager
Protection Wizard - Create SnapVX Snapshot  Yes (a)            -                   -
Create Snapshot                             Yes (a)            -                   -
Edit Snapshot                               Yes                -                   -
Link Snapshot                               Yes (b) (c)        -                   Yes (d)
Relink Snapshot                             Yes (b) (c)        -                   Yes (d)
Restore Snapshot                            Yes (b)            -                   Yes (b)
Set Time To Live                            Yes                -                   -
Set Mode                                    Yes (b)            -                   Yes (d)
Terminate Snapshot                          Yes                -                   -
Unlink Snapshot                             Yes (b)            -                   Yes (d)

(a) - Set Secure is blocked for users who only have Local_REP rights.
(b) - The user must have the specified rights on the source volumes.
(c) - The user may only choose existing storage groups to link to. Creating a storage group requires Storage Admin rights.
(d) - The user must have the specified rights on the link volumes.

RBAC roles for SRDF local and remote replication actions

A user must be assigned the necessary roles to perform SRDF local and remote replication actions.

The following table details the roles that can perform SRDF local and remote replication actions:

NOTE: Unisphere for PowerMax does not support RBAC device group management.


Action                Local Replication  Remote Replication  Device Manager
SRDF Delete           -                  Yes                 -
SRDF Establish        -                  Yes                 -
SRDF Failback         -                  Yes                 -
SRDF Failover         -                  Yes                 -
SRDF Invalidate       -                  Yes                 -
SRDF Move             -                  Yes                 -
SRDF Not Ready        -                  Yes                 -
SRDF R1 Update        -                  Yes                 -
SRDF Ready            -                  Yes                 -
SRDF Refresh          -                  Yes                 -
SRDF Restore          -                  Yes                 -
SRDF Resume           -                  Yes                 -
SRDF RW Disable R2    -                  Yes                 -
SRDF RW Enable        -                  Yes                 -
SRDF Set Bias         -                  Yes                 -
SRDF Set Consistency  -                  Yes                 -
SRDF Set Mode         -                  Yes                 -
SRDF Set SRDF/A       -                  Yes                 -
SRDF Split            -                  Yes                 -
SRDF Suspend          -                  Yes                 -
SRDF Swap             -                  Yes                 -
SRDF Write Disable    -                  Yes                 -

Understanding access controls for volumes

Access controls can be set on specific volumes within a storage system, and those volumes can be assigned to a specific host.

Administrators, StorageAdmins, and SecurityAdmins can set access controls on specific volumes within a storage system and assign those volumes to a specific host. When set, only that host can see the volumes and perform the granted operations. Other hosts that are connected to that storage system do not see those volumes. This behavior eliminates the possibility of one host inadvertently performing operations on volumes that belong to another host.

NOTE: Refer to the Solutions Enabler Array Management CLI Product Guide for more information about Access Controls.

Storage Management

Storage consists of the following: storage groups, service levels, templates, storage resource pools, volumes, external storage, vVols, FAST policies, tiers, thin pools, disk groups, and VLUN migration.

Storage Management covers the following areas:

Storage Group management - Storage groups are a collection of devices that are stored on the array, and an application, a server, or a collection of servers use them. Storage groups are used to present storage to hosts in masking/mapping, Virtual LUN Technology, FAST, and various base operations.

Service Level management - A service level is the response time target for a storage group. The service level sets the storage array with the required response time target for a storage group, and the array automatically monitors and adapts to the workload needed to maintain the response time target. The service level includes an optional workload type so it can be optimized to meet performance levels.

Template management - Using the configuration and performance characteristics of an existing storage group as a starting point, you can create templates that will pre-populate fields in the provisioning wizard and create a more realistic performance reservation in your future provisioning requests.

Storage Resource Pool management - Fully Automated Storage Tiering (FAST) provides automated management of storage array disk resources to achieve expected service levels. FAST automatically configures disk groups to form a Storage Resource Pool (SRP) by creating thin pools according to each individual disk technology, capacity, and RAID type.

Volume management - A storage volume is an identifiable unit of data storage. Storage groups are sets of volumes.

External Storage management - External Fully Automated Storage Tiering (FAST.X) attaches external storage to storage systems and directs workload movement to these external arrays, while retaining access to array features such as local replication, remote replication, storage tiering, data management, and data migration. It also simplifies multi-vendor or Dell EMC storage array management.

vVol management - VMware vVols enable data replication, snapshots, and encryption to be controlled at the VMDK level instead of the LUN level, where these data services are performed on a per VM (application level) basis from the storage array.

FAST Policies management - A FAST policy consists of one to three DP tiers, or one to four VP tiers, but not a combination of both DP and VP tiers. Policies define a limit for each tier in the policy. This limit determines the amount of data from a storage group that is associated with the policy that can reside on the tier.

Tiers management - FAST automatically moves active data to high-performance storage tiers and inactive data to low-cost, high-capacity storage tiers.

Thin Pools management - Storage systems are preconfigured at the factory with virtually provisioned devices. Thin Provisioning helps reduce cost, improve capacity utilization, and simplify storage management. Thin Provisioning presents a large amount of capacity to a host and then consumes space only as needed from a shared pool. Thin Provisioning ensures that thin pools can expand in small increments while protecting performance, and performs nondisruptive shrinking of thin pools to help reuse space and improve capacity utilization.

Disk Groups management - A disk group is a collection of hard drives within the storage array that share the same performance characteristics.

VLUN Migration management - Virtual LUN Migration (VLUN Migration) enables transparent, nondisruptive data mobility for disk group provisioned and virtually provisioned storage system volumes, between storage tiers and between RAID protection schemes. Virtual LUN can be used to populate newly added drives or to move volumes between high-performance and high-capacity drives, resulting in the delivery of tiered storage capabilities within a single storage system. Migrations are performed while providing constant data availability and protection.

Understanding storage provisioning

Service level provisioning simplifies storage management by automating many of the tasks that are associated with provisioning storage.

With the release of HYPERMAX OS 5977, Unisphere introduces support for service level provisioning. Service level provisioning simplifies storage management by automating many of the tasks that are associated with provisioning storage.

Service level provisioning eliminates the need for storage administrators to manually assign physical resources to their applications. Instead, storage administrators specify the storage performance and capacity that is required for the application and let the system provision the workload appropriately.

By default, storage systems running HYPERMAX OS 5977 or PowerMaxOS 5978 are pre-configured with a single Storage Resource Pool (SRP). The SRP contains all the hard disks on the system, organized into disk groups by technology, capacity, rotational speed, and RAID protection type. Storage administrators can view all SRPs configured on the system, and the demand that storage groups are placing on them.

Storage systems are also pre-configured with several service levels and workloads. Storage administrators use the service levels and workloads to specify the performance objectives for the application they are provisioning.

When provisioning storage for an application, storage administrators assign the appropriate SRP, service level, and workload to the storage group containing the LUNs associated with the application.

Unisphere provides the following methods for provisioning storage:

Recommended: This method relies on wizards to step you through the provisioning process. It is best suited for novice and advanced users who do not require a high level of customization, that is, the ability to create their own volumes, storage groups, and so on.

Advanced: This method, as its name implies, is for advanced users who want the ability to control every aspect of the provisioning process.


This section provides the high-level steps for each method, with links to the relevant help topics for more detail.

Regardless of the method you choose, once you have completed the process, a masking view has been created. In the masking view, the volumes in the storage group are masked to the host initiators and mapped to the ports in the port group.

Before you begin:

The storage system has been configured.

To provision storage for storage systems running HYPERMAX OS 5977 or PowerMaxOS 5978:

Recommended

1. Use the Create Host dialog box to group host initiators (HBAs).

2. Use the Provision Storage wizard, which steps you through the process of creating the storage group, port group, and masking view.

Advanced

1. Use the Create Host dialog box to group host initiators (HBAs).

2. Create one or more volumes on the storage system.

3. Use the Create Storage Group dialog box to add the created volumes to a storage group, and associate the storage group with a storage resource pool, a service level, and a workload.

4. Group Fibre Channel and/or iSCSI front-end directors.

5. Associate the host, storage group, and port group into a masking view.

Unisphere provides the following methods for provisioning storage on storage systems running Enginuity OS 5876:

Recommended: This method relies on wizards to step you through the provisioning process. It is best suited for novice and advanced users who do not require a high level of customization, that is, the ability to create their own volumes, storage groups, and so on.

Advanced: This method, as its name implies, is for advanced users who want the ability to control every aspect of the provisioning process.

This section provides the high-level steps for each method, with links to the relevant help topics for more detail.

Regardless of the method you choose, once you have completed the process, a masking view has been created. In the masking view, the volumes in the storage group are masked to the host initiators and mapped to the ports in the port group.

Before you begin:

The storage system has been configured.

To provision storage for storage systems running Enginuity OS 5876:

Recommended

1. Use the Create Host dialog box to group host initiators (HBAs).

2. Use the Provision Storage wizard, which steps you through the process of creating the storage group, port group, and masking view. The wizard optionally associates the storage group with a FAST policy.

Advanced

1. Use the Create Host dialog box to group host initiators (HBAs).

2. Create one or more volumes on the storage system.

3. Use the Create Storage Group wizard to create a storage group. If you want to add the volumes you created in step 2, be sure to set the Storage Group Type to Empty, and then complete adding volumes to storage groups.

4. Group Fibre Channel and/or iSCSI front-end directors.

5. Associate the host, storage group, and port group into a masking view.

6. Associate the storage group with a FAST policy.

Optional: Associate the storage group that you created in step 3 with an existing FAST policy and assign a priority value for the association.


Understanding storage groups

Storage groups are collections of devices on the array that are used by an application, a server, or a collection of servers. Storage groups are used to present storage to hosts for masking/mapping, Virtual LUN technology, FAST, and various base operations.

For storage groups on storage systems running HYPERMAX OS 5977 or higher:

The maximum number of storage groups that are allowed on a storage system running HYPERMAX OS 5977 is 16,384.

For HYPERMAX OS 5977 or higher, the maximum number of child storage groups that are allowed in a cascaded configuration is 64.

A storage group can contain up to 4,096 volumes.

A volume can belong to multiple storage groups when only one of the groups is under FAST control.

You cannot create a storage group containing both CKD volumes and FBA volumes.

For storage groups on storage systems running Enginuity OS 5876:

The maximum number of storage groups that are allowed on a storage system running Enginuity OS 5876 is 8,192.

For Enginuity 5876 or higher, the maximum number of child storage groups that are allowed in a cascaded configuration is 32.
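The numeric limits above can be captured as a small lookup with a helper check. This is an illustrative sketch; the names are not Unisphere API calls:

```python
SG_LIMITS = {
    "HYPERMAX OS 5977": {"max_storage_groups": 16_384, "max_child_sgs": 64},
    "Enginuity OS 5876": {"max_storage_groups": 8_192, "max_child_sgs": 32},
}
MAX_VOLUMES_PER_SG = 4_096  # HYPERMAX OS 5977 or higher

def can_add_child_sg(os_name: str, current_children: int) -> bool:
    """True if one more child SG fits in a cascaded configuration."""
    return current_children < SG_LIMITS[os_name]["max_child_sgs"]
```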

Understanding data reduction

Data reduction reduces the physical space that user data consumes on storage groups and storage resources.

Data reduction is enabled by default and can be turned on and off at storage group and storage resource level.

If a storage group is cascaded, enabling data reduction at this level enables data reduction for each of the child storage groups. The user has the option to disable data reduction on one or more of the child storage groups if desired.

To turn the feature off on a particular storage group or storage resource, clear the Enable Data Reduction check box in the Create Storage Group, Modify Storage Group, or Add Storage Resource To Storage Container dialogs, or when using the Provision Storage or Create Storage Container wizards.

The following are the prerequisites for using data reduction:

Data reduction is only allowed on All Flash systems running the HYPERMAX OS 5977 Q3 2016 Service Release or PowerMaxOS 5978.

Data reduction is allowed for FBA devices only.

The user must have at least StorageAdmin rights.

The storage group must be FAST managed.

The associated SRP cannot be composed, either fully or partially, of external storage.

Reporting

Users can see the current compression ratio on the device, the storage group, and the SRP. Efficiency ratios are reported in increments of 0.1:1 (for example, 2.3:1).

NOTE: External storage is not included in efficiency reports. For mixed SRPs with internal and external storage, only the internal storage is used in the efficiency ratio calculations.
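A sketch of how a reported ratio could be derived under the rules above: only internal storage counts, and the result is rounded to one decimal place. The function name and units are illustrative, not part of the product:

```python
def efficiency_ratio(logical_internal_tb: float, physical_internal_tb: float) -> float:
    """Internal-only efficiency ratio, rounded to one decimal place."""
    if physical_internal_tb <= 0:
        raise ValueError("physical capacity must be positive")
    # External capacity is excluded by the caller; only internal TB are passed in.
    return round(logical_internal_tb / physical_internal_tb, 1)
```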

Understanding service levels

A service level is the response time target for a storage group. The service level enables you to set the desired response time target for the storage group on the storage array.

It automatically monitors and adapts to the workload to maintain (or meet) the response time target. The service level includes an optional workload type. The optional workload type can be used to further tune expectations for the workload storage group to provide enough flash to meet your performance objective.


Suitability Check restrictions

The suitability check determines whether the storage system can handle the updated service level.

The Suitability Check option is only available when:

The storage system is running HYPERMAX OS 5977 or PowerMaxOS 5978.

The storage system is registered with the performance data processing option for statistics.

The workloads have been processed.

All the storage groups that are involved have a service level and SRP set.

The target SRP does not contain only external disk groups (such as XtremIO).

The storage system is local.

The storage group is not in a masking view (only for the local provisioning wizard).

When an issue arises with one of the selected ports during provisioning, a valid Front-End Suitability score cannot be derived. Examples of such issues are: a virtual port is selected, an offline port is selected, or a selected port has no negotiated speed. When an issue arises, 200.0% (not a real suitability score) is displayed. Excluding data has no impact on the 200% displayed value.
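The port screening described above can be sketched as follows. The dictionary keys and function name are hypothetical; only the rule (virtual, offline, or no-negotiated-speed ports force the 200.0% sentinel) comes from the guide:

```python
SENTINEL = 200.0  # displayed value; not a real suitability score

def front_end_suitability(ports: list, real_score: float) -> float:
    """Return the real score, or the sentinel if any selected port is unusable."""
    for p in ports:
        if p.get("virtual") or not p.get("online") or not p.get("negotiated_speed"):
            return SENTINEL
    return real_score
```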

Understanding storage templates

Storage templates are a reusable set of storage requirements that simplify storage management for virtual data centers by eliminating many of the repetitive tasks required to create and make storage available to hosts or applications.

About this task

With this feature, Administrators and Storage Administrators can create templates for their common provisioning tasks and then invoke them later when performing such things as:

Creating or provisioning storage groups.

The templates created on a particular Unisphere server can be used across all the arrays on that particular server.

Storage templates are supported on storage systems running HYPERMAX OS 5977 or PowerMaxOS 5978.

A provisioning template contains configuration information and a performance reservation.

The performance reservation saved with a template is generated from a two-week snapshot of the source storage group's performance data. The total IOPS and MBPS, I/O mixture, and skew profile from this snapshot are used for array impact tests when the template is used to provision a new storage group.

Understanding Storage Resource Pools

A Storage Resource Pool is a collection of data pools that provides FAST with a domain for capacity and performance management.

By default, a single default Storage Resource Pool is preconfigured at the factory. More Storage Resource Pools can be created with a service engagement. FAST performs all its data movements within the boundaries of the Storage Resource Pool.

Understanding volumes

A storage volume is an identifiable unit of data storage. Storage groups are sets of volumes.

The Volumes view in the Unisphere user interface provides a single place from which to view and manage all the volume types on the system.

Understanding Federated Tiered Storage

Federated Tiered Storage (FTS) enables you to attach external storage to a storage system.

Attaching external storage enables you to use physical disk space on existing storage systems. You also gain access to features such as local replication, remote replication, storage tiering, data management, and data migration.


For additional information about FTS, see the following documents:

Federated Tiered Storage (FTS) Technical Notes

Solutions Enabler Array Management CLI Product Guide

Solutions Enabler TimeFinder Family CLI User Guide

Understanding FAST

Fully Automated Storage Tiering (FAST) automates management of storage system disk resources on behalf of thin volumes.

NOTE: This section describes FAST operations for storage systems running HYPERMAX OS 5977 or PowerMaxOS 5978.

FAST automatically configures disk groups to form a Storage Resource Pool by creating thin pools according to each individual disk technology, capacity, and RAID type.

FAST technology moves the most active parts of your workloads (hot data) to high-performance flash disks and the least-frequently accessed storage (cold data) to lower-cost drives, using the best performance and cost characteristics of each different drive type. FAST delivers higher performance using fewer drives to help reduce acquisition, power, cooling, and footprint costs. FAST can factor in the RAID protections to ensure write-heavy workloads go to RAID 1 and read-heavy workloads go to RAID 6. This process is entirely automated and requires no user intervention.

FAST also delivers variable performance levels through service levels. Thin volumes can be added to storage groups and the storage group can be associated with a specific service level to set performance expectations.

FAST monitors the performance of the storage group relative to the service level and automatically provisions the appropriate disk resources to maintain a consistent performance level.

Understanding Workload Planner

Workload Planner is a FAST component that is used to display performance metrics for applications. It also models the impact of migrating the workload from one storage system to another.

Workload Planner is supported on storage systems running Enginuity OS 5876 or HYPERMAX OS 5977.

For storage groups to be eligible for workload planning, they must meet the following criteria:

On a locally attached storage system registered for performance.

Belong to only one masking view.

Under FAST control:

For storage systems running HYPERMAX OS 5977, they must be associated with a service level.

For storage systems running Enginuity OS 5876, they must be associated with a FAST policy.

Contain only FBA volumes.

Also, the Unisphere server must be on an open systems host.

Understanding time windows

Time windows are used by FAST, FAST VP, and Optimizer to specify when data can be collected for performance analysis and when moves or swaps can execute.

There are two types of time windows:

Performance time windows: Specify when performance samples can be taken for analysis.

Move time windows: Specify when moves/swaps are allowed to start or not start.

In addition, performance and move time windows can be further defined as open or closed:

Open: When creating performance time windows, this specifies that the data collected in the time window should be included in the analysis. When creating move time windows, this specifies that the moves can start within the time window. This type of time window is also referred to as inclusive.

Closed: When creating performance time windows, this specifies that the data collected in the time window should be excluded from analysis. When creating move time windows, this specifies that the moves cannot start within the time window. This type of time window is also referred to as exclusive.
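The open/closed semantics for move time windows can be sketched as below. The guide does not state how overlapping windows interact, so this sketch assumes a closed (exclusive) window overrides an open (inclusive) one; hours and window shapes are illustrative:

```python
def move_can_start(hour: int, windows: list) -> bool:
    """windows: list of (start_hour, end_hour, kind), kind 'open' or 'closed'.

    A move may start only inside an open window and outside every closed one
    (assumption: closed windows take precedence over open ones).
    """
    in_open = any(s <= hour < e for s, e, k in windows if k == "open")
    in_closed = any(s <= hour < e for s, e, k in windows if k == "closed")
    return in_open and not in_closed
```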


Understanding FAST.X

FAST.X enables the integration of storage systems running HYPERMAX OS 5977 or higher and heterogeneous arrays.

FAST.X enables LUNs on external storage to be used as raw capacity. Data services such as SRDF, TimeFinder, and Open Replicator are supported on the external device.

For additional information, see the following documents:

Solutions Enabler Array Management CLI Guide

Solutions Enabler TimeFinder CLI User Guide

Overview of external LUN virtualization

When you attach external storage to a storage system, the SCSI logical units of an external storage system are virtualized as disks called eDisks.

eDisks have two modes of operation:

Encapsulation: Allows you to preserve existing data on external storage systems and access it through storage volumes. These volumes are called encapsulated volumes.

External provisioning: Allows you to use external storage as raw capacity for new storage volumes. These volumes are called externally provisioned volumes. Existing data on the external volumes is deleted when they are externally provisioned.

The following restrictions apply to eDisks:

Can only be unprotected volumes. The RAID protection scheme of eDisks depends on the external storage system.

Cannot be AS400, CKD, or gatekeeper volumes.

Cannot be used as VAULT, SFS, or ACLX volumes.

Encapsulation

Encapsulation has two modes of operation:

Encapsulation for disk group provisioning (DP encapsulation)

The eDisk is encapsulated and exported from the storage system as disk group provisioned volumes.

Encapsulation for virtual provisioning (VP encapsulation)

The eDisk is encapsulated and exported from the storage system as thin volumes.

In either case, Enginuity automatically creates the necessary volumes. If the eDisk is larger than the maximum volume capacity or the configured minimum auto meta capacity, Enginuity creates multiple volumes to account for the full capacity of the eDisk. These volumes are concatenated into a single concatenated meta volume to enable access to the complete volume of data available from the eDisk.
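The volume-count arithmetic above is a ceiling division: an eDisk larger than the maximum volume capacity is carved into multiple volumes, which are concatenated into a single meta volume covering the full eDisk. A sketch (names and units are illustrative):

```python
import math

def volumes_needed(edisk_capacity_gb: float, max_volume_capacity_gb: float) -> int:
    """Number of volumes Enginuity would create to cover the eDisk."""
    return math.ceil(edisk_capacity_gb / max_volume_capacity_gb)
```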

External provisioning

After you virtualize an eDisk for external provisioning, you can create volumes from the external disk group and present the storage to users. You can also use this storage to create a new FAST VP tier.

NOTE: If you use external provisioning, any data that is on the external volume is deleted.


Geometry of encapsulated volumes

Storage volumes are built based on the storage system cylinder size (fifteen 64 K tracks), so the capacity of storage volumes does not always match the raw capacity of the eDisk. If the capacity does not match, Enginuity sets a custom geometry on the encapsulated volume. For created meta volumes, Enginuity defines the geometry on the meta head, and only the last member can have a capacity that spans beyond the raw capacity of the eDisk.

Encapsulated volumes that have a cylinder size larger than the reported user-defined geometry are considered geometry limited. For more details and a list of restrictions that apply to geometry-limited volumes, see the Solutions Enabler Array Controls CLI Guide.
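The cylinder rounding above can be sketched directly: volumes are built in whole cylinders of fifteen 64 KB tracks (960 KB), so an eDisk whose raw capacity is not a multiple of the cylinder size gets a custom geometry. Function names are illustrative:

```python
TRACK_KB = 64
TRACKS_PER_CYLINDER = 15
CYLINDER_KB = TRACK_KB * TRACKS_PER_CYLINDER  # 960 KB per cylinder

def volume_capacity_kb(raw_kb: int) -> int:
    """Raw eDisk capacity rounded up to a whole number of cylinders."""
    cylinders = -(-raw_kb // CYLINDER_KB)  # ceiling division
    return cylinders * CYLINDER_KB

def needs_custom_geometry(raw_kb: int) -> bool:
    # A custom geometry is set when the built capacity exceeds the raw capacity.
    return volume_capacity_kb(raw_kb) != raw_kb
```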

Understanding tiers

FAST automatically moves active data to high-performance storage tiers and inactive data to low-cost, high-capacity storage tiers.

The following rules apply to tier creation:

This feature requires Enginuity OS 5876.

The maximum number of tiers that can be defined on a storage system is 256.

When a disk group or thin pool is specified, its technology type must match the tier technology.

Disk groups can only be specified when the tier include type is static.

A standard tier cannot be created if it:

Leads to static and dynamic tier definitions in the same technology.

Partially overlaps with an existing tier. Two tiers partially overlap when they share only a subset of disk groups. For example, Tier A partially overlaps with Tier B when Tier A contains disk groups 1 and 2, and Tier B contains only disk group 2.
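The partial-overlap rule above reduces to a simple set predicate: two tiers partially overlap when they share some disk groups without being the same set. A sketch using the guide's own example:

```python
def partially_overlap(tier_a: set, tier_b: set) -> bool:
    """True if the tiers share disk groups but are not identical sets."""
    return bool(tier_a & tier_b) and tier_a != tier_b
```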

Understanding thin pools

Storage systems are preconfigured at the factory with virtually provisioned devices. Thin Provisioning helps reduce cost, improve capacity utilization, and simplify storage management. Thin Provisioning presents a large amount of capacity to a host and then consumes space only as needed from a shared pool. Thin Provisioning ensures that thin pools can expand in small increments while protecting performance, and performs nondisruptive shrinking of thin pools to help reuse space and improve capacity utilization.

Unisphere works on a best effort basis when creating thin pools, meaning that it attempts to satisfy as much as possible of the requested pool from existing DATA volumes, and then creates the volumes necessary to meet any shortfall.

Before you begin:

Thin pools contain DATA volumes of the same emulation and the same configuration.

When creating thin pools, Unisphere attempts to instill best practices in the creation process by updating the default protection level according to the selected disk technology:

Technology    Default protection level

EFD           RAID5(3+1)

FC            2-Way Mirror

SATA          RAID6(6+2)
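The default-protection table above as a lookup, in case the mapping is needed programmatically (illustrative only; not a Unisphere API):

```python
DEFAULT_PROTECTION = {
    "EFD": "RAID5(3+1)",     # enterprise flash
    "FC": "2-Way Mirror",    # Fibre Channel drives
    "SATA": "RAID6(6+2)",    # high-capacity drives
}
```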

Understanding disk groups

A disk group is a collection of hard drives within the storage array that share the same performance characteristics.

Disk groups can be viewed and managed from the Unisphere user interface.


Understanding Virtual LUN Migration

Virtual LUN Migration (VLUN Migration) enables transparent, nondisruptive data mobility for both disk group provisioned and virtually provisioned storage system volumes, between storage tiers and between RAID protection schemes.

NOTE: Virtual LUN migration requires Enginuity OS 5876.

Virtual LUN can be used to populate newly added drives or move volumes between high performance and high capacity drives, thereby delivering tiered storage capabilities within a single storage system. Migrations are performed while providing constant data availability and protection.

Virtual LUN Migration performs tiered storage migration by moving data from one RAID group to another, or from one thin pool to another. It is also fully interoperable with all other storage system replication technologies such as SRDF, TimeFinder/Clone, TimeFinder/Snap, and Open Replicator.

RAID Virtual Architecture allows, for the purposes of migration, two distinct RAID groups, of different types or on different storage tiers, to be associated with a logical volume. In this way, Virtual LUN allows for the migration of data from one protection scheme to another, for example RAID 1 to RAID 5, without interruption to the host or application accessing data on the storage system volume.

Virtual LUN Migration can be used to migrate regular storage system volumes and metavolumes of any emulation (FBA, CKD, and IBM i series). Migrations can be performed between all drive types including high-performance enterprise Flash drives, Fibre Channel drives, and large capacity SATA drives.

Migration sessions can be volume migrations to configured and unconfigured space, or migration of thin volumes to another thin pool.

Understanding vVols

VMware vVols enable data replication, snapshots, and encryption to be controlled at the VMDK level instead of the LUN level, so these data services are performed on a per-VM (application-level) basis from the storage array.

The vVol Dashboard provides a single place to monitor and manage vVols.

The storage system must be running HYPERMAX OS 5977 or PowerMaxOS 5978.

Host Management

Storage hosts are systems that use storage system LUN resources. Unisphere manages the hosts.

Host Management covers the following areas:

Management of hosts and host groups

Management of masking views - A masking view is a container of a storage group, a port group, and an initiator group, and makes the storage group visible to the host. Devices are masked and mapped automatically. The groups must contain device entries.

Management of port groups - Port groups contain director and port identification and belong to a masking view. Ports can be added to and removed from the port group. Port groups that are no longer associated with a masking view can be deleted.

Management of initiators and initiator groups - An initiator group is a container of one or more host initiators (Fibre or iSCSI). Each initiator group can contain up to 64 initiator addresses or 64 child IG names. Initiator groups cannot contain a mixture of host initiators and child IG names.

Monitoring of XtremSW Cache (host) cache adapters

Management of PowerPath hosts

Management of mainframe configured splits, CU images, and CKD volumes


Understanding hosts

Storage hosts are systems that use storage system LUN resources. A logical unit number (LUN) is an identifier that is used for labeling and designating subsystems of physical or virtual storage.

The maximum number of initiators that are allowed in a host depends on the storage operating environment:

For Enginuity OS 5876, the maximum number of initiators that is allowed is 32.

For HYPERMAX OS 5977 or higher, the maximum number of initiators that is allowed is 64.

Understanding masking views

A masking view is a container of a storage group, a port group, and an initiator group, and makes the storage group visible to the host.

Masking views can be managed from the Unisphere user interface. Devices are masked and mapped automatically. The groups must contain device entries.

Understanding port groups

Port groups contain director and port identification and belong to a masking view. Ports can be added to and removed from the port group. Port groups that are no longer associated with a masking view can be deleted.

Note the following recommendations:

Port groups should contain four or more ports.

Each port in a port group should be on a different director.

A port can belong to more than one port group. However, for storage systems running HYPERMAX OS 5977 or higher, you cannot mix different types of ports (physical FC ports, virtual ports, and iSCSI virtual ports) within a single port group.

Understanding initiators

An initiator group is a container of one or more host initiators (Fibre or iSCSI).

Each initiator group can contain up to 64 initiator addresses or 64 child IG names. Initiator groups cannot contain a mixture of host initiators and child IG names.
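The initiator-group rules above can be sketched as a validator: at most 64 entries, and host initiators cannot be mixed with child IG names. The function name is illustrative:

```python
def valid_initiator_group(initiators: list, child_ig_names: list) -> bool:
    """Check the documented limits for a single initiator group."""
    if initiators and child_ig_names:
        return False  # no mixing of host initiators and child IGs
    return len(initiators) <= 64 and len(child_ig_names) <= 64
```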

Understanding PowerPath hosts

PowerPath is host-based software that provides automated data path management and load-balancing capabilities for heterogeneous server, network, and storage environments deployed in physical and virtual configurations.

The following are the minimum requirements to perform this task:

A storage system running PowerMaxOS 5978 or higher

PowerPath 6.3

Understanding mainframe management

Service level provisioning for mainframe simplifies storage system management by automating many of the tasks that are associated with provisioning storage.

The mainframe dashboard provides you with a single place to monitor and manage configured splits, CU images, and CKD volumes.

The mainframe dashboard is organized into the following panels:


CKD Compliance - Displays how well CKD storage groups are complying with their respective service level policies, if applicable.

CKD Storage Groups - Displays the mainframe storage groups on the array. Double-click a storage group to see more details and information about its compliance and volumes.

Actions - Allow the user to provision storage and create CKD volumes.

Summary - Displays the mainframe summary information in terms of splits, CU images, and CKD volumes.

With the release of HYPERMAX OS 5977 Q1 2016, Unisphere introduces support for service level provisioning for mainframe. Service level provisioning simplifies storage system management by automating many of the tasks that are associated with provisioning storage.

Service level provisioning eliminates the need for storage administrators to manually assign physical resources to their applications. Instead, storage administrators specify the service level and capacity that is required for the application and the system provisions the storage group appropriately.

You can provision CKD storage to a mainframe host using the Provision Storage wizard.

The storage system must be running HYPERMAX OS 5977 Q1 2016, or higher, and have at least one FICON director configured.

You can map CKD devices to front-end EA/EF directors. Addressing on EA and EF directors is divided into Logical Control Unit images, also known as CU images. Each CU image has its own unique SSID and contains a maximum of 256 devices (numbered 0x00 through 0xFF). When mapped to an EA or EF port, a group of devices becomes part of a CU image.
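The CU-image addressing above is plain modular arithmetic: each CU image holds a maximum of 256 devices, numbered 0x00 through 0xFF within the image. A sketch (the flat-index mapping is illustrative):

```python
def cu_address(flat_device_index: int) -> tuple:
    """Map a flat device index to (cu_image, device_number_within_image)."""
    return flat_device_index // 256, flat_device_index % 256
```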

With the release of HYPERMAX OS 5977 Q2 2017, Unisphere introduces support for All Flash Mixed FBA/CKD arrays.

NOTE: This feature is only available for All Flash 450F/850F/950F arrays that are:

Purchased as a mixed All Flash system

Installed at HYPERMAX OS 5977 Q2 2017 or later

Configured with two Storage Resource Pools - one FBA Storage Resource Pool and one CKD Storage Resource Pool

You can provision FBA/CKD storage to a mainframe host using the Provision Storage wizard.

NOTE:

1. A CKD SG can only provision from a CKD SRP.

2. An FBA SG can only provision from an FBA SRP.

3. FBA volumes cannot reside in a CKD SRP.

4. CKD volumes cannot reside in a FBA SRP.

5. Compression is only for FBA volumes.

You can map FBA devices to front-end EA/EF directors. Addressing on EA and EF directors is divided into Logical Control Unit images (CU images). Each CU image has its own unique SSID and contains a maximum of 256 devices (numbered 0x00 through 0xFF). When mapped to an EA or EF port, a group of devices becomes part of a CU image.

Data protection management

Data Protection management ensures that data is protected and remains available.

Data Protection Management covers the following areas:

Management of snapshot policies

Management of MetroDR

Management of SRDF protected storage groups

Management of device groups - A device group is a user-defined group consisting of devices that belong to a locally attached array. Control operations can be performed on the group as a whole, or on the individual device pairs in the group. By default, a device can belong to more than one device group.

Management of SRDF groups - SRDF groups provide a collective data transfer path linking volumes of two separate storage systems. These communication and transfer paths are used to synchronize data between the R1 and R2 volume pairs that are associated with the RDF group. At least one physical connection must exist between the two storage systems within the fabric topology. See Dell EMC SRDF Introduction for an overview of SRDF.

Management of Non-Disruptive Migration (NDM) - NDM enables you to migrate storage group (application) data in a non-disruptive manner with no downtime from NDM-capable source arrays to NDM-capable target arrays.

Management of SRDF/A DSE Pools Management of TimeFinder/Snap pool


Management of Open Replicator - Open Replicator is a software tool that is used to migrate data from third-party arrays to PowerMax.

Management of RecoverPoint systems

Management of Virtual Witness - The Witness feature supports a third party that the two storage systems consult when they lose connectivity with each other, that is, when their SRDF links go out of service. When SRDF links go out of service, the Witness helps to determine, for each SRDF/Metro session, which of the storage systems should remain active (volumes continue to be readable and writable by hosts) and which goes inactive (volumes not accessible).

Manage remote replication sessions

Unisphere supports the monitoring and management of SRDF replication on storage groups directly without having to map to a device group.

The SRDF dashboard provides a single place to monitor and manage SRDF sessions on a storage system, including device group types R1, R2, and R21.

See Dell EMC SRDF Introduction for an overview of SRDF.

Unisphere allows you to monitor and manage SRDF/Metro from the SRDF dashboard. SRDF/Metro delivers active/active high availability for non-stop data access and workload mobility within a data center and across metro distance. It provides array clustering for storage systems running HYPERMAX OS 5977 or PowerMaxOS 5978 enabling even more resiliency, agility, and data mobility. SRDF/Metro enables hosts and host clusters to directly access a LUN or storage group on the primary SRDF array and secondary SRDF array (sites A and B). This level of flexibility delivers the highest availability and best agility for rapidly changing business environments.

In an SRDF/Metro configuration, SRDF/Metro uses the SRDF link between the two sides of the SRDF device pair to ensure consistency of the data on the two sides. If the SRDF device pair becomes Not Ready (NR) on the SRDF link, SRDF/Metro must respond by choosing one side of the SRDF device pair to remain accessible to the hosts, while making the other side inaccessible. Two options enable this choice: Bias and Witness.

The first option, Bias, is a function of the two storage systems running HYPERMAX OS 5977 taking part in the SRDF/Metro configuration and is a required and integral component of the configuration. The second option, Witness, is an optional component of SRDF/Metro which allows a third storage system running Enginuity OS 5876 or HYPERMAX OS 5977 to act as an external arbitrator to avoid an inconsistent result in cases where the bias functionality alone may not result in continued host availability of a surviving non-biased array.

Understanding Snapshot policy

The Snapshot policy feature provides snapshot orchestration at scale (1,024 snapshots per storage group). The feature simplifies snapshot management for standard and cloud snapshots.

Snapshots can be used to recover from data corruption, accidental deletion, or other damage, offering continuous data protection. A large number of snapshots can be difficult to manage. The Snapshot policy feature provides an end-to-end solution to create, schedule, and manage standard (local) and cloud snapshots.

The snapshot policy (Recovery Point Objective (RPO)) specifies how often the snapshot should be taken and how many of the snapshots should be retained. The snapshot may also be specified to be secure (these snapshots cannot be terminated by users before their time to live (TTL), derived from the snapshot policy's interval and maximum count, has expired). Up to four policies can be associated with a storage group, and a snapshot policy may be associated with many storage groups. Unisphere provides views and dialogs to view and manage the snapshot policies. Unisphere also calculates and reports on the compliance of each storage group to its snapshot policies.

The following rules apply to snapshot policies:

The maximum number of snapshot policies (local and cloud) that can be created on a storage system is 20.

Multiple storage groups can be associated with a snapshot policy.

A maximum of four snapshot policies can be associated with an individual storage group.

A storage group or device can have a maximum of 256 manual snapshots.

A storage group or device can have a maximum of 1,024 snapshots.

When there are 1,024 snapshots in existence and another snapshot is taken, the oldest unused snapshot that is associated with the snapshot policy is removed.

When devices are added to a snapshot policy storage group, snapshot policies that apply to the storage group are applied to the added devices.
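The 1,024-snapshot cap described above can be sketched as an eviction rule: when the cap is reached and a policy takes another snapshot, the oldest unused policy snapshot is removed first. The `in_use` flag and list shape are illustrative assumptions, not Unisphere data structures:

```python
MAX_SNAPSHOTS = 1024

def take_policy_snapshot(snapshots: list, new_snap: dict) -> list:
    """snapshots is ordered oldest-first; each entry may carry an 'in_use' flag."""
    if len(snapshots) >= MAX_SNAPSHOTS:
        for i, snap in enumerate(snapshots):
            if not snap.get("in_use"):   # oldest unused snapshot is evicted
                del snapshots[i]
                break
    snapshots.append(new_snap)
    return snapshots
```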


When devices are removed from a snapshot policy storage group, snapshot policies that apply to the storage group are no longer applied to the removed devices.

If overlapping snapshot policies are applied to storage groups, they run and take snapshots independently.

Unisphere provides compliance information for each snapshot policy that is directly associated with a storage group. Snapshot policy compliance is measured against the count and intervals of the existing snapshots. Snapshots must be valid (they must still exist, must be in a non-failed state, and must be at the expected scheduled time). A snapshot could be missing because it was manually terminated or because the snapshot operation failed.

Snapshot compliance for a storage group is taken as the lowest compliance value for any of the snapshot policies that are directly associated with the storage group.

Compliance for a snapshot policy that is associated with a storage group is based on the number of valid snapshots within the retention count. The retention count is translated to a retention period for compliance calculation. The retention period is the snapshot interval multiplied by the snapshot maximum count. For example, a one hour interval with a 30 snapshot count means a 30-hour retention period.

The compliance threshold value for green to yellow is stored in the snapshot policy definition. Once the number of valid snapshots falls below this value, compliance turns yellow.

The compliance threshold value for yellow to red is stored in the snapshot policy definition. Once the number of valid snapshots falls below this value, compliance turns red.
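The calculations above can be sketched directly: the retention period is the policy interval multiplied by the maximum snapshot count, and the compliance color follows from the number of valid snapshots against the two thresholds stored in the policy definition. Function names are illustrative:

```python
def retention_hours(interval_hours: float, max_count: int) -> float:
    """Retention period used for compliance, per the rule above."""
    return interval_hours * max_count

def compliance_color(valid_snapshots: int, yellow_threshold: int, red_threshold: int) -> str:
    """Green until valid snapshots fall below the yellow threshold; red below the red one."""
    if valid_snapshots < red_threshold:
        return "red"
    if valid_snapshots < yellow_threshold:
        return "yellow"
    return "green"
```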

In addition to performance level compliance, snapshot compliance is also calculated by polling the storage system once an hour for SnapVX related information for storage groups that have snapshot policies that are associated with them. The returned snapshot information is summarized into the required information for the database compliance entries.

When the maximum count of snapshots for a snapshot policy is changed, this changes the compliance for the storage group or service level combination. Compliance values are updated accordingly.

If compliance calculation is performed during the creation of a snapshot, then an establish-in-progress state may be detected. This is acceptable for the most recent snapshot but is considered failed for any older snapshot.

When a storage group and service level have only recently been associated and the full maximum count of snapshots has not yet been reached, Unisphere scales the calculation to the number of snapshots that are available and represents compliance accordingly until the full maximum count of snapshots has been reached. If a snapshot failed to be taken for a reason (such as the storage group or service level was suspended or a snapshot was manually terminated before the maximum snapshot count was reached), the compliance is reported as out of compliance appropriately.

When the service level interval is changed, the compliance window changes and the required number of snapshots may not yet exist for correct compliance.

If a service level is suspended or a storage group or service level combination is suspended, snapshots are not created. Older snapshots fall outside the compliance window and the maximum count of required snapshots is not found.

Manual termination of snapshots inside the compliance window results in the storage group or service level combination falling out of compliance.

Configuration of alerts related to snapshot policies is available from Settings > Alerts > Alert Policies on the Unisphere user interface.

NOTE: Snapshot policy offsets (the execution time within the RPO interval) and snapshot time stamps are both mapped to be relative to the clock (including time zone) of the local management host. If times are not synchronized across hosts, these appear different to users on those hosts. Even if they are synchronized, rounding that occurs during time conversion may result in the times being slightly different.

Unisphere supports the following snapshot policy management tasks:

Create snapshot policies
View and modify snapshot policies
Associate a snapshot policy and a storage group with each other
Disassociate a snapshot policy and a storage group from each other
View snapshot policy compliance
Suspend or resume snapshot policies
Suspend or resume snapshot policies associated with one, more than one, or all storage groups
Set a snapshot policy snapshot to be persistent
Bulk terminate snapshots (not specific to snapshots associated with a snapshot policy)
Delete snapshot policies


Understanding SRDF/Metro Smart DR SRDF/Metro Smart DR is a two-region, highly available (HA) disaster recovery (DR) solution. It integrates SRDF/Metro and SRDF/A, enabling HA DR for a Metro session.

A session or environment name uniquely identifies each Smart DR environment. It is composed of three arrays (the MetroR1 array, the MetroR2 array, and the DR array). All arrays contain the same number of devices and all device pairings form a triangle.

The MetroR1 array contains:

One Metro SRDF group that is configured to the MetroR2 array (MetroR1_Metro_RDFG)
One DR SRDF group that is configured to the DR array (MetroR1_DR_RDFG)
Devices that are concurrent SRDF and are paired using MetroR1_Metro_RDFG and MetroR1_DR_RDFG

The MetroR2 array contains:

One Metro SRDF group that is configured to the MetroR1 array (MetroR2_Metro_RDFG)
One DR SRDF group that is configured to the DR array (MetroR2_DR_RDFG)
Devices that are concurrent SRDF and are paired using MetroR2_Metro_RDFG and MetroR2_DR_RDFG

The DR array contains one DR SRDF Group that is configured to the MetroR1 array (DR_MetroR1_RDFG).

Unisphere supports the setup, monitoring, and management of a Smart DR configuration using both the UI and the REST API. POST, PUT, and GET methods are accessible through the /92/replication/metrodr API resource.
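The REST resource mentioned above can be addressed as in the following sketch, which only builds the URLs. The host name, port, and the per-environment sub-path are illustrative assumptions; consult the Unisphere REST API documentation for the exact resource layout:

```python
from typing import Optional
from urllib.parse import urljoin

# Hypothetical Unisphere server; /univmax/restapi is the conventional
# Unisphere REST base path.
BASE = "https://unisphere.example.com:8443/univmax/restapi/"

def metrodr_url(environment_name: Optional[str] = None) -> str:
    """Build the /92/replication/metrodr resource URL. Appending an
    environment name (assumed sub-resource) addresses one Smart DR
    environment; the bare resource lists environments."""
    path = "92/replication/metrodr"
    if environment_name:
        path += f"/{environment_name}"
    return urljoin(BASE, path)

print(metrodr_url())               # list Smart DR environments (GET)
print(metrodr_url("SmartDR_env1")) # one environment (GET/PUT), name assumed
```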

Unisphere blocks attempts at using Smart DR SRDF groups for other replication sessions, and also blocks certain active management operations on Smart DR SRDF groups, including device expansion and adding new devices. This limitation can be overcome by temporarily deleting the Smart DR environment to perform these operations. Replication is never suspended, so the Recovery Point Objective (RPO) is not affected.

Unisphere blocks attempts at SRDF active management of storage groups that are part of a Smart DR environment.

Understanding non-disruptive migration Non-Disruptive Migration (NDM) provides a method for migrating data from a source array to a target array without application host downtime. Minimally disruptive migration enables migrations on the same supported platforms as NDM, but requires a short application outage.

More NDM information is available in the Dell EMC Solutions Enabler Array Controls and Management CLI User Guide and the Non-Disruptive Migration Best Practices and Operational Guide.

The outage is required because the non-disruptive nature of migration is heavily dependent on the behavior of multi-pathing software to detect, enable, or disable paths, which is not in the control of Dell EMC (except for PowerPath).

NDM enables you to migrate storage group (application) data (the storage groups must have masking views) in a non-disruptive manner with no downtime for the following scenarios:

From source arrays running Enginuity OS 5876 Q4 2016 and higher to target arrays running HYPERMAX OS 5977 Q4 2016 and higher.
From source arrays running Enginuity OS 5876 Q4 2016 and higher to target arrays running PowerMaxOS 5978.
From source arrays running HYPERMAX OS 5977 Q3 2016 or higher to target arrays running PowerMaxOS 5978.
From source arrays running HYPERMAX OS 5977 Q3 2016 or higher to target arrays running HYPERMAX OS 5977 Q3 2016 and higher.
From source arrays running PowerMaxOS 5978 to target arrays running PowerMaxOS 5978.

Source side service levels are automatically mapped to target side service levels.

NDM applies to open systems or FBA devices only.

NDM supports the ability to reduce data on all-flash storage systems while migrating.

An NDM session can be created on a storage group containing session target volumes (R2s) where the SRDF mode is synchronous. The target volumes of an NDM session may also have an SRDF/Synchronous session that is added after the NDM session is in the cutover sync state.

Suggested best practices

Try to migrate during slow processing times; QoS can be used to throttle the copy rate.
Use more SRDF links, if possible, to minimize impact:
Two is the minimum number of SRDF links allowed; NDM can use up to eight SRDF links. More links = more IOPS, lower response time.
Use dedicated links, as they yield more predictable performance than shared links.

You can migrate masked storage groups where the devices can also be in other storage groups. Examples of overlapping storage devices include:

Storage groups with the exact same devices, for example, SG-A has devices X, Y, Z; SG-B has devices X, Y, Z.
Devices that overlap, for example, SG-A has devices X, Y, Z; SG-B has devices X, Y.
Storage groups where there is overlap with one other migrated SG, for example, SG-A has devices X, Y, Z; SG-B has devices W, X, Y; SG-C has devices U, V, W.
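The overlap cases above reduce to set intersection on device names; a small sketch using the hypothetical device and storage group names from the examples:

```python
def overlap(sg_a, sg_b):
    """Return the set of devices shared by two storage groups."""
    return set(sg_a) & set(sg_b)

sg_a = {"X", "Y", "Z"}
sg_b = {"X", "Y"}
sg_c = {"U", "V", "W"}

print(sorted(overlap(sg_a, sg_a)))  # exact same devices: ['X', 'Y', 'Z']
print(sorted(overlap(sg_a, sg_b)))  # partial overlap: ['X', 'Y']
print(sorted(overlap(sg_a, sg_c)))  # no shared devices: []
```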

The following migration tasks can be performed from Unisphere:

Setting up a migration environment - Configures source and target array infrastructure for the migration process.
Viewing migration environments
Creating an NDM session - Duplicates the application storage environment from the source array to the target array.
Viewing NDM sessions
Viewing NDM session details
Cutting over an NDM session - Switches the application data access from the source array to the target array and duplicates the application data on the source array to the target array.
Optional: Stop synchronizing data after NDM cutover and Start synchronizing data after NDM cutover - Stops or starts the synchronization of writes to the target array back to the source array. When stopped, the application runs on the target array only.
Optional: Cancelling an NDM session - Cancels a migration that has not yet been committed.
Committing an NDM session - Removes application resources from the source array and releases the resources that are used for migration. The application permanently runs on the target array.
Optional: Recovering an NDM session - Recovers a migration process following an error.
Removing a migration environment - Removes the migration infrastructure.

Understanding Virtual Witness The Virtual Witness feature supports a third party that the two storage systems consult if they lose connectivity with each other, that is, their SRDF links go out of service.

When SRDF links go out of service, the Witness helps to determine, for each SRDF/Metro session, which of the storage systems should remain active (its volumes continue to be read/write accessible to hosts) and which goes inactive (volumes not accessible).

Before the HYPERMAX OS 5977 Q3 2016 or later release, a Witness could only be a third storage system that the two storage systems that are involved in a SRDF/Metro Session could both connect to over their SRDF links.

The HYPERMAX OS 5977 Q3 2016 or later release adds the ability for these storage systems to instead use a Virtual Witness (vWitness) running within a management virtual application (vApp) deployed by the customer.

For additional information on vWitness, see the Dell EMC SRDF/Metro vWitness Configuration Guide.

The following vWitness tasks can be performed from Unisphere.

Viewing Virtual Witness instances
Adding a Virtual Witness
Viewing Virtual Witness instance details
Enabling a Virtual Witness
Disabling a Virtual Witness
Removing a Virtual Witness

Understanding SRDF Delta Set Extension (DSE) pools SRDF Delta Set Extension (DSE) pools provide a mechanism for augmenting the cache-based delta set buffering mechanism of SRDF/Asynchronous (SRDF/A) with a disk-based buffering ability.

This feature is useful when links are lost and the R1 system approaches the cache limitation. Data is moved out of cache into preconfigured storage pools set up to handle the excess SRDF/A data. When links recover, the data is moved back to cache and pushed over to the R2 system. DSE enables asynchronous replication operations to remain active when system cache resources are in danger of reaching system Write Pending (WP) or SRDF/A maximum cache limit.
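The spill-and-drain behavior described above can be modeled with a toy sketch. The class and attribute names are illustrative; real DSE operates on cache slots and delta sets, not simple track counters:

```python
class SrdfaCache:
    """Toy model of SRDF/A cache buffering with a DSE spillover pool."""

    def __init__(self, cache_limit: int):
        self.cache_limit = cache_limit  # max tracks buffered in cache
        self.cache = 0                  # tracks currently buffered in cache
        self.dse_pool = 0               # tracks spilled to the DSE pool

    def buffer_writes(self, tracks: int) -> None:
        """Buffer host writes; excess beyond the cache limit spills to DSE."""
        self.cache += tracks
        if self.cache > self.cache_limit:
            self.dse_pool += self.cache - self.cache_limit
            self.cache = self.cache_limit

    def link_recovered(self) -> int:
        """On link recovery, spilled data moves back through cache and is
        pushed to the R2 system; return the number of tracks sent."""
        sent = self.cache + self.dse_pool
        self.cache = self.dse_pool = 0
        return sent

cache = SrdfaCache(cache_limit=100)
cache.buffer_writes(120)            # link down: 20 tracks spill to the DSE pool
print(cache.cache, cache.dse_pool)  # 100 20
print(cache.link_recovered())       # 120
```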

Understanding TimeFinder/Snap operations TimeFinder/Snap operations enable you to create and manage copy sessions between a source volume and multiple virtual target volumes.

When you activate a virtual copy session, a point-in-time copy of the source volume is immediately available to its host through the corresponding virtual volume. Virtual volumes consume minimal physical disk storage because they contain only the address pointers to the data that is stored on the source volume or in a pool of SAVE volumes. SAVE volumes are storage volumes that are not host-accessible and can only be accessed through the virtual volumes that point to them. SAVE volumes provide pooled physical storage for virtual volumes.

Snapping data to a virtual volume uses a copy-on-first-write technique. Upon a first write to the source volume during the copy session, Enginuity copies the preupdated image of the changed track to a SAVE volume and updates the track pointer on the virtual volume to point to the data on the SAVE volume.

The attached host views the point-in-time copy through virtual volume pointers to both the source volume and SAVE volume, for as long as the session remains active. If you terminate the copy session, the copy is lost, and the space that is associated with the session is freed and returned to the SAVE volume pool for future use.
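The copy-on-first-write mechanism described above can be sketched as a toy model, with dictionaries standing in for tracks (all names are illustrative):

```python
class CofwSnapshot:
    """Toy copy-on-first-write snapshot: the virtual volume is just pointers."""

    def __init__(self, source: dict):
        self.source = source   # track number -> data on the source volume
        self.save_pool = {}    # pre-update images preserved on first write

    def write_source(self, track: int, data: str) -> None:
        """Host write to the source volume during an active session."""
        if track not in self.save_pool:
            # First write to this track: copy the pre-update image to a
            # SAVE volume and repoint the virtual volume at it.
            self.save_pool[track] = self.source[track]
        self.source[track] = data

    def read_snapshot(self, track: int) -> str:
        """Virtual volume read: follows the pointer to the SAVE pool if the
        track changed after activation, otherwise to the source volume."""
        return self.save_pool.get(track, self.source[track])

source = {0: "old-A", 1: "old-B"}
snap = CofwSnapshot(source)
snap.write_source(0, "new-A")   # triggers copy-on-first-write for track 0
print(snap.read_snapshot(0))    # old-A  (point-in-time image preserved)
print(snap.read_snapshot(1))    # old-B  (still read from the source)
print(source[0])                # new-A  (the source sees the new data)
```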

NOTE: TimeFinder operations are not supported directly on storage systems running HYPERMAX OS 5977 or PowerMaxOS 5978. Instead, they are mapped to their TimeFinder/SnapVX equivalents.

The following are the basic actions that are performed in a TimeFinder/Snap operation:

Create - Creates the relationship between the source volume and the virtual target volume.
Activate - Makes the virtual target volume available for read/write access and starts the copy-on-first-write mechanism.
Recreate - Creates a point-in-time copy.
Restore - Copies tracks from the virtual volume to the source volume or another volume.
Terminate - Causes the target host to lose access to data pointed to by the virtual volume.

For more information about TimeFinder concepts, see the Solutions Enabler TimeFinder Family CLI Product Guide and the TimeFinder Family Product Guide.

Understanding Open Replicator Open Replicator is a non-disruptive migration and data mobility application.

When the Open Replicator control volumes are on a storage system running HYPERMAX OS 5977 or higher, the following session options cannot be used:

Push
Differential
Precopy

There are many rules and limitations for running Open Replicator sessions. Refer to the Solutions Enabler Migration CLI Product Guide before creating a session. For a quick reference, see Open Replicator session options.


Open Replicator session options Open Replicator is a non-disruptive migration and data mobility application.

Depending on the operation you are performing, some of the following options may not apply.

Each session option below is listed with the UI operation it is used with, followed by its description.

Consistent (Activate) - Causes the volume pairs to be consistently activated.

Consistent (Donor Update Off) - Consistently stops the donor update portion of a session and maintains the consistency of data on the remote volumes.

Copy (Create) - Volume copy takes place in the background. This is the default for both pull and push sessions.

Cold (Create) - The control volume is write disabled to the host while the copy operation is in progress. A cold copy session can be created as long as one or more directors discovers the remote device.

Differential (Create) - Creates a one-time full volume copy. Only sessions created with the differential option can be recreated. For push operations, this option is selected by default. For pull operations, this option is cleared by default (no differential session).

Donor Update (Create) - Causes data written to the control volume during a hot pull to also be written to the remote volume.

Incremental (Restore) - Maintains a remote copy of any newly written data while the Open Replicator session is restoring.

Force (Terminate, Restore, Donor Update Off) - Select the Force option if the copy session is in progress. This allows the session to continue to copy in its current mode without donor update.

Force Copy (Activate) - Overrides any volume restrictions and allows a data copy. For a push operation, the remote capacity must be equal to or larger than the control volume extents, and vice versa for a pull operation. The exception is when you have pushed data to a remote volume that is larger than the control volume and you want to pull the data back; in that case, you can use the Force Copy option.

Front-End Zero Detection (Create) - Enables front-end zero detection for thin control volumes in the session. Front-end zero detection looks for incoming zero patterns from the remote volume and, instead of writing the incoming data of all zeros to the thin control volume, deallocates the corresponding group on the thin volume.

Hot (Create) - Hot copying allows the control device to be read/write online to the host while the copy operation is in progress. All directors that have the local devices mapped are required to participate in the session. A hot copy session cannot be created unless all directors can discover the remote device.

Nocopy (Activate) - Temporarily stops the background copying for a session by changing the state from CopyInProg to CopyOnAccess or CopyOnWrite.

Pull (Create) - A pull operation copies data to the control device from the remote device.

Push (Create) - A push operation copies data from the control volume to the remote volume.

Precopy (Create, Recreate) - For hot push sessions only, begins immediately copying data in the background before the session is activated.

SymForce (Terminate) - Forces an operation on the volume pair, including pairs that would be rejected. Use caution when selecting this option because improper use may result in data loss.

Understanding device groups A device group is a user-defined group that consists of devices that belong to a locally attached array. Control operations can be performed on the group as a whole, or on the individual device pairs in the group. By default, a device can belong to more than one device group.

The user can create a legacy TF emulation from source devices with a SnapVX snapshot. The prerequisites are:

A SnapVX storage group with a snapshot must exist.
A device group must already have been created from this storage group.
The device group must also have enough candidate target devices to create the required TF emulation session.

Understanding SRDF groups SRDF groups provide a collective data transfer path linking volumes of two separate storage systems. These communication and transfer paths are used to synchronize data between the R1 and R2 volume pairs that are associated with the SRDF group. At least one physical connection must exist between the two storage systems within the fabric topology.

See Dell EMC SRDF Introduction for an overview of SRDF.

The maximum number of supported SRDF groups differs by operating system version:

OS            Per storage system   Per director   Per port   Group numbers
5977 or 5978  250                  250            250        1-250
5876          250                  64             64         1-250

When specifying a local or remote director for a storage system running HYPERMAX OS 5977 or PowerMaxOS 5978, you can select one or more SRDF ports.

If the SRDF interaction includes a storage system running HYPERMAX OS 5977, then the other storage system must be running Enginuity OS 5876. Also, in this interaction the maximum storage system volume number that is allowed on the system running HYPERMAX OS 5977 is FFFF (65535).


SRDF session modes SRDF transparently remotely mirrors production or primary (source) site data to a secondary (target) site to users, applications, databases, and host processors.

Adaptive Copy - Allows the source (R1) volume and target (R2) volume to be out of synchronization by a number of I/Os that is defined by a skew value.

Adaptive Copy Disk Mode - Data is read from disk, and the unit of transfer across the SRDF link is the entire track. While less global memory is consumed, it is typically slower to read data from disk than from global memory, so device resynchronization time increases. More bandwidth is also used because the unit of transfer is the entire track.

Adaptive Copy WP Mode - The unit of transfer across the SRDF link is the updated blocks rather than an entire track, resulting in more efficient use of SRDF link bandwidth. Data is read from global memory rather than from disk, improving overall system performance. However, global memory is temporarily consumed by the data until it is transferred across the link. This mode requires that the device group containing the SRDF pairs with R1 mirrors be on a storage system running Enginuity OS 5876.

Synchronous - Provides the host access to the source (R1) volume on a write operation only after the storage system containing the target (R2) volume acknowledges that it has received and checked the data.

Asynchronous - The storage system acknowledges all writes to the source (R1) volumes as if they were local devices. Host writes accumulate on the source (R1) side until the cycle time is reached and are then transferred to the target (R2) volume in one delta set. Write operations to the target device can be confirmed when the current SRDF/A cycle commits the data to disk by successfully de-staging it to the R2 storage volumes. For storage systems running Enginuity OS 5876, you can put an SRDF relationship into Asynchronous mode when the R2 device is a snap source volume.

AC Skew - Adaptive Copy Skew; sets the number of tracks per volume that the source volume can be ahead of the target volume. Values are 0-65535.

SRDF session options SRDF transparently remotely mirrors production or primary (source) site data to a secondary (target) site to users, applications, databases, and host processors.

Session option Description Available with action

Bypass - Bypasses the exclusive locks for the local or remote storage system during SRDF operations. Use this option only if you are sure that no other SRDF operation is in progress on the local or remote storage systems. Available with: Establish, Failback, Failover, Restore, Incremental Restore, Split, Suspend, Swap, Write Disable R1, Ready R1, Ready R2, RWDisableR2, Enable, Disable.

Consistent - Allows only consistent transition from async to sync mode. Available with: Activate.

Consistency Exempt - Allows you to add or remove volumes from an SRDF group that is in Async mode without requiring other volumes in the group to be suspended. Available with: Half Move, Move, Suspend.

Establish - Fails over the volume pairs, performs a dynamic swap, and incrementally establishes the pairs. This option is not supported when volumes operating in Asynchronous mode are read/write on the SRDF link. To perform a failover operation on such volumes, specify the Restore option described elsewhere in this table. Available with: Failover.

Force - Overrides any restrictions and forces the operation, even though one or more paired volumes may not be in the expected state. Use caution when selecting this option because improper use may result in data loss. Available with: Establish, Incremental Establish, Restore, Incremental Restore, Write Disable R1, Ready R1, Ready R2, RWDisableR2, Enable, Disable, Swap.

Immediate - Causes the suspend, split, and failover actions on asynchronous volumes to happen immediately. Available with: Suspend, Split, Failover.

NoWD - No write disable; bypasses the check to ensure that the target of the operation is write disabled to the host. This applies to the source (R1) volumes when used with the Invalidate R1 option and to the target (R2) volumes when used with the Invalidate R2 option.

SymForce - Forces an operation on the volume pair, including pairs that would be rejected. Use caution when selecting this option because improper use may result in data loss. Available with: Restore, Incremental Restore, Write Disable R1, Ready R1, Ready R2, RWDisableR2, Enable, Disable, Swap.

RecoverPoint Tag - Specifies that the operation is performed on RecoverPoint volumes. Available with: Restore, Failback, Refresh, R1 Update.

Refresh R1 - Marks any changed tracks on the source (R1) volume to be refreshed from the target (R2) side. Available with: Swap.

Refresh R2 - Marks any changed tracks on the target (R2) volume to be refreshed from the source (R1) side. Available with: Swap.

Remote - When performing a restore or failback action with the concurrent link up, data copied from the R2 to the R1 is copied to the concurrent R2. These actions require this option. Available with: Restore, Incremental Restore, Failback.

Restore - When the failover swap completes, invalid tracks on the new R2 side (formerly the R1 side) are restored to the new R1 side (formerly the R2 side). When used together with the Immediate option, the failover operation immediately deactivates the SRDF/A session without waiting two cycle switches for the session to terminate. Available with: Failover.

Star - Selecting this option indicates that the volume pair is part of an SRDF/Star configuration. SRDF/Star environments are three-site disaster recovery solutions that use one of the following:

Concurrent SRDF sites with SRDF/Star

Cascaded SRDF sites with SRDF/Star

This technology replicates data from a primary production (workload) site to both a nearby remote site and a distant remote site. Data is transferred in SRDF/Synchronous (SRDF/S) mode to the nearby remote site (the synchronous target site) and in SRDF/Asynchronous (SRDF/A) mode to the distant remote site (the asynchronous target site). Available with: Establish, Failback, Failover, Restore, Incremental Restore, Split, Suspend, Write Disable R1, Ready R1, Ready R2, RWDisableR2, Enable, Disable.

SRDF/Star is supported on Enginuity OS 5876. The Solutions Enabler SRDF Family CLI Product Guide contains more information about SRDF/Star.

SRDF/A control actions SRDF transparently remotely mirrors production or primary (source) site data to a secondary (target) site to users, applications, databases, and host processors.

Activate DSE - Activates the SRDF/A Delta Set Extension feature. This extends the available cache space by using device SAVE pools.

Activate Write Pacing - Extends the availability of SRDF/A by preventing conditions that result in cache overflow on both the R1 and R2 sides. The write pacing types are:

Group Write Pacing - Activates SRDF/A write pacing at the group level. Group-level write pacing is supported on storage systems running Enginuity OS 5876 and higher.

Group and Volume Write Pacing - Activates SRDF/A write pacing at the group level and the volume level.

Volume Write Pacing - Activates SRDF/A write pacing at the volume level. Volume write pacing is supported on storage systems running Enginuity OS 5876 and higher.

Activate Write Pacing Exempt - Activates write pacing exempt. Write pacing exempt allows you to remove a volume from write pacing.

SRDF group modes SRDF groups provide a collective data transfer path linking volumes of two separate storage systems.

The following values can be set for SRDF groups:

Synchronous - Provides the host access to the source (R1) volume on a write operation only after the storage system containing the target (R2) volume acknowledges that it has received and checked the data.

Asynchronous - The storage system acknowledges all writes to the source (R1) volumes as if they were local volumes. Host writes accumulate on the source (R1) side until the cycle time is reached and are then transferred to the target (R2) volume in one delta set. Write operations to the target volume can be confirmed when the current SRDF/A cycle commits the data to disk by successfully de-staging it to the R2 storage volumes. For storage systems running Enginuity OS 5876, you can put an SRDF relationship into Asynchronous mode when the R2 volume is a snap source volume.

Semi Synchronous - The storage system containing the source (R1) volume informs the host of successful completion of the write operation when it receives the data. The SRDF (RA) director transfers each write to the target (R2) volume as the SRDF links become available. The storage system containing the target (R2) volume checks and acknowledges receipt of each write.

AC WP Mode On - (Adaptive copy write pending) The storage system acknowledges all writes to the source (R1) volume as if it were a local volume. The new data accumulates in cache until it is successfully written to the source (R1) volume and the remote director has transferred the write to the target (R2) volume.

AC Disk Mode On - For situations requiring the transfer of large amounts of data without loss of performance; use this mode to temporarily transfer the bulk of your data to target (R2) volumes, then switch to synchronous or semi synchronous mode.

Domino Mode On - Ensures that the data on the source (R1) and target (R2) volumes is always synchronized. The storage system forces the source (R1) volume to a Not Ready state to the host whenever it detects that one side of a remotely mirrored pair is unavailable.

Domino Mode Off - The remotely mirrored volume continues processing I/Os with its host, even when an SRDF volume or link failure occurs.

AC Mode Off - Turns off the AC disk mode.

AC Change Skew - Modifies the adaptive copy skew threshold. When the skew threshold is exceeded, the remotely mirrored pair operates in the predetermined SRDF state (synchronous or semi-synchronous). When the number of invalid tracks drops below this value, the remotely mirrored pair reverts to the adaptive copy mode.

(R2 NR If Invalid) On - Sets the R2 device to Not Ready when there are invalid tracks.

(R2 NR If Invalid) Off - Turns off the (R2 NR If Invalid) On mode.

SRDF group SRDF/A flags SRDF groups provide a collective data transfer path linking volumes of two separate storage systems.

Flag Status

(C) Consistency X = Enabled, . = Disabled, - = N/A

(S) Status A = Active, I = Inactive, - = N/A

(R) RDFA Mode S = Single-session, M = MSC, - = N/A

(M) Msc Cleanup C = MSC Cleanup required, - = N/A

(T) Transmit Idle X = Enabled, . = Disabled, - = N/A

(D) DSE Status A = Active, I = Inactive, - = N/A

DSE (A) Autostart X = Enabled, . = Disabled, - = N/A
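The flag columns above can be decoded mechanically, as in the following sketch. The positional seven-character string format is an assumption about how the columns appear together in a status display:

```python
# Legend for the seven SRDF/A flag columns, in display order C S R M T D A.
FLAG_LEGEND = {
    "C": {"X": "Consistency enabled", ".": "Consistency disabled", "-": "N/A"},
    "S": {"A": "Active", "I": "Inactive", "-": "N/A"},
    "R": {"S": "Single-session", "M": "MSC", "-": "N/A"},
    "M": {"C": "MSC cleanup required", "-": "N/A"},
    "T": {"X": "Transmit Idle enabled", ".": "Transmit Idle disabled", "-": "N/A"},
    "D": {"A": "DSE active", "I": "DSE inactive", "-": "N/A"},
    "A": {"X": "DSE Autostart enabled", ".": "DSE Autostart disabled", "-": "N/A"},
}

def decode_flags(flags: str) -> dict:
    """Decode a seven-character flag string, one character per column."""
    return {col: FLAG_LEGEND[col][ch] for col, ch in zip("CSRMTDA", flags)}

# Consistency enabled, session active, single-session mode, no MSC cleanup,
# Transmit Idle enabled, DSE active, DSE Autostart enabled:
print(decode_flags("XAS-XAX"))
```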

Understanding TimeFinder/Clone operations Clone copy sessions enable you to create clone copies of a source volume on multiple target volumes.

The source and target volumes can be either standard volumes or BCVs, if they are the same size and emulation type (FBA/CKD). Once you have activated the session, the target host can instantly access the copy, even before the data is fully copied to the target volume.

NOTE: TimeFinder operations are not supported directly on storage systems running HYPERMAX OS 5977 or PowerMaxOS 5978. Instead, they are mapped to their TimeFinder/SnapVX equivalents.

An overview of a typical clone session is:

1. Create a device group, or add volumes to an existing device group.
2. Create the session; restore the session.
3. Activate the session.
4. View the progress of the session.
5. Terminate the session.


For more information about TimeFinder/Clone concepts, see the Solutions Enabler TimeFinder Family CLI Product Guide and the TimeFinder Family Product Guide.

Understanding TimeFinder/Mirror sessions TimeFinder/Mirror is a business continuity solution that enables the use of special business continuance volume (BCV) devices. Copies of data from a standard device (which are online for regular I/O operations from the host) are sent and stored on BCV devices to mirror the primary data. Uses for the BCV copies can include backup, restore, decision support, and applications testing. Each BCV device has its own host address, and is configured as a stand-alone device.

TimeFinder/Mirror requires Enginuity OS 5876. On storage systems running HYPERMAX OS 5977 or higher, TimeFinder/Mirror operations are mapped to their TimeFinder/SnapVX equivalents.

TimeFinder operations are not supported on Open Replicator control volumes on storage systems running HYPERMAX OS 5977 or higher.

The TimeFinder/Mirror dashboard provides a single place to monitor and manage TimeFinder/Mirror sessions on a storage system.

Understanding TimeFinder SnapVX TimeFinder SnapVX is a local replication solution that is designed to nondisruptively create point-in-time copies (snapshots) of critical data.

TimeFinder SnapVX creates snapshots by storing changed tracks (deltas) directly in the Storage Resource Pool of the source volume. With TimeFinder SnapVX, you do not need to specify a target volume and source/target pairs when you create a snapshot. If the application ever needs to use the point-in-time data, you can create links from the snapshot to one or more target volumes. If there are multiple snapshots and the application needs to find a particular point-in-time copy for host access, you can link and relink until the correct snapshot is located.

Understanding RecoverPoint RecoverPoint provides block-level continuous data protection and continuous remote replication for on-demand protection and recovery at any point-in-time, and enables you to implement a single, unified solution to protect and/or replicate data across heterogeneous servers and storage.

RecoverPoint operations in Unisphere require Enginuity OS 5876 on the storage system.

Understanding Performance Management

The Unisphere Performance Management application enables you to gather, view, and analyze performance data to troubleshoot and optimize storage systems.

Performance Management covers the following areas:

- Dashboards - Display predefined dashboards, user-defined custom dashboards, and templates.
- Charts - Create custom charts across multiple categories, metrics, and time intervals.
- Analyze - Provide in-depth drill-down on storage system data for various collection ranges.
- Heatmap - Display hardware instances represented as colored squares, with the color indicating utilization levels.
- Reports - Create, manage, and run performance reports.
- Real-Time Traces - Create, manage, and run real-time performance traces.
- Databases - Manage performance database tasks (for example, back up, restore, and delete) as well as individual performance database information.
- Plan - Provide performance projection capacity dashboards displaying predicted future data that is based on linear projection.
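The Plan view's linear projection can be sketched as an ordinary least-squares fit extrapolated forward. This is an assumed model for illustration, not Unisphere's actual algorithm; the function name and weekly sampling are hypothetical.

```python
# Illustrative sketch of a linear capacity projection (assumed model,
# not Unisphere's implementation): fit a straight line to historical
# used-capacity samples and extrapolate future values.
def linear_projection(samples, steps_ahead):
    """samples: used capacity per interval; returns projected values."""
    n = len(samples)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(samples) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, samples))
             / sum((x - x_mean) ** 2 for x in xs))
    intercept = y_mean - slope * x_mean
    # Project the next steps_ahead intervals beyond the samples.
    return [intercept + slope * (n + k) for k in range(steps_ahead)]

# Capacity grew about 10 TB per week; project the next three weeks.
history = [100, 110, 120, 130]           # TB used, weekly samples
print(linear_projection(history, 3))     # -> [140.0, 150.0, 160.0]
```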


Database Storage Analyzer (DSA) Management

Database Storage Analyzer (DSA) is a feature that provides a database-to-storage performance troubleshooting solution for Oracle and MS SQL Server databases running on storage systems. No extra license is required to use DSA.

DSA is accessible to any Unisphere user. A DSA-only user can also be created; this user has read-only Unisphere access and can view only the Databases and Performance sections.

The main database list view presents I/O metrics, such as response time, I/O operations per second (IOPS), and throughput, from both the database and the storage system, which helps to immediately identify any gap between database I/O performance and storage I/O performance.
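The gap analysis described above amounts to comparing the latency the database observes with the latency the array actually delivers. The following is a hedged illustration of that idea, not DSA's actual logic; the function name, threshold, and categories are assumptions.

```python
# Hedged illustration of the database-vs-storage latency gap check
# (assumed logic, not DSA's implementation): compare database-reported
# and array-reported response times to estimate where latency accumulates.
def classify_io_gap(db_rt_ms, array_rt_ms, tolerance_ms=1.0):
    """Return a rough indication of where I/O latency is accumulating."""
    gap = db_rt_ms - array_rt_ms
    if gap > tolerance_ms:
        # Database sees much higher latency than the array delivers:
        # investigate the host, HBA queues, or the database layer itself.
        return "host/database layer"
    if array_rt_ms > tolerance_ms:
        # Database and array latency agree, and both are high.
        return "storage layer"
    return "no significant latency"

print(classify_io_gap(db_rt_ms=12.0, array_rt_ms=1.5))  # -> host/database layer
print(classify_io_gap(db_rt_ms=6.0, array_rt_ms=5.8))   # -> storage layer
```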

DSA offers the following benefits:

- Provides a unified view across database and storage.
- Quickly identifies when a database is suffering from high I/O response times.
- Reduces troubleshooting time for database and storage performance issues: DBAs and SAs can look at a unified database and storage I/O metrics view and quickly identify performance gaps or issues on both layers.
- Identifies database bottlenecks that are not related to the storage.
- Maps DB objects to storage devices, allowing better coordination between the SA and DBA.
- Reduces repetitive manual drill-downs for troubleshooting.

DSA supports the mapping of database files located on VMware virtual disks to their storage system volumes. With full database mapping, DSA can actively monitor 15-30 databases per Unisphere installation, depending on database size. Registering a database or instance with the no-extents-mapping option allows you to monitor hundreds of databases.

RAC and ASM are supported for Oracle. For CDB, the DSA guest user name must start with c##. An Oracle diagnostic pack license is required for monitoring Oracle databases.

In addition, DSA supports FAST hinting capabilities for Oracle and MS SQL databases on storage systems running HYPERMAX OS 5977 or PowerMaxOS 5978, allowing users to accelerate mission-critical database processes and achieve improved response time. The user provides the timeframe, the database objects that should be hinted, and the business priority. DSA then sends hints to the array in advance so that the FAST internal engine promotes those Logical Block Addresses (LBAs) to the right tier at the right time.

NOTE: FAST hinting is only supported on hybrid arrays running HYPERMAX OS 5977 or PowerMaxOS 5978.

Understanding Unisphere support for VMware

Unisphere supports the discovery of vCenters or ESXi servers (using a read-only user) and integrates the information into Unisphere. VMware information is connected to its storage extents, which enables seamless investigation of any storage-related issues.

Unisphere support for VMware gives the storage administrator access to all the storage-related objects relevant to an ESXi server and also helps troubleshoot storage performance issues related to the ESXi server.

As a read-only user, you can discover at the vCenter level or discover an individual ESXi server. If a vCenter is discovered, all ESXi servers under that vCenter are discovered. ESXi servers that do not have storage local to the Unisphere instance performing the discovery are filtered out.

Once the user adds VMware information, all other users of Unisphere can access this information.

The minimum supported vCenter version is 5.5.

Understanding eNAS

Embedded NAS (eNAS) integrates the file-based storage capabilities of VNX arrays into storage systems running HYPERMAX OS 5977 or PowerMaxOS 5978.

With this integrated storage solution, the Unisphere StorageAdmin provisions storage to eNAS data movers, which triggers the creation of storage pools in VNX. Users of Unisphere for VNX then use the storage pools for file-level provisioning, for example, creating file systems and file shares.

Unisphere provides the following features to support eNAS:


- File System dashboard - Provides a central location from which to monitor and manage integrated VNX file services.
- Provision Storage for File wizard - Allows you to provision storage to eNAS data movers.
- Launch Unisphere for VNX - Allows you to link to and launch Unisphere for VNX.

Understanding iSCSI

Unisphere provides monitoring and management for Internet Small Computer Systems Interface (iSCSI) directors, iSCSI ports, iSCSI targets, IP interfaces, and IP routes on storage systems running HYPERMAX OS 5977 or PowerMaxOS 5978.

iSCSI is a protocol that uses TCP to transport SCSI commands, enabling the use of the existing TCP/IP networking infrastructure as a SAN. As with SCSI over Fibre Channel (FC), iSCSI presents SCSI targets and devices to iSCSI initiators (requesters). Unlike NAS, which presents devices at the file level, iSCSI makes block devices available from the network. Block devices are presented across an IP network to your local system, and can be consumed in the same way as any other block storage device.

The iSCSI changes address market needs originating from the cloud and service provider space, where a slice of infrastructure (for example, compute, network, and storage) is assigned to different users (tenants). The iSCSI changes provide control and isolation of resources in this environment. More traditional enterprise IT environments also benefit from this functionality, which provides greater scalability and security.

Understanding Cloud Mobility for Dell EMC PowerMax

Unisphere uses PowerMax Cloud Mobility functionality to enable you to move snapshots off the storage system and on to the cloud. The snapshots can also be restored back to the original storage system.

Cloud Mobility is available only on PowerMax systems running PowerMaxOS Q3 2020, and on Embedded Management instances of Unisphere for PowerMax.

The Unisphere UI supports the following operations:

- Set up, resolve, and remove a cloud system
- Configure and view cloud-related network configuration (interfaces, teams, routes, and DNS servers)
- Configure and view cloud providers
- View active cloud jobs
- Configure and view scheduled snapshots of a storage group using the snapshot policy functionality
- View and manage snapshots of a storage group that are archived or are in the process of being archived to the cloud
- Create a new snapshot for a storage group and archive it to a selected cloud provider
- View cloud snapshots for a selected storage group and recover a snapshot
- View and delete array cloud snapshots
- Back up the cloud configuration
- Manage the cloud system certificates
- Set bandwidth limits
- View cloud alerts
- View cloud statistics

Understanding dynamic cache partitioning

Dynamic Cache Partitioning (DCP) divides the cache memory into multiple partitions, each with a unique name and its own device path assignments. Partition areas can be made static or dynamic in size.

Dynamic partitioning provides flexibility in the amount of floating memory that can be allocated, controlled by high and low watermarks. This flexibility enables memory resources to be temporarily donated to other partitions when needed. The symqos command enables you to create partitions for different device groupings beyond the default partition that all devices initially belong to. Each partition has a target cache percentage and a minimum and maximum percentage. You can also donate unused cache to other partitions after a specified donation time.
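The donation behavior described above can be sketched conceptually: idle partitions give up cache above their minimum watermark toward another partition's need, capped by that partition's maximum. This is an assumed model for illustration only, not the Enginuity implementation; the function and partition names are hypothetical.

```python
# Conceptual sketch of dynamic cache partition donation (assumed
# behavior, not the Enginuity implementation): partitions donate cache
# above their minimum watermark, and the requester never exceeds its max.
def donate_cache(partitions, requester, amount_pct):
    """partitions: name -> {'current': pct, 'min': pct, 'max': pct}."""
    req = partitions[requester]
    needed = min(amount_pct, req["max"] - req["current"])
    for name, p in partitions.items():
        if name == requester or needed <= 0:
            continue
        spare = p["current"] - p["min"]   # cache above the low watermark
        take = min(spare, needed)
        p["current"] -= take
        req["current"] += take
        needed -= take
    return partitions

parts = {
    "default": {"current": 60, "min": 20, "max": 100},
    "oltp":    {"current": 40, "min": 10, "max": 70},
}
donate_cache(parts, "oltp", 30)
print(parts["oltp"]["current"])     # -> 70
print(parts["default"]["current"])  # -> 30
```

Note how the donor never drops below its minimum percentage and the requester never exceeds its maximum, mirroring the watermark constraints the text describes.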


NOTE: Enginuity OS 5876 is required for managing dynamic cache partitions. DCPs can be viewed on storage systems running HYPERMAX OS 5977 Q316SR or higher, but they cannot be managed.


Notes, cautions, and warnings

NOTE: A NOTE indicates important information that helps you make better use of your product.
