
EMC ProtectPoint

Version 1.0

Implementation Guide 302-001-384

REV 02

Copyright 2014-2015 EMC Corporation. All rights reserved. Published in USA.

Published March, 2015

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

The information in this publication is provided as is. EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

EMC², EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners.

For the most up-to-date regulatory document for your product line, go to EMC Online Support (https://support.emc.com).

EMC Corporation
Hopkinton, Massachusetts 01748-9103
1-508-435-1000    In North America 1-866-464-7381
www.EMC.com


Contents

Preface

Revision History

Chapter 1: EMC ProtectPoint Overview

    ProtectPoint solution overview
        Basic backup workflow
        Basic Restore workflow
    ProtectPoint environment
        Additional Information
    ProtectPoint Controller overview
        Configuration file
    ProtectPoint prerequisites
        ProtectPoint Controller Prerequisites
        Data Domain vdisk prerequisites
    Host considerations

Chapter 2: Application Storage Configuration

    Application storage configuration overview
    Data Domain sizing considerations
    Discovering application storage
        Discovering device geometry
    Provisioning LUNs on the VMAX3 to the AR host
    Encapsulating Data Domain devices on the VMAX3 array
        Delete an encapsulated disk

Chapter 3: Setting Up the ProtectPoint Controller

    ProtectPoint Controller setup overview
    Setting up the ProtectPoint Controller
    Validating the configuration file

Chapter 4: ProtectPoint Administration

    ProtectPoint administration overview
        Application changes
    ProtectPoint file systems
        Specifying the ProtectPoint configuration file
        Performing a backup
        Restoring a backup
        Replicating a backup
        Deleting a backup
        Rebuilding the Catalog

Chapter 5: ProtectPoint CLI Options

    ProtectPoint CLI options overview
    Specifying the ProtectPoint configuration file
    Managing the credentials
    Managing the catalog
    Managing a backup
    Managing replications
    Showing the ProtectPoint Controller version

Chapter 6: Troubleshooting

    ProtectPoint log file
    Check connectivity in the ProtectPoint environment
    ProtectPoint troubleshooting scenarios
        Failure of a host at the primary site
        Failure of host with a new host on the secondary site
        Primary site failure (both primary and protection storage)
        Secondary site failure (both primary and protection storage)
        Failure of primary storage at the production site
        Failure of primary storage at the secondary site
        Failure of protection storage at the production site
        Failure of protection storage at the secondary site

Preface

As part of an effort to improve its product lines, EMC periodically releases revisions of its software and hardware. Therefore, some functions described in this document might not be supported by all versions of the software or hardware currently in use. The product release notes provide the most up-to-date information on product features.

Contact your EMC technical support professional if a product does not function properly or does not function as described in this document.

Note

This document was accurate at publication time. Go to EMC Online Support (https://support.emc.com) to ensure that you are using the latest version of this document.

Purpose

This guide explains how to set up and configure the functionality available in the EMC ProtectPoint solution. Use this implementation guide in conjunction with the solution overview information documented in the EMC ProtectPoint Solutions Guide, and the EMC commands documented in the EMC ProtectPoint Command Reference Guide.

Note

A command line interface (CLI) command may offer more options than those described in this document. The EMC Data Domain Operating System Command Reference Guide and the EMC Solutions Enabler CLI Command Reference provide complete descriptions of the supported commands and options.

Audience

This guide is intended for system administrator-level or equivalent users who are familiar with standard backup software packages and general backup administration.

Related documentation

The following EMC Data Domain system documents provide additional information:

l EMC Data Domain Installation and Setup Guide for the particular Data Domain system

l EMC Data Domain Operating System Release Notes

l EMC Data Domain Operating System Initial Configuration Guide

l EMC Data Domain Operating System Command Quick Reference

l EMC Data Domain Operating System Command Reference Guide

l EMC Data Domain Operating System Administration Guide

l EMC Data Domain Operating System MIB Quick Reference

l EMC Data Domain Operating System Offline Diagnostics Suite User's Guide

l Hardware overview guide for the system

l Field replacement guides for the system components

l EMC Data Domain System Controller Upgrade Guide

l EMC Data Domain Expansion Shelf Hardware Guide for shelf model ES20 or ES30

l EMC Data Domain Boost for OpenStorage Administration Guide


l EMC Data Domain Boost for OpenStorage Release Notes

l EMC Data Domain Boost for Oracle Recovery Manager Administration Guide

l EMC Data Domain Boost for Oracle Recovery Manager Release Notes

l EMC Data Domain Boost SDK Programmer's Guide

l Statement of Volatility for the system

If you have the optional RSA Data Protection Manager (DPM) Key Manager, see the latest version of the RSA Data Protection Manager Server Administrator's Guide, available with the RSA Key Manager product.

The following VMAX system documents provide additional information:

l EMC Solutions Enabler TimeFinder Family CLI User Guide

l EMC Solutions Enabler V8.0.1 Array Management CLI User Guide

Special notice conventions used in this document

EMC uses the following conventions for special notices:

NOTICE

Addresses practices not related to personal injury.

Note

Presents information that is important, but not hazard-related.

Typographical conventions

EMC uses the following type style conventions in this document:

Table 1 Typographical Conventions

Bold Indicates interface element names, such as names of windows, dialog boxes, buttons, fields, tab names, key names, and menu paths (what the user specifically selects or clicks)

Italic Highlights publication titles listed in text

Monospace Indicates system information, such as:

l System code

l System output, such as an error message or script

l Pathnames, filenames, prompts, and syntax

l Commands and options

Monospace italic Highlights a variable name that must be replaced with a variable value

Monospace bold Indicates text for user input

[ ] Square brackets enclose optional values

| Vertical bar indicates alternate selections; the bar means "or"

{ } Braces enclose content that the user must specify, such as x or y or z

... Ellipses indicate nonessential information omitted from the example

Where to get help

EMC support, product, and licensing information can be obtained as follows:


Product information

For documentation, release notes, software updates, or information about EMC products, go to EMC Online Support at https://support.emc.com.

Technical support

Go to EMC Online Support and click Service Center. You will see several options for contacting EMC Technical Support. Note that to open a service request, you must have a valid support agreement. Contact your EMC sales representative for details about obtaining a valid support agreement or with questions about your account.

Your comments

Your suggestions will help us continue to improve the accuracy, organization, and overall quality of the user publications. Send your opinions of this document to DPAD.Doc.Feedback@emc.com.


Revision History

Table 2 Document revision history

Revision Date Description

01 (1.0.0) December 2014 This is the initial release of this document.

02 (1.0.0) March 2015 This version of ProtectPoint provides the ability to delete encapsulated storage devices on the VMAX3 storage system.


CHAPTER 1

EMC ProtectPoint Overview

This chapter includes the following topics:

l ProtectPoint solution overview
l ProtectPoint environment
l ProtectPoint Controller overview
l ProtectPoint prerequisites
l Host considerations


ProtectPoint solution overview

The EMC ProtectPoint solution integrates primary storage on an EMC VMAX3 array and protection storage for backups on an EMC Data Domain system. ProtectPoint provides block movement of the data on application source LUNs to encapsulated Data Domain LUNs for incremental backups.

The ProtectPoint solution requires both IP network (LAN or WAN) and Fibre Channel (FC) storage area network (SAN) connectivity. Table 3 on page 12 lists the required topologies for connecting each component of the solution.

Table 3 ProtectPoint topology requirements

Connected components Connection type

Primary application host to primary VMAX3 array FC SAN

Primary application host to primary Data Domain system IP LAN

Primary recovery host to primary VMAX3 array FC SAN

Primary recovery host to primary Data Domain system IP LAN

Primary VMAX3 array system to primary Data Domain system FC SAN

(Optional) Secondary recovery host to secondary VMAX3 array FC SAN

(Optional) Secondary recovery host to secondary Data Domain system IP LAN

(Optional) Secondary VMAX3 array system to secondary Data Domain system FC SAN

(Optional) Primary application host to secondary Data Domain system IP WAN

(Optional) Primary Data Domain system to secondary Data Domain system IP WAN
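Before moving on, the IP legs of this topology are easy to verify from the application host; a minimal sketch, with hypothetical hostnames (the FC SAN legs must instead be verified with your switch's zoning tools):

#!/bin/sh
# Probe only the IP-connected components; FC connectivity cannot be
# checked this way.
for host in dd-primary.example.com dd-secondary.example.com; do
    if ping -c 2 "$host" > /dev/null 2>&1; then
        echo "$host: reachable"
    else
        echo "$host: UNREACHABLE"
    fi
done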

Figure 1 on page 13 shows a sample primary site topology.


Figure 1 Sample primary site ProtectPoint topology

1. Application host
2. Recovery host
3. VMAX3 production device 0001A
4. VMAX3 production device 0001B
5. Encapsulated backup device 000BA
6. Encapsulated backup device 000BB
7. VMAX3 restore device 0001C
8. VMAX3 restore device 0001D
9. Encapsulated recovery device 000BC
10. Encapsulated recovery device 000BD
11. Data Domain vdisk device 0
12. Data Domain vdisk device 1
13. Data Domain vdisk device 2
14. Data Domain vdisk device 3

The ProtectPoint solution works with the features on the Data Domain system and VMAX3 array to provide VMAX3 to Data Domain protection. ProtectPoint uses the following features:

l On the Data Domain system:

n vdisk services

n FastCopy

l On the VMAX3 array:

n Federated Tiered Storage (FTS)


n SnapVX

Figure 2 on page 14 shows the data movement from the application/recovery (AR) host to the VMAX3 array, and then to the Data Domain system.

Figure 2 Data movement

1. AR host
2. Application
3. Host file system
4. Host operating system
5. Solutions Enabler
6. VMAX3 FTS and SnapVX functionality
7. VMAX3 production device
8. VMAX3 backup device
9. SnapVX link copy process
10. Data Domain vdisk device
11. Data Domain static-image

The solution enables an application administrator to leverage the ProtectPoint workflow to protect applications and application data. The storage administrator configures the underlying storage resources on the VMAX3 array and the Data Domain system. With this storage configuration information and the ProtectPoint software executable, the application administrator can trigger the workflow to protect the application. Before triggering the workflow, the application administrator must quiesce the application to ensure that an application-consistent snapshot is preserved on the Data Domain system.

In addition to backing up and protecting data, the application administrator must retain and replicate copies, restore data, and recover applications. The combination of ProtectPoint and the VMAX3 to Data Domain workflow enables the application administrator to complete all of these operations.

For restoring data, ProtectPoint enables the application administrator to select a specific backup and make that backup available on selected VMAX3 devices. The operations to mount, mask, and restore the data must be performed manually on the VMAX3 system through EMC Solutions Enabler. The workflow provides a copy of the data, but not any application intelligence.


Basic backup workflow

In the basic backup workflow, data is transferred from the VMAX3 array to the Data Domain system. ProtectPoint manages the data flow, but does not modify the data.

To create a copy or backup of an application, the application administrator or other appropriate user must ensure that the copy or backup is application-consistent. This means that the application administrator must quiesce the application before initiating the backup operation. Using ProtectPoint to take the snapshot on the VMAX3 array enables the application administrator to minimize the disruption to the application.

After creating the snapshot, the application administrator uses ProtectPoint to move the snapshot to the Data Domain system. The VMAX3 array keeps track of the data that has changed since the last update to the Data Domain system, and only copies the changed data. Once all the data captured in the snapshot has been sent to the Data Domain system, the application administrator can create a static-image of the data that reflects the application-consistent copy initially created on the VMAX3 array.

The static-image and any additional metadata can be managed separately from the snapshot on the VMAX3 array, and can be a source from which to create additional copies of the backup. Static-images that are complete with metadata are called backup images. ProtectPoint creates one backup image for every protected LUN. Backup images can be combined into backup sets that represent an entire application point-in-time backup.

The backup workflow consists of the following steps:

1. On the application host, the application administrator quiesces the application.

2. On the VMAX3 array, ProtectPoint creates a snapshot of the VMAX3 primary storage device. It is safe to unquiesce the application when this step is complete.

3. The VMAX3 array analyzes the data and uses FTS to copy the changed data to an encapsulated Data Domain storage device.

4. The Data Domain system creates and stores a static-image of the snapshot.

Figure 3 on page 16 shows the basic backup workflow.


Figure 3 Basic backup workflow

1. Application host
2. VMAX3 production device 0001A
3. VMAX3 production device 0001B
4. Encapsulated backup device 000BA
5. Encapsulated backup device 000BB
6. Data Domain vdisk device 0
7. Data Domain vdisk device 1
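The ordering of these steps is what preserves application consistency: the application stays quiesced only until the snapshot exists. A minimal shell sketch of that sequencing, with stub functions standing in for your application tooling and for the ProtectPoint backup command (both hypothetical; the actual CLI is covered in the ProtectPoint Administration chapter):

#!/bin/sh
# Sequencing sketch only; the stubs below are placeholders, not real commands.
set -e

quiesce_app()   { echo "quiescing application..."; }       # stub
unquiesce_app() { echo "unquiescing application..."; }     # stub
run_backup()    { echo "triggering ProtectPoint backup"; } # stub

quiesce_app      # step 1: reach an application-consistent state
run_backup       # step 2: ProtectPoint creates the VMAX3 snapshot
unquiesce_app    # safe once the snapshot exists; steps 3 and 4 (FTS copy
                 # and static-image creation) continue on the arrays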

Basic Restore workflow

The application administrator can perform two types of restores:

l Object-level restore: The application administrator selects and restores one or more files from a snapshot.

l Full-application rollback restore: The application administrator restores the application to a previous point-in-time. A VMAX3 device-level rollback efficiently transfers the data.

For either type of restore, the application administrator selects the backup image to restore from the Data Domain system.

For an object-level restore, after selecting the backup image on the Data Domain system, the application administrator performs a restore to a new set of encapsulated Data Domain vdisk devices that the VMAX3 array presents to the AR host.

For a full-application rollback restore, after selecting the backup image on the Data Domain system, the application administrator performs a restore to a new set of encapsulated Data Domain restore LUNs. Unlike an object-level restore, a full-application rollback restore requires manual intervention to complete the restore process. To make the backup image available on the VMAX3 array, the application administrator must create a snapshot between the encapsulated Data Domain restore LUN and the target VMAX3 LUN, and then initiate the copy operation.

The object-level restore workflow consists of the following steps:

1. The Data Domain system writes the static-image to the encapsulated storage device, making it available on the VMAX3 array.

2. The application administrator mounts the encapsulated storage device to the host, and uses OS- and application-specific tools and commands to restore specific objects.

Figure 4 on page 17 shows the object-level restore workflow.

Figure 4 Object-level restore workflow

1. Recovery host
2. Encapsulated recovery device 000BC
3. Encapsulated recovery device 000BD
4. Data Domain vdisk device 2
5. Data Domain vdisk device 3

The full-application rollback restore workflow consists of the following steps:

1. The Data Domain system writes the static-image to the encapsulated storage device, making it available on the VMAX3 array.

2. The application administrator creates a SnapVX snapshot of the encapsulated storage device and copies it to the VMAX3 storage device, overwriting the existing data on the device.


3. The restored data is presented to the application host.

Figure 5 on page 18 shows the full-application rollback restore workflow.

Figure 5 Full-application rollback restore workflow

1. Recovery host
2. VMAX3 restore device 0001C
3. VMAX3 restore device 0001D
4. Encapsulated recovery device 000BC
5. Encapsulated recovery device 000BD
6. Data Domain vdisk device 2
7. Data Domain vdisk device 3
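The manual portion of a full-application rollback uses the same SnapVX establish/link pattern shown later in this guide; a hedged sketch, assuming a hypothetical device group named restore_dg that pairs the encapsulated recovery devices with the target VMAX3 LUNs:

#!/bin/sh
# Illustrative only: names are hypothetical, and the data on the target
# VMAX3 devices is overwritten by the copy.
SID=0129
symsnapvx -sid "$SID" -dg restore_dg establish -name rollback-snap
symsnapvx -sid "$SID" -dg restore_dg link -copy -snapshot_name rollback-snap
symsnapvx list -detail -sid "$SID"    # monitor the copy progress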

ProtectPoint environment

The ProtectPoint environment consists of the following components:

l A VMAX3 array with SnapVX and FTS

l A Data Domain system with the Data Domain Operating System (DDOS) 5.5 or higher, vdisk services, and optional Data Domain replication.

l An AR host with Solutions Enabler 8.0.1 in local mode

Additional Information

The EMC ProtectPoint Solutions Guide provides more detailed information about the components that make up the ProtectPoint solution.


ProtectPoint Controller overview

The ProtectPoint Controller includes the following features:

l Provides a CLI that you can use to trigger the VMAX3 to Data Domain workflow for backup and restore operations.

l Provides an interface for replicating backups to a secondary Data Domain system for disaster recovery.

l Provides commands for lifecycle management of the backups.

l Triggers backup and restore operations on the VMAX3 array and Data Domain system through the use of the Solutions Enabler and Data Domain vdisk management libraries, respectively.

l Operates on the device level. ProtectPoint works with VMAX3 LUNs and Data Domain vdisk devices, not with file system objects.

You can use the ProtectPoint Controller CLI to complete the following tasks:

l Create a snapshot of the production application LUNs on the VMAX3 array.

l Trigger the movement of data created from the point-in-time snapshot on the VMAX3 array to the encapsulated Data Domain devices.

l Create a static image for each LUN in the data set on the Data Domain system.

l Manage the static image replication from the source Data Domain system to a Data Domain system in the data recovery site.

l Securely manage the credentials for the Data Domain systems.

l Manage the ProtectPoint backup and restore catalog.

l Manage the lifecycles of the data backups by listing and optionally deleting existing backups.

l Show the version number of the ProtectPoint Controller.

l Validate the content and format of the configuration files.

Configuration file

The default configuration file, protectpoint.config, contains information about the source devices, the backup target devices, the restore target devices, and the relationships between these devices for both the VMAX3 array and Data Domain system.

When you set up the ProtectPoint Controller on the AR host, modify the default configuration file to include the specific details about your devices. Table 4 on page 19 lists the required information for the modified configuration file.

Note

You can have more than one configuration file.

Table 4 Required configuration file information

Symdev-ID and SYMM-ID of the VMAX3 source devices: Identifiers for the VMAX3 production devices. Used to identify the production devices from which ProtectPoint creates a snapshot on the VMAX3 system.

Symdev-ID and Data Domain World Wide Name (WWN) of the encapsulated backup target devices: Identifiers for the encapsulated Data Domain devices. Snapshots are copied to these devices and picked up on the Data Domain system.

Data Domain WWN of the encapsulated restore target devices: Identifier for the encapsulated restore devices. Used to receive the static-image that contains the backup image from the Data Domain system.

Data Domain WWN of the restore target devices on the secondary Data Domain system, and the vdisk pool and vdisk device group on the secondary Data Domain system: Identifiers for the restore devices at the secondary site. Used to copy the static-image that contains the backup image from the primary Data Domain system to the secondary Data Domain system.

Primary Data Domain system hostname or IP address, username, and password: Information for the primary Data Domain system. Used for all control path operations for the Data Domain system.

Secondary Data Domain system hostname or IP address, username, and password: Information for the secondary Data Domain system. Used for static-image replication.

ProtectPoint prerequisites

You must meet the following prerequisites for ProtectPoint operations:

l You must have a VMAX3 array with SnapVX and FTS software.

l You must have a Data Domain system that is supported by ProtectPoint.

Note

A Data Domain replication license is required to use the ProtectPoint replication functionality.

l You must have the appropriate VMAX3 source capacity licenses.

l You must have Solutions Enabler 8.0.1 installed on the AR host.

l You must establish IP connectivity to the Data Domain system's TCP port 3009.

l You must configure SAN zoning between the VMAX3 FA ports and the AR hosts.

n You must ensure one DX emulation exists on each director within the same engine. Each DX emulation requires two ports exclusively for zoning to the Data Domain system.
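The TCP port 3009 requirement in the list above can be checked from the AR host before any further setup; a minimal sketch, assuming the nc utility is installed and using a hypothetical hostname:

#!/bin/sh
# Verify the IP control path to the Data Domain system.
DD_HOST=dd-primary.example.com    # hypothetical
if nc -z -w 5 "$DD_HOST" 3009; then
    echo "TCP 3009 open on $DD_HOST"
else
    echo "TCP 3009 unreachable on $DD_HOST; check routing and firewalls"
fi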

Figure 6 on page 21 shows an example of a simple SAN zoning configuration.


Figure 6 SAN zoning example

1. AR host
2. VMAX3 array
3. Data Domain system
4. Primary VMAX3 storage device
5. External device encapsulated on the VMAX3 array
6. VMAX3 FA ports
7. VMAX3 DX port
8. VMAX3 DX port
9. Data Domain storage device attached to the VMAX3 array as an external device
10. Data Domain HBA
11. Data Domain HBA
12. FC SAN
13. FC zone connecting the VMAX3 FA ports to the AR host
14. FC zones connecting the VMAX3 DX ports to the Data Domain HBA ports

ProtectPoint Controller Prerequisites

The following ProtectPoint Controller prerequisites must be met before proceeding:

l Solutions Enabler must use the same user account as the ProtectPoint Controller.


Note

Refer to the Solutions Enabler documentation to configure a user account without root access.

l The ProtectPoint Controller must be installed on the AR host.

Data Domain vdisk prerequisites

You must meet the following prerequisites for Data Domain vdisk operations:

l You must configure a username and password for ownership of the Data Domain vdisk devices.

l You must apply a Data Domain vdisk license.

l You must create the Data Domain file system (DDFS).

vdisk object hierarchy

Use the vdisk object hierarchy mappings in Table 5 on page 22 to plan the ProtectPoint configuration.

Table 5 vdisk object hierarchy mappings

Storage object Mapping level

Pool Department

Device-group Application

Device Device

Note

By default, access control is implemented at the pool level. If additional granularity is required, create the pools based on the access control requirements.

Data Domain supports the following maximum numbers of pools, device-groups, and vdisk devices:

l Pools: 32

l Device-groups: 1024 per pool

l vdisk devices: 2048, but quotas can be set to limit the amount of space allocated for vdisk devices

Use the following commands to determine if additional pools, device-groups, or vdisk devices can be created:

l filesys show space displays the space available to and used by Data Domain storage devices.

l mtree show compression displays compression statistics.

l quota capacity show all displays the capacity quotas and usage of all storage devices.

l vdisk pool show list displays a list of the pools on the Data Domain system.
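When administering the Data Domain system remotely, these checks can be gathered in one pass; a minimal sketch, assuming SSH access as sysadmin and a hypothetical hostname:

#!/bin/sh
# Run each capacity-related check over SSH and label its output.
DD_HOST=dd-primary.example.com    # hypothetical
for cmd in "filesys show space" \
           "mtree show compression" \
           "quota capacity show all" \
           "vdisk pool show list"; do
    echo "### $cmd"
    ssh sysadmin@"$DD_HOST" "$cmd"
done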

Create a separate device-group for each application host that is part of the configuration. If multiple applications will run on a single host, create a separate device-group for each application. Place all backup and restore devices intended for use with an application into a single device-group for that application. ProtectPoint operations will fail if the backup and restore devices for a given application are in different device-groups.

Note

If the backup and restore devices for an application are in separate device-groups, there is no way to move devices from one device-group to another. Delete the existing backup and restore devices, and create new ones.

Data Domain storage layout

Figure 7 on page 23 shows the storage layout of the DDFS.

Figure 7 DDFS storage layout

1. /data
2. /data/col1
3. /data/col1/backup
4. /data/col1/MTree2
5. /data/col1/MTree3

Table 6 on page 23 describes each element of the DDFS storage layout.

Table 6 DDFS storage elements

/data: Top-level directory of the Data Domain storage file system. This directory cannot be changed.

/data/col1: Represents a collection of data, and enables the expansion of the file system by creating additional collections, col2, col3, col4, and so on.

/data/col1/backup: Contains backups of the data and directory structure of the collection. This MTree cannot be deleted or renamed. Subdirectories can be created to organize and separate the data.

/data/col1/MTree: Lowest level of the Data Domain storage file system. Each MTree is an independently managed directory. Data Domain allows up to 100 MTrees to be created, but performance degradation occurs when more than 32 MTrees are active at one time.

vdisk service

The vdisk service enables you to create devices, device-groups, and device pools. The vdisk service also provides additional functionality, such as creating static-images (snapshots) and replicating data.

The EMC Data Domain Operating System Command Reference Guide provides more information about the vdisk service and the vdisk commands.

Note

The ProtectPoint Controller does not support the use of Virtual Tape Library (VTL) functionality or DD Boost over Fibre Channel (DFC) on a Data Domain system running vdisk services.

Data Domain file system

The DDFS stores the vdisk objects. For example, a vdisk static-image can be treated as a file that resides within the DDFS. Therefore, replicating a static-image is the same as replicating a file. By leveraging the services provided by the DDFS, the vdisk service is able to efficiently create static-images of LUNs.

Note

The DDFS automatically defragments backups created in a ProtectPoint environment to prevent performance degradation over time.

The EMC Data Domain Operating System Administration Guide provides more information about the DDFS.

Host considerations

The following host considerations can impact the ProtectPoint implementation:

l For FC multi-pathing, verify that enough FC ports are available on the VMAX3 array, the Data Domain system, and the FC switch.

l For IP network redundancy, verify that enough Ethernet ports and interfaces are available to create the redundant configuration.


CHAPTER 2

Application Storage Configuration

This chapter includes the following topics:

l Application storage configuration overview
l Data Domain sizing considerations
l Discovering application storage
l Provisioning LUNs on the VMAX3 to the AR host
l Encapsulating Data Domain devices on the VMAX3 array


Application storage configuration overview

Application storage is the primary storage for application data. This storage often consists of a number of separate LUNs that have file systems and are made available to the application host by mount points or drive letters. These LUNs are the source for any ProtectPoint backup. This chapter provides information on configuring or discovering LUNs, and using that information to create the equivalent snapshot target and recovery LUNs.

When you configure application storage, you complete the following high-level processes:

l Provision VMAX3 LUNs to the AR host.

l Encapsulate Data Domain devices on the VMAX3 array.

Provision the LUNs on the VMAX3 array to the AR host first. Then encapsulate the Data Domain devices on the VMAX3 array.

The second procedure, encapsulating the Data Domain devices to the VMAX3 array, requires storage management visibility into both system environments. You complete some tasks in the VMAX3 environment and other tasks in the Data Domain environment. Therefore, you must have one window open to the VMAX3 environment and one window open to the Data Domain environment. You will switch between environments as you encapsulate the LUNs.

Data Domain sizing considerations

Use the best practices and limits defined in the EMC VMAX3 documentation to plan for provisioning storage on the VMAX3 array.

The following guidelines apply to provisioning an appropriate amount of storage on the primary Data Domain system, and if applicable, providing an appropriate amount of storage at the secondary site:

l Verify that sufficient capacity is available on the primary Data Domain system to accommodate all the AR hosts in the deployment. Each VMAX3 LUN requires two encapsulated vdisk devices of equal or greater size on the primary Data Domain system. One device is for backups, and the other device is for restores.

l If applicable, verify that the VMAX3 array and Data Domain system at the secondary site have available storage capacity that is greater than or equal to the storage capacity on the primary VMAX3 array and Data Domain system.

Use the following values to estimate the required logical capacity on the Data Domain system:

l Number: The number of LUNs to back up.

l Size: The size of each LUN to back up.

l Copies: The number of backups to keep.

l Add two to the number of backups to keep to account for copies of the data on both the backup and restore devices.

The formula for calculating the logical capacity is: (Number * Size) * (Copies + 2)

Table 7 on page 27 shows an example with 10 LUNs that are each 10 GB in size, keeping five backups.


Table 7 Sizing calculation example

Element Value

Number 10

Size 10 GB

Copies 5 + 2

Required capacity 700 GB

Note

Logical capacity is calculated prior to compression and deduplication. Compression and deduplication reduce the actual amount of capacity consumed.
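As a quick check of the formula, the same arithmetic can be scripted; a minimal sketch using the Table 7 values:

#!/bin/sh
# Logical capacity estimate: (Number * Size) * (Copies + 2).
NUMBER=10     # LUNs to back up
SIZE_GB=10    # size of each LUN, in GB
COPIES=5      # backups to keep; the +2 covers the backup and restore devices
echo "Required logical capacity: $(( NUMBER * SIZE_GB * (COPIES + 2) )) GB"
# Prints: Required logical capacity: 700 GB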

Capacity monitoring is available at the following levels:

l Logical capacity is reported at the system, MTree (vdisk pool), and file (static-image) levels.

l Actual capacity is reported at the system level.

Discovering application storage

You can configure ProtectPoint after storage provisioning is complete by discovering existing application storage and using the storage to determine the Data Domain storage requirements. The storage administrator can help identify all the necessary storage configuration information.

The information required is similar to the preceding details. The information includes, but is not limited to, the number of devices allocated to the application, the geometry of the devices created, and awareness of the masking views, ports used, and SAN zoning. The process to provision the Data Domain storage and encapsulate the storage on the VMAX3 array is identical to provisioning new storage.

Discovering device geometry

You will need the geometry of the existing VMAX3 storage devices. Step 6 on page 32 provides instructions on identifying the VMAX3 device geometry and mapping it to the required geometry for Data Domain devices.

Provisioning LUNs on the VMAX3 to the AR host

To provision the VMAX3 LUNs to the AR host, complete the following tasks on the VMAX3 array.

Note

This procedure is for provisioning LUNs on the VMAX3 array for new installations. Some of these steps may not be needed in all cases. Before you begin, use Solutions Enabler to determine the devices currently in use. Use that information to complete the tasks in this section, as applicable.

l Create one or more devices.


l Create a storage group.

l Add the devices to the storage group.

l Create a port group.

l Add the front-end ports to the port group where the server is zoned.

l Create an initiator group.

l Add the initiator WWNs of the AR host to the initiator group.

l Create a masking view.

l Discover newly provisioned LUNs on the AR host.

Note

Do not log out of the VMAX3 array at the end of this procedure.

Procedure

1. From the AR host, log in to the VMAX3 array.

2. Create the device. This is the device that will be provisioned to the AR host.

Run the following command: create dev count=<count>, size=<n> [MB | GB | CYL], emulation=<emulation>, config=<dev_config> [, attribute=<attribute> [in pool=<pool>] [member_state=<state>]]

create dev count=4, size=1200 cyl, emulation=FBA, config=TDEV

3. View the geometry of the VMAX3 devices.

Run the following commands:

a. Run the following command to list the VMAX3 devices: symdev list -sid <SymmID>

symdev list -sid 0129

Symmetrix ID: 000196700129

 Device Name          Dir                 Device
---------------------------- ------- -------------------------------------
                                                                  Cap
 Sym   Physical      SA :P    Config        Attribute    Sts     (MB)
---------------------------- ------- -------------------------------------
00001  Not Visible   ???:???  TDEV          N/Grp'd ACLX RW         11
00002  Not Visible   ???:???  TDEV          N/Grp'd      RW          6
00003  Not Visible   ???:???  TDEV          N/Grp'd      RW          6
00004  Not Visible   ???:???  TDEV          N/Grp'd      RW          6
(output truncated for display)
00019  Not Visible   ???:???  TDEV          N/Grp'd      RW          6
0001A  Not Visible   ???:???  TDEV          N/Grp'd      RW      23016
0001B  Not Visible   ???:???  TDEV          N/Grp'd      RW      23016
0001C  Not Visible   ???:???  TDEV          N/Grp'd      RW      23016
0001D  Not Visible   ???:???  TDEV          N/Grp'd      RW      23016
000A6  Not Visible   ???:???  TDEV          N/Grp'd      RW     215719
000A7  Not Visible   ???:???  TDEV          N/Grp'd      RW     215719
000A8  Not Visible   ???:???  TDEV          N/Grp'd      RW     215719

b. Run the following command to view the geometry of a specific device: symdev show <SymDevName> -sid <SymmID>

symdev show 0001A -sid 0129
...
Geometry : Native {
    Sectors/Track     : 256
    Tracks/Cylinder   : 15
    Cylinders         : 12275
    512-byte Blocks   : 47136000
    MegaBytes         : 23016
    KiloBytes         : 23568000
}
...

Use the geometry mappings in Table 8 to create the Data Domain vdisk devices.

Table 8 VMAX3 to Data Domain device geometry mappings

VMAX3 device geometry value Equivalent Data Domain device geometry value

Sectors per track Sectors per track

Tracks per cylinder Heads

Cylinders Cylinders
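Because the mapping is mechanical, the geometry values can also be extracted from the symdev show output programmatically; a hedged sketch that assumes the output layout shown above and reuses this chapter's example names:

#!/bin/sh
# Derive a matching "vdisk device create" command from a device's geometry.
symdev show 0001A -sid 0129 | awk -F: '
    /Sectors\/Track/   { spt   = $2 }
    /Tracks\/Cylinder/ { heads = $2 }
    /Cylinders/        { cyl   = $2 }
    END { printf "vdisk device create heads %d cylinders %d sectors-per-track %d pool demo-1 device-group demo-devgrp\n", heads, cyl, spt }'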

4. Create a storage group.

Run the following command: symaccess -sid <SymmID> create -name <GroupName> -type storage [devs <SymDevName>[:<SymDevName>] [,<SymDevName>[,...]] | -g <DgName> [-std] [-bcv] [-vdev] [-tgt] | -file <FileName> [src] [tgt]] [-reserve_id <ResvID>[,<ResvID>[,...]]]

symaccess -sid 0129 create -name group1 -type storage

5. Add devices to the storage group.

Run the following command: symsg -sg <SgName> -sid <SymmID> [-i <Interval>] [-c <Count>] add dev <SymDevName>

symsg -sg group1 -sid 0129 add dev device_1

6. Create a port group.

Run the following command: symaccess -sid <SymmID> create -name <GroupName> -type port [-dirport <Dir>:<Port> [, <Dir>:<Port> ...]]

symaccess -sid 0129 create -name group1 -type port

7. Add the front-end ports to the port group.

The front-end ports are the ones that connect the AR host to the VMAX3 array.

Run the following command: symaccess -sid <SymmID> -name <GroupName> -type port [-dirport <Dir>:<Port> [, <Dir>:<Port> ...]] [-ckd] add [-celerra] [-rp]

symaccess -sid 0129 -name group1 -type port add -dirport 1e:8

8. Create an initiator group.

Run the following command: symaccess -sid <SymmID> create -name <GroupName> -type initiator [-wwn <wwn> | -iscsi <iscsi> | -file <FileName> | -ig <InitiatorGroupName>] [-consistent_lun]

symaccess -sid 0129 create -name group1 -type initiator -consistent_lun

Note

The -consistent_lun option forces all devices masked to the initiator group to connect to the same LUN with all available ports.

9. Add the initiator WWNs to the initiator group.

Initiators can be added to an existing initiator group by specifying the initiator type (-wwn or -iscsi), the initiator group name, or by using an input file.

Run the following command: symaccess -sid <SymmID> -name <GroupName> -type initiator [-wwn <wwn> | -iscsi <iscsi> | -ig <InitiatorGroupName> | -f <FileName>] add

symaccess -sid 0129 -name group1 -type initiator -wwn 6002188000002ddb5d0525eee8a00011 add

10. Create a masking view.

Once you put the storage group, the port group, and the initiator group in the masking view, the initiators in the initiator group can see the devices via the port in the port group.

Run the following command: symaccess create view -sid <SymmID> -name <ViewName> -sg <StorageGroupName> -pg <PortGroupName> -ig <InitiatorGroupName> [-reserve_id <ResvID>[,<ResvID>[,...]]] [-lun <Addr>] [-ckd] [-celerra] [-rp]

symaccess create view -sid 0129 -name view1 -sg group2 -pg portgroup1 -ig initator_group1

11. Update the local symapi database on the AR host.

symcfg discover

Do not log out of the VMAX3 array at the end of this procedure.
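For reference, steps 2 through 11 condense to a short script; a sketch using consistent hypothetical group names in place of the mixed example names above:

#!/bin/sh
# Masking setup sketch; names, the device ID, port, and WWN are illustrative.
SID=0129
symaccess -sid "$SID" create -name app1_sg -type storage
symsg -sg app1_sg -sid "$SID" add dev 0001A
symaccess -sid "$SID" create -name app1_pg -type port
symaccess -sid "$SID" -name app1_pg -type port add -dirport 1e:8
symaccess -sid "$SID" create -name app1_ig -type initiator -consistent_lun
symaccess -sid "$SID" -name app1_ig -type initiator -wwn 6002188000002ddb5d0525eee8a00011 add
symaccess create view -sid "$SID" -name app1_mv -sg app1_sg -pg app1_pg -ig app1_ig
symcfg discover    # refresh the local symapi database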

Encapsulating Data Domain devices on the VMAX3 array

Before you begin

Before encapsulating the Data Domain devices onto the VMAX3 array, verify that the following prerequisites are met:

l You must have already provisioned LUNs on the VMAX3 array to the AR host.

l You know or can obtain the geometry of the devices created on the VMAX3 array.

l You are logged in to the VMAX3 array.

To encapsulate the Data Domain devices on the VMAX3 array, complete the following steps on the AR host. Use the AR host to access both the VMAX3 and Data Domain environments.


Note

This procedure is for encapsulating Data Domain devices on the VMAX3 array for new installations. Some of these steps may not be needed in all cases.

Before you begin, use the appropriate commands in the EMC Data Domain Operating System Administration Guide and the Solutions Enabler functionality to determine the devices currently in use. Then use that information to complete the tasks in this section, as applicable.

Note

You complete some tasks in the VMAX3 environment, and other tasks in the Data Domain environment. Therefore, you must have one window open to the VMAX3 environment and one window open to the Data Domain environment.

Complete the following tasks on the Data Domain system:

1. Log in to an SSH session on the Data Domain system.

2. Enable the vdisk service if not already enabled.

3. Create a vdisk device pool.

4. Create a vdisk device-group.

5. Create vdisk devices that have the same geometry as the VMAX3 primary LUNs. Create two vdisk devices for every device created on the VMAX3 array.

6. Create an access group on the Data Domain.

7. Add the vdisk devices to the access group on the Data Domain.

8. Verify the VMAX3 DX ports and the Data Domain endpoint ports are zoned together.

9. View the list of VMAX3 initiators on the Data Domain.

10. Add the VMAX3 initiators to the access group on the Data Domain.

Use Solutions Enabler to complete the following steps on the VMAX3 array:

1. View the back-end ports (DX ports) on the VMAX3 array and display the WWNs.

2. Display the LUNs that are visible for a specific WWN.

3. List the disk groups that are available on the VMAX3 array.

4. Use the FTS functionality to encapsulate the Data Domain disks on the VMAX3 array.

Encapsulating an external LUN creates the VMAX3 LUN that enables access to the external LUN. Manually set the encapsulated LUN as the snapshot target. Encapsulating the restore vdisk LUNs is required for restoration operations.

The VMAX3 array must have four paths to the Data Domain system to properly enable the relationship between the two systems. ProtectPoint prerequisites on page 20 provides more information about path requirements.

Procedure

1. From the AR host, log in to the Data Domain system as the system administrator, sysadmin.

2. In the Data Domain environment, enable the vdisk protocol if not already enabled.

Run the following command: vdisk enable

To see if the vdisk protocol is enabled, run the vdisk status command.


3. Create a user to own the new vdisk device pool.

Run the following command: user add <user> [role {admin | security | user | backup-operator | none}]

user add user1 role none

Note

The EMC Data Domain Operating System Version 5.5 Command Reference Guide provides more information about the user add command and its options.

4. Create the vdisk device pool.

Note

Use the guidelines in vdisk object hierarchy on page 22 to name device pools.

Run the following command: vdisk pool create <pool-name> user <user-name>

vdisk pool create demo-1 user user1

5. Create vdisk device groups within the vdisk device pool.

Note

Use the guidelines in vdisk object hierarchy on page 22 to name device-groups.

Run the following command: vdisk device-group create [count <count>] [capacity <capacity>] pool <pool-name> device-group <device-group-name>

vdisk device-group create pool demo-1 device-group demo-devgrp

Note

Create a separate device-group for each AR host.

6. View the geometry of the VMAX3 devices.

Run the following commands:

a. Run the following command to list the VMAX3 devices: symdev list -sid <SymmID>

symdev list -sid 0129

Symmetrix ID: 000196700129

 Device Name          Dir                 Device
---------------------------- ------- -------------------------------------
                                                                  Cap
 Sym   Physical      SA :P    Config        Attribute    Sts     (MB)
---------------------------- ------- -------------------------------------
00001  Not Visible   ???:???  TDEV          N/Grp'd ACLX RW         11
00002  Not Visible   ???:???  TDEV          N/Grp'd      RW          6
00003  Not Visible   ???:???  TDEV          N/Grp'd      RW          6
00004  Not Visible   ???:???  TDEV          N/Grp'd      RW          6
(output truncated for display)
00019  Not Visible   ???:???  TDEV          N/Grp'd      RW          6
0001A  Not Visible   ???:???  TDEV          N/Grp'd      RW      23016
0001B  Not Visible   ???:???  TDEV          N/Grp'd      RW      23016
0001C  Not Visible   ???:???  TDEV          N/Grp'd      RW      23016
0001D  Not Visible   ???:???  TDEV          N/Grp'd      RW      23016
000A6  Not Visible   ???:???  TDEV          N/Grp'd      RW     215719
000A7  Not Visible   ???:???  TDEV          N/Grp'd      RW     215719
000A8  Not Visible   ???:???  TDEV          N/Grp'd      RW     215719

b. Run the following command to view the geometry of a specific device: symdev show <SymDevName> -sid <SymmID>

symdev show 0001A -sid 0129
...
Geometry : Native {
    Sectors/Track     : 256
    Tracks/Cylinder   : 15
    Cylinders         : 12275
    512-byte Blocks   : 47136000
    MegaBytes         : 23016
    KiloBytes         : 23568000
}
...

Use the geometry mappings in Table 9 on page 33 to create the Data Domain vdisk devices.

Table 9 VMAX3 to Data Domain device geometry mappings

VMAX3 device geometry value Equivalent Data Domain device geometry value

Sectors per track Sectors per track

Tracks per cylinder Heads

Cylinders Cylinders

7. Create the vdisk device that matches the geometry of the device created on the VMAX3 array.

Run the following command: vdisk device create [count <count>] heads <head-count> cylinders <cylinder-count> sectors-per-track <sector-count> pool <pool-name> device-group <device-group-name>

vdisk device create heads 15 cylinders 109227 sectors-per-track 256 pool demo-1 device-group demo-devgrp

8. (Optional) Display the vdisk pools, the vdisk device-groups, and the vdisk devices, as appropriate.

Run one or more of the following commands: vdisk pool show detailed <pool-name>, vdisk device-group show detailed <device-group-name>, or vdisk device show detailed <device-name>

vdisk pool show detailed
vdisk device-group show detailed demo-devgrp
vdisk device show detailed device-demo

9. Create an access group.

Run the following command: scsitarget group create <group-name> service vdisk

scsitarget group create demo-accgrp service vdisk


10. Add the newly created vdisk to the access group.

Run the following command: vdisk group add <group-name> {device <device-name> | pool <pool-name> device-group <device-group-name> [device <device-name>]} [lun <lun>] [primary-endpoint {all | none | <endpoint-list>}] [secondary-endpoint {all | none | <endpoint-list>}]

vdisk group add demo-accgrp device vdisk-dev16

11. View the list of initiators.

The list displays the VMAX3 back-end (DX) ports that have logged on to the Data Domain system.

Run the following command: scsitarget initiator show list

scsitarget initiator show list

Initiator     System Address            Group    Service
-----------   -----------------------   ------   -------
initiator-1   2a:10:00:21:88:00:82:74   n/a      n/a
initiator-2   2b:10:00:21:88:00:82:74   n/a      n/a
initiator-3   50:02:18:82:08:a0:02:14   n/a      n/a
initiator-4   50:02:18:81:08:a1:03:cc   n/a      n/a
ucs16d_2a     21:00:00:24:ff:3f:25:1a   n/a      n/a
ucs16d_2b     21:00:00:24:ff:3f:25:1b   n/a      n/a
-----------   -----------------------   ------   -------

12. Rename (alias) the initiators.

Run the following command: scsitarget initiator rename <initiator-name> <new-initiator-name>

scsitarget initiator rename initiator-1 vmax4d_3_08
Initiator 'initiator-1' successfully renamed.

13. Add the initiators to the access group.

Run the following command: vdisk group add <group-name> initiator <initiator-name>

vdisk group add demo-accgrp initiator symm_9h0
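For reference, the Data Domain side of this procedure (steps 2 through 13) condenses to the following sequence, run in the SSH session on the Data Domain system; a sketch reusing this section's example names:

# Sketch of the DD-side setup; all names and the geometry are examples.
vdisk enable
user add user1 role none
vdisk pool create demo-1 user user1
vdisk device-group create pool demo-1 device-group demo-devgrp
vdisk device create heads 15 cylinders 109227 sectors-per-track 256 pool demo-1 device-group demo-devgrp
scsitarget group create demo-accgrp service vdisk
vdisk group add demo-accgrp device vdisk-dev16
scsitarget initiator show list
scsitarget initiator rename initiator-1 vmax4d_3_08
vdisk group add demo-accgrp initiator vmax4d_3_08   # use the alias you assigned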

14. Switch to the VMAX3 environment.

15. Review the list of DX ports that are connected to the Data Domain system from the VMAX3 array.

Run the following command: symsan list -sid <SymmID> -sanports -DX all -port all

symsan list -sid 0129 -sanports -DX all -port all

16. Copy the WWN for one of the ports and use the WWN to determine which LUNs are visible through this WWN.

The LUN is "exposed" but is not ready for use yet. The LUN must be encapsulated onto the VMAX3 array with FTS.

Run the following command: symsan -sid <SymmID> -dir All -p All list -sanluns -wwn <wwn>

symsan -sid 0129 -dir All -p All list -sanluns -wwn 2800002182DDB5D

17. Create a Symmetrix device group.


Run the following command: symdg create -sid <SymmID> <DgName> -type REGULAR

symdg create -sid 0129 device-group-5 -type REGULAR

18. Add the source and target devices to the device group:

a. Run the following command to add the source device: symdg add -sid <SymmID> dev <SymDevName>

symdg add -sid 0129 dev 001

b. Run the following command to add the target device: symdg add -sid <SymmID> dev -tgt <SymDevName>

symdg add -sid 0129 dev -tgt 002

19. Encapsulate the Data Domain disk on the VMAX3 array. There are two ways to encapsulate disks:

l Run the following command to encapsulate disks individually: symconfigure -sid <SymmID> -cmd "add external_disk, wwn=<wwn>, encapsulate_data=yes;" commit -v -nop

symconfigure -sid 0129 -cmd "add external_disk, wwn=6002189000002DDB5D0525EEE8A00011, encapsulate_data=yes;" commit -v -nop

It takes approximately seven minutes to encapsulate each device.

l Run the following commands to encapsulate a group of devices:

a. In a BASH shell on the Data Domain system, run the following command to capture all the vdisk devices to encapsulate in a text file: ssh sysadmin@<dd-system> vdisk device show list pool <pool-name> device-group <device-group-name> | awk '/^vdisk-dev/{print "add external_disk wwn=" $5 ", encapsulate_data=YES;"}' | sed 's/://g' > devs.txt

Note

The command will fail if there are fewer than two devices specified in the text file.

b. On the AR host, run the following command to encapsulate the vdisk devices listed in the text file: symconfigure commit -f devs.txt

Establishing a configuration change session...............Established.
Processing symmetrix 000196700129
Performing Access checks..................................Allowed.
Checking Device Reservations..............................Allowed.
Initiating COMMIT of configuration changes................Queued.
COMMIT requesting required resources......................Obtained.
Step 008 of 064 steps.....................................Executing.
Step 091 of 214 steps.....................................Executing.
Step 210 of 219 steps.....................................Executing.
Local: COMMIT............................................Done.
New symdev: 0004C [DATA device]
New symdev: 000BA [TDEV]
Terminating the configuration change session..............Done.
The configuration change session has successfully completed.

Run the symconfigure command to encapsulate the remaining three vDisk devices.


Note

The VMAX3 array returns a contiguous range of new device names whenever possible. However, the range of device names will not always be contiguous.

The system takes the device with the specified WWN (the device is now visible to the VMAX3 array), encapsulates the device, puts the device into the external device group, and assigns a name to the device.

20. Establish the relationship between the VMAX3 array and the Data Domain system, and activate the snapshot.

Run the following command: symsnapvx -sid <SymmID> -dg <DgName> establish -name <SnapshotName>

symsnapvx -sid 0129 -dg device-group-5 establish -name dg5-snap

21. Move the data from the VMAX3 array onto the Data Domain system.

This command moves the changed blocks on the source device to the target device.

Run the following command: symsnapvx -sid <SymmID> -dg <DgName> link -copy -snapshot_name <SnapshotName>

symsnapvx -sid 0129 -dg device-group-5 link -copy -snapshot_name dg5-snap

22. Check the status of the link copy operation.

Run the following command: symsnapvx list -detail -sid <SymmID>

symsnapvx list -detail -sid 0129
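Because the link -copy runs asynchronously, one convenient pattern is to poll until the copy completes; a hedged sketch (the CopyInProg pattern is an assumption about the state text; inspect your own symsnapvx list -detail output first):

#!/bin/sh
# Poll the snapshot state until the link copy finishes.
SID=0129
while symsnapvx list -detail -sid "$SID" | grep -q "CopyInProg"; do
    echo "link copy still in progress..."
    sleep 60
done
echo "link copy complete"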

Delete an encapsulated disk

If the environment requirements change, Data Domain vdisks that are encapsulated on the VMAX3 system can be deleted from the ProtectPoint configuration.

Note

You complete some tasks in the VMAX3 environment, and other tasks in the Data Domain environment. Therefore, you must have one window open to the VMAX3 environment and one window open to the Data Domain environment.

Procedure

1. Terminate all snapshot and replication sessions to the encapsulated disk you intend to delete.

Performing a backup on page 54 explains how to abort a backup in progress, and Replicating a backup on page 58 explains how to abort a replication in progress.

2. Place the encapsulated device in a not-ready state and unmap it from any hosts that have access to it.

3. Remove the encapsulated disk from the ProtectPoint configuration file.

Setting up the ProtectPoint Controller on page 40 explains how to edit the configuration file.

4. Delete the encapsulated disk on the VMAX3.


Run the following command: symconfigure -sid <SymmID> -cmd "remove external_disk wwn=<wwn> | spid=<spid>;" commit -nop

symconfigure -sid 0129 -cmd "remove external_disk wwn=2800002182DDB5D;" commit -nop

5. Delete the corresponding vdisk device from the Data Domain system.

Run the following command: vdisk device destroy <device-name> [destroy static-images {yes | no}]


CHAPTER 3

Setting Up the ProtectPoint Controller

This chapter includes the following topics:

l ProtectPoint Controller setup overview
l Setting up the ProtectPoint Controller
l Validating the configuration file


ProtectPoint Controller setup overview

When you set up the ProtectPoint Controller, you complete some tasks in the VMAX3 environment and others in the Data Domain environment. Keep one window open to each environment; you will switch between them as you set up the ProtectPoint Controller on the AR host.

Note

You receive a password with the software license. You must use the password when you unpack the downloaded software package.

Setting up the ProtectPoint Controller

You must install the Solutions Enabler 8.0.1 package and the EMC ProtectPoint 1.0 package. You must also create one or more configuration files for the applications and the devices for each application, save the Data Domain credentials securely on the host, and validate the configuration and connectivity.

Before you begin

Before you set up the ProtectPoint Controller, verify that the following prerequisites are met:

- SAN connectivity is established for the VMAX3 array, the Data Domain system, and the AR host.

- VMAX3 to Data Domain SAN connectivity has been established over a SAN switch that is either single or dual fabric.

- You have the required licenses for the products and systems you are installing.

- You have provisioned LUNs on the VMAX3 array and the Data Domain system, as applicable. Application Storage Configuration on page 25 provides more information about storage provisioning.

- You have configured the appropriate applications on the AR host.

- You have the password provided with the software license to unpack the ProtectPoint software package.

Figure 8 on page 41 shows an example where three ProtectPoint configuration files are required for three separate applications.


Figure 8 Multiple configuration files for multiple applications

Procedure

1. Log in to the AR host as a system administrator, such as root for Linux or UNIX systems.

2. Install the Solutions Enabler 8.0.1 software on the AR host.

The EMC Solutions Enabler Array Management CLI User Guide provides more information about Solutions Enabler.

Note

The client/server mode is not supported for this release. Only local mode is supported. When installing Solutions Enabler, configure it to use the same user account on the AR host as the ProtectPoint Controller. Refer to the Solutions Enabler documentation to configure a user account without root access.

3. Copy the ProtectPoint software package to the /tmp directory.

4. Unpack the ProtectPoint software package.

Run the following command: tar xvf protectpoint-1.0.0.X-{arch}.tar

5. When prompted for a password, type the password you received with the software license.

6. Set the ProtectPoint environment variables.

Run the following command: . protectpointrc

Failure to set the ProtectPoint environment variables causes all protectpoint commands to fail. The environment variables do not persist between sessions; set them each time you log in to the ProtectPoint Controller.
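Because the variables are shell-local, one option is to source the file from the login profile so that each new session picks them up automatically. A minimal sketch, assuming a Bourne-style shell and that the package was unpacked in /tmp; both are assumptions, so adjust the path to your environment:

# Source the environment file in the current shell
cd /tmp/protectpoint-1.0.0.X
. protectpointrc
# Optionally source it from the login profile for future sessions
echo ". /tmp/protectpoint-1.0.0.X/protectpointrc" >> ~/.profile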

7. Set the default directory to the location of the ProtectPoint software.


8. Install the EMC ProtectPoint 1.0 software.

Run the following command: ./protectpoint_install -install

a. Accept the default installation directory, /opt/emc, or specify a different installation directory.

b. Optionally specify the following application details when prompted:

- Application name

- Application version

- Application information

9. Use a text editor or similar application to modify the default configuration file, protectpoint.config, in the /protectpoint-1.0.0.X/config directory, and rename it if necessary. Create one or more configuration files as needed. For example:

- Create a separate configuration file for each application.

- Create a separate configuration file for each set of devices per application.

- If necessary, create a separate configuration file for the data and log files.

The configuration file contains the following subsections:

- General: contains information about the application and the paths for the RSA lockbox, catalog, and log files.

- Primary system: contains information associated with the primary Data Domain and VMAX3 storage systems used in the workflow.

- Primary devices (used for backing up data): identifies the VMAX3 production devices (holding the Oracle database) and the encapsulated devices for the backup vdisk devices on the Data Domain system.

- Primary restore devices (used for restoring data): describes the restore vdisk devices on the primary Data Domain system and their encapsulated devices on the VMAX3 array.

- (Optional) Secondary system (used for replicating data): contains information associated with replicating or copying data from the primary Data Domain system to the secondary Data Domain system.

  Note: This section is required for Data Domain static-image remote replication.

- (Optional) Secondary restore devices (used for restoring data from the secondary system): describes the restore vdisk devices on the secondary Data Domain system and their encapsulated devices on the VMAX3 array.

You need to modify the content in each of these sections according to your topology.


Each section can contain multiple key-value pairs in the key = value format. Values can be enclosed in single (') or double (") quotation marks. Values can include spaces as well as special characters such as the equal sign (=), the pound sign (#), and the semicolon (;). Use only one key-value pair per line.
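For example, the following lines are all valid entries in a configuration file section; the names and values are purely illustrative:

APP_NAME = "Oracle ERP #1"
APP_INFO = 'host=ar-host-01; tier=production'
BASE_DIR = /opt/emc/protectpoint-1.0.0.X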

a. Modify the content in the General section. Table 10 on page 43 lists the details.

Table 10 General section

APP_NAME = application-name
Application name on the AR host containing the data that will be backed up.
Optional. When the data is backed up, this information is written to the Data Domain device-group and the static-image properties.

APP_INFO = application-info
Application information.
Optional. When the data is backed up, this information is written to the Data Domain device-group and the static-image properties.

APP_VERSION = application-version
Application version.
Optional. When the data is backed up, this information is written to the Data Domain device-group and the static-image properties.

BASE_DIR = install-path
Path that the ProtectPoint agent uses to save the RSA lockbox, the catalog files, and the log files. The install path used is /opt/emc/protectpoint-1.0.0.X/.
Mandatory. By default, the ProtectPoint Controller stores data in the following files in the install path:
- RSA lockbox: /lockbox/protectpoint.clb
- Primary Data Domain system catalog: /catalogs/primary-data-domain-system.db
- Secondary Data Domain system catalog: /catalogs/secondary-data-domain-system.db
- Log file: /logs/protectpoint.log

LOCKBOX_DIR = path
RSA lockbox directory.
Optional. By default, the ProtectPoint Controller saves the RSA lockbox file as protectpoint.clb in the path ${[GENERAL].BASE_DIR}/lockbox/protectpoint.clb.

CATALOG_DIR = path
Catalog directory.
Optional. By default, the ProtectPoint Controller saves the catalog files in the path ${[GENERAL].BASE_DIR}/catalog.

LOG_DIR = path
Log directory.
Optional. By default, the ProtectPoint Controller saves the log files in the path ${[GENERAL].BASE_DIR}/log.

LOGLEVEL = log-level
Log level.
Optional. By default, the log level value is 2. The possible values are:
1: Error
2: Error and warning
3: Error, warning, and information
4: Error, warning, information, and debug

LOGFILE_SIZE = file-size
Log file size in megabytes (MB).
Optional. By default, the maximum log file size is 4 MB.

LOGFILE_COUNT = number-of-files
Number of log files retained.
Optional. By default, 16 files are retained.

b. Modify the content in the Primary system section. Table 11 on page 44 lists the details.

Table 11 Primary system section

DD_SYSTEM = hostname/ip-address
Hostname or IP address of the primary Data Domain system used for backup.
Mandatory.

DD_PORT = port-number
Port number used to connect to the primary Data Domain system.
Optional. By default, the port number is 3009.

DD_USER = user-name
Owner of the vdisk pool.
Mandatory.

DD_POOL = pool-name
Name of the vdisk pool on the primary Data Domain system.
May be optional or mandatory:
- Optional for routine data backup; used for validating that all vdisk devices belong to the vdisk pool.
- Mandatory when replicating data from the secondary Data Domain system to the primary Data Domain system. Data is replicated from the secondary system to the primary system in the vdisk groups contained in this vdisk pool.

DD_DEVICE_GROUP = vdisk-device-group-name
Name of the vdisk device-group.
May be optional or mandatory:
- Optional for routine data backup; used for validating that all vdisk devices belong to the vdisk device-group.
- Mandatory when replicating data from the secondary Data Domain system to the primary Data Domain system. Data is replicated from the secondary system to the primary system in the vdisk device-group.

SYMID = VMAX-SymID
VMAX3 array identifier (SymID). Identifies the VMAX3 array containing the data requiring backup.
Optional; can be specified in other sections of the configuration file.
Note: ProtectPoint requires the full VMAX3 serial number with leading zeros.

c. Modify the content in the Primary device section. Table 12 on page 45 lists the details.

Create one set of key-value pairs in the Primary device section for each production device, using the naming convention PRIMARY_DEVICE_name/id with a unique identifier for the device, for example, PRIMARY_DEVICE_1, PRIMARY_DEVICE_2, and so on.

Table 12 Primary device section

SRC_SYMID = SymID
SymID for the source VMAX3 LUN.
Optional. By default, the value is ${[PRIMARY_SYSTEM].SYMID}.
Note: This is required if not already included in the Primary system section.

SRC_SYMDEVID = SymDevID
SymDevID for the production device.
Mandatory. This information is written to the metadata for the device and the static-image.

FTS_SYMID = SymID
SymID for the FTS-encapsulated Data Domain device.
Note: This is the same as the SRC_SYMID key.
Optional. By default, the value is ${[PRIMARY_SYSTEM].SYMID}.
Note: This is required if not already included in the Primary system section.

FTS_SYMDEVID = SymDevID
SymDevID for the FTS-encapsulated vdisk LUN (eLUN).
Mandatory. This information is written to the metadata for the device and the static-image.

DD_WWN = wwn
WWN of the Data Domain device used for data backup.
Mandatory.

d. Modify the content in the Primary restore device section. Table 13 on page 46 lists the details.

Create one set of key-value pairs in the Primary restore device section for each restore device, using the naming convention PRIMARY_SYSTEM_RESTORE_DEVICE_name/id with a unique identifier for the device, for example, PRIMARY_SYSTEM_RESTORE_DEVICE_1, PRIMARY_SYSTEM_RESTORE_DEVICE_2, and so on.

Table 13 Primary restore device section

DD_WWN = wwn
WWN of the Data Domain vdisk device used for data restoration.
Mandatory.

FTS_SYMDEVID = SymDevID
SymDevID for the FTS-encapsulated vdisk LUN (eLUN).
Mandatory. This information is written to the metadata for the device and the static-image.

e. (Optional) Modify the content in the Secondary system section. Table 14 on page 47 lists the details. This section is required for Data Domain static-image remote replication.

Table 14 Secondary system section (optional)

DD_SYSTEM = hostname/ip-address
Hostname or IP address of the secondary Data Domain system used for replication.
Mandatory.

DD_PORT = port-number
Port number used to connect to the secondary Data Domain system.
Optional. By default, the port number is 3009.

DD_USER = user-name
Name of the vdisk user on the secondary Data Domain system.
Mandatory. Name of the user who owns the vdisk pool on the secondary Data Domain system.

DD_POOL = pool-name
Name of the vdisk pool to replicate to on the secondary Data Domain system.
Mandatory. Backups are replicated to this vdisk pool (${[SECONDARY_SYSTEM].DD_POOL}) and this vdisk device-group (${[SECONDARY_SYSTEM].DD_DEVICE_GROUP}).

DD_DEVICE_GROUP = vdisk-device-group-name
Name of the vdisk device-group to replicate to on the secondary Data Domain system.
Mandatory. Backups are replicated to this vdisk pool (${[SECONDARY_SYSTEM].DD_POOL}) and this vdisk device-group (${[SECONDARY_SYSTEM].DD_DEVICE_GROUP}).

f. (Optional) Modify the Secondary restore device section. Table 15 on page 47 lists the details.

Create one set of key-value pairs in the Secondary restore device section for each restore device, using the naming convention SECONDARY_SYSTEM_RESTORE_DEVICE_name/id with a unique identifier for the device, for example, SECONDARY_SYSTEM_RESTORE_DEVICE_1, SECONDARY_SYSTEM_RESTORE_DEVICE_2, and so on.

Table 15 Secondary restore device section (optional)

DD_WWN = wwn
WWN of the vdisk restore device on the Data Domain system.
Mandatory.

The following example shows a modified configuration file.

######################################################################
# This is just a template file.
# Indentation is just for readability.
######################################################################
[GENERAL]
# APP_NAME is optional
APP_NAME =
# APP_VERSION is optional
APP_VERSION =
# APP_INFO is optional
APP_INFO =
# Give the absolute path of the base directory where catalog, log, and
# lockbox files should be generated by default
BASE_DIR =
# CATALOG_DIR is optional, default is ${[GENERAL].BASE_DIR}/catalog
# CATALOG_DIR =
# LOCKBOX_DIR is optional, default is ${[GENERAL].BASE_DIR}/lockbox
# LOCKBOX_DIR =
# LOG_DIR is optional, default value is ${[GENERAL].BASE_DIR}/log
# LOG_DIR =
# LOGLEVEL is optional, default value is 2, 2: error + warning,
# 3: error + warning + info, 4: error + warning + info + debug
# LOGLEVEL =
# LOGFILE_SIZE is optional, default value is 4 MB
# LOGFILE_SIZE =
# LOGFILE_COUNT is optional, by default 16 files will be kept
# LOGFILE_COUNT =

##################### Primary System #################################
# VMAX devices will be backed up to this system
[PRIMARY_SYSTEM]
# VMAX devices will be backed up to this DD system
DD_SYSTEM =
# DD_PORT is optional, default value is 3009
# DD_PORT =
# Owner of the vdisk pool on the primary DD system
DD_USER =
# DD_POOL is optional, used just for validation that all devices
# belong to this pool
# DD_POOL =
# DD_DEVICE_GROUP is optional, used just for validation that all
# devices belong to this device group
# DD_DEVICE_GROUP =
# Full VMAX3 serial number (with leading zeros) of the array that
# contains the data to back up
SYMID =

########### Primary Devices on Primary System ########################
# All sections with names starting with PRIMARY_DEVICE_ will be backed
# up on the primary DD, i.e. [PRIMARY_SYSTEM]

[PRIMARY_DEVICE_1]
# SRC_SYMID is optional, default is ${[PRIMARY_SYSTEM].SYMID}
# SRC_SYMID =
SRC_SYMDEVID =
# FTS_SYMID is optional, default value is ${[PRIMARY_SYSTEM].SYMID}
# FTS_SYMID =
FTS_SYMDEVID =
# WWN of the DD vdisk device used for data backup
DD_WWN =

[PRIMARY_DEVICE_2]
# SRC_SYMID is optional, default is ${[PRIMARY_SYSTEM].SYMID}
# SRC_SYMID =
SRC_SYMDEVID =
# FTS_SYMID is optional, default is ${[PRIMARY_SYSTEM].SYMID}
# FTS_SYMID =
FTS_SYMDEVID =
# WWN of the DD vdisk device used for data backup
DD_WWN =
######################################################################
############### Restore Devices on Primary System ####################
# All sections with names starting with PRIMARY_SYSTEM_RESTORE_DEVICE
# will be used to restore on the primary DD, i.e. [PRIMARY_SYSTEM]
# The total number of restore devices should be greater than or equal
# to the number of static-images in the backup & should have the exact
# geometry of the static-images in the backup
# [PRIMARY_SYSTEM_RESTORE_DEVICE_1]
# # FTS_SYMID is optional, default is ${[PRIMARY_SYSTEM].SYMID}
# # FTS_SYMID =
# # SymDevID of the FTS-encapsulated restore vdisk LUN (eLUN)
# FTS_SYMDEVID =
# # WWN of the DD vdisk device used for data restoration
# DD_WWN =
#
# [PRIMARY_SYSTEM_RESTORE_DEVICE_2]
# ... (same keys as PRIMARY_SYSTEM_RESTORE_DEVICE_1)
#
# [PRIMARY_SYSTEM_RESTORE_DEVICE_3]
# ... (same keys as PRIMARY_SYSTEM_RESTORE_DEVICE_1)
######################################################################
################## Secondary System ##################################
# Backups will be replicated/copied from the primary DD, i.e.
# [PRIMARY_SYSTEM], to this secondary system
#
# [SECONDARY_SYSTEM]
# # Hostname or IP address of the secondary DD system
# DD_SYSTEM =
# # DD_PORT is optional, default value is 3009
# # DD_PORT =
# # Owner of the vdisk pool on the secondary DD system
# DD_USER =
# # vdisk pool to replicate to on the secondary DD system
# DD_POOL =
# # vdisk device-group to replicate to on the secondary DD system
# DD_DEVICE_GROUP =
# # SYMID is optional if no restore device or FTS_SYMID is mentioned
# # in each restore device
# # SYMID =
########### Restore Devices on Secondary System ######################
# All sections with names starting with SECONDARY_SYSTEM_RESTORE_DEVICE
# will be used to restore on the secondary DD, i.e. [SECONDARY_SYSTEM]
# The total number of restore devices should be greater than or equal
# to the number of static-images in the backup & should have the exact
# geometry of the static-images in the backup
# [SECONDARY_SYSTEM_RESTORE_DEVICE_1]
# # FTS_SYMID is optional, default is ${[SECONDARY_SYSTEM].SYMID}
# # FTS_SYMID =
# # SymDevID of the FTS-encapsulated restore vdisk LUN (eLUN)
# FTS_SYMDEVID =
# # WWN of the DD vdisk device used for data restoration
# DD_WWN =
#
# [SECONDARY_SYSTEM_RESTORE_DEVICE_2]
# ... (same keys as SECONDARY_SYSTEM_RESTORE_DEVICE_1)
#
# [SECONDARY_SYSTEM_RESTORE_DEVICE_3]
# ... (same keys as SECONDARY_SYSTEM_RESTORE_DEVICE_1)
######################################################################

10. Store the Data Domain user credentials securely.

Run the following command: protectpoint security add dd-credentials [dd-system {primary | secondary}] [user <user-name>] [config-file <file-path>]

protectpoint security add dd-credentials config-file protectpoint.config

Validating the configuration file

Validate the content and format of the configuration file that you modified when you set up the ProtectPoint Controller on the AR host.

Before you complete this task, run the following command to save either the primary or secondary Data Domain system credentials to the RSA lockbox: protectpoint security add dd-credentials [dd-system {primary | secondary}] [user <user-name>] [config-file <file-path>]

Procedure

1. Log in to the AR host as a system administrator, such as root for Linux or UNIX systems.

2. Start the validation process.

Run the following command: protectpoint config validate [config-file <file-path>]

protectpoint config validate
Validating host requirements............................[OK]
Validating Primary System:
 Connection Information..............................[OK]
 Backup Devices are in same Data Domain Device Group.[OK]
 Backup Devices are unique...........................[OK]
 Backup Device's VMAX & DD Device Configuration......[OK]
 Restore Devices are in same Data Domain Device Group[OK]
 Restore Devices are unique..........................[OK]
 Restore Device's VMAX & DD Device Configuration.....[OK]
 Replication License.................................[OK]
Validating Secondary System:
 Connection Information..............................[OK]
 Replication Device Group............................[OK]
 Restore Devices are in same Data Domain Device Group[OK]
 Restore Devices are unique..........................[OK]
 Restore Device's VMAX & DD Device Configuration.....[OK]
 Replication License.................................[OK]
Validating Primary and Secondary System are different...[OK]

Configuration is valid.


3. Create and back up a snapshot to verify the ProtectPoint implementation.

a. Log in to the AR host as a system administrator, such as root for Linux or UNIX systems.

b. Run the appropriate host-specific command to quiesce the application.

c. Use the ProtectPoint CLI to take a snapshot of the devices holding the data on the VMAX3 array.

Run the following command: protectpoint snapshot create config-file <file-path>

protectpoint snapshot create config-file protectpoint.config

d. Run the appropriate host-specific command to unquiesce the application.

e. Establish the relationship between the VMAX3 array and Data Domain system, and activate the snapshot.

Run the following command: symsnapvx -sid <SymmID> -dg <device-group-name> establish -name <snapshot-name>

symsnapvx -sid 0129 -dg device-group-5 establish -name dg5-snap

f. Move the data from the VMAX3 array on to the Data Domain system. This command moves the changed blocks on the source device to the target device.

Run the following command: symsnapvx -sid <SymmID> -dg <device-group-name> link -copy -snapshot_name <snapshot-name>

symsnapvx -sid 0129 -dg device-group-5 link -copy -snapshot_name dg5-snap

g. Check the status of the link copy operation.

Note

VMAX3 symsnapvx commands are asynchronous. Verify that all symsnapvx link -copy operations are complete before you perform a backup; otherwise the backup will fail.

Run the following command: symsnapvx list -detail -sid <SymmID>

symsnapvx list -detail -sid 0129

h. Use the ProtectPoint CLI to create the SnapVX backup on the Data Domain system.

Run the following command: protectpoint backup create description "<description>" config-file <file-path>

protectpoint backup create description "backup from server 1" config-file protectpoint.config

Note

If NSM returns the error message "SYMAPI_C_SNAPSHOT_NOT_FOUND", one or more of the source LUNs being operated on is missing its required NSM SnapVX snapshot. Create the required snapshot and retry the backup operation.


CHAPTER 4

ProtectPoint Administration

This chapter includes the following topics:

- ProtectPoint administration overview on page 54
- ProtectPoint file systems on page 54


ProtectPoint administration overview

The application administrator typically performs the following tasks by using the ProtectPoint configuration file and ProtectPoint commands:

- Backing up data

- Restoring data (object-level restore or a full-application rollback restore)

- Listing and deleting backups

- Replicating backups

- Rebuilding (refreshing) the catalog

Application changes

If you make changes to an application, such as adding or deleting table space in a database, you must verify that the changes are duplicated on the VMAX3 array and the Data Domain system by adding or deleting LUNs in the ProtectPoint configuration. The following issues may occur if changes are not accurately reflected across the ProtectPoint implementation:

- Unneeded data may continue to be backed up and consume available storage at a higher-than-expected rate.

- Data on LUNs added to the environment but not to the ProtectPoint configuration might not get backed up.

ProtectPoint file systems

This section describes how to perform backup and recovery tasks with the ProtectPoint Controller.

Specifying the ProtectPoint configuration file

You can specify the ProtectPoint configuration file in one of the following ways:

- Use the config-file keyword and the file-path argument with the protectpoint command to use your modified configuration file.

- Use the PP_CONFIG_FILE environment variable.

- Use the default configuration file, protectpoint.config, in the current working directory.

Note

If you do not specify a ProtectPoint configuration file, the default file is used if the working directory is set to the ProtectPoint config directory.
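For example, the three methods might look as follows on a Linux AR host; the path and file name are illustrative:

# 1. Explicit config-file argument
protectpoint backup show list config-file /opt/emc/protectpoint-1.0.0.X/config/oracle.config
# 2. Environment variable (Bourne-style shell assumed)
export PP_CONFIG_FILE=/opt/emc/protectpoint-1.0.0.X/config/oracle.config
protectpoint backup show list
# 3. Default file in the current working directory
cd /opt/emc/protectpoint-1.0.0.X/config && protectpoint backup show list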

Performing a backup

Before you begin

Create a new SnapVX snapshot before you perform a backup operation to ensure that the most recent changes to the source devices are backed up.


Note

VMAX3 symsnapvx commands are asynchronous. Verify that all symsnapvx link -copy operations are complete before you perform a backup; otherwise the backup will fail.

In this procedure, the application administrator uses a combination of the application or host-specific commands and the ProtectPoint CLI to back up the data. The first time a backup is performed, the entire source or production LUN is backed up. For subsequent backups, only the changed data is backed up; that is, the backups are incremental at the block level. However, if the SnapVX session has to be re-created after a restore operation that overwrites the original source devices, the next backup operation will include the entire source production LUN.

The ProtectPoint Controller creates snapshots on the VMAX3 array as part of the backup operation. These snapshots are named NSM_SNAPVX. ProtectPoint manages the generations of the VMAX3 SnapVX snapshot named NSM_SNAPVX according to the existing VMAX3 backup and snapshot schedules.

Note

Do not manually create any SnapVX snapshots named NSM_SNAPVX. ProtectPoint will terminate all snapshots with this name.

The protectpoint backup create command relinks the snapshot with the most recent generation (gen0) created by the snapshot create command, and deletes the previously linked snapshot. When the operation is complete, only one NSM_SNAPVX snapshot should exist.
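To confirm that only one NSM_SNAPVX generation remains after a backup, you can filter the snapshot list; a minimal sketch, assuming SymmID 0129:

# List all SnapVX snapshots on the array and keep only the NSM_SNAPVX entries
symsnapvx list -sid 0129 -detail | grep NSM_SNAPVX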

Procedure

1. Log in to the AR host as a system administrator, such as root for Linux or UNIX systems.

2. Run the appropriate host-specific command to quiesce the application.

3. Use the ProtectPoint CLI to take a snapshot of the devices holding the data on the VMAX3 array.

Run the following command: protectpoint snapshot create config-file <file-path>

protectpoint snapshot create config-file protectpoint.config

4. Run the appropriate host-specific command to unquiesce the application.

5. For the first backup, verify that there is an existing snapshot created and linked/copied prior to running protectpoint backup. Skip this step on subsequent backup operations.

Run the following command: symsnapvx -sid <SymmID> -f <device-file> -snapshot_name <snapshot-name> list -linked -copied -detail

symsnapvx -sid 0129 -f primarylun.out -snapshot_name snap1 list -linked -copied -detail

6. Use the ProtectPoint CLI to create the SnapVX backup on the Data Domain system.

Run the following command: protectpoint backup create description "<description>" config-file <file-path>

protectpoint backup create description "backup from server 1" config-file protectpoint.config

Note

If NSM returns the error message "SYMAPI_C_SNAPSHOT_NOT_FOUND", one or more of the source LUNs being operated on is missing its required NSM SnapVX snapshot. Create the required snapshot and retry the backup operation.

If you type Ctrl + C to abort a backup, the source devices on the VMAX3 may remain in a locked state. This will cause subsequent backups of these devices to fail with the error message Create backup failed: Error relinking snapvx snapshot to target: Unable to perform action Relink on SnapVX snapshot, error SYMAPI_C_DEV_LOCK_CANT_ACQUIRE, devices: , first device , name [NSM_SNAPVX]. Complete the following steps to determine if the VMAX3 devices are locked and release the locks.

a. Run the following command to list all the locked VMAX3 devices: symdev list -lock

b. Run the following command to unlock the VMAX3 devices: symdev release -lock -nop -sid <SymmID>

symdev release -lock -nop -sid 0129
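Putting the procedure together, a routine backup reduces to a short command sequence. The following is a minimal sketch, assuming SymmID 0129 and a configuration file named oracle.config; the quiesce and unquiesce steps are application-specific placeholders:

# 1. Quiesce the application (application-specific; for example, Oracle hot backup mode)
# 2. Take the VMAX3 snapshot of the production devices
protectpoint snapshot create config-file oracle.config
# 3. Unquiesce the application (application-specific)
# 4. Confirm that all link -copy operations are complete
symsnapvx list -detail -sid 0129
# 5. Create the backup (static-images) on the Data Domain system
protectpoint backup create description "nightly oracle backup" config-file oracle.config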

Restoring a backup

Before you begin

Obtain the ProtectPoint backup ID and the location, on the primary or secondary Data Domain system, for the backup you want to restore.

Note

The Data Domain restore devices must be encapsulated on the VMAX3 array in order to restore a backup.

In this procedure, the application administrator uses a combination of the application or host-specific commands and the ProtectPoint CLI to restore the data.

Procedure

1. Log in to the AR host as a system administrator, such as root for Linux or UNIX systems.

2. View the list of backups previously completed on the primary or secondary Data Domain systems.

Run the following command: protectpoint backup show list [dd-system {primary | secondary}] [{last <count> {days | weeks | months}} | {from <MMDDhhmm> [[<CC>]<YY>] [to <MMDDhhmm> [[<CC>]<YY>]]}] [status {complete | in-progress | failed | partial}] [config-file <file-path>]

If you do not specify any of the optional keywords, the system displays information about all the existing and in-progress backups.

protectpoint backup show list last 2 days


3. Prepare the selected backup for restore on the Data Domain system.

Run the following command: protectpoint restore prepare backup-id <backup-id> [dd-system {primary | secondary}] [config-file <file-path>]

protectpoint restore prepare backup-id 5c318f96-e224-3e40-fba4-250da1dd1dc9 config-file protectpoint.config

Note

The protectpoint restore prepare command overwrites the contents of the encapsulated restore devices with the specified backup.

4. Choose the type of restore operation to perform, and follow the appropriate steps.

Object-level restore

Complete the following steps for an object-level restore.

Procedure

1. Use the application or host-specific tools or commands and Solutions Enabler to provision and mount the encapsulated restore devices on the AR host.

2. Use the application or host-specific tool or commands to identify and restore the object-level data as appropriate.

Full-application rollback restore

Complete the following steps for a full-application rollback restore.

Note

A full-application rollback restore overwrites the contents of the target LUNs.

Procedure

1. Shut down all applications and unmount/deport all LVM structures that access the production devices.

2. Make the production devices unavailable to users.

3. From the VMAX3 array, use the symsnapvx commands to establish a snapshot session with the encapsulated restore devices on the Data Domain system as the source, and the VMAX3 devices as the target.

4. Link/copy the new snapshot to the target devices specified in the previous step to copy the content of the encapsulated restore devices to the original source devices.

Change the link copy target from the VMAX3 restore devices to the VMAX3 production devices to restore data directly back to the VMAX3 production devices.

5. Use Solutions Enabler to present the newly restored source devices to the host if they are not already masked, and import LVM structures as needed.

6. Recover the application, and perform any appropriate application-specific validation.
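The symsnapvx portion of the rollback maps onto the same commands used during backup, with the data flowing in the opposite direction. A minimal sketch, assuming SymmID 0129 and a hypothetical device group restore-group-5 whose source devices are the encapsulated restore devices and whose link targets are the production devices:

# Snapshot the encapsulated restore devices (now holding the prepared backup)
symsnapvx -sid 0129 -dg restore-group-5 establish -name dg5-restore
# Link/copy the snapshot back onto the production devices
symsnapvx -sid 0129 -dg restore-group-5 link -copy -snapshot_name dg5-restore
# Wait for the copy to complete before recovering the application
symsnapvx list -detail -sid 0129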

Restoring a backup to a different VMAX3 array

It is possible to restore a backup stored on the Data Domain system to a different VMAX3 array. Complete the following steps.


Procedure

1. Configure SAN zoning so the Data Domain system can communicate with the new VMAX3 array.

2. If necessary, set up the ProtectPoint Controller on a new host that has access to the new VMAX3 array as described in Setting up the ProtectPoint Controller on page 40.

3. If necessary, create new vdisk devices on the Data Domain system as described in Encapsulating Data Domain devices on the VMAX3 array on page 30.

4. Encapsulate the restore target devices on the new VMAX3 array as described in Encapsulating Data Domain devices on the VMAX3 array on page 30.

5. Complete the following changes in the ProtectPoint configuration file.

a. Specify the Symmetrix ID for the new VMAX3 array as the primary restore destination.

b. If the original VMAX3 array is inaccessible to the host where the restore operation is being performed, comment out the backup devices in the configuration file.

Setting up the ProtectPoint Controller on page 40 provides more information about editing the ProtectPoint configuration file.

6. If necessary, copy the ProtectPoint configuration file to the new host.

7. Update the ProtectPoint catalog as described in Managing the catalog on page 65.

8. Verify the required backup appears as described in Managing the catalog on page 65.

9. Initiate the restore operation as described in Restoring a backup on page 56.

Replicating a backup

Before you begin

Before you start the replication, you need the ProtectPoint backup ID to be replicated.

Note

You can only have one active replication session for a backup set at a time.

This task allows you to replicate the data from one Data Domain system to another (primary or secondary) Data Domain system. This task also allows you to do the following:

- View the history of any previously completed replications.

- Stop the replication process.

- Display replication history information.

If there are no open data streams when ProtectPoint tries to replicate data, the ProtectPoint Controller waits for a data stream to become available before continuing.

Procedure

1. Log in to the AR host as a system administrator, such as root for Linux or UNIX systems.

2. View the list of backups previously completed on the primary or secondary Data Domain systems.

Run the following command: protectpoint backup show list [dd-system {primary | secondary}] [{last <count> {days | weeks | months}} | {from <MMDDhhmm> [[<CC>]<YY>] [to <MMDDhhmm> [[<CC>]<YY>]]}] [status {complete | in-progress | failed | partial}] [config-file <file-path>]

If you do not specify any of the optional keywords, the system displays information about all the existing and in-progress backups.

protectpoint backup show list last 2 days

3. Start the replication process.

Run the following command: protectpoint replication run backup-id <backup-id> [source-dd-system {primary | secondary}] [config-file <file-path>]

protectpoint replication run backup-id 8a1603ab-7b9f-6fb6-f05e-1f00f7c87f07 config-file protectpoint.config

4. (Optional) Stop the replication process. This allows you to stop any in-progress or partially complete replications. If a replication is complete, it is not deleted.

a. From the terminal window where the replication session is running, type Ctrl + C to terminate the protectpoint replication command.

b. Wait three minutes and run the following command: protectpoint replication abort [source-dd-system {primary | secondary}] [config-file <file-path>]

protectpoint replication abort config-file protectpoint.config

Note

The EMC Data Domain Operating System Administration Guide provides more information about replication.

5. (Optional) Display replication history information.

Run the following command: protectpoint replication show list [source-dd-system {primary | secondary}] [{last <count> {days | weeks | months}} | {from <MMDDhhmm> [[<CC>]<YY>] [to <MMDDhhmm> [[<CC>]<YY>]]}] [config-file <file-path>]

protectpoint replication show list

6. (Optional) Display detailed replication history information.

Run the following command: protectpoint replication show detailed backup-id <backup-id> [source-dd-system {primary | secondary}] [config-file <file-path>]

protectpoint replication show detailed backup-id cdcda305-c867-08f3-d3b2-ca03564e41ee

Backup id: cdcda305-c867-08f3-d3b2-ca03564e41ee
Replication start time: 2014-01-17 11:02:43
Replication end time: 2014-01-17 11:03:22
Replication duration: 00:59:59 (hh:mm:ss)
Status: failed
Static images:

-------- ---------------------------------------- -------------------
Image    Static-image                             Status
Sequence
-------- ---------------------------------------- -------------------
1        0400002ddb6a052d97be70001200000000000000 complete
2        0400002ddb6a052d97be70001400000000000001 complete
3        0400002ddb6a052d97be70001600000000000002 failed
4        0400002ddb6a052d97be70001600000000000003 aborted
-------- ---------------------------------------- -------------------

Note

The EMC Data Domain Operating System Administration Guide provides more information about replication.
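In scripted use, the run and show detailed commands pair naturally: replicate a backup, then confirm that every static-image reached complete status. A minimal sketch, assuming a shell variable holding the backup ID and a configuration file named protectpoint.config:

BID=8a1603ab-7b9f-6fb6-f05e-1f00f7c87f07
protectpoint replication run backup-id $BID config-file protectpoint.config
# Inspect the per-image status; rerun the replication if any image failed or aborted
protectpoint replication show detailed backup-id $BID config-file protectpoint.config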

Deleting a backup

The application administrator views the list of completed backups on either the primary or secondary Data Domain system, and then deletes a specific backup, as appropriate.

Procedure

1. Log in to the AR host as a system administrator, such as root for Linux or UNIX systems.

2. Display the list of backups completed on either the primary or secondary Data Domain system.

Run the following command: protectpoint backup show list [dd-system {primary | secondary}] [{last <count> {days | weeks | months}} | {from <MMDDhhmm> [[<CC>]<YY>] [to <MMDDhhmm> [[<CC>]<YY>]]}] [status {complete | in-progress | failed | partial}] [config-file <file-path>]

protectpoint backup show list last 2 days

3. (Optional) Display detailed information about a specific backup on either the primary or secondary Data Domain system.

Run the following command: protectpoint backup show detailed backup-id <backup-id> [dd-system {primary | secondary}] [config-file <file-path>]

protectpoint backup show detailed backup-id cdcda305-c867-08f3-d3b2-ca03564e41ee

Backup id: cdcda305-c867-08f3-d3b2-ca03564e41ee
Backup start time: 2014-01-17 11:02:43
Backup end time: 2014-01-17 11:03:22
Backup duration: 00:59:59 (hh:mm:ss)
Status: complete
Description: Oracle Host1
Static images:
 Expected count: 3
 Present in catalog: 3
 Present on DD system: 3

-------- ---------------------------------------- -------------------
Image    Static-image                             Present on
Sequence                                          DD System
-------- ---------------------------------------- -------------------
1        0400002ddb6a052d97be70001200000000000000 yes
2        0400002ddb6a052d97be70001400000000000001 yes
3        0400002ddb6a052d97be70001600000000000002 yes
-------- ---------------------------------------- -------------------

4. Delete a specific backup from either the primary or secondary Data Domain system.

Run the following command: protectpoint backup delete backup-id <backup-id> [dd-system {primary | secondary}] [config-file <file-path>]

protectpoint backup delete backup-id cdcda305-c867-08f3-d3b2-ca03564e41ee

Deleting a replicated backup copy from the secondary Data Domain system

The purpose of replication is to create a copy of the original backup. The ProtectPoint Controller is designed so that deleting the original backup does not delete the replicated copy of that backup on the secondary Data Domain system.

Procedure

1. Create a catalog of the backups stored on the secondary Data Domain system.

Run the following command: protectpoint catalog update dd-system secondary

2. Delete a specific backup from the secondary Data Domain system.

Run the following command: protectpoint backup delete backup-id <backup-id> dd-system secondary

protectpoint backup delete backup-id cdcda305-c867-08f3-d3b2-ca03564e41ee dd-system secondary

Rebuilding the Catalog

The ProtectPoint Controller creates and maintains a backup catalog for each Data Domain system to which backups are written. In normal operating circumstances, this catalog accurately reflects the backups available for restore on the primary Data Domain system. In the rare circumstance that this catalog is deleted or becomes corrupted, it can be rebuilt by reading the metadata in the vdisk objects on the primary Data Domain system. The protectpoint catalog update command rebuilds the catalog.

The backup catalog showing the backups replicated to the secondary Data Domain system is not updated automatically, but can be created or refreshed by running the protectpoint catalog update dd-system secondary command.
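For example, after a failover you might refresh the secondary catalog and then list the replicated backups; a minimal sketch, assuming a configuration file named protectpoint.config:

# Rebuild the catalog of backups replicated to the secondary Data Domain system
protectpoint catalog update dd-system secondary config-file protectpoint.config
# Confirm that the replicated backups now appear
protectpoint backup show list dd-system secondary config-file protectpoint.config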

To rebuild (refresh) the catalog, complete the following steps.

Note

The catalog is rebuilt from Data Domain vdisk objects identified by information in the ProtectPoint configuration file, namely the DD_SYSTEM, DD_POOL, and DD_DEVICE_GROUP in the PRIMARY_SYSTEM or SECONDARY_SYSTEM sections, depending on the option selection.

Procedure

1. Log in to the AR host as a system administrator, such as root for Linux or UNIX systems.

2. Create or rebuild (refresh) the backup catalog.

Run the following command: protectpoint catalog update [dd-system {primary | secondary}] [config-file <file-path>]

protectpoint catalog update


CHAPTER 5

ProtectPoint CLI Options

This chapter includes the following topics:

- ProtectPoint CLI options overview on page 64
- Managing the credentials on page 64
- Managing the catalog on page 65
- Managing a backup on page 65
- Managing replications on page 67
- Showing the ProtectPoint Controller version on page 69


ProtectPoint CLI options overview

Once you have modified and optionally renamed the default configuration file to meet the needs of your devices and topology, you can use the modified configuration file and the ProtectPoint CLI to perform the following tasks:

- Manage credentials

- Manage the catalog

- Manage backups

- Manage replications

- Show the ProtectPoint Controller version

- Validate the content and format of the configuration file

Specifying the ProtectPoint configuration file

You can specify the ProtectPoint configuration file in one of the following ways:

- Use the config-file keyword and the file-path argument with the protectpoint command to use your modified configuration file.

- Use the PP_CONFIG_FILE environment variable.

- Use the default configuration file, protectpoint.config, in the current working directory.

Note

If you do not specify a ProtectPoint configuration file, the default file is used if the working directory is set to the ProtectPoint config directory.

Managing the credentials

Procedure

1. Log in to the AR host as a system administrator, such as root for Linux or UNIX systems.

2. Add the security credentials for either the primary or secondary Data Domain system to the RSA lockbox.

Run the following command: protectpoint security add dd-credentials [dd-system {primary | secondary}] [config-file <file-path>]

protectpoint security add dd-credentials

3. (Optional) Remove the credentials from the RSA lockbox.

Run the following command: protectpoint security del dd-credentials [dd-system {primary | secondary}] [config-file <file-path>]

protectpoint security del dd-credentials

4. (Optional) Add an additional AR host for recovery or redundancy to the RSA lockbox access list.

Run the following command: protectpoint security access add host <host-name> [config-file <file-path>]

protectpoint security access add host apphost03

5. (Optional) Remove an AR host from the RSA lockbox access list.

Run the following command: protectpoint security access remove host <host-name> [config-file <file-path>]

protectpoint security access remove host apphost03

6. (Optional) Display information about AR host access.

Run the following command: protectpoint security access show [config-file <file-path>]

Security Access List: user1-dl.datadomain.com

Managing the catalog

In this procedure, the ProtectPoint Controller creates or refreshes the backup catalog on either the primary or secondary Data Domain system.

Note

The catalog is rebuilt from Data Domain vdisk objects identified by information in the ProtectPoint configuration file, namely the DD_SYSTEM, DD_POOL, and DD_DEVICE_GROUP in the PRIMARY_SYSTEM or SECONDARY_SYSTEM sections, depending on the option selection.

Procedure

1. Log in to the AR host as a system administrator, such as root for Linux or UNIX systems.

2. Create or rebuild (refresh) the backup catalog.

Run the following command: protectpoint catalog update [dd-system {primary | secondary}] [config-file <file-path>]

protectpoint catalog update

Managing a backup

Before you begin

Note

VMAX3 symsnapvx commands are asynchronous. Verify that all symsnapvx link -copy operations are complete before you perform a backup; otherwise the backup will fail.

Procedure

1. Log in to the AR host as a system administrator, such as root for Linux or UNIX systems.


2. Create a snapshot of the application data.

Run the following command: protectpoint snapshot create

3. Initiate the backup process.

This creates a backup on the primary Data Domain system, moves the VMAX3 data in the snapshot to the Data Domain system, and creates the static-images on the Data Domain system.

Run the following command: protectpoint backup create description "<description>" [config-file <file-path>]

protectpoint backup create description "backups on server 1" config-file protectpoint.config

Note

If NSM returns the error message "SYMAPI_C_SNAPSHOT_NOT_FOUND", one or more of the source LUNs being operated on is missing its required NSM SnapVX snapshot. Create the required snapshot and retry the backup operation.

If you type Ctrl + C to abort a backup, the source devices on the VMAX3 may remain in a locked state. This will cause subsequent backups of these devices to fail with the error message Create backup failed: Error relinking snapvx snapshot to target: Unable to perform action Relink on SnapVX snapshot, error SYMAPI_C_DEV_LOCK_CANT_ACQUIRE, devices: , first device , name [NSM_SNAPVX]. Complete the following steps to determine if the VMAX3 devices are locked and release the locks.

a. Run the following command to list all the locked VMAX3 devices: symdev list -lock

b. Run the following command to unlock the VMAX3 devices: symdev release -lock -nop -sid <SymmID>

symdev release -lock -nop -sid 0129

4. (Optional) View the list of backups previously completed on the primary or secondary Data Domain systems.

Run the following command: protectpoint backup show list [dd-system {primary | secondary}] [{last <count> {days | weeks | months}} | {from <MMDDhhmm> [[<CC>]<YY>] [to <MMDDhhmm> [[<CC>]<YY>]]}] [status {complete | in-progress | failed | partial}] [config-file <file-path>]

Note

The protectpoint backup show command will fail with the message Unable to open catalog file /emc/protectpoint-1.0.0.X/catalog/{dd_hostname}.db if the command is run before the first backup or before a ProtectPoint catalog update operation is executed.

protectpoint backup show list last 2 days


5. (Optional) View and validate the details about the backups previously completed on the primary or secondary Data Domain systems.

Run the following command: protectpoint backup show detailed backup-id <backup-id> [dd-system {primary | secondary}] [config-file <file-path>]

protectpoint backup show detailed backup-id cdcda305-c867-08f3-d3b2-ca03564e41ee

Backup id: cdcda305-c867-08f3-d3b2-ca03564e41ee
Backup start time: 2014-01-17 11:02:43
Backup end time: 2014-01-17 11:03:22
Backup duration: 00:59:59 (hh:mm:ss)
Status: complete
Description: Oracle Host1
Static images:
 Expected count: 3
 Present in catalog: 3
 Present on DD system: 3

-------- ---------------------------------------- -------------------
Image    Static-image                             Present on
Sequence                                          DD System
-------- ---------------------------------------- -------------------
1        0400002ddb6a052d97be70001200000000000000 yes
2        0400002ddb6a052d97be70001400000000000001 yes
3        0400002ddb6a052d97be70001600000000000002 yes
-------- ---------------------------------------- -------------------

6. (Optional) Prepare to restore backups from the primary or secondary Data Domain systems.

The following command prepares the data for restore but does not trigger the restore process. The system administrator or other appropriate user must perform the actual restore by using the host-specific commands, as applicable.

Run the following command: protectpoint restore prepare backup-id <backup-id> [dd-system {primary | secondary}] [config-file <file-path>]

protectpoint restore prepare backup-id cdcda309-c867-08f5-d3b2-ca03564e41ff

7. (Optional) Delete the backups from the primary or secondary Data Domain systems.

Run the following command: protectpoint backup delete backup-id <backup-id> [dd-system {primary | secondary}] [config-file <file-path>]

protectpoint backup delete backup-id cdcda305-c867-08f3-d3b2-ca03564e41ee

Managing replications

Before you begin

Before you start a replication session, you need the ProtectPoint backup ID to be replicated.

Note

You can only have one active replication session for a backup set at a time.

This task enables you to complete the following operations:

ProtectPoint CLI Options

Managing replications 67

- Replicate data from one Data Domain system to another Data Domain system (primary or secondary).

- View the status of the replication process.

- View the history of completed replications.

- Stop the replication process.

If there are no open data streams when ProtectPoint tries to replicate data, the ProtectPoint Controller waits for a data stream to become available before continuing.

Procedure

1. Log in to the AR Host as a system administrator, such as root for Linux or UNIX systems.

2. Start the replication process.

Run the following command: protectpoint replication run backup-id <backup-id> [source-dd-system {primary | secondary}] [config-file <file-path>]

protectpoint replication run backup-id e197e2ff-3960-55db-9919-b494b44520dd

3. (Optional) View the replication history. The following command displays the replication history along with replication identification numbers.

Run the following command: protectpoint replication show list [source-dd-system {primary | secondary}] [{last <count> {days | weeks | months}} | {from <MMDDhhmm> [[<CC>]<YY>] [to <MMDDhhmm> [[<CC>]<YY>]]}] [config-file <file-path>]

protectpoint replication show list

4. (Optional) View detailed information about the replication history.

Run the following command: protectpoint replication show detailed backup-id <backup-id> [source-dd-system {primary | secondary}] [config-file <file-path>]

protectpoint replication show detailed backup-id cdcda305-c867-08f3-d3b2-ca03564e41ee

Backup id: cdcda305-c867-08f3-d3b2-ca03564e41ee
Replication start time: 2014-01-17 11:02:43
Replication end time: 2014-01-17 11:03:22
Replication duration: 00:59:59 (hh:mm:ss)
Status: failed
Static images:

-------- ---------------------------------------- -------------------
Image    Static-image                             Status
Sequence
-------- ---------------------------------------- -------------------
1        0400002ddb6a052d97be70001200000000000000 complete
2        0400002ddb6a052d97be70001400000000000001 complete
3        0400002ddb6a052d97be70001600000000000002 failed
4        0400002ddb6a052d97be70001600000000000003 aborted
-------- ---------------------------------------- -------------------

5. (Optional) Stop the replication process. The following commands enable you to stop any in-progress or partially complete replications. If a replication is complete, then it is not deleted.


a. From the terminal window where the replication session is running, type Ctrl + C to terminate the protectpoint replication command.

b. Wait three minutes and run the following command: protectpoint replication abort [source-dd-system {primary | secondary}] [config-file <file-path>]

protectpoint replication abort

Note

The EMC Data Domain Operating System Administration Guide provides more information about replication.

Showing the ProtectPoint Controller version

Use this task to view the version of the ProtectPoint Controller.

Procedure

1. Log in to the AR host as a system administrator, such as root for Linux or UNIX systems.

2. Show the ProtectPoint Controller version in use.

Run the following command: protectpoint show version

version: 1.0.0.X-{arch}


CHAPTER 6

Troubleshooting

This chapter includes the following topics:

- ProtectPoint log file on page 72
- Check connectivity in the ProtectPoint environment on page 72
- ProtectPoint troubleshooting scenarios on page 72


ProtectPoint log file

The ProtectPoint log file, protectpoint.log, is located in the install-directory/protectpoint/logs directory. The log shows information, error, and audit messages captured by the ProtectPoint Controller.

By default, the log level value is 2. The possible values are as follows:

1: Error
2: Error and warning
3: Error, warning, and information
4: Error, warning, information, and debug
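When troubleshooting, it can help to raise the log level and retention temporarily and then reproduce the failure. A minimal sketch of the relevant GENERAL-section settings; the values shown are illustrative, not recommendations:

[GENERAL]
# Capture error, warning, information, and debug messages
LOGLEVEL = 4
# Allow larger log files and keep more of them while debugging
LOGFILE_SIZE = 8
LOGFILE_COUNT = 32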

Check connectivity in the ProtectPoint environment

If there is a problem in the ProtectPoint environment, check the connectivity between the solution components to verify that all the components are communicating with each other. Table 16 on page 72 indicates the type of connection used by the components in the ProtectPoint environment.

Table 16 ProtectPoint Connectivity

Connected Components                                          Connection Type

Primary application host to primary VMAX3 array               FC SAN
Primary application host to primary Data Domain system        IP LAN
Primary recovery host to primary VMAX3 array                  FC SAN
Primary recovery host to primary Data Domain system           IP LAN
Primary VMAX3 array to primary Data Domain system             FC SAN
Secondary recovery host to secondary VMAX3 array              FC SAN
Secondary recovery host to secondary Data Domain system       IP LAN
Secondary VMAX3 array to secondary Data Domain system         FC SAN
Primary application host to secondary Data Domain system      IP WAN
Primary Data Domain system to secondary Data Domain system    IP WAN

ProtectPoint troubleshooting scenarios

The following sections list some potential troubleshooting scenarios and steps to correct the issues.

Failure of a host at the primary site

A failure has occurred on the AR host at the primary site, and a new AR host is brought online at the primary site to replace it. The application administrator would like to continue to leverage the workflow for data protection.

Before you begin

Verify the following elements of the configuration:


- Connectivity has been established between the VMAX3 array and the new AR host.

- Connectivity has been established between the new AR host and the Data Domain storage array.

Complete the following high-level tasks.

Procedure

1. Update the masking view for the VMAX3 source devices if the WWN values have changed; a sketch of this step follows the procedure.

2. If necessary, re-create the configuration on the new AR host.
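Step 1 is typically performed with Solutions Enabler on the VMAX3 array. A hedged sketch, assuming an existing initiator group named ar_host_ig; the Symmetrix ID and WWNs are placeholders:

# Swap the failed host's HBA WWN for the new AR host's WWN in the
# initiator group; the masking view that references this group picks
# up the change automatically.
symaccess -sid 0123 -type initiator -name ar_host_ig remove -wwn 10000000c9111111
symaccess -sid 0123 -type initiator -name ar_host_ig add -wwn 10000000c9222222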

Failure of host with a new host on the secondary site

A failure has occurred on the AR host at the primary site, and a new AR host is brought online at the secondary site. The application administrator wants to continue using the existing data protection workflow.

Treat this as a failure of the primary and protection storage at the primary site, and initiate a full failover to the secondary site.

Primary site failure (both primary and protection storage)

A failure has occurred that disables the primary site, affecting both the storage systems and the AR host. The decision has been made to fail over to the secondary site. Restore LUNs and restore devices are already configured on the secondary VMAX3 array and Data Domain system.

Before you begin

Verify the following elements of the configuration:

- There is a replicated copy of the primary data on the secondary site.

- There is a replicated copy of the backups on the secondary site.

- There is an AR host on the secondary site that can be used to run the application.

- The WWN for the new production AR host has been provided.

Complete the following high-level tasks.

Procedure

1. Set the replicated copy on the secondary VMAX3 array as the production copy.

2. Create the backup LUNs on the Data Domain system, matching the geometry of the source devices in the appropriate access group (see the sketch after this procedure).

3. Encapsulate the Data Domain backup LUNs on the VMAX3 array. These LUNs will be referred to as the Data Domain backup target devices.

4. Create the SnapVX snapshots of the source devices.

5. Link the SnapVX snapshots to the backup target devices.

6. Mask the source devices to the new production host.

7. Re-create the configuration on the new host.
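Steps 2, 4, and 5 can be illustrated with the Data Domain vdisk CLI and Solutions Enabler SnapVX commands. Everything below (capacity, pool, device-group, storage-group names, and Symmetrix ID) is a placeholder, and the exact vdisk geometry options for your release are in the DD OS documentation:

# Step 2: create a backup LUN on the Data Domain system; match the
# geometry of the source devices.
vdisk device create capacity 100 GiB pool pp_pool device-group pp_dg

# Steps 4 and 5: snapshot the source storage group, then link the
# snapshot to the encapsulated backup target devices.
symsnapvx -sid 0123 -sg prod_sg establish -name pp_failover
symsnapvx -sid 0123 -sg prod_sg -snapshot_name pp_failover link -lnsg backup_tgt_sg -copy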

Secondary site failure (both primary and protection storage)

A recoverable failure occurs that disables the secondary site, affecting both the primary and protection storage systems at the secondary site. The storage administrator can either follow the procedure for provisioning storage resources on the secondary site or stop replication to the secondary site.

Choose one of the following options:

- Provision new primary and protection storage resources at the secondary site, and create new replication sessions to the new destination devices.

- Stop replication to the secondary site.

Failure of primary storage at the production site

A failure has occurred on the primary storage at the production site.

Initiate a failover to the secondary site as described in Primary site failure (both primary and protection storage) on page 73.

Failure of primary storage at the secondary site

A recoverable failure occurs that disables the primary storage at the secondary site. This does not affect replication of the backup sets, but could impact restores from the secondary site. To allow restores from the secondary site, the storage administrator can use a different VMAX3 array for restores on the secondary site.

Before you begin

Verify the following elements of the configuration:

- The WWN for the AR host has been provided.

- The WWN used on the Data Domain storage array for backups has been provided.

- The required licenses are configured on the VMAX3 array.

- The AR host has access to the Data Domain system and the VMAX3 array.

- SAN zoning is complete between the new VMAX3 array and the Data Domain system.

- SAN zoning is complete between the AR host and the new VMAX3 array.

Complete the following high-level tasks.

Procedure

1. Modify the existing access groups used by the Data Domain restore LUNs for the new VMAX3 initiators.

2. Encapsulate the Data Domain restore LUNs on the VMAX3 array. These new devices will be referred to as the restore target devices.

3. Mask the restore target devices to the AR host.
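Step 3 can be sketched with Solutions Enabler masking commands; the group and view names, Symmetrix ID, and device range are placeholders:

# Add the encapsulated restore target devices to a storage group and
# expose them to the AR host through a masking view.
symaccess -sid 0123 -type storage -name restore_sg add devs 00A0:00A3
symaccess -sid 0123 create view -name restore_mv -sg restore_sg -ig ar_host_ig -pg restore_pg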

Failure of protection storage at the production site

A failure occurs that disables the protection storage at the primary site. The storage administrator can provision new protection storage resources on the Data Domain system, or initiate a failover to the secondary site.

Choose one of the following options:

- Provision new protection storage resources on the Data Domain system.

- Initiate a failover to the secondary site as described in Primary site failure (both primary and protection storage) on page 73.


Failure of protection storage at the secondary site

A recoverable failure occurs that disables the protection storage at the secondary site. The application administrator can stop replication until the error is fixed.

Edit the ProtectPoint configuration file to remove the portion used for replication.

