
PowerProtect Data Manager 19.10 Kubernetes User Guide

March 2022 Rev. 01

Notes, cautions, and warnings

NOTE: A NOTE indicates important information that helps you make better use of your product.

CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid the problem.

WARNING: A WARNING indicates a potential for property damage, personal injury, or death.

© 2021-2022 Dell Inc. or its subsidiaries. All rights reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be trademarks of their respective owners.

Contents

Preface.....6

Chapter 1: PowerProtect Data Manager for Kubernetes Overview.....10
    About asset sources, assets, and storage.....10
        About Kubernetes cluster asset sources and namespace assets.....10
        About Tanzu Kubernetes guest clusters and Supervisor clusters.....11
    Prerequisites.....11
    Port usage.....12
    Role-based security.....17
    Roadmap for Kubernetes cluster protection.....17
    Roadmap for Tanzu Kubernetes guest cluster protection.....18
    Updating PowerProtect Data Manager to version 19.10 or later in a Kubernetes environment.....18
        Update PowerProtect Data Manager to version 19.10 or later in a Kubernetes environment.....19

Chapter 2: Enabling the Kubernetes Cluster.....21
    Adding a Kubernetes cluster asset source.....21
    Prerequisites to Tanzu Kubernetes guest cluster protection.....21
        Set up the Supervisor cluster.....21
    Prerequisites to Kubernetes cluster discovery.....22
    Enable an asset source.....23
        Disable an asset source.....24
    Delete an asset source.....24
    Add a VMware vCenter Server.....25
        Specify a vCenter server as the PowerProtect Data Manager host.....27
        Disable vCenter SSL certificate validation.....27
    Add a Kubernetes cluster.....28
    Protection engine limitations.....29
    Add a VM Direct Engine.....29

Chapter 3: Managing Storage, Assets, and Protection for Kubernetes Clusters.....32
    Add protection storage.....32
    Add a protection policy for Kubernetes namespace protection.....33
    Add a Cloud Tier schedule to a protection policy.....37
    Extended retention.....38
    Edit the retention period for backup copies.....40
    Delete backup copies.....40
        Retry a failed backup copy deletion.....41
        Export data for deleted backup copies.....42
        Remove backup copies from the PowerProtect Data Manager database.....42
    Add a service-level agreement.....43

Chapter 4: Restoring Kubernetes Namespaces and PVCs.....46
    View backup copies available for restore.....46
    Restoring a Kubernetes namespace.....47
        Restore to the original namespace.....48
        Restore to a new namespace.....49
        Restore to an existing namespace.....51
    Self-service restore of Kubernetes namespaces.....52
    Quick recovery for server DR.....53
        Quick recovery prerequisites.....56
        Identifying a remote system.....57
        Add a remote system for quick recovery.....57
        Edit a remote system.....58
        Quick recovery remote view.....58

Appendix A: Kubernetes Cluster Best Practices and Troubleshooting.....60
    Configuration changes required for use of optimized data path and first class disks.....60
    Recommendations and considerations when using a Kubernetes cluster.....61
    Support Network File System (NFS) root squashing.....62
    Update the Velero or OADP version used by PowerProtect Data Manager.....62
    VM Direct protection engine overview.....63
        Transport mode considerations.....64
        Requirements for an external VM Direct Engine.....65
        Additional VM Direct actions.....65
    Troubleshooting network setup issues.....67
    Troubleshooting Kubernetes cluster issues.....67
        Specify volumesnapshotclass for v1 CSI snapshots.....69
        Enabling protection when the vSphere CSI driver is installed as a process.....70
        Customizing PowerProtect Data Manager pod configuration.....70
        Backups fail or hang on OpenShift after a new PowerProtect Data Manager installation or update from a 19.9 or earlier release.....70
        Data protection operations for high availability Kubernetes cluster might fail when API server not configured to send ROOT certificate.....71
        Kubernetes cluster on Amazon Elastic Kubernetes Service certificate considerations.....71
        Removing PowerProtect Data Manager components from a Kubernetes cluster.....72
        Increase the number of worker threads in Supervisor cluster backup-driver if Velero timeout occurs.....72
        Velero pod backup and restore might fail if namespace being protected contains a large number of resources.....73
        Pull images from Docker Hub as authenticated user if Docker pull limits reached.....73

Appendix B: Application-Consistent Database Backups in Kubernetes.....75
    About application-consistent database backups in Kubernetes.....75
        Supported database applications.....75
        Prerequisites.....76
    Obtain and deploy the CLI package.....76
    About application templates.....76
        YAML configuration files.....78
        Application actions.....78
        Pod actions.....78
        Selectors.....79
    Deploy application templates.....80
    Perform application-consistent backups.....80
    Verify application-consistent backups.....81
    Disaster recovery considerations.....82
    Granular-level restore considerations.....82
    Log truncation considerations.....82

Preface

As part of an effort to improve product lines, periodic revisions of software and hardware are released. Therefore, all versions of the software or hardware currently in use might not support some functions that are described in this document. The product release notes provide the most up-to-date information on product features.

If a product does not function correctly or does not function as described in this document, contact Customer Support.

NOTE: This document was accurate at publication time. To ensure that you are using the latest version of this document, go to the Customer Support website.

Product naming

Data Domain (DD) is now PowerProtect DD. References to Data Domain or Data Domain systems in this documentation, in the user interface, and elsewhere in the product include PowerProtect DD systems and older Data Domain systems. In many cases the user interface has not yet been updated to reflect this change.

Language use

This document might contain language that is not consistent with Dell Technologies current guidelines. Dell Technologies plans to update the document over subsequent releases to revise the language accordingly.

This document might contain language from third-party content that is not under Dell Technologies control and is not consistent with the current guidelines for Dell Technologies own content. When such third-party content is updated by the relevant third parties, this document will be revised accordingly.

Website links

The website links used in this document were valid at publication time. If you find a broken link, provide feedback on the document, and a Dell employee will update the document as necessary.

Purpose

This document describes how to configure and administer the Dell EMC PowerProtect Data Manager software to protect and recover namespace and PVC data on the Kubernetes cluster. The PowerProtect Data Manager Administration and User Guide provides additional details about configuration and usage procedures.

Audience

This document is intended for the host system administrator who is involved in managing, protecting, and reusing data across the enterprise by deploying PowerProtect Data Manager software.

Revision history

The following table presents the revision history of this document.

Table 1. Revision history

Revision | Date | Description
01 | March 22, 2022 | Initial release of this document for PowerProtect Data Manager version 19.10.


Compatibility information

Software compatibility information for the PowerProtect Data Manager software is provided at the E-Lab Navigator.

Related documentation

The following publications are available at Customer Support and provide additional information:

Table 2. Related documentation

Title | Content
PowerProtect Data Manager Administration and User Guide | Describes how to configure the software.
PowerProtect Data Manager Deployment Guide | Describes how to deploy the software.
PowerProtect Data Manager Licensing Guide | Describes how to license the software.
PowerProtect Data Manager Release Notes | Contains information on new features, known limitations, environment, and system requirements for the software.
PowerProtect Data Manager Security Configuration Guide | Contains security information.
PowerProtect Data Manager Amazon Web Services Deployment Guide | Describes how to deploy the software to Amazon Web Services (AWS).
PowerProtect Data Manager Azure Deployment Guide | Describes how to deploy the software to Microsoft Azure.
PowerProtect Data Manager Google Cloud Platform Deployment Guide | Describes how to deploy the software to Google Cloud Platform (GCP).
PowerProtect Data Manager Cloud Disaster Recovery Administration and User Guide | Describes how to deploy Cloud Disaster Recovery (Cloud DR), protect virtual machines in the AWS or Azure cloud, and run recovery operations.
PowerProtect Data Manager Cyber Recovery User Guide | Describes how to install, update, patch, and uninstall the Dell EMC PowerProtect Cyber Recovery software.
PowerProtect Data Manager File System User Guide | Describes how to configure and use the software with the File System agent for file-system data protection.
PowerProtect Data Manager Kubernetes User Guide | Describes how to configure and use the software to back up and restore namespaces and PVCs in a Kubernetes cluster.
PowerProtect Data Manager Microsoft Exchange Server User Guide | Describes how to configure and use the software to back up and restore the data in a Microsoft Exchange Server environment.
PowerProtect Data Manager Microsoft SQL Server User Guide | Describes how to configure and use the software to back up and restore the data in a Microsoft SQL Server environment.
PowerProtect Data Manager Oracle RMAN User Guide | Describes how to configure and use the software to back up and restore the data in an Oracle Server environment.
PowerProtect Data Manager SAP HANA User Guide | Describes how to configure and use the software to back up and restore the data in an SAP HANA Server environment.
PowerProtect Data Manager Storage Direct User Guide | Describes how to configure and use the software with the Storage Direct agent to protect data on VMAX storage arrays through snapshot backup technology.
PowerProtect Data Manager Network Attached Storage User Guide | Describes how to configure and use the software to protect and recover the data on network-attached storage (NAS) shares and appliances.
PowerProtect Data Manager Virtual Machine User Guide | Describes how to configure and use the software to back up and restore virtual machines and virtual-machine disks (VMDKs) in a vCenter Server environment.
VMware Cloud Foundation Disaster Recovery With PowerProtect Data Manager | Provides a detailed description of how to perform an end-to-end disaster recovery of a VMware Cloud Foundation (VCF) environment.
PowerProtect Data Manager Disaster Recovery Best Practices Guide | Provides guidance and best practices for a PowerProtect Data Manager server disaster-recovery solution.
PowerProtect Data Manager Public REST API documentation | Contains the PowerProtect Data Manager APIs and includes tutorials to guide you in their use.
vRealize Automation Data Protection Extension for Data Protection Systems Installation and Administration Guide | Describes how to install, configure, and use the Dell EMC vRealize Data Protection Extension.

Typographical conventions

The following type style conventions are used in this document:

Table 3. Style conventions

Formatting Description

Bold Used for interface elements that a user specifically selects or clicks, for example, names of buttons, fields, tab names, and menu paths. Also used for the name of a dialog box, page, pane, screen area with title, table label, and window.

Italic Used for full titles of publications that are referenced in text.

Monospace Used for: system code; system output, such as an error message or script; pathnames, file names, file name extensions, prompts, and syntax; commands and options.

Monospace italic Used for variables.

Monospace bold Used for user input.

[ ] Square brackets enclose optional values.

| Vertical line indicates alternate selections. The vertical line means or for the alternate selections.

{ } Braces enclose content that the user must specify, such as x, y, or z.

... Ellipses indicate non-essential information that is omitted from the example.

You can use the following resources to find more information about this product, obtain support, and provide feedback.

Where to find product documentation

The Customer Support website
The Community Network

Where to get support

The Customer Support website provides access to product licensing, documentation, advisories, downloads, and how-to and troubleshooting information. The information can enable you to resolve a product issue before you contact Customer Support.

To access a product-specific page:

1. Go to the Customer Support website.


2. In the search box, type a product name, and then from the list that appears, select the product.

Knowledgebase

The Knowledgebase contains applicable solutions that you can search for either by solution number (for example, KB000xxxxxx) or by keyword.

To search the Knowledgebase:

1. Go to the Customer Support website.
2. On the Support tab, click Knowledge Base.
3. In the search box, type either the solution number or keywords. Optionally, you can limit the search to specific products by typing a product name in the search box, and then selecting the product from the list that appears.

Live chat

To participate in a live interactive chat with a support agent:

1. Go to the Customer Support website.
2. On the Support tab, click Contact Support.
3. On the Contact Information page, click the relevant support, and then proceed.

Service requests

To obtain in-depth help from a support agent, submit a service request. To submit a service request:

1. Go to the Customer Support website.
2. On the Support tab, click Service Requests.

NOTE: To create a service request, you must have a valid support agreement. For details about either an account or obtaining a valid support agreement, contact a sales representative.

To find the details of a service request, in the Service Request Number field, type the service request number, and then click the right arrow.

To review an open service request:

1. Go to the Customer Support website.
2. On the Support tab, click Service Requests.
3. On the Service Requests page, under Manage Your Service Requests, click View All Dell Service Requests.

Online communities

For peer contacts, conversations, and content on product support and solutions, go to the Community Network. Interactively engage with customers, partners, and certified professionals online.

How to provide feedback

Feedback helps to improve the accuracy, organization, and overall quality of publications. You can send feedback to DPAD.Doc.Feedback@emc.com.


Chapter 1: PowerProtect Data Manager for Kubernetes Overview

Topics:

About asset sources, assets, and storage
Prerequisites
Port usage
Role-based security
Roadmap for Kubernetes cluster protection
Roadmap for Tanzu Kubernetes guest cluster protection
Updating PowerProtect Data Manager to version 19.10 or later in a Kubernetes environment

About asset sources, assets, and storage

In PowerProtect Data Manager, assets are the basic units that PowerProtect Data Manager protects. Asset sources are the mechanism that PowerProtect Data Manager uses to manage assets and communicate with the protection storage where backup copies of the assets are stored.

PowerProtect Data Manager supports Dell EMC PowerProtect DD Management Center (DDMC) as the storage and programmatic interface for controlling protection storage systems.

Asset sources can be a vCenter server, Kubernetes cluster, application host, SMIS server, or Cloud Snapshot Manager tenant. Assets can be virtual machines, Microsoft Exchange Server databases, Microsoft SQL Server databases, Oracle databases, SAP HANA databases, file systems, Kubernetes namespaces, or storage groups.

Before you can add an asset source, you must enable the source within the PowerProtect Data Manager user interface.

About Kubernetes cluster asset sources and namespace assets

Kubernetes clusters and containers play an important role in the speed and efficiency of deploying and developing applications, and also in reducing downtime when a change to application scaling is required. PowerProtect Data Manager enables you to protect the Kubernetes environment by adding a Kubernetes cluster as an asset source, and discovering namespaces as assets for data protection operations.

In a traditional application, an environment might consist of a web server, application server, and database server, with the web server servicing requests in front of a load balancer. Scaling this application, for example, by increasing the web layer by adding servers, requires the involvement of many resources to manually change the configuration. In a Kubernetes cluster, however, once you develop the code and write a YAML file that indicates the required systems and configuration details, Kubernetes deploys these containers and the application can be started quickly. Also, a change to the scale of the application only requires you to change the YAML file and post the updated file to the cluster.
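For example, a minimal Deployment manifest of the kind described above might look like the following sketch. The application name, namespace, and container image are hypothetical placeholders, not values required by PowerProtect Data Manager:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                  # hypothetical application name
  namespace: shop            # hypothetical namespace
spec:
  replicas: 3                # change this value and re-post the file to scale the web layer
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.21    # example container image
        ports:
        - containerPort: 80

Posting the updated file to the cluster, for example with kubectl apply -f web.yaml, is all that is required to change the scale of the application.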

A typical Kubernetes cluster can contain several physical and virtual systems. Once the clusters are running, the applications, binaries, and a framework are bundled into a container, which is then wrapped in a pod. Before you can run the pod in a Kubernetes cluster, the cluster must be divided into namespaces. A namespace is a pool of resources that are divided logically in the cluster. It is these namespaces that are protected as assets within the PowerProtect Data Manager UI for the purposes of backup and recovery.

However, because pods only last for a short time, Kubernetes uses Persistent Volumes to persist state information. You can create Persistent Volumes on external storage and then attach them to a particular pod using Persistent Volume Claims (PVCs). PVCs can then be included along with other namespace resources in PowerProtect Data Manager backup and recovery operations.
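A minimal sketch of a PVC that a pod in the same namespace can mount follows. The claim name, namespace, and storage class are hypothetical and depend on your environment:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data             # hypothetical claim name
  namespace: shop            # the namespace asset that PowerProtect Data Manager protects
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: csi-sc   # hypothetical CSI storage class
  resources:
    requests:
      storage: 10Gi

When the namespace is backed up, the PVC and the data on its bound Persistent Volume can be included in the backup copy.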


About Tanzu Kubernetes guest clusters and Supervisor clusters

In addition to supporting protection of a Kubernetes cluster running directly on virtual machines in a vSphere environment, PowerProtect Data Manager supports vSphere with Tanzu for Tanzu Kubernetes guest cluster protection.

This functionality is provided by a Supervisor cluster which, unlike a regular upstream Kubernetes cluster, acts as a customized cluster for vCenter purposes and includes VM Controller Services, the cluster API, and the guest cluster controller. The Tanzu Kubernetes guest cluster, which is where the worker nodes and all applications reside, is controlled and run on the Supervisor cluster, and a Supervisor cluster exists on each vSphere cluster.

When creating a Tanzu Kubernetes guest cluster running on a Supervisor cluster, a YAML manifest file is used to specify the number of control plane nodes and worker nodes that you want to create in the guest cluster. Guest cluster (Tanzu Kubernetes) services then use cluster API services to create the guest cluster and, at the same time, use VM operator services to create the virtual machines that make up the guest cluster.

Protecting the Tanzu Kubernetes guest cluster involves two layers of interaction:

A Supervisor cluster running on the vSphere infrastructure acts as the controlling authority that allows you to create the guest clusters. Also, you can directly create virtual machines running on a supervisor cluster, where the supervisor cluster provides the native functionality of Kubernetes.

You can create an upstream Kubernetes cluster, and specify how many control plane nodes that you require.

Note the following differences in behavior between the protection of Kubernetes clusters deployed directly on vSphere and the protection of Kubernetes clusters deployed by vSphere with Tanzu:

The pods running in the Kubernetes Tanzu guest cluster will not have direct access to the supervisor cluster resource, and therefore, any components running inside the guest cluster, such as the PowerProtect controller, will not have access to the Supervisor cluster resource.

A mapping is created for Persistent Volumes that are provisioned by vSphere CSI on the guest cluster, so that virtual FCDs and Persistent Volumes created on the guest cluster are mapped to the Supervisor Cluster that runs directly on the vSphere STD infrastructure.

Since cProxy pods running in the Kubernetes Tanzu guest cluster will not have access to FCDs directly, a vProxy will be deployed in vCenter to protect the guest cluster. This protection requires an external VM Direct engine dedicated to Kubernetes workloads. During protection of the guest cluster, cndm locates this VM Direct engine and notifies the guest cluster to use this engine for backup and restore operations.

Communication is then established between the PowerProtect controller and the Velero vSphere plug-in running in the guest cluster so that, once the backup is created, the vSphere plug-in can notify the Supervisor cluster's API server. The Supervisor cluster performs the FCD snapshot and returns the snapshot ID to the guest cluster. Once the PowerProtect controller becomes aware of the snapshot, a session is created on the vProxy virtual machine and a pod in the Supervisor cluster namespace that has access to the FCDs, in order to move data from the FCD to the backup destination.

Prerequisites

Ensure that your environment meets the requirements for a new deployment or update of PowerProtect Data Manager.

Requirements:

NOTE: The most up-to-date software compatibility information for the PowerProtect Data Manager software and the application agents is provided in the E-Lab Navigator.

A list of hosts that write backups to DD systems is available.
DDOS version 6.1 or later and the PowerProtect DD Management Center are required. All models of DD systems are supported.

NOTE: PowerProtect DD Management Center is required with a DDOS version earlier than 6.1.2. With DDOS version 6.1.2 or later, you can add and use a DD system directly without PowerProtect DD Management Center.

License: A trial license is provided with the PowerProtect Data Manager software. Dell EMC Data Protection Suite Applications, Backup, and Enterprise customers can contact Dell EMC Licensing Support for assistance with a permanent PowerProtect Data Manager license.

Large environments require multiple PowerProtect Data Manager instances. Contact Champions.eCDM@emc.com for assistance with sizing requests.

The PowerProtect Data Manager 19.10 download file requires the following:
ESXi version 6.5, 6.7, or 7.0.
8 vCPUs, 18 GB RAM, one 100 GB disk, and one 500 GB disk.


The latest version of the Google Chrome browser to access the PowerProtect Data Manager UI.
TCP port 7000 is open between PowerProtect Data Manager and the application agent hosts.

VMware ESXi server that hosts PowerProtect Data Manager meets the following minimum system requirements:
10 CPU cores
18 GB of RAM for PowerProtect Data Manager
Five disks with the following capacities:
    Disk 1: 100 GB
    Disk 2: 500 GB
    Disk 3: 10 GB
    Disk 4: 10 GB
    Disk 5: 5 GB
One 1-GB NIC

Port usage

This table summarizes the port requirements for PowerProtect Data Manager and its associated internal and external components or systems. PowerProtect Data Manager audits and blocks all ports that are not listed below.

The PowerProtect DD Security Configuration Guide provides more information about ports for DD systems and protocols.

Table 4. PowerProtect Data Manager port requirements

Source system | Destination system | Port | Protocol | TLS supported | Notes
Backup clients (a) | DD system | 111 | TCP | No | Dynamic port detection and mapping. Used only for port verification, not for data.
Backup clients (a) | DD system | 2049 | Proprietary | TLS 1.2 | Optional DD Boost client TLS encryption.
Backup clients (a) | DD system | 2052 | TCP | No | NFS mountd, not for data.
Backup clients | DD Global Scale | 2053 | TCP | TLS 1.2 | DD Boost connection.
Backup clients (a) | PowerProtect Data Manager | 8443 | HTTPS | TLS 1.2 | REST API service.
Backup clients | VMAX SE server | 2707 | Proprietary | TLS 1.2 | Backup clients require access to the default port 2707 on the VMAX SE server. Applies to Storage Direct.
Callhome (SupportAssist) | PowerProtect Data Manager | 22 | SSH | TLS 1.2 | SSH for support and administration. Encrypted by private key or optional certificates.
Callhome (SupportAssist) | PowerProtect Data Manager | 443 | HTTPS | TLS 1.2 | SSH for remote support.
ESXi | DD system (b) | 111 | TCP | No | Dynamic port detection and mapping. Used only for port verification, not for data.
ESXi | DD system (b) | 2049 | Proprietary | TLS 1.2 | NFS datastore and DD Boost. NFS is unencrypted. DD Boost is encrypted.
ESXi | DD system (b) | 2052 | TCP | No | NFS mountd, not for data.
Kubernetes cluster | DD system | 111 | TCP | No | Dynamic port detection and mapping. Used only for port verification, not for data.
Kubernetes cluster | DD system | 2049 | Proprietary | TLS 1.2 | Optional DD Boost client TLS encryption.
Kubernetes cluster | DD system | 2052 | TCP | TLS 1.2 | NFS mountd, not for data.
Kubernetes cluster | ESXi | 902 | TCP | TLS 1.2 | vSphere client access for PVCs using VMware CSI. Not required for Tanzu Kubernetes Guest clusters.
Kubernetes cluster | Protection engine | 9090 | HTTPS | TLS 1.2/1.3 | Required for Tanzu Kubernetes Guest clusters.
Kubernetes cluster | vCenter | 443 | HTTPS | TLS 1.2 | Primary management interface for vSphere using the vCenter Server, including the vSphere client for PVCs using VMware CSI. Not required for Tanzu Kubernetes Guest clusters.
NAS protection engine | NAS appliance | 443 | HTTPS | TLS 1.2 | Management access for Unity and PowerStore appliances.
NAS protection engine | NAS appliance | 8080 | HTTPS | TLS 1.2 | Management access for PowerScale/Isilon appliances.
PowerProtect Data Manager | Backup clients | 7000 | HTTPS | TLS 1.2 | Microsoft SQL Server, Oracle, Microsoft Exchange Server, SAP HANA, and file system. Requirement applies to Application Direct and VM Direct.
PowerProtect Data Manager | Callhome (SupportAssist) | 25 | SMTP | TLS 1.2 | TLS version in use depends on the mail server. TLS used where possible.
PowerProtect Data Manager | Callhome (SupportAssist) | 465 | TCP | TLS 1.2 |
PowerProtect Data Manager | Callhome (SupportAssist) | 587 | TCP | TLS 1.2 |
PowerProtect Data Manager | Callhome (SupportAssist) | 9443 | HTTPS | TLS 1.2 | REST API for service notification.
PowerProtect Data Manager | DD system | 111 | TCP | No | Dynamic port detection and mapping. Used only for port verification, not for data.
PowerProtect Data Manager | DD system | 2049 | Proprietary | No | Server DR NFS connections. Used only for metadata, client name, and indexing, not for backup data.
PowerProtect Data Manager | DD system | 2052 | TCP/UDP | No | NFS mountd, not for data.
PowerProtect Data Manager | DD system | 3009 | HTTPS | TLS 1.2 | Communication with DDMC for configuration and discovery.
PowerProtect Data Manager | ESXi | 443 | HTTPS | TLS 1.2 | Depends on ESXi configuration and version.
PowerProtect Data Manager | Kubernetes cluster | 6443 | Proprietary | TLS 1.2 | Connects to the Kubernetes API server. Encryption depends on the Kubernetes cluster configuration. PowerProtect Data Manager supports TLS 1.2.
PowerProtect Data Manager | LDAP server | 389 | TCP/UDP | No | Insecure LDAP port, outbound only. Use port 636 for encryption.
PowerProtect Data Manager | LDAP server | 636 | TCP | TLS 1.2 | LDAPS, depending on LDAP configuration in use. Outbound only.
PowerProtect Data Manager | NAS appliance | 443 | HTTPS | TLS 1.2 | Management access for Unity and PowerStore appliances.
PowerProtect Data Manager | NAS appliance | 8080 | HTTPS | TLS 1.2 | Management access for PowerScale/Isilon appliances.
PowerProtect Data Manager | NAS share | 139 | TCP | TLS 1.2 | Windows file server shares (CIFS).
PowerProtect Data Manager | NAS share | 443 | HTTPS | TLS 1.2 | NetApp shares (NFS and CIFS). Also used for NAS share verification check.
PowerProtect Data Manager | NAS share | 445 | TCP | TLS 1.2 | Windows file server shares (CIFS).
PowerProtect Data Manager | NAS share | 2049 | TCP | TLS 1.2 | Linux file server shares (NFS).
PowerProtect Data Manager | NTP server | 123 | NTP | No | Time synchronization.
PowerProtect Data Manager | PowerProtect Data Manager - Catalog | 9760 | TCP | | Internal only. Blocked by firewall.
PowerProtect Data Manager | PowerProtect Data Manager - Configuration Manager | 55555 | TCP | | Internal only. Blocked by firewall.
PowerProtect Data Manager | PowerProtect Data Manager - Elastic Search | 9200 | TCP | | Internal only.
PowerProtect Data Manager | PowerProtect Data Manager - Elastic Search | 9300 | TCP | | Internal only.
PowerProtect Data Manager | PowerProtect Data Manager - Embedded VM proxy | 9095 | TCP | | Internal only. Blocked by firewall.
PowerProtect Data Manager | PowerProtect Data Manager - Quorum peer | 2181 | TCP | | Internal only. Blocked by firewall.
PowerProtect Data Manager | PowerProtect Data Manager - RabbitMQ | 5672 | TCP | | Internal only. Blocked by firewall.
PowerProtect Data Manager | PowerProtect Data Manager - Secrets manager | 9092 | TCP | | Internal only.
PowerProtect Data Manager | PowerProtect Data Manager - VM Direct infrastructure manager | 9097 | TCP | | Internal only. Blocked by firewall.
PowerProtect Data Manager | PowerProtect Data Manager - VM Direct orchestration | 9096 | TCP | | Internal only. Blocked by firewall.
PowerProtect Data Manager | Protection engine | 22 | SSH | TLS 1.2 | SSH for support and administration. Encrypted by private key or optional certificates.
PowerProtect Data Manager | Protection engine | 9090 | HTTPS | TLS 1.2 | REST API service.
PowerProtect Data Manager | Protection engine | 9613 (c) | Proprietary | TLS 1.2 |
PowerProtect Data Manager | Reporting engine | 9002 | TCP | TLS 1.2 | REST API service.
PowerProtect Data Manager | Search cluster | 9613 (c) | Proprietary | TLS 1.2 | Infrastructure node agent management of Search Engine nodes.
PowerProtect Data Manager | Search cluster | 14251 | Proprietary | TLS 1.2 | Search query REST API endpoint.
PowerProtect Data Manager | SMI-S | 5989 | HTTPS | TLS 1.2 | Communication with SMI-S provider. Discovery.
PowerProtect Data Manager | Storage Direct system | 3009 | HTTPS | TLS 1.2 | Discovery.
PowerProtect Data Manager | UI | 443 | HTTPS | TLS 1.2 | Between the browser host and the PowerProtect Data Manager system.
PowerProtect Data Manager | Update Manager UI | 14443 | HTTPS | TLS 1.2 | Connects the host that contains the update package to the PowerProtect Data Manager system.
PowerProtect Data Manager | vCenter | 443 | HTTPS | TLS 1.2 | vSphere API for direct restore, discovery, initiating Hot Add transport mode, and restores including Instant Access restore. Depends on vCenter configuration.
PowerProtect Data Manager | vCenter | 7444 | Proprietary | TLS 1.2 | vCenter single sign-on.
PowerProtect Data Manager | VMAX Solutions Enabler server | 2707 | Proprietary | TLS 1.2 | Storage Direct functionality. PowerProtect Data Manager uses the Solutions Enabler default server port for configuration steps and to control active snapshot management for SnapVX, including for PP-VMAX.
Protection engine | DD system | 111 | TCP | No | Dynamic port detection and mapping. Used only for port verification, not for data.
Protection engine | DD system | 2049 | Proprietary | TLS 1.2 | Optional DD Boost client TLS encryption.
Protection engine | DD system | 2052 | TCP | No | NFS mountd, not for data.
Protection engine | DD system | 3009 | HTTPS | TLS 1.2 | DD REST API service.
Protection engine | ESXi | 443 | HTTPS | TLS 1.2 | Client connections.
Protection engine | ESXi | 902 | TCP | TLS 1.2 | vSphere client access.
Protection engine | Guest VM | 9613 (c) | Proprietary | TLS 1.2 | VM Direct Agent provides capabilities for file-level restore and application-aware protection.
Protection engine | NAS agent Docker container | 443 | HTTPS | TLS 1.2 | Applies for NAS only. Internal only. Blocked by firewall.
Protection engine | Search cluster | 14251 | TCP | TLS 1.2 | Search query REST API endpoint.
Protection engine | vCenter | 443 | HTTPS | TLS 1.2 | Primary management interface for vSphere using the vCenter server, including the vSphere client.
Protection engine | vCenter | 7444 | TCP | TLS 1.2 | Secure token service.
Protection engine | Protection engine - RabbitMQ | 4369 | TCP | | Internal only. Blocked by firewall.
Protection engine | Protection engine - RabbitMQ | 5672 | TCP | | Internal only. Blocked by firewall.
Reporting engine | PowerProtect Data Manager | 8443 | TCP | TLS 1.2 | REST API service for collecting reporting data.
Search cluster | DD system | 111 | TCP | No | Server DR. Dynamic port detection and mapping. Used only for port verification, not for data.
Search cluster | DD system | 2049 | Proprietary | No | Server DR NFS connections. Used only for metadata, client name, and indexing, not for backup data.
Search cluster | DD system | 2052 | TCP/UDP | No | Server DR. NFS mountd, not for data.
Source DD system | Target DD system | 111 | TCP | No | Dynamic port detection and mapping. Used only for port verification, not for data.
Source DD system | Target DD system | 2049 | Proprietary | TLS 1.2 |
Source DD system | Target DD system | 2051 | Proprietary | TLS 1.2 |
Source DD system | Target DD system | 2052 | TCP | No | NFS mountd, not for data.
Target DD system | Source DD system | 111 | TCP | No | Dynamic port detection and mapping. Used only for port verification, not for data.
Target DD system | Source DD system | 2049 | Proprietary | TLS 1.2 |
Target DD system | Source DD system | 2051 | Proprietary | TLS 1.2 |
Target DD system | Source DD system | 2052 | TCP | No | NFS mountd, not for data.
Update Manager UI | PowerProtect Data Manager | 14443 | HTTPS | TLS 1.2 | Connects the host that contains the update package to the PowerProtect Data Manager system.
User | PowerProtect Data Manager | 22 | SSH | TLS 1.2 | SSH for support and administration. Encrypted by private key or optional certificates.
User | PowerProtect Data Manager | 80 | HTTP | No | Redirect to HTTPS.
User | PowerProtect Data Manager | 443 | HTTPS | TLS 1.2 | Connects the browser host to the PowerProtect Data Manager system.
User | PowerProtect Data Manager | 8443 | HTTPS | TLS 1.2 | REST API service.
User | Search Cluster | 22 | SSH | TLS 1.2 | SSH for support and administration. Encrypted by private key or optional certificates.
User | Protection engine | 22 | SSH | TLS 1.2 | SSH for support and administration. Encrypted by private key or optional certificates.
vCenter | ESXi | 443 | HTTPS | TLS 1.2 | vSphere client to ESXi/ESX host management connection.
vCenter | PowerProtect Data Manager | 443 | HTTPS | TLS 1.2 | vCenter plug-in UI.
vCenter | PowerProtect Data Manager | 8443 | HTTPS | TLS 1.2 | REST API service.
vCenter | PowerProtect Data Manager | 9009 | HTTPS | TLS 1.2/1.3 | vSphere APIs for Storage Awareness (VASA) provider, storage policy based management (SPBM) service within PowerProtect Data Manager.

(a) Applies to Application Direct, Storage Direct, and VM Direct (VM application-aware only).
(b) Instant access restore. NFS connection established under PowerProtect Data Manager control of vSphere from the ESXi node to the DD system. Can be directed to any ESXi node, so allowed ports would be between any ESXi node to any DD system used by PowerProtect Data Manager.
(c) Port number is a default which you can change on a per-agent basis, and which can change dynamically in case of listening conflicts.

The term "protection engine" in this table refers to all types of protection engine: VM Direct, NAS, and Kubernetes, unless otherwise specified.

For VM application-aware backups, open the ports for the protection engine and for the backup clients on the guest VM.

For NAS assets, open any custom ports between PowerProtect Data Manager, the NAS protection engine, and the NAS that may be required for access to specific shares. You can supply custom port information for connections to NAS appliances and shares as part of the process for adding NAS asset sources.
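Before configuring protection, a quick connectivity check from the source system can confirm that a required port is reachable. The following sketch uses hypothetical host names; the port values come from Table 4 (6443 for the Kubernetes API server, 8443 for the PowerProtect Data Manager REST API):

# From the PowerProtect Data Manager command line: confirm the Kubernetes API server port is reachable
curl -k -s -o /dev/null -w "%{http_code}\n" https://k8s-api.example.com:6443/version

# From a backup client or browser host: confirm the PowerProtect Data Manager REST API port is open
nc -zv ppdm.example.com 8443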

Role-based security

PowerProtect Data Manager provides predefined user roles that control access to areas of the user interface and to protected operations. Some of the functionality in this guide is reserved for particular roles and may not be accessible from every user account.

By using the predefined roles, you can limit access to PowerProtect Data Manager and to backup data by applying the principle of least privilege.

The PowerProtect Data Manager Security Configuration Guide provides more information about user roles, including the associated privileges and the tasks that each role can perform.

Roadmap for Kubernetes cluster protection

The following roadmap provides the steps required to configure the Kubernetes cluster in PowerProtect Data Manager in order to run protection policies.

Steps

1. Add a storage system.

Add protection storage provides information.

2. Enable an asset source for the Kubernetes cluster in the PowerProtect Data Manager UI.

3. Add a Kubernetes cluster as an asset source in the PowerProtect Data Manager UI.

4. Optionally, to use optimized data path and first class disks, complete the steps in Configuration changes required for use of optimized data path and first class disks before adding a Kubernetes protection policy.


5. Create a protection policy to protect the Kubernetes cluster namespace assets and PVCs.

Add a protection policy for Kubernetes namespace protection provides information.

Roadmap for Tanzu Kubernetes guest cluster protection

The following roadmap provides the steps required to configure the Tanzu Kubernetes guest cluster in PowerProtect Data Manager in order to run protection policies.

Steps

1. Add a storage system.

Add protection storage provides information.

2. Complete the steps in Set up the Supervisor cluster to perform the one-time configuration that is required in the Supervisor cluster.

3. Enable an asset source for both the vCenter Server and the Kubernetes cluster in the PowerProtect Data Manager UI.

4. Add a VMware vCenter Server that hosts the Tanzu Kubernetes guest cluster as an asset source in the PowerProtect Data Manager UI.

5. Add a Kubernetes cluster as an asset source in the PowerProtect Data Manager UI. When adding a Tanzu Kubernetes guest cluster, ensure that you create an association with the vCenter Server.

6. Add a VM Direct Engine in the PowerProtect Data Manager UI that is dedicated to Kubernetes workloads. There should be a minimum of one VM Direct engine per Supervisor cluster.

7. Create a protection policy to protect the Kubernetes cluster namespace assets and PVCs.

Add a protection policy for Kubernetes namespace protection provides information.

Updating PowerProtect Data Manager to version 19.10 or later in a Kubernetes environment

When updating PowerProtect Data Manager to version 19.10 or later in a Kubernetes environment, certain considerations must be taken into account.

CAUTION: If you are not using the cluster-admin role and you do not follow a Kubernetes-specific update procedure, PowerProtect Data Manager will fail to function properly.

PowerProtect Data Manager uses structural custom resource definitions (CRDs) to extend the Kubernetes API and increase system security. But versions of PowerProtect Data Manager earlier than 19.9 used non-structural schemas that did not validate the consistent format of custom resources (CRs). When updating from a version of PowerProtect Data Manager that supports Kubernetes asset sources to version 19.10 or later, certain considerations must be taken into account. For information about updating PowerProtect Data Manager in general, see the PowerProtect Data Manager Administration and User Guide or the PowerProtect Data Manager Deployment Guide.

Self-service recovery

During the update, all CRs are deleted, including the backupjobs CR. After the update, the self-service recovery of the last backup copy taken before the update is not possible. That backup can only be restored from the PowerProtect Data Manager user interface. Self-service recoveries can be performed on backups taken after the update.
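If you want to see which backupjobs custom resources exist on the cluster before or after the update, a kubectl query such as the following sketch can help. The resource is referenced here only by its plural name from the text above; the exact API group and namespaces depend on your deployment:

# List PowerProtect backup job custom resources across all namespaces
kubectl get backupjobs --all-namespaces

# Inspect one backup job in detail (names are placeholders)
kubectl describe backupjobs <backup-job-name> -n <namespace>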

Minimum cluster-role privileges

The following cluster roles must have certain privileges; a verification sketch follows the list:

The cluster role bound to the service account provided to PowerProtect Data Manager must have Delete for customresourcedefinitions resources.

The cluster role bound to ppdm-serviceaccount must have Patch for deployments resources.
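A verification sketch using kubectl auth can-i is shown below. The service-account namespace and the discovery service-account name are assumptions and must be replaced with the values used in your cluster:

# Check that the discovery service account can delete CRDs (namespace and account name are placeholders)
kubectl auth can-i delete customresourcedefinitions \
  --as=system:serviceaccount:<namespace>:<discovery-service-account>

# Check that ppdm-serviceaccount can patch deployments (namespace is a placeholder)
kubectl auth can-i patch deployments \
  --as=system:serviceaccount:<namespace>:ppdm-serviceaccount

Both commands print yes or no, so you can confirm the privileges before starting the update.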


Update PowerProtect Data Manager to version 19.10 or later in a Kubernetes environment

Use these steps to update PowerProtect Data Manager to version 19.10 or later in a Kubernetes environment.

Steps

1. Before updating, if you are not using the cluster-admin role, perform the following actions:

a. From a PowerProtect Data Manager command line, type the following command:

kubectl edit clusterrole powerprotect:discovery-clusterrole

b. Browse for customresourcedefinitions in the rules section, and then add the delete verb. The format for the new verb should match the format of the other verbs. For example:

resources:
- customresourcedefinitions
verbs:
- create
- delete
- get
- list
- update

c. Save the edited file.
d. Type the following command:

kubectl edit clusterrole powerprotect:cluster-role

e. Browse for deployments in the rules section, and then add the patch verb. The format for the new verb should match the format of the other verbs. For example:

resources:
- deployments
- deployments/scale
verbs:
- create
- get
- patch
- list
- update

f. If upgrading from a PowerProtect Data Manager 19.8 and earlier release and your environment includes OpenShift, browse for apiGroups and the existing networking.k8s.io entry, and then add operators.coreos.com and konveyor.openshift.io entries below it that match the following content and format:

- apiGroups:
  - networking.k8s.io
  resources:
  - networkpolicies
  verbs:
  - create
  - delete
  - get
- apiGroups:
  - operators.coreos.com
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - konveyor.openshift.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups: [ "oadp.openshift.io" ]
  resources: [ '*' ]
  verbs: [ '*' ]

NOTE: If upgrading from PowerProtect Data Manager 19.9, most of this content should have already been applied, and therefore, only the last three entries (starting with apiGroups: [ "oadp.openshift.io" ]) are required.

g. Save the edited file.

NOTE: If you already updated PowerProtect Data Manager without first performing this step, you can correct the update by following step 1, updating PowerProtect Data Manager again, and then following step 3.

2. Follow the appropriate PowerProtect Data Manager update procedure from either the PowerProtect Data Manager Administration and User Guide or the PowerProtect Data Manager Deployment Guide.

3. After updating PowerProtect Data Manager to the new version, type the following commands from a PowerProtect Data Manager command line to apply the new RBAC yaml. This step is required even if the yaml was applied in PowerProtect Data Manager 19.9 and earlier, in order to add any privileges that are required with 19.10 and later.

cd /usr/local/brs/lib/cndm/misc
tar -xvzf rbac.tar.gz
cd rbac
cat README.txt

4. Follow the instructions displayed on the screen.

Results

PowerProtect Data Manager has been updated and configured to work with Kubernetes.


Chapter 2: Enabling the Kubernetes Cluster

Topics:

Adding a Kubernetes cluster asset source
Prerequisites to Tanzu Kubernetes guest cluster protection
Prerequisites to Kubernetes cluster discovery
Enable an asset source
Delete an asset source
Add a VMware vCenter Server
Add a Kubernetes cluster
Protection engine limitations
Add a VM Direct Engine

Adding a Kubernetes cluster asset source

Adding a Kubernetes cluster as an asset source in PowerProtect Data Manager enables you to protect namespaces and Persistent Volume Claims (PVCs) within the cluster. You can use the Asset Sources window in the PowerProtect Data Manager UI to add a Kubernetes cluster asset source to the PowerProtect Data Manager environment.

Prerequisites to Tanzu Kubernetes guest cluster protection

Review the following requirements before adding a Tanzu Kubernetes guest cluster in PowerProtect Data Manager for namespace and PVC protection.

Verify that the minimum required vSphere version is installed. The compatibility matrix at E-Lab Navigator provides details.
Only NSX-T based networking deployments are supported.
It is recommended that communication between vSphere and the Tanzu Kubernetes guest cluster be routable.
Complete a one-time configuration to set up the Supervisor cluster, as described in the section Set up the Supervisor cluster.

Set up the Supervisor cluster

The following one-time configuration is required to set up the Supervisor cluster:

Steps

1. Enable the vSphere operator in the Supervisor cluster, for example, the Velero vSphere Operator:

a. In the left pane of the vSphere Client, select the workload cluster, and then click the Configure tab in the right pane.
b. In the Workload-Cluster window, scroll down and select Supervisor Services.

The right pane displays the available services.

c. Select the Velero vSphere Operator service, and then click Enable.

Once enabled, a new Kubernetes namespace is created automatically, with its own vSphere pods running with Supervisor affinity. This allows the Supervisor cluster to perform backups using FCD snapshots.

d. Select Menu > Workload Management to view the namespaces running in the Supervisor cluster. For a selected namespace, click the Compute tab in the right pane to display the Tanzu guest clusters.

2. Add a Supervisor namespace for the Velero instance:

a. In the Workload Management window of the vSphere Client, click New Namespace.


b. After creating this namespace, select the namespace in the left navigation pane.
c. If the user does not have the VI admin role, click Add Permissions under the Summary tab in the right pane.
d. In the Add Permissions dialog, add the Can edit permission, and then click OK.

3. Download the command line binary velero-vsphere.

4. Log in to the Supervisor cluster:

kubectl-vsphere login --server=https://IPv4 address:443 --vsphere-username username --insecure-skip-tls-verify

5. Switch the kubectl context to the Supervisor namespace by running the following command:

kubectl config use-context <supervisor cluster namespace>

6. Use the velero-vsphere command line to install Velero and the Velero plug-in for vSphere:

velero-vsphere install --namespace velero --plugins vsphereveleroplugin/velero-plugin-for-vsphere:v1.3.1 --no-secret --no-default-backup-location --use-volume-snapshots=false

7. Using the same command line, enable changed block tracking (CBT) in the guest clusters:

velero-vsphere configure --enable-cbt-in-guests

Once enabled, this setting is applied to the current cluster and all incoming guest clusters.

Uninstall the Velero plug-in and redeploy when updating PowerProtect Data Manager

For Tanzu Kubernetes clusters, when updating PowerProtect Data Manager to release 19.10, perform the following steps to uninstall the existing Velero plug-in and deploy the new one.

About this task

The velero-vsphere command line binary can be downloaded from this link.

Steps

1. In a command prompt, run ./velero-vsphere uninstall to uninstall the Velero instance in the Supervisor cluster.

2. Log in to the vSphere Client.

3. Go to Menu > Workload Management to view the namespaces running in the Supervisor cluster.

4. Select the velero namespace and click Remove to delete the existing Supervisor namespace for the velero instance.

5. Click New Namespace to add a Supervisor namespace with the name velero for the new velero instance.

6. Run the following command to install velero and velero-vsphere-plugin in the Supervisor context:

./velero-vsphere install --namespace velero --plugins vsphereveleroplugin/velero-plugin-for-vsphere:v1.3.1 --no-secret --no-default-backup-location --use-volume-snapshots=false

Prerequisites to Kubernetes cluster discovery

Review the following prerequisites before adding and enabling a Kubernetes cluster as an asset source in PowerProtect Data Manager.

OpenShift cluster protection

PowerProtect Data Manager uses the OpenShift API for Data Protection (OADP) operator to set up and install Velero on the OpenShift platform. Dell Technologies recommends checking for any existing instances of the OADP operator in the OpenShift cluster that PowerProtect Data Manager has not deployed, and uninstalling these instances.
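If the cluster uses Operator Lifecycle Manager, one way to check for existing OADP operator installations is to list the installed ClusterServiceVersions. This is a general OpenShift check, not a PowerProtect-specific command:

oc get csv --all-namespaces | grep -i oadp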

Pulling images from Docker Hub to a local registry

By default, the following images are pulled from Docker Hub at https://hub.docker.com/ by PowerProtect Data Manager after a successful discovery of the Kubernetes cluster asset source:


dellemc/powerprotect-k8s-controller
dellemc/powerprotect-cproxy, which is pulled during the first backup
dellemc/powerprotect-velero-dd
velero/velero
vsphereveleroplugin/velero-plugin-for-vsphere (for Kubernetes clusters on vSphere that use VMware CSI)
vsphereveleroplugin/backup-driver (if using a private registry, this image must be pulled manually to the registry)

NOTE: You can obtain the tags for the containers that PowerProtect Data Manager uses from the /usr/local/brs/lib/cndm/config/k8s-image-versions.info file in the PowerProtect Data Manager appliance.
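For example, to list the image tags that this release expects, display that file from a PowerProtect Data Manager command line:

cat /usr/local/brs/lib/cndm/config/k8s-image-versions.info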

If a Kubernetes cluster cannot access these sites due to firewall or other restrictions, you can pull these images to a local registry that the cluster can access. Ensure that you keep the image names and version tags the same in the local registry as they appear in Docker Hub. Also, if pulling the images to a private registry in environments that do not have an Internet connection, verify that PowerProtect Data Manager supports the version of the external image tags. The PowerProtect Data Manager Compatibility Matrix at E-Lab Navigator provides more information.
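As a minimal sketch of staging one of these images, assuming a Docker-compatible client and a local registry at artifacts.example.com:8446 (a placeholder address), and taking the tag from k8s-image-versions.info:

docker pull dellemc/powerprotect-k8s-controller:<tag>
docker tag dellemc/powerprotect-k8s-controller:<tag> artifacts.example.com:8446/dellemc/powerprotect-k8s-controller:<tag>
docker push artifacts.example.com:8446/dellemc/powerprotect-k8s-controller:<tag>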

After pulling the images to a local registry, you must configure PowerProtect Data Manager to use the local registry when creating deployment resources.

To specify an internal registry for each Kubernetes cluster, see the section "Configuring internal registry per asset source" under Back up and restore Kubernetes in the PowerProtect Data Manager Public REST API documentation.

If all the Kubernetes clusters protected by PowerProtect Data Manager use the same internal registry, perform the following steps before the Kubernetes cluster discovery:

1. Create an application.properties file at /usr/local/brs/lib/cndm/config/application.properties on the PowerProtect Data Manager appliance with the following contents:

k8s.docker.registry=<fqdn>:<port>. For example, k8s.docker.registry=artifacts.example.com:8446
k8s.image.pullsecrets=<secret resource name>. Specify this entry only if you require an image pull secret.

The section Pull images from Docker Hub as authenticated user if Docker pull limits reached provides information about specifying image pull secrets for the powerprotect-controller and Velero deployment.

2. Run cndm restart to apply the properties.
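For illustration, the two steps above might look like the following on the PowerProtect Data Manager command line, assuming the file does not already exist; the registry address and secret name are examples only:

cat > /usr/local/brs/lib/cndm/config/application.properties <<'EOF'
k8s.docker.registry=artifacts.example.com:8446
k8s.image.pullsecrets=regcred
EOF
cndm restart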

NOTE: If using application.properties to specify an internal registry, and you perform a PowerProtect Data Manager disaster recovery, repeat these steps after the recovery.

You can now add the Kubernetes cluster as an asset source in the PowerProtect Data Manager UI. If you already added the Kubernetes cluster as an asset source, perform these steps and then initiate a manual discovery of the Kubernetes cluster asset source to update the cluster. The configmap and deployment resources in the powerprotect namespace, and the deployment resource in the velero-ppdm namespace, automatically update to use the new images upon successful discovery.

NOTE: After you add and successfully discover the Kubernetes cluster asset source in PowerProtect Data Manager, if only k8s.image.pullsecrets is updated, a restart of the powerprotect-controller pod on the cluster is required in order to pick up the new pull secrets.
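One common way to restart the controller pod is to restart its deployment and let Kubernetes re-create the pod. The deployment name shown here is an assumption based on the pod name; confirm the actual name in your cluster first:

kubectl get deployments -n powerprotect
kubectl -n powerprotect rollout restart deployment powerprotect-controller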

Enable an asset source

An asset source must be enabled in PowerProtect Data Manager before you can add and register the asset source for the protection of assets.

About this task

Only the Administrator role can manage asset sources.

In some circumstances, multiple asset sources must be enabled. For example, both a vCenter Server asset source and a Kubernetes cluster asset source must be enabled for Tanzu Kubernetes guest cluster protection.

There are other circumstances where enabling an asset source is not required, such as the following:

For application agents and other agents such as File System and Storage Direct, an asset source is enabled automatically when you register and approve the agent host. For example, if you have not enabled an Oracle asset source but have registered the application host through the API or the PowerProtect Data Manager user interface, PowerProtect Data Manager automatically enables the Oracle asset source.


When you update to the latest version of PowerProtect Data Manager from an earlier release, any asset sources that were previously enabled appear in the PowerProtect Data Manager user interface. On a new deployment, however, no asset sources are enabled by default.

Steps

1. From the PowerProtect Data Manager user interface, select Infrastructure > Asset Sources, and then click + to reveal the New Asset Source tab.

2. In the pane for the asset source that you want to add, click Enable Source. The Asset Sources window updates to display a tab for the new asset source.

Results

You can now add or approve the asset source for use in PowerProtect Data Manager. For a vCenter server, Kubernetes cluster, SMIS Server, or PowerProtect Cloud Snapshot Manager tenant, select the appropriate tab in this window and click Add. For an application host, select Infrastructure > Application Agents and click Add or Approve as required.

NOTE: Although you can add a Cloud Snapshot Manager tenant to PowerProtect Data Manager in order to view its health, alerts, and the status of its protection, recovery, and system jobs, you cannot manage the protection of its assets from PowerProtect Data Manager. To manage the protection of its assets, use Cloud Snapshot Manager. For more information, see the PowerProtect Cloud Snapshot Manager Online Help.

Disable an asset source

If you enabled an asset source that you no longer require, and the host has not been registered in PowerProtect Data Manager, perform the following steps to disable the asset source.

About this task

NOTE: An asset source cannot be disabled when one or more sources are still registered or there are backup copies of the source assets. For example, if you registered a vCenter server and created policy backups for the vCenter virtual machines, then you cannot disable the vCenter asset source. But if you register a vCenter server and then delete it without creating any backups, you can disable the asset source.

Steps

1. From the PowerProtect Data Manager UI, select Infrastructure > Asset Sources, and then select the tab of the asset source that you want to disable. If no host registration is detected, a red Disable button appears.

2. Click Disable.

Results

PowerProtect Data Manager removes the tab for this asset source.

Delete an asset source

If you want to remove an asset source that you no longer require, perform the following steps to delete the asset source in the PowerProtect Data Manager UI.

About this task

Only the Administrator role can manage the asset sources.

Steps

1. From the PowerProtect Data Manager UI, select Infrastructure > Asset Sources, and then select the tab for the type of asset source that you want to delete.

2. Select the asset source name in the asset source list, and then click Delete.

3. At the warning prompt that appears, click Continue. The asset source is deleted from the list.


Results

PowerProtect Data Manager removes the specified asset source in the Asset Sources window.

Any associated assets that are protected by the protection policy are removed from the protection policy and their status is changed to deleted. These assets can be deleted automatically or manually. The PowerProtect Data Manager Administration and User Guide provides details on how to remove assets from PowerProtect Data Manager.

The copies of assets from the asset source are retained (not deleted). You can delete the copies from the copies page, if required.

Next steps

Manually remove PowerProtect components from the Kubernetes cluster. Removing PowerProtect Data Manager components from a Kubernetes cluster provides more information.
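The referenced section contains the complete cleanup procedure. As a rough sketch only, removal typically includes deleting the namespaces that PowerProtect Data Manager created on the cluster, in addition to any cluster-scoped roles and bindings it created:

kubectl delete namespace powerprotect
kubectl delete namespace velero-ppdm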

Add a VMware vCenter Server

Perform the following steps to add a vCenter Server as an asset source in the PowerProtect Data Manager UI for Tanzu Kubernetes guest cluster protection.

Prerequisites

Ensure that the asset source is enabled. Enable an asset source provides instructions.
Log in as a user with the Administrator role. Only the Administrator role can manage asset sources.
By default, PowerProtect Data Manager enforces SSL certificates during communication with vCenter Server. If a certificate appears and you trust the certificate, click Verify.

Note, however, that SSL certificate enforcement requires that the common name (cn) of the x509 certificate on the vCenter Server matches the hostname of the vCenter URL. The common name of the x509 certificate is typically the vCenter server fully qualified domain name (FQDN), but it could be the vCenter server IP address. You can inspect the vCenter server SSL certificate to determine whether the x509 common name is an FQDN or IP. When creating an asset source resource, in order to pass SSL certificate enforcement, the asset source resource hostname must match the common name of the x509 certificate on the vCenter server.

NOTE: It is highly recommended that you do not disable certificate enforcement. If disabling the certificate is required, carefully review the instructions in the section Disable vCenter SSL certificate validation.

Steps

1. From the left navigation pane, select Infrastructure > Asset Sources.

The Asset Sources window appears.

2. Select the vCenter tab.

3. Click Add. The Add vCenter dialog displays.

4. Specify the source attributes:

a. In the Name field, specify the vCenter Server name.
b. In the Address field, specify the fully qualified domain name (FQDN) or the IP address.

NOTE: For a vCenter Server, it is recommended that you use the FQDN instead of the IP address.

c. In the Port field, specify the port for communication if you are not using the default port, 443.

5. Under Host Credentials, choose an existing entry from the list to use for the vCenter user credentials. Alternatively, you can click Add from this list to add new credentials, and then click Save.

NOTE: Ensure that you specify the credentials for a user whose role is defined at the vCenter level, as opposed to being restricted to a lower-level container object in the vSphere object hierarchy.

6. If you want to make a subset of the PowerProtect Data Manager UI functionality available within the vSphere Client, move the vSphere Plugin slider to the right.

Available functionality includes the monitoring of active virtual machine/VMDK protection policies, and restore options such as Restore to Original, Restore to New, and Instant Access.


NOTE: You can unregister the vSphere plug-in at any time by moving the slider to the left.

7. By default, the vCenter discovery occurs automatically after adding the vCenter, and subsequent discoveries are incremental. If you want to schedule a full discovery at a certain time every day, move the Schedule Discovery slider to the right, and then specify a time.

8. If there is no hosting vCenter and you want to make this the vCenter Server that hosts PowerProtect Data Manager, select Add as hosting vCenter. If a vCenter Server has already been added as the hosting vCenter, this option will be greyed out.

Specify a vCenter server as the PowerProtect Data Manager host provides more information about adding a host vCenter.

9. If the vCenter server SSL certificate cannot be trusted automatically, a dialog box appears requesting certificate approval. Review the certificate, and then click Verify.

10. Click Save.

The vCenter Server information that you entered now appears as an entry in a table on the Asset Sources window. You can click the magnifying glass icon next to the entry to view more details, such as the next scheduled discovery, the number of assets within the vCenter, and whether the vSphere Plugin is enabled.

NOTE: Although PowerProtect Data Manager automatically synchronizes with the vCenter server under most circumstances, certain conditions might require you to initiate a manual discovery.

After discovery, PowerProtect Data Manager periodically starts an incremental discovery in the background to keep PowerProtect Data Manager updated with vCenter changes. You can perform an on-demand discovery at any time.

11. Optionally, you can set warning and failure thresholds for the available space on the datastore. Setting these thresholds enables you to check if enough storage space is available in the datastore to save the snapshot of the virtual machine during the backup process. The backup completes with a warning in the logs if the available free space in the datastore is less than or equal to the percentage indicated in the Datastore Free Space Warning Threshold. The backup fails if the available free space in the datastore is less than or equal to the percentage indicated in the Datastore Free Space Failure Threshold. To add Datastore Free Space Warning and Failure Thresholds:

a. Click the gear icon to open the vCenter Settings dialog.
b. Type a percentage value to indicate when a warning message should display due to low datastore free space.
c. Type a percentage value to indicate when a virtual machine backup failure should occur due to low datastore free space.
d. Click Save.

NOTE: Datastore free space thresholds are disabled by default.

Results

Upon a successful discovery of the vCenter server asset source, the assets in the vCenter display in the Infrastructure > Assets window.

You can modify the details for the vCenter asset source by selecting the vCenter in the Infrastructure > Asset Sources window and clicking Edit. You cannot, however, clear the Add as hosting vCenter checkbox when editing an asset source if this vCenter Server has already been added as the hosting vCenter. For this operation, use the Hosting vCenter window, as described in the section Specify a vCenter server as the PowerProtect Data Manager host.

NOTE: Discovery time is based on networking bandwidth. The resources that are discovered and the resources that are performing the discovery impact performance each time that you initiate a discovery process. It might appear that PowerProtect Data Manager is not updating the Asset Sources data while the discovery is in progress.

Next steps

Add a VM Direct appliance to facilitate data movement, and then create Tanzu Kubernetes protection policies to back up these assets. The PowerProtect Data Manager software comes bundled with an embedded VM Direct engine, which is automatically used as a fallback proxy for performing backups and restores when the added external proxies fail or are disabled. It is recommended that external proxies be deployed since the embedded VM Direct engine has limited capacity for performing backup streams. To add a VM Direct Engine, select Infrastructure > Protection Engines.


Specify a vCenter server as the PowerProtect Data Manager host

You select a vCenter server to be used as the PowerProtect Data Manager host from those already added or discovered.

About this task

Perform the following operations:

Steps

1. From the PowerProtect Data Manager user interface, click the settings (gear) icon, and then select Hosting vCenter.

The Hosting vCenter window appears.

2. Choose one of the following options:

Enter FQDN/IP: Select this option to manually enter the fully qualified domain name or IP address of the vCenter server and the port number, and to select the vCenter Host Credentials. The Host Credentials list is populated with vCenter servers that have already been added and discovered in PowerProtect Data Manager. If the host vCenter credentials do not appear in the list, select Add Credentials to enter this information.

Select FQDN/IP from asset sources: Select this option to obtain the host vCenter server information automatically from a vCenter asset source that has already been added and discovered in PowerProtect Data Manager.

3. Click Save.

Results

If the host vCenter server is added as an asset source in PowerProtect Data Manager, an icon displays next to this vCenter server in the Infrastructure > Asset Sources window.

Disable vCenter SSL certificate validation

If the vCenter server's SSL certificate cannot be trusted automatically, a dialog box appears when adding the vCenter server as an asset source in the PowerProtect Data Manager user interface, requesting certificate approval. It is highly recommended that you do not disable certificate enforcement.

If disabling of the SSL certificate is required, you can perform the following procedure.

CAUTION: These steps should only be performed if you are very familiar with certificate handling and the issues

that can arise from disabling a certificate.

1. Create a file named cbs_vmware_connection.properties in the /home/admin directory on the PowerProtect Data Manager appliance, with the following contents:

cbs.vmware_connection.ignore_vcenter_certificate=true

2. If not already created, create an application.yml file in the /usr/local/brs/lib/vmdm/config/ directory.

NOTE: The structure of this file requires that you separate fields into individual categories and subcategories, as shown in the following step.

3. In the application.yml file, add the following contents:

vmware_connection:
  ignore_vcenter_cert: true

discovery:
  ignore_vcenter_cert: true

4. Run cbs stop to stop the cbs service, and then cbs start to restart the service.

5. Run vmdm stop to stop the vmdm service, and then vmdm start to restart the service.

6. If the SSL certificate uses an FQDN, perform a test to determine if SSL certificate disabling was successful by adding a vCenter server using the vCenter server's IP address, and then verify that the asset source was added and virtual machine discovery was successful.
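For reference, steps 1 through 5 can be performed from a PowerProtect Data Manager command line as follows; this is a convenience sketch of the same steps, assuming application.yml does not already exist:

cat > /home/admin/cbs_vmware_connection.properties <<'EOF'
cbs.vmware_connection.ignore_vcenter_certificate=true
EOF

cat > /usr/local/brs/lib/vmdm/config/application.yml <<'EOF'
vmware_connection:
  ignore_vcenter_cert: true
discovery:
  ignore_vcenter_cert: true
EOF

cbs stop
cbs start
vmdm stop
vmdm start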


Add a Kubernetes cluster

Perform the following steps to add a Kubernetes cluster as an asset source in the PowerProtect Data Manager UI. When added, PowerProtect Data Manager automatically deploys resources on the cluster that enable the backup and recovery of namespaces.

Prerequisites

You must have Administrator privileges.
If your environment has firewall or other restrictions that might prevent pulling of the required images from Docker Hub, review the procedure in the section Prerequisites to Kubernetes cluster discovery.
If adding a Kubernetes guest cluster for vSphere CSI-based Persistent Volume Claims (PVCs), add a VM Direct protection engine in the vCenter where the Tanzu Kubernetes guest cluster is located.

About this task

NOTE: Discovery of a Kubernetes cluster discovers namespaces that contain volumes from both container storage interface (CSI) and non-CSI based storage. However, backup and recovery are supported only from CSI-based storage. Also, only PVCs with the VolumeMode Filesystem are supported.

Steps

1. From the left navigation pane, select Infrastructure > Asset Sources.

2. In the Asset Sources window, select the Kubernetes cluster tab.

3. Click Add.

4. In the Add Kubernetes cluster dialog box, specify the source attributes:

a. Tanzu Cluster: If adding a Kubernetes Tanzu guest cluster for protection of vSphere CSI-based PVCs, move the slider to the right.
b. Select vCenter: For a Kubernetes Tanzu guest cluster asset source, select the vCenter Server that contains the guest cluster from the list.

NOTE: Selecting a vCenter Server changes the method used for the Kubernetes protection policy backup. Instead of cProxy, a VM proxy (the VM Direct engine) will be used for the management and transfer of backup data, similar to what is used for virtual machine protection policies.

c. Name: The cluster name.
d. Address: The fully qualified domain name (FQDN) or the IP address of the Kubernetes API server.

NOTE: It is recommended that you use the FQDN instead of the IP address.

e. Port: Specify the port to use for communication when not using the default port, 443.

NOTE: The use of any port other than 443 or 6443 requires you to open the port on PowerProtect Data Manager first to enable outgoing communication. The procedure that is described in Recommendations and considerations when using a Kubernetes cluster provides more information.

5. Under Host Credentials, click Add to add the service account token for the Kubernetes cluster, and then click Save.

The service account must have the following privileges:
Get/Create/Update/List CustomResourceDefinitions
Get/Create/Update ClusterRoleBinding for 'cluster-admin' role
Create/Update 'powerprotect' namespace
Get/List/Create/Update/Delete all kinds of resources inside 'powerprotect' namespace
Get/List/Watch all namespaces in the cluster, as well as PV, PVC, storageclass, deployments, and pods in all these namespaces

NOTE: The admin-user service account in the kube-system namespace contains all these privileges. You can provide the token of this account, or an existing similar service account. Alternatively, create a service account that is bound to a cluster role that contains these privileges, and then provide the token of this service account.


If you do not want to provide a service account with cluster-admin privileges, the yaml files located in /usr/local/brs/lib/cndm/misc/rbac.tar.gz on the PowerProtect Data Manager appliance provide the definition of the cluster role with the privileges required for PowerProtect Data Manager. Follow the instructions in the README.txt within this tar file to create the required clusterroles and clusterrolebindings, and to provide the token of the service account created in the yaml files.
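If you need to retrieve the bearer token of an existing service account such as admin-user in kube-system, the following general Kubernetes commands can be used; they are not PowerProtect-specific, and the service account name is an example:

# Kubernetes 1.24 and later: request a token (time-limited by default)
kubectl -n kube-system create token admin-user

# Earlier releases: read the token from the automatically created secret
kubectl -n kube-system get secret $(kubectl -n kube-system get serviceaccount admin-user -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 --decode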

6. Click Verify to review the certificate and token information, and then click Accept. Upon successful validation, the status for the new credentials updates to indicate Accepted.

7. Click Save.

The Kubernetes cluster information that you entered now appears as an entry on the Asset Sources window, with a Discovery status of Unknown.

NOTE: Although PowerProtect Data Manager automatically synchronizes with the Kubernetes cluster to perform the initial discovery under most circumstances, certain conditions might require you to initiate a manual discovery.

8. (Optional) If you want to initiate a manual discovery, select the Kubernetes cluster, and then click Discover. Incremental discovery for a Kubernetes cluster in PowerProtect Data Manager is not supported. You can perform an on-demand (ad hoc) discovery at any time or set a scheduled discovery to update with changes in the Kubernetes cluster.

NOTE: Discovery time is based on networking bandwidth. The resources that are involved in the discovery process impact performance each time you initiate a discovery. It might appear that PowerProtect Data Manager is not updating the Asset Sources data while the discovery is in progress.

9. Verify that the Discovery Status column indicates OK, and then go to the Assets window.

Results

Upon adding the Kubernetes cluster as an asset source, a PowerProtect controller is installed on the cluster, which is also used to install Velero with the DD Object store plug-in and the vSphere plug-in. The namespaces in the Kubernetes cluster will appear in the Kubernetes tab of the Assets window. To view more details within this window, click the magnifying glass icon next to an entry. Also, if a namespace has associated PVCs that you want to exclude from a policy, you can click the link in the PVCs Exclusion column.
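To confirm that the components were deployed, you can check the namespaces that PowerProtect Data Manager creates on the cluster; exact pod names vary by release:

kubectl get pods -n powerprotect
kubectl get pods -n velero-ppdm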

NOTE: If namespace assets are not discovered after adding a Kubernetes cluster asset source, ensure that the bearer token that is provided for the Kubernetes asset source belongs to a service account that has the privileges as specified in step 5.

Next steps

Create Kubernetes protection policies to back up namespaces and PVCs.

Protection engine limitations

Observe the following points when planning and working with protection engines:

Deploy protection engines with fully qualified domain names (FQDNs) or IP addresses only. Short names are no longer supported. Existing protection engines that were deployed with short names are deprecated. A future release will require you to delete and redeploy these protection engines with FQDNs or IP addresses instead.
When you deploy protection engines with FQDNs, each FQDN must have a DNS record.
Protection engines are part of server disaster recovery backups. However, the disaster-recovery process does not automatically redeploy protection engines.

Add a VM Direct Engine

Perform the following steps in the Protection Engines window of the PowerProtect Data Manager UI to deploy an external VM Direct Engine, also referred to as a VM proxy. The VM Direct Engine facilitates data movement for virtual machine protection policies, Kubernetes cluster protection policies that require a VM proxy instead of the cProxy, and network attached storage (NAS) protection policies.


Prerequisites

Review the sections Requirements for an external VM Direct Engine, Transport mode considerations, and Protection engine limitations.

If applicable, complete all of the virtual network configuration tasks before you assign any virtual networks. The PowerProtect Data Manager Administration and User Guide provides more information.

About this task

The PowerProtect Data Manager software comes bundled with an embedded VM Direct Engine, which is automatically used as a fallback proxy for performing backups and restores when the added external proxies fail or are disabled. Dell Technologies recommends that you deploy external proxies by adding a VM Direct Engine for the following reasons:

An external VM Direct Engine for VM proxy backup and recovery can provide improved performance and reduce network bandwidth utilization by using source-side deduplication.
The embedded VM Direct Engine has limited capacity for backup streams.
The embedded VM Direct Engine is not supported for VMware Cloud on AWS operations.

An external VM Direct Engine is not required for virtual machine protection policies that use the Transparent Snapshot Data Mover (TSDM) protection mechanism. For these policies, the embedded VM Direct Engine is sufficient.

NOTE: Cloud-based OVA deployments of PowerProtect Data Manager do not support the configuration of data-traffic routing or VLANs. Those deployments skip the Networks Configuration page.

Steps

1. From the left navigation pane, select Infrastructure > Protection Engines.

The Protection Engines window appears.

2. In the VM Direct Engines pane of the Protection Engines window, click Add. The Add Protection Engine wizard displays.

3. On the Protection Engine Configuration page, complete the required fields, which are marked with an asterisk.

Hostname, Gateway, IP Address, Netmask, and Primary DNS: Note that only IPv4 addresses are supported.
vCenter to Deploy: If you have added multiple vCenter server instances, select the vCenter server on which to deploy the protection engine.

NOTE: Ensure that you do not select the internal vCenter server.

ESX Host/Cluster: Select the cluster or ESXi host on which you want to deploy the protection engine.
Network: Displays all the networks that are available under the selected ESXi Host/Cluster. For virtual networks (VLANs), this network carries Management traffic.
Data Store: Displays all datastores that are accessible to the selected ESXi Host/Cluster based on ranking (whether the datastores are shared or local) and available capacity (the datastore with the most capacity appearing at the top of the list). You can choose the specific datastore on which the protection engine resides, or leave the default selection to allow PowerProtect Data Manager to determine the best location to host the protection engine.
Transport Mode: Select Hot Add.
Supported Protection Type: Select whether this protection engine is intended for Virtual Machine, Kubernetes Tanzu guest cluster, or NAS asset protection.

4. Click Next.

5. On the Networks Configuration page:

If this is a cloud-based OVA deployment of PowerProtect Data Manager, click Next and proceed to step 7.

The Networks Configuration page configures the virtual network (VLAN) to use for Data traffic. To continue without virtual network configuration, leave the Preferred Network Portgroup selection blank and then click Next.

a. From the Preferred Network Portgroup list, select a VST (Virtual Switch Tagging) or VGT (Virtual Guest Tagging) network. If you select a VGT portgroup, the list displays all virtual networks within the trunk range. If you select a VST portgroup, the list displays only the virtual network for the current VLAN ID.

b. Select one or more virtual networks from the list.

A protection engine requires an IP address from the static IP pool for each selected virtual network. If there are not enough IP addresses in a pool, the wizard prompts you to supply additional addresses for that network.


Ensure that the selected virtual networks support a traffic type that is compatible with protection engines. The PowerProtect Data Manager Administration and User Guide provides more information about traffic types.

c. If required, type an available static IP address or IP address range in the Additional IP Addresses column for the indicated virtual network.

For convenience when working with multiple virtual networks, you can also use one of the Auto Expand options:

Expand Last IP: The wizard increments the host portion of the last IP address in the static IP pool. Click Apply.
Same Last Digit: The wizard adds the network portion of the IP address to the specified value. Type the host portion of the IP address and then click Apply.

The wizard updates the value in the Additional IP addresses column for each selected network. Verify the proposed IP addresses.

d. Click Next.

6. When adding a VM Direct Engine for Kubernetes guest cluster protection, add a second network interface card (NIC) if the PowerProtect controller pod running in the guest cluster cannot reach the vProxy on the primary network. Provide information for the second NIC, and then click Next.

7. On the Summary page, review the information and then click Finish.

The protection engine is added to the VM Direct Engines pane. An additional column indicates the engine purpose. Note that it can take several minutes to register the new protection engine in PowerProtect Data Manager. The protection engine also appears in the vSphere Client.

Results

When an external VM Direct Engine is deployed and registered, PowerProtect Data Manager uses this engine instead of the embedded VM Direct Engine for any data protection operations that involve virtual machine protection policies. If every external VM Direct Engine is unavailable, PowerProtect Data Manager uses the embedded VM Direct Engine as a fallback to perform limited scale backups and restores. If you do not want to use the external VM Direct Engine, you can disable this engine. Additional VM Direct actions provides more information.

NOTE: The external VM Direct Engine is always required for VMware Cloud on AWS operations, Kubernetes cluster protection policies that require a VM proxy instead of the cProxy, and NAS protection policies. If no external VM Direct Engine is available for these solutions, data protection operations fail.

Next steps

If the protection engine deployment fails, review the network configuration of PowerProtect Data Manager in the System Settings window to correct any inconsistencies in network properties. After successfully completing the network reconfiguration, delete the failed protection engine and then add the protection engine in the Protection Engines window.

When configuring the VM Direct Engine in a VMware Cloud on AWS environment, if you deploy the VM Direct Engine to the root of the cluster instead of inside the Compute-ResourcePool, you must move the VM Direct Engine inside the Compute-ResourcePool.


Chapter 3: Managing Storage, Assets, and Protection for Kubernetes Clusters

Topics:

Add protection storage
Add a protection policy for Kubernetes namespace protection
Add a Cloud Tier schedule to a protection policy
Extended retention
Edit the retention period for backup copies
Delete backup copies
Add a service-level agreement

Add protection storage

Add and configure protection storage to use as a target for protection policies. Adding protection storage requires the Administrator role.

Prerequisites

NOTE:

When adding a High Availability PowerProtect DD system, observe the following points:

Do not add the individual active and standby DD systems to PowerProtect Data Manager.

In the Address field, use the hostname that corresponds to the floating IP address of the High Availability PowerProtect DD system.

The High Availability PowerProtect DD system is verified with the root certificate.

About this task

The PowerProtect Data Manager Administration and User Guide provides more information about protection storage and related concepts:

High availability options
Smart Scale system pools, a single interface to a flexible group of pool members
Working with protection storage
Working with storage units

Steps

1. From the left navigation pane, select Infrastructure > Storage.

The Storage window appears.

2. In the Protection Storage tab, click Add.

3. In the Add Storage dialog box, select a storage system (PowerProtect DD System or PowerProtect DD Management Center).

For a system pool, select DDMC.

4. To add a High Availability PowerProtect DD system, select the checkbox.

5. Specify the storage system attributes:

a. In the Name field, specify a storage name.
b. In the Address field, specify the hostname, fully qualified domain name (FQDN), or the IP address.



c. In the Port field, specify the port for SSL communication. Default is 3009.

6. Under Host Credentials, if you have already configured protection storage credentials that are common across storage systems, select an existing entry. Alternatively, click Add to add new credentials, and then click Save.

7. If a trusted certificate does not exist on the storage system, a dialog box appears requesting certificate approval. Click Verify to review the certificate, and then click Accept.

8. Click Save to exit the Add Storage dialog and initiate the discovery of the storage system.

A dialog box appears to indicate that the request to add storage has been initiated.

9. In the Storage window, click Discover to refresh the window with any newly discovered storage systems. When a discovery completes successfully, the Status column updates to OK.

10. To modify a storage system location, complete the following steps:

A storage system location is a label that is applied to a storage system. If you want to store your copies in a specific location, the label helps you select the correct storage system during policy creation.

a. In the Storage window, select the storage system from the table.
b. Click More Actions > Set Location. The Set Location window appears.
c. Click Add in the Location list. The Add Location window appears.
d. In the Name field, type a location name for the asset, and click Save.

Results

PowerProtect Data Manager displays external DD systems only in the Storage window Name column. PowerProtect Data Manager displays PowerProtect DD Management Center storage types in the Managed By column.

Add a protection policy for Kubernetes namespace protection

A Kubernetes protection policy enables you to select namespaces in the Kubernetes cluster that you want to back up. Use the PowerProtect Data Manager UI to create a Kubernetes namespace protection policy.

Prerequisites

NOTE: Discovery of a Kubernetes cluster discovers namespaces that contain volumes from both container storage interface (CSI) and non-CSI based storage. However, backup and recovery are supported only from CSI-based storage. If you select a namespace from non-CSI storage, the backup fails. Optionally, if you want to protect a namespace that contains non-CSI storage, you can exclude the non-CSI PVC from the backup. If excluding the PVC, ensure that such a policy still meets your protection requirements.

If applicable, complete all of the virtual network configuration tasks before you assign any virtual networks to the protection policy.

The PowerProtect Data Manager Administration and User Guide provides more information about working with storage units, including applicable limitations and security considerations.

Before performing any backups on a weekly or monthly schedule from the protection policy, ensure that the PowerProtect Data Manager time zone is set to the local time zone.

About this task

When PowerProtect Data Manager backs up a Kubernetes namespace, the following items are included in the protection policy backup:

Kubernetes resources, in addition to the contents of the persistent volumes bound to PVCs in that namespace. Kubernetes resources are backed up using Velero. Upstream Kubernetes resources such as Deployments, StatefulSets, DaemonSets, Pods, Secrets, ConfigMap, and Custom Resources, are backed up as part of the Kubernetes resources.

Cluster resources are backed up automatically as part of the Kubernetes protection policy. These resources include cluster roles, cluster role bindings, and custom resource definitions (CRDs) that are associated with namespace-scoped resources.

For OpenShift, OpenShift-specific resources such as DeploymentConfig, BuildConfig, and ImageStream are also protected using the Velero OpenShift plug-in.


NOTE: Container images are not protected as part of the ImageStream resource.

Steps

1. From the left navigation pane, select Protection > Protection Policies.

2. In the Protection Policies window, click Add.

The Add Policy wizard appears.

3. On the Type page, specify the following fields, and then click Next:

Name: Type a descriptive name for the protection policy.
Description: Type a description for the policy.
Type: For the policy type, select Kubernetes.

4. On the Purpose page, select from the following options to indicate the purpose of the new protection policy group, and then click Next:

Crash Consistent: Select this type for point-in-time backup of namespaces.
Exclusion: Select this type if there are assets within the protection policy that you plan to exclude from data protection operations.

5. In the Assets page, select one or more unprotected namespaces that you want to back up as part of this protection policy.

If the namespace that you want to protect is not listed, perform one of the following:
Click Find More Assets to perform an updated discovery of the Kubernetes cluster.
Use the Search box to search by asset name.

6. (Optional) For the selected namespaces, click the link in the PVCs Excluded column, if available, to clear any PVCs that you want to exclude from the backup. By default, all PVCs are selected for inclusion.

7. Click Next. The Objectives page appears.

8. On the Objectives page, select a policy-level Service Level Agreement (SLA) from the Set Policy Level SLA list, or select Add to open the Add Service Level Agreement wizard and create a policy-level SLA.

Add a service-level agreement provides instructions.

9. Click Add under Primary Backup. The Add Primary Backup dialog appears.

10. On the Schedules pane of the Add Primary Backup dialog:

a. Specify the following fields to schedule the synthetic full backup of this protection policy:

Create a Synthetic Full...: Specify how often to create a synthetic full backup. For Persistent Volume Claims (PVCs) on VMware first class disks (FCDs), a Synthetic Full backs up only the changed blocks since the last backup to create a new full backup. Also, namespace metadata is backed up in full upon every backup.

Retain For: Specify the retention period for the synthetic full backup.

You can extend the retention period for the latest primary backup copy by adding an Extend Retention backup. For example, your regular schedule for daily backups can use a retention period of 30 days, but you can apply extended retention backups to keep the full backups taken on Mondays for 10 weeks. Step 13 provides instructions.

NOTE: For database backups, PowerProtect Data Manager chains the dependent backups together. For example, the synthetic full or transaction log backups are chained to their base full backup. The backups do not expire until the last backup in the chain expires. Backup chaining ensures that all synthetic full and transaction log backups are recoverable until they have all expired.

Start and End: For the activity window, specify a time of day to start the synthetic full backup, and a time of day after which backups cannot be started.

NOTE: Any backups started before the End Time occurs continue until completion.

Click Save to save and collapse the backup schedule.

b. Click Add Backup to periodically force a full (level 0) backup, and then specify the following fields to schedule the full backup of this protection policy:

NOTE: When you select this option, the backup chain is reset.

Create a Full...: Specify whether you want to create a weekly or monthly full backup.
Repeat on: Depending on the frequency of the full backup schedule, specify the day of the week or date of the month to perform the full backup.


Retain For: Specify the retention period for the full backup. This can be the same value as the synthetic full backup schedule, or a different value.

Start and End: For the activity window, specify a time of day to start the full backup, and a time of day after which backups cannot be started.

NOTE: Any backups started before the End Time occurs continue until completion.

Click Save to save and collapse the backup schedule.

11. On the Target pane of the Add Primary Backup dialog, specify the following fields:

a. Storage Name: Select a backup destination from the list of existing protection storage systems, or select Add to add a system and complete the details in the Storage Target window.

NOTE: The Space field indicates the total amount of space, and the percentage of available space, on the protection storage system.

b. Storage Unit: Select whether this protection policy should use a New storage unit on the selected protection storage system, or select an existing storage unit from the list. Hover over a storage unit to view the full name and statistics for available capacity and total capacity, for example, testvmplc-ppdm-daily-123ab (300 GB/1 TB). When you select New, a new storage unit in the format policy name-host name-unique identifier is created in the storage system upon policy completion. For example, testvmplc-ppdm-daily-123cd.

c. Network Interface: Select a network interface from the list, if applicable.
d. Retention Lock: Move the Retention Lock slider to the right to enable retention locking for these backups on the selected system. PowerProtect Data Manager uses Governance mode for retention locking, which means that the lock can be reverted at any time if necessary. Moving the Retention Lock slider on or off applies to the current backup copy only, and does not impact the retention lock setting for existing backup copies.

NOTE: Primary backups are assigned a default retention lock period of 14 days. Replicated backups, however, are not assigned a default retention lock period. If you enable Retention Lock for a replicated backup, ensure that you set the Retain For field in the Add Replication backup schedule dialog to a minimum of 14 days so that the replicated backup does not expire before the primary backup.

e. SLA: Select an existing service level agreement that you want to apply to this schedule from the list, or select Add to create an SLA within the Add Service Level Agreement wizard.

Add a service-level agreement provides instructions.

12. Click Save to save your changes and return to the Objectives page.

The Objectives page updates to display the name and location of the target storage system under Primary Backup.

NOTE: After completing a backup schedule, you can change any schedule details by clicking Edit next to the schedule.

13. Optionally, extend the retention period for the latest primary backup copy:

Extended retention provides more information about Extend Retention functionality.

a. Click Extend Retention next to Primary Backup. An entry for Extend Retention is created below Primary Backup.
b. Under Extend Retention, click Add. The Add Extended Retention dialog appears.
c. Retain the next scheduled full copy every...: Specify a weekly, monthly, or yearly recurrence for the extended retention backup schedule.
d. Repeat on: Depending on the frequency of the full backup schedule, specify the day of the week, the date of the month, or the date of the year to perform the extended retention backup.
e. Retain For: Specify the retention period for the backup. You can retain an extended retention backup for a maximum of 70 years.
f. Click Save to save your changes and return to the Objectives page.

14. Optionally, replicate these backups to a remote storage system:

a. Click Replicate next to Primary Backup or Extend Retention. An entry for Replicate is created to the right of the primary or extended retention backup.

NOTE: PowerProtect Data Manager supports replicating an extended retention backup only if the primary backup already has one or more replication stages. Also, for replication of an extended retention backup, you can only select the protection storage systems that the replication stages use, based on the primary stage.

For example, if there are 6 protection storage systems available (DD001-DD006), and the primary backup is on DD001:

Replicate1 based on the primary backup is replicated to DD002
Replicate2 based on the primary backup is replicated to DD003
The extended retention backup is backed up to DD001
Replicate3 based on the extended retention backup must be replicated to DD002 or DD003

b. Under Replicate, click Add. The Add Replication dialog appears.

NOTE: To enable replication, ensure that you add remote protection storage as the replication location. Add protection storage provides detailed instructions about adding remote protection storage.

c. Complete the schedule details in the Add Replication dialog, and then click Save to save your changes and return to the Objectives page.

The schedule frequency can be every day, week, month, or x hours for replication of the primary backup, and every day, week, month, year, or x hours for replication of the extended retention backup. For daily, weekly, and monthly schedules, the numeric value cannot be modified. For hourly, however, you can edit the numeric value. For example, if you set Create a Full backup every 4 hours, you can set a value of anywhere from 1 to 12 hours.

All replication copies of the primary backup schedule will use the same retention period, and by default, this retention period is inherited from the Retain For value of the synthetic full backup schedule. To specify a different retention period for all of the replication copies of this primary backup schedule, click Edit, change the value in the Retain For field, and then click Save. This retention period will be applied to all of the replicated copies (synthetic full and full) of this primary backup schedule.

When creating multiple replication copies of the same protection policy, Dell Technologies recommends selecting a different storage system for each copy.

15. Optionally, to move backups from DD storage to Cloud Tier, add a Cloud stage for the primary, replication, or extended retention schedule:

a. Click Cloud Tier next to Primary Backup or Extend Retention or, if adding a Cloud stage for a replication schedule that you have added, click Cloud Tier under Replicate. An entry for Cloud Tier is created to the right of the primary or extended retention backup schedule, or below the replication schedule.

b. Under the entry for Cloud Tier, click Add. The Add Cloud Tier Backup dialog appears, with summary schedule information for the parent node to indicate whether you are adding this Cloud Tier stage for the primary backup schedule, the extended retention backup schedule, or the replication schedule.

c. Complete the schedule details in the Add Cloud Tier Backup dialog, and then click Save to save your changes and return to the Objectives page.

Add a Cloud Tier schedule to a protection policy provides detailed instructions for adding a Cloud stage for a primary, replication, or extended retention schedule.

NOTE: In order to move a backup or replica to Cloud Tier, schedules must have a retention time of 14 days or more. Also, discovery of protection storage that is configured with a Cloud unit is required.

16. Click Next. The Summary page appears.

17. Review the protection policy group configuration details, and then click Finish. Except for the protection policy type, you can click Edit next to any details to change the policy information.

An informational message appears to confirm that PowerProtect Data Manager has saved the protection policy.

When the new protection policy is created and assets are added to the protection policy, PowerProtect Data Manager performs backups according to the backup schedule.

18. Click OK to exit the window, or click Go to Jobs to open the Jobs window.

From the Jobs window, you can monitor the progress of the new Kubernetes cluster protection policy backup and associated tasks. You can also cancel any in-progress or queued job or task.

NOTE: If a Kubernetes cluster is running on vSphere and using vSphere CSI storage, the job details indicate that the optimized data path is being used for the backup.

Next steps

If the backup fails with the error Failed to create Proxy Pods. Creating Pod exceeds safeguard limit of 10 minutes, verify that the CSI driver is functioning properly, such that the driver can create snapshots and a PVC from the VolumeSnapshot datasource. Also, ensure that you clean up any orphan VolumeSnapshot resources that still exist in the namespace.


Add a Cloud Tier schedule to a protection policy

For some protection policy types, you can add a Cloud Tier schedule to a protection policy in order to perform backups to Cloud Tier.

Prerequisites

Ensure that a protection storage system is set up for Cloud Tiering.

About this task

You can create the Cloud Tier schedule from Primary Backup, Replicate, and Extend Retention stages. Schedules must have a retention time of 14 days or more.

Cloud Tiering happens at 00:00 UTC each day. Depending on your time zone, this time may be within business hours and thus Cloud Tiering may impact available network bandwidth. Cloud Tiering applies to both centralized and self-service protection policies.

Steps

1. Log in to the PowerProtect Data Manager user interface as a user with the Administrator role.

2. From the PowerProtect Data Manager UI, select Protection > Protection Policies, and then click Add.

The Add Policy wizard appears.

3. On the Type page, enter a name and description, select the type of system to back up, and click Next.

The following protection policy types support Cloud Tiering:

Virtual machine
Microsoft SQL Server
Microsoft Exchange Server
Oracle
SAP HANA
File System
Kubernetes

4. On the Purpose page, select from the available options to indicate the purpose of the new protection policy, and then click Next.

5. On the Assets page, select the assets that you want to protect with this policy, and then click Next.

6. On the Objectives page, click Add under Primary Backup if the primary backup schedule is not already created, and fill out the fields in the Target and Schedules panes on the Add Primary Backup dialog.

NOTE: There is no minimum recurrence required for the Cloud stage; however, the Cloud Tier schedule requires a minimum retention period of 14 days in the Retain For field.

7. Click Cloud Tier next to Primary Backup or Extend Retention or, if adding a Cloud stage for a replication schedule that you have added, click Cloud Tier under Replicate. An entry for Cloud Tier is created to the right of the primary backup or extended retention schedule, or below the replication schedule.

8. Under the entry for Cloud Tier, click Add. The Add Cloud Tier Backup dialog appears, with summary schedule information for the parent node. This information indicates whether you are adding this Cloud Tier stage for the primary backup schedule, the extended retention schedule, or the replication schedule.

9. In the Add Cloud Tier Backup dialog box, set the following parameters and then click Save:

Select the appropriate storage unit from the Cloud Target list.
For Tier After, set a time of 14 days or more.

The protection policy schedule is now enabled with Cloud Tiering.

10. Click Next to proceed with the remaining pages of the Add Policy wizard, verify the information, and then click Finish. A new job is created, which you can view under the Jobs tab after the job completes.


Extended retention

You can extend the retention period for the primary backup copy for long term retention. For example, your regular schedule for daily backups can use a retention period of 30 days, but you can extend the retention period to keep the full backups taken on Mondays for 10 weeks.

Both centralized and self-service protection policies support weekly, monthly, and yearly recurrence schedules to meet the demands of your compliance objectives. For example, you can retain the last full backup containing the last transaction of a fiscal year for 10 years. When you extend the retention period of a backup in a protection policy, you can retain scheduled full backups with a repeating pattern for a specified amount of time.

For example:

Retain full yearly backups that are set to repeat on the first day of January for 5 years.
Retain full monthly backups that are set to repeat on the last day of every month for 1 year.
Retain full yearly backups that are set to repeat on the third Monday of December for 7 years.

Preferred alternatives

When you define an extended retention stage for a protection policy, you define a set of matching criteria that select preferred backups to retain. If the matching criteria do not identify a matching backup, PowerProtect Data Manager automatically retains the preferred alternative backup according to one of the following methods:

Look-back: Retain the last available full backup that was taken before the date that the matching criteria specify.
Look-forward: Retain the next available full backup that was taken after the date that the matching criteria specify.

For example, consider a situation where you configured a protection policy to retain the daily backup for the last day of the month to extended retention. However, a network issue caused that backup to fail. In this case, look-back matching retains the backup that was taken the previous day, while look-forward matching retains the backup that was taken the following day.

By default, PowerProtect Data Manager uses look-back matching to select the preferred alternative backup. A grace period defines how far PowerProtect Data Manager can look in the configured direction for an alternative backup. If PowerProtect Data Manager cannot find an alternative backup within the grace period, extended retention fails.

You can use the REST API to change the matching method or the grace period for look-forward matching. The PowerProtect Data Manager Public REST API documentation provides instructions. If no backup is available within the defined matching period, you can change the matching method so that a different backup is selected.

For look-forward matching, the next available backup can be an ad-hoc backup or the next scheduled backup.

Selecting backups by weekday

This section applies to centralized protection policies. Self-service protection policies have no primary backup schedule configuration.

When you configure extended retention to match backups by weekday, PowerProtect Data Manager may identify a backup that was taken on one weekday as being taken on a different weekday. This behavior occurs when the backup window does not align with the start of the day. PowerProtect Data Manager identifies backups according to the day on which the corresponding backup window started, rather than the start of the backup itself.

For example, consider a backup schedule with an 8:00 p.m. to 6:00 a.m. backup window:

Backups that start at 12:00 a.m. on Sunday and that end at 6:00 a.m. on Sunday are identified as Saturday backups, since the backup window started on Saturday.

Backups that start at 8:01 p.m. on Sunday and that end at 12:00 a.m. on Monday are identified as Sunday backups, since the backup window started on Sunday.

Backups that start at 12:00 a.m. on Monday and that end at 6:00 a.m. on Monday are identified as Sunday backups, since the backup window started on Sunday.

In this example, when you select Sunday backups for extended retention, PowerProtect Data Manager does not retain backups that were taken between 12:00 a.m. and 8:00 p.m. This behavior happens even though the backups occurred on Sunday. Instead, PowerProtect Data Manager selects the first available backup that started after 8:00 p.m. on Sunday for extended retention.

If no backups were created between 8:01 p.m. on Sunday and 6:00 a.m. on Monday, PowerProtect Data Manager retains the next alternative to extended retention. In this example, the alternative was taken after 6:00 a.m. on Monday.


Extended retention backup behavior

When PowerProtect Data Manager identifies a matching backup, automatic extended retention creates a job at the beginning of the backup window for the primary stage. This job remains queued until the end of the backup window and then starts.

The following examples describe the behavior of backups with extended retention for centralized and self-service protection.

Centralized protection

For an hourly primary backup schedule that starts on Sunday at 8:00 p.m. and ends on Monday at 6:00 p.m. with a weekly extended retention schedule that is set to repeat every Sunday, PowerProtect Data Manager selects the first available backup starting after 8:00 p.m. on Sunday for long-term retention.

The following diagram illustrates the behavior of backups with extended retention for a configured protection policy. In this example, full daily backups starting at 10:00 p.m. and ending at 6:00 a.m. are kept for 1 week. Full weekly backups are set to repeat every Sunday and are kept for 1 month.

Figure 1. Extended retention backup behavior

Self-service protection

For self-service backups, PowerProtect Data Manager uses a default backup window of 24 hours. For a backup schedule that starts on Sunday at 12:00 p.m. and ends on Monday at 12:00 p.m. with a weekly extended retention schedule that is set to repeat every Sunday, PowerProtect Data Manager selects the first available backup that is taken between 12:00 p.m. on Sunday and 12:00 p.m. on Monday for long-term retention.

Replication of extended retention backups

You can change the retention time of selected full primary backups in a replication stage by adding a replication stage to the extended retention backup. The rules in the extended retention stage define the selected full primary backups. Review the following information about replication of extended retention backups.

Before you configure replication of extended retention backups, create a replication stage for the primary backup.
Configure the replication stage of the extended retention backup and match this stage with one of the existing replication stages that are based on the primary backup.
Any changes to a new or existing storage unit in the extended retention replication stage or in the replication stage of the primary backup are applied to both replication stages.

The replication stage of extended retention backups only updates the retention time of replicated backup copies and does not create any new backup copies in the replication storage.


Edit the retention period for backup copies

You can edit the retention period of one or more backup copies to extend or shorten the amount of time that backups are retained.

About this task

You can edit retention for all asset types and backup types.

Steps

1. From the PowerProtect Data Manager UI, select Infrastructure > Assets.

2. On the Assets window, select the tab for the asset type for which you want to edit retention. If a policy has been assigned, the table lists the assets that have been discovered, along with the associated protection policy.

NOTE: For virtual machine assets, you can click the link in the Disk Excluded column next to a virtual machine asset to view VMDKs that have been excluded from the protection policy. You cannot, however, edit disk inclusion or exclusion from this window. To change the disks that are excluded for a protected asset, select the policy from the Protection Policies window and click Edit.

3. Select a protected asset from the table, and then click View Copies. The Copy Locations pane identifies where the backups are stored.

4. In the left pane, click the storage icon to the right of the icon for the asset, for example, DD. The table in the right pane lists the backup copies.

5. Select one or more backup copies from the table and click Edit Retention.

6. Choose one of the following options:

To select a calendar date as the expiration date for backups, select Retention Date.
To define a fixed retention period in days, weeks, months, or years after the backup is performed, select Retention Value. For example, you could specify that backups expire after 6 months.

NOTE: When you edit the retention period for copies that are retention locked, you can only extend the retention period.

7. When satisfied with the changes, click Save. The asset is displayed in the list with the changes. The Retention column displays both the original and new retention period, and indicates whether the retention period has been extended or shortened.

Delete backup copies

In addition to deleting backups upon expiration of the retention period, PowerProtect Data Manager enables you to manually delete backup copies from protection storage.

About this task

If you no longer require a backup copy and the retention lock is not enabled, you can delete backup copies prior to their expiration date.

You can perform a backup copy deletion that deletes only a specified part of a backup copy chain, without impacting the ability to restore other backup copies in the chain. When you select a specific backup copy for deletion, only that backup copy and the backup copies that depend on the selected backup copy are deleted. For example, when you select to delete a full backup copy, any other backup copies that depend on the full backup copy are also deleted.

Steps

1. From the PowerProtect Data Manager UI, select Infrastructure > Assets.

2. From the Assets window, select the tab for the asset type for which you want to delete copies. If a policy has been assigned, the table lists the assets that have been discovered, along with the associated protection policy.

3. Select a protected asset from the table, and then click View Copies. The Copy Locations pane identifies where the backups are stored.

4. In the left pane, click the storage icon to the right of the icon for the asset, for example, DD. The table in the right pane lists the backup copies.


5. Select one or more copies from the table that you want to delete from the DD system, and then click Delete.

A preview window opens and displays the selected backup copies.

NOTE: For assets with backup copies that are chained together, such as Microsoft SQL Server databases, Oracle databases, SAP HANA databases, and application-aware virtual machines, the preview window lists all the backup copies that depend on the specified backup copy. If you delete a backup copy, PowerProtect Data Manager deletes the specified backup copy and all backup copies that depend on the specified backup copy.

6. For all asset types, you can choose to keep the latest backup copies or delete them. By default, PowerProtect Data Manager keeps the latest backup copies. To delete the latest backup copies, clear the checkbox next to Include latest copies.

For VMAX storage group backup copies, you can choose to delete copies that are grouped together in the same protection transaction or delete only selected copies. By default, PowerProtect Data Manager deletes copies that are grouped together in the same protection transaction. To delete only selected copies, clear the checkbox next to Include copies in the same protection transaction.

7. To delete the backup copies, in the preview window, click Delete.

NOTE: The delete operation may take a few minutes and cannot be undone.

An informational dialog box opens to confirm the copies are being deleted. To monitor the progress of the operation, click Go to Jobs. To view the list of backup copies and their status, click OK.

NOTE: If the data deletion is successful but the catalog deletion is unsuccessful, then the overall deletion job status appears as Completed with Exceptions.

When the job completes, the task summary provides details of each deleted backup copy, including the time that each copy was created, the backup level, and the retention time. The time of copy creation and the retention time are shown in UTC.

An audit log is also generated and provides details of each deleted backup copy, including the time that each copy was created, the backup level, and the retention time. The time of copy creation and the retention time are shown in UTC. Go to Alerts > Audit Logs to view the audit log.

8. Verify that the copies are deleted successfully from protection storage. If the deletion is successful, the deleted copies no longer appear in the table.

Retry a failed backup copy deletion

If a backup copy is not deleted successfully, you can manually retry the operation.

Steps

1. From the PowerProtect Data Manager UI, select Infrastructure > Assets.

2. From the Assets window, select the tab for the asset type for which you want to delete copies. If a policy has been assigned, the table lists the assets that have been discovered, along with the associated protection policy.

3. Select a protected asset from the table, and then click View Copies. The Copy Locations pane identifies where the backups are stored.

4. In the left pane, click the storage icon to the right of the icon for the asset, for example, DD. The table in the right pane lists the backup copies.

5. Select one or more backup copies with the Deletion Failed status from the table, and then click Delete.

You can also filter and sort the list of backup copies by status in the Copy Status column.

The system displays a warning to confirm you want to delete the selected backup copies.

6. Click OK. An informational dialog box opens to confirm that the copies are being deleted. To monitor the progress of the operation, click Go to Jobs. To view the list of backup copies and their status, click OK.

7. Verify that the copies are successfully deleted from protection storage. If the deletion is successful, the deleted copies no longer appear in the table.


Export data for deleted backup copies

This option enables you to export results of deleted backup copies to a .csv file so that you can download an Excel file of the data.

Steps

1. From the PowerProtect Data Manager UI, select Infrastructure > Assets.

2. From the Assets window, select the tab for the asset type for which you want to export results of deleted backup copies. If a policy has been assigned, the table lists the assets that have been discovered, along with the associated protection policy.

3. Select one or more protected assets from the table and then select More Actions > Export Deleted Copies.

If you do not select an asset, PowerProtect Data Manager exports the data for deleted backup copies for all assets for the specific asset type.

4. Specify the following fields for the export:

a. Time Range

The default is Last 24 Hours.

b. Copy Status

In order to export data for deleted backup copies, the backup copies must be in one of the following states:

Deleted: The copy is deleted successfully from protection storage and, if applicable, the agent catalog is deleted successfully from the agent host.
Deleting: Copy deletion is in progress.
Deletion Failed: Copy deletion from protection storage is unsuccessful.

NOTE: You cannot export data for backup copies that are in an Available state.

5. Click Download. If applicable, the navigation window appears for you to select the location to save the .csv file.

6. Save the .csv file in the desired location and click Save.

Remove backup copies from the PowerProtect Data Manager database

This option enables you to delete the backup copy records from the PowerProtect Data Manager database, but keep the backup copies in protection storage.

About this task

For backup copies that could not be deleted from protection storage, you can remove the backup copies from the PowerProtect Data Manager database. Removing the backup copies from PowerProtect Data Manager does not delete the copies in protection storage.

Steps

1. From the PowerProtect Data Manager UI, select Infrastructure > Assets.

2. From the Assets window, select the tab for the asset type for which you want to delete copies. If a policy has been assigned, the table lists the assets that have been discovered, along with the associated protection policy.

3. Select a protected asset from the table, and then click View Copies. The Copy Locations pane identifies where the backups are stored.

4. In the left pane, click the storage icon to the right of the icon for the asset, for example, DD. The table in the right pane lists the backup copies.

5. Select one or more backup copies with the Deletion Failed status from the table, and then click Remove from PowerProtect. The system displays a warning to confirm you want to delete the selected backup copies.

6. Click OK. An informational dialog box opens to confirm that the copies are being deleted. To monitor the progress of the operation, click Go to Jobs. To view the list of backup copies and their status, click OK.


7. Verify that the copies are deleted from the PowerProtect Data Manager database. If the deletion is successful, the deleted copies no longer appear in the table. The backup copies remain in protection storage.

Add a service-level agreement

SLA Compliance in the PowerProtect Data Manager UI enables you to add a service-level agreement (SLA) that identifies your service-level objectives (SLOs). You use the SLOs to verify that your protected assets are meeting the service-level agreements (SLAs).

About this task

NOTE: When you create an SLA for Cloud Tier, you can include only full backups in the SLA.

Steps

1. From the PowerProtect Data Manager UI, select Protection > SLA Compliance.

The SLA Compliance window appears.

2. Click Add or, if the assets that you want to apply the SLA to are listed, select these assets and then click Add.

The Add Service Level Agreement wizard appears.

3. Select the type of SLA that you want to add, and then click Next.

Policy. If you choose this type, go to step 4.
Backup. If you choose this type, go to step 5.
Extended Retention. If you choose this type, go to step 6.
Replication. If you choose this type, go to step 7.
Cloud Tier. If you choose this type, go to step 8.

You can select only one type of Service Level Agreement.

4. If you selected Policy, specify the following fields regarding the purpose of the new Policy SLA:

a. The SLA Name.
b. If applicable, select Minimum Copies, and specify the number of Backup, Replication, and Cloud Tier copies.
c. If applicable, select Maximum Copies, and specify the number of Backup, Replication, and Cloud Tier copies.
d. If applicable, select Available Location and select the applicable locations. To add a location, click Add Location.

Options include the following:

In: Include locations of all copies in the SLO locations. Selecting this option does not require every SLO location to have a copy.
Must In: Include locations of all copies in the SLO locations. Selecting this option requires every SLO location to have at least one copy.
Exclude: Locations of all copies must be non-SLO locations.

e. If applicable, select Allowed in Cloud through Cloud Tier/Cloud DR.
f. Click Finish, and then go to step 9.

5. If you selected Backup, specify the following fields regarding the purpose of the new Backup SLA:

a. The SLA Name.
b. If applicable, select Recovery Point Objective required (RPO), and then set the duration. The purpose of an RPO is business continuity planning, and it indicates the maximum targeted period in which data (transactions) might be lost from an IT service due to a major incident.

NOTE: You can select only Recovery Point Objective required to configure it as an independent objective in the SLA, or select both Recovery Point Objective required and Compliance Window for copy type. If you select both, the RPO setting must be one of the following:

Greater than 24 hours or more than the Compliance Window duration, in which case RPO validation occurs independent of the Compliance Window.
Less than or equal to the Compliance Window duration, in which case RPO validation occurs within the Compliance Window.

c. If applicable, select Compliance Window for copy type, and then select a schedule level from the list (for example, All, Full, Cumulative) and set the duration. Duration indicates the amount of time necessary to create the backup copy. Ensure that the Start Time and End Time of backup copy creation fall within the specified Compliance Window duration.

This window specifies the time during which you expect the specified activity to take place. Any specified activity that occurs outside of this Start Time and End Time triggers an alert.

d. If applicable, select the Verify expired copies are deleted option.

Verify expired copies are deleted is a compliance check to see if PowerProtect Data Manager is deleting expired copies. This option is disabled by default.

e. If applicable, select Retention Time Objective, and specify the number of Days, Months, Weeks, or Years.

NOTE: For compliance validation to pass, the value set for the Retention Time Objective must match the lowest retention value set for the backup levels of this policy's target objectives. For example, if you set the synthetic full backup Retain For to 30 days but set the full backup Retain For to 60 days, the Retention Time Objective must be set to the lower value, in this case, 30 days.

f. If applicable, select the Verify Retention Lock is enabled for all copies option. This option is disabled by default.
g. Click Finish, and go to step 9.

The SLA Compliance window appears with the new SLA.

6. If you selected Extended Retention, specify the following fields regarding the purpose of the new Extended Retention SLA:

a. The SLA Name.
b. If applicable, select Recovery Point Objective required (RPO), and then set the duration. The purpose of an RPO is business continuity planning, and it indicates the maximum targeted period in which data (transactions) might be lost from an IT service due to a major incident.

NOTE: By default, the RPO provides a grace period of 1 day for SLA compliance verification. For example, with a weekly extended retention schedule, PowerProtect Data Manager provides 8 days for the RPO to pass the SLA Compliance verification.

c. If applicable, select the Verify expired copies are deleted option.

Verify expired copies are deleted is a compliance check to see if PowerProtect Data Manager is deleting expired copies. This option is disabled by default.

d. If applicable, select Retention Time Objective, and specify the number of Days, Months, Weeks, or Years.
e. If applicable, select the Verify Retention Lock is enabled for all copies option. This option is disabled by default.
f. Click Finish, and go to step 9.

The SLA Compliance window appears with the newly added SLA.

7. If you selected Replication, specify the following fields regarding the purpose of the new Replication SLA:

a. The SLA Name.
b. If applicable, select the Compliance Window, and specify the Start Time and End Time.

This window specifies the times that are permissible and during which you can expect the specified activity to occur. Any specified activity that occurs outside of this start time and end time triggers an alert.

c. If applicable, select the Verify expired copies are deleted option.

Verify expired copies are deleted is a compliance check to see if PowerProtect Data Manager is deleting expired copies. This option is disabled by default.

d. If applicable, select Retention Time Objective, and specify the number of Days, Months, Weeks, or Years.

NOTE: For compliance validation to pass, the value set for the Retention Time Objective must match the lowest retention value set for the backup levels of this policy's target objectives.

e. If applicable, select the Verify Retention Lock is enabled for all copies option. This option is disabled by default.
f. Click Finish, and go to step 9.

The SLA Compliance window appears with the newly added SLA.

8. If you selected Cloud Tier type SLA, specify the following fields regarding the purpose of the new Cloud Tier SLA:

a. The SLA Name.
b. If applicable, select the Verify expired copies are deleted option.

This option is a compliance check to determine if PowerProtect Data Manager is deleting expired copies. This option is disabled by default.

c. If applicable, select Retention Time Objective and specify the number of Days, Months, Weeks, or Years.

NOTE: For compliance validation to pass, the value set for the Retention Time Objective must match the lowest retention value set for the backup levels of this policy's target objectives.


d. If applicable, select the Verify Retention Lock is enabled for all copies option. This option is disabled by default.
e. Click Finish.

9. If the SLA has not already been applied to a protection policy:

a. Go to Protection > Protection Policies.
b. Select the policy, and then click Edit.

10. In the Objectives row of the Summary window, click Edit.

11. Do one of the following, and then click Next:

Select the added Policy SLA from the Set Policy Level SLA list.
Create and add the SLA policy from the Set Policy Level SLA list.

The Summary window appears.

12. Click Finish. An informational message appears to confirm that PowerProtect Data Manager has saved the protection policy.

13. Click Go to Jobs to open the Jobs window to monitor the backup and compliance results, or click OK to exit.

NOTE: Compliance checks occur automatically every day at 2 a.m. Coordinated Universal Time (UTC). If any objectives are out of compliance, an alert is generated at 2 a.m. UTC. The Validate job in the System Jobs window indicates the results of the daily compliance check.

For a backup SLA with a required RPO setting that is less than 24 hours, PowerProtect Data Manager performs real-time compliance checks. If you selected Compliance Window for copy type and set the backup level to All, the real-time compliance check occurs every 15 minutes only within the compliance window. If the backup level is not All, or if a compliance window is not specified, the real-time compliance check occurs every 15 minutes around the clock.

NOTE: If the backup SLA has a required RPO setting of 24 hours or greater, compliance checks occur daily at 2 a.m. UTC. Real-time compliance checks do not occur for backup SLAs with an RPO setting of 24 hours or greater.

Real-time compliance-check behavior

If the interval of time between the most recent backup of the asset and the compliance check is greater than the RPO requirement, then an alert indicates the RPO of the asset is out of compliance. This alert is generated once within an RPO period. If the same backup copy is missed when the next compliance check occurs, no further alerts are generated.

If the interval of time between the most recent backup of the asset and the compliance check is less than the RPO requirement, the RPO of the asset is in compliance.

If multiple assets in a policy are out of compliance at the same time when a compliance check occurs, a single alert is generated and includes information for all assets that are out of compliance in the policy. In the Alerts window, the asset count next to the alert summary indicates the number of assets that are out of compliance in the policy.

14. In the Jobs window, click next to an entry to view details on the SLA Compliance result.


Restoring Kubernetes Namespaces and PVCs

Topics:

View backup copies available for restore
Restoring a Kubernetes namespace
Self-service restore of Kubernetes namespaces
Quick recovery for server DR

View backup copies available for restore

When a protection policy is successfully backed up, PowerProtect Data Manager displays details such as the name of the storage system containing the asset backup, the location, the creation and expiry dates, and the size. To view a backup summary:

Steps

1. From the PowerProtect Data Manager UI, select Infrastructure > Assets or Restore > Assets.

2. Select the tab that corresponds to the type of assets that you want to view. For example, for vCenter virtual machine assets, click Virtual Machine.

Assets that are associated with protection copies of this type are listed. By default, only assets with Available or Not Detected status display. You can also search for assets by name.

For virtual machines, you can also click the File Search button to search on specific criteria.

NOTE: In the Restore > Assets window, only tabs for asset types supported for recovery within PowerProtect Data Manager display. Supported asset types include the following:

Virtual Machines

File System

Storage Group

Kubernetes

3. To view more details, select an asset and click View copies.

The copy map consists of the root node and its child nodes. The root node in the left pane represents an asset, and information about copy locations appears in the right pane. The child nodes represent storage systems.

When you click a child node, the right pane displays the following information:

Storage system where the copy is stored.
The number of copies.
Details of each copy, including the time that each copy was created, the consistency level, the size of the copy, the backup type, the copy status, and the retention time.
The indexing status of each copy at the time of copy creation:

Success indicates that all files or disks are successfully indexed.
Partial Success indicates that only some disks or files are indexed and might return partial results upon file search.
Failed indicates that all files or disks are not indexed.
In Progress indicates that the indexing job is in progress.

If indexing has not been configured for a backup copy, or if global expiration has been configured and indexed disks or files have been deleted before the backup copy expiration date, the File Indexing column displays N/A.

The indexing status updates periodically, which enables you to view the latest status.
For virtual machine backups, a Disk Excluded column enables you to view any virtual disks (VMDKs) that were excluded from the backup.


Restoring a Kubernetes namespace

After namespace contents are backed up as part of a Kubernetes cluster protection policy in the PowerProtect Data Manager UI, you can perform restores from individual namespace backups.

All types of restore are performed from the Restore > Assets window. Restore options include the following:

Restore to Original: Restore to the original namespace on the original cluster.
Restore to New: Create a namespace, and restore to this location on the original cluster or a different cluster.
Restore to Existing: Restore to an existing namespace in the original cluster or a different cluster.

The Restore button, which launches the Restore wizard, is disabled until you select a namespace in the Restore > Assets window.

Select a namespace and then click Restore to launch the Restore wizard. Alternatively, you can select a namespace and then click View Copies.

In both instances, you must select a backup in the first page of the Restore wizard before proceeding to the Purpose page, which displays the available restore options.

NOTE: Manually replicating backups to DD storage will not create PCS records in PowerProtect Data Manager. It is recommended to perform these backups on the local tier, as a Cloud Tier backup will require a recall operation.

Restore considerations

Review the following considerations before performing a Kubernetes namespace or PVC restore.

Shut down objects using PVCs before restore to original or restore to existing PVC

When PVCs are being used by a job that runs for a long period of time (for example, a job that spawns pods to download or upload large content to or from a server), restores might not complete successfully. Performing a restore to the original PVC or a restore to an existing PVC requires that all objects using the PVCs be shut down.
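For example, kubectl can show which pods still mount a PVC and scale down the workloads that own them before you start the restore. This is a minimal sketch only; the PVC, workload, and namespace names are placeholders, and the owning workload in your environment might be a Deployment, StatefulSet, or Job rather than the examples shown:

kubectl describe pvc <pvc-name> -n <namespace>   # the Used By (or Mounted By) field lists the pods that mount the PVC
kubectl scale deployment <deployment-name> -n <namespace> --replicas=0   # stop the workload that owns those pods
kubectl scale statefulset <statefulset-name> -n <namespace> --replicas=0

After the restore completes, scale the workloads back up to their original replica counts.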

Restricted security policies

If the cluster applies a restricted security policy to the namespace being restored, then a security policy that has readOnlyRootFilesystem set to false and runAsUser set to RunAsAny must be used.

NOTE: If you are restoring to a non-VMware CSI volume or to any volume that does not have a ppdm-serviceaccount service account in the target namespace, the default service account will be used. If you do not want to bind the default service account to a security policy with readOnlyRootFilesystem set to false and runAsUser set to RunAsAny, then create a ppdm-serviceaccount service account for this purpose.
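As a minimal sketch, the service account can be created in the target namespace ahead of the restore; the namespace name is a placeholder, and how you bind the account to a suitable security policy depends on the policy mechanism your cluster uses (for example, a PodSecurityPolicy or an OpenShift SecurityContextConstraint):

kubectl create serviceaccount ppdm-serviceaccount -n <target-namespace>
kubectl get serviceaccount ppdm-serviceaccount -n <target-namespace>   # confirm that the account exists before the restore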

Custom resources excluded

For Kubernetes clusters on vSphere, the custom resources listed in https://github.com/vmware-tanzu/velero-plugin-for-vsphere/blob/main/docs/supervisor-notes.md are excluded during restore.

When restoring PVCs to the original or an existing namespace, PowerProtect Data Manager scales down the pods using the PVC being restored. If the application running in the namespace being restored is managed by an operator, the operator might interfere with the PowerProtect Data Manager scale down operation. In such scenarios, scale down the operators manually before performing the restore, and then scale back up after the restore is complete.
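For example, an operator Deployment can be scaled down with kubectl before the restore and scaled back up afterward. This is a sketch only; the deployment and namespace names are placeholders, and some operators are managed by a higher-level mechanism (such as Operator Lifecycle Manager) that may also need to be paused:

kubectl scale deployment <operator-deployment> -n <operator-namespace> --replicas=0   # stop the operator before the restore
kubectl scale deployment <operator-deployment> -n <operator-namespace> --replicas=1   # bring the operator back after the restore completes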


Storage class mapping

When performing a restore to a new namespace in the PowerProtect Data Manager UI, you can choose a different storage class for some of the PVCs being restored, depending on the provisioner. For example, you can restore a PVC from a Ceph CSI storage class to a PowerFlex CSI storage class. Changing the storage class can be useful in the following scenarios:

When restoring PVCs and namespaces from one cluster to another cluster that uses different storage.
When migrating data from one storage class to another, for example, when retiring the back-end storage.
When migrating data between on-premises storage and cloud storage.

When selecting a storage class, some non-CSI storage classes that are not supported might be displayed for selection, such as vSphere volumes.

NOTE: If the PVC being restored already exists in the target cluster, the storage class of the existing PVC is not changed upon restore. Also, restore from a vSphere CSI storage class to other CSI storage classes is not supported.
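To see which storage classes and provisioners the target cluster offers before you map PVCs, you can list them with kubectl; this is a quick supplementary check rather than part of the documented procedure, and the exact columns vary by Kubernetes version:

kubectl get storageclass   # the PROVISIONER column shows the driver (for example, csi.vsphere.vmware.com), which indicates whether a class is a suitable mapping target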

Restoring Kubernetes resources controlled by a webhook

When restoring Kubernetes resources that are controlled by a webhook, changes might be required to the webhook configuration to successfully perform the restore. For example, the application appconnect.ibm.com contains an admission controller webhook mutate.configuration.upsert.appconnect.ibm.com that can prevent Velero restores. In such scenarios, review the application documentation for more information about making changes to the webhook configuration.
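As a hedged example, you can list the admission webhooks that are registered on the target cluster and inspect the one that might block the restore; the configuration name below is a placeholder, and any change to a webhook should follow the application vendor's guidance:

kubectl get mutatingwebhookconfigurations
kubectl get validatingwebhookconfigurations
kubectl get mutatingwebhookconfiguration <webhook-configuration-name> -o yaml   # review the rules, failurePolicy, and namespaceSelector fields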

Restore to the original namespace

Perform the following steps to restore a Kubernetes protection policy backup to the original namespace within a Kubernetes cluster:

Steps

1. From the PowerProtect Data Manager UI, select Restore > Assets, and then select the Kubernetes tab.

The Restore window displays all protected and unprotected namespaces.

2. Select the checkbox next to a protected namespace and click Restore.

You can also use the filter in the Name column to search for a specific namespace, or use the Search field to search on specific criteria.

The Restore wizard appears.

3. On the Select Copy page:

a. Select the radio button next to a backup copy.
b. Click Next.

NOTE: If you click Next without choosing a copy, the most recent backup copy is used.

4. On the Cluster page, select Restore to Original Cluster, and then click Next.

5. On the Purpose page, select from one of the following options: Restore Namespace and Select PVCs to restore namespace resources and selected persistent volume claims (PVCs).

Optionally, you can also select Include cluster scoped resources to restore the cluster resources that were backed up automatically as part of the Kubernetes protection policy. This option is only available for PowerProtect Data Manager 19.6 and later Kubernetes protection policy backups.

NOTE: Selecting Include cluster scoped resources restores all instances of cluster roles, cluster role bindings, and custom resource definitions (CRDs) that were present at the time of the backup.

Restore Only PVCs to restore PVCs without namespace or cluster resources.

6. Click Next. The Restore Type page displays.

7. On the Restore Type page, select Restore to Original Namespace, and then click Next. The PVCs page appears, displaying the PVCs in the namespace that you plan to restore, along with the PVC configuration in the original target namespace.

8. On the PVCs page, if the configuration of the namespace you want to restore is different from the configuration in the target namespace:


Select Overwrite content of existing PVCs to overwrite existing PVCs in the target location with the PVCs being restored if the PVCs have the same name.

Select Skip restore of existing PVCs to restore selected PVCs without overwriting existing PVCs in the target location if they have the same name.

9. Optionally, if you want to retire the storage class on the original cluster:

a. Select Change storage class for PVCs to compatible storage class. The PVCs that are part of the restore display.

b. Select the checkbox next to the PVCs for which you want to change the storage class on the target cluster.

NOTE: The storage class is not modified for existing PVCs being overwritten.

10. Click Next.

If you selected Change storage class for PVCs to compatible storage class in the previous page, the Storage Class page appears with a list of supported storage classes on the target cluster.

If you did not select this option, the Summary page appears with a confirmation message indicating that namespace resources, including pods, services, secrets, and deployments, are not overwritten during the restore, and that all resources that do not exist in the namespace will be restored. Go to step 12.

11. On the Storage Class page:

a. Select the checkbox next to a PVC for which you want to change the storage class on the target cluster, or select multiple PVCs to change all selections to the same storage class.

NOTE: When changing the PVC storage class on the target Kubernetes cluster, if you select more than one PVC at a time on this page, only the storage classes that apply to all selected PVCs display. To view and select from all available storage classes, select one PVC at a time.

b. Click Target Storage Class to select from the available storage classes. The Select Storage Class dialog appears.

c. Select from one of the available storage classes, and then click Save to save your changes and return to the Storage Class page.

NOTE: PVCs that were backed up using the first class disk (FCD) data path can only be restored to FCD storage classes. CSI-based PVCs, however, can be restored to FCD or CSI storage classes.

d. If changing the storage class for one PVC at a time, repeat steps a through c.
e. Click Next.

PowerProtect Data Manager creates a mapping between the original storage class and the new storage class.

12. On the Summary page, click Restore. An informational dialog box appears indicating that the restore has started.

13. Go to the Jobs window to monitor the restore. A restore job appears with a progress bar and start time.

Restore to a new namespace

Perform the following steps to restore a Kubernetes protection policy backup to a new namespace within a Kubernetes cluster:

Steps

1. From the PowerProtect Data Manager UI, select Restore > Assets, and then select the Kubernetes tab.

The Restore window displays all protected and unprotected namespaces.

2. Select the checkbox next to a protected namespace and click Restore.

You can also use the filter in the Name column to search for a specific namespace, or use the Search field to search on specific criteria.

The Restore wizard appears.

3. On the Select Copy page:

a. Select the radio button next to a backup copy.
b. Click Next.

NOTE: If you click Next without choosing a copy, the most recent backup copy is used.

4. On the Cluster page, select one of the following options, and then click Next:


Restore to Original Cluster: Select this option to restore to a new namespace on the original cluster.
Restore to an Alternate Cluster: Select this option to restore to a new namespace on a different cluster, and then select the cluster from the list.

A restore to an alternate cluster can be useful when:

Migrating namespaces from a cluster on-premises to a cluster in the cloud.
Moving namespaces from a lower cluster version to a higher cluster version.
Moving from one environment to another (for example, from a test environment to a production environment).

NOTE: When restoring to an alternate cluster, ensure that this Kubernetes cluster has been added and discovered in the PowerProtect Data Manager UI Asset Sources window.

5. On the Purpose page:

a. Select Restore Namespace and Select PVCs to restore namespace resources and selected persistent volume claims (PVCs). Optionally, you can also select Include cluster scoped resources to restore the cluster roles, cluster role bindings, and custom resource definitions (CRDs) that were backed up automatically as part of the Kubernetes protection policy. This option is only available for PowerProtect Data Manager 19.6 and later Kubernetes protection policy backups.

b. Click Next.

The Restore Type page displays.

6. On the Restore Type page, select Restore to New Namespace, and then type a name for the new namespace. Click Next. The PVCs page appears, displaying the PVCs in the namespace that are available for restore.

7. On the PVCs page:

a. Clear the checkbox next to any PVCs that you do not want to restore.
b. Optionally, select Change storage class for PVCs to compatible storage class if you want to retire the storage class on the original cluster and use the storage class on the target cluster for the PVCs being restored.

8. Click Next.

If you selected Change storage class for PVCs to compatible storage class in the previous page, the Storage Class page appears with a list of the PVCs in the namespace that are available for restore, along with their current storage class.

If you did not select this option, the Summary page appears with a confirmation message indicating that namespace resources, including pods, services, secrets, and deployments, are not overwritten during the restore, and that all resources that do not exist in the namespace will be restored. Go to step 11.

9. On the Storage Class page:

a. Select the checkbox next to a PVC for which you want to change the storage class on the target cluster, or select multiple PVCs to change all selections to the same storage class.

NOTE: When changing the PVC storage class on the target Kubernetes cluster, if you select more than one PVC at a time on this page, only the storage classes that apply to all selected PVCs display. To view and select from all available storage classes, select one PVC at a time.

b. Click Target Storage Class to select from the available storage classes. The Select Storage Class dialog appears.

c. Select from one of the available storage classes, and then click Save to save your changes and return to the Storage Class page.

NOTE: PVCs that were backed up using the first class disk (FCD) data path can only be restored to FCD storage classes. CSI-based PVCs, however, can be restored to FCD or CSI storage classes.

d. If changing the storage class for one PVC at a time, repeat steps a through c.
e. Click Next.

PowerProtect Data Manager creates a mapping between the original storage class and the new storage class.

10. On the Summary page, click Restore. An informational dialog box appears indicating that the restore has started.

11. Go to the Jobs window to monitor the restore. A restore job appears with a progress bar and start time.

Next steps

To view the new namespace as an asset within the PowerProtect Data Manager UI, initiate a full discovery of the Kubernetes cluster from the Asset Sources window.


Restore to an existing namespace

Perform the following steps to restore a Kubernetes protection policy backup to an existing namespace within a Kubernetes cluster:

Steps

1. From the PowerProtect Data Manager UI, select Restore > Assets, and then select the Kubernetes tab.

The Restore window displays all protected and unprotected namespaces.

2. Select the checkbox next to a protected namespace and click Restore.

You can also use the filter in the Name column to search for a specific namespace, or use the Search field to search on specific criteria.

The Restore wizard appears.

3. On the Select Copy page:

a. Select the radio button next to a backup copy.
b. Click Next.

NOTE: If you click Next without choosing a copy, the most recent backup copy is used.

4. On the Cluster page, select one of the following options, and then click Next:

Restore to Original Cluster: Select this option to restore to an existing namespace on the original cluster.
Restore to an Alternate Cluster: Select this option to restore to an existing namespace on a different cluster, and then select the cluster from the list.

A restore to an alternate cluster can be useful when migrating namespaces from a cluster on-premises to a cluster in the cloud, when moving namespaces from a lower cluster version to a higher cluster version, or when moving from one environment to another (for example, from a test environment to a production environment).

NOTE: When restoring to an alternate cluster, ensure that this Kubernetes cluster has been added and discovered in the PowerProtect Data Manager UI Asset Sources window.

5. On the Purpose page, select from one of the following options: Restore Namespace and Select PVCs to restore namespace resources and selected persistent volume claims (PVCs).

Optionally, you can also select Include cluster scoped resources to restore the cluster roles, cluster role bindings, and custom resource definitions (CRDs) that were backed up automatically as part of the Kubernetes protection policy. This option is only available for PowerProtect Data Manager 19.6 and later Kubernetes protection policy backups.

Restore Only PVCs to restore PVCs without namespace resources.

6. Click Next. The Restore Type page displays.

7. On the Restore Type page, select Restore to Existing Namespace, and then select a namespace from the Select Namespace list. Click Next. The PVCs page appears, displaying the PVCs in the namespace that you plan to restore, along with the PVC configuration in the original target namespace.

8. On the PVCs page, if the configuration of the namespace you want to restore is different from the configuration in the target namespace:

Select Overwrite content of existing PVCs to restore selected PVCs and overwrite existing PVCs in the target location if they have the same name.
Select Skip restore of existing PVCs to restore selected PVCs without overwriting existing PVCs in the target location if they have the same name.

9. Optionally, if you want to retire the storage class on the original cluster:

a. Select Change storage class for PVCs to compatible storage class. The PVCs that are part of the restore display.

b. Select the checkbox next to the PVCs for which you want to change the storage class on the target cluster.

NOTE: The storage class will not be modified for existing PVCs being overwritten.

10. Click Next.

If you selected Change storage class for PVCs to compatible storage class, the Storage Class page appears with a list of supported storage classes on the target cluster.

If you did not select this option, the Summary page appears with a confirmation message indicating that namespace resources, including pods, services, secrets, and deployments, are not overwritten during the restore, and that all resources that do not exist in the namespace will be restored. Go to step 12.


11. On the Storage Class page:

a. Select the checkbox next to a PVC for which you want to change the storage class on the target cluster, or select multiple PVCs to change all selections to the same storage class.

NOTE: When changing the PVC storage class on the target Kubernetes cluster, if you select more than one PVC at a time on this page, only the storage classes that apply to all selected PVCs display. To view and select from all available storage classes, select one PVC at a time.

b. Click Target Storage Class to select from the available storage classes. The Select Storage Class dialog appears.

c. Select from one of the available storage classes, and then click Save to save your changes and return to the Storage Class page.

NOTE: PVCs that were backed up using the first class disk (FCD) data path can only be restored to FCD storage classes. CSI-based PVCs, however, can be restored to FCD or CSI storage classes.

d. If changing the storage class for one PVC at a time, repeat steps a through c.
e. Click Next.

PowerProtect Data Manager creates a mapping between the original storage class and the new storage class.

12. On the Summary page, click Restore. An informational dialog box appears indicating that the restore has started.

13. Go to the Jobs window to monitor the restore. A restore job appears with a progress bar and start time.

Self-service restore of Kubernetes namespaces

PowerProtect Data Manager supports the self-service restore of namespaces from within the Kubernetes cluster. The following procedure describes how to perform a self-service restore:

Prerequisites

NOTE: A Kubernetes administrator can list the 100 most recent PowerProtect Data Manager backups that have taken place in the cluster within the last 30 days. Additionally, the last backup of every namespace backed up within the last 30 days using PowerProtect Data Manager is listed. Any backups not listed have to be restored from the PowerProtect Data Manager UI.

Steps

1. Run the following command to list PowerProtect Data Manager backups performed within the last 30 days on the cluster:

kubectl get backupjob -n powerprotect

The command output lists all available backupJob custom resources of PowerProtect Data Manager, in the form <namespace>-<backup-timestamp>. For example:

admin@method:~> ~/k8s/kubectl get backupjob -n powerprotect
NAME                           AGE
testapp1-2019-11-16-14-15-47   3d9h
testapp1-2019-11-16-17-00-49   3d7h

2. Select the backup that you want to restore from the list, and then create a RestoreJob yaml file in the following format. A filled-in example is shown after these steps.

apiVersion: "powerprotect.dell.com/v1beta1"
kind: RestoreJob
metadata:
  name: <restore job name>
  namespace: powerprotect
spec:
  recoverType: RestoreToNew            # Default is RestoreToOriginal
  backupJobName: <backup job name>     # For example, testapp1-2019-11-16-14-15-47
  namespaces:
    - name: <original namespace name>
      alternateNamespace: <new namespace name>   # Name for the recovered namespace. Needed only for RestoreToNew.
                                                 # Should not be specified for RestoreToOriginal.
      persistentVolumeClaims:
        - name: "*"                    # Volumes to be recovered. By default, all volumes that were backed up are recovered.

3. Run the following command to apply the yaml:

kubectl apply -f <restorejob yaml file> -n powerprotect

4. Run the following command to track the restore progress:

kubectl get restorejob -n powerprotect -o yaml -w

5. Upon successful completion of the restore, run the following command to delete the RestoreJob:

kubectl delete restorejob <restore job name> -n powerprotect
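The following is a filled-in sketch of the RestoreJob format from step 2 that restores the backup testapp1-2019-11-16-14-15-47 into a new namespace; the RestoreJob name, original namespace, and new namespace values are illustrative only:

apiVersion: "powerprotect.dell.com/v1beta1"
kind: RestoreJob
metadata:
  name: testapp1-restore-001                     # illustrative RestoreJob name
  namespace: powerprotect
spec:
  recoverType: RestoreToNew
  backupJobName: testapp1-2019-11-16-14-15-47    # one of the BackupJob names returned in step 1
  namespaces:
    - name: testapp1                             # namespace that was backed up (assumed here)
      alternateNamespace: testapp1-restored      # new namespace to create for the restore
      persistentVolumeClaims:
        - name: "*"                              # recover all backed-up volumes

Save the file (for example, as restorejob.yaml), apply it as in step 3, and track the RestoreJob as in step 4.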

Quick recovery for server DR

After a disaster, the quick recovery feature enables you to restore assets and data that you replicated to a destination system at a remote site.

NOTE: Quick recovery does not re-create the original backup environment and source system which protected the restored assets. Thus, quick recovery is not a substitute for a server DR restore. To continue backing up the restored assets at the remote site, add the restored assets to a protection policy on the destination system.

Quick recovery is supported for the protected assets of the following PowerProtect Data Manager asset sources:

Virtual machines
Kubernetes
File system
NAS

Quick recovery sends metadata from the source system to the destination system, following the flow of backup copies. This metadata makes the replication destination aware of the copies and enables the recovery view. You can recover your workloads at the remote site before you have the opportunity to restore the source PowerProtect Data Manager system.

For example, the following figures show two sites that are named A and B, with independent PowerProtect Data Manager and DD systems for protection storage. Each site contains unique assets. Figure Separate datacenters, before disaster shows the initial configuration with both sites replicating copies to each other. Figure Separate datacenters, after disaster shows the aftermath, with site A down. The site A assets have been restored with quick recovery into the site B environment from the replicated copies.


Figure 2. Separate datacenters, before disaster


Figure 3. Separate datacenters, after disaster

PowerProtect Data Manager supports quick recovery for alternate topologies. You can configure quick recovery for one-to- many and many-to-one replication. For example, the following figure shows a source PowerProtect Data Manager replicating to a standby DD system with its own PowerProtect Data Manager, all in the same data center. If the source system fails, the quick recovery feature ensures that you can still restore from those replicated copies before you restore the source.


Figure 4. Standby DD system

The following topics explain the prerequisites, how to configure PowerProtect Data Manager to support quick recovery, and how to use the recovery view to restore assets.

Quick recovery prerequisites

Before you configure quick recovery, complete the following items:

Attach at least two protection storage systems to the source system: one for local protection storage and one for replication.
Ensure that the version of PowerProtect Data Manager is the same for both the source system and the remote (destination) system.
For agent quick recovery operations, ensure that the agent version on the destination client is 19.9 or later.
For Kubernetes quick recovery operations, ensure that the same Kubernetes cluster is not managed by more than one PowerProtect Data Manager instance.
Add and enable the asset source on the remote PowerProtect Data Manager instance.
Ensure that the replication protection storage is discovered in the remote (destination) system.
Register asset sources with the source system and configure protection policies to protect those assets.
Configure protection policies to replicate backup copies to the protection storage system at the remote site.
Back up the protected assets and confirm that backup data successfully replicates to the destination protection storage system.

Before you use the quick recovery remote view, add the destination system to the list of remote systems on the source.

Identifying a remote system

Remote systems added to PowerProtect Data Manager for quick recovery can be identified using either a fully qualified domain name (FQDN) or an Internet protocol (IP) address. If the incorrect identification is used, quick recovery fails with a certificate error.

If a remote system is already identified in the PowerProtect Data Manager certificate list, it must be added to PowerProtect Data Manager for quick recovery with the same identification.

If you always use either FQDNs or IP addresses for all remote systems, do the same for quick recovery.

If a certificate entry for the remote system exists, you must use the same identification when adding it for quick recovery. If you are unsure if a remote system you want to add for quick recovery is already in the PowerProtect Data Manager certificate list, perform the following steps:

Log in to the console as the root user.
Type keytool -list -keystore.

Review the output and look for a certificate entry that corresponds to either the FQDN or IP address of the remote system.
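If you prefer to check the certificate that the remote system itself presents, rather than the local keystore, a quick alternative sketch is to read it over the REST API port; the FQDN and port here are placeholders (8443 is the default REST API port used in the next procedure):

openssl s_client -connect <remote-system-fqdn-or-ip>:8443 -showcerts </dev/null | openssl x509 -noout -text
# Compare the Subject and Subject Alternative Name entries with the FQDN or IP address
# that you plan to use when adding the remote system.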

Add a remote system for quick recovery

Configure PowerProtect Data Manager to send metadata to another system to which you have replicated backups. Only the Administrator role can add remote systems.

Steps

1. Click , select Disaster Recovery, and then click Remote Systems.

The Remote Systems tab opens and displays a table of configured remote PowerProtect Data Manager systems.

2. Click Add. The Add Remote PowerProtect System window opens.

3. Complete the Name and FQDN/IP fields.

The Name field is a descriptive name to identify the remote system. To determine if you should enter the FQDN or IP address of the remote system, see Identifying a remote system.

4. In the Port field, type the port number for the REST API on the remote system.

The default port number for the REST API is 8443.

5. From the Credentials field, select an existing set of credentials from the list.

Alternatively, you can click Add Credentials from this list to add new credentials. Provide a descriptive name for the credentials, a username, and a password. Then, click Save to store the credentials.

6. Click Verify.

PowerProtect Data Manager contacts the remote system and obtains a security certificate for identity verification.

The Verify Certificate window opens to present the certificate details.

7. Review the certificate details and confirm each field against the expected value for the remote system. Then, click Accept to store the certificate. The Certificate field changes to VERIFIED and lists the server's identity.

8. Click Save. PowerProtect Data Manager returns to the Remote Systems tab of the Disaster Recovery window. The configuration change may take a moment to complete.

9. Click Cancel. The Disaster Recovery window closes.

10. Click , select Disaster Recovery, and then click Remote Systems.

The Remote Systems tab opens.


11. Verify that the table of remote systems contains the new PowerProtect Data Manager system.

12. Click Cancel. The Disaster Recovery window closes.

Next steps

On the remote system, enable the same asset sources that are enabled on this system. Enable an asset source provides more information. Enabling an asset source on the remote system makes replicated backups of that type visible and accessible.

On the remote system, open the recovery view and verify that backups are visible and accessible. Dell Technologies recommends that you perform a test restore.

Metadata synchronizes between source and destination systems every three hours. If backups are not visible, allow sufficient time for the first synchronization before troubleshooting.

Edit a remote system

You can use the PowerProtect Data Manager user interface to change the descriptive name of the remote system, as well as the REST API port number and credentials. You can also enable or disable synchronization with the remote system. Only the Administrator role can edit remote systems.

Steps

1. Click , select Disaster Recovery, and then click Remote Systems.

The Remote Systems tab opens and displays a table of configured remote PowerProtect Data Manager systems.

2. Locate the row that corresponds to the appropriate remote system, and then select the checkbox for that row. The PowerProtect Data Manager enables the Edit button.

3. Click Edit. The Edit Remote PowerProtect System window opens.

4. Modify the appropriate parameters, and then click Save.

To enable or disable synchronization, select or deselect Enable sync. If you change the port number, you may need to re-verify the remote system security certificate.

PowerProtect Data Manager returns to the Remote Systems tab of the Disaster Recovery window. The configuration change may take a moment to complete.

5. Click Cancel. The Disaster Recovery window closes.

Quick recovery remote view

Use the remote view to work with replicated copies on the destination system after the source is no longer available. For example, you can restore critical assets before you are able to restore the source system.

On the destination system, log in as a user with the Administrator role. The remote server contains an additional Remote Systems icon in the banner.

When you click Remote Systems, PowerProtect Data Manager presents a drop-down that contains the names of the local system and any connected systems. Each entry has the identifying suffix (Local) or (Remote).

Select the source system from which you have replicated backups. PowerProtect Data Manager opens the remote view and presents a subset of the regular UI navigation tools:

- Restore > Assets: Shows replicated copies.
- Running Sessions: Allows you to manage and monitor Instant Access sessions.
- Alerts: Shows alert information in a table, including audit logs.
- Jobs: Shows the status of any running restore jobs.

Each tool has the same function as for the local system. However, since the remote view is intended only for restore operations, the scope is limited to the replicated copies from the selected source system. While in remote view, a banner identifies the selected system.


NOTE: For virtual machines, the quick recovery restore workflow does not include the Restore VM Tags option to restore vCenter tags and categories from the backup.

Use Restore > Assets to locate copies. The instructions for restoring each type of asset provide more information about restore operations.

When the recovery is complete, click Remote Systems and select the name of the local system to exit remote view.


Kubernetes Cluster Best Practices and Troubleshooting

Topics:

- Configuration changes required for use of optimized data path and first class disks
- Recommendations and considerations when using a Kubernetes cluster
- Support Network File System (NFS) root squashing
- Update the Velero or OADP version used by PowerProtect Data Manager
- VM Direct protection engine overview
- Troubleshooting network setup issues
- Troubleshooting Kubernetes cluster issues

Configuration changes required for use of optimized data path and first class disks

When the Kubernetes cluster is running on vSphere and using vSphere CNS storage, backup and recovery operations use the optimized data path, where persistent volumes on vSphere-managed storage are backed by VMDKs called improved virtual disks, or First Class Disks (FCDs). These FCDs are created on the back end and assigned a globally unique UUID whenever persistent volumes are dynamically provisioned by vSphere CSI in Kubernetes. Since FCDs are not associated with any particular virtual machine, they can be managed independently.

PowerProtect Data Manager detects whether a persistent volume is backed by an FCD by checking whether the storageclass of the persistent volume has the provisioner csi.vsphere.vmware.com. When this occurs, PowerProtect Data Manager switches to using the optimized data path. The optimized data path differs from CSI management in two primary ways:

- FCD uses the VMware VADP API to take the snapshot instead of using the CSI driver.
- FCD supports both incremental and full backups, making use of changed block tracking (CBT).

The following configuration changes are required prior to running the Kubernetes protection policy in order to make use of optimized data path:

- FCD CSI support requires a minimum version of vCenter 6.7 U3.
- Enable Changed Block Tracking (CBT) on the Kubernetes worker node virtual machines before the pods (application) start using dynamically provisioned PVCs.

  To enable CBT on the nodes, run the command source /opt/emc/vproxy/unit/vproxy.env on the PowerProtect Data Manager host, and then run the following command for each node:

  /opt/emc/vproxy/bin/vmconfig -u <vCenter user with administrator privileges> -p <user password> -v <vCenter host FQDN or IP> -l ip -k <Kubernetes node IP> -c enable-cbt

  If your Kubernetes cluster nodes do not have VMware Tools installed, you might not be able to use the IP address as one of the inputs to the tool. In this case, use the VM MoRef as the identifier of the VMs:

  /opt/emc/vproxy/bin/vmconfig -u <vCenter user with administrator privileges> -p <user password> -v <vCenter host FQDN or IP> -l moref -k <Kubernetes VM node MoRef> -c enable-cbt

- PowerProtect Data Manager enables CBT on the PVCs by default. If you need to disable the autoenable setting for CBT, use the API to send a POST request using the configurations attribute before starting any backups for namespaces in this cluster. This process is described in the section "Disable the autoenableCBT setting" under Back up and restore Kubernetes in the PowerProtect Data Manager Public REST API documentation.

The PowerProtect Data Manager proxy pods use NBD protocol to read the contents of the FCD-based persistent volumes in order to back up these volumes. Ensure that the NBD default port 902 is open on all of the Kubernetes nodes, and that the worker nodes are able to reach the vCenter Server.
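One way to spot-check this connectivity from a worker node is a simple TCP probe. The following is a minimal sketch that assumes the nc (netcat) utility is available on the node; the host name is a placeholder:

nc -zv <vCenter Server FQDN or IP> 902

A successful connection indicates that outbound traffic on port 902 is permitted from that node.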


You can verify that a Kubernetes protection policy backup or restore is using optimized data path by viewing the details for the operation in the Jobs window of the PowerProtect Data Manager UI. Additionally, the Recent Tasks pane of the vSphere Client displays the message Create a virtual disk object when a new PVC is added.

Recommendations and considerations when using a Kubernetes cluster

Review the following information that is related to the deployment, configuration, and use of the Kubernetes cluster as an asset source in PowerProtect Data Manager:

Add line to custom-ports file when not using port 443 or 6443 for Kubernetes API server

If a Kubernetes API server listens on a port other than 443 or 6443, an update is required to the PowerProtect Data Manager firewall to allow outgoing communication on the port being used. Before you add the Kubernetes cluster as an asset source, perform the following steps to ensure that the port is open:

1. Log in to PowerProtect Data Manager, and change the user to root.

2. Add a line to the file /etc/sysconfig/scripts/custom-ports that includes the port number that you want to open.

3. Run the command service SuSEfirewall2 restart.

This procedure should be performed after a PowerProtect Data Manager update, restart, or server disaster recovery.
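As a minimal sketch of steps 2 and 3, assuming the custom-ports file accepts one port number per line and using 9443 purely as an illustrative API server port:

echo "9443" >> /etc/sysconfig/scripts/custom-ports
service SuSEfirewall2 restart

Match the format of any existing entries in the file before adding the new line.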

Log locations for Kubernetes asset backup and restore operations and pod networking

All session logs for Kubernetes asset protection operations are pulled into the /logs/external-components/k8s folder on the PowerProtect Data Manager host.

PVC parallel backup and restore performance considerations

To limit the load on the system, PowerProtect Data Manager supports only five parallel namespace backups and two parallel namespace restores per Kubernetes cluster. PVCs within a namespace are backed up and restored sequentially.

You can queue up to 100 namespace backups across protection policies in PowerProtect Data Manager.

Overhead of PowerProtect Data Manager components on Kubernetes cluster

At any time during backup, the typical footprint of PowerProtect Data Manager components (Velero, PowerProtect Controller, cProxy) is less than 2 GB of RAM and four CPU cores. This usage is not sustained and is visible only during the backup window.

The following resource limits are defined on the PowerProtect pods, which are part of the PowerProtect Data Manager stack:

- Velero maximum resource usage: 1 CPU core, 256 MiB memory
- PowerProtect Controller maximum resource usage: 1 CPU core, 256 MiB memory
- PowerProtect cProxy pods (maximum of 5 per cluster): Each cProxy pod typically consumes less than 300 MB memory and less than 0.8 CPU cores. These pods are created and terminated within the backup job.

Only Persistent Volumes with VolumeMode Filesystem supported

Backup and recovery of Kubernetes cluster assets in PowerProtect Data Manager is only supported for Persistent Volumes with the VolumeMode Filesystem.


Objects using PVC scaled down before starting the restore

The following activities occur before a PVC restore to the original namespace or an existing namespace when PowerProtect Data Manager detects that the PVC is in use by a Pod, Deployment, StatefulSet, DaemonSet, ReplicaSet, or Replication Controller:

- PowerProtect Data Manager scales down any objects using the PVC.
- PowerProtect Data Manager deletes the DaemonSet and any Pods using PVCs.

Upon completion of the PVC restore, any objects that were scaled down are scaled back up, and any objects that were deleted are re-created. Ensure that you shut down any Kubernetes jobs that actively use the PVC before running a restore.

NOTE: If PowerProtect Data Manager is unable to reset the configuration changes due to a controller crash, it is recommended to delete the Pod, Deployment, StatefulSet, DaemonSet, ReplicaSet, or Replication Controller from the namespace, and then perform a Restore to Original again on the same namespace.

Restore to a different namespace that already exists can result in mismatch between UID of pod and UID persistent volume files

A PowerProtect Data Manager restore of files in persistent volumes restores the UID and GID along with the contents. When performing a restore to a different namespace that already exists, and the pod consuming the persistent volume is running with restricted Security Context Constraints (SCC) on OpenShift, the UID assigned to the pod upon restore might not match the UID of the files in the persistent volumes. This UID mismatch might result in a pod startup failure.

For namespaces with pods running with restricted SCC, Dell Technologies recommends one of the following restore options:

- Restore to a new namespace, where PowerProtect Data Manager restores the namespace resource as well.
- Restore to the original namespace, if this namespace still exists.

Support Network File System (NFS) root squashing

Using root squashing on a Network File System (NFS) volume prevents remote root users from having root access to the volume. Without additional configuration to support NFS root squashing, these volumes cannot be backed up or restored.

Prerequisites

All files and folders on the NFS volume must be owned by the same owner and group. If a file or folder uses a different user identifier (UID) or group identifier (GID) than the rest, then backups will fail.

Steps

1. Create a storage class with root-client access enabled. For example, set the property RootClientEnabled when creating an Isilon storage class.

2. Create a ConfigMap named ppdm-root-access-storage-class-mapping in the PowerProtect namespace.

3. In the data section of the ConfigMap, add a storage-class mapping in the following format:

<name of storage class with root-client access disabled>: <name of storage class with root-client access enabled>

For example, to map isilon-root-squashing-sc to isilon-allow-backups-sc, type:

isilon-root-squashing-sc: isilon-allow-backups-sc
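The following is a minimal sketch of the resulting ConfigMap, using the example storage class names above and assuming that the PowerProtect namespace is named powerprotect, as used elsewhere in this guide. It can be applied with kubectl apply -f <file>:

apiVersion: v1
kind: ConfigMap
metadata:
  name: ppdm-root-access-storage-class-mapping
  namespace: powerprotect
data:
  # map the root-squashing storage class to one with root-client access enabled
  isilon-root-squashing-sc: isilon-allow-backups-sc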

Update the Velero or OADP version used by PowerProtect Data Manager

When PowerProtect Data Manager is configured to protect Kubernetes clusters, Velero is used for backing up Kubernetes resources. In an OpenShift environment, PowerProtect Data Manager uses OADP to deploy Velero. Each PowerProtect Data Manager release uses a specific version of Velero by default, which is documented in the file /usr/local/brs/lib/cndm/config/k8s-image-versions.info. If you must update the Velero or OADP version that PowerProtect Data Manager uses in order to pick up the latest security fixes, perform the following procedure.

Prerequisites

NOTE: The Velero version should be updated to an incremental patch build only. A minor or major version of Velero or OADP that is later than the default version that PowerProtect Data Manager uses might not be compatible.

Steps

1. Log in to PowerProtect Data Manager as an admin user.

2. Open the file /usr/local/brs/lib/cndm/config/k8s-dependency-versions-app.properties.

3. In a non-OpenShift environment, add the following line to this file to update the Velero version, and then save the file:

k8s.velero.version=vx.y.z

   Where vx.y.z is the Velero incremental patch version.

4. In an OpenShift environment, add the following line to this file to update the OADP version, and then save the file:

k8s.oadp.version=x.y.z

   Where x.y.z is the OADP incremental patch version.

5. Restart the CNDM service by running the command cndm restart, and then wait for a few seconds for the service to restart.

6. From the PowerProtect Data Manager UI, run a manual discovery of the Kubernetes cluster. When the discovery completes successfully, the configuration that is stored in the configuration map ppdm-controller-config in the powerprotect namespace on the Kubernetes cluster is updated.

7. Run the following commands to delete the powerprotect-controller pod on the Kubernetes cluster. This action forces a restart, during which the changes take effect. This step should be performed when there are no backup or restore operations in progress.

kubectl get pod -n powerprotect
kubectl delete pod <powerprotect controller pod name> -n powerprotect

8. Repeat steps six and seven for each Kubernetes cluster that is protected by PowerProtect Data Manager.

VM Direct protection engine overview

The VM Direct protection engine provides two functions within PowerProtect Data Manager:

- A virtual machine data protection solution: Deploy a VM Direct Engine in the vSphere environment to perform virtual machine snapshot backups, which improves performance and reduces network bandwidth utilization by using the protection storage source-side deduplication.
- A Tanzu Kubernetes guest cluster data protection solution: Deploy a VM Direct Engine in the vSphere environment for protection of vSphere CSI-based persistent volumes, which requires a VM proxy instead of the cProxy for the management and transfer of backup data.

The VM Direct protection engine is enabled after you add a vCenter server in the Asset Sources window, and allows you to collect VMware entity information from the vCenter server and save VMware virtual machines and Tanzu Kubernetes guest cluster namespaces and PVCs as PowerProtect Data Manager resources for the purposes of backup and recovery.

To view statistics for the VM Direct Engine, manage and monitor VM Direct appliances, and add an external VM Direct appliance to facilitate data movement, select Infrastructure > Protection Engines. Add a VM Direct Engine provides more information.

NOTE: In the VM Direct Engines pane, VMs Protected refers to the number of assets protected by PowerProtect Data Manager. This count does not indicate that all the virtual machines have been protected successfully. To determine the success or failure of asset protection, use the Jobs window.

When you add an external VM Direct appliance, the VM Direct Engines pane provides the following information:

The VM Direct appliance IP address, name, gateway, DNS, network, and build version. This information is useful for troubleshooting network issues.

The vCenter and ESXi server hostnames.


The VM Direct appliance status (green check mark if the VM Direct appliance is ready, red x if the appliance is not fully operational). The status includes a short explanation to help you troubleshoot the VM Direct Engine if the VM Direct appliance is not in a fully operational state.

The transport mode that you selected when adding the VM Direct appliance (Hot Add, Network Block Device, or the default setting Hot Add, Failback to Network Block Device).

Transport mode considerations

Review the following information for recommendations and best practices when selecting a transport mode to use for virtual machine data protection operations and Tanzu Kubernetes guest cluster protection in PowerProtect Data Manager.

Hot Add transport mode recommended for large workloads

For workloads where full backups of large sized virtual machines or backups of virtual machines with a high data change rate are being performed, Hot Add transport mode provides improved performance over other modes. With Hot Add transport mode, a VM Direct Engine must be deployed on the same ESXi host or cluster that hosts the production virtual machines. During data protection operations, a VM Direct Engine capable of performing Hot Add backups is recommended. The following selection criteria is used during data protection operations:

If a VM Direct Engine is configured in Hot Add only mode, then this engine is used to perform Hot Add virtual machine backups. If one or more virtual machines are busy, then the backup is queued until the virtual machine is available.

If a virtual machine is in a cluster where the VM Direct Engine is not configured in Hot Add mode, or the VM Direct Engine with Hot Add mode configured is disabled or in a failed state, then PowerProtect Data Manager selects a VM Direct Engine within the cluster that can perform data protection operations in NBD mode. Any VM Direct Engine with Hot Add mode configured that is not in the cluster is not used.

Any VM Direct Engine that is configured in NBD only mode, or in Hot Add mode with failback to NBD, is used to perform NBD virtual machine backups. If every VM Direct Engine that is configured in NBD mode is busy, then the backup is queued until one of these engines is available.

If there is no VM Direct Engine that is configured in NBD mode, or the VM Direct Engine with NBD mode configured is disabled or in a failed state, then the PowerProtect Data Manager embedded VM Direct engine is used to perform the NBD backup.

Other transport mode recommendations

Review the following additional transport mode recommendations:

Use Hot Add mode for faster backups and restores and less exposure to network routing, firewall, and SSL certificate issues. To support Hot Add mode, deploy the VM Direct Engine on an ESXi host that has a path to the storage that holds the target virtual disks for backup.

NOTE: Hot Add mode requires VMware hardware version 7 or later. Ensure all virtual machines that you want to back up are using Virtual Machine hardware version 7 or later.

In order for backup and recovery operations to use Hot Add mode on a VMware Virtual Volume (vVol) datastore, the VM Direct proxy should reside on the same vVol as the virtual machine.

If you have vFlash-enabled disks and are using Hot Add transport mode, ensure that you configure the vFlash resource for the VM Direct host with sufficient resources (greater than or equal to the virtual machine resources), or migrate the VM Direct Engine to a host with vFlash already configured. Otherwise, backup of any vFlash-enabled disks fails with the error VDDK Error: 13: You do not have access rights to this file and the error on the vCenter server The available virtual flash resource '0' MB ('0' bytes) is not sufficient for the requested operation.

For sites that contain many virtual machines that do not support Hot Add requirements, Network Block Device (NBD) transport mode is used. This mode can cause congestion on the ESXi host management network. Plan your backup network carefully for large-scale NBD installs; for example, consider configuring one of the following options:
- Setting up management network redundancy.
- Setting up a backup network to ESXi for NBD.
- Setting up storage heartbeats.

See https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/vmw-vsphere-high-availability-whitepaper.pdf for more information.

If performing NBD backups, ensure that your network has a bandwidth of 10 Gbps or higher.


Requirements for an external VM Direct Engine

When adding an external VM Direct Engine, note the following system requirements:

- CPU: 4 * 2 GHz (4 virtual sockets, 1 core for each socket)
- Memory: 8 GB RAM
- Disks: 2 disks (59 GB and 98 GB)
- Internet Protocol: IPv4 only
- SCSI controller: maximum of 4
- NIC: One vmxnet3 NIC with one port

Additional VM Direct actions

For additional VM Direct actions, such as enabling, disabling, redeploying, or deleting the VM Direct Engine, or changing the network configuration, use the Protection Engines window in the PowerProtect Data Manager UI. To throttle the capacity of a VM Direct Engine, use a command-line tool on PowerProtect Data Manager.

To get external VM Direct Engine credentials, see the procedure in the PowerProtect Data Manager Security Configuration Guide.

Disable a VM Direct Engine

You can disable an added VM Direct Engine that you do not currently require for virtual machine backup and recovery. To disable a VM Direct Engine:

1. On the Protection Engines window, select the VM Direct Engine that you want to disable from the table in the VM Direct Engines pane.

2. In the far right of the VM Direct Engines pane, click the three vertical dots.
3. From the menu, select Disable.

NOTE: A disabled VM Direct Engine is not used for any new protection activities, and is not automatically updated during a PowerProtect Data Manager update.

Delete a VM Direct Engine

When you disable a VM Direct Engine, the Delete button is enabled. If you no longer require the VM Direct Engine, perform the following steps to delete the engine:

1. On the Protection Engines window, select the VM Direct Engine that you want to remove from the table in the VM Direct Engines pane.

2. In the far right of the VM Direct Engines pane, click the three vertical dots.
3. From the menu, select Disable.
4. Click Delete.

Enable a disabled VM Direct Engine

When you want to make a disabled VM Direct Engine available again for running new protection activities, perform the following steps to re-enable the VM Direct Engine.

1. On the Protection Engines window, select the VM Direct Engine that you want to re-enable from the table in the VM Direct Engines pane.

2. In the far right of the VM Direct Engines pane, click the three vertical dots.
3. From the menu, select Enable.

NOTE: If a PowerProtect Data Manager version update occurred while the VM Direct Engine was disabled, a manual redeployment of the VM Direct Engine is also required.


Redeploy a VM Direct Engine

If a PowerProtect Data Manager software update occurred while a VM Direct Engine was disabled, or an automatic update of the VM Direct Engine did not occur due to network inaccessibility or an environment error, the Redeploy option enables you to manually update the VM Direct Engine to the version currently in use with the PowerProtect Data Manager software. Perform the following steps to manually redeploy the VM Direct Engine.

1. On the Protection Engines window, select the VM Direct Engine that you want to redeploy from the table in the VM Direct Engines pane.

2. In the far right of the VM Direct Engines pane, click the three vertical dots.
3. If the VM Direct Engine is not yet enabled, select Enable from the menu.
4. When the VM Direct Engine is enabled, select Redeploy from the menu.

The VM Direct Engine is redeployed with its previous configuration details.

Update the DNS or gateway during redeployment

Optionally, if you want to update the vProxy DNS and/or gateway during the VM Direct Engine redeployment, you can use one of the following commands:

- To update both the gateway and DNS, run: ./vproxymgmt redeploy -vproxy_id <VM Direct Engine ID> -updateDns <DNS IPv4 address> -updateGateway <Gateway IPv4 address>
- To update the gateway only, run: ./vproxymgmt redeploy -vproxy_id <VM Direct Engine ID> -updateGateway <Gateway IPv4 address>
- To update DNS only, run: ./vproxymgmt redeploy -vproxy_id <VM Direct Engine ID> -updateDns <DNS IPv4 address>

Edit the network configuration for a VM Direct Engine

The PowerProtect Data Manager Administration and User Guide provides more information about virtual networks.

For example, if VM Direct Engine deployment failed because of a virtual network configuration problem, you can update the configuration to add additional IP addresses to the static IP pool. You can also add the VM Direct Engine to a virtual network in the same VGT port group.

Perform the following steps to change the network configuration:

1. On the Protection Engines window, select the VM Direct Engine from the table in the VM Direct Engines pane.
2. Click Edit.

3. Virtual networks with a warning symbol beside the network name require attention and review. For example, if you changed the network configuration, the configured traffic types may not support VM Direct Engines. Clear any interfaces which no longer apply to the VM Direct Engine.

Select the row that corresponds to the virtual network with the configuration error, or the virtual network to which you want to add the VM Direct Engine.

4. Type an available static IP address or IP address range in the Additional IP Addresses column.
5. Click Next.
6. On the Summary page, verify the network settings, and then click Next.

To change other network configuration settings, delete the VM Direct Engine and then deploy a new VM Direct Engine.

Throttle the capacity of a VM Direct Engine

In performance-limited environments, you can use a command-line tool on PowerProtect Data Manager to reduce the maximum capacity of a VM Direct Engine.

The default value for VM Configured Capacity Units of an external VM Direct Engine is 100. The minimum value is 4. A VM Direct Engine can back up one disk with 4 units of capacity at a time.

Perform these steps to throttle the capacity of a VM Direct Engine:

1. Connect to the PowerProtect Data Manager console and change to the root user.
2. Type: source /opt/emc/vmdirect/unit/vmdirect.env


3. To view the list of every VM Direct Engine and its ID, type: /opt/emc/vmdirect/bin/vproxymgmt get -list
4. To change the capacity of a VM Direct Engine, type (once per engine): /opt/emc/vmdirect/bin/vproxymgmt modify -vproxy_id [VProxy ID] -capacity [percentage]
5. To verify the change in VM Configured Capacity Units, type: /opt/emc/vmdirect/bin/vproxymgmt get -list
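For example, a hedged sketch of steps 4 and 5 using an illustrative VProxy ID and a target capacity of 50 (both values are placeholders, not defaults):

/opt/emc/vmdirect/bin/vproxymgmt modify -vproxy_id vproxy-1 -capacity 50
/opt/emc/vmdirect/bin/vproxymgmt get -list

The second command confirms the new VM Configured Capacity Units value for the engine.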

Troubleshooting network setup issues

vCenter registration and proxy deployment fail if the PowerProtect Data Manager server is deployed in the same private network as the internal Docker network.

PowerProtect Data Manager uses an internal private Docker network. If the PowerProtect Data Manager server is deployed in the same private network as the internal Docker network, or if some data sources have already been deployed within the private network, PowerProtect Data Manager fails to protect the data sources.

To resolve this issue, deploy the PowerProtect Data Manager server and other data sources in a different network. If you cannot modify the deployed network, run a script tool within PowerProtect Data Manager to switch the private Docker network to a different network.

To switch the private Docker network to a different network:

1. Connect to the PowerProtect Data Manager console and change to the root user.
2. Modify the Docker network by running the following command:

   /usr/local/brs/puppet/scripts/docker_network_switch.sh <subnet> <gateway>

   Where:
   - subnet describes the new network, in the format 172.25.0.0/24.
   - gateway is the gateway for the private network. For example: 172.25.0.1

Ensure that you specify a subnet and gateway that is not in use.
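For example, using the subnet and gateway values shown above purely as an illustration:

/usr/local/brs/puppet/scripts/docker_network_switch.sh 172.25.0.0/24 172.25.0.1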

Troubleshooting Kubernetes cluster issues

Review the following information that is related to troubleshooting issues with the Kubernetes cluster in PowerProtect Data Manager:

Only native Kubernetes resources are supported for protection in PowerProtect Data Manager

PowerProtect Data Manager supports protection of native Kubernetes resources only. If a namespace contains any Kubernetes distribution-specific resource or any other kind of custom resource, backup and recovery operations might fail. Therefore, ensure that you do not include such namespaces in PowerProtect Data Manager Kubernetes protection policies.

Application pods might not appear in running state after restore when restoring to a new namespace with a different name

When performing a Kubernetes restore to a new namespace that has a different name than the namespace the backup copy was created from, the application pods might not appear in a running state after the restore in some scenarios. For example, this can occur if the application has environment variables or other configuration elements that are tied to the namespace from which the backup copy was created, such as variables that point to services using FQDNs in the form my-svc.my-namespace.svc.cluster-domain.example or headless services using FQDNs in the form pod-name.my-headless-svc.my-namespace.svc.cluster-domain.example.

If this issue occurs, manually edit the deployments after the restore.
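As a minimal sketch with placeholder names, open each affected deployment in the restored namespace for editing and update any environment variables or FQDNs that still reference the original namespace name:

kubectl edit deployment <deployment name> -n <new namespace name>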


Backups of persistent volumes on FCD fail when VMware CSI driver and storageclass are installed after Kubernetes cluster asset source is added

The PowerProtect controller configures itself and Velero for the protection of persistent volumes on first class disks (FCDs) if the controller detects a storage class with the VMware CSI provisioner csi.vsphere.vmware.com. If the VMware CSI driver and storageclass are installed after the Kubernetes cluster is added as an asset source to PowerProtect Data Manager, FCD backups fail with an error indicating failed to create backup job.

To resolve this issue, restart the PowerProtect controller by running the following commands:

kubectl get pod -n powerprotect
kubectl delete pod <pod name obtained above> -n powerprotect

ApplicationTemplate considerations when performing Kubernetes cluster disaster recovery

When performing a Kubernetes cluster disaster recovery, if any changes were made to ApplicationTemplate, the Kubernetes administrator will need to recreate the ApplicationTemplate in the PowerProtect Data Manager namespace.

The section Disaster recovery considerations provides more information.

Pods in pending state due to missing PVC cause namespace backups to fail

If a Kubernetes namespace contains a pod that is in pending state because the pod references a PVC that is not present, the backup of that namespace will fail.

To resolve this issue, perform one of the following:

- Create the missing PVC.
- Delete the pod if it is no longer required.

Troubleshooting Velero or Controller pod failures

The PowerProtect Data Manager Velero or Controller pod might fail to start, for example, due to a deployment failure or a bad image URI. If one of these pods fails to start, an alert appears indicating that the pod is not running on the cluster.

If the PowerProtect Data Manager Controller pod is not running, run the following command:

kubectl describe pod -n powerprotect

If the PowerProtect Data Manager Velero pod is not running, run the following command:

kubectl describe pod -n velero-ppdm

Errors or events in the command output enable you to determine why the failure occurred.

Verify CSI driver functioning properly if "Failed to create Proxy Pods" error appears during restore

If the restore fails with the error Failed to create Proxy Pods. Creating Pod exceeds safeguard limit of 10 minutes, verify that the CSI driver is functioning properly and is able to dynamically provision volumes.


Add alternate storage class mapping if mismatch between original cluster and target cluster for restore (API restore only)

When restoring to a different cluster using the API, the storage class of the target cluster might not have the same name and underlying storage provider as the original cluster of the namespace backup. If there is a mismatch, then the restore fails.

To add an alternate storage class mapping for restores performed via the API, complete the following steps:

1. Create a ConfigMap ppdm-restore-storage-class-mapping in the PowerProtect namespace on the target cluster for the restore.

2. In the data section of the ConfigMap, add a storage class mapping in the following format:

<old storage class>: <new storage class>

   For example, if all PVCs that were backed up using the storage class csi-hostpath-sc will be restored to a cluster using the storage class xio-csi-sc, type:

csi-hostpath-sc: xio-csi-sc
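As a minimal sketch, assuming the PowerProtect namespace on the target cluster is named powerprotect, the same mapping can be created directly from the command line:

kubectl create configmap ppdm-restore-storage-class-mapping -n powerprotect --from-literal=csi-hostpath-sc=xio-csi-sc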

NOTE: Restore of a First Class Disk (FCD) backup to a cluster with a storage class that is not FCD-based is not supported.

Add alternate storage class mapping for temporary PVCs created from snapshot during non-FCD backup

In some scenarios, the storage class of the PVC being backed up might not be the most appropriate storage class for the temporary PVC created from the snapshot during the backup. For example, when creating a volume from a snapshot, a dedicated storage class that does not allocate space for the temporary PVC might be preferred. This can be useful, for example, for backing up NFS PVs by using a storage class with root-client access enabled.

To add an alternate storage class mapping for temporary PVCs, perform the following:

1. Create a ConfigMap ppdm-snapshot-storage-class-mapping in the PowerProtect namespace.

2. In the data section of the ConfigMap, add a storage class mapping in the following format:

<storage class of PVC being backed up>: <storage class to use for snapshot PVC>

   For example, in the mapping xio-csi-sc: xio-csi-sc-snapshot-promoter, if the PVC being backed up uses the storage class xio-csi-sc, the snapshot PVC will be created using the storage class xio-csi-sc-snapshot-promoter.

NOTE: This mapping applies only to non-FCD based backups because only snapshot PVCs are created in this data path.

Specify volumesnapshotclass for v1 CSI snapshots

By default, PowerProtect Data Manager picks up the default volumesnapshotclass for a storageclass provisioner when creating the CSI snapshot. If the cluster has multiple volumesnapshotclass resources for the same storageclass provisioner, perform the following steps to create a mapping for the volumesnapshotclass to be used for the provided storageclass. This procedure is supported for v1 CSI snapshots only.

Steps

1. Create a ConfigMap ppdm-snapshot-class-mapping in the powerprotect namespace.

2. In the data section of the ConfigMap, add a snapshotclass mapping in the format storage class name: volumesnapshotclass name. For example:

mystorageclass: myvolumesnapshotclass


Enabling protection when the vSphere CSI driver is installed as a process

PowerProtect Data Manager leverages the vSphere Velero plug-in to protect VMware Cloud Native Storage volumes that use VADP snapshots. To take these snapshots, PowerProtect Data Manager and the Velero plug-in require the location and credentials of the vCenter Server. This information is provided in the VMware CSI driver secret vsphere-config-secret in the Kubernetes namespace.

Some distributions, such as TKGI 1.11 and later, automatically install the CSI driver as a process rather than the method specified in the VMware vSphere Container Storage Plug-in Documentation. If the CSI driver is installed automatically as a process, PowerProtect Data Manager and the Velero plug-in are unable to obtain the CSI secret in the Kubernetes cluster. Without this information, PowerProtect Data Manager is unable to protect these environments.

To provide the vCenter information and credentials to PowerProtect Data Manager, follow the instructions in the section "Enabling protection when the vSphere CSI driver is installed as a process" under Back up and restore Kubernetes in the PowerProtect Data Manager Public REST API documentation.

Also, to protect the Kubernetes environment on which CSI is installed as a process, the following minimum vCenter user privileges are required when adding the vCenter that is associated with the Kubernetes cluster:

- Datastore.Low level file operations
- Tasks.Create task
- Tasks.Update task

NOTE: The namespace location of the CSI driver secret vsphere-config-secret depends on the VMware CSI version in use:

For CSI versions earlier than 2.3, the secret is in the kube-system namespace.

For CSI versions 2.3 and later, the secret is in the vmware-system-csi namespace.

Customizing PowerProtect Data Manager pod configuration

In some scenarios, such as for adding additional Network Interface Cards (NICs) or setting DNS configuration for pods, you might want to update the PowerProtect Controller, Velero, and cProxy pod configurations to apply additional attributes or change existing attributes. Pod information is specified in a configurations attribute within the API request.

To make changes to a pod configuration, you can create a yaml file that contains these changes, and then specify this yaml in the API so that this information is used by PowerProtect Data Manager to apply the changes, for example, to create custom ports. This process is described under Back up and restore Kubernetes in the PowerProtect Data Manager Public REST API documentation.

Backups fail or hang on OpenShift after a new PowerProtect Data Manager installation or update from a 19.9 or earlier release

After a new installation of PowerProtect Data Manager or an update to PowerProtect Data Manager 19.10 from a 19.9 or earlier release, backups fail or hang on OpenShift. Also, alerts may appear in the PowerProtect Data Manager UI indicating that the Velero pod is not running.

PowerProtect Data Manager uses the OADP operator to deploy Velero on OpenShift. When updating from a PowerProtect Data Manager 19.8 or earlier release, upstream Velero in the velero-ppdm namespace is replaced with the Velero deployed by OADP.

Perform the following steps to diagnose the problem.

1. Verify that the OADP operator pod is running in the velero-ppdm namespace by running the command oc get pod -n velero-ppdm -l name=oadp-operator.

a. If no pods are listed, verify that the cluster role bound to the service account used by the powerprotect-controller has all of the required privileges for the operators.coreos.com, oadp.openshift.io, and konveyor.openshift.io API groups. Use the command oc get clusterrolebinding powerprotect:cluster-role-binding to identify the cluster role being used.

b. Review the logs of the PowerProtect controller pod by running the command oc logs -n powerprotect <powerprotect-controller pod name>.

c. Review the events in the velero-ppdm namespace by running the command oc get events -n velero-ppdm


2. Verify that the operator group oadp-operator was created properly in the velero-ppdm namespace by running the command oc get operatorgroup -n velero-ppdm -o yaml

3. Verify that the subscription oadp-operator was created properly in the velero-ppdm namespace by running the command oc get subscription -n velero-ppdm -o yaml

4. Verify that the install plan of OADP version 0.4.2 was approved properly in the velero-ppdm namespace by running the command oc get ip -n velero-ppdm.

NOTE: The APPROVED field of the CSV oadp-operator.v0.4.2 should be set to true. If an install plan from a later version appears, the APPROVED field for these versions should be set to false.

5. Verify that the Velero pod is running in the velero-ppdm namespace by running the command oc get pod -n velero-ppdm -l component=velero, and review the events in the velero-ppdm namespace by running the command oc get events -n velero-ppdm.

Data protection operations for high availability Kubernetes cluster might fail when API server not configured to send ROOT certificate

If the Kubernetes cluster is set up in high availability mode and the Kubernetes API server is not configured to send the ROOT certificate as part of the TLS communication setup, backup and restore operations might fail with the following error:

javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

To resolve the error, perform the following steps:

1. Copy the root certificate of the Kubernetes cluster to the PowerProtect Data Manager server.
2. As an administrator on the PowerProtect Data Manager server, import the certificate to the PowerProtect Data Manager trust store by running the following command:

   ppdmtool -importcert -alias <certificate alias> -file <file with certificate> -type BASE64|PEM

Where:

- -i or -importcert imports the certificate.
- -a or -alias <certificate alias> specifies the alias of the certificate.
- -f or -file <file with certificate> specifies the file that contains the certificate.
- -t or -type BASE64|PEM specifies the certificate type. The default type is PEM.

NOTE: Since the root certificate is in PEM format, this command should not require the type input.

Sample command to import certificate to PowerProtect Data Manager trust store

ppdmtool -importcert -alias apiserver.xyz.com -file root-certificate

Kubernetes cluster on Amazon Elastic Kubernetes Service certificate considerations

Running a Kubernetes cluster on Amazon Elastic Kubernetes Service (EKS) requires you to manually copy the cluster root certificate authority and import to the PowerProtect Data Manager trust store. Perform the following steps:

1. From the Kubernetes node, retrieve the cluster root certificate by running the following command:

aws eks describe-cluster --region <region> --name <Kubernetes cluster name> --query "cluster.certificateAuthority.data" --output text > <certificate file name>

2. Copy the certificate to the PowerProtect Data Manager server.
3. As an administrator on the PowerProtect Data Manager server, import the certificate to the PowerProtect Data Manager trust store by running the following command:

   ppdmtool -importcert -alias <certificate alias> -file <file with certificate> -type BASE64|PEM

Where:


- -i or -importcert imports the certificate.
- -a or -alias <certificate alias> specifies the alias of the certificate.
- -f or -file <file with certificate> specifies the file that contains the certificate.
- -t or -type BASE64|PEM specifies the certificate type. The default type is PEM.

NOTE: Since the root certificate is a text file, specify BASE64 format for the type input, as shown in the following example.

Sample command to import certificate to PowerProtect Data Manager trust store

ppdmtool -i -a eks.ap-south-1.amazonaws.com -f aws-certificate.txt -t BASE64

Removing PowerProtect Data Manager components from a Kubernetes cluster

Review the following sections if you need to remove PowerProtect Data Manager components from the Kubernetes cluster:

Remove PowerProtect Data Manager components

Run the following commands to remove the PowerProtect Data Manager components:

kubectl delete crd -l app.kubernetes.io/part-of=powerprotect.dell.com
kubectl delete clusterrolebinding powerprotect:cluster-role-binding
kubectl delete namespace powerprotect

Remove Velero components

Run the following commands to remove the Velero components:

kubectl delete crd -l component=velero
kubectl delete clusterrolebinding -l component=velero
kubectl delete namespace velero-ppdm

Remove images from cluster nodes

Run the following commands to remove the Docker Hub images from the cluster nodes:

- On the worker nodes, run sudo docker image ls
- To remove any images that return powerprotect-cproxy, powerprotect-k8s-controller, powerprotect-velero-dd, or velero, run sudo docker image remove <IMAGE ID>

Increase the number of worker threads in Supervisor cluster backup-driver if Velero timeout occurs

During a Kubernetes protection policy backup, the snapshots of persistent volumes in Tanzu Kubernetes guest clusters are performed sequentially by the backup driver in the Supervisor cluster.

If backups fail because of a Velero timeout, even when the Velero pods are running without issues, and there are 10 or more guest clusters in the environment, Dell Technologies recommends increasing the number of worker threads in the Supervisor cluster backup-driver.

To update the Supervisor cluster backup-driver deployment to increase the number of worker threads, apply the following change to the --backup-workers line in the Supervisor cluster backup-driver:

spec:
  containers:
  - args:
    - server
    - --backup-workers=5
    command:
    - /backup-driver

NOTE: Making this change restarts the backup-driver. After the restart, verify that the backup-driver pod is available and running in the Supervisor cluster.

Velero pod backup and restore might fail if namespace being protected contains a large number of resources

Kubernetes namespaces can include a large number of resources, such as secrets, configuration maps, custom resources, and so on. When a namespace being protected by PowerProtect Data Manager contains 1500 or more of these resources, the Velero pod might run out of memory, causing backup and restore operations to fail with an error.

If the namespace being protected has more than 1500 resources and protection jobs are failing with a Velero backup or restore error, increase the memory limits of the Velero pod by performing the following:

1. Run the command kubectl edit deployment velero -n velero-ppdm, or an equivalent command.

2. Review the output for the section on resource limits, which appears similar to the following:

resources:
  limits:
    cpu: "1"
    memory: 256Mi

3. Change the memory limit, depending on the number of Kubernetes resources in the namespace. For example:
   - If the number of Kubernetes resources in the namespace exceeds 2000, change the memory limit to 512Mi.
   - If the number of Kubernetes resources in the namespace exceeds 3000, change the memory limit to 1024Mi.

Pull images from Docker Hub as authenticated user if Docker pull limits reached

PowerProtect container images are hosted in Docker Hub. If the Kubernetes cluster is unable to pull images from Docker Hub because of Docker pull limits, it might be required to pull images from Docker Hub as an authenticated user.

Perform the following steps to pull the images from Docker Hub:

1. Create an imagePullSecret in the powerprotect namespace that contains the Docker user credentials. For example:

kubectl create secret docker-registry secretname --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL -n powerprotect

2. Update the PowerProtect Data Manager deployment in the powerprotect namespace to add a reference to the imagePullSecret created in the previous step:

a. Run the command kubectl edit deployment powerprotect-controller -n powerprotect
b. Review the output for the following lines:

   spec:
     containers:

c. Add the following lines, where the name value is the name of the imagePullSecret that was created in step 1:

   spec:
     imagePullSecrets:    <--- New line
     - name: secretname   <--- New line
     containers:

d. Repeat these steps for the Velero deployment in the velero-ppdm namespace.


The article at https://www.docker.com/increase-rate-limits provides more information about Docker rate limits and creating a Docker user account.


Application-Consistent Database Backups in Kubernetes

Topics:

- About application-consistent database backups in Kubernetes
- Obtain and deploy the CLI package
- About application templates
- Deploy application templates
- Perform application-consistent backups
- Verify application-consistent backups
- Disaster recovery considerations
- Granular-level restore considerations
- Log truncation considerations

About application-consistent database backups in Kubernetes

The PowerProtect Data Manager supports agentless, application-consistent backups of database applications that reside in Kubernetes pods. The existing infrastructure handles database backups; no additional pod compute resources are required.

Application-consistent backups occur when the database application is informed of a pending backup. The database completes all pending transactions and operations, while typically queuing new requests. This process places the database in a quiescent state of relative inactivity where the backup represents a true snapshot of the application. This backup now captures items that would have otherwise been stored only in memory. After the snapshot, the application resumes normal functionality. In most environments, the snapshot operation is instantaneous, so downtime is minimal.

These backups are agentless, in that the PowerProtect Data Manager can take a snapshot of containers without the need for software installation in the database application environment. That snapshot is then backed up using the normal procedures for the Kubernetes environment.

The PowerProtect Data Manager provides a standardized way to quiesce a supported database, back up the data from that database, and then return the database to operation. Application templates serve as a bridge between a specific database environment and the Kubernetes backup architecture for the PowerProtect Data Manager. Depending on the differences between database environments, each deployment may require a different configuration file.

Supported database applications

Supported applications include:

- MySQL, in the following configurations:
  - Standalone deployment in one pod.
  - Cluster (primary/secondary) deployment with multiple StatefulSets or ReplicaSets. For example, through Helm.
- MongoDB, without shards.
- PostgreSQL, in the following configurations:
  - Standalone deployment in one pod.
  - Cluster (primary/secondary) deployment with multiple StatefulSets. For example, through Helm.
- Cassandra, without shards.

Because data syncs from the primary pods to secondary pods, the PowerProtect Data Manager backs up secondary pods first.


NOTE: This guide uses primary and secondary terminology. Some databases may use other terms, such as source and replica, primary and replica, or master and standby.

Prerequisites

The application-consistent database backup functions assume that you have met the following prerequisites:

You must set labels on pods during the deployment process. The database application deploys with a known label on every associated pod, which is required to configure the application template.

The default template for PostgreSQL requires the presence of psql in the PostgreSQL container.

Obtain and deploy the CLI package

The CLI package contains the control commands for application template functionality, readme files, and some examples.

About this task

The CLI package exists on the PowerProtect Data Manager host at /usr/local/brs/lib/cndm/misc/ppdmctl.tar.gz and is part of the PowerProtect Data Manager deployment. There is no separate download for the CLI package.

All application-consistent database backup CLI commands run on the host where the Kubernetes administrator runs control commands, not on the PowerProtect Data Manager host.

Steps

1. The backup administrator uses SCP or another file transfer utility to download the CLI package from the PowerProtect Data Manager host to a local system.

2. The backup administrator provides the CLI package to the Kubernetes administrator.

The Kubernetes administrator completes the remaining steps in this task.

3. Extract the CLI package on the local system.

4. Use SCP or another file transfer utility to copy the CLI package files from the local system to the Kubernetes cluster.

You can also copy the package to any host where the Kubernetes administrator can use the kubectl or equivalent tools to manage the Kubernetes cluster.

Place the CLI package files in a directory that is part of the system path ($PATH) or add the directory to the system path if necessary.

5. Log in to the Kubernetes cluster.

6. Change directory to the location where you uploaded the CLI package files.

7. Make the CLI utility executable by typing the following command:

chmod +x ppdmctl

8. Ensure that the $HOME/.kube directory contains a copy of the Kubernetes cluster config file.

Alternatively, you can add the --kubeconfig parameter to every CLI command to specify the path to the config file.

About application templates

Application templates translate the specific configuration details and required interface steps for each database application deployment to the standard PowerProtect Data Manager backup functionality for Kubernetes.

CAUTION: Do not create more than one template with the same label and the same namespace. In this circumstance, only the last-deployed template takes effect, which may cause undesirable results.

Application templates are typically deployed from customizable YAML files that come with the CLI package. When complete, the application template contains the following items:

AppLabel corresponds to the label that you applied to each pod during deployment. The label identifies all pods that belong to the indicated database application. Labels can contain multiple key-value pairs in a comma-separated list.


If more than one instance of the same database application exists in the same namespace, a separate application template is required for each instance. In this case, each application must use different values for AppLabel.

For example, the label app=mysql matches the template to any pod that has a label with the key app and the value mysql.

Type identifies the type of database application inside the pod or pods.

AppActions matches a prescribed action or filter to a resource type, such as pods.

The next topics explain application actions in more detail.

You can deploy application templates to the PowerProtect namespace or to a specific user-defined namespace. Using a template in the PowerProtect namespace applies the template to all other namespaces. This result can include namespaces where you may not have credentials to run some user-supplied commands or where the expected context may differ from the real context. If you deploy a template to the PowerProtect namespace, that template can use only the default hook actions that are described in a subsequent topic.

When you require specific user-supplied commands for a database application, create an application template for each namespace. Templates in specific namespaces override any behavior that would come from a template of the same name in the PowerProtect namespace.

Default application templates

When you deploy application templates without specifying custom values in a YAML file, the deployment uses values from the default configuration files.

For example, the default MySQL application template supports both stand-alone and cluster instances of MySQL, with a single StatefulSet. In this StatefulSet, the primary pod has index 0. Secondary pods have an index that ranges from 1 to n-1, where n is the number of replicas.

The default MongoDB template supports only stand-alone instances, with similar StatefulSet pod parameters.

Application template example

The following example illustrates the syntax for a MySQL database:

apiVersion: "powerprotect.dell.com/v1beta1" kind: ApplicationTemplate metadata: name: ClusteredMySQLTemplate namespace: examplenamespace spec: type: "MYSQL" enable: true appLabel: "app=mysql" appActions: Pod: preHook: command: '["/bin/sh", "-c", "mysql -uroot -p$MYSQL_ROOT_PASSWORD -e \"FLUSH TABLES WITH READ LOCK; FLUSH LOGS;SELECT SLEEP(100);\" >/tmp/quiesce.log 2>&1 & for i in 1..10; do sleep 1; mysql -uroot -p$MYSQL_ROOT_PASSWORD -e \"SHOW PROCESSLIST\" | grep \"SLEEP(100)\" > /tmp/sleep.pid ; if [ $? -eq 0 ]; then exit 0; fi; done; exit 1"]' postHook: command: '["/bin/sh", "-c", "SLEEPPID=`cut -f1 /tmp/sleep.pid` ; mysql -uroot -p$MYSQL_ROOT_PASSWORD -e \"KILL $SLEEPPID\" ; rm /tmp/sleep.pid"]' StatefulSet: selectors: - selectorTerms: - field: "Labels" selectorExpression: "app=mysql" - field: "Name" selectorExpression: ".*-[1-9][0-9]*$"' # Secondary pods with index > 0 - selectorTerms: - field: "Labels" selectorExpression: "app=mysql" - field: "Name" selectorExpression: ".*-0$"' # Primary pod index 0


After you obtain and extract the CLI package, you can find more sample templates in the examples directory.

YAML configuration files

The YAML configuration files form the core of each application template. These files serve as user-configurable inputs to the process of deploying application templates for namespaces.

The YAML files help you quickly deploy application templates with similar properties by reusing the same YAML file for multiple databases across different namespaces. The CLI package comes with sample configuration files for each supported type of database application. You can copy and then customize these files for your environment.

Each sample from the CLI package contains examples of different application actions, such as selectors that filter by name and by regular expression. The CLI package also comes with a readme file for additional information, including the expected environment variables for each default deployment. Different application types may use different terminology.

The samples explicitly spell out the quiesce and unquiesce command strings; they do not use the default commands that are described in a subsequent topic. This method is normal for templates that are deployed to a specific namespace. If you intend to create a template for deployment to the PowerProtect namespace, you must replace the command strings with the default commands.

You can start building your own command strings by copying the samples and customizing as necessary to change the values. Customization can include changing the location of the lock file, changing the sleep counts, and so forth. You are responsible for any changes to the default command strings.

Application actions

The application template defines actions that the PowerProtect Data Manager automatically performs on discovered resources, including ways to order the actions into a sequence.

Each action is associated with a supported resource type:

Pod defines actions that happen at the pod level. Each application template must have actions for pods that specify how to quiesce and unquiesce the database application inside. Templates for stand-alone applications usually contain only pod-level actions.

StatefulSet and ReplicaSet define actions that happen at the cluster level. This level typically contains the selectors that allow the PowerProtect Data Manager to back up pods in the correct order, before the template applies actions at the pod level.

Pod actions

When the template matches with a pod, there are two available actions:

preHook Provides a command or sequence of commands that quiesce the database application and write its data to disk in preparation for the backup.

postHook Provides a command or sequence of commands that unquiesce the database application and restore normal operation.

MySQL application templates come with default values for these actions: DefaultMySQLQuiesce and DefaultMySQLUnquiesce.

MongoDB application templates come with default values for these actions: DefaultMongoDBQuiesce and DefaultMongoDBUnquiesce.

PostgreSQL application templates come with default values for these actions: DefaultPostgresqlQuiesce and DefaultPostgresqlUnquiesce.

For PostgreSQL, the prehook action does not quiesce the database. Rather, the action places the database into hot backup mode. Similarly, the posthook action removes the database from hot backup mode.

Cassandra application templates come with a default value for the prehook action: DefaultCassandraFlush.

For Cassandra, the prehook action flushes the database to disk. The database provides neither explicit quiescing during the prehook, nor a corresponding unquiesce command for a posthook action.


These default values are reserved keywords in the YAML file. Creating an application template from the YAML file replaces these keywords with relatively safe and standard sequences that quiesce and unquiesce supported database applications, where applicable.

The other parameters that are associated with these default values are:

Timeout defaults to 30 s.

Container defaults to the first container in the pod.

OnError defaults to Fail. The possible values are Fail and Continue.

You can replace these default hooks with sequences of commands that are specific to the database application environment. All values other than the defaults are treated as commands to run.

You can also replace the default parameters with new values, such as the name of a different container or a longer timeout.
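As a minimal sketch, a pod action that keeps the default MySQL hooks but overrides the parameters above might look like the following. The lowercase field names timeout, container, and onError are assumed from the parameter names, and the container name is illustrative.

Pod:
  preHook:
    command: "DefaultMySQLQuiesce"        # reserved keyword for the default quiesce sequence
    timeout: 60                           # assumed field name; overrides the 30 s default
    container: "mysql"                    # assumed field name; illustrative container name
    onError: "Continue"                   # assumed field name; default behavior is Fail
  postHook:
    command: "DefaultMySQLUnquiesce"      # reserved keyword for the default unquiesce sequence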

Example

An application template applies to a MySQL database that resides in a pod. The following template fragment provides custom commands for quiescing and unquiescing the database.

Pod:
  preHook:
    command: "[\"/bin/sh\", \"-c\", \"mysql -uroot -p$MYSQL_ROOT_PASSWORD -e \\\"FLUSH TABLES WITH READ LOCK; FLUSH LOGS;SELECT SLEEP(100);\\\" >/tmp/quiesce.log 2>&1 & for i in 1..10; do sleep 1; mysql -uroot -p$MYSQL_ROOT_PASSWORD -e \\\"SHOW PROCESSLIST\\\" | grep \\\"SLEEP(100)\\\" > /tmp/sleep.pid ; if [ $? -eq 0 ]; then exit 0; fi; done; exit 1\"]"
  postHook:
    command: "[\"/bin/sh\", \"-c\", \"SLEEPPID=`cut -f1 /tmp/sleep.pid` ; mysql -uroot -p$MYSQL_ROOT_PASSWORD -e \\\"KILL $SLEEPPID\\\" ; rm /tmp/sleep.pid\"]"

Selectors

Selectors are an array of criteria that match resources which belong to the database application. For example, if the action is associated with a StatefulSet, then the selectors describe how to match the pods within the StatefulSet.

Selectors can have multiple logical terms, which are logically combined with AND statements to match resources. Logical terms can match on the Labels, Annotations, or Name fields, and provide filter expressions.

Labels and annotations support key-value pair matching. Names support regular-expression matching.

The selector order serializes the actions on each resource. For pods, the selector order controls the order in which each pod is backed up.

Before deploying the application template, verify that your key-value pairs and regular expressions correctly match all pods and select the pods in the correct order.
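One quick, illustrative check of the label portion of a selector is to list the pods that match it with kubectl before deploying the template. The namespace and label value are placeholders, and the name-based regular expressions still need to be reviewed separately.

kubectl get pods --namespace=examplenamespace -l app=mysql -o name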

Example

An application template applies to a MySQL cluster with one StatefulSet. The application label is a key-value pair that is named app with the value mysql. The following selectors match:

A primary pod with a name that contains the suffix "-0".
Secondary pods with names that contain suffixes that start at "-1" and increment.

Remember that secondary pods are backed up before the primary pod.

StatefulSet:
  selectors:
    - selectorTerms:
        - field: "Labels"
          selectorExpression: "app=mysql"
        - field: "Name"
          selectorExpression: ".*-[1-9][0-9]*$"
    - selectorTerms:
        - field: "Labels"
          selectorExpression: "app=mysql"
        - field: "Name"
          selectorExpression: ".*-0$"

Deploy application templates

You can deploy application templates from customized source YAML files or from the default YAML files.

Prerequisites

Obtain and deploy the CLI package.
If required, copy and customize a source YAML file for the appropriate database environment.

Even where the default templates contain actions for cluster instances of supported databases, the default deployment command creates a template for a single-instance database. For supported cluster databases, use the --inputfile parameter to specify a YAML file. This YAML file can be one of the examples.

About this task

This task uses the following placeholders:

template-type is one of the following values: mysqltemplate, mongodbtemplate, postgrestemplate, or cassandratemplate

db-type is one of the following values: mysql, mongodb, postgresql, or cassandra

user-namespace is a specific namespace

file is the name of a customized YAML file, where applicable

Steps

1. Log in to the Kubernetes cluster.

2. To deploy a default application template for a specific namespace, type the following command:

ppdmctl applicationtemplate create template-type --type=db-type --namespace=user-namespace

For example:

a. To deploy a default MySQL application template for a specific namespace, type the following command:

ppdmctl applicationtemplate create mysqltemplate --type=mysql --namespace=user-namespace

b. To deploy a default MySQL application template for the PowerProtect namespace, which applies to all namespaces, type the following command:

ppdmctl applicationtemplate create mysqltemplate --type=mysql --namespace=powerprotect

3. To deploy an application template from a customized YAML file, type the following command:

ppdmctl applicationtemplate create template-type --type=db-type --namespace=user-namespace --inputfile=file.yaml
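For example, a customized PostgreSQL cluster template might be deployed as follows; the namespace and YAML file name are illustrative:

ppdmctl applicationtemplate create postgrestemplate --type=postgresql --namespace=salesdb --inputfile=postgres-cluster.yaml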

4. To list the application templates for a specific namespace, type the following command:

kubectl get applicationtemplate --namespace=user-namespace

5. To edit an application template in a specific namespace, type the following command:

kubectl edit applicationtemplate template-type --namespace=user-namespace

For example:

kubectl edit applicationtemplate mysqltemplate --namespace=powerprotect

Perform application-consistent backups

After you deploy application templates, the agentless nature of the backups means that no special steps are required to perform an application-consistent database backup.

The PowerProtect Data Manager infrastructure detects the presence of a deployed template and follows the template instructions when backing up the namespace to which the database application belongs.


The Add a protection policy for Kubernetes namespace protection topic provides more information about configuring protection policies for Kubernetes namespace protection.

For example, you can perform a manual backup of the Kubernetes protection policy and then verify that the resulting backup is application-consistent.

Verify application-consistent backups

After you back up a database application, you can verify that the application template is correctly configured and that the backup type is application-consistent.

About this task

If at least one template selector matched a resource in the namespace, the PowerProtect Data Manager marks a copy as Application Consistent. For example, if a namespace has ten pods and one pod matched the template selector rules, the entire copy is marked as Application Consistent.

However, you can verify how many resources matched the template and ensure that this number matches your expectations for the template rules.

Steps

1. From the PowerProtect Data Manager UI, select Infrastructure > Assets or Recovery > Assets.

Assets that have copies are listed.

2. Locate the assets that are protected by Kubernetes protection policies.

3. Select an application-consistent database application and click View copies.

The copy map consists of the root node and its child nodes. The root node in the left pane represents an asset, and information about copy locations appears in the right pane. The child nodes represent storage systems.

4. Click a child node.

When you click a child node, the right pane displays information about the copy, such as the creation time, consistency level, size, and so forth.

5. Verify that the consistency level for the copy is Application Consistent.

Without the presence of an application template in the namespace, the consistency level is Crash Consistent.

Now you can verify the number of volumes that matched the template.

6. From the PowerProtect Data Manager UI, select Jobs > Protection and sort by Completed status.

The Jobs window appears.

7. Locate a job that corresponds to a Kubernetes protection policy which protects the database application.

8. Click the magnifying glass icon in the Details column next to the job name.

The Details pane appears on the right, with a Task Summary at the bottom.

9. Next to Task Summary, click the link that indicates the total number of tasks.

A new window opens to display a list of all tasks for the job and details for each task.

10. Click the magnifying glass icon in the Details column next to the individual task.

11. On the Steps tab, review the summary information, which describes the task activity.

12. Click to expand the step and view additional information.

The PowerProtect Data Manager provides a summary of the protection task.

13. In the task result section, locate the applications parameter.

The applications parameter indicates how many PVCs matched the template selector rules.

Because the relationship between pods and PVCs is not necessarily one to one, this result is not the number of pods which matched the rules. The PowerProtect Data Manager cannot identify which specific volumes matched the rules. However, you can verify that the number of volumes aligns with your expectations for the contents of the namespace.

If the number of volumes is incorrect, review the template and ensure that the selector expressions match all pods.


Disaster recovery considerations

Remember that application templates can be deployed to the PowerProtect namespace or to a user-defined namespace. The application template is a required component for working with application-consistent database backups.

When backing up a user-defined namespace, the PowerProtect Data Manager also backs up the application template from the user-defined namespace. The template is thus preserved if a disaster strikes.

However, application templates in the PowerProtect namespace are not backed up and are not automatically preserved. If you deploy an application template to the PowerProtect namespace, you must manually copy or back up these templates yourself. This manual copy preserves the template source in the event of disaster.
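One simple way to keep such a copy, sketched here with an illustrative output file name, is to export the templates in the PowerProtect namespace with kubectl and store the file with your other disaster recovery artifacts:

kubectl get applicationtemplate --namespace=powerprotect -o yaml > powerprotect-templates-backup.yaml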

After the disaster, complete the following tasks:

1. Recover the Kubernetes cluster through the normal disaster-recovery procedure.
2. Manually restore the templates to the Kubernetes cluster.
3. Redeploy the templates from the backup to the PowerProtect namespace.

Granular-level restore considerations

Granular-level restores (GLRs) consist of restoring only a subset of the database or namespace. The PowerProtect Data Manager application-consistent database backups in Kubernetes do not support GLRs.

However, to achieve the effect of a GLR, complete the following steps:

1. Restore from the selected database backup to a new instance. This step restores the entire database to the new namespace.

2. Connect to the new database. Use database application commands to dump the required portion of the database to a local file.

3. Use any appropriate method to move the local file to the original database instance.

4. Connect to the original database. Use database application commands to import the contents of the dump file into the original database. This step reverts the selected portion of the original database to match the contents of the backup.

5. Delete the new database instance.
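For a MySQL database, the dump-and-import portion of this workflow (steps 2 through 4) might look like the following. The namespaces, pod names, database name, and table name are all illustrative.

# Step 2: dump the required portion from the restored (new) instance to a local file
kubectl exec --namespace=restored-ns mysql-0 -- \
  sh -c 'mysqldump -uroot -p"$MYSQL_ROOT_PASSWORD" salesdb orders' > orders.sql
# Steps 3-4: import the dump into the original instance
kubectl exec -i --namespace=original-ns mysql-0 -- \
  sh -c 'mysql -uroot -p"$MYSQL_ROOT_PASSWORD" salesdb' < orders.sql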

Log truncation considerations

MySQL generates binary log files in the MySQL persistent volume claim (PVC) when you perform application-consistent backups and restores. These log files follow the naming convention mysql-bin.xxx and are part of the MySQL application log.

You may have a requirement to truncate these log files for management purposes. However, these files contain both application-consistent information and other customer-specific information. The PowerProtect Data Manager cannot intercept the customer-specific portions of the log, nor determine where to truncate around this information.

Instead, you must review the database log files and decide where to manually truncate the log, if appropriate. Dell EMC recommends that you manage these binary log files in concert with the other MySQL log files.
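If you do decide to truncate, MySQL's own PURGE BINARY LOGS statement is the usual mechanism. The following is an illustrative invocation from inside the primary pod; the namespace, pod name, and cut-off date are placeholders.

# Purge MySQL binary logs older than a chosen date (illustrative values only)
kubectl exec --namespace=original-ns mysql-0 -- \
  sh -c "mysql -uroot -p\"\$MYSQL_ROOT_PASSWORD\" -e \"PURGE BINARY LOGS BEFORE '2022-03-01 00:00:00'\""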
