
Dell EMC PowerFlex Appliance Administration Guide

September 2022 Rev. 8.5

Notes, cautions, and warnings

NOTE: A NOTE indicates important information that helps you make better use of your product.

CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid the problem.

WARNING: A WARNING indicates a potential for property damage, personal injury, or death.

© 2019 - 2022 Dell Inc. or its subsidiaries. All rights reserved. Dell Technologies, Dell, and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be trademarks of their respective owners.

Contents

Revision history.......................................................... 11

Chapter 1: Introduction................................................... 13

Chapter 2: Administering the network...................................... 14
    Customer switch port configuration examples.......................... 14
    VLAN mapping for access and aggregation switches..................... 14
    VLAN mapping for leaf-spine switches................................. 15
    Configuration data................................................... 16
    Port-channel with LACP for full network automation or partial network automation... 16
    Port-channel for full network automation............................. 17
    Individual trunk for full network automation or partial network automation... 18
    Configuring the Cisco Nexus access aggregation network............... 20
    Configuring the Dell EMC PowerSwitch access switches................. 21
    Using an embedded operating system-based jump server................. 22
    Jump server.......................................................... 22
    Jump server tools.................................................... 23
    Jump server access................................................... 23
    File sharing services................................................ 23
    Security and administration.......................................... 24
    Jump server updates.................................................. 24
    Install the embedded operating system-based iDRAC tools.............. 24
    Converting the Windows jump VM to the embedded operating system jump VM... 24
    Installing the offline repository.................................... 25
    Verifying connectivity between Storage Data Server (SDS) and Storage Data Client (SDC)... 25
    Verifying connectivity between Storage Data Server (SDS) and PowerFlex Gateway... 26
    Checking the maximum transmission unit on all switches and servers... 27
    Checking the maximum transmission unit on the access switch.......... 27
    Checking the maximum transmission unit on a VMkernel port............ 27
    Checking the maximum transmission unit on all port groups or ports... 28
    Adding a network to the deployed service using PowerFlex Manager..... 28
    Add a network to a service........................................... 29
    Add a VLAN to an access switch connected to a PowerFlex appliance cluster... 29
    Verifying a VLAN configuration....................................... 30
    Gather logs from the network switch for troubleshooting.............. 30
    Customer switch port configuration examples.......................... 31
    Upgrade the Dell EMC network......................................... 44
    Check the current version of switch operating system................. 44
    Save the license file and the configuration.......................... 44
    Download Dell EMC Networking OS10.................................... 46
    Connect to a switch.................................................. 46
    Configure a USB drive for OS10 installation.......................... 47
    Manual install using USB............................................. 47
    Upgrade OS10 image from existing OS10 install........................ 48
    Upgrade from non-OS10 operating system to OS10 using ONIE............ 50


    Install OS from ONIE................................................. 51
    Update ONIE using an existing ONIE and TFTP.......................... 51
    DIAG OS installation or update....................................... 52
    Install or upgrade EDA-DIAG tools.................................... 54
    Firmware requirements................................................ 55
    Verify Dell switch firmware.......................................... 56
    IP address assignment in ONIE........................................ 57

Chapter 3: Administering the storage...................................... 58
    PowerFlex management controller datastore and virtual machine details... 58
    Determining and switching the MDM.................................... 59
    Update resource inventory............................................ 60
    Add volumes to the service........................................... 60
    Add volumes to a service in lifecycle mode........................... 62
    Adding a PowerFlex appliance node to an existing cluster............. 63
    Removing a PowerFlex node for maintenance............................ 64
    Entering and exiting service mode.................................... 64
    Rebooting a PowerFlex node........................................... 65
    Resize a volume...................................................... 65
    Resize a volume in lifecycle mode.................................... 66
    Unmapping a volume................................................... 66
    Unmap a volume on PowerFlex management controller 2.0................ 67
    Unmapping a volume using a PowerFlex version prior to 3.5............ 67
    Removing a volume.................................................... 67
    Remove a volume on the PowerFlex management controller 2.0........... 68
    Removing a volume using a PowerFlex version prior to 3.5............. 68
    Disabling persistent checksum on medium granularity storage pools.... 69
    Using PowerFlex GUI presentation server to disable persistent checksum... 69
    Enabling persistent checksum for medium granularity storage pools.... 69
    Using PowerFlex to enable persistent checksum........................ 70
    Enable fine granularity metadata read cache using the command line... 70
    Add licenses to PowerFlex and PowerFlex Manager...................... 71
    Managing volumes, nodes, and network components...................... 71
    Monitoring system health............................................. 72
    Upgrading PowerFlex appliance firmware............................... 73
    Upgrade Windows and Linux compute-only nodes......................... 74
    Mapping a volume using a PowerFlex version prior to 3.5 to a Windows PowerFlex compute-only node... 75
    Mapping a volume using Windows PowerFlex compute-only node........... 75
    Enabling and disabling SDC authentication............................ 76
    Preparing for SDC authentication..................................... 76
    Configuring SDCs to use authentication............................... 76
    Windows and Linux SDC nodes.......................................... 77
    Enabling SDC authentication.......................................... 78
    Disabling SDC authentication......................................... 78
    Expanding an existing PowerFlex cluster with SDC authentication enabled... 79

Chapter 4: Administering the storage with asynchronous replication........ 80
    Remote replication on PowerFlex hyperconverged nodes................. 80
    Remote consistency group (RCG)....................................... 80


    Replication direction and mapping.................................... 80
    Adding a replication consistency group............................... 81
    Checking the current copy status..................................... 82
    Modifying the recovery point objective............................... 82
    Adding a replication pair to a remote consistency group.............. 82
    Unpairing from a remote consistency group............................ 82
    Freezing a remote consistency group.................................. 83
    Unfreezing a remote consistency group................................ 83
    Setting the target to inconsistent mode.............................. 83
    Setting the target to consistent mode................................ 84
    Running a test failover.............................................. 84
    Stopping test failover............................................... 85
    Running a failover................................................... 85
    Restoring replication................................................ 85
    Reversing replication................................................ 86
    Creating a snapshot of the remote consistency group (RCG) volume..... 86
    Pausing the remote consistency group................................. 87
    Pausing the initial copy............................................. 87
    Resuming the initial copy............................................ 87
    Resuming the replication consistency group........................... 87
    Setting priority..................................................... 88
    Mapping remote consistency groups to the Storage Data Clients (SDC)... 88
    Mounting a VMFS datastore copy on the target VMware ESXi cluster..... 88
    Unmapping a Storage Data Client (SDC) from the remote consistency group target volumes... 89
    Configuring replication on PowerFlex storage-only nodes.............. 89
    Add storage data replication to PowerFlex............................ 89
    Extract and add the MDM certificate.................................. 90
    Create the replication consistency group............................. 90
    Disabling replication on PowerFlex storage-only nodes................ 92
    Freeze the remote consistency group.................................. 92
    Remove the remote consistency group.................................. 92
    Remove a peer system................................................. 92
    Remove replication trust for peer system............................. 93
    Enter SDS in maintenance mode........................................ 93
    Remove storage data replication from PowerFlex....................... 93
    Remove a storage data replication RPM................................ 94
    Clean up network configurations...................................... 94
    Exit SDS in maintenance mode......................................... 94
    Remove journal capacity.............................................. 95
    Remove target volumes from the destination system.................... 95

Chapter 5: Configuring and viewing alerts................................. 96
    Configure the alert connector........................................ 96
    Configuring SNMP trap and syslog forwarding.......................... 97
    Configure SNMP trap forwarding....................................... 98
    Configure syslog forwarding.......................................... 99

Chapter 6: Administering PowerFlex Manager................................ 101
    PowerFlex Manager limits............................................. 101


    Back up and restore.................................................. 101
    Back up and restore PowerFlex Manager................................ 101
    Back up the appliance SSL and trusted certificates................... 102
    Restore the appliance SSL and trusted certificates................... 102
    Add or modify user accounts.......................................... 103
    Assigning users to services.......................................... 104
    Recovering a lost password........................................... 104
    Access switch password management.................................... 104
    VMware vCenter password management................................... 105
    VMware ESXi operating system password management..................... 106
    Adding a non-root user to VMware ESXi................................ 106
    Minimum VMware vCenter permissions................................... 106
    Create a user in monitoring mode..................................... 107
    Create a user in lifecycle mode...................................... 107
    Create a user in managed mode........................................ 108
    Windows server operating system password management.................. 115
    Updating passwords in PowerFlex Manager.............................. 116
    Update passwords for the PowerFlex Gateway........................... 116
    Updating passwords for PowerFlex Gateway components.................. 116
    Updating passwords for system components............................. 117
    Updating passwords for nodes......................................... 117
    Embedded operating system password management........................ 118
    Adding users......................................................... 119
    Granting sudo privileges to a user................................... 119
    Managing users with sudo privileges.................................. 119
    Deleting users....................................................... 120
    Presentation server root password management......................... 120
    Red Hat Enterprise Linux user and password management................ 120
    Enabling sudo on a user.............................................. 121
    SUSE user and password management.................................... 121
    Creating users....................................................... 121
    Deleting users....................................................... 122
    Enabling sudo on a user.............................................. 122
    Credentials management............................................... 122
    Restarting the PowerFlex Manager virtual appliance................... 123

Chapter 7: Deploying PowerFlex nodes using PowerFlex Manager.............. 124
    Deployment modes..................................................... 124
    Managed mode......................................................... 124
    Lifecycle mode....................................................... 124
    Alerting mode........................................................ 125
    Full network automation.............................................. 125
    Full network automation: Deploying a PowerFlex compute-only node with Red Hat Enterprise Linux or CentOS... 125
    Full network automation: Deploying a PowerFlex storage-only node..... 129
    Full network automation: Deploying a VMware ESXi PowerFlex hyperconverged node or PowerFlex compute-only node... 133
    Adding volumes to a PowerFlex hyperconverged node or PowerFlex compute-only node... 138
    Partial network automation........................................... 139


    Partial network automation: Deploying a PowerFlex compute-only node with Red Hat Enterprise Linux or CentOS... 139
    Partial network automation: Deploying a PowerFlex storage-only node... 143
    Partial network automation: Deploying a VMware ESXi PowerFlex hyperconverged node or PowerFlex compute-only node... 147
    Adding volumes to a PowerFlex hyperconverged node or PowerFlex compute-only node... 151

Chapter 8: Restoring the PowerFlex Gateway................................ 153
    Configure SNMP for PowerFlex Gateway................................. 154
    Installing the PowerFlex Gateway..................................... 155
    Installing the PowerFlex Gateway prior to PowerFlex 3.5.............. 155
    Changing the root password on the VM................................. 156
    Configuring the PowerFlex Gateway network interfaces................. 156
    Configuring the PowerFlex Gateway NTP client......................... 158
    Configuring the PowerFlex Gateway hostname........................... 158
    Installing the Java and PowerFlex Gateway RPMs....................... 158
    Restoring the PowerFlex Gateway configuration........................ 159
    Deploying the PowerFlex GUI presentation server...................... 159
    Linking and unlinking the MDM to the presentation server web UI...... 160
    Link the MDM to the presentation server web UI....................... 160
    Unlink the MDM to the presentation server web UI..................... 161

Chapter 9: Upgrading VMware vCenter....................................... 162
    Upgrading VMware vCenter infrastructure management components........ 162
    Stage and upgrade the iDRAC and firmware............................. 163
    Shutting down all the VMs running on the controller host............. 164
    Upgrading VMware vSphere ESXi........................................ 164
    Powering on all the VMs running on the controller host............... 165
    Upgrading the iDRAC service module................................... 165
    Change the SVM CPU clock reservation................................. 165
    Find the CPU and clock speed......................................... 166
    Migrating vCLS VMs on controller nodes............................... 166
    Upgrading the embedded operating system jump VM...................... 166
    Installing the offline repository.................................... 167

Chapter 10: Upgrading a PowerFlex appliance environment................... 168
    Intelligent catalog (IC) trains and the upgrade process.............. 169
    Verify and change the maximum transmission unit (MTU) value.......... 169
    Back up and verify the dvSwitch configuration........................ 170
    Change the maximum transmission unit (MTU) on the access switch...... 170
    Change the maximum transmission unit (MTU) on the cust_dvswitch...... 171
    Change the maximum transmission unit (MTU) for VMware vMotion VMK.... 171
    Add a new compatibility management file.............................. 171
    Upgrade the PowerFlex Manager virtual appliance...................... 172
    Back up using PowerFlex Manager...................................... 172
    Back up the appliance SSL and trusted certificates................... 173
    Power off the PowerFlex Manager appliance............................ 173
    Take a snapshot of the PowerFlex Manager appliance................... 174
    Upgrading the PowerFlex Manager virtual appliance without using Secure Remote Services (from a local repository path)... 174


    Confirming service settings.......................................... 176
    Adding a new Intelligent Catalog file and OS images to PowerFlex Manager... 177
    Upgrade the PowerFlex presentation server............................ 178
    Discover the presentation server manually............................ 178
    Upgrade PowerFlex GUI presentation server using PowerFlex Manager.... 178
    Embedded OS RPM patching on the PowerFlex Gateway VM and presentation server VM... 179
    Deploy and configure the PowerFlex GUI presentation server........... 179
    Upgrade CloudLink Center............................................. 180
    Validate SNMP in CloudLink Center.................................... 181
    Validate the syslog status in CloudLink Center....................... 181
    Upgrading PowerFlex.................................................. 181
    Upgrading Java on the PowerFlex Gateway and PowerFlex GUI presentation server... 183
    Update PowerFlex appliance nodes..................................... 184
    Migrating VMware vSphere Cluster Services (vCLS) VMs................. 185
    Upgrading Cisco NX-OS 7.x to Cisco NX-OS 9.x......................... 185
    Upgrading the electronic programmable logic device (EPLD)............ 187
    Upgrade firmware for IPI G5 network controller....................... 190

Chapter 11: Upgrading VMware NSX-T Edge nodes............................. 191
    Stage and upgrade the iDRAC and firmware............................. 191
    Validate the vSAN health............................................. 192
    Shut down all the VMs on the NSX-T Edge Gateway host................. 192
    Put VMware NSX-T Edge Gateway host into maintenance mode............. 192
    Upgrade VMware vSphere ESXi.......................................... 192
    Exit maintenance mode................................................ 193
    Power on all VMs running on the VMware NSX-T Edge Gateway host....... 194
    Upgrade the iDRAC service module..................................... 194
    Upgrade the VMware vSphere Distributed Switch........................ 194
    Upgrade the VMware vSAN disk format (vSAN storage option only)....... 195
    Verifying VMware vSAN health (vSAN storage option only).............. 195

Chapter 12: Enable replication on existing PowerFlex hyperconverged nodes... 196
    Prerequisites........................................................ 196
    Workflow............................................................. 196
    Remove an existing PowerFlex hyperconverged service from PowerFlex Manager... 197
    Create and configure replication port groups......................... 197
    Preparing the SVMs for replication................................... 197
    Set the SDS NUMA..................................................... 198
    Enabling replication on a PowerFlex appliance with FG Pool........... 198
    Verify Network Manager is disabled................................... 198
    Update the network configuration..................................... 198
    Update the grub configuration file................................... 199
    Enter the SDS nodes into maintenance mode and power off.............. 200
    Add virtual NICs to SVMs............................................. 200
    Record the MAC address of the newly added network interface controllers... 200
    Modifying the vCPU, memory, vNUMA and CPU reservation settings on SVMs... 201
    Modify the memory size............................................... 201
    Increase the vCPU count.............................................. 201
    Setting the vNUMA advanced option.................................... 201


    Set the vNUMA advanced option........................................ 202
    Modifying the memory size according to the SDR requirements for FG pool-based PowerFlex systems with replication... 202
    Increasing the vCPU count according to the SDR requirement........... 203
    Setting the vNUMA advanced option.................................... 203
    Editing the SVM configuration........................................ 203
    Powering on the SVM and configuring network interfaces............... 204
    Configure the newly added network interface controllers for SVMs..... 204
    Add a permanent static route for replication external networks....... 204
    Install SDR RPMs on the SDS nodes (SVMs)............................. 205
    Exit SDS maintenance mode............................................ 205
    Verify communication between the source and destination.............. 205
    Add journal capacity percentage...................................... 206
    Calculate journal capacity to allocate............................... 206
    Add allocated journal capacity....................................... 207
    Adding the Storage Data Replicator to a PowerFlex appliance.......... 207
    Create the peer system between the source and destination site....... 208
    Add the peer system.................................................. 208
    Create the replication consistency group............................. 209
    Find the current copy status......................................... 209
    Modify the recovery point objective.................................. 210
    Define the network for replication in PowerFlex Manager.............. 210
    Add an existing service to PowerFlex Manager......................... 210

Chapter 13: Retrieving PowerFlex performance metrics...................... 214
    Retrieving PowerFlex performance metrics using the PowerFlex GUI..... 214
    Retrieving PowerFlex performance metrics using a PowerFlex version prior to 3.5... 214

Chapter 14: Performing maintenance activities in a PowerFlex cluster...... 216
    Data assurance during maintenance.................................... 217
    Entering protected maintenance mode.................................. 218
    Exiting protected maintenance mode................................... 219

Chapter 15: Administering the CloudLink Center............................ 220
    Adding and managing CloudLink Center licenses........................ 220
    License CloudLink Center............................................. 220
    Add the CloudLink Center license in PowerFlex Manager................ 220
    Delete expired or unused CloudLink Center licenses from PowerFlex Manager... 220
    Configure custom syslog message format............................... 221
    Registering KMIP on CloudLink Center................................. 221
    Manage a self-encrypting drive (SED) from CloudLink Center........... 222
    Manage a self-encrypting drive from the command line................. 222
    Release a self-encrypting drive...................................... 223
    Release management of a self-encrypting drive from the command line... 224
    Changing the CloudLink secadmin user password........................ 224
    Unlocking the CloudLink secadmin user................................ 225
    Setting CloudLink Vault passcodes.................................... 225
    Back up and restore CloudLink Center................................. 225
    Viewing back up information.......................................... 225


    Changing the schedule for automatic backups.......................... 226
    Generating a backup file manually.................................... 226
    Generating a backup key pair......................................... 227
    Downloading the current backup file.................................. 227
    Restoring the CloudLink backup....................................... 228

Chapter 16: Powering off and on the PowerFlex appliance cluster........... 229
    Power off the PowerFlex management controller 2.0.................... 229
    Power on the PowerFlex management controller 2.0..................... 229
    Powering off a PowerFlex appliance hyperconverged cluster............ 230
    Powering on a PowerFlex appliance hyperconverged cluster............. 231
    Powering off PowerFlex appliance two-layer cluster................... 232
    Powering on PowerFlex appliance two-layer cluster.................... 234
    Powering off PowerFlex compute-only nodes with Windows Server 2016 or 2019... 235
    Powering off PowerFlex compute-only nodes with Red Hat............... 235

Chapter 17: Ports and authentication protocols............................ 236
    PowerFlex Manager ports and protocols................................ 236
    PowerFlex ports and authentication................................... 237
    VMware vSphere ports and protocols................................... 237
    CloudLink Center ports and protocols................................. 237

Chapter 18: Additional documentation...................................... 238
    Configure VMware vCenter high availability........................... 238


Revision history

| Date | Document revision | Description of changes |
|---|---|---|
| September 2022 | 8.5 | Added content for: Upgrading the electronic programmable logic device (EPLD); Configure vCenter high availability |
| August 2022 | 8.4 | Added content for: Run inventory of the controller vCSA using PowerFlex Manager; Disconnecting the Patch-FP ISO; Converting Secure Remote Services to Secure Connect Gateway. Updated content for: Modes to deploy PowerFlex Manager; Service node scenarios; Java file path; Intelligent Catalog (IC) trains |
| June 2022 | 8.3 | Added content for downloading the minimal embedded operating system images. |
| May 2022 | 8.2 | Added support for VMware vSphere Client 7.0 U3c. Added content for Upgrade the Dell EMC network. Updated content for: Determining and switching the MDM; Intelligent catalog (IC) trains and the upgrade process |
| March 2022 | 8.1 | Added content for backing up and restoring SSL and trusted certificates. |
| November 2021 | 8.0 | Added content for: PowerFlex management controller 2.0, an R650-based controller that uses PowerFlex storage and a VMware ESXi hypervisor; PowerFlex Manager 3.8; VMware vCSA 7.0 Update 2c; VMware ESXi 7.0 Update 2d; Mellanox ConnectX-5; CloudLink 7.1 |
| July 2021 | 7.1 | Updated the Upgrade PowerFlex Manager using backup and restore process. |
| June 2021 | 7.0 | Added content for: Administering storage with asynchronous replication; Remote replication on PowerFlex storage-only nodes; Minimum VMware vCenter permissions required to support PowerFlex Manager; VMware vCLS VM migration; Enabling replication on existing PowerFlex hyperconverged nodes; Dell PowerSwitch S5296F; Upgrading VMware NSX-T Edge Gateway nodes |
| December 2020 | 6.1 | Added content for Upgrading VMware vSphere for patch releases. Updated content for Native asynchronous replication |
| November 2020 | 6.0 | Added content for: Customer switch port examples; Persistent checksum for data integrity; SDC authentication; Full and partial network automation. Updated content for CloudLink |
| September 2020 | 5.1 | Updated content for PowerFlex Gateway |
| June 2020 | 5.0 | Added content for: Storage data replication (SDR); Cisco NX-OS upgrade to 9.x; PowerFlex 3.5; CloudLink 6.9; Protected maintenance mode (PMM) |
| March 2020 | 4.0 | Updated content for: CloudLink; Windows compute-only nodes; Dell EMC Networking |
| November 2019 | 3.0 | Updated for CloudLink support; Windows Server OS support changes to embedded operating systems. Removed OpenManage Enterprise tasks |
| September 2019 | 2.0 | Updated and added new topics for the September release |
| August 2019 | 1.0 | Initial release |


Introduction

This guide provides procedures for administering and upgrading the PowerFlex appliance.

It provides the following information:
- Administering the operating system, network, and storage
- Managing components of the management and customer cluster with PowerFlex Manager
- Upgrading a PowerFlex appliance environment
- Monitoring system health
- Monitoring and alerting using Secure Remote Services
- Configuring SNMP trap and syslog forwarding
- Backing up and restoring
- Administering the CloudLink Center
- Managing PowerFlex appliance passwords
- Powering on and off
- Ports and authentication protocols

The dvswitch names are examples only and may not match the configured system. Do not change these names, or a data unavailability or data loss event may occur.

Depending on when the system was built, it uses an embedded operating system-based jump server or a Windows-based jump server. The specific procedures in this guide describe using the Windows-based jump server. You can accomplish the same tasks using the tools available for the embedded operating system-based jump server. Refer to Using an embedded operating system-based jump server for more details.

Depending on when the system was built, it will have one of the following PowerFlex management controllers:

| Controller | Description |
|---|---|
| PowerFlex management controller 2.0 | R650-based PowerFlex management controller that uses PowerFlex storage and a VMware ESXi hypervisor |
| PowerFlex management controller 1.0 | R640-based PowerFlex management controller that uses VMware vSAN storage and a VMware ESXi hypervisor |

In a default PowerFlex setup, two data networks are standard. Four data networks are required only for specific customer requirements, such as high performance or the use of trunk ports.

Dell EMC PowerFlex appliance was previously known as Dell EMC VxFlex appliance. Similarly, Dell EMC PowerFlex Manager was previously known as Dell EMC VxFlex Manager, and Dell EMC PowerFlex was previously known as Dell EMC VxFlex OS. References in the documentation will be updated over time.

PowerFlex management controller 2.0 with a PERC H755 RAID controller is added as a service in lifecycle mode in PowerFlex Manager.

PowerFlex management controller 2.0 with an HBA355 controller is added as a service in managed mode in PowerFlex Manager.

PowerFlex appliance architecture is based on Dell EMC PowerEdge R650, R750, R6525, R640, R740xd, and R840 servers.

PowerFlex Manager provides the management and orchestration functionality for PowerFlex appliance.

See the Glossary for terms, definitions, and acronyms.


Administering the network

Perform these procedures to administer the PowerFlex appliance network.

NOTE: If Cisco switches require TACACS+ authentication, PowerFlex Manager still functions normally.

Customer switch port configuration examples

VLAN mapping for access and aggregation switches

VLAN L2/L3 MTU Default VLAN number

flex-oob-mgmt- L3 1500 101

flex-vcsa-ha- L2 1500 103

flex-install- L2 1500 104

flex-node-mgmt- L3 1500/9000 105

flex-vmotion- L2 9000/1500 106 - only for PowerFlex management controller 1.0

flex-vsan- L2 9000 113 - only for PowerFlex management controller 1.0

flex-stor-mgmt- L3 1500 150

flex-data1- L2/L3 9000/1500 151 - L3 If external SDC to SDS communication is enabled

flex-data2- L2/L3 9000/1500 152 - L3 If external SDC to SDS communication is enabled

flex-data3- L2/L3 9000/1500 153 - L3 If external SDC to SDS communication is enabled

flex-data4- L2/L3 9000/1500 154 - L3 If external SDC to SDS communication is enabled

flex-rep1- L3 1500 161 - only if data replication is enabled

flex-rep2- L3 1500 162 - only if data replication is enabled

pfmc-sds-mgmt- L3 1500 140 - only for PowerFlex management controller 2.0

pfmc-sds-data1- L2 9000 141 - only for PowerFlex management controller 2.0

pfmc-sds-data2- L2 9000 142 - only for PowerFlex management controller 2.0


pfmc-vmotion- L2 1500 143 - only for PowerFlex management controller 2.0

nsx-transport- L2 9000 121

nsx-vsan- L2 9000 116

nsx-edge1- L3 1500 122 (nsx-edge1-ext-link1,nsx- edge2-ext-link1)

nsx-edge2- L3 1500 123 (nsx-edge1-ext-link2,nsx- edge2-ext-link2)

VLAN mapping for leaf-spine switches

Name L2/L3 Default VLAN VxLAN VRF name (For routed network)

flex-oob-mgmt- L3 101 10101 FLEX_Management_VRF

flex-vcsa-ha- L2 103 10103 FLEX_Management_VRF

flex-install- L2 104 10104 FLEX_Management_VRF

flex-node-mgmt- L3 105 10105 FLEX_Management_VRF

flex-vmotion- L2 106 - PowerFlex management controller 1.0 only 10106 FLEX_Management_VRF

flex-vsan- L2 113 - PowerFlex management controller 1.0 only 10113 FLEX_Management_VRF

flex-stor-mgmt- L3 150 10150 FLEX_Management_VRF

flex-data1- L2 151 10151 FLEX_Management_VRF

flex-data2- L2 152 10152 FLEX_Management_VRF

flex-data3- (if required) L2 153 10153 FLEX_Management_VRF

flex-data4- (if required) L2 154 10154 FLEX_Management_VRF

flex-data1- L3 151 - For L3 external SDS 10151 FLEX_SDS_VRF

flex-data2- L3 152 - For L3 external SDS 10152 FLEX_SDS_VRF

flex-data3- (if required) L3 153 - For L3 external SDS 10153 FLEX_SDS_VRF

flex-data4- (if required) L3 154 - For L3 external SDS 10154 FLEX_SDS_VRF

flex-tenant1-data1- L3 171 - For multi-tenant SDC 10151 FLEX_ _SDC_VRF

flex-tenant1-data2- L3 172 - For multi-tenant SDC 10152 FLEX_ _SDC_VRF

flex-tenant1-data3- L3 173 - For multi-tenant SDC 10153 FLEX_ _SDC_VRF

flex-tenant1-data4- L3 174 - For multi-tenant SDC 10154 FLEX_ _SDC_VRF

flex-tenant2-data1- L3 181 - For multi-tenant SDC 10171 FLEX_ _SDC_VRF

flex-tenant2-data2- L3 182 - For multi-tenant SDC 10172 FLEX_ _SDC_VRF


flex-tenant2-data3- L3 183 - For multi-tenant SDC 10173 FLEX_ _SDC_VRF

flex-tenant2-data4- L3 184 - For multi-tenant SDC 10174 FLEX_ _SDC_VRF

flex-rep1- L3 161 - For data replication 10161 FLEX_REP_VRF

flex-rep2- L3 162 - For data replication 10162 FLEX_REP_VRF

pfmc-sds-mgmt- L3 140 - PowerFlex management controller 2.0 only 10140 FLEX_Management_VRF

pfmc-sds-data1- L2 141 - PowerFlex management controller 2.0 only 10141 FLEX_Management_VRF

pfmc-sds-data2- L2 142 - PowerFlex management controller 2.0 only 10142 FLEX_Management_VRF

pfmc-vmotion- L3 143 - PowerFlex management controller 2.0 only 10143 FLEX_Management_VRF

nsx-transport- L2 121 10121 FLEX_NSX_VRF

nsx-edge1- L3 122 10122

nsx-edge2- L3 123 10123

temp-dns- L3 999 10999 FLEX_Management_VRF

FLEX_MGMT_VRF- 1231 101231 FLEX_Management_VRF

FLEX_REP_VRF- 1232 101232 FLEX_REP_VRF

FLEX_SDS_VRF- 1233 101233 FLEX_SDS_VRF

FLEX_ _SDC_VRF- 1234 101234 FLEX_ _SDC_VRF

FLEX_ _SDC_VRF- 1235 101235 FLEX_ _SDC_VRF

Configuration data

This section provides the port channel and individual trunk configuration data for full network automation (FNA) or partial network automation (PNA).

Port-channel with LACP for full network automation or partial network automation

All nodes are connected to access and leaf pair switches.

Node | vSwitch | Port-channel/interface mode | Speed (Gb) | Mode | Required VLANs | Node LB
PowerFlex management controller 1.0 | FE_DvSwitch | 91,92,93,94 | 10/25 | Active | 104,105,150 | LAG-Active-Src and dest IP and TCP/UDP
PowerFlex management controller 1.0 | BE_DvSwitch | 81,82,83,84 | 10/25 | Active | 103,106,113,151-154 (153 and 154 optional) | LAG-Active-Src and dest IP and TCP/UDP
PowerFlex management controller 2.0 | FE_DvSwitch | 91,92,93,94 | 10/25 | Active | 104,105,140,150 | LAG-Active-Src and dest IP and TCP/UDP
PowerFlex management controller 2.0 | BE_DvSwitch | 81,82,83,84 | 10/25 | Active | 103,141,142,143,151-154 (153 and 154 optional) | LAG-Active-Src and dest IP and TCP/UDP
PowerFlex management controller 1.0 or 2.0 | oob_DvSwitch | Access | 1/10 | NA | 101 | NA
PowerFlex compute-only node (VMware ESXi) | Cust_DvSwitch | 2,4,6 | 10/25/100 | Active | 104-106 | LAG-Active-Src and dest IP and TCP/UDP
PowerFlex compute-only node (VMware ESXi) | Flex-DvSwitch | 1,3,5 | 10/25/100 | Active | 151-154 (153 and 154 optional) | LAG-Active-Src and dest IP and TCP/UDP
PowerFlex compute-only node (Linux) | Bond0 | 2,4,6 | 10/25/100 | Active | 104-105 | LAG-Active-Src and dest IP and TCP/UDP
PowerFlex compute-only node (Linux) | Bond1 | 1,3,5 | 10/25/100 | Active | 151-154 (153 and 154 optional) | Mode 4
PowerFlex hyperconverged node | Cust_DvSwitch | 2,4,6 | 10/25/100 | Active | 104-106,150 | LAG-Active-Src and dest IP and TCP/UDP
PowerFlex hyperconverged node | Flex-DvSwitch | 1,3,5 | 10/25/100 | Active | 151-154 (153 and 154 optional) | LAG-Active-Src and dest IP and TCP/UDP
PowerFlex storage-only node | Bond0 | 2,4,6 | 10/25/100 | Active | 150,151,153,161 (153 optional) | Mode 4
PowerFlex storage-only node | Bond1 | 1,3,5 | 10/25/100 | Active | 152,154,162 (154 optional) | Mode 4
Aggregation | NA | 1900 | 100 | Active | All VLANs as specified in the VLAN mapping section | NA

Port-channel for full network automation

Node | vSwitch | Port-channel/interface mode | Speed (Gb) | Mode | Required VLANs | Node LB
PowerFlex management controller 1.0 | FE_dvSwitch | 91,92,93,94 | 10/25 | ON | 104,105,150 | Route based on IP hash
PowerFlex management controller 1.0 | BE_dvSwitch | 81,82,83,84 | 10/25 | ON | 103,106,113,151-154 (153 and 154 optional) | Route based on IP hash
PowerFlex management controller 2.0 | FE_dvSwitch | 91,92,93,94 | 10/25 | Active | 104,105,140,150 | Route based on IP hash
PowerFlex management controller 2.0 | BE_dvSwitch | 81,82,83,84 | 10/25 | Active | 103,141,142,143,151-154 (153 and 154 optional) | Route based on IP hash
PowerFlex management controller 1.0 or 2.0 | oob_dvSwitch | Access | 1/10 | NA | 101 | NA
PowerFlex compute-only nodes | Cust_dvSwitch | 2,4,6 | 10/25/100 | ON | 104-106 | Route based on IP hash
PowerFlex compute-only nodes | Flex_dvSwitch | 1,3,5 | 10/25/100 | ON | 151-154 (153 and 154 optional) | Route based on IP hash
PowerFlex hyperconverged nodes | Cust_dvSwitch | 2,4,6 | 10/25/100 | ON | 104-106,150 | Route based on IP hash
PowerFlex hyperconverged nodes | Flex_dvSwitch | 1,3,5 | 10/25/100 | ON | 151-154 (153 and 154 optional) | Route based on IP hash
PowerFlex storage-only nodes | NA | NA | NA | NA | NA | NA
Aggregation | NA | 1900 | 100 | ON | All VLANs (not for leaf-spine) | NA

Individual trunk for full network automation or partial network automation

Node | vSwitch | Port-channel/interface mode | Speed (Gb) | Required VLANs | Node LB
PowerFlex management controller 1.0 | FE_dvSwitch | Trunk | 10/25 | 104,105,150 | Originating virtual port (recommended), physical NIC load, or source MAC hash
PowerFlex management controller 1.0 | BE_dvSwitch | Trunk | 10/25 | 103,106,113,151-154 (153 and 154 optional) | Originating virtual port (recommended), physical NIC load, or source MAC hash
PowerFlex management controller 2.0 | FE_dvSwitch | Trunk | 10/25 | 104,105,140,150 | Originating virtual port (recommended), physical NIC load, or source MAC hash
PowerFlex management controller 2.0 | BE_dvSwitch | Trunk | 10/25 | 103,141,142,143,151-154 (153 and 154 optional) | Originating virtual port (recommended), physical NIC load, or source MAC hash
PowerFlex management controller 1.0 or 2.0 | oob_dvSwitch | Access | 1/10 | 101 | NA
PowerFlex compute-only nodes | Cust_dvSwitch | Trunk | 10/25/100 | 104-106 | Originating virtual port (recommended), physical NIC load, or source MAC hash
PowerFlex compute-only nodes | Flex_dvSwitch | Trunk | 10/25/100 | 151-154 (153 and 154 optional) | Originating virtual port (recommended), physical NIC load, or source MAC hash
PowerFlex hyperconverged nodes | Cust_dvSwitch | Trunk | 10/25/100 | 104-106,150 | Originating virtual port (recommended), physical NIC load, or source MAC hash
PowerFlex hyperconverged nodes | Flex_dvSwitch | Trunk | 10/25/100 | 151-154 (153 and 154 optional) | Originating virtual port (recommended), physical NIC load, or source MAC hash
PowerFlex storage-only nodes (Option 1) | Bond0 | Trunk | 10/25/100 | 150,151,153,161 (153 optional) | Mode0-RR, Mode1-Active backup, or Mode6-Adaptive LB (recommended)
PowerFlex storage-only nodes (Option 1) | Bond1 | Trunk | 10/25/100 | 152,154,162 (154 optional) | Mode0-RR, Mode1-Active backup, or Mode6-Adaptive LB (recommended)
PowerFlex storage-only nodes (Option 2) | Per NIC VLAN | Trunk | 10/25/100 | 151,152,153,154; bonded (150,161,162) | Mode0-RR, Mode1-Active backup, or Mode6-Adaptive LB (recommended)
Aggregation | NA | 1900 | 100 | All VLANs as specified in the VLAN mapping section | NA

Configuring the Cisco Nexus access aggregation network

Configure port channels

This section is applicable for compute-only, hyperconverged, storage-only and controller nodes. Use this section if the "configuration data" section specified the interface type as port-channel / Port-channel with LACP.

interface port-channel <port-channel-number>
  description "Port Channel to <node>"
  switchport trunk allowed vlan add <vlan-list>
  spanning-tree port type edge trunk
  spanning-tree bpduguard enable
  spanning-tree guard root
  switchport mode trunk
  no lacp suspend-individual
  lacp vpc-convergence   # only for LACP-based network
  speed <speed>
  vpc <port-channel-number>
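For instance, with illustrative values filled in (port channel 37, 25 GbE node ports, and the management VLANs used elsewhere in this guide), the template might resolve to:

interface port-channel37
  description "Port Channel to node1"
  switchport trunk allowed vlan add 104,105,150
  spanning-tree port type edge trunk
  spanning-tree bpduguard enable
  spanning-tree guard root
  switchport mode trunk
  no lacp suspend-individual
  lacp vpc-convergence
  speed 25000
  vpc 37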

Interface configuration

If you are using a 25G controller, type interface breakout module 1 port <port-range> map 25g-4x to split 100G ports into 25G for 9364C-GX and 9336C-FX2 devices.

This section is applicable for compute-only, hyperconverged, storage-only and controller nodes. Use this section if the "configuration data" section specified the interface type as port-channel / Port-channel with LACP.

interface <interface>
  description "Connected to <node>"
  channel-group <port-channel-number> mode <active|on>
  no shutdown
  # applicable for v1 logical network SO node data interfaces

Use this section if the "configuration data" section specified the interface type as access.

interface <interface>
  switchport mode access
  switchport access vlan <vlan>
  spanning-tree port type edge
  spanning-tree bpduguard enable
  spanning-tree guard root
  speed <speed>

Use this section if the "configuration data" section specified the interface type as Trunk.

interface <interface>
  switchport mode trunk
  switchport trunk allowed vlan <vlan-list>
  spanning-tree port type edge
  spanning-tree bpduguard enable
  spanning-tree guard root
  speed <speed>

Configuring the Dell EMC PowerSwitch access switches

Configure port channels

This section is applicable for compute-only, hyperconverged, storage-only, or management nodes. Use this section if the "configuration data" section specified the interface type as port-channel / port-channel with LACP.

interface port-channel <port-channel-number>
  description "Port Channel to <node>"
  switchport trunk allowed vlan <vlan-list>
  spanning-tree port type edge
  spanning-tree bpduguard enable
  spanning-tree guard root
  switchport mode trunk
  lacp fallback enable   # applicable only for port-channel with LACP
  speed <speed>
  vlt-port-channel <port-channel-number>

Interface configuration

This section is applicable for compute-only, hyperconverged, storage-only, or management nodes. Use this section if the "configuration data" section specified the interface type as port-channel.

interface <interface>
  description "Connected to <node>"
  channel-group <port-channel-number> mode <active|on>
  no shutdown

Use this section if the "configuration data" section specified the interface type as access.

interface <interface>   # applicable only for access interface
  switchport mode access
  switchport access vlan <vlan>
  spanning-tree port type edge
  spanning-tree bpduguard enable
  spanning-tree guard root
  speed <speed>

Use this section if the "configuration data" section specified the interface type as Trunk.

interface <interface>
  switchport mode trunk
  switchport trunk allowed vlan <vlan-list>
  spanning-tree port type edge
  spanning-tree bpduguard enable
  spanning-tree guard root
  speed <speed>

Using an embedded operating system-based jump server

Depending on when the system was built, it will use either an embedded operating system-based jump server or a Windows-based jump server. The tools available for each jump server accomplish the same tasks. The procedures in this guide use the Windows-based jump server. If you are using a system with an embedded operating system-based jump server, refer to this topic for which tools to use instead.

A Windows-based jump server configuration uses the following tools:
- WinSCP for secure copies
- PuTTY for SSH access
- Remote Desktop (RDP) for remote login

An embedded operating system-based jump server configuration uses the following tools:
- SCP for secure copies
- SSH for login through secure shell
- VNC for remote login
- FileZilla for secure FTP (interactive SCP is not supported)
- Browsers, for example Chrome and Firefox

The following table lists Windows-based tools and the equivalent embedded operating system-based tool location:

Windows-based tool Embedded operating system-based tool

WinSCP SCP (from a terminal or console window)

D:\ /shares/

SSH (PuTTY) SSH (from a terminal or console window)

RDP VNC

PowerShell (Windows command terminal) bash (from a terminal or console window)
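For example, a file that a Windows-based procedure copies with WinSCP can be copied from a terminal on the embedded operating system jump server with scp (the hostname and file name are illustrative):

scp ./upgrade-bundle.zip admin@jump-server.example.com:/shares/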

Jump server

The PowerFlex appliance management environment may include a jump server used to complete routine maintenance and troubleshooting. Remote access is provided using VNC (GUI) and SSH, which is always on. The jump server has an integrated configuration for various file sharing services, which can be enabled and disabled as needed. The enable and disable service scripts are located on the desktop.

The VM installation is relatively minimal, but also includes Xorg and KDE (a graphical desktop environment). A nonroot account (admin) is provided for use. The admin account has full administrator escalation privileges (sudo), which must be used to perform some tasks (the account password is required). All yum repositories are disabled or nonexistent, to prevent inadvertent or ad hoc updates from being applied.

NOTE: Most maintenance, management, and orchestration operations are still intended to be performed using PowerFlex Manager.


Jump server tools

The jump server is a standard embedded operating system based on a CentOS 7 installation, running Xorg and KDE for a desktop environment. Current versions of tftp-server, nfs-server, httpd (Apache), samba (CIFS), and vsftpd (FTP) are installed for use as needed. Desktop icons for starting and stopping each service have been prepared to make daemon and firewall control relatively simple. Versions of the Firefox and Google Chrome browsers are also installed, and command-line SCP is available to transfer files to and from the jump server.

Jump server service scripts

Each service has a bash script for starting and stopping the associated daemon. The use of sudo is built into each script. The scripts test whether the target daemon is (or is not) running, then perform the set tasks (start or stop the service, open or close firewall rules). After running successfully, the scripts sleep (persist) for 10 seconds on screen, then exit the running script shell.

The scripts are located in the ~admin/service-scripts directory; these scripts are also run by the desktop icons.
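As an illustration only (the installed scripts may differ in detail), such a script for the NFS service might look like the following, assuming systemd and firewalld as on CentOS 7:

#!/bin/bash
# start-nfs.sh - illustrative sketch of a jump server service script
if systemctl is-active --quiet nfs-server; then
    echo "nfs-server is already running"
else
    sudo systemctl start nfs-server          # start the daemon
    sudo firewall-cmd --add-service=nfs      # open the firewall for this session
    echo "nfs-server started and firewall opened"
fi
sleep 10    # persist on screen for 10 seconds, then exit the script shell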

Jump server access

You can access the jump server in several ways:
- A network-based graphical login is provided using VNC (tigervnc-server); a vncviewer client is needed to access this service.
- OpenSSH provides command-line or text-based access. Any SSH client can access the VM using this method.
- The VMware vSphere client can be used to access the VM using either a graphical or a text-based console.

Virtual network computing (VNC)

tigervnc-server is installed and running as the administration user on ports 5901 and 5902. To access this network-based graphical login, download the vncviewer binary. Downloads for MacOS X, Linux, and Windows are available. Select the appropriate package for your environment.

Once installed, run the viewer binary and enter the hostname or the IP address of the VNC server, appending 5901 or 5902 to connect to the appropriate port. This stage of the authentication is completed using vncserver directly. Unless X509 certificates are configured and installed on both ends for identification purposes, the client reports that the connection is insecure. This alarm is related to server identity only, as the connection itself is encrypted (similar to Microsoft Remote Desktop).

A separate (encrypted) password protects the VNC; only the first eight characters are significant. The Xorg or KDE screen is configured to lock after 15 minutes of inactivity and requires the account password to regain access.
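For example, assuming the jump server is reachable as jump-server.example.com, you might connect to the first display with:

vncviewer jump-server.example.com:5901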

OpenSSH

An OpenSSH server is listening on the default port (22/tcp). Non-root connections are permitted, and any client capable of handling the cipher suites that are presented can connect without issue. SSH client selection and configuration are beyond the scope of this guide.

VMware web console

VMware vSphere client is the integrated console connection method present within the vCSA. The running VM allows both admin and root access from the console. Use Ctrl + Alt + F2 to switch to a virtual text-based login screen.

File sharing services

NFS, CIFS, and HTTPD all use the same share or document root: /shares. This is a Logical Volume Manager (LVM) volume that is mounted as a separate disk device. FTP is restricted to user accounts, and the administrator has the shares directory as a bind mount underneath the user's home directory. The main features of the file sharing services are:


- FTP uses the user account password for access.
- CIFS relies on a separate smbpassword db.
- NFS and HTTP are not secured by a password.
- TFTP is UDP-based and not secure. The other end must allow UDP packets on port 69 in order to retrieve any files.
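For example, a client host might consume these shares as follows (the hostname and paths are illustrative):

# Mount the NFS share
sudo mount -t nfs jump-server.example.com:/shares /mnt/jump-shares
# Retrieve a file over HTTP
curl -O http://jump-server.example.com/somefile.iso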

Security and administration

A nonroot user account (admin) is provided for regular use. The account has access to full sudo (root privilege) escalation, and a bind mount for the large secondary drive (/shares) where most data should be stored. If the password for the admin account is changed, it is recommended that you also change the vnc and smb passwords (as the admin user, run vncpasswd and/or smbpasswd to go through a password change cycle).

The server has SELinux enabled in enforcing mode, and only provides two publicly available service ports (22/tcp and 5901/tcp). Other services are only permitted through the firewall when their associated service scripts are run.

Jump server updates

The new CentOS-based embedded operating system jump server is an RCM/Intelligent Catalog object and has patch releases associated with it. These patches are applied using the same process as for the embedded operating system SVMs. The patch clusters are part of an RCM payload when one is present.

The yum update model has been disabled where possible (repositories are removed or disabled).

Install the embedded operating system-based iDRAC tools

Perform this procedure to install the iDRAC tools on an embedded operating system-based jump server.

Steps

1. Locate the embedded operating system-based iDRAC tools (the latest Linux version) and installation instructions on the Dell Technologies Support site.

2. Run the following command on the embedded operating system-based jump box to create a specific symlink to satisfy SSL requirements:

sudo ln -s /usr/lib64/libssl.so.10 /usr/lib64/libssl.so

When the symlink is in place, RACADM tools will function as expected.
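For example, a quick check that the RACADM tools can reach a node iDRAC might look like the following (the IP address and credentials are illustrative):

racadm -r 192.168.101.21 -u root -p 'password' getsysinfo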

Converting the Windows jump VM to the embedded operating system jump VM

Use this procedure to convert the Windows jump VM to the embedded operating system jump VM.

Steps

1. Obtain the updated embedded OS image from the IC software repository.

2. Deploy the embedded jump VM and assign a valid IP address with internet connectivity. A valid DNS entry must be defined. The embedded OS jump VM will replace the existing Windows server.

3. Run df -h to verify that there is enough available free space on the /shares partition of the embedded jump VM to download the RPM packages and create the ZIP file. At least 15 GB is recommended.


4. Run uname -a to determine the embedded operating system version and verify the Linux kernel version by reviewing the output and the values in the file (/etc/centos-release).

5. Run cat /etc/centos-release to verify the embedded operating system version.

Installing the offline repository

Use this procedure to install an offline repository.

Steps

1. Create a directory in the /shares volume called Centos-RPM, type: sudo mkdir /shares/Centos-RPM.

2. Copy the repository update ZIP file to the /tmp directory of the embedded operating system VM using WinSCP or similar.

3. Extract the contents of the repository update ZIP file to the /shares/Centos-RPM directory, type: sudo unzip /tmp/repofilename.zip -d /shares/Centos-RPM.

4. Create and modify a new repository file in the /etc/yum.repos.d directory, type: sudo vi /etc/yum.repos.d/centos.rpm.repo. In this example, the file that is created is /etc/yum.repos.d/centos.rpm.repo; a sample of its contents is shown after these steps.

5. Clean the yum cache, type: sudo yum clean all.

6. Verify access to the new repository, type: sudo yum repolist.

7. Deploy the updates from the repository, type: sudo yum update. When prompted, answer y.

8. When the process is complete, reboot the system, type: reboot.

9. Once the system reboot has completed, verify the kernel version, type: uname -a.

10. Verify the embedded operating system version, type: cat /etc/centos-release.

11. Remove the RPM files, type: sudo rm -f -r /shares/Centos-RPM.

12. Remove the repository index file, type: sudo rm /etc/yum.repos.d/centos.rpm.repo.

13. Clean yum cache, type sudo yum clean all.
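The repository file created in step 4 might contain entries similar to the following (a minimal sketch; adjust baseurl to match the extracted directory layout):

[centos-rpm]
name=CentOS local RPM repository
baseurl=file:///shares/Centos-RPM
enabled=1
gpgcheck=0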

Verifying connectivity between Storage Data Server (SDS) and Storage Data Client (SDC)

Use this procedure to ping the Storage Data Server (SDS) from the Storage Data Client (SDC).

Steps

1. Open an SSH session with a VMware ESXi host using PuTTY or a similar SSH client.

2. Log in to the host using root.

3. Type vmkping to ping each SDC, using the following command options:


Ping command Description

-s Specifies the packet size, and the number of data bytes sent. (The packet size is 8972 because the IP header is 20 bytes and the ICMP header is 8 bytes.)

-d Prohibits fragmentation.

-I Specifies the outgoing VMkernel interface.

4. Repeat from each VMware ESXi host, using the same command options.

NOTE: The following command requires the vmk number to reference the port group. For a standard build, flex-data3- (if required) is vmk2 and flex-data4- (if required) is vmk3.

For example:

[root@node7:~] vmkping -d -s 8972 -I vmk3 192.168.176.6
8980 bytes from 192.168.176.6: icmp_seq=0 ttl=64 time=0.191 ms

Verifying connectivity between Storage Data Server (SDS) and PowerFlex Gateway

Use this procedure to ping the SDS and PowerFlex Gateway from the SDS.

Steps

1. Open an SSH session with an SDS host using PuTTY or a similar SSH client.

2. Log in to the host using root.

3. Ping each SDS and the PowerFlex Gateway using a 9000-byte packet (MTU) without fragmentation on the SDS-to-SDS data networks.

4. Repeat for each SDS host.

5. Repeat for the PowerFlex Gateway.

In the ping command example below:

Ping command Description

-s Specifies the packet size and the number of data bytes to be sent. (The packet size is 8972 because the IP header is 20 bytes and the ICMP header is 8 bytes.)

-M do Prohibits fragmentation

For example:

[root@node4 ~]# ping -s 8972 -M do 192.168.152.102
8980 bytes from 192.168.152.102: icmp_seq=1 ttl=64 time=0.299 ms


Checking the maximum transmission unit on all switches and servers

Maximum transmission unit (MTU) is the largest physical packet size, measured in bytes, that a network can transmit. Any messages larger than the MTU are divided into smaller packets before transmission.

Checking the maximum transmission unit on the access switch

Use this procedure to check the maximum transmission unit on either the Dell EMC PowerSwitch switch or the Cisco Nexus switch.

Steps

1. From the switch CLI, log in to the switch you want to check.

2. Check each interface for its MTU configuration.

Dell EMC PowerSwitch:

NOTE: port-channel 100 is used as an example in the following.

For the Dell EMC S5048F PowerSwitch switch, type the following:

R1-BDC-TOR-A#show interface port-channel 1 | grep MTU
MTU 9216 bytes, IP MTU 9184 bytes

For the Dell EMC S5224F PowerSwitch switch, type the following:

S5224F#show interfaces port-channel 100 | grep MTU
MTU 9216 bytes, IP MTU 9198 bytes

For the Dell EMC S4148F PowerSwitch switch, type the following:

S4148F#show interfaces port-channel 100 | grep MTU
MTU 9216 bytes, IP MTU 9198 bytes

Cisco Nexus:

Cisco_Access-A# show interface port-channel 100 | grep MTU
MTU 9216 bytes, BW 1000000 Kbit, DLY 10 usec

Checking the maximum transmission unit on a VMkernel port

Use this procedure to check maximum transmission unit on a VMkernel port.

Steps

1. In VMware vSphere Client, navigate to the VMware ESXi host.

2. Click the Configure tab, and click Networking.

3. Select VMkernel adapters.

4. Select the VMkernel adapter from the table.


5. Click Edit.

6. Verify the MTU setting is set to 9000.
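Alternatively, if you have shell access to the VMware ESXi host, the esxcfg-vmknic -l command lists each VMkernel adapter along with its MTU, for example (the hostname is illustrative):

[root@node1:~] esxcfg-vmknic -l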

Checking the maximum transmission unit on all port groups or ports

Use this procedure for checking the maximum transmission unit on all port groups or ports.

Steps

1. Log in to VMware vCenter web interface.

2. On the menu, click Home.

3. From the navigation pane, click Networking.

4. Select the virtual switch that you want to check.

5. Click the Configure tab.

6. In the navigation pane, select Settings > Properties.

7. In the Properties window, under Advanced, verify the MTU setting is set to 9000.
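As an alternative to the vSphere client, the MTU of each distributed switch can also be listed from a VMware ESXi host shell, for example (the hostname is illustrative):

[root@node1:~] esxcli network vswitch dvs vmware list | grep -E "Name|MTU"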

Adding a network to the deployed service using PowerFlex Manager

Use this procedure to add a network to the deployed service using PowerFlex Manager.

Steps

1. Log in to PowerFlex Manager.

2. From the menu, click Services.

3. Select a service for which you want to add a network and in the right pane, click View Details.

4. Under Resource Action, from the Add Resources list, click Add Network. The Add Network window is displayed. All used resources and networks are displayed under Resource Name and Networks.

5. From the Available Networks list, select the network, and click Add.

The selected network is displayed under Network Name. You can define a new network by clicking Define a new network and selecting the check box to configure Static IP Ranges.

Name | Description | Network Type | VLAN ID | Gateway | Primary DNS | Starting IP address | Ending IP address
app10 | Customer Applications_1 | General Purpose LAN | 10 | 192.168.10.254 | 192.168.200.101 | 192.168.10.1 | 192.168.10.10

6. Select Port Group from the Select Port Group list.

NOTE: Select New Port group to create a port group for the newly defined network.

7. Click Save.

It may take about 15 minutes for PowerFlex Manager to complete the actions of adding the VLAN to the access switches and the VMware ESXi cluster.

NOTE: PowerFlex Manager supports scaling up to 400 general-purpose LAN networks.


Add a network to a service

You can add an available network to a service, or choose to define a new network for a configuration that was initially deployed outside of PowerFlex Manager. You cannot remove an added network using PowerFlex Manager.

About this task

Before you can add a network to a service, define the network.

You can add a static route to allow nodes to communicate across different networks. The static route can also be used to support replication in storage-only and hyperconverged services.

Prerequisites

Ensure that a new VLAN is created on any switches that need access to that VLAN and is added to any management cluster server-facing ports. The VLAN is then added to any northbound trunks to other switches with which it must communicate.

Steps

1. Log in to PowerFlex Manager.

2. On the menu bar, click Services.

3. Select a service for which you want to add a network and in the right pane, click View Details.

4. Under Resource Action, from the Add Resources list, click Add Network. The Add Network window is displayed. All used resources and networks are displayed under Resource Name and Networks.

5. Click Add Additional Network to add an additional network:

a. From the Available Networks list, select the network, and click Add. The selected network is displayed under Network Name. You can define a new network by clicking Define a New Network.

b. Select Port Group from the Select Port Group list.
c. Click Save.

6. Click Add Additional Static Route to add an additional static route:

a. Click Add New Static Route.
b. Select a Source Network.

The source network must be a PowerFlex data network or a replication network.

c. Select a Destination Network.

The destination network must be a PowerFlex data network or a replication network.

d. Type the IP address for the Gateway.
e. Click Save.

Add a VLAN to an access switch connected to a PowerFlex appliance cluster

Configure an access switch that is connected to a PowerFlex appliance cluster.

About this task

The following commands are an example of how to add VLAN 10 to the uplink port channel 100. You must perform these commands on both access switches.

Steps

On the command prompt, type the following:


Dell EMC PowerSwitch:

Dell#configure
Dell(conf)#interface vlan 10
Dell(conf-if-vl-10)#tagged Port-channel 100
Dell(conf-if-vl-10)#no shutdown
Dell(conf-if-vl-10)#end
Dell#copy running-config startup-config

Cisco Nexus:

Cisco_Access-A# configure
Cisco_Access-A(config)# vlan 10
Cisco_Access-A(config-vlan)# exit
Cisco_Access-A(config)# interface port-channel 100
Cisco_Access-A(config-if)# switchport trunk allowed vlan add 10
Cisco_Access-A(config-if)# end
Cisco_Access-A# copy running-config startup-config

Verifying a VLAN configuration

Verify the VLAN as part of adding a VLAN to the production network.

Steps

1. Create a VM on PowerFlex compute-only node or PowerFlex hyperconverged node.

2. Assign the newly created distributed port group to the VM.

3. Configure an IP address, mask, and gateway on the VM that corresponds to the new VLAN.

4. Ping the gateway from the VM (see the example after these steps).

5. After you have successfully pinged the gateway from the VM, delete the VM.
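For example, for the app10 network defined earlier, the gateway ping in step 4 might look like this (the address is illustrative):

ping 192.168.10.254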

Gather logs from the network switch for troubleshooting

Generate logs to troubleshoot your network switch.

Steps

1. Open an SSH session with the Cisco Nexus switch using PuTTY or a similar SSH client.

2. Log in with admin or other credentials with privileges and type show tech-support.

3. Enable session logging. If using PuTTY, right-click the title bar and go to Change Settings > Session > Logging.

4. Select All session output.

5. Type a log file name and click Apply.

6. In the switch CLI, type the following:

Dell EMC PowerSwitch:

show tech-support
show process cpu

Cisco Nexus:

show tech-support details | no-more
show tech-support vpc | no-more
show process cpu history | no-more


Customer switch port configuration examples

If PowerFlex Manager is deploying a template with partial network automation, you must configure the access switches manually before deployment. This section explains how to configure link aggregation control protocol (LACP) if you are using your own access switches.

Each brand of switch is configured differently; see the vendor documentation for the correct commands. Because of the number of switch vendors available, it is not possible to provide configurations for each switch. However, four configuration examples are provided, covering the following switches:
- Cisco Nexus 93180YC-EX
- Dell EMC PowerSwitch S5248
- Dell EMC PowerSwitch S5048
- Dell EMC PowerSwitch S5296F
- Arista 7280-C

The VLANs within the examples are represented as follows:
- 105 - flex-node-mgmt-
- 106 - flex-vmotion-
- 150 - flex-stor-mgmt-
- 151 - flex-data1-
- 152 - flex-data2-
- 153 - flex-data3- (if required)
- 154 - flex-data4- (if required)
- 1000 - flex-prod

Cisco Nexus 93180YC-EX switch configuration example

The following example pertains to PowerFlex hyperconverged node or VMware ESXi PowerFlex compute-only node connectivity. Port examples for management are as follows (the same configuration applies to Switch A and Switch B):

Port channel:

interface port-channel37
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 104,106,150
  spanning-tree port type edge trunk
  spanning-tree bpduguard enable
  spanning-tree guard root
  speed 25000
  mtu 9216
  no lacp suspend-individual
  vpc 37

Ethernet port:

interface Ethernet1/15
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 104,106,150
  spanning-tree port type edge trunk
  spanning-tree bpduguard enable
  spanning-tree guard root
  speed 25000
  mtu 9216
  channel-group 37 mode active
  no shutdown

The following provides port examples for PowerFlex (the same configuration applies to Switch A and Switch B):

Port channel:

interface port-channel38
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 151,152,153,154
  spanning-tree port type edge trunk
  spanning-tree bpduguard enable
  spanning-tree guard root
  speed 25000
  mtu 9216
  no lacp suspend-individual
  vpc 38

Ethernet port:

interface Ethernet1/16
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 151,152,153,154
  spanning-tree port type edge trunk
  spanning-tree bpduguard enable
  spanning-tree guard root
  speed 25000
  mtu 9216
  channel-group 38 mode active
  no shutdown

The following example pertains to PowerFlex storage-only node connectivity.

NOTE: In a non-two layer deployment, the data1 network and data3 network (if required) are defined on port 1 along with PowerFlex management. Port 2 will have the data2 network and data4 network (if required).

Port examples for management are as follows (the same configuration applies to Switch A and Switch B):

Port channel:

interface port-channel37
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 150,151,153,1000
  spanning-tree port type edge trunk
  spanning-tree bpduguard enable
  spanning-tree guard root
  speed 25000
  mtu 9216
  no lacp suspend-individual
  vpc 37

Ethernet port:

interface Ethernet1/15
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 150,151,153,1000
  spanning-tree port type edge trunk
  spanning-tree bpduguard enable
  spanning-tree guard root
  speed 25000
  mtu 9216
  channel-group 37 mode active
  no shutdown

The following provides port examples for PowerFlex (the same configuration applies to Switch A and Switch B):

Port channel:

interface port-channel38
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 152,154
  spanning-tree port type edge trunk
  spanning-tree bpduguard enable
  spanning-tree guard root
  speed 25000
  mtu 9216
  no lacp suspend-individual
  vpc 38

Ethernet port:

interface Ethernet1/16
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 152,154
  spanning-tree port type edge trunk
  spanning-tree bpduguard enable
  spanning-tree guard root
  speed 25000
  mtu 9216
  channel-group 38 mode active
  no shutdown

The following example pertains to PowerFlex storage-only node connectivity with SDS and SDC traffic.

NOTE: In a two layer deployment, the SDC only data1 (SDC traffic only) network and SDC only data2 (SDC traffic only) network are defined on port 1 along with PowerFlex management. Port 2 will have SDS only data1 (SDS traffic only) and SDS only data2 (SDS traffic only).

Port examples for management are as follows (the same configuration applies to Switch A and Switch B):

Port channel:

interface port-channel37
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 150,151,152,1000
  spanning-tree port type edge trunk
  spanning-tree bpduguard enable
  spanning-tree guard root
  speed 25000
  mtu 9216
  no lacp suspend-individual
  vpc 37

Ethernet port:

interface Ethernet1/15
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 150,151,152,1000
  spanning-tree port type edge trunk
  spanning-tree bpduguard enable
  spanning-tree guard root
  speed 25000
  mtu 9216
  channel-group 37 mode active
  no shutdown

The data networks for these ports are used for SDS traffic only. The following provides port examples for PowerFlex (the same configuration applies to Switch A and Switch B):

Port channel:

interface port-channel38
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 153,154
  spanning-tree port type edge trunk
  spanning-tree bpduguard enable
  spanning-tree guard root
  speed 25000
  mtu 9216
  no lacp suspend-individual
  vpc 38

Ethernet port:

interface Ethernet1/16
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 153,154
  spanning-tree port type edge trunk
  spanning-tree bpduguard enable
  spanning-tree guard root
  speed 25000
  mtu 9216
  channel-group 38 mode active
  no shutdown

Dell PowerSwitch S5248 and Dell PowerSwitch S5296F-ON switch configuration example

The following example pertains to PowerFlex hyperconverged node or VMware ESXi PowerFlex compute-only node connectivity. Port examples for management are as follows (the same configuration applies to Switch A and Switch B):

Port channel:

interface port-channel117
  no shutdown
  switchport mode trunk
  switchport trunk allowed vlan 104,105,150
  spanning-tree bpduguard enable
  spanning-tree guard root
  spanning-tree port type edge
  lacp fallback enable
  mtu 9216
  vlt-port-channel 117

Ethernet port:

interface Ethernet1/1/7
  switchport mode trunk
  switchport trunk allowed vlan 104,105,150
  spanning-tree bpduguard enable
  spanning-tree guard root
  spanning-tree port type edge
  no switchport
  mtu 9216
  speed 25000
  flowcontrol receive off
  channel-group 117 mode active
  no shutdown

The following provides port examples for PowerFlex (the same configuration applies to Switch A and Switch B):

Port channel:

interface port-channel118
  no shutdown
  switchport mode trunk
  switchport trunk allowed vlan 150,152,154
  spanning-tree bpduguard enable
  spanning-tree guard root
  spanning-tree port type edge
  lacp fallback enable
  mtu 9216
  vlt-port-channel 118

Ethernet port:

interface Ethernet1/1/8
  switchport mode trunk
  switchport trunk allowed vlan 150,152,154
  spanning-tree bpduguard enable
  spanning-tree guard root
  spanning-tree port type edge
  no switchport
  mtu 9216
  speed 25000
  flowcontrol receive off
  channel-group 118 mode active
  no shutdown

The following example pertains to PowerFlex storage-only node connectivity.

NOTE: In a two layer deployment, the data1 network and data3 network (if required) are defined on port 1 along with PowerFlex management. Port 2 will have the data2 network and data4 network (if required).

Port examples for management are as follows (the same configuration applies to Switch A and Switch B):

Port channel:

interface port-channel117
  no shutdown
  switchport mode trunk
  switchport trunk allowed vlan 150,151,152
  spanning-tree bpduguard enable
  spanning-tree guard root
  spanning-tree port type edge
  lacp fallback enable
  mtu 9216
  vlt-port-channel 117

Ethernet port:

interface Ethernet1/1/7
  switchport mode trunk
  switchport trunk allowed vlan 150,151,152
  spanning-tree bpduguard enable
  spanning-tree guard root
  spanning-tree port type edge
  no switchport
  mtu 9216
  speed 25000
  flowcontrol receive off
  channel-group 117 mode active
  no shutdown

The following provides port examples for PowerFlex (the same configuration applies to Switch A and Switch B):

Port channel:

interface port-channel118
  no shutdown
  switchport mode trunk
  switchport trunk allowed vlan 150,153,154
  spanning-tree bpduguard enable
  spanning-tree guard root
  spanning-tree port type edge
  lacp fallback enable
  mtu 9216
  vlt-port-channel 118

Ethernet port:

interface Ethernet1/1/8
  switchport mode trunk
  switchport trunk allowed vlan 150,153,154
  spanning-tree bpduguard enable
  spanning-tree guard root
  spanning-tree port type edge
  no switchport
  mtu 9216
  speed 25000
  flowcontrol receive off
  channel-group 118 mode active
  no shutdown

The following example pertains to PowerFlex storage-only node connectivity with SDS and SDC traffic only.

NOTE: In a two layer deployment, the SDC only data1 (SDC traffic only) network and SDC only data2 (SDC traffic only) network are defined on port 1 along with PowerFlex management. Port 2 will have SDS only data1 (SDS traffic only) and SDS only data2 (SDS traffic only).

Port examples for management are as follows (the same configuration applies to Switch A and Switch B):

Port channel:

interface port-channel117
  no shutdown
  switchport mode trunk
  switchport trunk allowed vlan 150,151,152
  spanning-tree bpduguard enable
  spanning-tree guard root
  spanning-tree port type edge
  lacp fallback enable
  mtu 9216
  vlt-port-channel 117

Ethernet port:

interface Ethernet1/1/7
  switchport mode trunk
  switchport trunk allowed vlan 150,151,152
  spanning-tree bpduguard enable
  spanning-tree guard root
  spanning-tree port type edge
  no switchport
  mtu 9216
  speed 25000
  flowcontrol receive off
  channel-group 117 mode active
  no shutdown

36 Administering the network

The following provides port examples for PowerFlex (the same configuration applies to Switch A and Switch B):

Port channel:

interface port-channel117
  no shutdown
  switchport mode trunk
  switchport trunk allowed vlan 150,153,154
  spanning-tree bpduguard enable
  spanning-tree guard root
  spanning-tree port type edge
  lacp fallback enable
  mtu 9216
  vlt-port-channel 117

Ethernet port:

interface Ethernet1/1/7
  switchport mode trunk
  switchport trunk allowed vlan 150,153,154
  spanning-tree bpduguard enable
  spanning-tree guard root
  spanning-tree port type edge
  no switchport
  mtu 9216
  speed 25000
  flowcontrol receive off
  channel-group 117 mode active
  no shutdown

Dell PowerSwitch S5048 switch configuration example

The following example pertains to PowerFlex hyperconverged node or VMware ESXi PowerFlex compute-only node connectivity. Port examples for management are as follows (the same configuration applies to Switch A and Switch B):

Port channel:

interface Port-channel 37
  no ip address
  mtu 9216
  portmode hybrid
  switchport
  spanning-tree mstp edge-port
  spanning-tree rstp edge-port
  spanning-tree 0 portfast
  spanning-tree pvst edge-port
  vlt-peer-lag port-channel 37
  no shutdown

LACP:

lacp ungroup member-independent port-channel 37

Ethernet port:

interface twentyFiveGigE 1/35
  no ip address
  mtu 9216
  port-channel-protocol LACP
    port-channel 37 mode active
  no shutdown

Add VLANs:

interface vlan 105
  tagged Port-channel 37
interface vlan 106
  tagged Port-channel 37
interface vlan 150
  tagged Port-channel 37
interface vlan 1000
  tagged Port-channel 37

The following provides port examples for PowerFlex (the same configuration applies to Switch A and Switch B):

Port channel:

interface Port-channel 38
  no ip address
  mtu 9216
  portmode hybrid
  switchport
  spanning-tree mstp edge-port
  spanning-tree rstp edge-port
  spanning-tree 0 portfast
  spanning-tree pvst edge-port
  vlt-peer-lag port-channel 38
  no shutdown

LACP:

lacp ungroup member-independent port-channel 38

Ethernet port:

interface twentyFiveGigE 1/35
  no ip address
  mtu 9216
  port-channel-protocol LACP
    port-channel 38 mode active
  no shutdown

Add VLANs:

interface vlan 151
  tagged Port-channel 38
interface vlan 152
  tagged Port-channel 38
interface vlan 153
  tagged Port-channel 38
interface vlan 154
  tagged Port-channel 38

The following example pertains to PowerFlex storage-only node connectivity.

NOTE: In a non-two layer deployment, the data1 network and data3 network (if required) are defined on port 1 along with PowerFlex management. Port 2 will have the data2 network and data4 network (if required).

Port examples for management are as follows (the same configuration applies to Switch A and Switch B):

Port channel:

interface Port-channel 37
  no ip address
  mtu 9216
  portmode hybrid
  switchport
  spanning-tree mstp edge-port
  spanning-tree rstp edge-port
  spanning-tree 0 portfast
  spanning-tree pvst edge-port
  vlt-peer-lag port-channel 37
  no shutdown

LACP:

lacp ungroup member-independent port-channel 37

Ethernet port:

interface twentyFiveGigE 1/35
  no ip address
  mtu 9216
  port-channel-protocol LACP
    port-channel 37 mode active
  no shutdown

Add VLANs:

interface vlan 150
  tagged Port-channel 37
interface vlan 151
  tagged Port-channel 37
interface vlan 153
  tagged Port-channel 37
interface vlan 1000
  tagged Port-channel 37

The following provides port examples for PowerFlex (the same configuration applies to Switch A and Switch B):

Port channel:

interface Port-channel 37
  no ip address
  mtu 9216
  portmode hybrid
  switchport
  spanning-tree mstp edge-port
  spanning-tree rstp edge-port
  spanning-tree 0 portfast
  spanning-tree pvst edge-port
  vlt-peer-lag port-channel 37
  no shutdown

LACP:

lacp ungroup member-independent port-channel 37

Ethernet port:

interface twentyFiveGigE 1/35
  no ip address
  mtu 9216
  port-channel-protocol LACP
    port-channel 37 mode active
  no shutdown

Add VLANs:

interface vlan 152
  tagged Port-channel 37
interface vlan 154
  tagged Port-channel 37

The following example pertains to PowerFlex storage-only node connectivity with SDS and SDC traffic only.

NOTE: In a two layer deployment, the SDC only data1 (SDC traffic only) network and SDC only data2 (SDC traffic only) network are defined on port 1 along with PowerFlex management. Port 2 will have SDS only data1 (SDS traffic only) and SDS only data2 (SDS traffic only).

Port examples for management are as follows (the same configuration applies to Switch A and Switch B):

Port channel:

interface Port-channel 37
  no ip address
  mtu 9216
  portmode hybrid
  switchport
  spanning-tree mstp edge-port
  spanning-tree rstp edge-port
  spanning-tree 0 portfast
  spanning-tree pvst edge-port
  vlt-peer-lag port-channel 37
  no shutdown

LACP:

lacp ungroup member-independent port-channel 37

Ethernet port:

interface twentyFiveGigE 1/35
  no ip address
  mtu 9216
  port-channel-protocol LACP
    port-channel 37 mode active
  no shutdown

Add VLANs:

interface vlan 150
  tagged Port-channel 37
interface vlan 151
  tagged Port-channel 37
interface vlan 152
  tagged Port-channel 37
interface vlan 1000
  tagged Port-channel 37

Port examples for PowerFlex are as follows; the data networks for these ports are used for SDS traffic only. The same configuration applies to Switch A and Switch B:

Port channel:

interface Port-channel 37
 no ip address
 mtu 9216
 portmode hybrid
 switchport
 spanning-tree mstp edge-port
 spanning-tree rstp edge-port
 spanning-tree 0 portfast
 spanning-tree pvst edge-port
 vlt-peer-lag port-channel 37
 no shutdown

LACP:

lacp ungroup member-independent port-channel 37

Ethernet port:

interface twentyFiveGigE 1/35
 no ip address
 mtu 9216
 port-channel-protocol LACP
 port-channel 37 mode active
 no shutdown

Add VLANs:

interface vlan 153
 tagged Port-channel 37
interface vlan 154
 tagged Port-channel 37

Arista 7280-C switch configuration example

The following example pertains to PowerFlex hyperconverged node or VMware ESXi PowerFlex compute-only node connectivity. Port examples for management are as follows. The same configuration applies to Switch A and Switch B:

Port channel:

interface Port-Channel104
 switchport mode trunk
 switchport trunk allowed vlan 105,150,1000
 port-channel lacp fallback individual
 port-channel lacp fallback timeout 5
 mlag 104
 spanning-tree portfast
 no spanning-tree portfast auto
 spanning-tree bpduguard enable

Ethernet port:

interface Ethernet35/4
 switchport mode trunk
 switchport trunk allowed vlan 105,150,1000
 spanning-tree portfast
 no spanning-tree portfast auto
 spanning-tree bpduguard enable
 mtu 9216
 speed forced 25gfull
 channel-group 104 mode active

Port examples for PowerFlex are as follows. The same configuration applies to Switch A and Switch B:

Port channel:

interface Port-Channel105
 switchport mode trunk
 switchport trunk allowed vlan 151,152,153,154
 port-channel lacp fallback individual
 port-channel lacp fallback timeout 5
 mlag 104
 spanning-tree portfast
 no spanning-tree portfast auto
 spanning-tree bpduguard enable

Ethernet port:

interface Ethernet35/5
 switchport mode trunk
 switchport trunk allowed vlan 151,152,153,154
 spanning-tree portfast
 no spanning-tree portfast auto
 spanning-tree bpduguard enable
 mtu 9216
 speed forced 25gfull
 channel-group 105 mode active

The following example pertains to PowerFlex storage-only node connectivity.

NOTE: In a non-two-layer deployment, the data1 network and data3 network (if required) are defined on port 1 along with PowerFlex management. Port 2 has the data2 network and data4 network (if required).

Port examples for management are as follows. The same configuration applies to Switch A and Switch B:

Port channel:

interface Port-Channel104
 switchport mode trunk
 switchport trunk allowed vlan 150,151,153,1000
 port-channel lacp fallback individual
 port-channel lacp fallback timeout 5
 mlag 104
 spanning-tree portfast
 no spanning-tree portfast auto
 spanning-tree bpduguard enable

Ethernet port:

interface Ethernet35/4
 switchport mode trunk
 switchport trunk allowed vlan 150,151,153,1000
 spanning-tree portfast
 no spanning-tree portfast auto
 spanning-tree bpduguard enable
 mtu 9216
 speed forced 25gfull
 channel-group 104 mode active

Port examples for PowerFlex are as follows. The same configuration applies to Switch A and Switch B:

Port channel:

interface Port-Channel105
 switchport mode trunk
 switchport trunk allowed vlan 152,154
 port-channel lacp fallback individual
 port-channel lacp fallback timeout 5
 mlag 104
 spanning-tree portfast
 no spanning-tree portfast auto
 spanning-tree bpduguard enable

Ethernet port:

interface Ethernet35/5
 switchport mode trunk
 switchport trunk allowed vlan 152,154
 spanning-tree portfast
 no spanning-tree portfast auto
 spanning-tree bpduguard enable
 mtu 9216
 speed forced 25gfull
 channel-group 105 mode active

The following example pertains to PowerFlex storage-only node connectivity.

NOTE: In a non-two-layer deployment, the SDC only data1 network and the SDC only data2 network are defined on port 1 along with PowerFlex management. Port 2 has the SDS only data1 and SDS only data2 networks.

Port examples for management are as follows. The same configuration applies to Switch A and Switch B:

Port channel:

interface Port-Channel104
 switchport mode trunk
 switchport trunk allowed vlan 150,151,152,1000
 port-channel lacp fallback individual
 port-channel lacp fallback timeout 5
 mlag 104
 spanning-tree portfast
 no spanning-tree portfast auto
 spanning-tree bpduguard enable

Ethernet port:

interface Ethernet35/4
 switchport mode trunk
 switchport trunk allowed vlan 150,151,152,1000
 spanning-tree portfast
 no spanning-tree portfast auto
 spanning-tree bpduguard enable
 mtu 9216
 speed forced 25gfull
 channel-group 104 mode active

Port examples for PowerFlex are as follows; the data networks for these ports are used for SDS traffic only. The same configuration applies to Switch A and Switch B:

Port channel:

interface Port-Channel105
 switchport mode trunk
 switchport trunk allowed vlan 153,154
 port-channel lacp fallback individual
 port-channel lacp fallback timeout 5
 mlag 104
 spanning-tree portfast
 no spanning-tree portfast auto
 spanning-tree bpduguard enable

Ethernet port:

interface Ethernet35/5
 switchport mode trunk
 switchport trunk allowed vlan 153,154
 spanning-tree portfast
 no spanning-tree portfast auto
 spanning-tree bpduguard enable
 mtu 9216
 speed forced 25gfull
 channel-group 105 mode active

Upgrade the Dell EMC network

There are several options for upgrading OS10. It can be installed manually using the onie-nos-install command while in ONIE, or upgraded from the OS10# command prompt using the image install and boot system commands.

Several protocols are supported for transferring OS10 files over the network to the switch: TFTP, FTP, HTTP, and SCP. You can also copy and install the OS from a local file using a USB device or the IMAGE directory on the switch. The sketch below illustrates the supported URL forms.
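As a hedged illustration only (the server address, credentials, and file path are hypothetical placeholders, and the exact file name depends on your OS10 release), the image download command accepts any of the supported transports:

OS10# image download tftp://10.1.1.1/PKGS_OS10-Enterprise-10.x.x-installer-x86_64.bin
OS10# image download ftp://userid:password@10.1.1.1/PKGS_OS10-Enterprise-10.x.x-installer-x86_64.bin
OS10# image download scp://userid:password@10.1.1.1/PKGS_OS10-Enterprise-10.x.x-installer-x86_64.bin
OS10# image download usb://PKGS_OS10-Enterprise-10.x.x-installer-x86_64.bin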

Prerequisites for access switch upgrades:

- Ensure that the primary MDM does not reside in the same PowerFlex rack as the switch being upgraded. The primary MDM usually resides on R01S01; when upgrading the access switches for rack 1, the secondary MDM (which usually resides on R02S01) is promoted to primary. Move the primary MDM back after the switch upgrade completes.
- To switch ownership between the primary and secondary MDM, type the following on the primary MDM: scli --switch_mdm_ownership --new_master_mdm_id <new primary MDM ID>
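For context, a minimal pre-upgrade session might look like the following sketch. The password and MDM ID are hypothetical placeholders, and the commands are run on the current primary MDM:

scli --login --username admin --password <MDM_password>
scli --query_cluster                                   (identify the current primary and secondary MDM IDs)
scli --switch_mdm_ownership --new_master_mdm_id <secondary MDM ID>
scli --query_cluster                                   (verify that ownership has moved)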

Check the current version of switch operating system

Use this procedure to check the current version of the operating system on the switch.

Steps

1. Log in to the Dell OS10 CLI as admin, for example using PuTTY, and enter the admin password.

2. Type show version to check the operating system version.

The screen displays an output similar to the following:

Dell EMC Networking OS10-Enterprise
Copyright (c) 1999-2019 by Dell Inc. All Rights Reserved.
OS Version: 10.x.x
Build Version: 10.x.x
Build Time: 2019-03-01T10:51:29-0800
System Type: Z9100-ON
Architecture: x86_64
Up Time: 1 day 00:02:03

The OS Version field shows the operating system version, which should be 10.x.x or later.

Save the license file and the configuration

Save the license file and the configuration before proceeding with the upgrade.

Steps

1. In the Dell-OS CLI, type show license status to get the license path.

The screen displays an output similar to the following:

my-switch# show license status

System Information
---------------------------------------------------------
Vendor Name     : DELL
Product Name    : Z9100-ON
Hardware Version: A03
Platform Name   : x86_64-dell_z9100_c2538-r0
PPID            : xxxxxxxxxxxxxx
Service Tag     : xxxxxxxx

License Details
----------------
Software        : OS10-Enterprise
Version         : 10.x.x
License Type    : PERPETUAL
License Duration: Unlimited
License Status  : Active
*License location: xxxxxxx/xxxxxxx/xx.xx
---------------------------------------------------------

You can find the license path in the license location row.

2. Type show interface mgmt to get the switch address (IP address configured by DHCP) and hostname.

The screen displays an output similar to the following:

my-switch# show interface mgmt
Management 1/1/1 is up, line protocol is up
Hardware is Dell EMC Eth, address is 34:17:eb:42:ed:00
 Current address is 34:17:eb:42:ed:00
Interface index is 9
*Internet address is 5.5.169.236/20
Mode of IPv4 Address Assignment: DHCP
Interface IPv6 oper status: Disabled
Virtual-IP is not set
Virtual-IP IPv6 address is not set
MTU 1532 bytes, IP MTU 1500 bytes
LineSpeed 1000M
Flowcontrol rx off tx off
ARP type: ARPA, ARP Timeout: 60
Last clearing of "show interface" counters: 1 weeks 01:31:32
Queuing strategy: fifo
Input statistics:
 Input 43661179 packets, 6924867854 bytes, 0 multicast
 Received 0 errors, 0 discarded
Output statistics:
 Output 24878 packets, 2163269 bytes, 0 multicast
 Output 0 errors, Output 0 invalid protocol

Record the IP address for the switch.

3. Determine if the IP addresses and management route are set manually by running the following commands:

my-switch# show running-configuration interface mgmt 1/1/1
!
interface mgmt1/1/1
 no shutdown
 no ip address dhcp
 ip address 5.5.169.236/20
 no ipv6 enable

If the MGMT IP address is not set to DHCP, record the MGMT IP address.

my-switch# show running-configuration management-route
!
management route 0.0.0.0/0 managementethernet

If the hostname is not set to OS10, record the hostname.
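Before proceeding, it is prudent to copy the configuration off the switch. A minimal sketch, assuming a reachable TFTP server at 10.1.1.1 (a hypothetical address; substitute your own):

OS10# write memory
OS10# copy running-configuration tftp://10.1.1.1/switch-config.txt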


Download Dell EMC Networking OS10

Use this procedure to download Dell EMC Networking OS10 and license for a new switch.

About this task

OS10 runs with a perpetual license on a device with OS10 factory-loaded. The license file is installed on the switch. If the license becomes corrupted or wiped out, you must download the license from DDL under the purchaser's account and re-install it.

Steps

1. Sign in to Dell EMC Networking OS10 using your account credentials.

2. Locate your entitlement ID and order number sent by email, and select the product name.

3. On the Product page, the Assigned To: field on the Product tab is blank. Click Key Available for Download.

4. Enter the device service tag you purchased the OS10 Enterprise Edition for in the Bind to: and Re-enter ID: fields. This step binds the software entitlement to the service tag of the switch.

5. Select whether to receive the license key by email or download it to your local device.

6. Click Submit to download the License.zip file.

7. Select the Available Downloads tab.

8. Select the OS10 Enterprise Edition release to download, and click Download.

9. Read the Dell End User License Agreement. Scroll to the end of the agreement, and click Yes, I agree.

10. Select how to download the software files, and click Download Now.

11. After you download the OS10 Enterprise Edition image, unpack the TAR file and store the OS10 binary image on a local server. To unpack the TAR file, follow these guidelines:

Extract the OS10 binary file from the TAR file. For example, to unpack a TAR file on a Linux server or from the ONIE prompt, enter:

tar -xf tar_filename

12. Some Windows unzip applications insert extra carriage returns (CR) or line feeds (LF) when extracting the contents of a .TAR file. The additional CRs or LFs may corrupt the downloaded OS10 binary image. Turn this option off if you use a Windows-based tool to untar an OS10 binary file.

13. Generate a checksum for the downloaded OS10 binary image by running the md5sum command on the image file. Ensure that the generated checksum matches the checksum extracted from the TAR file.

md5sum image_filename

14. Copy the OS10 image file to a local server that the switch can reach (see the sketch below).
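A minimal sketch of staging the image from a Linux workstation; the destination host, user, and directory are hypothetical placeholders:

scp PKGS_OS10-Enterprise-10.x.x-installer-x86_64.bin admin@10.1.1.1:/tftpboot/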

Connect to a switch

Use this procedure to connect to the switch.

Steps

Use one of the following methods to verify that the system is properly connected before starting installation:

Connect a serial cable and terminal emulator to the console serial port on the switch. The serial port settings can be found in the Installation Guide for your particular switch model. For example, the S4100-ON serial port settings are 115200, 8 data bits, and no parity.

Connect the management port to the network if you prefer downloading the image over the network. Use the Installation Guide for your particular switch model for more information about setting up the management port.

NOTE: Keep regular backups of switch configurations somewhere off the switch, particularly before performing operating system updates or changes.


Configure a USB drive for OS10 installation

Use this procedure to prepare and mount the USB drive on the switch.

About this task

This process is required for both automatic and manual installations using USB.

NOTE: Optionally, you can use SCP for this procedure. Download the image from scp://userid:password@<host>:/filepath/PKGS_OS10-Enterprise-10.5.1.0EX.110stretch-installer-x86_64.bin

Steps

1. Extract the .TAR file, and copy the contents to a FAT32 formatted USB flash drive.

2. Plug the USB flash drive into the USB port on the switch.

3. From the ONIE menu, select ONIE: Install OS, then press the Ctrl-C key sequence to cancel.

4. From the ONIE:/ # command prompt, enter the following commands:

ONIE:/ # onie-discovery-stop   (optional; stops the scrolling discovery messages)
ONIE:/ # mkdir /mnt/usb
ONIE:/ # cd /mnt
ONIE:/mnt # fdisk -l           (shows the device the USB is using)

The switch's storage devices and partitions are displayed.

5. Use the device or partition that is formatted FAT32 (example: /dev/sdb1) in the next command.

ONIE:/mnt # mount -t vfat /dev/sdb1 /mnt/usb
ONIE:/mnt # mount -a

The USB is now available for installing OS10 onto the switch.

Manual install using USB

A USB device can be used to manually upgrade OS10.

Steps

1. Use the output of the following command to copy/paste the .BIN filename into the install command below.

ONIE:/ # ls /mnt/usb

2. Change to the USB directory.

ONIE:/ # cd /mnt/usb

3. Manually install using the onie-nos-install command. If installing version 10.x.x, the command is:

ONIE:/mnt/usb # onie-nos-install PKGS_OS10-Enterprise-10.x.x-installer-x86_64.bin

The OS10 update takes approximately 10 minutes to complete and boots to the OS10 login: prompt when done. Several messages display during the installation process.

4. Log in to OS10 and run the show version command to verify that the update was successful.

OS10# show version
Dell EMC Networking OS10 Enterprise
Copyright (c) 1999-2018 by Dell Inc. All Rights Reserved.
OS Version: 10.x.x
Build Version: 10.x.x
Build Time: 2018-03-30T18:05:41-0700
System Type: S4148F-ON
Architecture: x86_64
Up Time: 00:02:14

Upgrade OS10 image from existing OS10 install

Use this procedure to upgrade the OS10 image from an existing OS10.

Steps

1. Once you download the OS10 Enterprise Edition image, extract the .TAR file.

Some Windows unzip applications insert extra carriage returns (CR) or line feeds (LF) when they extract the contents of a .tar file, which may corrupt the downloaded OS10 binary image. Turn OFF this option if you use a Windows-based tool to untar an OS10 binary file.

For example, in WinRAR under the Advanced Options tab de-select the TAR file smart CR/LF conversion feature.

2. Save the current configuration on the switch, and back up the startup configuration:

OS10# write memory
  Writes the current configuration to startup-config.

OS10# copy running-configuration tftp://10.1.1.1/switch-config.txt
  Backs up the configuration to a TFTP server.

3. Format a USB drive as VFAT/FAT32 and add the .BIN file, or move the .BIN file to a TFTP/FTP server.

Use the native Windows tool, or equivalent, to format as VFAT/FAT32. Starting with OS10.4, OS10 auto-mounts a new USB key after a reboot.

4. Save the .BIN file in EXEC mode, and view the status. Update the file name to match your firmware version.

WARNING: Do NOT use the .TAR file.

The image download command only downloads the software image - it does not install the software on your device. The image install command installs the downloaded image to the standby partition.

OS10# image download usb://PKGS_OS10-Enterprise-10.version-info-here.BIN
  Update via USB.

-OR-

OS10# image download ftp://userid:passwd@hostip:/filepath/PKGS_OS10-Enterprise-10.version-info-here.BIN
  Update via FTP.

-OR-

OS10# image download scp://userid:password@<host>:/filepath/PKGS_OS10-Enterprise-10.5.1.0EX.110stretch-installer-x86_64.bin
  Update via SCP.

OS10# show image status
  Monitor and wait for State Detail to change from Progress to Complete. For example:

  OS10# show image status
  ==================================================
  File Transfer State: idle
  --------------------------------------------------
   State Detail: Completed: No error
   Task Start: 2020-02-11T17:03:54Z
   Task End: 2020-02-11T17:04:05Z
   Transfer Progress: 100 %
   Transfer Bytes: 563709117 bytes
   File Size: 563709117 bytes
   Transfer Rate: 49829 kbps
  Installation State: idle
  --------------------------------------------------
   State Detail: No install information available
   Task Start: 0000-00-00T00:00:00Z
   Task End: 0000-00-00T00:00:00Z

OS10# dir image
  View the downloaded image. For example:

  OS10# dir image
  Directory contents for folder: image
  Date (modified)       Size (bytes)  Name
  --------------------- ------------  --------------------------------------------------
  2020-02-11T17:34:12Z  563709117     PKGS_OS10-Enterprise-10.5.1.0EX.110stretch-installer-x86_64.bin

5. Install the software image in EXEC mode.

OS10# image install image://PKGS_OS10-Enterprise-10.version-info-here.bin
  Installs the OS10 image to the standby partition.

NOTE: On older versions of OS10, the image install command appears frozen, without showing the current status. Duplicating the SSH/Telnet session allows you to run show image status to see the current status.

6. View the status of the current software install in EXEC mode. If the install status shows FAILED, check that the .TAR file was extracted correctly.

OS10# show image status
  Verifies that the OS was updated.

7. Change the next boot partition to the standby partition in EXEC mode.

OS10# boot system standby
  Changes the next boot partition.

8. Check whether the next boot partition has changed to standby in EXEC mode.

OS10# show boot detail
  Verifies that the next boot partition holds the new firmware.

9. Reload the new software image in EXEC mode.

OS10# reload
  Reboots the switch.

10. After the reload, verify the firmware is updated:

OS10# show version
Dell EMC Networking OS10 Enterprise
Copyright (c) 1999-2018 by Dell Inc. All Rights Reserved.
OS Version: 10.4.0E(X2)
Build Version: 10.4.0E(X2.22)
Build Time: 2018-01-26T17:46:11-0800
System Type: S4148F-ON
Architecture: x86_64
Up Time: 02:50:18

Upgrade from non-OS10 operating system to OS10 using ONIE

Use this procedure to upgrade from a non-OS10 operating system.

Prerequisites

This task is used for upgrading the switch to OS10 if:
- The switch is running Dell EMC Networking OS9, and the desire is to update to OS10.
- The switch is running a non-Dell operating system, and the desire is to update to OS10.

NOTE: This task is not recommended for updating a switch from one version of OS10 to another version of OS10, since it erases any existing configuration.

Steps

1. Reload the switch (if OS9 is loaded, use the reload command), and press the ESC key before the counter reaches zero.

Grub 1.99~rc1 (Dell EMC)
Built by root at gbbdev-maa-01 on Sat_Nov_25_12:54:44_UTC_2017
S4000 Boot Flash Label 3.21.2.9 NetBoot Label 3.21.2.9
Press Esc to stop autoboot ... 3..2..1..

Grub 1.99~rc1 (Dell EMC)
Built by root at gbbdev-maa-01 on Sat_Nov_25_12:54:44_UTC_2017
S4000 Boot Flash Label 3.21.2.9 NetBoot Label 3.21.2.9
+----------------------------------------------+
|Dell EMC Networking                           |
|Dell EMC Networking OS-Boot Line Interface    |
|DELL EMC DIAG                                 |
|ONIE                                          |
|                                              |
|                                              |
+----------------------------------------------+

2. Use the down arrow key to select ONIE and press ENTER.

3. An ONIE-enabled device boots with preloaded diagnostics and ONIE software and displays the following menu:

+----------------------------------------------+
|*ONIE: Install OS                             |
| ONIE: Rescue                                 |
| ONIE: Uninstall OS                           |
| ONIE: Update ONIE                            |
| ONIE: Embed ONIE                             |
| ONIE: Diag ONIE                              |
+----------------------------------------------+

Only the ONIE: Uninstall OS and the ONIE: Install OS selections are used in this upgrade example. The table below describes the actions for each menu option.


ONIE: Install OS
- Used for downloading and installing an operating system from a URL
- Boots to the ONIE prompt
- Installs an OS10 image using the automatic discovery process
- Deletes any previously installed image and configuration
- Starts ONIE with the ONIE Discovery Service (factory default boot)

ONIE: Rescue
- Boots to the ONIE prompt
- Allows for manual installation of an OS10 image
- Allows for updating ONIE
- Useful for running diagnostics manually

ONIE: Uninstall OS
- Does not delete ONIE or diagnostics
- Deletes the configuration
- Erases any installed operating system
- Restores to factory defaults

ONIE: Update ONIE
- Updates to a new ONIE version
- Used for downloading and updating ONIE from a URL
- Used for updating the ONIE image using the automatic discovery process

ONIE: Embed ONIE
- Formats an empty disk and installs ONIE
- Erases any installed operating system

ONIE: Diag ONIE
- Runs system diagnostics

Install OS from ONIE

Steps

To install the OS from within ONIE instead, see How to Install Dell Networking FTOS on Dell Open Networking (ON) Switches.

Update ONIE using an existing ONIE and TFTP

Use this procedure to update ONIE with an existing installation using a TFTP server.

Steps

1. Download the ONIE software from support.dell.com and place it on the TFTP server.

NOTE: In this example, the file name is onie-updater-x86_64-dellemc_s5200_c3538-r0.3.40.1.1-6.

2. Reload the switch.

3. From the GRUB menu, select ONIE and then ONIE: Update ONIE.

4. From the CLI, enter onie-self-update tftp://<TFTP server IP>/onie-updater-x86_64-dellemc_s5200_c3538-r0.3.40.1.1-6. Once ONIE is updated, the switch reboots into the active operating system partition.

5. Enter ONIE:/ # onie-sysinfo -v to verify the version.


DIAG OS installation or update

Use this procedure to install or update the DIAG OS. This procedure is a firmware upgrade.

About this task

Load or update the DIAG-OS (the diag installer image) using the onie-nos-install command. The DIAG-OS installer runs in two modes: Update mode or Install mode.

- In Update mode, the DIAG-OS updates the existing DIAG-OS and boots back to ONIE.
- In Install mode, the DIAG-OS erases the existing DIAG-OS and loads the new DIAG-OS.

NOTE: If you have a recovery USB plugged into your system, remove it before using the onie-nos-install command.

NOTE: Before you begin, go to www.dell.com/support and download the diagnostic package.

To activate the DIAG installer:

1. Boot into ONIE: Rescue mode.
2. Enter ONIE:/ # touch /tmp/diag_os_install_mode to activate the DIAG installer.
3. Run the installer file.
4. Enter ONIE:/ # onie-nos-install tftp://<server>/diag-installer-x86_64-dellemc_<platform>_c2338-r0-<version>-<date>.bin to ensure that the file location is accessible over the network.

Steps

1. Enter the onie-discovery-stop command to stop ONIE Discovery mode.

2. Assign an IP address to the management interface and verify the network connectivity.

ONIE:/ # ifconfig eth0 xx.xx.xx.xx netmask xxx.xxx.x.x up
ONIE:/ # ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 34:17:EB:05:B4:00
          inet addr:xx.xx.xx.xx  Bcast:xx.xx.xxx.xxx  Mask:xxx.xxx.x.x
          inet6 addr: fe80::3617:ebff:fe05:b400/64  Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:43 errors:0 dropped:0 overruns:0 frame:0
          TX packets:31 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:5118 (4.9 KiB)  TX bytes:7104 (6.9 KiB)
          Memory:dff40000-dff5ffff

3. Upgrade the DIAG installer.

NOTE: In Install mode, the DIAG-OS installation removes any existing NOS and DIAG-OS partition. If you do not create the file /tmp/diag_os_install_mode, the DIAG-OS installs in Upgrade mode. In this case, the installation process does NOT touch any existing NOS.

ONIE:/ # onie-nos-install tftp://<server>/<filepath>/diag-installer-x86_64-dell_<platform>_c2538-r0-2016-08-12.bin
discover: installer mode detected. Stopping: discover... done.
Info: Fetching tftp://<server>/users/<user>/<platform>/diag-installer-x86_64-dell_<platform>_c2538-r0-2016-08-12.bin ...
users/<user>/<platform> 100% |*******************************| 154M 0:00:00 ETA
ONIE: Executing installer: tftp://<server>/users/<user>/<platform>/diag-installer-x86_64-dell_<platform>_c2538-r0-2016-08-12.bin
Ignoring Verifying image checksum ... OK.
cur_dir / archive_path /var/tmp/installer tmp_dir /tmp/tmp.qlnVIY
Preparing image archive ...sed -e '1,/^exit_marker$/d' /var/tmp/installer | tar xf - OK.
Diag-OS Installer: platform: x86_64-dell_<platform>_c2538-r0

EDA-DIAG Partiton not found.
Diag OS Installer Mode : INSTALL

Creating new diag-os partition /dev/sda3 ...
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot.
The operation has completed successfully.

EDA-DIAG dev is /dev/sda3
mke2fs 1.42.13 (17-May-2015)
Creating filesystem with 262144 4k blocks and 65536 inodes
Filesystem UUID: 63fc156f-b6c1-415d-9676-ae4478704c5a
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376

Allocating group tables: done
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

Created filesystem on /dev/sda3 with label EDA-DIAG

Mounted /dev/sda3 on /tmp/tmp.BBEygm

Preparing /dev/sda3 EDA-DIAG for rootfs install
untaring into /tmp/tmp.BBEygm

rootfs copy done
Success: Support tarball created: /tmp/tmp.BBEygm/onie-support.tar.bz2

Updating Grub Cfg /dev/sda3 EDA-DIAG

ONIE uefi_uuid 69AD-9CBF

INSTALLER DONE...
Removing /tmp/tmp.qlnVIY
ONIE: NOS install successful: tftp://<server>/users/<user>/<platform>/diag-installer-x86_64-dell_<platform>_c2538-r0-2016-08-12.bin
ONIE: Rebooting...
ONIE:/ # discover: installer mode detected.
Stopping: discover...start-stop-daemon: warning: killing process 2605: No such process done.
Stopping: dropbear ssh daemon... done.
Stopping: telnetd... done.
Stopping: syslogd... done.
Info: Unmounting kernel filesystems
umount: can't umount /: Invalid argument
The system is going down NOW!
Sent SIGTERM to all processes
Sent SIGKILL to
sd 4:0:0:0: [sda] Synchronizing SCSI cache
reboot: Restarting system
reboot: machine restart

BIOS Boot Selector for <platform>
Primary BIOS Version x.xx.x.x_MRC48

SMF Version: MSS x.x.x, FPGA x.x
Last POR=0x11, Reset Cause=0x55

POST Configuration
  CPU Signature 406D8
  CPU FamilyID=6, Model=4D, SteppingId=8, Processor=0
  Microcode Revision 125
  Platform ID: 0x10041A43
  PMG_CST_CFG_CTL: 0x40006
  BBL_CR_CTL3: 0x7E2801FF
  Misc EN: 0x840081
  Gen PM Con1: 0x203808
  Therm Status: 0x884C0000
  POST Control=0xEA000100, Status=0xE6000000

BIOS initializations...

CPGC Memtest ................................ PASS
CPGC Memtest ................................ PASS

Booting `EDA-DIAG'

Loading DIAG-OS ...
[ 3.786758] dummy-irq: no IRQ given. Use irq=N
[ 3.792812] esas2r: driver will not be loaded because no ATTO esas2r devices were found
[ 3.818171] mtdoops: mtd device (mtddev=name/number) must be supplied
[ 4.880285] i8042: No controller found
[ 4.890134] fmc_write_eeprom fake-design-for-testing-f001: fmc_write_eeprom: no busid passed, refusing all cards
[ 4.901699] intel_rapl: driver does not support CPU family 6 model 77

Debian GNU/Linux 8 dell-diag-os ttyS1

dell-diag-os login: root
Password:
Linux dell-diag-os x.xx.xx #1 SMP Fri Aug 12 05:14:52 PDT 2016 x86_64

The programs included with the Debian GNU/Linux system are free software; the exact distribution terms for each program are described in the individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent permitted by applicable law.
Diag OS version <platform>_DIAG_OS_x.xx.x.x
Build date/time Fri Aug 12 05:23:56 PDT 2016
Build server netlogin-eqx-03
Build by <name>
Kernel Info:
Linux x.xx.xx #1 SMP Fri Aug 12 05:14:52 PDT 2016 x86_64 GNU/Linux
Debian GNU/Linux 8 \n \l

Done Initializing Ethernet
root@dell-diag-os:~#

4. Start diagnostics.

To start the ONIE diagnostics, use the EDA-DIAG option from the GRUB menu.

a. Boot into the EDA Diags.
b. Log in as root. The password is calvin.
c. Install the EDA-DIAG tools package.

Next steps

NOTE: To return to your networking operating software, enter the reboot command.

Install or upgrade EDA-DIAG tools

To install or upgrade the DIAGs in the DIAG OS, use the dpkg --install dn-diags-<platform>-DiagOS-<version>-<date>.deb command.

Steps

1. Download the diagnostic tools from support.dell.com and unzip the package.

2. Using SCP, copy the dn-diags-<platform>-DiagOS-<version>-<date>.deb file to the switch. For example:

root@dellemc-diag-os:~# ls dn-diags-S4100-DiagOS-3.33.4.1-6-2018-01-21.deb

3. Run the dpkg command to upgrade the tools.

root@dell-diag-os:~# dpkg --install dn-diags-<platform>-DiagOS-<version>-<date>.deb
Selecting previously unselected package dn-diags-<platform>.deb.
(Reading database ... 18873 files and directories currently installed.)
Preparing to unpack dn-diags-<platform>-DiagOS-<version>-<date>.deb ...
Unpacking dn-diags-<platform>.deb (1.10) ...
Setting up dn-diags-<platform>.deb (1.10) ...
root@dell-diag-os:~#

Firmware requirements

CAUTION: The minimum required ONIE version is 3.40.1.1-6. Before using the ONIE firmware updater, if your switch has an ONIE version lower than 3.40.1.1-6, you must first upgrade your switch to this minimum requirement.

NOTE: Boot the switch and choose ONIE: Rescue mode to perform firmware upgrade.

To upgrade the ONIE version, use the onie-discovery-stop command, as shown:

# onie-discovery-stop
# onie-self-update onie-updater-x86_64-dellemc_<platform>_c3538-r0.3.40.1.1-6

After you upgrade your switch to the minimum ONIE version requirement, you can use the ONIE firmware updater, as shown:

# onie-discovery-stop
# onie-fwpkg add onie-firmware-x86_64-dellemc_<platform>_c3538-r0.3.40.5.1-9.bin
# onie-discovery-start

New in the release

BMC
- Changes the system LED to solid amber when the temperature sensor reaches the critical temperature threshold.
- Changes the system LED to solid amber when a CPU thermal trip event occurs.
- Fixes the fault LED that did not recover to normal after the temperature sensor reading goes low.
- Fixes the PSU and voltage sensors that did not work when the CPU is powered off.
- Fixes the front panel fan LED that did not blink amber when fan 4 failed.

BIOS
- Updates the Code Base to Label 40.
- Updates the Microcode to 0x2E to fix an Intel security issue.
- Sets the system LED in the BIOS at a specific time.
- Gets the BMC IP and displays this information under the BIOS setup.
- Adds the FW version GUID for AMI Afu support.
- Disables unused 10 GbE LAN.
- Disables the preserved NVRAM region during flash BIOS.
- Improves the system so that serial output is still available if the BMC crashes.

SSD
- Updates the SATA image to L18702C.
- Adds invalid count to the Smarttools display list.

CPLD
- Master v00_06: Fixes internal test features. No change to functionality.

NOTE: During a firmware update, if there is an efivars duplicate issue, the BIOS configuration is set to the default, and the efivars duplicate issue is resolved.


Verify Dell switch firmware

To verify the Dell switch firmware, use any of the following commands.

# system "/mnt/onie-boot/onie/tools/bin/onie-fwpkg show-log | grep Firmware | grep version"

2021-07-08 11:04:34 ONIE: Success: Firmware update version: 3.33.1.1-7
2021-07-08 12:31:22 ONIE: Success: Firmware update version: 3.33.5.1-20
2022-01-21 20:48:03 ONIE: Success: Firmware update version: 3.33.5.1-23

Here, 3.33.5.1-23 corresponds to onie-firmware-x86_64-dellemc_s4100_c2338-r0.3.33.5.1-23.bin.

Or

# system "/mnt/onie-boot/onie/tools/bin/onie-fwpkg show-results"

** Firmware update results information:
Name                                                         | Version     | Result  | Date
=============================================================+=============+=========+====================
onie-firmware-x86_64-dellemc_s4100_c2338-r0.3.33.5.1-20.bin  | 3.33.5.1-20 | Success | 2021-07-08 08:31:22
onie-firmware-x86_64-dellemc_s4100_c2338-r0.3.33.5.1-23.bin  | 3.33.5.1-23 | Success | 2022-01-21 15:48:03
onie-updater-x86_64-dellemc_s4100_c2338-r0.3.33.1.1-7        | 3.33.1.1-7  | Success | 2021-07-08 07:04:34
=============================================================+=============+=========+====================

To verify BIOS and CPLD, use the following command:

Switch# show system

Node Id        : 1
MAC            : 50:9a:4c:e2:21:00
Number of MACs : 256
Up Time        : 00:28:17

-- Unit 1 --

Status                     : up
System Identifier          : 1
Down Reason                : user-triggered
Digital Optical Monitoring : disable
System Location LED        : off
Required Type              : S4148T
Current Type               : S4148T
Hardware Revision          : A02
Software Version           : 10.5.2.3
Physical Ports             : 48x10GbE, 2x40GbE, 4x100GbE
BIOS                       : 3.33.0.1-11
System CPLD                : 1.3
Master CPLD                : 1.2

-- Power Supplies --

PSU-ID  Status  Type  AirFlow  Fan  Speed(rpm)  Status
----------------------------------------------------------------
1       up      AC    REVERSE  1    14000       up
2       up      AC    REVERSE  1    13936       up

-- Fan Status --

FanTray  Status  AirFlow  Fan  Speed(rpm)  Status
----------------------------------------------------------------
1        up      REVERSE  1    9637        up
                          2    9614        up
2        up      REVERSE  1    9590        up
                          2    9590        up
3        up      REVERSE  1    9567        up
                          2    9637        up
4        up      REVERSE  1    9590        up
                          2    9567        up

To map the BIOS and CPLD versions to a firmware release, refer to the release notes for the firmware documentation:

- For the S4100 PowerSwitch, see the Dell EMC PowerSwitch S4100-ON Series ONIE Firmware Updater Release Notes.
- For the S5200 PowerSwitch, see the Dell EMC PowerSwitch S5200F-ON Series ONIE Firmware Updater Release Notes.

IP address assignment in ONIE

Prerequisites

By default, DHCP is enabled in ONIE. If your network has DHCP configured, ONIE gets the valid IP address for the management port using DHCP, as shown.

Info: Using eth0 MAC address: xx:xx:xx:xx:xx:xx
Info: Using eth1 MAC address: xx:xx:xx:xx:xx:xx
Info: eth0: Checking link... up.
Info: Trying DHCPv4 on interface: eth0
ONIE: Using DHCPv4 addr: eth0: xx.xx.xxx.xx / xxx.xxx.xxx.x

About this task

You can manually assign an IP address.

Steps

1. Wait for ONIE to complete a DHCP timeout and return to the prompt.

2. Wait for ONIE to assign a random default IP address. This address may not be valid for your network.

3. Enter the ifconfig command to assign a valid IP address.

This command is not persistent. After you reboot, you must reconfigure the IP address.

** Rescue Mode Enabled **
ONIE:/ #
ONIE:/ # ifconfig eth0 xx.xx.xxx.xxx/xx up

NOTE: Since the management IP address is lost, configuration is done from the OS10 mode. Save the configuration before performing any action.

NOTE: Copy the startup configuration to the running configuration, as the configuration may be lost after the upgrade.


Administering the storage

Perform the following procedures to administer the PowerFlex appliance storage.

Observe the following considerations when administering the storage:

- PowerFlex management controller 2.0:
  - PERC 755 RAID controllers are added as a service in lifecycle mode in PowerFlex Manager.
  - HBA 355 controllers are added as a service in managed mode in PowerFlex Manager.
- Lifecycle mode: the service supports health and compliance monitoring, service mode, and non-disruptive upgrades. All other service operations are blocked. Lifecycle mode controls the operations that can be performed for configurations that have limited support.
- Managed mode: the service supports health and compliance monitoring, non-disruptive upgrades, automated resource addition, and automated resource replacement features.
- Using PowerFlex Manager to enter a node into maintenance mode ensures that no more than one host is in maintenance mode at any given time.
- If you make manual changes outside of PowerFlex Manager (for example, using the PowerFlex GUI or scli), you might need to perform some steps within PowerFlex Manager to ensure that the external changes are reflected within the user interface and the environment is kept in a healthy state. See "Managing external changes" in the PowerFlex Manager online help.

PowerFlex management controller datastore and virtual machine details

The following describes which datastores to use:

PowerFlex management controller 1.0:
- Volume name: vsan_datastore
  - Size (GB): all available capacity
  - VMs: all
  - Domain name: N/A
  - Storage pool: N/A

PowerFlex management controller 2.0:
- Volume name: vcsa
  - Size (GB): 3500
  - VMs: pfmc_vcsa
  - Domain name: PFMC
  - Storage pool: PFMC-pool
- Volume name: general
  - Size (GB): 1600
  - VMs: management VMs (for example: management gateway, customer gateway, presentation server, CloudLink, additional VMs)
  - Domain name: PFMC
  - Storage pool: PFMC-pool
- Volume name: pfxm
  - Size (GB): 1000
  - VMs: PowerFlex Manager
  - Domain name: PFMC
  - Storage pool: PFMC-pool

NOTE: For PowerFlex management controller 2.0, verify the capacity before adding additional VMs to the general volume. If there is not enough capacity, expand the volume before proceeding (see the sketch below). For more information on expanding a volume, see the Dell EMC PowerFlex Appliance Administration Guide.
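As a hedged sketch only: one way to expand a PowerFlex volume from the primary MDM CLI is shown below. The exact flags depend on your PowerFlex version, and the password and new size are hypothetical placeholders; verify against your CLI reference before use.

scli --login --username admin --password <MDM_password>
scli --modify_volume_capacity --volume_name general --size_gb 2000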


Determining and switching the MDM

Use this procedure to switch the MDM.

Steps

1. Log in to PowerFlex Manager to determine the primary MDM.

2. To view the details of a service, select the component. Scroll down on the Service Details page; the following information is displayed based on the resource types in the service:

Physical Nodes: View the following information about the nodes that are part of the service:

- Health
- Asset/Service Tag
- iDRAC Management IP
- Hostname
- PowerFlex Mode. The mode for each node is one of the following:
  - Hyperconverged: includes both SDS and SDC components.
  - Storage Only: includes only the SDS component.
  - Compute Only: includes only the SDC component.
- Associated IPs
- MDM Role. The MDM role is the metadata manager role and applies only to those nodes that are part of a PowerFlex cluster. The MDM role is one of the following:
  - Primary: The MDM in the cluster that controls the SDSs and SDCs. The primary MDM contains and updates the MDM repository, the database that stores the SDS configuration and how data is distributed between the SDSs. This repository is constantly replicated to the secondary MDMs, so they can take over with no delay. Every PowerFlex cluster has one primary MDM.
  - Secondary: An MDM in the cluster that is ready to take over the primary MDM role if necessary.
  - Tie Breaker: An MDM whose sole role is to help determine which MDM is the primary.
  - Standby MDM: A standby MDM can be called on to assume the position of a manager MDM when it is promoted to be a cluster member.
  - Standby Tie Breaker: A standby node that is prepared to take over as a tiebreaker.
- Fault Set: A logical group of SDSs within a protection domain that defines, by the way it is grouped, where the copies of data exist.

3. Access the primary MDM:

a. In a hyperconverged deployment, use SSH to connect to the SVM that is acting as primary MDM. b. In a two-layer deployment, connect to the PowerFlex storage-only node that is acting as primary MDM using SSH.

4. From the PowerFlex CLI, type the following (a consolidated session sketch follows this procedure):

a. Type scli --login --username admin --password MDM_password to connect to the source node.
b. Type scli --query_cluster to verify the primary MDM.
c. Type scli --switch_mdm_ownership to switch the primary MDM to the secondary MDM.
d. Type scli --query_cluster to reverify the primary MDM.

5. Connect to the new SVM that is acting as primary MDM using SSH.

a. Type scli --query_all_sds and verify that all servers are connected.

b. Type scli --query_all_sdc and verify that all servers are connected.

6. Run inventory on PowerFlex Manager to update the PowerFlex Gateway with the location of the new primary MDM:


a. On the menu bar, click Resources. b. On the Resources page, click the All Resources tab. c. From the list of resources, click PowerFlex Gateway, and then click Run Inventory.

The resource state changes to Pending. When the inventory is complete, the resource state changes to Available.
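The following consolidated sketch shows steps 4 and 5 as one session. The password and MDM ID are hypothetical placeholders:

scli --login --username admin --password <MDM_password>
scli --query_cluster                                   (note the current primary and secondary MDM IDs)
scli --switch_mdm_ownership --new_master_mdm_id <secondary MDM ID>
scli --query_cluster                                   (confirm that ownership changed)

Then, on the new primary MDM:

scli --query_all_sds                                   (verify that all SDSs are connected)
scli --query_all_sdc                                   (verify that all SDCs are connected)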

Update resource inventory

Use this procedure to run inventory to incorporate external changes that are made to resource data outside of PowerFlex Manager.

About this task

NOTE: Run inventory to incorporate external changes that are made to resource data outside of PowerFlex Manager. After running the inventory to incorporate these changes, you can update the details on any service that must include the new resource data.

Administrators can run the inventory on any resource. Standard users can run the inventory only on resources that are part of a node pool for which they have permission.

Steps

1. Log in to PowerFlex Manager.

2. On the menu bar, click Resources.

3. On the Resources page, click the All Resources tab.

4. From the list of resources, select the check box next to the resources that you want to inventory.

5. Click Run Inventory.

Next steps

To see the PowerFlex Manager logs, go to Settings > Logs to view the start time and end time of the resource inventory operation.

Add volumes to the service

Use this procedure to add volumes to the service in managed mode.

About this task

After the service is deployed, if the volumes are not created for the service, add two volumes for a fully functional cluster.

In PowerFlex Manager, when a PowerFlex hyperconverged node deployment is complete, PowerFlex Manager automatically creates two 16 GB, thin-provisioned volumes named powerflex-service-vol-1 and powerflex-service-vol-2.

For a PowerFlex storage-only node deployment, the service is incomplete; follow the steps below to add the volumes.

For a PowerFlex compute-only node deployment, the service is in lifecycle mode, as there is no information on the protection domain (PD) and storage. The vCLS VMs must be moved using the migration wizard.

If you are using PowerFlex Manager 3.6 or earlier, volumes are added to a service manually. Verify that the machines are in a connected state.

Steps

1. Log in to PowerFlex Manager.

2. On the Services page, click the Add Resources button and choose Add Volumes.

3. When PowerFlex Manager displays the Add Volume wizard, click Add Existing Volumes or Create New Volumes.

NOTE: The Add Existing Volumes option is only available for a hyperconverged service.


4. If you select Add Existing Volumes, select the Volume and provide the Datastore Name Template on the Add Existing Volumes page.

5. If you are creating a new volume for a hyperconverged service, provide the following information:

a. Click Add New Volume.
b. In the Volume Name field, select Create New Volume to create a new volume now, or select Auto generate name when you create multiple volumes.
c. In the New Volume Name field, type the volume name, if you are creating a new volume.
d. In the Datastore Name field, select Create New Datastore to create a new datastore, or select an existing datastore. If you choose a volume that is mapped to a datastore that was created previously in another hyperconverged or compute-only service, you need to select the same datastore that was associated with the volume in the other service.
e. In the New Datastore Name field, type the datastore name, if you are creating a new datastore.
f. In the Storage Pool drop-down, choose the storage pool where the volume will reside.
g. Select the Enable Compression check box to take advantage of the PowerFlex NVDIMM compression feature.
h. In the Volume Size (GB) field, select the size in GB. The minimum size is 8 GB, and the value you specify must be divisible by eight.
i. In the Volume Type field, select thick or thin. A thick volume provides a larger amount of storage in advance, whereas a thin volume provides on-demand storage and faster setup and startup times.
j. In the New Volume Name field, if you selected Auto Generate name, complete the following fields:
   - Volume Name Template: Modify the template based on your volume naming convention.
   - How Many Volumes: Enter the number of volumes to be created.
   - Datastore Name Template: Modify the template based on your datastore naming convention.
   - Storage Pool: Choose the storage pool where the volume will reside.
   - Volume Size (GB): Select the size in GB. The minimum size is 8 GB, and the value you specify must be divisible by eight.
   - Volume Type: Select Thick or Thin.
k. Click Next > Finish.

If you are creating a new volume for a storage-only service, provide the following information:

a. In the Volume Name field, select Create New Volume to create a new volume now.
b. In the New Volume Name field, type the volume name.
c. In the Storage Pool drop-down, choose the storage pool where the volume will reside.
d. Select the Enable Compression check box to take advantage of the PowerFlex NVDIMM compression feature.
e. In the Volume Size (GB) field, select the size in GB. The minimum size is 8 GB, and the value you specify must be divisible by eight.
f. In the Volume Type field, select thick or thin. A thick volume provides a larger amount of storage in advance, whereas a thin volume provides on-demand storage and faster setup and startup times. If you enable compression for the volume, thin is the only option available for Volume Type.
g. In the New Volume Name field, if you selected Auto Generate name, complete the following fields:
   - Volume Name Template: Modify the template based on your volume naming convention.
   - How Many Volumes: Enter the number of volumes to be created.
   - Datastore Name Template: Modify the template based on your datastore naming convention.
   - Storage Pool: Choose the storage pool where the volume will reside.
   - Volume Size (GB): Select the size in GB. The minimum size is 8 GB, and the value you specify must be divisible by eight.
   - Volume Type: Select Thick or Thin.
h. Click Next > Finish.

If you are creating a new volume for a compute-only service, provide the following information:

a. In the Volume Name field, select an existing volume. For a compute-only service, you can only select an existing volume from an existing deployment with a hyperconverged or storage-only service.
b. In the Datastore Name field, select Create New Datastore to create a new datastore, or select an existing datastore. The Datastore Name field is only available for a hyperconverged or compute-only service, as it applies only to services with ESXi. If the volume was originally created in a storage-only service, you must select Create New Datastore to create a new datastore. Alternatively, if the volume was originally created in a hyperconverged service, you must select the datastore that was already mapped to the selected volume in the other service.
c. In the New Datastore Name field, type the datastore name, if you are creating a new datastore.

6. Optionally, click Add volume again to add another volume. Then, provide the required information for the volume.

7. Click Save.

The service moves to the In Progress state, and the new volume icons appear on the Service Details page. After the deployment completes successfully, the new volumes are displayed and indicated by a check mark in the Storage list on the Service Details page. The PowerFlex 3.0.1.2 and older GUI shows the new volumes under the storage pool; in PowerFlex 3.5, new volumes are under Configuration > Volumes. For a storage-only service, the volumes are created but not mapped. For a compute-only or hyperconverged service, the volumes are mapped to SDCs. In the vSphere client, you can see the volumes in the storage section, and also the hosts that are mapped to the volumes once the mappings are in place. On the Resources page, if you select the gateway, it displays the volumes added to the service.

Add volumes to a service in lifecycle mode

Use this procedure to add volumes to a service in lifecycle mode. A worked sketch with sample values follows the procedure.

Steps

1. Type scli --login --username admin to log in to the primary MDM.
2. Type scli --query_protection_domain --protection_domain_name <protection domain name>.
3. Type scli --add_volume --protection_domain_name <protection domain name> --storage_pool_name <storage pool name> --size_gb <size> --volume_name <volume name> --thin_provisioned --dont_use_rmcache to create a new volume.
4. Type scli --query_all_sdc.
5. Type scli --query_all_volumes.
6. Type scli --map_volume_to_sdc --volume_name <volume name> --sdc_id <SDC ID> --allow_multi_map to map volumes to the SDCs.
7. Run an inventory on the PowerFlex management controller 2.0 gateway and VMware vCenter to reflect the new volumes in PowerFlex Manager:

a. Log in to PowerFlex Manager.
b. Click Resources.
c. Click All Resources.
d. From the list of resources, select the check box for the PowerFlex management controller 2.0 gateway and VMware vCenter.
e. Click Run Inventory.
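As a hedged illustration of steps 1 through 6, with hypothetical names and sizes (substitute your own protection domain, storage pool, volume name, and SDC ID):

scli --login --username admin
scli --query_protection_domain --protection_domain_name PD-1
scli --add_volume --protection_domain_name PD-1 --storage_pool_name SP-1 --size_gb 16 --volume_name pfmc-vol-01 --thin_provisioned --dont_use_rmcache
scli --query_all_sdc
scli --query_all_volumes
scli --map_volume_to_sdc --volume_name pfmc-vol-01 --sdc_id <SDC ID> --allow_multi_map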


Adding a PowerFlex appliance node to an existing cluster

Use this procedure to add a node to an existing cluster in managed mode.

Steps

1. Connect the new PowerFlex appliance node's network interface cards (NICs) to the access switches and management switch exactly like the existing nodes.

2. Ensure that the newly connected switch ports are not shut down.

3. Set the IP address of the iDRAC management port, username, password, and SNMP settings to what is expected by PowerFlex Manager.

4. Log in to PowerFlex Manager.

5. In the Services page, click Add Resources and click Add Nodes.

6. In the Duplicate Node wizard:

a. From the Resource to Duplicate list, select a node.

Select a node that is of the same type as the other nodes within the service.

b. In the Number of Instances box, enter the number of node instances that you want to add to the service.

The number of instances is fixed for this action.

c. Click Next.

d. Under PowerFlex Settings, specify the PowerFlex Storage Pool Spare Capacity setting by choosing one of the following options:

i. Recommended Spare Capacity % sets the spare capacity to 1 divided by the sum of the current number of SDSs in the protection domain and the number of nodes that you want to duplicate. For example, if you have three SDSs and you want to add one more node instance, the recommended spare capacity is set to 25 percent, based on the formula 1/4.

ii. Current Spare Capacity % sets the spare capacity to 1 divided by the current number of SDSs in the protection domain. For example, if you currently have three Storage Data Servers (SDSs) in the protection domain, the current spare capacity is set to 34 percent, based on the formula 1/3, rounded up.

e. Under OS Settings, set the Host Name Selection to Auto-Generate, Specify at Deployment Time, or Reverse DNS Lookup.

f. If you choose Specify at Deployment, provide a name for the host in the Host Name field. If you choose Auto- Generate, specify a template for the name in the Host Name Template field.

For an existing service that was not deployed by PowerFlex Manager, the Host Name Selection option is automatically set to Specify at Deployment Time and you must type the hostname.

g. If you are adding a node to a hyperconverged service, specify the Host Name Selection under SVM OS Settings and provide details about the hostname, as you did for the OS Settings.

h. In the IP Source box, provide an IP address. For an existing service that was not deployed with PowerFlex Manager, the default choice is User Entered IP and the IP settings for each network default to Manual Entry. However, you can change the setting to PowerFlex Manager Selected IP.

Under Hardware Settings, the Target Boot Device option is automatically set to Local Flash Storage for Dell EMC PowerFlex for an existing hyperconverged or compute only service that was not deployed by PowerFlex Manager.

i. Under Hardware Settings, in the Node Source box, select Node Pool or Manual Entry.

For an existing service not deployed by PowerFlex Manager, the node source defaults to Manual Entry, but you can change it to Node Pool.

j. In the Node Pool box, select the node pool. Alternatively, if you chose Manual Entry, select the specific node in the Choose Node box.

You can view all user-defined node pools and the global pool. Standard users can see only the pools for which they have permission.

For an existing service not deployed by PowerFlex Manager, the Node Pool defaults to Global.


k. Click Next.

l. Review the Summary page and click Finish.

If the node you are adding has a different type of disk than the base deployment, PowerFlex Manager displays a banner at the top of the Summary page to inform you of the different disk types. You can still proceed with the node expansion; however, your service may have suboptimal performance.

Removing a PowerFlex node for maintenance

Use this procedure to remove a PowerFlex node for maintenance in managed mode.

About this task

For more information, see Data assurance during maintenance.

Steps

1. Log in to PowerFlex Manager.

2. From the menu, click Services.

3. On the Services page, select a service and click View Details.

4. Click Enter Service Mode on the Service Details page.

5. Select one or more nodes on the Node Lists page and click Next. You can only put multiple nodes in service mode simultaneously if all the nodes are in the same fault set.

6. Select one of the following options:
Instant Maintenance Mode enables you to perform short-term maintenance that lasts less than 30 minutes.
Protected Maintenance Mode enables you to perform long-term maintenance that lasts more than 30 minutes.
Evacuate Node from PowerFlex enables you to perform long-term maintenance that lasts more than 30 minutes.

NOTE: Evacuate node is only available for PowerFlex versions prior to 3.5.

7. Click Enter Service Mode.

PowerFlex Manager displays a yellow warning banner at the top of the service page. The Service Mode icon is displayed for the Overall Service Health and for the Resource Health for the selected node.

8. When you are ready to leave service mode, click Service Actions > Exit Service Mode.

Entering and exiting service mode

PowerFlex Manager enables you to put a node in service mode when you must perform maintenance operations on the node. When you put a node in service mode, you can specify whether you are performing short-term maintenance or long-term maintenance work. The option that you use for long-term maintenance depends on the PowerFlex version you are using.

Prerequisites

Before evacuating a node for long-term maintenance work, ensure that you have at least four nodes in the cluster. Also, ensure that you have sufficient storage space on the remaining nodes to evacuate the data from the node that is placed in service mode. If you are using protected maintenance mode (PowerFlex 3.5), the sum of the spare capacity and the free capacity must be greater than the size of the node being put in protected maintenance mode.

About this task

PowerFlex Manager detects when a node is in VMware ESXi or PowerFlex maintenance mode. It automatically places the node in service mode and also ensures that the service itself goes into service mode.

If DAS Cache is installed on a node, or if the node has a VMware NSX-T or NSX-V configuration, PowerFlex Manager does not enable you to enter service mode. PowerFlex Manager also does not enable you to enter service mode if the PowerFlex Gateway used in the service is being updated on the Resources page.

Steps

1. Log in to PowerFlex Manager.


2. On the menu bar, click Services.

3. On the Services page, select a service and click View Details in the right pane.

4. Click Enter Service Mode under Service Actions.

5. Select one or more nodes on the Node Lists page and click Next.

You can only put multiple nodes in service mode simultaneously if all the nodes are in the same fault set.

6. Specify the type of maintenance you want to perform by selecting one of the following options:
Instant Maintenance Mode enables you to perform short-term maintenance that lasts less than 30 minutes. PowerFlex Manager does not migrate the data.
Protected Maintenance Mode enables you to perform maintenance that requires longer than 30 minutes in a safe and protected manner. When you use protected maintenance mode, PowerFlex makes a temporary copy of the data so that the cluster is fully protected from data loss. Protected maintenance mode applies only to hyperconverged and storage-only services.
Evacuate Node from PowerFlex (earlier versions of PowerFlex) enables you to perform long-term maintenance that lasts more than 30 minutes. PowerFlex Manager migrates the data to other nodes in the cluster. It takes longer to evacuate a node, but it is safer because there is no risk of a reboot causing data to be unavailable. Evacuation mode applies only to hyperconverged and storage-only services.

7. Click Finish.

PowerFlex Manager displays a yellow warning banner at the top of the service page. The Service Mode icon displays for the Deployment State and Overall Service Health, and for the Resource Health for the selected nodes.

8. When you are ready to leave service mode, click Service Actions > Exit Service Mode.

Rebooting a PowerFlex node

Use this procedure to reboot a PowerFlex appliance node.

About this task

PowerFlex Manager prevents two nodes from being in service mode simultaneously to protect data.

Steps

1. See Entering and exiting service mode to put the node in service mode.

2. After PowerFlex Manager shows that the node has entered service mode, turn off the PowerFlex node by using the iDRAC interface to run a graceful shutdown.

3. Use the iDRAC interface to power on the PowerFlex node.

4. See Entering and exiting service mode to exit the node from service mode.

Resize a volume

After adding volumes to a service, you can resize the volumes in managed mode.

About this task

For a storage-only service, you can increase the volume size. For a VMware ESXi compute-only service, you can increase the size of the datastore that is associated with the volume. For a hyperconverged service, you can increase the size of both the volume and the datastore.

If you resize a volume in a storage-only service, you must update the datastore size in the corresponding VMware ESXi compute-only service. The datastore size cannot exceed the size of the volume.

Steps

1. Log in to PowerFlex Manager.

2. On the Services page, click the volume component and choose Volume Actions > Resize.

3. Choose the volume that you want to resize:

a. Click Select Volume.


b. Enter a volume or datastore name search string in the Search Text box.
c. Optionally, apply additional search criteria by specifying values for the Size, Type, Compression, and Storage Pool filters.
d. Click Search.

PowerFlex Manager updates the results to show only those volumes that satisfy the search criteria. If the search returns more than 50 volumes, you must refine the search criteria to return only 50 volumes.

e. Select the row for the volume you want to resize.
f. Click Save.

4. Update the sizing information:

If you are resizing a volume for a hyperconverged service, perform these steps:

a. In the New Volume Size (GB) field, specify a value that is greater than the current volume size.
b. Optionally, select Resize Datastore to increase the size of the datastore.

If you are resizing a volume for a storage-only service, enter a value in the New Volume Size (GB) field. Specify a value that is greater than the current volume size. Values must be in multiples of eight, or an error occurs.

If you are resizing a volume for a compute-only service, review the Volume Size (GB) field to see if the volume size is greater than Current Datastore Size (GB). If it is, PowerFlex Manager expands the datastore size.

5. Click Save.

Resize a volume in lifecycle mode

Use this procedure to resize a volume in lifecycle mode.

Steps

1. Type scli --login --username admin to log in to the primary MDM.

2. Type scli --query_all_volumes.

3. Type scli --mdm_ip <MDM_IP> --modify_volume_capacity --volume_name <volume_name> --size_gb <new_size_in_GB> to resize the volume (a worked example follows step 4).

4. Run an inventory on PowerFlex management controller 2.0 gateway to reflect the new volumes in PowerFlex Manager:

a. Log in to PowerFlex Manager.
b. Click Resources.
c. Click All Resources.
d. From the list of resources, select the checkbox for the PowerFlex management controller 2.0 gateway.
e. From the Details pane, click Run Inventory.
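For illustration only, here is what the full resize sequence might look like. The MDM IP address, volume name, and size below are hypothetical placeholders; note that PowerFlex allocates volume capacity in 8 GB increments:

# Log in to the primary MDM, then list volumes to confirm the current name and size
scli --login --username admin
scli --query_all_volumes
# Grow the hypothetical volume "vol01" to 512 GB
scli --mdm_ip 192.168.100.10 --modify_volume_capacity --volume_name vol01 --size_gb 512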

Unmapping a volume

Use this procedure to unmap an existing volume from the PowerFlex cluster using the PowerFlex GUI presentation server in the customer cluster.

About this task

NOTE: This procedure is not applicable for PowerFlex management controller 2.0.

Steps

1. Log in to the PowerFlex GUI presentation server.

2. Click the Configuration tab.

3. Click Volumes.

4. Select the volume, and click Mapping.

5. Click Unmap.

6. Select the nodes from the shown list and click Unmap.


Unmap a volume on PowerFlex management controller 2.0

Use this procedure to unmap an existing volume from the PowerFlex management controller 2.0.

Steps

1. To unmap the volumes from all SDCs:

a. Type scli --login --username admin to log in to the primary MDM.

b. Type scli --query_all_volumes.

c. Type scli --mdm_ip <MDM_IP> --unmap_volume_from_sdc --volume_name <volume_name> --all_sdcs to unmap the volume (see the example after step 3).

2. To unmap a volume from a single SDC:

a. Type scli --login --username admin to log in to the primary MDM.

b. Type scli --query_all_sdc.

c. Type scli --query_all_volumes.

d. Type scli --mdm_ip <MDM_IP> --unmap_volume_from_sdc --volume_name <volume_name> --sdc_ip <SDC_IP> to unmap the volume.

3. Run an inventory on PowerFlex management controller 2.0 gateway to reflect the new volumes in PowerFlex Manager:

a. Log in to PowerFlex Manager.
b. Click Resources.
c. Click All Resources.
d. From the list of resources, select the checkbox for the PowerFlex management controller 2.0 gateway.
e. From the Details pane, click Run Inventory.
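As a sketch, with a hypothetical MDM IP address, volume name, and SDC IP address:

# Unmap the hypothetical volume "mgmt-vol01" from every SDC
scli --mdm_ip 192.168.100.10 --unmap_volume_from_sdc --volume_name mgmt-vol01 --all_sdcs
# Or unmap it from a single SDC only
scli --mdm_ip 192.168.100.10 --unmap_volume_from_sdc --volume_name mgmt-vol01 --sdc_ip 192.168.100.21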

Unmapping a volume using a PowerFlex version prior to 3.5

Use this procedure to unmap an existing volume from the PowerFlex cluster, using a PowerFlex version prior to 3.5 in the customer cluster.

Steps

1. In the PowerFlex GUI, select Frontend > Volumes.

2. Expand the correct storage pool to see the mapped volumes.

3. Right-click the volume that you want to unmap and select Unmap.

4. Select the nodes from which you want to unmap this volume and click Unmap Volumes.

Removing a volume

Use this procedure to remove a volume in the customer cluster.

About this task

If using a PowerFlex version prior to 3.5, see Removing a volume using a PowerFlex version prior to 3.5.

Prerequisites

PowerFlex Manager does not currently support removing a volume.

Steps

1. Log in to the PowerFlex GUI and click the Configuration tab.

2. Click Volumes, select the volume that you want to unmap, and click Mapping > Unmap.


3. Select the nodes from which you want to unmap this volume and click Unmap.

4. Click the volume that you want to remove, click More and select Remove.

5. Run an inventory on the gateway to reflect the new volumes in PowerFlex Manager:

a. Log in to PowerFlex Manager.
b. Click Resources.
c. Click All Resources.
d. From the list of resources, select the checkbox for the gateway.
e. From the Details pane, click Run Inventory.

Remove a volume on the PowerFlex management controller 2.0

Steps

1. Type scli --login --username admin to log in to the primary MDM.

2. Type scli --query_all_volumes.

3. Type scli --mdm_ip <MDM_IP> --remove_volume --volume_name <volume_name> to remove the volume (see the example after step 4).

4. Run an inventory on PowerFlex management controller 2.0 gateway to reflect the new volumes in PowerFlex Manager:

a. Log in to PowerFlex Manager.
b. Click Resources.
c. Click All Resources.
d. From the list of resources, select the checkbox for the PowerFlex management controller 2.0 gateway.
e. From the Details pane, click Run Inventory.
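As a sketch, with hypothetical values carried over from the unmapping example:

# Remove the hypothetical, already-unmapped volume "mgmt-vol01"
scli --mdm_ip 192.168.100.10 --remove_volume --volume_name mgmt-vol01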

Removing a volume using a PowerFlex version prior to 3.5

Use this procedure to remove a volume with the PowerFlex GUI management software in the customer cluster.

About this task

PowerFlex Manager does not currently support removing a volume.

Steps

1. In the PowerFlex GUI, select Frontend > Volumes.

2. Expand the correct storage pool to see the mapped volumes.

3. Right-click the volume that you want to delete and select Unmap.

4. Select ALL the nodes to unmap this volume and click Unmap Volumes.

5. Right-click the volume that you want to delete and select Remove > Volume and click OK.

6. Type the MDM password when prompted and click Close.

7. Update the PowerFlex Manager inventory, by doing the following steps:

a. In the PowerFlex Manager GUI, go to the Resources page, select the PowerFlex Gateway, and click Run Inventory.
b. To confirm that the process completes with no errors, go to Settings > Logs.
c. In the PowerFlex Manager GUI, go to the Services page for the hyperconverged, storage, and compute clusters and then click Update Service Details.
d. After the Update Service Details process completes, confirm that all cluster objects report as healthy (green check mark).


Disabling persistent checksum on medium granularity storage pools

All medium granularity storage pools have persistent checksum enabled by default in PowerFlex. PowerFlex calculates and validates the checksum value for the payload during transit to protect data-in-flight. Checksum protection is applied to all inputs and outputs. Use the following procedures to disable the persistent checksum, if desired, in the customer cluster.

Using PowerFlex GUI presentation server to disable persistent checksum

Use this procedure to disable the persistent checksum using PowerFlex GUI presentation server in the customer cluster.

Prerequisites

You will need the following information:

IP address or hostname of the PowerFlex GUI presentation server
Valid credentials for the PowerFlex cluster
Names of the protection domains to be worked on
Names of the storage pools to be modified

Steps

1. Log in to the PowerFlex GUI presentation server with access to the PowerFlex cluster containing the Storage Pool you want to modify.

2. Expand the Configuration menu in the navigation pane (underneath Dashboard), by left clicking the entry.

3. Select Storage Pools.

4. Select the check box to the left of the Storage Pool you plan to modify.

5. Click More.

6. Select Background Device Scanner.

7. Clear Enable Background Device Scanner.

8. Click Apply.

9. Click Settings.

10. Click General.

11. In the resulting dialog box, leave the box checked for Enable Inflight / Persistent Checksum.

12. Clear the Persistent option.

13. Click Apply.

14. Repeat steps 1 through 13 for all additional Storage Pools to be modified.

NOTE: The background scanner service is not reenabled automatically once persistent checksum is disabled. To reenable it, repeat steps 1 through 6 to verify that the Enable Background Device Scanner option has not been rechecked; if desired, recheck it and click Apply.

Enabling persistent checksum for medium granularity storage pools

Systems upgraded to PowerFlex 3.5 do not have persistent checksum enabled for existing medium granularity storage pools. Persistent checksum is enabled using either the PowerFlex GUI or the PowerFlex SCLI command-line tool in the customer cluster.


Using PowerFlex to enable persistent checksum

Use this procedure to enable the persistent checksum using PowerFlex.

Prerequisites

You will need the following information:

IP or hostname of the PowerFlex presentation server
Valid credentials for the PowerFlex cluster
Names of the protection domains to be worked on
Names of the storage pools to be modified

Steps

1. Log in to PowerFlex with access to the PowerFlex cluster containing the Storage Pool you want to modify.

2. Expand the Configuration menu in the navigation pane (underneath Dashboard), by left clicking the entry.

3. Select Storage Pools.

4. Select the check box to the left of the Storage Pool you plan to modify.

5. Click More.

6. Select Background Device Scanner.

7. Clear Enable Background Device Scanner.

8. Click Apply.

9. Click Settings.

10. Click General.

11. In the resulting dialog box, leave the box checked for Enable Inflight / Persistent Checksum.

12. Select one or both of the Inflight and Persistent options.

13. If desired, check Validate on read (this validation may incur a performance penalty).

14. Click Apply.

NOTE: The background scanner service should be reenabled automatically once persistent checksum is enabled. Repeat steps 1 through 6 to verify that the Enable Background Device Scanner option has been rechecked.

Enable fine granularity metadata read cache using the command line

Use this procedure to enable metadata read cache for fine granularity storage pools for PowerFlex versions 3.5 and later.

About this task

PowerFlex Manager versions 3.8 and higher enable this for new fine granularity storage pools. To determine the cache size, use the following formula: total drive capacity in TiB * 8%, capped at 32 GB.

Steps

1. Access the primary MDM:

a. In a hyperconverged deployment, use SSH to connect to the SVM that is acting as the primary MDM.
b. In a two-layer deployment, use SSH to connect to the PowerFlex storage-only node that is acting as the primary MDM.

2. From the PowerFlex CLI, type:

scli --login --username admin --password <MDM_password>
scli --set_default_fgl_metadata_cache_size --protection_domain_name <protection_domain_name> --metadata_cache_size_mb <cache_size_in_MB>
scli --enable_fgl_metadata_cache --protection_domain_name <protection_domain_name>

3. For each SDS with fine granularity storage pools, type scli --set_fgl_metadata_cache_size --sds_id <SDS_ID> --metadata_cache_size_mb <cache_size_in_MB>.
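A minimal sketch of the sequence above, using a hypothetical protection domain name, a hypothetical SDS ID, and a 16 GB (16384 MB) cache size:

scli --login --username admin --password <MDM_password>
# Set the default cache size for fine granularity pools in the hypothetical protection domain "PD-1"
scli --set_default_fgl_metadata_cache_size --protection_domain_name PD-1 --metadata_cache_size_mb 16384
scli --enable_fgl_metadata_cache --protection_domain_name PD-1
# Apply the same size to an individual SDS (hypothetical SDS ID)
scli --set_fgl_metadata_cache_size --sds_id 6d3f0a1200000001 --metadata_cache_size_mb 16384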


Add licenses to PowerFlex and PowerFlex Manager

Use this procedure to add licenses to PowerFlex and PowerFlex Manager.

Steps

1. To add a license for PowerFlex, do the following:

a. Identify and copy the contents of the PowerFlex license file.
b. In the PowerFlex GUI presentation server, click Settings > Licenses.
c. Paste the contents of the license file into the space provided.

2. To add a PowerFlex Manager license:

a. Log in to PowerFlex Manager.
b. On the Licensing page of the Initial Setup wizard, click Choose File to the right of the Upload License field, and select a valid license file. Based on the license selected, the following information is displayed:

Type: Displays the license type. PowerFlex Manager supports two license types:
Standard: Full-access license type.
Trial: Evaluation license that expires after a specified number of days and only supports a limited number of resources. The number of days before expiration and the number of resources supported both depend on the license you choose.
Total Resources: Displays the maximum number of resources allowed by the license.
Expiration Date: Displays the expiration date of the license (only shown for a trial license).

c. To activate the license, click Save and Continue.

Managing volumes, nodes, and network components

Use this procedure to manage components using PowerFlex Manager.

Steps

Log in to PowerFlex Manager.

The following table describes common tasks for managing system components and what steps to take in PowerFlex Manager to initiate each.

View network topology:
a. Click Services.
b. On the Services page, select a service.
c. On the Service Details tab, click the Port View tab.

Run inventory (nodes, switches, PowerFlex Gateway, and VMware vCenter cluster):
a. Click Resources and then click the All Resources tab.
b. Click the check box for the resource you want to update and then click Run Inventory.
c. After running the inventory, click Update Service Details on the Services page for any service that requires the updated resource data.

Add an existing service:
Click Services and click +Add Existing Service.

Perform node expansion:
a. Click Services. On the Services page, select a service.
b. On the Service Details tab, under Resource Actions, expand the Add Resources list and click Add Nodes. The procedure is the same for new services and existing services.

Remove a node:
a. Click Services.
b. On the Services page, select a service.
c. On the Service Details tab, under Resource Actions, click Remove Resource.
d. Select the node and click Next.
e. Select Delete Resource for the Resource removal type.

Enter service mode:
a. Click Services.
b. On the Services page, select a service.
c. On the Service Details tab, under Service Actions, click Enter Service Mode.

Exit service mode:
a. Click Services.
b. On the Services page, select a service.
c. On the Service Details tab, under Service Actions, click Exit Service Mode.

Replace a drive:
a. Click Services.
b. On the Services page, select a service.
c. On the Service Details tab, select a node and click Node Actions > Drive Replacement.

Reconfigure MDM roles:
a. Click Services.
b. On the Services page, select a service.
c. On the Service Details tab, select a node and click Node Actions > Reconfigure MDM Roles, or click Reconfigure MDM Roles under Service Actions.

You can also reconfigure MDM roles from the Resources page. Select a PowerFlex Gateway and click View Details. Then, click Reconfigure MDM Roles.

Monitoring system health

Use this procedure to monitor system health.

Steps

Log in to PowerFlex Manager.

The following table describes common tasks for monitoring system health and managing software and firmware compliance and what steps to take in PowerFlex Manager to initiate each.

Monitor system resources and health:
On the Dashboard, look at the Service Overview and Resource Overview sections.

Monitor software and firmware compliance:
a. Click Services.
b. On the Services page, select a service.
c. On the Details page, under Service Actions, click View Compliance Report.

Perform software and firmware remediation:
From the compliance report, view the firmware or software components. Click Update Resources to update non-compliant resources.

Generate a troubleshooting bundle:
a. Click Settings and then click Virtual Appliance Management.
b. Click Generate Troubleshooting Bundle.

Download a report that lists compliance details for all resources:
a. Click Resources.
b. Click Export Report and select either PDF or CSV from the drop-down list.

View alerts:
Click Settings and then click Alerts.


Upgrading PowerFlex appliance firmware

Use this procedure for upgrading the intelligent catalog and operating system repositories.

About this task

For more information, see the PowerFlex Manager online help.

Steps

1. Log in to PowerFlex Manager.

2. Click Settings > Compliance and OS Repositories.

3. Select the Compliance Versions tab to load compliance versions and specify a default version for compliance checking.

The +Add button is available in both the Compliance Versions and OS Image Repositories tabs.

You cannot make a minimal compliance version the default version for compliance checking, since it only includes server firmware updates. The default version must include the full set of compliance update capabilities. PowerFlex Manager does not show any minimal compliance versions in the Default Version dropdown menu.

The Compliance Versions tab displays the following information:
State: Displays an icon indicating one of the following states:
Available: Indicates that the compliance file is downloaded and copied successfully.
Downloading: Indicates that the compliance file is being downloaded and provides the percentage complete for the download operation.
Synchronizing: Indicates that the compliance file is being synchronized with the virtual appliance after unpacking.
Unpacking: Indicates that the compliance file is being unpacked and provides the percentage complete for the unpacking operation.
Pending: Indicates that the compliance file download process is in progress.
Error: Indicates that there is an issue downloading the compliance file.
Version: Displays the compliance version.
Source: Displays the share path of the compliance version in a file share.
File Size: Displays the size of the compliance file in GB.
Type: Displays Minimal if the compliance file only contains firmware updates, or Full if it contains firmware and software updates.
View bundles: Displays details about any bundles added for the compliance version.
Available Actions: Select Delete or Resynchronize.

4. Select the OS Image Repositories tab to create operating system image repositories and view the following information:
State: Displays one of the following states:
Available: Indicates that the operating system image repository is downloaded and copied successfully on the appliance.
Pending: Indicates that the operating system image repository download process is in progress.
Error: Indicates that there is an issue downloading the operating system image repository.
Repositories: Displays the name of the repository.
Image Type: Displays the operating system type.
Source Path: Displays the share path of the repository in a file share.
In Use: Displays one of the following values:
True: Indicates that the operating system image repository is in use.
False: Indicates that the operating system image repository is not in use.
Available Actions: Select Delete or Resynchronize.

You cannot perform any actions on repositories that are in use. However, you can delete repositories that are in an Available state, but not in use and not set as a default version.

All the options are available only for repositories in an Error state. The Resynchronize option appears only when you must perform a backup and restore of a previous image.


If a new compliance version becomes available, the Compliance and OS Repositories page displays a notification banner at the top of the screen with the text A new compliance version is available for download. View Details. To the far right of the banner, you should see an Actions menu that gives you the following choices:
View details: Lets you see details about the new compliance version and download it.
Hide: Lets you hide the banner for the new compliance version. This action applies only to the current user and session. If the current user logs off, the banner reappears when this user logs in again. In addition, the banner appears if a different user logs in. The notification banner also displays if another compliance version becomes available.
Dismiss 30 days: Lets you dismiss the banner for this particular compliance version for 30 days. This action applies only to the current user. The banner appears if a different user logs in. The notification banner also displays if another compliance version becomes available.

The new compliance version banner shows up only if you have registered with Secure Remote Services.

Upgrade Windows and Linux compute-only nodes

Prerequisites

Ensure there are no workloads running on the compute-only node.

Steps

1. For Windows:

a. Download the PowerFlex_x.x.x.x_xxx_Complete_Software.zip bundle from Dell Support.
b. Copy the following files to the compute-only node's \Temp directory:
Windows software: PowerFlex_X.X.X.X_XXX_Complete_Software\PowerFlex_X.X.X.X_XXX_Complete_Windows_SW\PowerFlex_X.X.X.X_XXX_Complete_Windows_SW\PowerFlex_X.X.X.X_XXX_Windows\
SDC: EMC-ScaleIO-sdc-X.X-XXX.XXX.msi
LIA: EMC-ScaleIO-lia-X.X-XXX.XXX.msi
c. Log in to the compute-only node as the administrator.
d. Go to Control Panel > Programs > Programs and Features to check the SDC and LIA version. Change the directory to C:\Temp.
e. Click EMC-ScaleIO-lia-X.X-XXX.XXX.msi, check I accept the terms in the License Agreement and click Install > Finish.
f. Click EMC-ScaleIO-sdc-X.X-XXX.XXX.msi, check I accept the terms in the License Agreement and click Install > Finish. Click Yes to reboot.
g. Go to Control Panel > Programs > Programs and Features to check the SDC and LIA version.
h. To verify that the SDC is connected in the PowerFlex presentation server GUI, click Configuration > SDCs.

2. For Linux:

a. Download and unzip the Flex_RCM_x_x_x_rx.zip file to the jump server. This is a large file.
b. Change directory into the extracted RCM.
c. Change directory into OS, unzip VxFlex_OS_x.x.x.x_xxx_RHEL_OEL7.zip, and change directory to vxfm/vxflexox_x.x.x.
d. Copy the following files to the compute-only node's /tmp directory using SCP:
SDC: EMC-ScaleIO-sdc-X.X-X.XXX.el7.x86_64.rpm
LIA: EMC-ScaleIO-lia-X.X-X.XXX.el7.x86_64.rpm
e. Log in to the compute-only node as root.
f. Change the directory to /tmp.
g. Type rpm -qa | grep -i emc to check the version.
h. Type:

rpm -Uvh EMC-ScaleIO-lia-X.X-X.XXX.el7.x86_64.rpm
rpm -Uvh EMC-ScaleIO-sdc-X.X-X.XXX.el7.x86_64.rpm

i. Reboot the compute-only node.
j. To verify that the SDC is connected in the PowerFlex presentation server GUI, click Configuration > SDCs.
k. To verify the LIA/SDC version, type rpm -qa | grep -i emc.


3. Ensure all of your applications are running and the client connections to the applications are successful.

Mapping a volume using a PowerFlex version prior to 3.5 to a Windows PowerFlex compute-only node

Use this procedure to map a PowerFlex volume to a Windows PowerFlex compute-only node in the customer cluster.

About this task

As of PowerFlex Manager version 3.8, Windows compute-only nodes are no longer supported.

Steps

1. Open the PowerFlex GUI, click Front-end, and select Volumes.

2. Right-click the volume, and then select Map.

3. Select the Windows compute-only nodes, and click Map Volumes.

4. Log in to the Windows Server compute-only node and open disk management.

5. Right-click the Windows icon, and then select Disk Management.

6. Rescan the disk by selecting Action > Rescan Disks.

7. Find the disk in the bottom frame, right-click in left area of the disk, and select Online.

8. Initialize the disk by doing the following steps:

a. Find the disk in the bottom frame, right-click in the right area of the disk, and then select New Simple Volume.
b. In the New Simple Volume Wizard, click Next.
c. Select the default, and click Next.
d. Assign the drive letter, and click Next.
e. Select the default, and click Next.
f. Click Finish.

Mapping a volume using Windows PowerFlex compute-only node

Use this procedure to map a PowerFlex volume to a Windows PowerFlex compute-only node in the customer cluster.

About this task

For Windows PowerFlex compute-only nodes, only firmware upgrades are supported.

Steps

1. Log in to the PowerFlex GUI and click the Configuration tab.

2. Click Volumes.

3. Select the volume, click Mapping, and select Map.

4. Select the required Windows compute-only node and click Map.

5. Select the volume to map and click Apply.

6. Select the Windows compute-only nodes, and click Map Volumes.

7. Log in to the Windows Server compute-only node and open disk management.

8. Right-click the Windows icon, and then select Disk Management.

9. Rescan the disk by selecting Action > Rescan Disks.

10. Find the disk in the bottom frame, right-click in left area of the disk, and select Online.

11. Initialize the disk by performing the following steps:

a. Find the disk in the bottom frame, right-click in the right area of the disk, and then select New Simple Volume.
b. In the New Simple Volume Wizard, click Next.
c. Select the default, and click Next.


d. Assign the drive letter, and click Next.
e. Select the default, and click Next.
f. Click Finish.

Enabling and disabling SDC authentication

PowerFlex allows authentication and authorization to be enabled for all SDCs connected to a cluster. Once authentication and authorization are enabled, older SDC clients and SDCs without a configured password are disconnected.

The SDC procedures are not applicable for the PowerFlex management cluster.

NOTE: If SDC authentication is enabled in a production environment, data unavailability may occur if clients are not properly configured.

Preparing for SDC authentication

Prerequisites

You will need the following information:
Primary and secondary MDM IP address
PowerFlex cluster credentials

Steps

1. Log in to the primary MDM.

2. Authenticate against the PowerFlex cluster using the credentials provided.

3. List and record all connected SDCs (either NAME, GUID, ID, or IP), type: scli --query_all_sdc.

4. For each SDC in your list, use the identifier you recorded to generate and record a CHAP secret, type: scli --generate_sdc_password --sdc_ip <SDC_IP> (or NAME, GUID, or ID) --reason "CHAP setup".

NOTE: This secret is specific to that SDC and cannot be reused for subsequent SDC entries.

For example, scli --generate_sdc_password --sdc_ip 172.16.151.36 --reason "CHAP setup". Example output:

[root@svm1 ~]# scli --generate_sdc_password --sdc_ip 172.16.151.36 --reason "CHAP setup" Successfully generated SDC with IP 172.16.151.36 password: AQAAAAAAAAAAAAA8UKVYp0LHCDFD59BrnEXNPVKSlGfLrwAk
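If you have many SDCs, a small shell loop can save typing. This is a minimal sketch, assuming a Linux shell on the primary MDM and a hypothetical list of SDC IP addresses; record each generated secret against its SDC:

# Hypothetical SDC IP addresses recorded in step 3
for ip in 172.16.151.36 172.16.151.37 172.16.151.38; do
  echo "SDC ${ip}:"
  scli --generate_sdc_password --sdc_ip "${ip}" --reason "CHAP setup"
done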

Configuring SDCs to use authentication

Use this procedure to configure all the SDCs for authentication.

About this task

For each SDC, you must populate the generated CHAP password. On a VMware ESXi host, this requires setting a new scini parameter using the esxcli tool. Use this procedure to perform the configuration change. For Windows and Linux SDC hosts, the included drv_cfg utility can be used to update the driver and configuration file in real time. An example is given after the VMware ESXi procedure. For Windows PowerFlex compute-only nodes, only firmware upgrades are supported.

NOTE: VMware ESXi hosts must be rebooted for the new parameter to take effect.

NOTE: This procedure is not applicable for the PowerFlex management controller 2.0.


Prerequisites

Ensure you have generated preshared secrets (passwords) for all SDCs to be configured.

Ensure you have the following information:

Primary and secondary MDM IP address or NAMEs
Credentials to access all SDC hosts or VMs

Steps

1. SSH to the VMware ESXi host using the provided credentials.

2. List the host's current scini parameters, type: esxcli system module parameters list -m scini | grep Ioctl

IoctlIniGuidStr string 10cb8ba6-5107-47bc-8373-5bb1dbe6efa3 Ini Guid, for example: 12345678-90AB-CDEF-1234-567890ABCDEF

IoctlMdmIPStr string 172.16.151.40,172.16.152.40 Mdms IPs. IPs for MDMs in the same cluster should be comma separated. To configure more than one cluster, use '+' to separate the IPs. For example: 10.20.30.40,50.60.70.80+11.22.33.44. Max 1024 characters.

IoctlMdmPasswordStr string Mdms passwords. Each value is <MDM IP>-<password>. Multiple passwords are separated by a ';' sign. For example: 10.20.30.40-AQAAAAAAAACS1pIywyOoC5t;11.22.33.44-tppW0eap4cSjsKIc. Max 1024 characters.

NOTE: The third parameter, IoctlMdmPasswordStr, is currently empty.

3. Using esxcli, configure the driver with the existing and new parameters. When specifying multiple IP addresses here, use a semicolon (;) between the entries, as shown in the following example:

esxcli system module parameters set -m scini -p "IoctlIniGuidStr=10cb8ba6-5107-47bc-8373-5bb1dbe6efa3 IoctlMdmIPStr=172.16.151.40,172.16.152.40 IoctlMdmPasswordStr=172.16.151.40-AQAAAAAAAAA8UKVYp0LHCFD59BrnExNPvKSlGfLrwAk;172.16.152.40-AQAAAAAAAAA8UKVYp0LHCFD59BrnExNPvKSlGfLrwAk"

NOTE: Note the spaces between the Ioctl parameter fields and the opening/closing quotes. The above command is entered on a single line.

4. Now the SDC configuration is ready to be applied. On VMware ESXi nodes a reboot is necessary for this to happen. If the SDC is a hyperconverged node, proceed with step 5. Otherwise, skip to step 8.

5. For hyperconverged nodes, use PowerFlex or the scli tool to place the corresponding SDS into maintenance mode.

6. If the SDS is also the cluster primary MDM, switch the cluster ownership to a secondary MDM and verify the cluster state before proceeding, type: scli --switch_mdm_ownership --mdm_name <secondary_MDM_name>

7. Once the cluster ownership has been switched (if needed) and the SDS is in maintenance mode, the SVM may be powered down safely.

8. Place the ESXi host in maintenance mode. If workloads need to be manually migrated to other hosts, have those actions performed now prior to maintenance mode being engaged.

9. Reboot the ESXi host.

10. Once the host has completed rebooting, remove it from maintenance mode and power on the SVM (if present).

11. Take the SDS out of maintenance mode (if present).

12. Repeat steps 1 through 11 for all VMware ESXi SDC hosts.
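After each host reboots, you may want to confirm that the password parameter persisted. A quick check, reusing the listing command from step 2:

# IoctlMdmPasswordStr should now show one <MDM IP>-<password> entry per MDM
esxcli system module parameters list -m scini | grep Ioctl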

Windows and Linux SDC nodes

Windows and Linux hosts have access to the drv_cfg utility, which allows driver modification and configuration in real time. See below for an example. The --file option allows for persistent configuration to be written to the driver's configuration file (so that the SDC remains configured after a reload or reboot). For Windows PowerFlex compute-only nodes, only firmware upgrades are supported.

Windows: drv_cfg --set_mdm_password --ip <MDM_IP> --port 6611 --password <password>


Linux: /opt/emc/scaleio/sdc/bin/drv_cfg --set_mdm_password --ip <MDM_IP> --port 6611 --password <password> --file /etc/emc/scaleio/drv_cfg.txt

Iterate through the relevant SDCs, using the command examples above along with the recorded information.
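For example, on a Linux SDC, reusing the hypothetical MDM IP address and the sample secret generated earlier:

# Set the CHAP secret in the running driver and persist it to the configuration file
/opt/emc/scaleio/sdc/bin/drv_cfg --set_mdm_password --ip 172.16.151.40 --port 6611 \
  --password AQAAAAAAAAAAAAA8UKVYp0LHCDFD59BrnEXNPVKSlGfLrwAk \
  --file /etc/emc/scaleio/drv_cfg.txt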

Enabling SDC authentication

Once the SDCs have been prepared and configured for SDC authentication, you may proceed with enabling the feature. This procedure is not applicable for the PowerFlex management controller 2.0.

Prerequisites

Ensure all SDCs are configured with their appropriate CHAP secret. Any older or unconfigured SDC will be disconnected from the system when authentication is turned on.

You will need the following information:
The primary MDM IP address
Credentials to access the PowerFlex cluster

Steps

1. SSH to the primary MDM address.

2. Log in to the PowerFlex cluster using the provided credentials.

3. Enable the SDC authentication, type: scli --set_sdc_authentication --enable

4. Verify that SDC authentication and authorization is turned on, and that the SDCs are connected with passwords, type: scli --check_sdc_authentication_status

Example output:

[root@svm1 ~]# scli --check_sdc_authentication_status SDC authentication and authorization is enabled. Found 4 SDCs. The number of SDCs with generated password: 4 The number of SDCs with updated password set: 4

5. If the numbers of SDCs do not match, or you experience disconnected SDCs, list the disconnected SDCs and then disable the SDC authentication by using the following commands:

scli --query_all_sdc | grep "State: Disconnected"
scli --set_sdc_authentication --disable

Recheck the disconnected SDCs to ensure they have the proper configuration applied. If necessary, regenerate their shared secret and reconfigure the SDC. If you are unable to resolve the SDC disconnection, leave the feature disabled and engage Dell EMC support as needed.
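Putting steps 3 through 5 together, a minimal command sequence on the primary MDM might look like the following; the rollback lines are only needed if SDCs show as disconnected:

scli --set_sdc_authentication --enable
scli --check_sdc_authentication_status
# Roll back only if the counts do not match or SDCs are disconnected
scli --query_all_sdc | grep "State: Disconnected"
scli --set_sdc_authentication --disable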

Disabling SDC authentication

This procedure is not applicable for the PowerFlex management controller 2.0.

Prerequisites

Ensure all SDCs are configured with their appropriate CHAP secret. Any older or unconfigured SDC will be disconnected from the system when authentication is turned on.

You will need the following information:
Primary MDM IP address
Credentials to access the PowerFlex cluster

Steps

1. SSH to the primary MDM address.

2. Log in to the PowerFlex cluster using the provided credentials.


3. Disable the SDC authentication, type: scli --set_sdc_authentication --disable

Once disabled, SDCs will reconnect automatically unless otherwise configured.

Expanding an existing PowerFlex cluster with SDC authentication enabled

Once a PowerFlex cluster has SDC authentication enabled, new SDCs must have the configuration step performed after the client is installed. This procedure is not applicable for the PowerFlex management controller 2.0. For Windows PowerFlex compute-only nodes, only firmware upgrades are supported.

Prerequisites

Ensure you have the following information:
Primary MDM IP address
Credentials for the PowerFlex cluster
The IP address of the new cluster members
Ensure that SDC authentication is enabled on the PowerFlex cluster.

Steps

1. Install and add the SDCs as per normal procedures (whether using PowerFlex Manager or manual expansion process).

NOTE: New SDCs will show as Disconnected at this point, as they cannot authenticate to the system.

2. SSH to the primary MDM.

3. Log in to the PowerFlex cluster using the scli tool.

4. For each of your newly added SDCs, generate and record a new CHAP secret, type: scli --generate_sdc_password --sdc_ip <SDC_IP> --reason "CHAP setup - expansion".

5. SSH and log in to the SDC host.

6. If the new SDC is a VMware ESXi host, follow the rest of this procedure. If it is Windows or Linux based, see Adding Windows or Linux Authenticated SDCs.

7. Type esxcli system module parameters list -m scini | grep Ioctl to list the current scini parameters of the host.

8. Using esxcli, type esxcli system module parameters set -m scini -p to configure the driver with the existing and new parameters (a fuller example follows step 16). For example, esxcli system module parameters set -m scini -p "IoctlIniGuidStr=09bde878-281a-4c6d-ae4f-d6ddad3c1a8f IoctlMdmIPStr=10.234.134.194,192.168.152.199,192.168".

9. At this stage, the SDC's configuration is ready to be applied. On ESXi nodes a reboot is necessary for this to happen. If the SDC is a hyperconverged node, proceed with step 10. Otherwise, go to step 12.

10. For PowerFlex hyperconverged nodes, use the presentation manager or scli tool to place the corresponding SDS into maintenance mode.

11. Once the SDS is in maintenance mode, the SVM may be powered off safely.

12. Place the ESXi host in maintenance mode. No workloads should be running on the node, as we have not yet configured the SDC.

13. Reboot the ESXi host.

14. Once the host has completed rebooting, remove it from maintenance mode and power on the SVM (if present).

15. Take the SDS out of maintenance mode (if present).

16. Repeat steps 5 through 15 for all ESXi SDC hosts.
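As referenced in step 8, here is a fuller sketch of the esxcli invocation that carries the generated secret into IoctlMdmPasswordStr. The GUID, MDM IP addresses, and secrets are hypothetical, and password entries are separated by a semicolon:

esxcli system module parameters set -m scini -p "IoctlIniGuidStr=09bde878-281a-4c6d-ae4f-d6ddad3c1a8f IoctlMdmIPStr=172.16.151.40,172.16.152.40 IoctlMdmPasswordStr=172.16.151.40-<generated_secret>;172.16.152.40-<generated_secret>"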


Administering the storage with asynchronous replication

Perform the following procedures to administer the PowerFlex appliance storage with asynchronous replication.

Remote replication on PowerFlex hyperconverged nodes

Remote replication ensures data protection of the PowerFlex appliance. It creates a remote copy of one volume from one cluster to another. PowerFlex appliance supports asynchronous replication.

Setting up the peer system is the first step when configuring remote protection. The volumes from each of the systems must be the same size. If the network is up, then the systems should be connected.

Remote consistency group (RCG)

A remote consistency group (RCG) is an entity that includes a set of consistent volume pairs. The volume on the source from a single protection domain (PD) is replicated to a remote volume from a single PD on the target. This creates a consistent pair of volumes.

When replication is first activated for an RCG, the target volumes will be synchronized with the source volumes. For each volume pair, the entire contents of each source volume are copied to the corresponding target volume. When there is more than one volume pair in the RCG, the order in which the volumes are synchronized is determined by the order in which the volume pairs were created. The initial synchronization occurs while all applications are running and performing I/O. Any writes to an area of the volume that has already been synchronized will be sent to the journal. Writes to an area of the volume that has not already been synchronized will be ignored, as the updated content will be copied over eventually as part of the synchronization.

The initial synchronization can also take place while the system is offline, however the application I/O must first be paused. You can add and manage RCG on both the source and target systems.

Replication direction and mapping

Replication direction and mapping according to subsequent remote consistency group (RCG) operations and possible actions are as follows:

Normal
Possible actions: Switchover / test failover / failover, Remove
Replication direction: A to B
Access to volumes: Access to volumes is allowed only through the source (system A).

After failover
Possible actions: Reverse / restore, Remove
Replication direction: N/A (data is not replicated)
Access to volumes: By default, access to the volume is allowed through the original target (system B). It is possible to enable access through the original source (system A).


After failover + reverse
NOTE: Switchover and test failover are only possible after the peers are synchronized.
Possible actions: Switchover / test failover / failover, Remove
Replication direction: B to A
Access to volumes: Access to the volumes is allowed only through the original target (system B).

After failover + restore
NOTE: Switchover and test failover are only possible after the peers are synchronized.
Possible actions: Switchover / test failover / failover, Remove
Replication direction: A to B
Access to volumes: Access to the volumes is allowed only through the source (system A).

After switchover
Possible actions: Switchover / test failover / failover, Remove
Replication direction: B to A
Access to volumes: Access to the volumes is allowed only through the original target (system B).

After test failover
Possible actions: Switchover / test failover / failover, Remove
Replication direction: A to B
Access to volumes: Access to the volumes is allowed through both systems (system A and system B).

Adding a replication consistency group

Use this procedure to add a replication consistency group (RCG).

Steps

1. Log in to the PowerFlex GUI presentation server: https://<presentation_server_ip>:8443.

NOTE: Use the primary MDM IP address and credentials to log in to the PowerFlex cluster.

2. In the left pane, select Protection > RCGs.

3. Click Add.

a. In the General tab, provide the RCG Name and RPO (recovery point objective).

RPO is the amount of time of data loss that is tolerated if replication between the systems is compromised.

NOTE: It is recommended to enter the minimal amount of time the feature allows. In this case, it is one minute.

b. Select the Source Protection Domain, Target System, and Target Protection Domain from the menu and click Next.

4. On the Add Replication Pairs page:

a. Click the volume from the Source column and click the corresponding size volume from the Target column.

NOTE: Source and destination volumes should be identical.

b. Click Add pair, select the added pair that must be replicated and click Next.

5. On the Review Pairs page:

a. Select the added pair and click Add RCG & Start Replication to start replication.
b. Verify that the operation completes successfully and click Dismiss.

The RCG is added to both the source and target systems. Wait for the initial copy to finish before starting to use the RCG.


Checking the current copy status

Use this procedure to check the current copy status through SCLI or the PowerFlex GUI.

Steps

1. Using SCLI, complete the following:

a. Log in to the primary MDM using SSH.
b. Log in to scli to add the peer system, type: scli --login --username admin.
c. Enter the MDM cluster password.
d. Type scli --query_all_replication_pairs to verify the replication status.

Once initial copy is complete, PowerFlex replication system is ready for use.

2. Using the PowerFlex GUI, complete the following:

a. From the PowerFlex GUI, in the left pane, click Protection > RCGs.
b. In the right pane, select the relevant RCG check box.
c. Select the Volume Pairs tab and, in the Details pane, verify the initial copy status and progress.

Once initial copy is complete, PowerFlex replication system is ready for use.

Modifying the recovery point objective

Use this procedure to modify the recovery point objective (RPO) if required.

Steps

1. In the PowerFlex GUI, in the left pane, click Protection > RCGs.

2. In the right pane, select the relevant RCG check box, and click Modify > Modify RPO.

3. In the Modify RPO for RCG dialog box, enter the new RPO time and click Apply.

4. Verify that the operation completed successfully and click Dismiss.

Adding a replication pair to a remote consistency group

Use this procedure to add a replication pair to a remote consistency group (RCG).

Steps

1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.

2. In the right pane, select the relevant RCG check box, and click Modify > Add Pair.

3. In the Add Pairs wizard, on the Add Replication Pairs page, select a volume from the source and a volume from the target and then click Add Pair.

4. Click Next.

5. In the Review Pairs page, verify the selected volumes are the correct volumes and click Add Pairs.

Unpairing from a remote consistency group

Use this procedure to unpair from a remote consistency group (RCG).

Steps

1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.

2. In the right pane, select the relevant RCG check box, and in the Details pane, in the Volume Pairs tab, click Unpair.

3. In the Remove Pair from RCG dialog box, click Remove Pair.


4. Verify the operation completed successfully and click Dismiss.

Freezing a remote consistency group

Use this procedure to freeze a remote consistency group (RCG).

About this task

Freezing stops writing data from the target journal to the target volume. This option is used while creating a snapshot or copy of the replicated volume.

Steps

1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.

2. In the right pane, select the relevant RCG check box, and click More > Freeze apply.

3. Click Freeze Apply.

4. Verify that the operation completed successfully and click Dismiss.

Unfreezing a remote consistency group

Use this procedure to unfreeze a remote consistency group (RCG).

About this task

Unfreezing the RCG is used while creating a snapshot or copy of the replicated volume.

Steps

1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.

2. In the right pane, select the relevant RCG check box, and click More > Unfreeze apply.

3. Click Unfreeze Apply to resume data transfer from target journal to target volume.

4. Verify that the operation completed successfully and click Dismiss.

Setting the target to inconsistent mode

Use this procedure to set the target to inconsistent mode.

About this task

Set the target to inconsistent mode to pause apply from the target journal to the target volume until the source journal has completed sending data to the target journal. If there is no consistent image on the target journal, then the system does not apply.

NOTE: It is recommended to take a snapshot of the target before setting the target to inconsistent mode, so that a consistent snapshot is available for recovery.

Steps

1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.

2. In the right pane, select the relevant RCG check box, and click Modify > Set Target to Inconsistent Mode.

3. In the Set Target to Inconsistent Mode RCG dialog box, click Apply.

4. Verify that the operation completed successfully and click Dismiss.


Setting the target to consistent mode

Use this procedure to set the target to consistent mode.

About this task

If the target is set to inconsistent mode, you can set it back to consistent mode. As data is transferred from source to target, the SDR verifies that the data in the journal is consistent with the data from the source. The SDR then sends an apply to the journal to prompt the SDR to send data to the volume.

Steps

1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.

2. In the right pane, select the relevant RCG check box, and click Modify > Set Target to Consistent Mode.

3. In the Set Target to Consistent Mode RCG dialog box, click Apply.

4. Verify that the operation completed successfully and click Dismiss.

Running a test failover

Use this procedure to run a test failover of the latest copy of snapshots of the source and target systems before running a failover.

About this task

Running a test failover provides the following benefits:
Enables you to perform resource-intensive operations on secondary storage without impacting production
Test application upgrades on the target system without production impact
Ability to attach different, and higher-performing, compute systems or media in the target environment
Ability to attach systems with different hardware attributes, such as GPUs, in the target domain
Ability to run analytics on the data without impeding your operational systems
Perform what-if actions on the data, because that data will not be written back to production
Eliminates many manual storage tasks, because the test is fully automated along with the snapshots

Prerequisites

Ensure replication is still running and is in a healthy state.

Before running a test failover, map the target volumes with the appropriate access mode. By default, volumes are mapped with read_write access. This creates a conflict with the mapping of target volumes, since PowerFlex sets the remote access mode of the replication consistency group (RCG) to read_only. This is incompatible with the default read_write mapping access mode offered by the PowerFlex GUI; therefore, log in to the target system and manually map all volumes in the RCG to the target system using the scli command.

Example: # scli --map_volume_to_sdc --volume_name volume1 --sdc_id 47c091f200000004 --access_mode read_only

Once the remote volumes are mapped, you can test the RCG failover. The test failover command:
Creates a snapshot on the target system for all volumes attached to the RCG.
Replaces the pointer used by the volume mapping for each volume with a pointer to its snapshot.
Changes the access mode of the volume mapping of each volume on the target system to read_write.

A test failover operation is only possible after the peers are synchronized.
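Extending the preceding single-volume example, a minimal sketch of mapping several RCG volumes read-only on the target system; the volume names and SDC ID below are hypothetical:

# Map each replicated volume to the target SDC as read_only before the test failover
for vol in volume1 volume2 volume3; do
  scli --map_volume_to_sdc --volume_name "${vol}" --sdc_id 47c091f200000004 --access_mode read_only
done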

Steps

1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.

2. In the right pane, select the relevant RCG check box, and click More > Test Failover.

3. In the RCG Test Failover dialog box, click Start Test Failover.

4. In the RCG Test Failover using target volumes dialog box, click Proceed.

5. Verify that the operation completed successfully and click Dismiss.


Stopping test failover

This procedure automatically deletes the snapshots created during test failover.

Steps

1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.

2. In the right pane, select the relevant RCG check box, and click More > Test Failover Stop.

3. Click Approve.

4. Verify that the operation completed successfully and click Dismiss.

Running a failover

Use this procedure to fail over the source role to the target system.

About this task

If the system is not healthy, you can fail over the source role to the target system. When the source is compromised, the host stops sending I/Os to the source volume, replication is stopped, and the target system takes on the role of the source. The host on the target starts sending I/Os to the volume. The target takes on the role of the source, and the source takes on the role of the target.

There are two options when choosing to fail over a remote consistency group (RCG):

Switchover: A complete synchronization and failover between the source and the target. Application I/Os are stopped at the source, and the source and target volumes are synchronized. The access mode of the target volumes is changed for the target host, the roles are switched, and finally the access mode of the new source volumes is changed to read/write.

Latest PiT: The system prevents any writes to the source volumes.

Prerequisites

Before performing a failover, ensure you stop the application and unmount the file systems at the source (if the source is available). Target volumes are only mapped after performing a failover. Target volumes can also be mapped using scli.

Steps

1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.

2. In the right pane, select the relevant RCG check box, and click More > Failover.

3. In the Failover RCG dialog box, select one of the following options:

Switchover: (sync and failover)
Latest PiT: (date and time)

4. Click Apply Failover.

5. In the RCG Sync & Failover dialog box, click Proceed.

6. Verify that the operation completed successfully and click Dismiss.

7. From the top right, click Running Jobs and check the progress of the failover.

Restoring replication

Use this procedure to restore replication when the remote consistency group (RCG) is in failover mode.

About this task

When the RCG is in failover mode, you can reverse or restore the replication. Restoring replication maintains the replication direction from the original source and overwrites all data at the target. This option may be selected from either source or target systems.


Prerequisites

This option is available when RCG is in failover mode, or when the target system is not available. It is recommended to take a snapshot of the original destination before restoring the replication for backup purposes.

Steps

1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.

2. In the right pane, select the relevant RCG check box, and click More > Restore.

3. In the Restore Replication RCG dialog box, click Apply.

4. Verify that the operation completed successfully and click Dismiss.

Reversing replication Use this procedure to reverse replication if the remote consistency group (RCG) is in failover or switchover mode.

About this task

When the RCG is in failover or switchover mode, you can reverse or restore the replication. Reversing replication changes the direction so that the original target becomes the source. All data at the original source is overwritten by the data at the target. This option may be selected from either source or target systems.

Prerequisites

This option is available when RCG is in failover mode, or when the target system is not available. It is recommended to take a snapshot of the original source before reversing the replication for backup purposes.

Steps

1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.

2. In the right pane, select the relevant RCG check box, and click More > Reverse.

3. In the Reverse Replication RCG dialog box, click Apply.

4. Verify that the operation completed successfully and click Dismiss.

Creating a snapshot of the remote consistency group (RCG) volume Use this procedure to create a snapshot of the RCG.

About this task

Create a snapshot of the RCG volume from the target system. The latest image of the volume is used for the snapshot. When creating a snapshot, the RCG enters a freeze mode.

Steps

1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.

2. In the right pane, select the relevant RCG check box, and click More > Create Snapshots.

3. In the Create Snapshots RCG dialog box, click Create Snapshots.

4. Verify that the operation completed successfully and click Dismiss.


Pausing the remote consistency group Use this procedure to pause the replication for the remote consistency group (RCG).

About this task

Pausing stops the transfer of data from the source to the target.

Steps

1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.

2. In the right pane, select the relevant RCG check box, and click More > Pause RCG.

3. In the Pause RCG dialog box, click one of the following options:

Stop Data Transfer - this option saves all the data in the source journal volume until there is no available capacity left.
Track Changes - this option enables manual slim mode, where only metadata is saved in the source journal volumes.

4. Click Pause.

5. Verify that the operation completed successfully and click Dismiss.

Pausing the initial copy Use this procedure to pause replication of the initial copy from the source to the target.

Steps

1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.

2. In the right pane, select the relevant RCG check box, and click Initial copy > Pause Initial copy.

3. In the Pause Initial Copy dialog box, click Pause Initial Copy.

Resuming the initial copy Use this procedure to resume replication of the initial copy from the source to the target.

Steps

1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.

2. In the right pane, select the relevant RCG check box, and click More > Resume.

3. In the Resume Initial Copy dialog box, click Resume Initial Copy.

4. Verify that the operation completed successfully and click Dismiss.

Resuming the replication consistency group Use this procedure to resume the replication consistency group (RCG).

Steps

1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.

2. In the right pane, select the relevant RCG check box, and click More > Resume.

3. In the Resume RCG dialog box, click one of the following options:

Stop Data Transfer - this option saves all the data in the source journal volume until there is no available capacity left.
Track Changes - this option enables manual slim mode, where only metadata is saved in the source journal volumes.

4. Click Resume RCGs.

5. Verify that the operation completed successfully and click Dismiss.


Setting priority Use this procedure to set the order priority for copying volume pairs.

About this task

Set the priority to the highest priority for pairs to be copied first, or set to the lowest priority to be copied last.

NOTE: Setting the priority is only valid during initial copy.

Steps

1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.

2. In the right pane, select the relevant RCG check box.

3. In the Volumes Pairs tab, click Initial copy > Set Priority.

4. In the Set Priority for Pair dialog box, select Default or High and click Save.

5. Verify that the operation completed successfully and click Dismiss.

Mapping remote consistency groups to Storage Data Clients (SDCs) Use this procedure to designate which SDCs can access the remote consistency group (RCG) target volumes.

Prerequisites

This mapping is only enabled from the target RCG.

Steps

1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.

2. In the right pane, click the relevant RCG check box and click Mapping > Map.

3. In the Map RCG Target Volumes dialog box, click the relevant SDC check box, and click Map.

4. In the Mappings section of the dialog box, select the volume check box and select the access mode.

NOTE: Read Access mode applies to all platforms, except Windows clusters, which require the No Access mode.

5. In the Map RCG Target Volumes dialog box, click Map RCG Target Volumes.

6. Click Apply.

7. Verify that the operation completed successfully and click Dismiss.

Mounting a VMFS datastore copy on the target VMware ESXi cluster Use this procedure to mount a VMFS datastore copy on the target VMware ESXi cluster.

Prerequisites

Ensure you perform a storage rescan on your host to update the view of storage devices that are presented to the host.

Steps

1. In the VMware vSphere web client navigator, browse to a host, a cluster, or a data center.

2. From the right-click menu, select Storage > New datastore.

3. Select VMFS as the datastore type.

4. Enter the datastore name and if necessary, select the placement location for the datastore.


5. From the list of storage devices, select the volume that is mapped to the cluster, and click Next.

6. Select Keep existing signature and click Next.

NOTE: The Assign a new signature option is only recommended when you want to mount the volume on the same VMware ESXi host where the original volume is present. Also, be aware that creating a new signature is an irreversible operation.

7. Click Finish.

8. Click OK.

9. Rescan for new VMFS volumes:

a. In the VMware vSphere client, browse to a host, a cluster, or a data center.
b. From the right-click menu, select Storage > Rescan Storage > Scan for new VMFS Volumes.
c. Click OK.

Unmapping a Storage Data Client (SDC) from the remote consistency group target volumes Use this procedure to unmap a Storage Data Client (SDC) from the remote consistency group (RCG) target volumes.

Steps

1. From the PowerFlex GUI, in the left pane, click Protection > RCGs.

2. In the right pane, click the relevant RCG check box and click Mapping > Unmap.

3. In the Unmap dialog box, click the relevant SDC check box, and click Unmap.

4. Verify that the operation completed successfully and click Dismiss.

Configuring replication on PowerFlex storage-only nodes This section describes how to enable or disable replication on PowerFlex storage-only nodes manually.

Add storage data replication to PowerFlex

Use this task to add storage data replication to PowerFlex.

Prerequisites

Replication is supported on PowerFlex storage-only nodes with dual CPU. The node should be migrated to an LACP bonding NIC port design.

Steps

1. Connect to the PowerFlex GUI presentation server through https://<Presentation Server IP>:8443 and log in using the MDM credentials.

2. Click the Protection tab in the left pane.

3. Click SDR > Add, and enter the storage data replication name.

4. Choose the protection domain.

5. Enter the IP address to be used, choose its role, and click Add IP. Repeat this for each IP address you are adding, and then click Add SDR.

NOTE: While adding storage data replication, it is recommended to add IP addresses for flex-data1-, flex-data2-, flex-data3- (if required), and flex-data4- (if required), along with flex-rep1- and flex-rep2-. Choose the role of Application and Storage for all data IP addresses, and choose the role External for the replication IP addresses.


6. Repeat steps 3 through 5 for each storage data replicator you are adding.

7. Click Protection > Journal Capacity > Add, and provide the capacity percentage as 10%, which is the default. You can customize it if needed.

Extract and add the MDM certificate

Use this procedure to add the MDM certificate.

About this task

The MDM certificate must be exchanged between the replicating clusters to protect from possible security attacks. This procedure is performed using the PowerFlex scli. On each system, a certificate is created and sent to the other host in the replicated pair.

Prerequisites

NOTE: This procedure can only be completed when the secondary site is active.

Steps

1. Log in to the primary MDM on both the source and destination using SSH.

2. Run scli --login --username admin and provide the MDM cluster password when prompted.

See the following examples to extract the certificate on the source and destination primary MDM:

Example for source: scli --extract_root_ca --certificate_file /tmp/source.crt
Example for destination: scli --extract_root_ca --certificate_file /tmp/destination.crt

3. Copy the extracted certificate of the source (primary MDM) to the destination (primary MDM) using SCP, and vice versa.

See the following examples to add the copied certificate:

Example for source: scli --add_trusted_ca --certificate_file /tmp/destination.crt --comment destination_crt
Example for destination: scli --add_trusted_ca --certificate_file /tmp/source.crt --comment source_crt

4. Run scli --list_trusted_ca to verify the added certificate.

5. Once the journal capacity is set, log in to the primary MDM using SSH, and log in to scli using scli --login --username admin to add the peer.

Logged in. User role is SuperUser. System ID is 2e6ccfd208ef120f

Note the system ID.

6. Add a peer system on the primary site: scli --add_replication_peer_system --peer_system_ip (remote master mdm Mgmt ip,remote slave mdm Mgmt ip) --peer_system_id (id of remote site) --peer_system_name (remote site name).

7. Add a peer system on the remote site: scli --add_replication_peer_system --peer_system_ip (primary master mdm Mgmt ip,primary slave mdm Mgmt ip) --peer_system_id (id of primary site) --peer_system_name (primary site name).

NOTE:

For a three node cluster, add two management IP addresses (primary and secondary).

For a five node cluster, add three management IP addresses (primary, secondary1, and secondary2).
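For illustration, a hedged worked example of the peer-system command from step 7 (the management IP addresses and site name are hypothetical; the system ID is the one noted in step 5):

# scli --add_replication_peer_system --peer_system_ip 192.168.100.11,192.168.100.12 --peer_system_id 2e6ccfd208ef120f --peer_system_name site-A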

Create the replication consistency group

Use this task to create the RCG. Perform this task only when the remote site is up and running.

About this task

The RCG is a logical container for volumes whose application data must be replicated consistently with each other. It includes a set of consistent volume pairs. The volume on the source from a single protection domain is replicated to a remote volume from


a single protection domain on the target. This creates a consistent pair of volumes. You can add and manage RCG on both the source and target systems.

Before proceeding, create source and destination volumes of the same size. It is recommended, but not mandatory, that the volumes in the volume pair have the same attributes (including zero padding and granularity); not doing so can impact performance and capacity.

If you already have a volume at the source site, create a volume of the same size at the destination site.

NOTE: Do not map the volume that is created on target system to SDC.

Steps

1. Log in to the source site presentation server: https://<Presentation Server IP>:8443.

NOTE: Use the primary MDM IP address and credentials to log in to the PowerFlex cluster.

2. In the left pane, click Protection > RCGs.

3. In the right pane, click Add.

4. In the Add RCG wizard, enter the following on the General page:

a. Enter the RCG Name.
b. Enter the number of RPO (recovery point objective) minutes. This is the amount of time of data loss that is tolerated if replication between the systems is compromised.
c. Select Source Protection Domain.
d. Select Target System.
e. Select Target Protection Domain.

5. Click Next.

6. On the Add Replication Pairs page:

a. Click the volume from the Source column and then click the same size volume from the Target column.
b. Click Add Pair. The volume pair is added.
c. Click Next.

7. On the Review Pairs page:

a. Ensure that the correct source and target volume pair are selected and click ADD RCG & START REPLICATION.
b. Verify that the operation completed successfully and click Dismiss.

The RCG is added to both the source and target systems.

Wait for the initial copy to complete before starting to use the replicated volumes.

Find the current copy status

Use this task to find the current copy status.

Steps

1. Log in to the primary MDM using SSH, and log in to scli by typing # scli --login --username admin. Enter the MDM cluster password when prompted.

2. Verify the replication status, type: # scli --query_all_replication_pairs.

Once initial copy is complete, PowerFlex replication is ready for use.

Modify the recovery point objective

Use this procedure to update the recovery point objective (RPO) time as required.

Steps

1. From https://Presentation_Server_IP:8443 (PowerFlex GUI), in the left pane, click Protection > RCGs.

2. In the right pane, select the relevant RCG check box, and click Modify > Modify RPO.

3. In the Modify RPO for RCG dialog box, enter the updated RPO time and click Apply.


4. Verify that the operation completed successfully and click Dismiss.

Disabling replication on PowerFlex storage-only nodes

Use this workflow to disable replication on PowerFlex storage-only nodes.

Steps

1. Freeze the remote consistency group.

2. Remove the remote consistency group.

3. Remove a peer system.

4. Remove a peer system and certificates.

5. Remove replication trust for peer system.

6. Enter SDS into maintenance mode.

7. Remove the storage data replication from PowerFlex.

8. Remove a storage data replication RPM.

9. Clean up the network configurations.

10. Exit SDS from maintenance mode.

11. Remove the journal capacity.

12. Remove the target volumes from the destination system.

Freeze the remote consistency group

Perform this procedure to freeze the remote consistency group (RCG). Freeze stops writing data from the target journal to the target volume. Use this option while creating a snapshot or copying the replicated volume.

Steps

1. Connect to the PowerFlex GUI presentation server through https://<Presentation Server IP>:8443 and log in using the MDM credentials.

2. From the left pane, click Protection > RCGs.

3. In the right pane, select the relevant RCG check box, click More > Freeze, and click Apply.

4. Verify that the operation completes successfully and click Dismiss.

Remove the remote consistency group

Use this procedure to remove the volume pairs and stop all remote consistency group (RCG) replication input and output.

Steps

1. Connect to the PowerFlex GUI presentation server through https://<Presentation Server IP>:8443 and log in using the MDM credentials.

2. From the left pane, click Protection > RCGs.

3. In the right pane, select the relevant RCG, and click More > Remove RCG.

4. Verify that the operation completes successfully, and click Dismiss.

Remove a peer system

Use this procedure to remove replication between the peer systems.

Steps

1. Connect to the PowerFlex GUI presentation server through https://<Presentation Server IP>:8443 and log in using the MDM credentials.

2. From the left pane, click Protection > Peer Systems.

3. In the right pane, select the relevant peer system, and click Remove.

4. Verify that the operation completes successfully, and click Dismiss.


Remove replication trust for peer system

Use this optional procedure to remove the trusted certificates from the source and target systems.

Steps

1. Open an SSH session using PuTTY or a similar SSH client.

2. Log in to the primary MDM with admin credentials.

3. In the PowerFlex CLI, type scli --list_trusted_ca to display the list of trusted certificates in the system. Note the fingerprint details.

4. Type scli --remove_trusted_ca --fingerprint <fingerprint> to remove the certificate.

5. Verify that the following message is received:

The Certificate was successfully removed.

6. To remove the source and target certificates, run the following commands on the respective systems (the fingerprints shown are the output of scli --list_trusted_ca):

rm /tmp/target.crt
scli --list_trusted_ca
9A:14:00:5F:3F:A0:01:73:D9:8F:69:E3:9C:53:C5:FB:CB:7B:AE:CA
scli --remove_trusted_ca --fingerprint 9A:14:00:5F:3F:A0:01:73:D9:8F:69:E3:9C:53:C5:FB:CB:7B:AE:CA

rm /tmp/source.crt
scli --list_trusted_ca
E4:07:A4:BF:A3:2B:6B:DD:93:F4:76:87:C0:8A:8C:6D:31:83:7A:23
scli --remove_trusted_ca --fingerprint E4:07:A4:BF:A3:2B:6B:DD:93:F4:76:87:C0:8A:8C:6D:31:83:7A:23

7. Verify that the following message is received:

The Certificate was successfully removed.

Enter SDS into maintenance mode

Use this procedure to place an SDS into maintenance mode to perform nondisruptive maintenance on the SDS.

About this task

Perform this procedure if you need to clean the network configurations.

Steps

1. Connect to the PowerFlex GUI presentation server through https://<Presentation Server IP>:8443 and log in using the MDM credentials.

2. In the left pane, click Configuration > SDSs.

3. In the right pane, select the relevant SDS and click More > Enter Maintenance Mode.

4. In the Enter SDS into Maintenance Mode dialog box, select Instant. If maintenance mode takes more than 30 minutes, select PMM.

5. Click Enter Maintenance Mode.

6. Verify that the operation completes successfully and click Dismiss.

Remove storage data replication from PowerFlex

Use this procedure to remove storage data replication from PowerFlex.

Steps

1. Connect to the PowerFlex GUI presentation server through https://<Presentation Server IP>:8443 and log in using the MDM credentials.

2. In the left pane, click Protection > SDRs.

3. In the right pane, select the SDR Name and click More > Remove.

4. Repeat for all SDRs.


Remove a storage data replication RPM

Use this procedure to remove a storage data replication RPM.

Steps

1. SSH to the PowerFlex node.

2. List all installed Dell EMC RPMs on a PowerFlex node by entering the following command: rpm -qa | grep -i emc.

3. Identify the SDR rpm - EMC-ScaleIO-sdr-x.x.xxx.el7.x86_64.rpm.

4. Remove the RPM by entering the following command: rpm -e EMC-ScaleIO-sdr-x.x.xxx.el7.x86_64

5. Verify that the RPM is removed and the service is stopped.
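A minimal sketch for the verification in step 5, reusing the query from step 2 (the grep pattern is illustrative):

rpm -qa | grep -i sdr    # expect no output once the SDR RPM is removed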

Clean up network configurations

Use this procedure to clean a network configuration.

About this task

If this network is used for other functions, these steps are optional.

Steps

1. Remove the route-bond# files that are associated with the replication network, using the following commands:

cd /etc/sysconfig/network-scripts/

rm route-bond(x).xxx

Repeat this command for the second route.

2. Remove the ifcfg-bond# files that are associated with the replication network, using the following commands:

cd /etc/sysconfig/network-scripts/

rm ifcfg-bond(x).xxx

Repeat this command for the second interface.
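A hedged concrete example of the cleanup, assuming the replication VLANs are 161 and 162 on bond0 (hypothetical interface names and VLAN IDs; match them to your environment):

cd /etc/sysconfig/network-scripts/
rm route-bond0.161 route-bond0.162
rm ifcfg-bond0.161 ifcfg-bond0.162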

Exit SDS from maintenance mode

Use this procedure to exit an SDS from maintenance mode.

Steps

1. Connect to the PowerFlex GUI presentation server through https://<Presentation Server IP>:8443 and log in using the MDM credentials.

2. In the left pane, click Configuration > SDSs.

3. In the right pane, select the relevant SDS and click More > Exit Maintenance Mode.

4. In the Exit SDS from Maintenance Mode dialog box, select Instant.

5. Click Exit Maintenance Mode.

6. Verify that the operation completes successfully and click Dismiss.

Repeat for each PowerFlex node in the protection domain.


Remove journal capacity

Use this procedure to remove the journal capacity.

Steps

1. Connect to the PowerFlex GUI presentation server through https://<Presentation Server IP>:8443 and log in using the MDM credentials.

2. From the left pane, click Protection > Journal Capacity.

3. In the right pane, select the Protection Domain, and click Remove.

4. Verify that the operation completes successfully and click Dismiss.

Remove target volumes from the destination system

Use this procedure to remove target volumes from the destination system.

Steps

1. Connect to the PowerFlex GUI presentation server through https://<Presentation Server IP>:8443 and log in using the MDM credentials.

2. Remove the volumes used as target in the volume pair.

3. From the left pane, click Configuration > Volumes.

4. In the right pane, select the target volumes.

5. Click More > Remove.

6. Select Remove volume with all of its snapshots.

7. Click Remove.

8. Verify that the operation completes successfully and click Dismiss.


Configuring and viewing alerts You can configure PowerFlex Manager to receive and display email alerts from discovered PowerFlex appliance components.

The alert connector is available through PowerFlex Manager. It sends email alerts on the health of PowerFlex nodes securely through Secure Remote Services. Secure Remote Services routes alerts to the Dell EMC support queue for diagnosis and dispatch.

When using the alert connector with Secure Remote Services, critical alerts can automatically generate service requests. Dell Technologies Services continuously evaluates and updates which alerts automatically generate service requests. For more information, contact Dell Technologies Services.

During node discovery, you can configure iDRAC nodes to automatically send alerts to PowerFlex Manager. PowerFlex Manager receives SNMP alerts directly from iDRAC and forwards them to Secure Remote Services. You must manually configure CloudLink and Dell EMC Networking OS10 switches to send alerts to PowerFlex Manager.

If not done at discovery, you can configure iDRAC nodes to automatically send alerts to PowerFlex Manager by editing the alert connector settings and selecting the Configure nodes for alert connector option.

PowerFlex Manager fetches telemetry reports from PowerFlex and forwards these reports as is to Secure Remote Services. The reports are then sent to the Dell Managed File Transfer (MFT) portal, where they can be leveraged by CloudIQ.

PowerFlex Manager forwards four different reports:

The configuration report is sent once a day.
The capacity report is sent every hour.
The performance report is sent every five minutes.
The alerts report is sent every five minutes.

CloudIQ integration is enabled by default. CloudIQ enables PowerFlex Manager to transport telemetry data, alerts and analytics via Secure Remote Services to assist Dell EMC in providing support.

NOTE: The alert connector does not replace any monitoring software that you might already have, including any already

available through the PowerFlex appliance such as PowerFlex Connect Home.

As of PowerFlex Manager version 3.3, OpenManage Enterprise is no longer required to connect to Secure Remote Services. If you use OpenManage Enterprise for other functionality, note that it is no longer installed and we recommend using PowerFlex Manager instead. In future upgrades, we will recommend removing this software module.

Configure the alert connector Configure the alert connector to register the device with Secure Remote Services using a unique software ID.

About this task

Configuring the alert connector enables critical and error alerting for node and PowerFlex resources that are managed by PowerFlex Manager.

CloudIQ is enabled by default.

Prerequisites

Before you configure the alert connector, ensure:

The primary MDM in the PowerFlex cluster is valid and up and running.
The Secure Remote Services gateway is configured in the data center and connected to Secure Remote Services.

NOTE: Ensure that the resources are in managed mode to configure alerts.

Steps

1. Log in to PowerFlex Manager (username: admin and password: admin).



2. On the menu bar, click Settings and click Virtual Appliance Management.

3. Click Add in the Alert connector section.

4. Complete the following steps in the Device Registration section:

a. Select the device type.
b. Enter your unique software ID in the Enterprise License Management Systems (ELMS) Software Unique ID box. For information about how to obtain the ID, see the License Authorization email that you received.
c. Enter the unique number associated with your system in the Solution Serial Number box, for example, V1234567.

d. Select one or more of the following options for the Connection type:

Secure Remote Services
Email

e. Optionally, disable CloudIQ integration by clearing Enable CloudIQ.
f. Select the severity level for which you want to see alerts by choosing one of the following Alert Filter values:

Critical (Recommended)
Warning
Info

g. Specify how often you want to check for alerts by entering an Alert Polling Interval value in hours or minutes.

5. For a Secure Remote Services configuration, complete the following steps in the Secure Remote Services Section under Connector Settings:

a. Enter a node address for the Secure Remote Services gateway in the SRS Gateway Host IP or FQDN field.

NOTE: Secure Remote Services support recommends using the IP address when registering.

b. Enter the port number in the SRS Gateway Host Port field.
c. Enter the required username in the User ID field.
d. Enter the required password in the Password or NT Token field.

6. For an email configuration, complete the following steps in the Email Server Configuration under Connector Settings:

a. Choose the Server type:

SMTP
SMTPS over SSL
SMTPS STARTTLS

b. Enter an IP address or fully qualified domain name for the email server in the Server IP or FQDN field.
c. Enter the port number for the email server in the Port field.
d. Enter the required username in the User ID field.
e. Enter the required password in the Password field.
f. Enter the email address for the sender in the Sender Address field.
g. Enter one or more email recipient addresses.

7. Click Save.

8. Click Send Test Alert to verify that the alert connector is receiving alerts.

9. Click Test Connection to verify the connection.

When the device is registered for alerting, topology and telemetry reports are automatically sent to Secure Remote Services weekly, starting at the time that the device was registered.

Configuring SNMP trap and syslog forwarding You can configure PowerFlex Manager for SNMP trap and syslog forwarding.

Configure SNMP communication to enable PowerFlex Manager to receive and forward SNMP traps. PowerFlex Manager can receive SNMP traps from system devices and forward them to one or more remote network management systems.

You can configure PowerFlex Manager to forward syslogs it receives from system components to a remote network management system. Authentication is provided by PowerFlex Manager, through the configuration settings you provide.


Configure SNMP trap forwarding

To configure SNMP trap forwarding, specify the access credentials for the SNMP version you are using and then add the remote server as a trap destination.

About this task

PowerFlex Manager supports different SNMP versions, depending on the communication path and function. The following table summarizes the functions and supported SNMP versions:

Function | SNMP version
PowerFlex Manager receives traps from all devices, including iDRAC | v2
PowerFlex Manager receives traps from iDRAC devices only | v3
PowerFlex Manager forwards traps to the network management system | v2, v3

NOTE: SNMPv1 is supported wherever SNMPv2 is supported.

PowerFlex Manager can receive an SNMPv2 trap and forward it as an SNMPv3 trap.

SNMP trap forwarding configuration supports multiple forwarding destinations. If you provide more than one destination, all traps coming from all devices are forwarded to all configured destinations in the appropriate format.

PowerFlex Manager stores up to 5 GB of SNMP alerts. Once this threshold is exceeded, PowerFlex Manager automatically purges the oldest data to free up space.

For SNMPv2 traps to be sent from a device to PowerFlex Manager, you must provide PowerFlex Manager with the community strings on which the devices are sending the traps. If during resource discovery you selected to have PowerFlex Manager automatically configure iDRAC nodes to send alerts to PowerFlex Manager, you must enter the community string used in that credential here.
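For example, to confirm that PowerFlex Manager is receiving SNMPv2 traps on the configured community string, you could send a test trap from a Linux host with the net-snmp tools; a hedged sketch (the community string and PowerFlex Manager IP are hypothetical, and the trap OID is the generic coldStart notification):

snmptrap -v 2c -c public 192.168.1.50:162 '' 1.3.6.1.6.3.1.1.5.1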

For a network management system to receive SNMPv2 traps from PowerFlex Manager, you must provide the community strings to the network management system. This configuration happens outside of PowerFlex Manager.

For a network management system to receive SNMPv3 traps from PowerFlex Manager, you must provide the PowerFlex Manager engine ID, user details, and security level to the network management system. This configuration happens outside of PowerFlex Manager.

Prerequisites

PowerFlex Manager and the network management system use access credentials with different security levels to establish two-way communication. Review the access credentials that you need for each supported version of SNMP. Determine the security level for each access credential and whether the credential supports encryption.

To configure SNMP communication, you need the access credentials and trap targets for SNMP, as shown in the following table:

If adding... | You must know...
SNMPv2 | Community strings by which traps are received and forwarded
SNMPv3 | User and security settings

Steps

1. Log in to PowerFlex Manager.

2. On the menu bar, click Settings, and click Virtual Appliance Management.

3. On the Virtual Appliance Management page, in the SNMP Trap Configuration section, click Edit.

4. To configure trap forwarding as SNMPv2, click Add community string. In the Community String box, provide the community string by which PowerFlex Manager receives traps from devices and by which it forwards traps to destinations.

You can add more than one community string. For example, add more than one if the community string by which PowerFlex Manager receives traps differs from the community string by which it forwards traps to a remote destination.


NOTE: An SNMPv2 community string that is configured in the credentials during discovery of the iDRAC or through

management is also displayed here. You can create a new community string or use the existing one.

5. To configure trap forwarding as SNMPv3, click Add User. Enter the Username, which identifies the ID where traps are forwarded on the network management system. The username must be at most 16 characters. Select a Security Level:

Security Level | Details | Description | authPassword | privPassword
Minimal | noAuthNoPriv | No authentication and no encryption | Not required | Not required
Moderate | authNoPriv | Messages are authenticated but not encrypted (MD5, at least 8 characters) | Required | Not required
Maximum | authPriv | Messages are authenticated and encrypted (MD5 and DES, both at least 8 characters) | Required | Required

Note the current engine ID (automatically populated), username, and security details. Provide this information to the remote network management system so it can receive traps from PowerFlex Manager.

You can add more than one user.

6. In the Trap Forwarding section, click Add Trap Destination to add the forwarding details.

a. In the Target Address (IP) box, enter the IP address of the network management system to which PowerFlex Manager forwards SNMP traps.

b. Provide the Port for the network management system destination. The default SNMP trap port is 162.
c. Select the SNMP Version for which you are providing destination details.
d. In the Community String/User box, enter either the community string or username, depending on whether you are configuring an SNMPv2 or SNMPv3 destination. For SNMPv2, if there is more than one community string, select the appropriate community string for the particular trap destination. For SNMPv3, if there is more than one user defined, select the appropriate user for the particular trap destination.

7. Click Save.

The Virtual Appliance Management page displays the configured details under Trap Forwarding (SNMP v2 community string or SNMP v3 user).

NOTE: To configure nodes with PowerFlex Manager SNMP changes, go to Settings > Virtual Appliance

Management, and click Configure nodes for alert connector.

Configure syslog forwarding

You can configure PowerFlex Manager to forward syslogs it receives from system components to a remote network management system. PowerFlex Manager provides authentication through the configuration settings you provide.

About this task

You can configure PowerFlex Manager to forward syslogs to up to five destination remote servers. You can set only one forwarding entry per remote server.

You can apply forwarding filters based on facility type and severity level. For example, you can configure PowerFlex Manager to forward all syslog messages to one remote server and then forward syslog messages of a given severity to a different remote server. The default is to forward syslog messages of all facilities and severity levels to the remote syslog server.


Prerequisites

Ensure that the system components are configured to send syslog messages to PowerFlex Manager. This configuration happens outside of PowerFlex Manager.

Ensure that you have the following information:

Obtain the IP address or hostname of the remote syslog server and the port where the server is accepting syslog messages.

If sending only some syslog messages to a remote server, you must know the facility and severity of the log messages to forward.

Steps

1. Log in to PowerFlex Manager.

2. On the menu bar, click Settings and click Virtual Appliance Management.

3. On the Virtual Appliance Management page, in the Syslog section, click Edit.

4. Click Add syslog forward.

5. For Host, enter the destination IP address of the remote server to which you want to forward syslogs.

6. Enter the destination Port where the remote server is accepting syslog messages. The default is 514.

7. Select the network Protocol used to transfer the syslog messages. The default is UDP.

8. Optionally enter the Facility and Severity Level to filter the syslogs that are forwarded. The default is to forward all.

9. Click Save to add the syslog forwarding destination.

The Virtual Appliance Management page displays the configured details under Syslog Forwarding.
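To verify end-to-end forwarding, you can send a test syslog message to PowerFlex Manager from a Linux host; a hedged sketch using the util-linux logger utility (the destination IP is hypothetical, and this assumes the source host is configured as a valid syslog sender):

logger -n 192.168.1.50 -P 514 -d -p local0.err "PowerFlex Manager syslog forwarding test"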


Administering PowerFlex Manager This section includes information about key PowerFlex Manager activities.

These activities include:

Backing up and restoring data
Adding or modifying user accounts
Assigning users to services
Recovering lost passwords
Password management
Credentials management
Restarting the PowerFlex Manager virtual appliance

PowerFlex Manager limits

Maximum number of nodes in PowerFlex Manager: 384
Maximum number of nodes in a service: 32
Maximum number of volumes in a service: 1024 for hyperconverged and 32,000 for storage-only
Maximum number of networks in a service: 400
Maximum number of discovered resources:
Switches: 10
CloudLink Centers: 4
VMware vCenters: 6

PowerFlex Manager settings limits

NTP: 4 for the PowerFlex Manager appliance; the node operating system limit is 1-3, and PowerFlex Manager sets 1 during deployment
SMTP: 1 per PowerFlex Manager appliance for the alert connector
SNMP: 20 communities
Syslog: 4 remote syslog servers

Back up and restore

Back up and restore PowerFlex Manager

Use this task to schedule backups, perform an immediate backup, and perform a restore.

About this task

Performing a backup saves all user-created data to a remote share from which it can be restored.

Steps

1. Log in to PowerFlex Manager.

2. Click Settings and click Backup and Restore.

3. The Backup and Restore page displays information about the last backup operation that was performed on the PowerFlex Manager virtual appliance. Information in the Settings and Details section applies to both manual and automatically scheduled backups and includes the following:

Last backup date



Last backup status

Backup directory path to an NFS or a CIFS share

Backup directory username

4. The Backup and Restore page also displays information about the status of automatically scheduled backups (enabled or disabled).

On this page, you can:

Manually start an immediate backup - using Backup Now option

Restore an earlier configuration - using Restore Now option

Edit general backup settings

Edit automatically scheduled backup settings

Back up the appliance SSL and trusted certificates

Before you upgrade PowerFlex Manager, you must back up the SSL and trusted certificates.

About this task

For more information, see https://www.dell.com/support/kbdoc/en-us/000193466/powerflex-manager-how-to-backup-restore-appliance-ssl-certificates-trusted-certificates?lang=en.

Steps

1. Back up the appliance SSL certificates:

a. Log in to the PowerFlex Manager appliance using SSH and sudo to root: sudo su -
b. Copy the following files to /home/delladmin/ or to /tmp/:

cp /etc/pki/tls/certs/localhost.crt /home/delladmin/
cp /etc/pki/tls/private/localhost.key /home/delladmin/

c. Type chown delladmin:delladmin /home/delladmin/localhost.* to change the owner to delladmin.

d. Copy the localhost.crt and localhost.key file from the PowerFlex Manager appliance to the jump server or another Linux system for temporary storage.
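For step d, a hedged sketch using scp (the jump server address and user are hypothetical):

scp /home/delladmin/localhost.crt /home/delladmin/localhost.key delladmin@192.168.1.60:/home/delladmin/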

2. Back up the appliance SSL trusted certificates:

a. From the original PowerFlex Manager appliance, copy the CA certificate database /etc/pki/java/cacerts to the jump server or another destination you choose.

Restore the appliance SSL and trusted certificates

About this task

For more information, see https://www.dell.com/support/kbdoc/en-us/000193466/powerflex-manager-how-to-backup-restore-appliance-ssl-certificates-trusted-certificates?lang=en.

Steps

1. Restore the appliance SSL certificates:

a. Log in to the PowerFlex Manager appliance using SSH and sudo to root: sudo su -
b. Copy the .crt and .key files from the jump server or file location to the new PowerFlex Manager appliance, in /home/delladmin/ or /tmp/.

c. To adjust the permissions to the original state or to check the permissions, type:

chown root:root /path/to/files/localhost.*
chmod 755 /path/to/files/localhost.crt
chmod 400 /path/to/files/localhost.key


d. To copy the files to their default location, type:

cp /path/to/files/localhost.crt /etc/pki/tls/certs/
cp /path/to/files/localhost.key /etc/pki/tls/private/

e. Restart the PowerFlex Manager appliance.
f. After the restart, check that the certificate appears correctly in the browser and in PowerFlex Manager under

Settings > Virtual Appliance Management > Appliance SSL Certificate.

2. Restore the appliance SSL trusted certificates:

a. Copy the cacerts file to the newly deployed or upgraded PowerFlex Manager appliance where you would like the certificates restored. Type keytool -list -keystore /etc/pki/ca-trust/extracted/java/cacerts -storepass "changeit" to list the contents.

b. Type keytool -list -keystore <path to backup cacerts> -storepass "changeit" | grep -i pfxm to verify that the PowerFlex Manager certificate is present.

c. Type keytool -exportcert -rfc -keystore <path to backup cacerts> -alias <alias> -storepass changeit -file <output file> to export certificates from the cacerts backup database (a worked example follows these steps).

d. From the directory you ran the commands from, copy the filename for the certificate to your jump server or a different destination using WinSCP.

e. To upload the file to SSL Trusted Certificates, click Edit from the SSL Trusted Certificates section.
f. Browse to the file that was uploaded and click Save.
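As an illustration, a hedged worked example of the export in step c, assuming the backup database was copied to /tmp/cacerts and the certificate alias is pfxm_cert (both names are hypothetical):

keytool -exportcert -rfc -keystore /tmp/cacerts -alias pfxm_cert -storepass changeit -file /tmp/pfxm_cert.crt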

Add or modify user accounts Add or modify user accounts using PowerFlex Manager.

Steps

Log in to PowerFlex Manager.

If you want to ... Do this...

Create a user a. On the menu bar, click Settings and click Users.
b. On the Users page, click Create.
c. Enter the current password (the password of the user that is currently logged in).
d. Enter a unique User Name to identify the user account.
e. Enter a Password that the user enters to access PowerFlex Manager. Confirm the password. The password length must be between 8 and 32 characters and must include at least one number, one capital letter, and one lowercase letter.

f. Enter the user's First Name and Last Name.
g. From the Role drop-down list, select one of the following roles:

Administrator
Standard
Read-only
Operator

h. Enter the Email address and Phone number for contacting the user.
i. Select Enable User to create the account with an Enabled status, or clear this option to create the account with a Disabled status.

j. Click Save.

Edit a user a. On the menu bar, click Settings and click Users.
b. On the Users page, select the user account that you want to edit.
c. Click Edit. For security purposes, confirm your password before editing the user.
d. You can modify the following user account information from this window:

First name
Last name
Role
Email
Phone



Enable user (If you select the Enable user check box, the user can log in to PowerFlex Manager. If you disable the check box, the user cannot log in.)

e. Click Save.

Delete a user a. On the menu bar, click Settings and click Users.
b. On the Users page, select the user account that you want to delete.
c. Click Delete. Click Yes in the warning message to delete the account.

Enable or disable a user a. On the menu bar, click Settings and click Users.
b. On the Users page, select one or more user accounts to enable or disable.
c. In the menu, click Enable or Disable to update the state to enabled or disabled, as selected.

NOTE: For an already enabled user account State, the Enable option in the menu is deactivated, and for an already disabled user account State, the Disable option in the menu is deactivated.

Assigning users to services You can assign users to services using PowerFlex Manager.

Steps

1. Log in to PowerFlex Manager.

2. On the menu bar, click Services.

3. On the Services page, click the service, and in the right pane of the Service Details page, click View Details.

4. On the Service Details page, in the right pane, click Edit.

5. Specify permissions for the service under Who should have access to the service deployed from this template?:

Only PowerFlex administrators - Only users with administration rights have access to the service.
PowerFlex Manager administrators and specific standard and operator users - This option allows you to restrict access to specific users.
PowerFlex Manager administrators and all standard and operator users - Allows access for all users.

6. Click Save.

Recovering a lost password To recover a lost password, contact Dell Technologies Support.

Access switch password management Use this procedure to change the password of the access switches.

About this task

During predeployment, the person doing the installation sets the password for the access switches. During deployment of PowerFlex Manager, you also set the switch password in PowerFlex Manager. When the access switch password is changed after deployment, you must also change the access switch password within PowerFlex Manager to maintain manageability by PowerFlex Manager.

The terms <current_password> and <new_password> are used to refer to the current and new passwords, respectively.

Steps

1. Change the PowerFlex Manager access switch password by doing the following:

a. In PowerFlex Manager, go to Settings > Credentials Management, select the access switch credential, click Edit, change the Password to the <new_password>, and click Save. See Credentials management for more information.


2. Change the password of the access switches by doing the following:

a. Use an SSH client program like PuTTY to log in to an access switch console. b. Type the following commands:

Switch type: Dell EMC PowerSwitch

configure
username admin password <new_password> privilege 15
end
copy running-config startup-config

Switch type: Cisco Nexus

configure
username admin password 0 <new_password>
end
copy running-config startup-config

3. Test the changes.

a. In the PowerFlex Manager GUI, go to the Resources page, select the access switches, and click Run Inventory.
b. To confirm that the process completes with no errors, check Settings > Logs.

VMware vCenter password management Use this procedure to change the VMware vCenter password.

About this task

During deployment of PowerFlex appliance, the person doing the installation sets the VMware vCenter password in PowerFlex Manager. When the VMware vCenter password is changed after deployment, the password must also be changed within PowerFlex Manager to maintain manageability by PowerFlex Manager.

The terms <current_password> and <new_password> are used to refer to the current and new passwords, respectively.

Steps

1. Change the PowerFlex Manager VMware vCenter password by completing the following:

a. In PowerFlex Manager, go to Settings > Credentials Management, select the VMware vCenter credential, click Edit, change the Password to the <new_password>, and click Save. See Credentials management for more information.

2. Change the VMware vCenter password by completing the following:

a. Log in to the VMware vCenter web interface using the <current_password>.
b. Click the username in the upper right of the page and select Change password.
c. Type the <current_password> and the <new_password> and click OK.

3. Test the changes. Even though the cluster is operating properly, because of the time between changing the password in PowerFlex Manager and changing the password in VMware vCenter, nodes may show a critical error on the Services page in PowerFlex Manager. The following steps will return the nodes to the healthy state.

a. In the PowerFlex Manager GUI, go to the Resources page, select vCenter, and click Run Inventory.
b. To confirm that the process completes with no errors, check Settings > Logs.
c. In the PowerFlex Manager GUI, go to the Services page for the ESXi nodes and click Update Service Details.
d. After Update Service Details completes, confirm that all cluster objects report as healthy (green check mark).


VMware ESXi operating system password management Use this procedure to change the VMware ESXi operating system root password.

About this task

During deployment of PowerFlex appliance, the person completing the installation sets the VMware ESXi operating system password in PowerFlex Manager. When PowerFlex Manager deploys VMWare ESXi, it sets the password in the operating system. When the VMware ESXi operating system password is changed after deployment, the ESXi operating system password must also be changed within PowerFlex Manager to maintain manageability by PowerFlex Manager.

In the following procedure, the terms <current_password> and <new_password> are used to refer to the current and new passwords, respectively.

Steps

1. To change the PowerFlex Manager VMware ESXi operating system password, complete the following:

a. In PowerFlex Manager, go to Settings > Credential Management, select the VMware ESXi operating system credential, click Edit, change the Password to the <new_password>, and click Save. See Credentials management for more information.

2. To change the VMware ESXi operating system root password on every hyperconverged or PowerFlex compute-only node, complete the following:

a. Log in to the VMware ESXi web interface on the PowerFlex node using root and the <current_password>.
b. In the upper right of the page, click root@<node IP or hostname>, and select Change password.
c. Type the <new_password> twice and click Change password.
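Alternatively, the root password can be changed from an SSH session on the node; a hedged sketch, assuming SSH is enabled and the esxcli system account namespace is available in your ESXi version (<new_password> is a placeholder):

esxcli system account set -i root -p <new_password> -c <new_password>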

3. Test the changes: Even though the cluster is operating properly, because of the time between changing the password in PowerFlex Manager and changing the password in the VMware ESXi operating system, nodes may show a critical error on the Services page in PowerFlex Manager. The following steps return the nodes to the healthy state.

a. In the PowerFlex Manager GUI, go to Resources page, select the VMware ESXi nodes and VMware vCenter and click Run Inventory.

b. To confirm that the process completes with no errors, check Settings > Logs.
c. In the PowerFlex Manager GUI, go to the Services page for the VMware ESXi nodes and click Update Service Details.
d. After the Update Service Details process completes, confirm that all cluster objects report as healthy (green check mark).

Adding a non-root user to VMware ESXi

Steps

1. Log in to the VMware ESXi UI (paste the node IP in a browser and log in as the root user).

2. Go to Manage > Security & Users > Users

3. Click Add user.

4. Provide username and password.

5. Confirm password and click Add.

6. Go to Host > Actions > Permissions.

7. Click Add User and select the user you created from the drop-down menu.

8. Select administrator in the second menu and click Add user.

Once the user is added, you can see the user with the administrator role.

9. Verify the login with new user.

Minimum VMware vCenter permissions PowerFlex Manager supports managing VMware vCenter objects without root permissions. There are three PowerFlex Manager VMware vCenter management modes available:


Monitoring mode
Lifecycle mode
Management mode

These procedures provide information for creating VMware vCenter user accounts with access for each of the listed modes. The VMware vCenter default permissions meet the specified requirements, so no additional permission changes are needed.

Create a user in monitoring mode

Steps

1. Log in to VMware vSphere client, select Administration > Users and Groups.

2. Click Add User to create a user account and enter the username and password.

3. In root view of the VMware vSphere client, click Administration and select Roles.

a. Create a new role and select the following permissions:

Profile-driven storage > Profile-driven storage view
VM > Read customization specifications (under provisioning)

b. Assign a name to the new role.

4. Click Hosts and Clusters and right-click the VMware vCenter.

a. Choose Add permission and select the user account previously created.
b. Select the name of the role you created and check Propagate to children.

5. Log in to PowerFlex Manager.

6. Click Settings and create a new credential of type vCenter.

NOTE: Ensure the username and password coincide with the vSphere credentials created earlier.

7. Create a user credential for the vCenter server that matches the account created in vCenter earlier.

8. Add the vCenter server object to the inventory using those credentials from PowerFlex Manager. For more information on the PowerFlex Manager credential creation see the PowerFlex Manager online help.

Create a user in lifecycle mode

Steps

1. Log in to VMware vSphere client, select Administration > Users and Groups.

2. Click Add User to create a user account and enter the username and password.

3. In root view of the VMware vSphere client, click Administration and select Roles.

a. Create a new role and select the following permissions:

Profile-driven storage > Profile-driven storage view
VM > Read customization specifications (under provisioning)
Host: Connection, firmware, maintenance, power, query patch, system management, system resources

b. Assign a name to the new role.

4. Click Hosts and Clusters and right-click the VMware vCenter.

a. Choose Add permission and select the user account previously created.
b. Select the name of the role you created and check Propagate to children.

5. Log in to PowerFlex Manager.

6. Click Settings and create a new credential of type vCenter.

NOTE: Ensure the username and password coincide with the vSphere credentials created earlier.

7. Create a user credential for the vCenter server that matches the account created in vCenter earlier.

8. Add the vCenter server object to the inventory using those credentials from PowerFlex Manager. For more information on the PowerFlex Manager credential creation see the PowerFlex Manager online help.


Create a user in managed mode

Use this procedure to create a VMware vCenter role with the permissions required to run PowerFlex Manager in managed mode. You can create a role using the default permissions or by selecting granularly defined permissions.

Steps

1. To use the default permissions:

a. Log in to the VMware vSphere client, select Administration > Users and Groups.
b. Click Add User to create a user account and enter the username and password.
c. In root view of the VMware vSphere client, click Administration and select Roles.
d. Click Hosts and Clusters and right-click the VMware vCenter.

i. Choose Add permission and select the user account previously created.
ii. Select the name of the role you created and check Propagate to children.

e. Log in to PowerFlex Manager.
f. Click Settings and create a new credential of type vCenter.

NOTE: Ensure the username and password coincide with the vSphere credentials created earlier.

g. Create a user credential for the vCenter server that matches the account created in vCenter earlier.
h. Add the vCenter server object to the inventory using those credentials from PowerFlex Manager. For more information on PowerFlex Manager credential creation, see the PowerFlex Manager online help.

2. To use more granular permissions:

a. Log in to the VMware vSphere client, select Administration > Users and Groups.
b. Click Add User to create a user account and enter the username and password.
c. In root view of the VMware vSphere client, click Administration and select Roles.
d. Create a new role and select the following permissions:

Category Permissions to select

Alarms Acknowledge alarm Create alarm Disable alarm action Modify alarm Remove alarm Set alarm status

Permissions Modify permission Modify privilege Modify role Reassign role permissions

AutoDeploy Host AssociateMachine Image Profile Create Edit Rule Create Delete Edit RuleSet Activate Edit

Certificates Manage certificates

Certificate Management Create/Delete (Admins priv) Create/Delete (below Admins priv)


Category Permissions to select

CNS Searchable

Content Library Add library item Create a subscription for a published library Create local library Create subscribed library Delete library item Delete local library Delete subscribed library Delete subscription of a published library Download files Evict library item Evict subscribed library Import storage Probe subscription information Publish a library item to its subscribers Publish a library to its subscribers Read storage Sync library item Sync subscribed item Type introspection Update configuration settings Update files Update library Update library item Update local library Update subscribed library Update subscription of a published library View configuration settings

Content Add disk Clone Decrypt Direct Access Encrypt Encrypt new Manage KMS Manage encryption policies Manage keys Migrate Recrypt Register VM Register host

dVPort group Create Delete IPFIX operation Modify Policy operation Scope operation

Distributed switch Create Delete Host operation IPFIX operation Modify Move


Category Permissions to select

Network I/O control operation Policy operation Port configuration operation Port setting operation VSPAN operation

Datacenter Create datacenter Move datacenter Network protocol profile configuration Query IP pool allocation Reconfigure datacenter Release IP allocation Remove datacenter Rename datacenter

Datastore Allocate space Browse datastore Configure datastore Low level file operations Move datastore Remove datastore Remove file Rename datastore Update virtual machine files Update virtual machine metadata

ESX Agent Manager Config Modify View

Extension Register extension Unregister extension Update extension

External stats provider Register Unregister Update

Folder Create folder Delete folder Move folder Rename folder

Global Act as vCenter Server Cancel task Capacity planning Diagnostics Disable methods Enable methods Global tag Health Licenses Log event Manage custom attributes Proxy Script action Service managers Set custom attribute


Category Permissions to select

Settings System tag

Health update provider Register Unregister Update

Host CIM CIM interaction Configuration Advanced settings Authentication Store Change PciPassthru settings Change SNMP settings Change date and time settings Change settings Connection Firmware Hyperthreading Image configuration Maintenance Memory configuration NVDIMM Network Configuration Power Quarantine Query patch Security profile and firewall Storage partition configuration System Management System resources Virtual machine autostart configuration Inventory Add host to cluster Add standalone host Create cluster Modify cluster Move cluster or standalone host Move host Remove cluster Remove host Rename cluster Local operations Add host to vCenter Create virtual machine Delete virtual machine Manage user groups Reconfigure virtual machine vSphere Replication Manage replication

vSphere Tagging Assign or Unassign vSphere tag Create vSphere tag Create vSphere tag category Delete vSphere tag Delete vSphere tag category Edit vSphere tag Edit vSphere tag category Modify UsedBy Field for Category Modify UsedBy Field for Tag

Network Assign network Configure Move network Remove

Performance Modify intervals

Host Profile Clear Create Delete Edit Export View

Resource Apply recommendation Assign vApp to resource pool Assign virtual machine to resource pool Create resource pool Migrate powered off virtual machine Migrate powered on virtual machine Modify resource pool Move resource pool Query vMotion Remove resource pool Rename resource pool

Scheduled task Create tasks Modify task Remove task Run task

Sessions Impersonate user Message Validate session View and stop sessions

Datastore cluster Configure a datastore cluster

Profile-driven storage Profile-driven storage update Profile-driven storage view

Storage views Configure service View

Tasks Create task Update task

Transfer service Manage Monitor

vApp Add virtual machine Assign resource pool Assign vApp Clone Create Delete Export Import Move Power off Power on Rename Suspend Unregister View OVF environment vApp application configuration vApp instance configuration vApp managedBy configuration vApp resource configuration

VMware vSphere Update Manager Configure Configure service Manage baseline Attach baseline Manage baseline Manage Patches and Upgrades Remediate to Apply Patches, Extensions, and Upgrades Scan for Applicable Patches, Extensions, and Upgrades Stage Patches and Extensions View Compliance Status Upload file Upload upgrade images and offline bundles

Virtual machine Change Configuration Acquire disk lease Add existing disk Add new disk Add or remove device Advanced configuration Change CPU count Change Memory Change Settings Change Swapfile placement Change resource Configure Host USB device Configure Raw device Configure managedBy Display connection settings Extend virtual disk Modify device settings Query fault tolerance compatibility Query unowned files Reload from path Remove disk Rename Reset guest information Set annotation Toggle disk change tracking Toggle fork parent Upgrade virtual machine compatibility Edit Inventory Create from existing Create new Move Register Remove Unregister Guest operations Guest operation alias modification Guest operation alias query Guest operation modifications Guest operation program execution Guest operation queries Interaction Answer question Backup operation on virtual machine Configure CD media Configure floppy media Connect devices Console interaction Create screenshot Defragment all disks Drag and drop Guest operating system management by VIX API Inject USB HID scan codes Install VMware Tools Pause or Unpause Perform wipe or shrink operations Power off Power on Record session on virtual machine Replay session on virtual machine Reset Resume Fault Tolerance Suspend Suspend Fault Tolerance Test failover Test restart Secondary VM Turn off Fault Tolerance Turn on Fault Tolerance Provisioning Allow disk access Allow file access Allow read-only disk access Allow virtual machine download Allow virtual machine files upload Clone template Clone virtual machine Create template from virtual machine Customize guest Deploy template Mark as template Mark as virtual machine Modify customization specification Promote disks Read customization specifications Service configuration Allow notifications Allow polling of global event notifications Manage service configurations Modify service configuration Query service configurations Read service configuration Snapshot management Create snapshot Remove snapshot Revert to snapshot vSphere Replication Configure replication Manage replication Monitor replication

vSAN Cluster ShallowRekey

vService Create dependency Destroy dependency Reconfigure dependency configuration Update dependency

e. Assign a name to the new role. f. Click Hosts and Clusters and right-click the VMware vCenter.

i. Choose Add permission and select the user account previously created. ii. Select the name of the role you created and check Propagate to children.

g. Log in to PowerFlex Manager. h. Click Settings and create a new credential of type vCenter.

NOTE: Ensure the username and password coincide with the vSphere credentials created earlier.

i. Create a user credential for the vCenter server that matches the account created in vCenter earlier.
j. Add the vCenter server object to the inventory using those credentials from PowerFlex Manager. For more information on PowerFlex Manager credential creation, see the PowerFlex Manager online help.

Windows server operating system password management Use this procedure to change the Windows server operating system password.

About this task

The terms <current password> and <new password> are used to refer to the current and new passwords, respectively.

Steps

1. Log in to PowerFlex Manager.

2. Click Settings > Credentials Management, select the Windows Compute-Only nodes credential. See Credentials management for more information.

3. Click Edit, change the password to the <new password>, and click Save.

4. To change the Windows Server operating system password on every hyperconverged or compute-only node, complete the following:


5. Log in to the server either directly or by using Remote Desktop.

6. Right-click Computer, and select Manage.

7. Select Configuration.

8. Click Local Users and Groups > Users.

9. Find and right-click the Administrator user.

10. Click Set Password > Proceed.

11. Type and confirm the new password.

12. Test the changes. The PowerFlex nodes may show a critical error. The error is due to the time lag between changing the password in PowerFlex Manager and changing the password in the Windows operating system. The following steps return the PowerFlex nodes to a healthy state:

a. In the PowerFlex Manager GUI, go to Resources, select the Windows CO nodes, and click Run Inventory.
b. To confirm that the process completes with no errors, check Settings > Logs.
c. In the PowerFlex Manager GUI, go to Services for the Windows compute-only nodes and click Update Service Details.
d. After the Update Service Details process completes, confirm that all cluster objects report as healthy (green check mark).

Updating passwords in PowerFlex Manager Use this procedure to update the passwords for iDRAC, VMware ESXi compute-only node, and SVM storage-only node operating systems.

Steps

1. Log in to PowerFlex Manager.

2. Go to the Resources page, select the required node, and click Update Password.

3. In the Update Password wizard, select the component that you want to update password and click Next.

4. In the Select Credentials page, select the new credential from the menu or create a credential.

5. Click Finish.

6. Click Yes to confirm.

Update passwords for the PowerFlex Gateway

Steps

1. Log in to PowerFlex Manager.

2. Go to the Resources page, select PowerFlex Gateway and click Update Password.

3. In the Update Password wizard, select the Component and click Next.

4. Select the new credential (which includes admin and root) from the menu or create a credential and click Finish.

5. Click Yes to confirm.

6. Once completed, verify both PowerFlex Gateway UI and the operating system logins.

Updating passwords for PowerFlex Gateway components

You can update the passwords for one or more PowerFlex Gateway components from PowerFlex Manager.

Steps

1. Log in to PowerFlex Manager.

2. On the menu bar, click Resources.

3. On the All Resources tab, select one or more PowerFlex Gateway components for which you want to change the passwords.

4. Click Update Password.

PowerFlex Manager displays the Update Password wizard.


5. On the Select Components page, select PowerFlex Password.

6. Click Next.

7. On the Select Credentials page, create a credential with a new password or change to a different credential.

a. Open the PowerFlex (n) object under the Type column to see details about each gateway you selected on the Resources page.

b. To create a credential that has the new password, click the plus sign (+) under the Credentials column.

Specify the Credential Name, as well as the Gateway Admin User Name and Gateway OS User Name for which you want to change passwords. Enter the new passwords for both users and confirm these passwords.

c. To modify the credential, click the pencil icon for one of the nodes under the Credentials column and select a different credential.

d. Click Save.

8. Click Finish.

9. Click Yes to confirm.

Results

PowerFlex Manager starts a new job for the password update operation, and a separate job for the device inventory. If PowerFlex Manager is managing a cluster for any of the selected PowerFlex Gateway components, it updates the credentials for the Gateway Admin User and Gateway OS User, as well as any related credentials, such as the LIA and lockbox credentials. If PowerFlex Manager is not managing the cluster, it only updates the credentials for the Gateway Admin User and Gateway OS User.

Updating passwords for system components

You can update the passwords for some system components from PowerFlex Manager.

Steps

1. Log in to PowerFlex Manager.

2. On the menu bar, click Resources.

3. On the All Resources tab, select one or more resources of the same type for which you want to change passwords.

For example, you could select one or more iDRAC nodes or you could select one or more PowerFlex Gateway components.

4. Click Update Password.

PowerFlex Manager displays the Update Password wizard.

5. On the Select Components page, select one or more components for which you want to update a password and click Next.

The component choices vary depending on which resource type you initially selected on the Resources page.

6. On the Select Credentials page, create a credential or change to a different credential having the same username.

7. Click Finish and click Yes to confirm the changes.

Updating passwords for nodes

You can update the passwords for one or more nodes from PowerFlex Manager.

Steps

1. Log in to PowerFlex Manager.

2. On the menu bar, click Resources.

3. On the All Resources tab, select one or more nodes for which you want to change the passwords.

4. Click Update Password.

PowerFlex Manager displays the Update Password wizard.

5. On the Select Components page, specify which passwords you want to update for the selected nodes by clicking one or more of the following check boxes:

iDRAC Password
Node Operating System Password
SVM Operating System Password

6. Click Next.

7. On the Select Credentials page, create a credential with a new password or change to a different credential.

a. Open the iDRAC (n) object under the Type column to see details about each node you selected on the Resources page.

b. To create a credential that has the new password, click the plus sign (+) under the Credentials column.

Specify the Credential Name and the User Name for which you want to change the password. Enter the new password in the Password and Confirm Password fields.

c. To modify the credential, click the pencil icon for the nodes under the Credentials column and select a different credential.

d. Click Save.

You must perform the same steps for the node operating system and SVM operating system password changes. For a node operating system credential, only the OS Admin credential type is updated.

8. Click Finish.

9. Click Yes to confirm.

Results

PowerFlex Manager starts a new job for the password update operation, and a separate job for the device inventory. The node operating system and SVM operating system components are updated only if PowerFlex Manager is managing a cluster with the operating system and SVM. If PowerFlex Manager is not managing a cluster with these components, these components are not displayed and their credentials are not updated. Credential updates for iDRAC are allowed for managed and reserved nodes only. Unmanaged nodes do not provide the option to update credentials.

Embedded operating system password management Use this procedure to change the embedded operating system root password. The embedded operating system is the Linux operating system on PowerFlex storage-only nodes.

About this task

During deployment of PowerFlex appliance, the person doing the installation sets the embedded operating system password in PowerFlex Manager. When PowerFlex Manager deploys the embedded operating system, it sets the password in the operating system. When the embedded operating system password is changed after deployment, you must also change the embedded operating system password within PowerFlex Manager to maintain manageability by PowerFlex Manager.

The terms <current password> and <new password> are used to refer to the current and new passwords, respectively.

Steps

1. To change the PowerFlex Manager embedded operating system password, complete the following:

a. In PowerFlex Manager, go to Settings > Credential Management, select the embedded operating system credential, click Edit, change the Password to the <new password>, and click Save. See Credentials management for more information.

2. To change the embedded operating system root password on every PowerFlex storage-only node, complete the following:

a. Use an SSH client program like PuTTY to log in as root to the embedded operating system console using the <current password>.

b. Change the embedded operating system root password using passwd command:

[root@node1 ~]# passwd
Changing password for user root.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.

3. Test the changes: Even though the cluster is operating properly, because of the time between changing the password in PowerFlex Manager and changing the password in the embedded operating system, nodes may show a critical error on the Services page in PowerFlex Manager. The following steps return the nodes to the healthy state.


a. In the PowerFlex Manager GUI, go to the Resources page, select the embedded operating system nodes, and click Run Inventory.
b. To confirm that the process completes with no errors, check Settings > Logs.
c. In PowerFlex Manager, go to the Services page for the embedded operating system nodes and click Update Service Details.
d. After the Update Service Details process completes, confirm that all cluster objects report as healthy (green check mark).

Adding users

Steps

1. If you are signed in as the root user, you can create a user at any time by typing: adduser username.

2. If you are a sudo user, add a new user by typing: sudo adduser username.

3. Give your user a password so that they can log in, type: passwd username.

NOTE: If you are signed in as a nonroot user with sudo privileges, add sudo ahead of the command.

4. Type in the password twice to confirm it. The user is set up and ready for use.
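For example, the following commands create a user and set a password; the username jadmin is a hypothetical example:

# As the root user
adduser jadmin
passwd jadmin

# As a nonroot user with sudo privileges
sudo adduser jadmin
sudo passwd jadmin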

Granting sudo privileges to a user

If the new user should have the ability to run commands with root (administrative) privileges, you must give the new user access to sudo.

Steps

To grant sudo privileges, add the user to the wheel group (which gives sudo access to all its members by default) using gpasswd.

If you are logged in as ... Type the following:

root user gpasswd -a username wheel

nonroot user with sudo privileges sudo gpasswd -a username wheel

Now the new user can run commands with administrative privileges. Type sudo ahead of the command that you want to run as an administrator:

sudo some_command

You are prompted to enter the password of the regular user account that you are signed in as. Once the correct password has been submitted, the command you entered is performed with root privileges.
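For example, to grant sudo privileges to a hypothetical user jadmin and confirm that the grant worked:

# As root: add jadmin to the wheel group
gpasswd -a jadmin wheel

# As jadmin: verify sudo access (prints root after the password prompt)
sudo whoami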

Managing users with sudo privileges

About this task

While you can add and remove users from a group (such as wheel) with gpasswd, the command does not have a way to show which users are members of a group. To see which users are part of the wheel group (and thus have sudo privileges by default), you can use the lid utility. lid is normally used to show which groups a user belongs to, but with the -g flag, you can reverse it and show which users belong to a group, using the following command: sudo lid -g wheel. The output shows the usernames and unique identifiers (UIDs) that are associated with the group. This is a good way of confirming that your previous commands were successful, and that the user has the privileges that they need.
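For example, assuming the libuser package that provides lid is installed, the following command lists the members of the wheel group, one username and UID per line (a hypothetical user would appear as jadmin(uid=1001)):

sudo lid -g wheel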


Deleting users

The choice of deletion method depends on whether you are deleting the user and user files or the user account only.

Steps

1. SSH to the server and log in as root.

2. In the command prompt, choose either of the following:

If you want to delete the user ... Type the following:

without deleting any of their files userdel username

home directory along with the user account itself userdel -r username

NOTE: Add sudo ahead of the command if you are signed in as a nonroot user with sudo privileges.

With either command, the user is automatically removed from any groups that they were added to. This includes the wheel group if they were given sudo privileges. If you later add another user with the same name, they have to be added to the wheel group again to gain sudo access.
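For example, to delete a hypothetical user jadmin:

# Delete the account but keep /home/jadmin and other user files
userdel jadmin

# Delete the account along with the home directory and mail spool
userdel -r jadmin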

Presentation server root password management Use this procedure to change the presentation server root password.

About this task

During deployment of PowerFlex appliance, the person doing the installation sets the presentation server root password in PowerFlex Manager. When PowerFlex Manager deploys the presentation server, it sets the password in the operating system. When the presentation server password is changed after deployment, the presentation server password must also be changed within PowerFlex Manager to maintain manageability by PowerFlex Manager.

The terms <current password> and <new password> are used to refer to the current and new passwords, respectively.

Steps

1. To change the PowerFlex Manager presentation server root password, do the following:

a. In PowerFlex Manager, go to Settings > Credential Management, select the presentation server credential, click Edit, change the Password to the <new password>, and click Save. See Credentials management for more information.

2. To change the presentation server root password, do the following:

a. Use an SSH client program like PuTTY to log in as root to the presentation server using the <current password>.

b. Change the presentation server root password using passwd command:

[root@presentation-server ~]# passwd
Changing password for user root.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.

Red Hat Enterprise Linux user and password management

Steps

1. To create a new user, SSH to the jump server, log in as root, and type useradd <options> username.

Where <options> are command-line options as outlined in the following table:


Option Description

-c "comment" comment can be replaced with any string. This option is generally used to specify the full name of a user.

-d home_directory Home directory to be used instead of default /home/username/.

-e date Date for the account to be disabled in the format YYYY-MM-DD.

-f days Number of days after the password expires until the account is disabled. If 0 is specified, the account is disabled immediately after the password expires. If -1 is specified, the account is not disabled after the password expires.

-g group_name Group name or group number for the user's default (primary) group. The group must exist prior to being specified here.

-G group_list List of additional (supplementary, other than default) group names or group numbers, separated by commas, of which the user is a member. The groups must exist prior to being specified here.

-m Create the home directory if it does not exist.

-M Do not create the home directory.

-N Do not create a user private group for the user.

-p password The password encrypted with crypt.

-r Create a system account with a UID less than 1000 and without a home directory.

-s shell User's login shell, which defaults to /bin/bash.

-u uid User ID for the user, which must be unique and greater than 999.

2. By default, useradd creates a locked user account. To unlock the account, run the following command as root to assign a password: passwd username.
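For example, the following hypothetical commands combine several of the options above to create a user named jadmin with a full-name comment, a home directory, supplementary membership in the wheel group, and /bin/bash as the login shell, and then unlock the account by assigning a password:

useradd -c "Jane Admin" -m -G wheel -s /bin/bash jadmin
passwd jadmin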

Enabling sudo on a user

Steps

To enable sudo for your user on Red Hat Enterprise Linux, add your user ID (uid) to the wheel group:

a. SSH to the jump server, log in, and become root by running su.

b. Type usermod -aG wheel <username>.

c. Log out and log in again.
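For example, for a hypothetical user jadmin:

usermod -aG wheel jadmin
id jadmin    # the group list should now include wheel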

SUSE user and password management

Creating users

About this task

useradd allows you to add users and specify certain criteria such as comments, the user's home directory, shell type, and many other account properties for the SUSE Linux operating system.

Steps

1. SSH to the server, and type: server1:~ # useradd -m -c "<comment>" -s /bin/bash <username>

Where /bin/bash specifies a shell type of bash.

The following table explains what each qualifier is used for:

Qualifier Description

-m This qualifier makes the useradd command create the user's home directory.

-c "test username" This qualifier specifies a comment about the user.

-s /bin/bash This qualifier specifies which shell the user should use.

test The final qualifier is the username of the user.

2. Set the associated password, type: server1:~ # passwd <username>

server1:~ # passwd <username>
Changing password for <username>.
New Password:
Reenter New Password:
Password changed.

Once the password is set, the user can successfully log in to the server.

Deleting users

The command to delete users is userdel. Specify the -r qualifier to also remove the user's home directory and mail spool.

Steps

SSH to the server and type: server1:~ # userdel -r <username>

Once you have issued the userdel command, you will notice that the /home/<username> directory is removed. If you only want to delete the user but leave their home directory intact, you can issue the same command but without the -r qualifier.

Enabling sudo on a user

Steps

1. SSH to the server and log in as root.

2. Type the following: sudo usermod -a -G wheel USERNAME
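To verify the change for a hypothetical user jadmin:

groups jadmin        # the output should include wheel
sudo -l -U jadmin    # as root, list the sudo rules that apply to the user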

Credentials management PowerFlex Manager requires a root-level username and password to access and manage nodes, switches, VMware vCenter, element managers, PowerFlex Gateway, presentation server, and operating system resources.

The Credentials Management page displays the following information about the credentials:

Name - A user-defined name that identifies the credentials.
Type - A type of resource that uses the credential.
Resources - The total number of resources to which the credential is assigned.

From the credential list, click a credential to view its details in the Summary tab:

Name of the user who created and modified the credential.
Date and time that the credential was created and last modified.

On the Credentials Management page, you can:

Create credentials
Edit existing credentials
Delete existing credentials


Restarting the PowerFlex Manager virtual appliance Use this procedure to restart PowerFlex Manager.

About this task

To restart the virtual appliance, you must be a user with the administrator role. The restart operation logs off all other users and cancels any running jobs.

Steps

1. Log in to PowerFlex Manager.

2. On the menu bar, click Settings, and then click Virtual Appliance Management.

3. On the Virtual Appliance Management page, click Reboot Virtual Appliance. A message displays confirming that you want to restart the virtual appliance.

4. Click Yes to confirm. The system restarts.

5. Once the reboot is complete, click Click to log in and provide your credentials.


Deploying PowerFlex nodes using PowerFlex Manager

This section provides steps on how to automate node configuration with PowerFlex Manager. PowerFlex Manager provides two different types of deployment:

Full network automation
Partial network automation

The full network automation feature allows for the configuration of nodes with supported switches, and partial network automation allows for the configuration of nodes with unsupported switches.

If you choose to use partial network automation, you give up the error handling and network automation features that are available with a full network configuration that includes supported switches. It also requires more manual configuration before deployments can proceed successfully.

Deployment modes PowerFlex Manager is deployed in one of three modes: alerting, managed, or lifecycle.

You need a PowerFlex Manager license to deploy PowerFlex Manager in managed mode.

Mode Description

Managed Management and orchestration. PowerFlex Manager has deployed services or imported existing services for elements.

Lifecycle Monitoring, service mode, and compliance upgrade. PowerFlex Manager puts a service in lifecycle mode if the configuration has limited support. All other service operations are blocked.

Alerting Monitoring and alerting. PowerFlex Manager has not deployed services or imported existing services for elements.

Managed mode

PowerFlex Manager managed mode provides all the functionality of alerting mode.

It also provides the following functions:

Deployment and configuration of PowerFlex nodes, switches, operating system provisioning, virtualization, PowerFlex storage, and CloudLink encryption

Maintenance, including drive servicing, service mode, port views, and the ability to take over management of previously deployed PowerFlex rack environments

Upgrade of PowerFlex node BIOS and firmware, PowerFlex, VMware ESXi, CloudLink Center and agents
Expansion, including adding volumes, PowerFlex nodes, and switches to an existing environment

Lifecycle mode

Lifecycle mode allows lifecycle management of node components in an unsupported configuration. In lifecycle mode, a service supports health and compliance monitoring, service mode, and non-disruptive upgrade operations. All other service operations are blocked.

Lifecycle mode controls the operations that can be performed for configurations with limited support, including the following:

No switch configuration
No switch connectivity
Unsupported switch
Invalid server inventory
Missing network settings
Unsupported server configurations
Unsupported NIC or NIC teaming policies
Network configuration without a PXE VLAN setting
PowerFlex MDM cluster without virtual IP addresses
VMware vSphere Cluster Services (vCLS) virtual machines on local storage

PowerFlex Manager also puts a service in lifecycle mode if you select a minimal compliance version that includes firmware only for the service, or if the VMware NSX-T service is configured.

Alerting mode

A PowerFlex Manager deployment is in alerting mode if there are no services. Alerting mode provides a number of functions:

Supports discovery and inventory of system components (PowerFlex nodes, switches, VMware vCenter, CloudLink Center, and PowerFlex Gateway)
Provides performance metrics for PowerFlex nodes, switches, and PowerFlex
Monitors system component health status
Indicates compliance status
Sends alerts to Secure Remote Services and configures node iDRAC to send alerts to PowerFlex Manager

Full network automation Full network automation allows for the configuration of nodes with supported switches.

Full network automation: Deploying a PowerFlex compute-only node with Red Hat Enterprise Linux or CentOS

This procedure describes how to deploy a PowerFlex compute-only node with Red Hat Enterprise Linux or CentOS using the full network automation option with PowerFlex Manager. The full network automation option configures changes to physical switches.

About this task

This procedure steps through how to deploy a service by creating a new template. A sample template can also be used to create a template, but those steps are not shown here.

Prerequisites

To create a new template from a clone, do the following:

1. Log in to PowerFlex Manager, and click Templates > Add a Template to open the Add a Template wizard.
2. Select Clone an existing PowerFlex Manager template.
3. Click ? to access the online help.
4. Follow the instructions on how to add a new template from a sample template.

To download the minimal embedded operating system image file, go to CentOS.org and select the minimal ISO image.

Steps

1. Log in to PowerFlex Manager.

2. Add OS image to the repository.

NOTE: Skip this step if the OS image is already added to the repository.

a. Click Settings from menu bar and click Compliance and OS Repositories. b. Click the OS Image Repositories tab.


c. Click Add to open Add OS Image Repository wizard and enter the following:

OS image information Details

Repository name Enter <repository name>.

Image type Select Red Hat/CentOS 7.

Source path and filename Enter http://<host>/<path>/<filename>.iso.

Username Enter <username>.

Password Enter <password>.

d. To validate that the path is working correctly, click Test Connection. e. Click Add to upload the ISO image.

3. Click Templates.

4. Click Add a Template to open the Add a Template page.

a. In Create a New Template, enter the Template Name. b. Click Next. c. Under Create Template, enter the following details:

Clone template Details

Template name Enter <template name>

Template Category Select Create New Category.

Enter a name in the New Category Name box.

Template Description Enter <template description>

Example: compute-only nodes with RedHat or CentOS

Firmware and software compliance Select the latest Intelligent Catalog version from the list.

Who should have access to the service deployed from this template?

Select from list who should have access to this service template.

5. Click Save.

6. Click Add Node to open the Node wizard and select Full Network Automation.

a. Click Continue. b. Enter the following details:

Node Details

Component name Enter <component name>.

Number of instances Enter <number of instances>.

Related components Select Associate Selected.

Check checkbox for PowerFlex cluster.

c. Click Continue. d. Under OS Settings, enter the following settings:

Description Values

Host name selection Select <appropriate host name selection>.

OS image Select <Red Hat or CentOS Image>.

OS credentials Select <OS credentials>.


Description Values

Timezone Select <timezone>.

NTP server Select <NTP server>.

Use node for Dell EMC PowerFlex Select the checkbox.

PowerFlex role Select Compute Only.

Enable encryption Clear the checkbox.

Switch port configuration Select Port Channel (LACP enabled).

Teaming and bonding configuration Select Mode 4 (IEEE 802.3ad policy).

e. Under Hardware Settings, enter the details within the following table:

Hardware settings Details

Target boot device Select Local Flash Storage for DellEMC PowerFlex.

Node pool Select <node pool>.

f. Under BIOS Settings, enter the details within the following table:

BIOS settings Details

System profile Select Performance.

User accessible USB ports Select All Ports On.

Number of cores per processor Select All.

Virtualization technology Select Enabled.

Logical processor Select Enabled.

Execute disable Select Enabled.

Node interleaving Select Enabled.

g. Under Network Settings, follow the steps below to add interfaces. h. Click Add New Interface to create the first interface. i. Under Interface 1, enter the following details:

Network settings Details

Port layout Select Two port 25 gigabit.

j. Under Port 1, click Choose Networks to open Interface 1 Port 1 Network Configuration window. k. Select the checkboxes for the following networks:

Selected networks Description

Powerflex-mgmt- Powerflex-Management

pxe PXE Network

general purpose General purpose network

l. Click >> to add the selected networks to the right column and click Save. m. Under Port 2, click Choose Networks to open Interface 1 Port 2 Network Configuration window. n. Select the checkboxes for the following networks:

Selected networks Description

powerflex-data1- Powerflex-Data1


Selected networks Description

powerflex-data2- Powerflex-Data2

powerflex-data3- (if required) Powerflex-Data3

powerflex-data4- (if required) Powerflex-Data4

o. Click >> to add the selected networks to the right column and click Save. p. Click Add New Interface to create the second interface. q. Under Interface 2, enter the following details:

Hardware settings Details

Port Layout Select Two port 25 gigabit

r. Under Port 1, click Choose Networks to open Interface 2 Port 1 Network Configuration window. s. Select the checkboxes for the following networks:

Selected networks Description

Powerflex-mgmt- Powerflex-Management

pxe PXE Network

general purpose General purpose network

t. Click >> to add the selected networks to the right column and click Save. u. Under Port 2, click Choose Networks to open Interface 2 Port 2 Network Configuration window. v. Select the checkboxes for the following networks:

Selected networks Description

powerflex-data1- Powerflex-Data1

powerflex-data2- Powerflex-Data2

powerflex-data3- (if required) Powerflex-Data3

powerflex-data4- (if required) Powerflex-Data4

w. Click >> to add the selected networks to the right column and click Save. x. Click Validate Settings. If there are any errors, correct them and click Close. y. Click Save to complete the clone creation.

7. Create the clusters.

a. Click Add Cluster to create PowerFlex Cluster. b. Click Component Name > PowerFlex Cluster. c. Select Associate All or Associate Selected. d. Click Continue. e. Under PowerFlex Settings, enter the details within the following table:

PowerFlex settings Details

Target PowerFlex Gateway Select <Target PowerFlex Gateway>.

f. Click Save.

8. In the Template Information box, click Publish Template.

9. In the pop-up, click Yes.

10. On the Compute Template page, under Template Information, click Deploy and select the following:

Deploy settings Details

Select Published template Select <published template>.


Deploy settings Details

Service name Enter < Service Name>.

Service description Enter < Service Description>.

Firmware and software compliance Select the latest Intelligent Catalog version from list.

Who should have access to the service deployed from this template

Select from list who should have access to this service template.

11. Click Next to go to the Deployment Settings page.

12. Validate settings and click Next to go to the Schedule Deployment page.

13. Leave the default, Deploy Now.

14. Click Next.

15. Verify the summary page and click Finish.

Full network automation: Deploying a PowerFlex storage-only node

This procedure describes how to deploy a PowerFlex storage-only node with Red Hat Enterprise Linux or CentOS using the full network automation option with PowerFlex Manager. The full network automation option configures changes to physical switches.

About this task

This procedure shows how to deploy a service by creating a new template. A sample template can also be used to create a template, but those steps are not shown here.

Prerequisites

To create a new template from a clone, do the following:

1. Log in to PowerFlex Manager, and click Templates > Add a Template to open the Add a Template wizard.
2. Select Clone an existing PowerFlex Manager template.
3. Click ? to access the online help.
4. Follow the instructions on how to add a new template from a sample template.

To download the minimal embedded operating system image file, go to CentOS.org and select the minimal ISO image.

Steps

1. Log in to PowerFlex Manager.

2. Add OS image to the repository.

NOTE: Skip this step if the OS image is already added to the repository.

a. Click Settings from menu bar and click Compliance and OS Repositories. b. Click the OS Image Repositories tab. c. Click Add to open Add OS Image Repository wizard and enter the following:

OS image information Details

Repository name Enter <repository name>.

Image type Select Red Hat/CentOS 7.

Source path and filename Enter http://<host>/<path>/<filename>.iso.

Username Enter <username>.

Password Enter <password>.

d. To validate that the path is working correctly, click Test Connection. e. Click Add to upload the ISO image.

3. Click Templates.


4. Click Add a Template to open the Add a Template page.

a. Enter the Template Name. b. Click Next. c. Under Create Template, enter the following details:

Clone template Details

Template name Enter <template name>

Template Category Select Create New Category.

Enter a name in the New Category Name box.

Template Description Enter <template description>

Example: storage-only nodes with RedHat or CentOS

Firmware and software compliance Select the latest Intelligent Catalog version from the list.

Who should have access to the service deployed from this template?

Select from list who should have access to this service template.

5. Click Save.

6. Click Add Node to open the Node wizard and select Full Network Automation.

a. Click Continue. b. Enter the following details:

Node Details

Component name Enter <component name>.

Number of instances Enter <number of instances>.

Related components Select Associate Selected.

Check checkbox for PowerFlex cluster.

c. Click Continue. d. Under OS Settings, enter the following settings:

Description Values

Host name selection Select <appropriate host name selection>.

OS image Select <Embedded OS Image>.

OS credentials Select <OS credentials>.

Timezone Select <timezone>.

NTP server Select <NTP server>.

Use node for Dell EMC PowerFlex Select the checkbox.

PowerFlex role Select Storage Only.

Enable compression Select the check box (based on your requirement).

Enable encryption Select the check box (based on your requirement).

Enable replication Select the check box (based on your requirement).

Switch port configuration Select Port Channel (LACP enabled).

Teaming and bonding configuration Select Mode 4 (IEEE 802.3ad policy).

e. Under SVM OS Settings, enter the details within the following table:


SVM OS settings Details

Host name selection Select < appropriate host name selection >.

Host name template Enter < host name template >.

OS credentials Select < OS Credentials >.

NTP server Enter < IP address >.

f. Under Hardware Settings, enter the details within the following table:

Hardware settings Details

Target boot device Select Local Flash Storage for DellEMC PowerFlex.

Node pool Select <node pool>.

g. Under BIOS Settings, enter the details within the following table:

BIOS settings Details

System profile Select Performance.

User accessible USB ports Select All Ports On.

Number of cores per processor Select All.

Virtualization technology Select Enabled.

Logical processor Select Enabled.

Execute disable Select Enabled.

Node interleaving Select Enabled.

h. Under Network Settings, follow the steps below to add interfaces. i. Click Add New Interface to create the first interface. j. Under Interface 1, enter the following details:

Network settings Details

Port layout Select Two port 25 gigabit.

k. Under Port 1, click Choose Networks to open Interface 1 Port 1 Network Configuration window. l. Select the checkboxes for the following networks:

Selected networks Description

powerflex-data2- Powerflex-Data2

powerflex-data4- (if required) Powerflex-Data4

Powerflex-mgmt- Powerflex-Management

powerflex-prod- Production Network

m. Click >> to add the selected networks to the right column and click Save. n. Under Port 2, click Choose Networks to open Interface 1 Port 2 Network Configuration window. o. Select the checkboxes for the following networks:

Selected networks Description

powerflex-data1- Powerflex-Data1

powerflex-data3- (if required) Powerflex-Data3

p. Click >> to add the selected networks to the right column and click Save.


q. Click Add New Interface to create the second interface. r. Under Interface 2, enter the following details:

Hardware settings Details

Port Layout Select Two port 25 gigabit.

s. Under Port 1, click Choose Networks to open Interface 2 Port 1 Network Configuration window. t. Select the checkboxes for the following networks:

Selected networks Description

powerflex-data2- Powerflex-Data2

powerflex-data4- (if required) Powerflex-Data4

Powerflex-mgmt- Powerflex-Management

pxe- PXE Network

u. Click >> to add the selected networks to the right column and click Save. v. Under Port 2, click Choose Networks to open Interface 2 Port 2 Network Configuration window. w. Select the checkboxes for the following networks:

Selected networks Description

powerflex-data1- Powerflex-Data1

powerflex-data3- (if required) Powerflex-Data3

x. Click >> to add the selected networks to the right column and click Save. y. Click Validate Settings. If there are any errors, correct them and click Close. z. Click Save to complete the clone creation.

7. Create the clusters.

a. Click Add Cluster to create PowerFlex Cluster. b. Click Component Name > PowerFlex Cluster. c. Select Associate All or Associate Selected. d. Click Continue. e. Under PowerFlex Settings, enter the details within the following table:

PowerFlex settings Details

Target PowerFlex Gateway Select < New Target PowerFlex Gateway VM >.

Protection domain name Select Auto generate protection domain name (Recommended).

Protection domain name template Leave as default: PD-${num}

Acceleration pool name

NOTE: Only available if compression is enabled.

Select Auto generate acceleration pool name (Recommended).

Acceleration pool name template

NOTE: Only available if compression is enabled.

Leave as default: Site-AP-${num}

Storage pool name Select Auto generate storage pool name (Recommended).

Number of storage pools Select < number >.

Storage pool name template Leave as default.

Granularity

NOTE: Only available if compression is enabled.

Select < Fine or Medium >.


PowerFlex settings Details

Enable fault sets Check box if fault sets need to be enabled. NOTE: If the PowerFlex configuration includes fault sets, contact Dell EMC support for assistance. Do not proceed with the procedure until you have received guidance from a support representative.

f. Click Save.

8. In the Template Information box, click Publish Template.

9. In the pop-up, click Yes.

10. On the Storage Template page, under Template Information, click Deploy and select the following:

Deploy settings Details

Select published template Select < Current Name of Template >.

Service name Enter < Service Name >.

Service description Enter < Service Description >.

Firmware and software compliance Select the latest Intelligent Catalog version from list.

Who should have access to the service deployed from this template

Select from list who should have access to this service template.

11. Click Next to go to the Deployment Settings page.

12. Validate settings and click Next to go to the Schedule Deployment page.

13. Leave the default, Deploy Now.

14. Click Next.

15. Verify the summary page and click Finish.

Full network automation: Deploying a VMware ESXi PowerFlex hyperconverged node or PowerFlex compute-only node

This procedure describes how to deploy a PowerFlex hyperconverged node or PowerFlex compute-only node with VMware ESXi using the full network automation option with PowerFlex Manager. The full network automation option configures changes to physical switches.

About this task

This procedure shows how to deploy a service by creating a new template. A sample template can also be used to create a template, but those steps are not shown here.

Prerequisites

To create a new template from a clone, do the following:

1. Log in to PowerFlex Manager, and click Templates > Add a Template to open the Add a Template wizard.
2. Select Clone an existing PowerFlex Manager template, choose Category and Template to be cloned.
3. In Category, select Sample Template and in Template to be cloned select Compute Only ESXi and click Next.
4. Click ? > Help to access the online help for more information on Category and Sample Templates.
5. Follow the instructions on how to add a new template from a sample template.

Steps

1. Log in to PowerFlex Manager.

2. Add OS image to the repository.

NOTE: Skip this step if the OS image is already added to the repository.

a. Click Settings from menu bar and click Compliance and OS Repositories.


b. Click the OS Image Repositories tab. c. Click Add to open Add OS Image Repository wizard and enter the following:

OS image information Details

Repository name Enter <repository name>.

Image type Select ESXi.

Source path and filename Enter http://<host>/<path>/<filename>.iso.

Username Enter <username>.

Password Enter <password>.

d. To validate that the path is working correctly, click Test Connection. e. Click Add to upload the ISO image.

3. Click Templates.

4. Click Add a Template to open the Add a Template page.

a. Enter the Template Name. b. Click Next. c. Under Create Template, enter the following details:

Clone template Details

Template name Enter <template name>

Example name: HCI or CO VMware ESXi Compute Template.

Template Category Select Create New Category.

Enter a name in the New Category Name box.

Template Description Enter <template description>

Example: compute-only or HC nodes with VMware ESXi

Firmware and software compliance Select the latest Intelligent Catalog version from the list.

Who should have access to the service deployed from this template?

Select from list who should have access to this service template.

5. Click Save.

6. Click Add Node to open the Node wizard and select Full Network Automation.

a. Click Continue. b. Enter the following details:

Node Details

Component name Enter <component name>.

Number of instances Enter <number of instances>.

Related components Select Associate Selected.

Check checkbox for PowerFlex cluster.

c. Click Continue. d. Under OS Settings, enter the following settings:

Description Values

Host name selection Select <appropriate host name selection>.


Description Values

Host name template (auto-generated) Enter <host name template>

OS image Select <ESXi Image>.

OS credentials Select <OS credentials>.

NTP server Enter <IP address>

NOTE: Multiple NTP server IP addresses can be entered by using commas.

Use node for Dell EMC PowerFlex Click checkbox.

PowerFlex role Select Compute Only or Hyperconverged.

Enable compression Select appropriate.

Enable encryption Select appropriate.

Enable replication Select appropriate.

Switch port configuration Select Port Channel (LACP enabled).

Teaming and bonding configuration Select Route Based on IP hash.

e. Under SVM OS Settings, enter the details within the following table:

SVM OS settings Details

Host name selection Select < appropriate host name selection >.

Host name template Enter < host name template >.

OS credential Select < OS credentials >.

NTP server Enter < IP address >.

f. Under Hardware Settings, enter the details within the following table:

Hardware settings Details

Target boot device Select Local Flash Storage for DellEMC PowerFlex.

Node pool Select < pool name >.

g. Under BIOS Settings, enter the details within the following table:

BIOS settings Details

System profile Select Performance.

User accessible USB ports Select All ports on.

Number of cores per processor Select All.

Virtualization technology Select Enabled.

Logical processor Select Enabled.

Execute disable Select Enabled.

Node interleaving Select Enabled.

h. Under Network Settings, follow the steps below to add interfaces. i. Click Add New Interface to create the first interface. j. Under Interface 1, enter the following details:


Network settings Details

Port Layout Select Two port 25 gigabit.

k. Under Port 1, click Choose Networks to open Interface 1 Port 1 Network Configuration window. l. Select the checkboxes for the following networks:

Selected networks Description

powerflex-esx-mgmt- Hypervisor management

powerflex-vmotion- Hypervisor migration

Powerflex-mgmt- Powerflex management

pxe- PXE network

m. Click >> to add the selected networks to the right column and click Save. n. Under Port 2, click Choose Networks to open Interface 1 Port 2 Network Configuration window. o. Select the checkboxes for the following networks:

Selected networks Description

powerflex-data1- Powerflex-Data1

powerflex-data2- Powerflex-Data2

powerflex-data3- (if required) Powerflex-Data3

powerflex-data4- (if required) Powerflex-Data4

p. Click >> to add the selected networks to the right column and click Save. q. Click Add New Interface to create the second interface. r. Under Interface 2, enter the following details:

Hardware settings Details

Port Layout Select Two port 25 gigabit.

s. Under Port 1, click Choose Networks to open Interface 2 Port 1 Network Configuration window. t. Select the checkboxes for the following networks:

Selected networks Description

powerflex-esx-mgmt- Hypervisor management

powerflex-vmotion- Hypervisor migration

Powerflex-mgmt- Powerflex management

pxe- PXE network

u. Click >> to add the selected networks to the right column and click Save. v. Under Port 2, click Choose Networks to open Interface 2 Port 2 Network Configuration window. w. Select the checkboxes for the following networks:

Selected networks Description

powerflex-data1- Powerflex-Data1

powerflex-data2- Powerflex-Data2

powerflex-data3- (if required) Powerflex-Data3

powerflex-data4- (if required) Powerflex-Data4

x. Click >> to add the selected networks to the right column and click Save.


y. In Static Routes, select Enabled. z. Click Validate Settings. If there are any errors, correct them and click Close. aa. Click Save to complete the clone creation.

7. Create the clusters.

a. Click Add Cluster to create PowerFlex Cluster. b. Click Component Name > PowerFlex Cluster. c. Select Associate All or Associate Selected. d. Click Continue. e. Under PowerFlex Settings, enter the details within the following table:

PowerFlex settings Details

Target PowerFlex Gateway Select < New Target PowerFlex Gateway VM >.

Protection domain name Select Auto generate protection domain name (Recommended)

Protection domain name template Leave as default: PD-${num}

Acceleration pool name

NOTE: Only available if compression is enabled.

Select Auto generate acceleration pool name (Recommended).

Acceleration pool name template

NOTE: Only available if compression is enabled.

Leave as default: Site-AP-${num}

Storage pool name Select Auto generate storage pool name (Recommended)

Number of storage pools Select < number >.

Storage pool name template Leave as default.

Granularity

NOTE: Only available if compression is enabled.

Select < Fine or Medium >.

Enable fault sets Check box if fault sets need to be enabled. NOTE: If the PowerFlex configuration includes fault sets, contact Dell EMC support for assistance. Do not go to the procedure until you have received guidance from a support representative.

f. Click Save.

8. Create VMware Cluster.

a. Click Add Cluster to create VMware Cluster. b. Select VMware Cluster for the component name. c. Select the Associate All option. d. Click Continue. e. Under Cluster Settings, enter the details within the following table:

Cluster settings Details

Target virtual machine manager Select < vCenter Server hostname >.

Data center name Select < Create New Datacenter or an existing Datacenter >.

New datacenter name Select < Datacenter Name >.

Cluster name Select < Create New Cluster or an existing Cluster >.

New cluster name Select Cluster Name.

Cluster HA enabled Select checkbox to enable.


Cluster settings Details

Cluster DRS enabled Select checkbox to enable.

f. Under vSphere VDS Settings, click Configure VDS Settings button to open Configure VDS Settings wizard. g. Select Existing port group or create new port group. h. Assuming deployment is standard, select Auto Create All Port Groups. i. Click Next to VDS Naming page. j. Enter the details within the following table:

VDS Label Details

VDS1 VDS Name Enter <VDS1 name>

VDS2 VDS Name Enter <VDS2 name>

9. Click Next to Port Group Select page.

10. Validate the port group names that were automatically generated.

11. Click Next to continue to the Port Group Select page.

12. Validate that the appropriate port groups are created on the correct VDS.

13. In Advanced Networking Selection, select the appropriate MTU values for the port group and click Next.

14. Click Finish.

15. In the Confirm pop-up, click Yes.

16. Click Save.

17. In the Template Information box, click Publish Template.

18. In the pop-up, click Yes.

19. On the Compute Template page, under Template Information, click Deploy and select the following:

Deploy settings Details

Select Published template Select <published template>.

Service name Enter < Service Name >.

Service description Enter < Service Description >.

Firmware and software compliance Select the latest Intelligent Catalog version from list.

Who should have access to the service deployed from this template

Select from list who should have access to this service template.

20. Click Next to go to the Deployment Settings page.

21. Validate settings and click Next to go to the Schedule Deployment page.

22. Leave the default, Deploy Now.

23. Click Next.

24. Verify the summary page and click Finish.

Adding volumes to a PowerFlex hyperconverged node or PowerFlex compute-only node

Steps

1. In PowerFlex Manager, click Services.

2. Click Service Name to open service.

3. In the Service Information action box, click Add Resources > Add Volumes in Resource Actions to open the Add Volume wizard.

4. In the Add Volume wizard, select Add existing volume or Create new volume and click Next.


5. In the Create New Volume page, click Add New Volume and select options and enter the following details:

Volume 1 Details

Volume name Select Create New Volume.

New volume name Enter < New Volume Name >.

Storage pool Select <storage pool>.

Volume size (GB) Enter < Size Number >.

Datastore name Select <datastore name>.

New datastore name Enter < New Datastore Name>.

Volume type Select Thick or Thin.

6. Repeat Steps 1 through 5 for each additional volume.

7. Click Save.

Partial network automation Partial network automation allows for the configuration of nodes with unsupported switches.

Partial network automation does not have the error handling and network automation features that are available with a full network configuration that includes supported switches. It requires more manual configuration before deployments can proceed successfully.

Partial network automation: Deploying a PowerFlex compute-only node with Red Hat Enterprise Linux or CentOS

This procedure describes how to deploy a PowerFlex compute-only node with Red Hat Enterprise Linux or CentOS using partial network automation option with PowerFlex Manager.

About this task

The partial network automation option does not configure any changes to physical switches. The customer must configure the required switch port changes before deploying this service. See the switch example configurations in Customer Switch Port Configuration Examples.

The procedure steps through how to deploy a service by creating a new template. A sample template can also be used to create a template, but those steps are not shown here.

Prerequisites

To create a new template from a clone, do the following:

1. Log in to PowerFlex Manager, and click Templates > Add a Template to open the Add a Template wizard.
2. Select Clone an existing PowerFlex Manager template.
3. Click ? to access the online help.
4. Follow the instructions on how to add a new template from a sample template.

To download the minimal embedded operating system image file, go to CentOS.org and select the minimal ISO image.

Steps

1. Log in to PowerFlex Manager.

2. Add OS image to the repository.

NOTE: Skip this step if the OS image is already added to the repository.

a. Click Settings from menu bar and click Compliance and OS Repositories. b. Click the OS Image Repositories tab.


c. Click Add to open Add OS Image Repository wizard and enter the following:

OS image information Details

Repository name Enter <repository name>

Image type Select Red Hat/CentOS 7

Source path and filename Enter http://<host>/<path>/<filename>.iso

Username Enter <username>

Password Enter <password>

d. To validate that the path is working correctly, click Test Connection. e. Click Add to upload the ISO image.

3. Click Templates.

4. Click Add a Template to open the Add a Template page.

a. Enter the Template Name. b. Click Next. c. Under Create Template, enter the following details:

Clone template Details

Template name Enter < template name >.

Example name: CO redhat or CentOS Compute Template.

Template Category Select Create New Category.

Enter a name in the New Category Name box.

Template Description Enter < template description >.

Example: compute-only nodes with RedHat or CentOS

Firmware and software compliance Select the latest Intelligent Catalog version from the list.

Who should have access to the service deployed from this template?

Select from list who should have access to this service template.

5. Click Save.

6. Click Add Node to open the Node wizard and select Partial Network Automation.

a. Click Continue. b. Enter the following details:

Node Details

Component name Enter < component name >.

Number of instances Enter < number of instances >.

Related components Select Associate Selected.

Check the checkbox for the PowerFlex cluster.

c. Click Continue. d. Under OS Settings, enter the following settings:

Description Values

Host name selection Select < appropriate host name selection >

OS image Select < Red Hat or CentOS Image>

OS credentials Select < OS credentials >

Use node for Dell EMC PowerFlex Select the checkbox

PowerFlex role Select Compute Only

Switch port configuration Select Port Channel (LACP enabled)

Teaming and bonding configuration Select Mode 4 (IEEE 802.3ad policy)

e. Under Hardware Settings, enter the details within the following table:

Hardware settings Details

Target boot device Select Local Flash Storage for Dell EMC PowerFlex

Node pool Select < compute pool >

f. Under BIOS Settings, enter the details within the following table:

BIOS settings Details

System profile Select Performance

User accessible USB ports Select All Ports On

Number of cores per processor Select All

Virtualization technology Select Enabled

Logical processor Select Enabled

Execute disable Select Enabled

Node interleaving Select Enabled

g. Under Network Settings, follow the steps below to add interfaces. h. Click Add New Interface to create the first interface. i. Under Interface 1, enter the following details:

Network settings Details

Port Layout Select Two port 25 gigabit

j. Under Port 1, click Choose Networks to open Interface 1 Port 1 Network Configuration window. k. Select the checkboxes for the following networks:

Selected networks Description

powerflex-data2- Powerflex-Data2

powerflex-data4- (if required) Powerflex-Data4

Powerflex-mgmt- Powerflex-Management

pxe- PXE Network

l. Click >> to add the selected networks to the right column and click Save. m. Under Port 2, click Choose Networks to open Interface 1 Port 2 Network Configuration window. n. Select the checkboxes for the following networks:

Selected networks Description

powerflex-data1- Powerflex-Data1

powerflex-data3- (if required) Powerflex-Data3

o. Click >> to add the selected networks to the right column and click Save. p. Click Add New Interface to create the second interface. q. Under Interface 2, enter the following details:

Network settings Details

Port Layout Select Two port 25 gigabit

Redundancy Leave the default (checkbox cleared).

r. Under Port 1, click Choose Networks to open the Interface 2 Port 1 Network Configuration window. s. Select the checkboxes for the following networks:

Selected networks Description

powerflex-data2- Powerflex-Data2

powerflex-data4- (if required) Powerflex-Data4

Powerflex-mgmt- Powerflex-Management

pxe- PXE Network

t. Click >> to add the selected networks to the right column and click Save. u. Under Port 2, click Choose Networks to open Interface 2 Port 2 Network Configuration window. v. Select the checkboxes for the following networks:

Selected networks Description

powerflex-data1- Powerflex-Data1

powerflex-data3- (if required) Powerflex-Data3

w. Click >> to add the selected networks to the right column and click Save. x. Click Validate Settings. If there are any errors, correct them and click Close. y. Click Save to complete the clone creation.

7. Create the cluster.

a. Click Add Cluster. b. Click Component Name > PowerFlex Cluster. c. Select Associate All or Associate Selected. d. Click Continue. e. Under PowerFlex Settings, enter the details within the following table:

PowerFlex settings Details

Target PowerFlex Gateway Select < New Target PowerFlex Gateway VM >

f. Click Save.

8. In the Template Information box, click Publish Template.

9. In the pop-up, click Yes.

10. On the Compute Template page, under Template Information, click Deploy and select the following:

Deploy settings Details

Select published template Select < published template name >.

Service name Enter < Service Name >.

Service description Enter < Service Description >.

Firmware and software compliance Select the latest Intelligent Catalog version from the list.

Who should have access to the service deployed from this template

Select from the list who should have access to this service template.

11. Click Next to go to the Deployment Settings page.

12. Validate the settings and click Next to go to the Schedule Deployment page.

13. Leave the default, Deploy Now.

14. Click Next.

15. Verify the summary page and click Finish.

Partial network automation: Deploying a PowerFlex storage-only node

This procedure describes how to deploy a PowerFlex storage-only node with the embedded operating system using the partial network automation option with PowerFlex Manager.

About this task

The partial network automation option does not make any configuration changes to physical switches. The customer must configure the switch ports before executing this service. See the switch example configurations in Customer Switch Port Configuration Examples.

This procedure shows how to deploy a service by creating a new template. A sample template can also be used to create a template, but those steps are not shown here.

Prerequisites

To create a new template from a clone, do the following:

1. Log in to PowerFlex Manager, and click Templates > Add a Template to open the Add a Template wizard.
2. Select Clone an existing PowerFlex Manager template.
3. Click ? to access the online help.
4. Follow the instructions on how to add a new template from a sample template.

To download the minimal embedded operating system image file, go to CentOS.org and select the minimal ISO image.

Steps

1. Log in to PowerFlex Manager.

2. Add OS image to the repository.

NOTE: Skip this step if the OS image is already added to the repository.

a. Click Settings from the menu bar and click Compliance and OS Repositories. b. Click the OS Image Repositories tab. c. Click Add to open the Add OS Image Repository wizard and enter the following:

OS image information Details

Repository name Enter < repository name >

Image type Select Red Hat/CentOS 7

Source path and filename Enter http://< host >/< path >/< filename >.iso

Username Enter < username >

Password Enter < password >

d. To validate that the path is working correctly, click Test Connection. e. Click Add to upload the ISO image.

3. Click Templates.

4. Click Add a Template to open the Add a Template page.

a. Enter the Template Name. b. Click Next. c. Under Create Template, enter the following details:

Clone template Details

Template name Enter < template name >.

Example name: embedded os storage Template.

Template Category Select Create New Category.

Enter a name in the New Category Name box.

Template Description Enter < template description >.

Example: storage-only nodes with embedded os

Firmware and software compliance Select the latest Intelligent Catalog version from the list.

Who should have access to the service deployed from this template?

Select from list who should have access to this service template.

5. Click Save.

6. Click Add Node to open the Node wizard and select Partial Network Automation.

a. Click Continue. b. Enter the following details:

Node Details

Component name Enter < component name >.

Number of instances Enter < number of instances >.

Related components Select Associate Selected.

Check the checkbox for the PowerFlex cluster.

c. Click Continue. d. Under OS Settings, enter the following settings:

Description Values

Host name selection Select < appropriate host name selection >

OS image Select < Embedded OS Image >

OS credentials Select < OS credentials >

Timezone Select < timezone >

NTP server Select < NTP server IP >

Use Node for Dell EMC PowerFlex Select the checkbox

PowerFlex role Select Storage Only

Enable compression Select the checkbox

Enable encryption Select the checkbox

Enable replication Select the checkbox

Switch port configuration Select Port Channel (LACP enabled)

Teaming and bonding configuration Select Mode 4 (IEEE 802.3ad policy)

e. Under SVM OS Settings, enter the details within the following table:

SVM OS settings Details

Host name selection Select < appropriate host name selection >

Host name template Enter < host name template >

OS credentials Select < OS Credentials >.

NTP server Enter < IP Address >.

f. Under Hardware Settings, enter the details within the following table:

Hardware settings Details

Target boot device Select Local Flash Storage for Dell EMC PowerFlex

Node pool Select < pool name >

g. Under BIOS Settings, enter the details within the following table:

BIOS settings Details

System profile Select Performance

User accessible USB ports Select All Ports On

Number of cores per processor Select All

Virtualization technology Select Enabled

Logical processor Select Enabled

Execute disable Select Enabled

Node interleaving Select Enabled

h. Under Network Settings, follow the steps below to add interfaces. i. Click Add New Interface to create the first interface. j. Under Interface 1, enter the following details:

Network settings Details

Port Layout Select Two port 25 gigabit

k. Under Port 1, click Choose Networks to open Interface 1 Port 1 Network Configuration window. l. Select the checkboxes for the following networks:

Selected networks Description

powerflex-data2- Powerflex-Data2

powerflex-data4- (if required) Powerflex-Data4

Powerflex-mgmt- Powerflex-Management

powerflex-prod- Production Network

m. Click >> to add the selected networks to the right column and click Save. n. Under Port 2, click Choose Networks to open Interface 1 Port 2 Network Configuration window. o. Select the checkboxes for the following networks:

Selected networks Description

powerflex-data1- Powerflex-Data1

powerflex-data3- (if required) Powerflex-Data3

p. Click >> to add the selected networks to the right column and click Save. q. Click Add New Interface to create the second interface. r. Under Interface 2, enter the following details:

Network settings Details

Port Layout Select Two port 25 gigabit

s. Under Port 1, click Choose Networks to open the Interface 2 Port 1 Network Configuration window. t. Select the checkboxes for the following networks:

Selected networks Description

powerflex-data2- Powerflex-Data2

powerflex-data4- (if required) Powerflex-Data4

Powerflex-mgmt- Powerflex-Management

pxe- PXE Network

u. Click >> to add the selected networks to the right column and click Save. v. Under Port 2, click Choose Networks to open Interface 2 Port 2 Network Configuration window. w. Select the checkboxes for the following networks:

Selected networks Description

powerflex-data1- Powerflex-Data1

powerflex-data3- (if required) Powerflex-Data3

x. Click >> to add the selected networks to the right column and click Save. y. Click Validate Settings. If there are any errors, correct them and click Close. z. Click Save to complete the clone creation.

7. Create the cluster.

a. Click Add Cluster. b. Click Component Name > PowerFlex Cluster. c. Select Associate All or Associate Selected. d. Click Continue. e. Under PowerFlex Settings, enter the details within the following table:

PowerFlex settings Details

Target PowerFlex Gateway Select < New Target PowerFlex Gateway VM >

f. Click Save.

8. In the Template Information box, click Publish Template.

9. In the pop-up, click Yes.

10. On the Storage Template page, under Template Information, click Deploy and select the following:

Deploy settings Details

Select published template Select < published template name >.

Service name Enter < Service Name >.

Service description Enter < Service Description >.

Firmware and software compliance Select the latest RCM version from the list.

Who should have access to the service deployed from this template

Select from the list who should have access to this service template.

11. Click Next to go to the Deployment Settings page.

12. Validate the settings and click Next to go to the Schedule Deployment page.

13. Leave the default, Deploy Now.

14. Click Next.

15. Verify the summary page and click Finish.

Partial network automation: Deploying a VMware ESXi PowerFlex hyperconverged node or PowerFlex compute-only node

This procedure describes how to deploy a PowerFlex hyperconverged node or PowerFlex compute-only node with VMware ESXi using the partial network automation option with PowerFlex Manager.

About this task

This procedure shows how to deploy a service by creating a new template. A sample template can also be used to create a template, but those steps are not shown here.

Prerequisites

To create a new template from a clone, do the following:

1. Log in to PowerFlex Manager and click Templates > Add a Template to open the Add a Template wizard.
2. Select Clone an existing PowerFlex Manager template.
3. Click ? to access the online help.
4. Follow the instructions on how to add a new template from a sample template.

Steps

1. Log in to PowerFlex Manager.

2. Add OS image to the repository.

NOTE: Skip this step if the OS image is already added to the repository.

a. Click Settings from the menu bar and click Compliance and OS Repositories. b. Click the OS Image Repositories tab. c. Click Add to open the Add OS Image Repository wizard and enter the following:

OS image information Details

Repository name Enter < repository name >

Image type Select ESXi

Source path and filename Enter http://< host >/< path >/< filename >.iso

Username Enter < username >

Password Enter < password >

d. To validate that the path is working correctly, click Test Connection. e. Click Add to upload the ISO image.

3. Click Templates.

4. Click Add a Template to open the Add a Template page.

a. In Create a New Template, enter the Template Name. b. Click Next. c. Under Create Template, enter the following details:

Clone template Details

Template name Enter < template name >.

Example name: HCI or CO VMware ESXi Compute Template.

Template Category Select Create New Category.

Enter a name in the New Category Name box.

Template Description Enter < template description >.

Example: compute-only or HC nodes with VMware ESXi

Firmware and software compliance Select the latest RCM version from the list.

Who should have access to the service deployed from this template?

Select from list who should have access to this service template.

5. Click Save.

6. Click Add Node to open the Node wizard and select Partial Network Automation.

a. Click Continue. b. Enter the following details:

Node Details

Component name Enter < component name >.

Number of instances Enter < number of instances >.

Related components Select Associate Selected.

Check the checkbox for the PowerFlex cluster.

c. Click Continue. d. Under OS Settings, enter the following settings:

Description Values

Host name selection Select < appropriate host name selection >

OS image Select < ESXi Image >

OS credentials Select < OS credentials >

NTP server Enter < NTP server IP >

NOTE: Multiple NTP server IP addresses can be entered by using commas.

Use Node For Dell EMC PowerFlex Select the checkbox

PowerFlex Role Select Compute Only or Hyperconverged

Enable compression Select as appropriate.

Enable encryption Select as appropriate.

Enable replication Select as appropriate.

Switch Port Configuration Select Port Channel (LACP enabled)

Teaming and bonding configuration Select Route Based on IP hash

e. Under SVM OS Settings, enter the details within the following table:

SVM OS settings Details

Host name selection Select < appropriate host name selection >

Host name template Enter < host name template >

OS credentials Select < OS Credentials >.

NTP server Enter < IP address >.

f. Under Hardware Settings, enter the details within the following table:

Hardware settings Details

Target boot device Select Local Flash Storage for Dell EMC PowerFlex

Node pool Select < pool name >

g. Under BIOS Settings, enter the details within the following table:

BIOS settings Details

System profile Select Performance

User accessible USB ports Select All ports on

Number of cores per processor Select All

Virtualization technology Select Enabled

Logical processor Select Enabled

Execute disable Select Enabled

Node interleaving Select Enabled

h. Under Network Settings, follow the steps below to add interfaces. i. Click Add New Interface to create the first interface. j. Under Interface 1, enter the following details:

Network settings Details

Port Layout Select Two port 25 gigabit

k. Under Port 1, click Choose Networks to open Interface 1 Port 1 Network Configuration window. l. Select the checkboxes for the following networks:

Selected networks Description

powerflex-esx-mgmt- Hypervisor Management

powerflex-vmotion- Hypervisor Migration

Powerflex-mgmt- Powerflex-Management

pxe- PXE network

m. Click >> to add the selected networks to the right column and click Save. n. Under Port 2, click Choose Networks to open Interface 1 Port 2 Network Configuration window. o. Select the checkboxes for the following networks:

Selected networks Description

powerflex-data1- Powerflex-Data1

powerflex-data2- Powerflex-Data2

powerflex-data3- (if required) Powerflex-Data3

powerflex-data4- (if required) Powerflex-Data4

p. Click >> to add the selected networks to the right column and click Save. q. Click Add New Interface to create the second interface. r. Under Interface 2, enter the following details:

Network settings Details

Port layout Select Two port 25 gigabit.

s. Under Port 1, click Choose Networks to open the Interface 2 Port 1 Network Configuration window. t. Select the checkboxes for the following networks:

Selected networks Description

powerflex-esx-mgmt- Hypervisor Management

powerflex-vmotion- Hypervisor Migration

Powerflex-mgmt- Powerflex-Management

pxe- PXE network

u. Click >> to add the selected networks to the right column and click Save. v. Under Port 2, click Choose Networks to open Interface 2 Port 2 Network Configuration window. w. Select the checkboxes for the following networks:

Selected networks Description

powerflex-data1- Powerflex-Data1

powerflex-data2- Powerflex-Data2

powerflex-data3- (if required) Powerflex-Data3

powerflex-data4- (if required) Powerflex-Data4

x. Click >> to add the selected networks to the right column and click Save. y. Click Validate Settings. If there are any errors, correct them and click Close. z. Click Save to complete the clone creation.

7. Create the clusters.

a. Click Add Cluster to create PowerFlex Cluster. b. Select PowerFlex Cluster for the component name. c. Select Associate All option. d. Click Continue. e. Under PowerFlex Settings, enter the details within the following table:

PowerFlex settings Details

Target PowerFlex Gateway Select < New Target PowerFlex Gateway VM >

f. Click Save.

8. Create VMware Cluster.

a. Click Add Cluster to create the VMware Cluster. b. Select VMware Cluster for the component name. c. Select the Associate All option. d. Click Continue. e. Under Cluster Settings, enter the details within the following table:

Cluster settings Details

Target Virtual Machine Manager Select < vCenter Server hostname >

Data Center Name Select < Create New Datacenter or an existing Datacenter >.

New Datacenter Name Select < Datacenter Name >.

Cluster Name Select < Create New Cluster or an existing Cluster >.

New Cluster Name Select < Cluster Name >.

Cluster HA Enabled Select checkbox to enable.

Cluster DRS Enabled Select checkbox to enable.

f. Under vSphere VDS Settings, click the Configure VDS Settings button to open the Configure VDS Settings wizard. g. Assuming the deployment is standard, select Auto Create All Port Groups or Create New Port Groups. h. Click Next to go to the VDS Naming page. i. Enter the details within the following table:

VDS Label Details

VDS1 VDS Name Enter < VDS1 name >

VDS2 VDS Name Enter < VDS2 name >

9. Click Next to go to the Port Group Select page.

10. Validate the Port Group Names that were automatically generated.

11. Click Next to continue to the Advanced Networking page.

12. Validate that the appropriate port groups are created on the correct VDS.

13. Click Finish.

14. In the Confirm pop-up, click Yes.

15. Click Save.

16. In the Template Information box, click Publish Template.

17. In the pop-up, click Yes.

18. On the Compute Template page, under Template Information, click Deploy and select the following:

Deploy settings Details

Select published template Select < published template name >.

Service name Enter < Service Name >.

Service description Enter < Service Description >.

Firmware and software compliance Select the latest RCM version from the list.

Who should have access to the service deployed from this template

Select from the list who should have access to this service template.

19. Click Next to go to the Deployment Settings page.

20. Validate the settings and click Next to go to the Schedule Deployment page.

21. Leave the default, Deploy Now.

22. Click Next.

23. Verify the summary page and click Finish.

Adding volumes to a PowerFlex hyperconverged node or PowerFlex compute-only node

Steps

1. In PowerFlex Manager, click Services.

2. Click the service name to open the service.

3. In the Service Information action box, click Add Resources > Add Volumes in Resource Actions to open the Add Volume wizard.

4. In the Add Volume wizard, select Add existing volume or Create new volume and click Next.

5. In the Create New Volume page, click Add New Volume, select options, and enter the following details:

Volume 1 Details

Volume name Select Create New Volume.

New volume name Enter < New Volume Name >.

Storage pool Select < storage pool >.

Volume size (GB) Enter < Size Number >.

Datastore name Select < datastore name >.

New datastore name Enter < New Datastore Name>.

Volume type Select Thick or Thin.

6. Repeat Steps 1 through 5 for each additional volume.

7. Click Save.

Restoring the PowerFlex Gateway

Use this procedure when the PowerFlex Gateway has been lost and must be restored.

About this task

The PowerFlex management cluster requires a separate gateway to manage the cluster.

During deployment, PowerFlex Manager sets the same password for the PowerFlex Gateway admin account, the PowerFlex Gateway lockbox, MDMs, and LIA. When restoring a lost PowerFlex Gateway, you must set these passwords in the PowerFlex Gateway to match the PowerFlex Gateway admin password set during deployment (or in the PowerFlex Manager Settings > Credentials Management page) to maintain manageability by PowerFlex Manager.

You must also set the PowerFlex Gateway root password to match the PowerFlex Gateway root password set during deployment (or on the PowerFlex Manager Settings > Credentials Management page) to maintain manageability by PowerFlex Manager.

Prerequisites

You must have the following information available before beginning this procedure. The identifiers in brackets ( ) are used in the procedure to represent the required values.

Description Identifier

PowerFlex Management IP address

PowerFlex Management VLAN

PowerFlex Data 1 IP

PowerFlex management controller 2.0 Management IP

PowerFlex management controller 2.0 Management VLAN

PowerFlex management controller 2.0 Data 1 IP

PowerFlex management controller 2.0 Data 1 VLAN

PowerFlex management controller 2.0 Data 2 IP

PowerFlex management controller 2.0 Data 2 VLAN

PowerFlex management controller 2.0 primary MDM

PowerFlex management controller 2.0 secondary MDM

PowerFlex VLAN

PowerFlex IP

PowerFlex VLAN

PowerFlex Gateway root password

PowerFlex Gateway admin password

Default Gateway IP

DNS Server IP

NTP Server IP

PowerFlex Gateway Domain

PowerFlex Gateway hostname

Primary MDM IP

Secondary MDM IP 1

Secondary MDM IP 2 (if five-node MDM cluster)

Steps

1. Install the PowerFlex Gateway.

a. Install the PowerFlex Gateway OVF and VMDK files. b. Change the root password. c. Configure the PowerFlex Gateway network interfaces. d. Configure the PowerFlex Gateway DNS client. e. Configure the PowerFlex Gateway NTP client. f. Install the Java and PowerFlex Gateway RPMs.

2. Restore the PowerFlex Gateway configuration.

Configure SNMP for PowerFlex Gateway

Perform this procedure to configure SNMP for PowerFlex for the customer and management clusters.

Prerequisites

Ensure that a lockbox exists and that it contains MDM credentials.

Enable the SNMP feature in the gatewayUser.properties file.

Steps

1. Use a text editor to open the gatewayUser.properties file, which is located in the following directory on the PowerFlex installer / PowerFlex Gateway server: Linux: /opt/emc/scaleio/gateway/webapps/ROOT/WEB-INF/classes Windows: C:\Program Files\EMC\ScaleIO\Gateway\webapps\ROOT\WEB-INF\classes\

2. Locate the parameter features.enable_snmp and edit it as follows:

features.enable_snmp=true

3. Add the PowerFlex Manager IP address by editing the parameter snmp.traps_receiver_ip.

The SNMP trap receivers' IP address parameter supports up to two comma-separated or semicolon-separated host names or IP addresses.

4. Optionally change the following parameters:

Option Description

snmp.sampling_frequency The MDM sampling period. The default is 30.

snmp.resend_frequency The frequency of resending existing traps. The default is 0, which means that traps for active alerts are sent every sampling cycle.
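
Taken together, a minimal sketch of the edited parameters in gatewayUser.properties might look like the following; the trap receiver address 100.65.140.10 is a hypothetical example:

features.enable_snmp=true
snmp.traps_receiver_ip=100.65.140.10
snmp.sampling_frequency=30
snmp.resend_frequency=0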

5. Save and close the file.

6. Run the following command to restart the PowerFlex Gateway service:

service scaleio-gateway restart

Installing the PowerFlex Gateway

Use this procedure to install the PowerFlex Gateway for the customer and management clusters.

Steps

1. Log in to PowerFlex Manager.

2. Click Templates > Sample template > Management PowerFlex Gateway > Clone.

3. In Template Name enter a template name.

4. Select a template category from the Template Category list. To create a template category, select Create New Category and enter the Category name.

5. In Template Description enter a description for the template.

6. To specify the version to use for compliance, select the version from the Firmware and Software Compliance list or select Use PowerFlex Manager appliance default catalog.

NOTE: You cannot select a minimal compliance version for a template, since it only includes server firmware updates. The compliance version for a template must include the full set of compliance update capabilities. PowerFlex Manager does not show any minimal compliance versions in the Firmware and Software Compliance list.

7. Indicate who should have access to the service deployed from this template by selecting one of the following options:

Grant access to only PowerFlex Manager administrators.

Grant access to PowerFlex Manager administrators and specific standard and operator users. Click Add Users to add one or more standard and or operator users to the list. Click Remove Users to remove users from the list.

Grant access to PowerFlex Manager administrators and all standard and operator users.

8. On the Additional Settings page, provide new values for the Network Settings, PowerFlex Gateway Settings, and Cluster settings.

9. Click Finish.

10. Once the template is created, click Templates, select the PowerFlex Gateway template, and click Edit.

11. Edit each component (PowerFlex Gateway and VMware Cluster), select the required fields, and click Save.

12. Publish the template.

Installing the PowerFlex Gateway prior to PowerFlex 3.5

Use this procedure to install the PowerFlex Gateway on the PowerFlex management environment.

About this task

Download the PowerFlex Gateway from the Dell support site. The PowerFlex Gateway uses the SVM OVF and VMDK files, which have the filename formats ScaleIOVM_Xnics_x.x.xxxxxxx.xxx.ovf and ScaleIOVM_Xnics_x.x.xxxxxxx.xxx.vmdk.

Steps

1. Download the SVM OVF and VMDK files and save them to a location that is accessible to the vCenter being used to manage the PowerFlex appliance management environment.

2. Log in to VMware vCenter.

3. Deploy the PowerFlex Gateway OVF and VMDK files.

4. Type a unique name for the PowerFlex Gateway VM name and select a location for the VM.

5. Select a compute resource for the PowerFlex Gateway VM.

6. On the Review details page, click Next.

7. On the Select storage page, complete the following:

a. Select Virtual disk provisioning: Thick Provision Lazy Zeroed. b. VM Storage policy: Datastore Default. c. Select a datastore on which to install the VM. Do not install VMs on BOSS cards.

d. Click Next.

8. On the Select networks page:

a. Set VM Networks to the appropriate network. b. Click Next.

9. Review details on Ready to complete page and then click Finish.

10. Wait for the PowerFlex Gateway OVF deployment to complete.

11. Right-click the PowerFlex Gateway VM and select Edit Settings. Set the following:

a. Network adapter 1: b. Network adapter 2: c. Network adapter 3: d. Network adapter 4: (if required) e. Network adapter 5: (if required)

12. Click OK.

Changing the root password on the VM

After deploying the PowerFlex Gateway OVF, you must change the root password of the VM.

Steps

1. Log in to VMware vCenter.

2. Power on the PowerFlex Gateway VM.

3. Use VMware virtual console to connect to the PowerFlex Gateway VM.

4. Log in using these credentials: User is root, password is admin.

5. Use the Linux passwd command to change the default password to the PowerFlex Gateway root password.

6. To log out of the console, type exit.

7. Log in to the root account using the new password to ensure it works.

Configuring the PowerFlex Gateway network interfaces

Use this procedure to configure your PowerFlex Gateway network interfaces.

Steps

1. Find the MAC addresses of the PowerFlex Gateway VM by doing the following:

a. Log in to vCenter. b. Right-click the PowerFlex Gateway VM and select Edit Settings. c. Select Network adapter 1 ( ) and record the MAC address. d. Repeat this step for Network adapter 2 ( ), Network adapter 3 ( ), Network adapter 4 ( if required), and Network adapter 5 ( if required). e. Use the VMware virtual console to connect to the PowerFlex Gateway VM. f. At the command prompt, type:

nmtui

g. In the NetworkManager TUI screen, select Edit a connection. h. Under Ethernet, select Wired connection 1. i. Compare the MAC address listed on the Device line to the MAC addresses recorded above so that you know which VLAN corresponds to Wired connection 1. j. Select Cancel. k. Repeat for Wired connection 2 and Wired connection 3 so that you have recorded which VLAN corresponds to each wired connection.

2. Using the nmtui command, configure the Wired connection corresponding to the PowerFlex appliance management VLAN ( ).

a. On the =ETHERNET line, select Show. b. Leave the Cloned MAC address line blank. c. Leave the MTU line blank. d. On the =IPv4 CONFIGURATION line, select Automatic, change it to Manual, and then select Show. e. On the Addresses line, select Add and enter the IP address of this interface ( ). f. On the Gateway line, enter the default gateway ( ). g. On the DNS Servers line, select Add and enter the DNS server IP address ( ). h. On the Search domains line, select Add and enter the domain ( ). Do not select Never use this network for the default route. i. Select Ignore automatically obtained routes. j. Select Ignore automatically obtained DNS parameters. k. Select Require IPv4 addressing for this connection. l. On the =IPv6 CONFIGURATION line, select Automatic and change to Ignore. m. Select Automatically connect. n. Select Available to all users. o. To exit the screen, select OK.
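
As an alternative to the nmtui screens, the same management-interface settings can be sketched with the nmcli command. The connection name matches what nmtui shows, while the addresses, gateway, DNS server, and search domain below are hypothetical examples:

# Management interface: static IPv4, default route and DNS allowed (example values)
nmcli connection modify "Wired connection 1" \
  ipv4.method manual \
  ipv4.addresses 192.168.105.20/24 \
  ipv4.gateway 192.168.105.1 \
  ipv4.dns 192.168.105.2 \
  ipv4.dns-search example.local \
  ipv4.ignore-auto-routes yes \
  ipv4.ignore-auto-dns yes \
  ipv6.method ignore \
  connection.autoconnect yes
# For the data interfaces (steps 3 and 4), also set 802-3-ethernet.mtu 9000 and
# ipv4.never-default yes, and leave the gateway and DNS properties unset.
nmcli connection up "Wired connection 1"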

3. Using the nmtui command, configure the Wired connection corresponding to the PowerFlex Data 1 VLAN ( ).

a. On the =ETHERNET line, select Show. b. Leave the Cloned MAC address line blank. c. Set MTU to 9000. d. On the =IPv4 CONFIGURATION line, select Automatic, change it to Manual, and then select Show. e. On the Addresses line, select Add and enter the IP address of this interface ( ). f. Leave the Gateway line blank. g. Leave the DNS Servers line blank. h. Leave the Search domains line blank. i. Select Never use this network for the default route. j. Select Ignore automatically obtained routes. k. Select Ignore automatically obtained DNS parameters. l. Select Require IPv4 addressing for this connection. m. On the =IPv6 CONFIGURATION line, select Automatic and change to Ignore. n. Select Automatically connect. o. Select Available to all users. p. To exit, select OK.

4. Using the nmtui command, configure the Wired connection corresponding to the PowerFlex Data 2 VLAN ( ).

a. On the =ETHERNET line, select Show. b. Leave the Cloned MAC address line blank. c. Set MTU to 9000. d. On the =IPv4 CONFIGURATION line, select Automatic, change it to Manual, and then select Show. e. On the Addresses line, select Add and enter the IP address of this interface ( ). f. Leave the Gateway line blank. g. Leave the DNS Servers line blank. h. Leave the Search domains line blank. i. Select Never use this network for the default route. j. Select Ignore automatically obtained routes. k. Select Ignore automatically obtained DNS parameters. l. Select Require IPv4 addressing for this connection. m. On the =IPv6 CONFIGURATION line, select Automatic and change to Ignore. n. Select Automatically connect. o. Select Available to all users. p. Select OK.

NOTE: If adding data5 and data6 VLANs for native asynchronous replication, repeat Steps 3 and 4.

5. On the Ethernet screen, select Back.

6. On NetworkManager TUI screen, select Quit and then OK.

7. Review network configuration with the Linux ip addr command.

8. Verify that you can ping each of the network interfaces and the default gateway ( ). Also, ping a PowerFlex node on all three networks.

Configuring the PowerFlex Gateway NTP client

Use this procedure to configure the PowerFlex Gateway NTP client.

Steps

1. Edit the chrony.conf file: vi /etc/chrony.conf 2. At about line 7, add a line: server < NTP server IP > iburst.

3. Save the chrony.conf file and quit the editor: :wq! 4. Set the timezone. For example, for the Chicago timezone: timedatectl set-timezone America/Chicago.

5. Reboot the PowerFlex Gateway by typing: reboot.

6. A few minutes after the system boots, the time synchronizes with the NTP server time. Verify this by using the Linux command date in the VMware virtual console.
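
For example, with a hypothetical NTP server at 100.65.140.5, the added chrony.conf line and a quick verification after the reboot would look like this; chronyc sources shows the synchronization state:

server 100.65.140.5 iburst

chronyc sources
date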

Configuring the PowerFlex Gateway hostname

Use this procedure to set the PowerFlex Gateway hostname.

Steps

1. Use the VMware virtual console to connect to the PowerFlex Gateway VM.

2. At the command prompt, type:

hostnamectl set-hostname < PowerFlex Gateway hostname >

Installing the Java and PowerFlex Gateway RPMs

Use this procedure to install the Java and PowerFlex Gateway RPMs.

Steps

1. Use VMware virtual console to connect to PowerFlex Gateway VM.

2. At the command prompt, type: cd /root/install 3. Install the Java RPM by typing the following: rpm -ivh java-1.8.0-openjdk-headless-1.8.0.292.b10-1.el7_9.rpm.

4. Install the gateway RPM by typing:

GATEWAY_ADMIN_PASSWORD=< PowerFlex Gateway admin password > rpm -i EMC-ScaleIO-gateway-3.0-100.208.x86_64.rpm
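
As a quick sanity check before the browser test in the next step, you can list the installed packages; this is a supplementary sketch, not a step from the original procedure:

rpm -qa | grep -Ei 'openjdk|scaleio'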

5. Confirm the correct network configuration and the installation of the RPMs by using a web browser to connect to the PowerFlex Gateway ( ). The PowerFlex Installer login dialog box opens.

6. Close the PowerFlex Installer box without logging in.

7. Dell EMC recommends creating a snapshot of the PowerFlex Gateway to allow recovery if there is a system failure.

Restoring the PowerFlex Gateway configuration

Use this procedure to restore the PowerFlex Gateway configuration for the customer and management clusters.

Steps

1. Type scli --login --username admin to log in to the primary MDM.

2. Type scli --query_cluster to identify the virtual IP addresses.

3. Log in to the PowerFlex Gateway as root.

4. Modify the gatewayUser.properties file:

a. Enter: cd /opt/emc/scaleio/gateway/webapps/ROOT/WEB-INF/classes.

b. Enter: vi gatewayUser.properties to edit the file and modify the following. The IP addresses should be on the ( ) network:

If you have a 3-node cluster: mdm.ip.addresses= , , ,

If you have a 5-node MDM cluster: mdm.ip.addresses= , , , ,

system.id=< System ID of the PowerFlex cluster >
features.notification_method=none
security.bypass_certificate_check=true
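
Put together, the edited section of gatewayUser.properties for a 3-node MDM cluster might read as follows; the IP addresses and system ID shown are hypothetical examples:

mdm.ip.addresses=192.168.152.20,192.168.152.21,192.168.152.22
system.id=499ac27a57d90a0f
features.notification_method=none
security.bypass_certificate_check=true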

5. Create the PowerFlex Gateway lockbox credentials:

/opt/emc/scaleio/gateway/bin/FOSGWTool.sh --change_lb_passphrase --new_passphrase

6. Create the PowerFlex Gateway MDM credentials:

/opt/emc/scaleio/gateway/bin/FOSGWTool.sh --set_mdm_credentials --mdm_user admin --mdm_password

7. Create the PowerFlex Gateway LIA password:

/opt/emc/scaleio/gateway/bin/FOSGWTool.sh --set_lia_password --lia_password

8. Restart the PowerFlex Gateway service:

service scaleio-gateway restart

9. Log in to PowerFlex Manager, go to the Resources page, select the PowerFlex Gateway, and then click Run Inventory.

10. Go to Services and verify Overall Service Health.

Deploying the PowerFlex GUI presentation server

You can use a sample template to clone a PowerFlex GUI presentation server and deploy it using PowerFlex Manager in the customer cluster.

About this task

NOTE: This procedure is not applicable for PowerFlex management controller 2.0.

Prerequisites

Discover and set the PowerFlex management controller VMware vCenter as Managed in the PowerFlex Manager and select this VMware vCenter and vSAN datastore for the presentation server template.

Steps

1. Log in to PowerFlex Manager.

2. On the PowerFlex Manager menu bar, click Template > Sample template > Management - presentation server and click Clone in the right pane.

3. In the Clone Template dialog box, enter a template name under Template Name.

4. Select a template category from the Template Category list. To create a template category, select Create New Category and enter the Category name.

5. In the Template Description, enter a description for the template.

6. To specify the version to use for compliance, select the version from the Firmware and Software Compliance list or choose Use PowerFlex Manager appliance default catalog.

You cannot select a minimal compliance version for a template, since it only includes server firmware updates. The compliance version for a template must include the full set of compliance update capabilities. PowerFlex Manager does not show any minimal compliance versions in the firmware and software compliance list.

7. Indicate access rights to the service deployed from this template by selecting one of the following options:

PowerFlex Manager administrators PowerFlex Manager administrators and specific standard and operator users

Click Add Users to add one or more standard and or operator users to the list and click Remove Users to remove users from the list.

PowerFlex Manager administrators and all standard and operator users

8. Click Next.

9. On the Additional Settings page, provide new values for the Network Settings, PowerFlex Presentation Server Settings, and Cluster Settings.

Under PowerFlex Presentation Server settings, select the presentation server credential that is created for the presentation server.

10. Select the PowerFlex management controller VMware vCenter or single vCenter.

11. Click Finish.

12. Once the template is created, click Templates, select the PowerFlex presentation server template, and click Edit.

13. Edit each component (PowerFlex presentation server and VMware Cluster), select the required fields, and click Save.

14. Publish the template, and then click Deploy.

NOTE: The presentation server is autodiscovered on the Resources page after successful deployment of the service.

Linking and unlinking the MDM to the presentation server web UI

You can only link one MDM at any given time (1:1). Unlink the existing system if you want to link another system. Unlink the MDM cluster from the web UI if you want to connect to another MDM cluster, and follow the first-time login procedure to log in to the new MDM cluster. This procedure is not applicable for the PowerFlex management cluster.

Link the MDM to the presentation server web UI

Steps

1. Log in to the presentation server web UI link (https://Presentation_Server_IP_Address:8443/).

2. Enter the primary MDM IP address.

NOTE: This is a one-time setup wizard, the first time you link the presentation server to primary MDM.

3. Approve Certificates.

4. Enter the MDM cluster username and password.

Unlink the MDM from the presentation server web UI

About this task

NOTE: Unlinking should be done from the presentation server login page.

Steps

1. Log in to the presentation server web UI link (https://Presentation_Server_IP_Address:8443/).

2. Log in to the PowerFlex GUI.

3. Click Settings > Unlink system.

Upgrading VMware vCenter

This section provides details on upgrading a patch release of VMware vCenter on the PowerFlex appliance controller node.

There are UI changes for the VMware vSphere Client 7.0 U3c update:

On the Home screen, there is no Menu button. The new menu is next to vSphere Client. From the Add DVSwitch menu, the New Host option is no longer available.

NOTE: You must upgrade VMware vCenter before you upgrade the PowerFlex appliance.

When upgrading VMware vCenter from 6.7.x or 7.0.x to 7.0U3c, if the upgrade fails because of the precheck, VMware ESXi could potentially have dual i40en driver conflicts. For more information, see the KB article https://kb.vmware.com/s/article/86447. Use Option 2 from the KB article to remove the conflicting driver from the impacted hosts and upgrade VMware vCenter again.

Upgrading VMware vCenter infrastructure management components

Use this procedure to upgrade the VMware vCenter Server Appliance (vCSA) on the PowerFlex appliance controller node.

About this task

Starting with VMware vSphere 7.0 Update 2a, VMware vCenter automatically deploys vSphere Cluster Services (vCLS) VMs once a host is added to a cluster, and on existing clusters. These VMs are managed by VMware vCenter. Avoid making any changes to these VMs, as doing so impacts the HA and DRS services on VMware vCenter.

When upgrading VMware vCenter from 6.7.x or 7.0.x to 7.0U3c, if the upgrade fails because of the precheck, VMware ESXi could potentially have dual i40en driver conflicts. For more information, see the KB article https://kb.vmware.com/s/article/86447. Use Option 2 from the KB article to remove the conflicting driver from the impacted hosts and upgrade VMware vCenter again.

vCLS VMs should be migrated to shared storage. When performing any maintenance activity, these VMs are migrated to the next available host/datastore in the cluster. The first three hosts added to the cluster have these VMs created, and there is a maximum of three VMs per cluster.

Prerequisites

Ensure the following are completed before you initiate the upgrade process: Back up the VMware vSphere infrastructure management components. Download the appropriate VMware vSphere vCenter patch and VMware ESXi ISO files from the download repository to the jump server in the PowerFlex controller node. Take a snapshot of the vCSA prior to upgrading. See the VMware KB article for more information.

Steps

1. Take a snapshot of the VMware vSphere management VMs (PowerFlex Manager appliance, VMware controller vCenter, embedded operating system jump server, Secure Remote Services Gateway, PowerFlex Gateway, PowerFlex presentation server, and optionally the CloudLink Center).

a. Check the datastore disk usage to verify that enough disk space is available to create snapshots. b. Right-click and select Snapshot > Take Snapshot. c. Enter a name and description, clear Snapshot the virtual machine's memory, and click OK. d. Repeat these steps for each management VM.

2. Log in to the VMware vCenter appliance management port and create a backup.

a. Use the backup utility https://{FQDN}:5480 to create a backup.

3. Using the VMware vSphere client, upload the VMware-vCenter-Server-Appliance-X.x.x.xxxxx-xxxxxxx-patch-FP.iso to the local datastore on the PowerFlex controller node.

4. From the VMware vSphere client, click Storage > Datacenter > PERC-01 > Files.

a. Create a folder named ISO (if not created already).

b. Click the upload icon and upload the required ISO file.

NOTE: This step may fail if the browser finds a certificate that it does not trust. If a failure occurs, upload the ISO files to an existing folder.

c. Allow time for the upload to complete. You can view the status at the bottom of the screen. d. From the VMware vSphere client, select the VM to attach the ISO. e. Go to Hosts and Clusters. f. On the Summary screen, expand VM Hardware. g. Click Edit settings, and then from the CD and DVD drive row, choose Datastore ISO file from the menu and check connected. h. Click Browse, choose the ISO folder, and select VMware-vCenter-Server-Appliance-X.xxxx. Click OK. i. Note the IP address of the VM. This is used later.

5. Open Mozilla Firefox on the jump server and go to the IP address of the vCSA appliance VM to be upgraded on port 5480, as noted in the previous step. For example, https://< vCSA IP address >:5480.

6. Log in to the interface with username root and default password VMwar3!!.

NOTE: If you have to change the root password, see the VMware KB article.

7. On the left menu, select Update > Check Updates. Click Check CD ROM. Wait while the system validates the ISO attached earlier.

8. When complete, select Stage and Install. Click I accept and click Next. Clear Join the VMware Customer... and click Next. Check I have backed up vCenter... and click Finish.

9. Click OK. To reboot, right-click the VM from vCenter and Power > Restart Guest OS. Allow up to 10 minutes for the VM to reboot.

NOTE: When rebooting the PowerFlex management controller vCSA, web client connectivity is lost. After the reboot, log back on to the web client.

10. Log in to the VMware vSphere client again and validate that the SSO domain is running, and disconnect the ISO.

NOTE: The VMware vSphere client may take some time to start, as the vCSA can take up to 15 additional minutes to start all VMware vCenter services.

11. Run inventory using PowerFlex Manager:

a. Log in to PowerFlex Manager. b. Click Resources > All Resources tab. c. Click the checkbox for vCSA. Click Run Inventory. d. Click Close.

12. Verify that you have the correct Intelligent Catalog vCenter version, as follows:

a. Use the vSphere client to log in to the vCenter server. b. Click Help > About VMware vSphere. c. A dialog appears with the build number of the VMware vCenter Server. Verify that it matches the requirement.

13. Disconnect the Patch-FP.iso that was previously mounted.

Stage and upgrade the iDRAC and firmware

Use this task to stage the iDRAC and firmware.

Prerequisites

The iDRAC firmware upgrade must be done before any other upgrades. Perform the iDRAC firmware upgrade first, then upgrade the other component firmware.

Steps

1. Log in to the iDRAC web interface by opening a Mozilla Firefox or Google Chrome browser and going to https://< iDRAC IP address >.

NOTE: Under Server Information, review the System Host Name and verify that you have connected to the correct hostname.

2. Select Maintenance > System Update > Manual Update and click Choose File.

3. Go to the Intelligent Catalog folder /shares/xxxxx and select the component update file. The components to update include:

iDRAC service module
Dell BIOS
Dell BOSS controller
Dell iDRAC/Lifecycle controller
Dell Intel X550/X540/i350
Dell Mellanox ConnectX-4 LX or ConnectX-5 EN
Dell PERC H740P, HBA 355, or H755P controllers

4. Click Upload.

5. Select the firmware that you uploaded and click Install Next Reboot.

CAUTION: Do NOT click Install and Reboot, as it could cause a system outage.

NOTE: The installation will be in the job queue for the next reboot. Click Job Queue from the prompted information message to monitor the progress of the installation.

Shutting down all the VMs running on the controller host

Use this task to shut down all the VMs running on the controller node.

Steps

1. Log in to the web UI of the controller VMware ESXi host directly.

2. Go to Virtual Machines.

3. Shut down all the VMs running on the controller host, except the jump server.

Upgrading VMware vSphere ESXi

Use this task to upgrade VMware vSphere ESXi.

Steps

1. Use WinSCP to copy the ESXi-X.x.0-xxxxxx.zip patch file to the /vmfs/volumes/PERC-01/ISO folder on the VMware ESXi server.

2. Using SSH, connect to the VMware ESXi host and check for the uploaded file by typing the following command: cd /vmfs/volumes/PERC-01/ISO.

3. For VMware ESXi 7.0, use the following command to install VMware ESXi .zip patches: esxcli software vib update -d /vmfs/volumes/PERC-01/ISO/VMware-ESXi-7.0-depot.zip

NOTE: For this command, use the same path that was used when transferring the ZIP file with WinSCP.

4. To update the profile image on the host, complete the following:

a. To optionally list the profiles in the VMware ESXi .zip archive, type esxcli software sources profile list -d /vmfs/volumes/PERC-01/ .

b. To upgrade the VMware ESXi version, type esxcli software profile update -p DellEMC-ESXi-X.x-xxxxxxxxx-xxx -d /vmfs/volumes/PERC-01/VMware-VMvisor-Installer-X.x-xxxxxxxxx-xxx.zip.

When the upgrade completes successfully, a completion message displays, followed by the list of upgraded packages.

5. Go to iDRAC > Launch virtual console and select Boot > UEFI Device Path to enter system BIOS.

NOTE: Steps 5 through 8 apply only to PowerEdge R650, R750, and R6525 servers for the initial boot mode change from BIOS to UEFI.

6. Reboot the VMware ESXi host. Select Power > Reset (Warm Boot).

7. Press F2 to enter system setup.

8. Click System BIOS > Boot Settings and set Boot Mode to UEFI.

NOTE: Ensure that the BOSS card is set as the primary boot device from the UEFI Device Path under the Boot tab. If the BOSS card is not set as the primary boot device, reboot the server and change the UEFI boot sequence from System BIOS > Boot Settings > UEFI BOOT Settings.

9. Click Back > Back > Finish > Yes > Finish > OK > Finish > Yes. The node reboots. Go to Exit maintenance mode.

Powering on all the VMs running on the controller host

Use this task to power on all the VMs running on the controller node.

Steps

1. Log in to the web UI of the controller VMware ESXi host directly.

2. Go to Virtual Machines.

3. Power on all the VMs running on the controller host.

Upgrading the iDRAC service module

Use this task to upgrade the iDRAC Service Module (iSM).

Steps

1. Use WinSCP to upload ISM-Dell-Web-3.4.x-xxxx.VIB-ESX6i-Live_A00.zip to the /vmfs/volumes/DASxx/ISO folder.

2. Use SSH to access the VMware ESXi nodes and type esxcli software vib install -d /vmfs/volumes/DASxx/ISO/ISM-Dell-Web-3.4.x-xxxx.VIB-ESX6i-Live_A00.zip.

Change the SVM CPU clock reservation

Use this task to change the SVM CPU clock reservation and CPU shares on VMware vCenter.

Prerequisites

Ensure you have completed the following: Take a snapshot of the SVM. Check the CPU and clock speed.

Steps

1. Log in to the VMware vCenter with administrator credentials.

2. Right-click the SVM and select Edit Settings.

3. Expand CPU.

4. Select Reservation and enter the value in GHz.

5. Select Shares and select High from the menu.

Reservation (GHz) = (SVM vCPU count / 2) x (clock speed in GHz of the underlying CPU)

Processor CPU clock speed (GHz) SVM vCPUs Reservation (GHz)

6330 2 16 16

6248R 3 16 24

6230 2.1 10 10.5

6242 2.8 14 19.6

5215 2.5 8 10.5

6326 2.9 12 17.4
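
As a worked example of the formula, take the 6242 row from the table above:

Reservation = (14 vCPUs / 2) x 2.8 GHz = 7 x 2.8 = 19.6 GHz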

Find the CPU and clock speed

Steps

1. Log in to VMware vCenter.

2. Click Host and Cluster.

3. Expand the Cluster and select Physical node.

4. Find the details next to Processor Type under the Summary tab.

Migrating vCLS VMs on controller nodes

Use this task to migrate vCLS VMs on controller nodes.

About this task

The VMware vSphere vCSA 7.0Ux update creates VMware vCLS VMs when the host is added to the cluster.

WARNING: VMware vCSA manages these VMs; no changes should be made to them, as changes may impact the HA and DRS services. Skip this task if the VMs are already migrated to a shared datastore.

Steps

1. Click Administration > vCenter Server Extension > vSphere ESX Agent Manager > VMs. The VMs are also visible on the VMs and templates view.

2. VMs are created under the vCLS folder once the host is added to the cluster.

3. On the VMs and Templates view, click the vCLS folder.

4. Right-click the VM and click Migrate.

5. In the confirmation window, click Yes.

6. Click Change storage only.

7. For controller nodes, migrate them to the vSAN datastore.

8. Repeat the above procedure for all the vCLS VMs.

Upgrading the embedded operating system jump VM

Use this procedure to upgrade the embedded operating system jump VM.

Steps

1. Obtain the updated image from the IC software repository.

2. Deploy the existing embedded jump VM and assign a valid IP address with Internet connectivity. A valid DNS entry must be defined.

3. Run df -h to verify that there is enough available free space on the /shares partition of the embedded jump VM to download the RPM packages and create the ZIP file. At least 15 GB is recommended.

4. Run uname -a to determine the embedded operating system version and verify the Linux kernel version by reviewing the output and the values in the file (/etc/centos-release).

5. Run cat /etc/centos-release to verify the embedded operating system version.

Installing the offline repository

Use this procedure to install an offline repository.

Steps

1. Create a directory in the /shares volume called Centos-RPM, type: sudo mkdir /shares/Centos-RPM.

2. Copy the repository update ZIP file to the /tmp directory of the embedded operating system VM using WinSCP or similar.

3. Extract the contents of the repository update ZIP file to the /shares/Centos-RPM directory, type: sudo unzip /tmp/repofilename.zip -d /shares/Centos-RPM.

4. Create and modify a new repository file in the /etc/yum.repos.d directory, type: sudo vi /etc/yum.repos.d/centos.rpm.repo. In this example, the file that is created is /etc/yum.repos.d/centos.rpm.repo.
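
The guide does not reproduce the repository file contents; a minimal sketch of /etc/yum.repos.d/centos.rpm.repo pointing at the extracted directory might look like this (the repository ID and name are assumptions):

[centos-rpm]
name=CentOS offline RPM repository
baseurl=file:///shares/Centos-RPM
enabled=1
gpgcheck=0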

5. Clean the yum cache, type: sudo yum clean all.

6. Verify access to the new repository, type: sudo yum repolist.

7. Deploy the updates from the repository, type: sudo yum update. When prompted, answer y.

8. When the process is complete, reboot the system, type: reboot.

9. Once the system reboot has completed, verify the kernel version: type uname -a and view the /etc/centos-release file.

10. Verify the embedded operating system version, type: cat /etc/centos-release.

11. Remove the RPM files, type: sudo rm -f -r /shares/Centos-RPM.

12. Remove the repository index file, type: sudo rm /etc/yum.repos.d/centos.rpm.repo.

13. Clean the yum cache, type: sudo yum clean all.

Upgrading a PowerFlex appliance environment

Use this section when there are new versions of PowerFlex Manager and Intelligent Catalog available.

About this task

NOTE: PowerFlex Manager does not support upgrades to VMware vCenter or the controller hosts. To manually upgrade, see Upgrading VMware vCenter.

You can update the PowerFlex appliance environment by following these steps in the order that is shown here.

If your system has native asynchronous replication enabled, note the following:

The standard upgrade process should be followed on each system. Upgrade one system fully before upgrading the second system. It is ideal to have both systems running the same Intelligent Catalog version; the recommendation is to upgrade them both when logistically possible. Do not pause the replication process when following standard upgrade procedures.

WARNING: If the PowerFlex hyperconverged nodes or PowerFlex compute-only nodes are part of NSX-T, ensure you or VMware Services upgrades the NSX-T Data Center before upgrading VMware vSphere ESXi on the nodes. Dell EMC is not responsible for installing or upgrading NSX-T Data Center.

Secure Remote Services Upgrades

WARNING: For nodes in reserved mode, a firmware update will cause the nodes to reboot.

Prerequisites

Before upgrading PowerFlex Manager, ensure that it meets the following requirements:

The Secure Remote Services are discontinued and must be converted to Secure Connect Gateway. For more information, see Secure Remote Services 3.52 Upgrade to Secure Connect Gateway Supplement Documentation.

Ensure that the Secure Remote Services version is 3.52.x. For earlier versions, you must upgrade the Secure Remote Services to perform the conversion. For details, see https://www.dell.com/support/kbdoc/en-in/000188341/srs-ve-multiple-upgrades-may-be-required-to-reach-version-3-52.

Complete the following workflow to upgrade a PowerFlex appliance environment:

1. Upgrade to a supported version of VMware vCenter.

NOTE: The version depends on the VMware ESXi version available with the Intelligent Catalog. For example, if you are going to upgrade VMware ESXi from 6.5 to 6.7, you should first upgrade your VMware vCenter to 6.7 before starting the upgrade.

2. Upgrade PowerFlex Manager.

3. Add the new compliance file (Intelligent Catalog) and operating system images to PowerFlex Manager.

NOTE: In PowerFlex Manager, add the new compatibility management file. If you are using PowerFlex Manager 3.6 or prior, skip this step.

4. Upgrade the PowerFlex GUI presentation server.

5. Upgrade CloudLink Center.

6. Upgrade PowerFlex.

7. Upgrade the PowerFlex appliance nodes.


Intelligent catalog (IC) trains and the upgrade process

NOTE: During the storage-only service upgrade, the /home partition is removed and the root partition is resized. PowerFlex Manager deletes any data that is present under /home of the PowerFlex storage-only node.

An Intelligent Catalog (IC) is a catalog of software, firmware, and drivers that have been engineered and validated together. Staying on an engineered IC reduces the chance of a system outage due to conflicting components or other problems, such as known issues.

A new IC train is created when major version changes occur.

NOTE: IC jumps greater than two are considered high risk. Contact Dell Technologies Support before proceeding.

To upgrade to a new IC train, you first upgrade to the end of the IC train on which your system resides, and then upgrade to the new IC train. Performing these two upgrades keeps the system on an engineered and validated path. This is the safest choice for system stability and data integrity.

For example, the following diagram shows the multihop upgrade from IC 37_355_0x to IC 38_363_0x, and a two-step upgrade if the customer is upgrading from IC 37_361_0x to IC 38_363_0x.

Verify and change the maximum transmission unit (MTU) value

This section provides details on verifying and changing the MTU values on the VMware VMkernel port group and the dvSwitch on the VMware vCenter.

Before updating the MTU value, back up the switch port configuration and verify that the port channel for the impacted host is updated to 9216 using show running-configuration interface port-channel. When this is completed, verify that the dvSwitch is backed up. To back up the dvSwitch, see Back up the dvSwitch configuration.

NOTE: If the MTU value is already set to 9000, ignore the Change the maximum transmission unit... tasks.

Refer to the following tables for more details on MTU values:

Switch MTU

Component           Default/current   Recommended
Dell PowerSwitch    -                 9216
Cisco Nexus         -                 9216
cust_dvswitch       1500              9000

VMK MTU

VMK       Default/current   Recommended
vMotion   1500              9000
mgmt      1500              1500/9000

Back up and verify the dvSwitch configuration

Steps

1. Click the menu and choose Networking.

2. Click the impacted dvSwitch and click on the Configure tab.

3. On the properties screen, verify the MTU value.

If the MTU value is already set to 9000, skip the following configuration tasks.

Change the maximum transmission unit (MTU) on the access switch

Use this task to change the maximum transmission unit (MTU) values to 9216/jumbo on the physical switch port.

Steps

Log in to the access switch with administrative credentials.

If you are updating a Cisco Nexus switch, type the following:

interface port-channel31
  description Downlink-Port-Channel-to-r840-01-dvswitch1
  no shutdown
  switchport mode trunk
  switchport trunk allowed vlan 89,91-92,152,160
  mtu 9216
  vpc 31
  spanning-tree port type edge

If you are updating a Dell PowerSwitch switch, type the following:

interface port-channel31
  description Downlink-Port-Channel-to-r840-01-dvswitch1
  no shutdown
  switchport mode trunk
  switchport trunk allowed vlan 89,91-92,152,160
  mtu 9216
  vlt-port-channel 31
  spanning-tree port type edge
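To confirm the change took effect, you can re-display the port-channel configuration and check for the mtu 9216 line; port-channel 31 here matches the examples above:

show running-configuration interface port-channel 31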


Change the maximum transmission unit (MTU) on the cust_dvswitch

Use this task to change the MTU on the cust_dvswitch.

Steps

1. Log in to the VMware vCenter with administrator credentials.

2. Select Networking.

3. Select cust_dvswitch.

4. Right-click and select Edit Settings.

5. Select Advanced and change the MTU value to 9000.

Change the maximum transmission unit (MTU) for VMware vMotion VMK

About this task

This task is optional for the management VMware VMK. If you are ready to use or implement jumbo frames, repeat this task on the management VMware VMK.

Steps

1. Click Host and Clusters.

2. Select the node and click Configure.

3. From Networking, select VMkernel adapters.

4. Select the vMotion VMK and click Edit.

5. On the Port Properties tab, change the MTU to 9000.

6. Repeat steps 1 through 5 for the other nodes.
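As an optional check that is not part of the documented procedure, you can also verify the MTU from the ESXi shell (assuming SSH access to the host). Confirm that the vMotion VMK reports MTU 9000:

esxcli network ip interface list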

Add a new compatibility management file

Use this procedure to add a new compatibility management file to PowerFlex Manager.

About this task

Compatibility management helps PowerFlex Manager to recognize the correct Intelligent Catalog version and provides valid upgrade path details for the appliance and Intelligent Catalog.

NOTE: If the compatibility management file is not uploaded to the PowerFlex appliance, upgrading the PowerFlex Manager appliance and service to the latest version will be blocked.

It also helps bring the system into compliance and provides the details about supported and valid upgrade paths.

Steps

1. Log in to PowerFlex Manager.

2. Click Settings and select Virtual appliance management.

3. In the Compatibility Management section, click Add.

4. Download the compatibility management file from Dell Technologies Support site to the jump server.

5. Click Upload from Local to use a local file. Then, click Choose File to select the GPG file and click Save.


Upgrade the PowerFlex Manager virtual appliance

Use the PowerFlex Manager version applicable to the RCM. If you must upgrade the PowerFlex Manager virtual appliance to the latest supported version, PowerFlex Manager has an integrated mechanism for upgrading the virtual appliance from an older version to the latest version.

After you upgrade the virtual appliance, you may see services on older RCMs (older than 12 months) go into lifecycle mode. This is expected as not all older RCMs are qualified to be managed by all newer versions of PowerFlex Manager. These services may still be upgraded using this guide.

PowerFlex Manager 3.7 introduces compatibility management to provide valid upgrade paths for PowerFlex Manager virtual appliance and RCM.

NOTE: In PowerFlex Manager 3.7, the /var filesystem is increased to 180 GB from 100 GB. To expand the filesystem in PowerFlex Manager after the upgrade is complete, refer to https://www.dell.com/support/kbdoc/en-us/000192061.

You can update the PowerFlex Manager virtual appliance from a local repository path.

NOTE: The PowerFlex Gateway root file system may over-fill due to large Localhost_access.log files. To prevent this, refer to https://www.dell.com/support/kbdoc/en-us/541865.

If you are running version 3.1 of the PowerFlex Manager virtual appliance, you must increase the CPU, memory, and disk space resources for the VM before upgrading the virtual appliance. The upgrade requires 8 virtual CPUs, 32 GB of memory, and 200 GB of hard disk space.

To expand the CPU, memory, and disk space for the VM:

1. Power off the PowerFlex Manager VM.
2. Right-click the VM in VMware vCenter.
3. Choose Edit Settings.
4. Adjust the settings for the CPU, memory, and disk space.

You cannot change the disk space setting if you have any snapshots of the virtual appliance. You must delete the snapshots before changing the disk space setting.

5. Increase the available disk space in the PowerFlex Manager virtual appliance.

For details, see https://c.na95.visual.force.com/apex/KnowledgeArticleTutorialView?popup=true&id=kA32G000000XZDQ&pubstatus=o.

Back up using PowerFlex Manager

Use this procedure to run the backup of the appliance manually.

About this task

PowerFlex Manager backup files include the following information:

Activity logs
Credentials
Deployments
Resource inventory and status
Events
Initial setup
IP addresses
Jobs
Licensing
Networks
Templates
Users and roles
Resource module configuration files
Performance metrics

CAUTION: If you back up a PowerFlex Manager virtual appliance with a working alert connector configuration, and restore that backup onto a different IP address, the alert connector shows an error state. The Secure Remote Services gateway allows communication on only the original IP address. You must deregister the alert connector after restoring the backup and then re-register the alert connector.

Steps

1. Log in to PowerFlex Manager.

2. From the menu, click Settings > Backup and Restore.

3. From the Backup and Restore page, click Backup Now.

4. Select one of the following:

To use general settings that are applied to all backup files, from Settings and Details, click Use Backup Directory Path and Encryption Password.

To use custom settings, from Backup Directory Path, enter a file path where the backup file is saved using either NFS (host/share) or CIFS (\\host\share\). See the example path formats after this step.

Optionally, enter a username and password in the Backup Directory User Name and Backup Directory Password fields.

From the Encryption Password field, enter a password that is required to open the backup file, and verify the encryption password by entering the password in the Confirm Encryption Password field. The password can include any alphanumeric characters.
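Illustrative path formats, with hypothetical host and share names:

NFS: backuphost/pfxm-backups
CIFS: \\backuphost\pfxm-backups\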

5. Click Backup Now.

NOTE: Note the PowerFlex Manager management and OOB IP address, netmask, gateway, PXE network, DNS, and domain name for configuring the IP address after the deployment of the new PowerFlex Manager appliance.

Back up the appliance SSL and trusted certificates

Before you upgrade PowerFlex Manager, you must back up the SSL and trusted certificates.

About this task

For more information, see https://www.dell.com/support/kbdoc/en-us/000193466/powerflex-manager-how-to-backup- restore-appliance-ssl-certificates-trusted-certificates?lang=en.

Steps

1. Back up the appliance SSL certificates:

a. Log in to the PowerFlex Manager appliance using SSH and sudo to root: sudo su -
b. Copy the following files to /home/delladmin/ or to /tmp/:

cp /etc/pki/tls/certs/localhost.crt /home/delladmin/
cp /etc/pki/tls/private/localhost.key /home/delladmin

c. Type chown delladmin:delladmin /home/delladmin/localhost.* to change the owner to delladmin.

d. Copy the localhost.crt and localhost.key file from the PowerFlex Manager appliance to the jump server or another Linux system for temporary storage.

2. Back up the appliance SSL trusted certificates:

a. From the original PowerFlex Manager appliance, copy the CA certificate database /etc/pki/java/cacerts to the jump server or another destination you choose.
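A sketch of the copy using scp, assuming a reachable jump server at 192.168.100.50 (a hypothetical address and target directory):

scp /home/delladmin/localhost.crt /home/delladmin/localhost.key delladmin@192.168.100.50:/tmp/
scp /etc/pki/java/cacerts delladmin@192.168.100.50:/tmp/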

Power off the PowerFlex Manager appliance

Use this procedure to power off the PowerFlex Manager appliance.

Steps

1. Log in to the Management Controller VMware vCenter.

2. Right-click on the PowerFlex Manager appliance.

3. Click Power > Shutdown Guest OS.


Take a snapshot of the PowerFlex Manager appliance

Use this procedure to take a snapshot of the PowerFlex Manager appliance.

Steps

1. Log in to the Management Controller VMware vCenter.

2. Right-click on the PowerFlex Manager appliance.

3. Select Snapshots > Take snapshot.

4. Uncheck Snapshot the virtual machine's memory and enter a description.

5. Click OK.

Upgrading the PowerFlex Manager virtual appliance without using Secure Remote Services (from a local repository path)

If you do not have Secure Remote Services connectivity, you can upgrade PowerFlex Manager without using Secure Remote Services.

About this task

The procedure for upgrading without using Secure Remote Services requires that you perform some additional manual steps. You need to download the ZIP file that contains the PowerFlex Manager software for the upgrade and specify the local repository path to the ZIP file.

Prerequisites

Before performing the upgrade, you should take a backup of the PowerFlex Manager virtual appliance settings.

Steps

1. Log in to Dell EMC Download Center.

2. Navigate to the software by selecting the following options: PowerFlex Manager > PowerFlex OVA

3. To upgrade from version 3.2 or later, perform these steps:

For HTTP or NFS:

a. Copy the upgrade ZIP file to /var/tmp on the PowerFlex Manager appliance using WinSCP or a similar program.

b. Open an SSH console session to the PowerFlex Manager appliance.

Default credentials are:

User: delladmin
Password: delladmin

c. Create a symlink to the HTTP and/or NFS shares on the appliance.

HTTP example: sudo link /var/tmp/Dell-PFxM-buildnumber.zip /var/www/html/Dell-PFxM-buildnumber.zip
NFS example: sudo link /var/tmp/Dell-PFxM-buildnumber.zip /var/nfs/Dell-PFxM-buildnumber.zip

d. If you are using HTTP, restart the HTTP daemon: sudo systemctl restart httpd
e. Log in to PowerFlex Manager and go to Settings > Virtual Appliance Management.

Default credentials are:

User: admin
Password: admin

f. Click Edit next to the Appliance Upgrade Settings.
g. Select the radio button next to the Update Appliance from local repository path option and set the Repository Path as needed for the protocol:

HTTP example: http://IP-address/Dell-PFxM-buildnumber.zip
NFS example: IP-address:/var/nfs/Dell-PFxM-buildnumber.zip


For HTTPS:

a. Follow the steps for HTTP above.
b. Modify the ssl.conf file: sudo vim /etc/httpd/conf.d/ssl.conf

Remove the comment (#) from the beginning of the DocumentRoot line at about line 73.

c. Restart the HTTP daemon: sudo systemctl restart httpd
d. In PowerFlex Manager, go to Settings > Virtual Appliance Management, click Edit next to the Appliance Upgrade Settings, and change the path as needed for the protocol:

HTTPS example: https://IP-address/Dell-PFxM-buildnumber.zip

For CIFS:

a. Copy the upgrade ZIP file to a network share.
b. In PowerFlex Manager, go to Settings > Virtual Appliance Management, click Edit next to the Appliance Upgrade Settings, and change the path as needed for the protocol:

CIFS example: \\IP-address\[path to file location]\Dell-PFxM-buildnumber.zip

You can use a non-authenticated CIFS share or an authenticated CIFS share.

For FTP:

a. Copy the upgrade ZIP file to an FTP server.

The FTP server must be configurable for anonymous logins and also have at least one user created.

b. In PowerFlex Manager, go to Settings > Virtual Appliance Management, click Edit next to the Appliance Upgrade Settings, and change the path as needed for the protocol:

FTP example: ftp://IP-address/Dell-PFxM-buildnumber.zip

To set the path without credentials, enable Anonymous logins on the FTP server. Then, set the path in PowerFlex Manager, but do not provide credentials.

To set the path with credentials, disable Anonymous logins on the FTP server and confirm the username and credentials for a user on the FTP server. Then, set the path in PowerFlex Manager and provide credentials for the user on the FTP server.

4. To upgrade from version 3.0.1 or 3.1, perform these steps:

a. Download the upgrade ZIP file.
b. Copy the upgrade ZIP file to /var/tmp on the PowerFlex Manager appliance using WinSCP or a similar program.

c. Open an SSH console session to the PowerFlex Manager appliance.

Default credentials are:

User: delladmin
Password: delladmin

d. Ensure there are no files in the upgrade repository on the appliance: sudo rm -rf /var/lib/razor/repo-store/upgrade/*

e. Unzip the upgrade to /var/lib/razor/repo-store on the appliance: sudo unzip /var/tmp/Dell-* -d /var/lib/razor/repo-store/upgrade

f. In PowerFlex Manager, go to Settings > Virtual Appliance Management, click Edit next to the Appliance Upgrade Settings, and change the path as needed.

Set it to http://IP-address:8080/svc/repo/upgrade/

NOTE: If the path fails, you can use a local repository that contains the upgrade file (for example, \\IP-address\Dropbox\Foggite\pfxm\Dell-PFxM-build-number.zip) as an alternative.

5. In the Appliance Upgrade Settings, you should now see the new version that you want to upgrade to listed under the Available Virtual Appliance Version.

NOTE: Upload the latest GPG file to upgrade PowerFlex Manager. GPG files from older releases will not have the correct PowerFlex Manager version.

6. When you are ready to perform the upgrade, click the Update Virtual Appliance button in the Virtual Appliance Management page.

The update takes approximately 20 to 30 minutes and reboots the PowerFlex Manager appliance during the process.


The update process displays messages indicating the progress of the update. Once the update is complete, the system restarts and you are redirected to the login page.

NOTE: The warning displays as follows:

Are you sure you want to update PowerFlex Manager?
Warning: Updating the appliance will restart the system. This action will log-off all current users and cancel all jobs in progress.
Logged-in Users: 1
In-Progress Jobs: 0
Scheduled Jobs: 0

Enter Update PowerFlex Manager and click Yes.

7. Log in to the PowerFlex Manager virtual appliance.

Next steps

If you are updating PowerFlex Manager from a release prior to 3.3, you must configure iDRAC nodes to automatically send alerts to PowerFlex Manager:

1. Click Settings.
2. Under Settings, click Credentials Management.
3. On the Credentials Management page, edit the credential for each node and ensure that the correct SNMP community string is included in the credential.

Select a node and click Edit to review the SNMP v2 community string and make any required changes.

The default community string is public. To use a different value, overwrite this string. The string that you specify must match the current community string setting on the iDRAC server.

4. Under Settings, click Virtual Appliance Management.
5. In the SNMP Trap Forwarding section, review the iDRAC SNMP community strings.

Click Edit to see the list of SNMP community strings. The list should include any that were previously added on the Credentials Management page. For these community strings, the Used By column shows all the credentials that use them, and the Created By column shows Auto. You cannot update or delete SNMP community strings that are being used by credentials. Once the credential that contains a community string is deleted from the Credentials Management page, that community string is automatically removed from the SNMP Trap Forwarding page.

6. In the Alert Connector section, click Configure nodes for alert connector.

Check the Jobs page to see the running job. Wait for it to complete before proceeding.

7. To verify that the alert connector is receiving alerts, click Send Test Alert.
8. Go to the Alerts page to verify that you are receiving the alerts that you expect to see from the servers.

Confirming service settings

You can use a service that was originally deployed in a previous release of PowerFlex Manager. You do not need to re-create the service, unless it was an existing service from PowerFlex Manager version 3.4.

About this task

NOTE: Because compute-only upgrades are deprecated by the Gateway, remove the compute-only service from PowerFlex Manager before upgrading by following the manual steps for both Windows- and Linux-based compute-only nodes. After the upgrade, the compute-only nodes can be re-added in reserved mode to maintain firmware upgradability in the future.

PowerFlex Manager automatically converts a new service that was deployed in an earlier release. After you upgrade the virtual appliance, PowerFlex Manager may display a notification banner at the top of the Services page to alert you to the fact that a service is in a Critical state. To get the service into a Healthy state, you may need to confirm the service settings if PowerFlex Manager has added new required fields to any of the components.


PowerFlex Manager does not automatically convert an existing service from PowerFlex version 3.0.x, so you need to perform some additional steps to migrate a deployment from PowerFlex version 3.0.x.

Prerequisites

Upgrade the PowerFlex Manager virtual appliance and log in to PowerFlex Manager.

Steps

1. Go to the Services page and open a service.

2. If you see a notification banner indicating that the service is in a Critical state, click Confirm Service Settings to get the service into a Healthy state.

WARNING: Be sure to complete the Confirm Service Settings wizard before attempting to update the PowerFlex Gateway. After upgrading from 3.2 to 3.3, PowerFlex Manager starts the Confirm Service Settings wizard and requires that you specify the operating system to use on the service. If you attempt to upgrade your gateway without completing the Confirm Service Settings wizard, the update will fail.

3. If you are converting the service from PowerFlex version 3.0.x, perform these steps:

a. Remove the deployment information for the existing service without making any configuration changes to the deployed components. To do this, you need to select the service and click Remove Service on the Services page. Select Remove Service as the removal type.

b. Discover and import the hardware resources for the existing deployment. To do this, you need to click Add Existing Service on the Services page.

4. Repeat this process for each of the remaining services.

Adding a new Intelligent Catalog file and OS images to PowerFlex Manager

Add a new compliance file (Intelligent Catalog) and new OS image files using PowerFlex Manager.

About this task

PowerFlex Manager only supports Intelligent Catalog upgrades. Intelligent Catalog downgrades are not supported. Once you initiate an upgrade, it must run to completion. Contact your Dell EMC account team if you need further assistance with an upgrade.

Steps

1. Log in to PowerFlex Manager.

2. On the menu bar, click Settings and select Compliance and OS Repositories.

3. Click the question mark ? in the upper right corner of the Add Compliance File page and follow the online help.

4. Verify that Make this the default version for compliance checking is selected.

NOTE: The Intelligent Catalog does not contain OS image files. You must load OS files separately by clicking Settings and selecting Compliance and OS Repositories.

5. Go to Dell Technologies Support site and log in using the Service Tag associated with any of the PowerFlex nodes in the PowerFlex appliance.

6. Go to the Drivers & Downloads tab to download the Intelligent Catalog and OS image files.

NOTE: To be notified when new software releases are available, click Driver Notifications at the bottom of the Drivers & Downloads tab.

7. On the Compliance and OS Repositories page, click the OS Image Repositories tab and click Add.

8. In the Add OS Image Repository dialog box, enter the name of the repository, the image type, and the path of the OS image file name.

To download the minimal embedded operating system image file, go to CentOS.org and select the minimal ISO image.

9. Click Add.


Upgrade the PowerFlex presentation server

Discover the presentation server manually

For PowerFlex Manager version 3.6 and earlier, you must discover the presentation server manually.

About this task

For PowerFlex Manager version 3.6 and earlier, you must patch the OS manually, see Embedded OS RPM patching on the PowerFlex Gateway VM and presentation server VM.

The presentation server is supported from PowerFlex 3.5.

For upgrading to PowerFlex 3.6 from PowerFlex 3.0.x.x, see Deploy and configure the PowerFlex GUI presentation server.

For upgrading to PowerFlex 3.6 from PowerFlex 3.5.x.x, follow the steps below and then see Upgrade PowerFlex GUI presentation server using PowerFlex Manager.

Steps

1. From the menu bar, click Resources. From the Resources page, click Discover on the All Resources tab.

2. From the Discovery wizard, read the instructions and click Next.

3. From the Identify Resources page, click Add Resource Type.

4. From Resource Type, select a presentation server.

5. Enter the presentation server IP address in the IP/Hostname Range field.

6. From Resource State, select one of the Managed options.

7. From the Credentials list, select an existing credential or create one to discover resource types. To create a credential, click + to the right of the Credentials box. PowerFlex Manager maps the credential type to the type of resource you are discovering.

8. Click Next > Finish. The discovered presentation server is listed on the Resources page.

Upgrade PowerFlex GUI presentation server using PowerFlex Manager

Use this procedure to upgrade PowerFlex GUI presentation server using PowerFlex Manager version 3.7.

About this task

This upgrade includes the RPM, the OS patch, Java, and all required software components.

Steps

1. Log in to PowerFlex Manager.

2. Click Resources, select the PowerFlex GUI presentation server and click Update Resources.

3. From the Apply Resource page, select Allow PowerFlex Manager to perform non-disruptive updates now or Schedule non-disruptive updates to run later.

4. Click Apply.

5. Click Yes from the Confirm pop-up window.


Embedded OS RPM patching on the PowerFlex Gateway VM and presentation server VM

Use this procedure to apply embedded OS RPM patches to the PowerFlex Gateway VM and presentation server VM.

About this task

For PowerFlex Manager 3.6 and earlier, use this procedure to manually upgrade the OS version on the PowerFlex Gateway VM and presentation server.

For PowerFlex Manager 3.7 or later, OS patching is part of the upgrade and done automatically by PowerFlex Manager; you can skip this procedure.

Prerequisites

1. Type df -kh to verify that the VM has enough disk space.

Ensure 2 GB of disk space is available in /dev/sda1. If not, type sudo rm -r /var/cache/yum*.

2. Type uname -a to check the current version of the PowerFlex Gateway VM.

The name before applying the patch:

[root@sio-gateway ~]# uname -a

Linux sio-gateway 3.10.0-1127.18.2.el7.x86_64 #1 SMP Sun Jul 26 15:27:06 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

The name after applying the patch:

[root@sio-gateway ~]# uname -a

Linux sio-gateway 3.10.0-1160.el7.x86_64 #1 SMP Mon Oct 19 16:18:59 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

Steps

1. Log in to the PowerFlex Gateway VM using WinSCP.

2. Copy the patch script (SVM_OS_Patching_package_xxxxxxxx.zip) from the path VxFlex_OS_3.x.x_xxx_Complete_Software\VxFlex_OS_3.x.x_xxx_Complete_VMware_SW\ to /tmp. The script name must be SVM_OS_Patching_package_xxxxxxxx.zip.

3. Use SSH to log in to the Gateway VM, and check for the patch script ZIP file by typing cd /tmp and ls -ltr.

4. Type unzip SVM_OS_Patching_package_xxxxxxxx.zip.

5. Change the file permissions to make the script executable, type: chmod +x SVM_OS_Patching_package_xxxxxxxx.zip.

6. Type ./SVM_OS_Patching_package_xxxxxxxx.zip to execute the script.

7. Reboot the Gateway VM.

8. Type uname -a to verify the updated version.

9. To clean up the local patch files, type:

rm -f /tmp/SVM_OS_Patching_package_xxxxxxxx.zip

10. Repeat for the presentation server VM.

Deploy and configure the PowerFlex GUI presentation server

Use this procedure to deploy and configure the PowerFlex GUI presentation server using PowerFlex Manager.

Prerequisites

NOTE: For upgrades from RCMs 3.5.2.x and older, you must first discover the controller vCSA to be able to deploy the presentation server on the controller vCSA.

Steps

1. From the sample templates, clone the Management - Presentation Server template.


2. On the Template Information page, provide the template name, template category, template description, firmware and software compliance, and who has access to the service deployed from this template.

3. On the Additional Settings page, under Network Settings, select the network created for PowerFlex management. Under PowerFlex Presentation Server Settings select the presentation credential created for presentation server. Under Cluster Settings, select the VMware vCenter where you want to deploy the presentation server.

4. Click Finish.

5. After creating the template, click Template, select the presentation server template and click Edit.

6. Edit both PowerFlex Presentation Server and VMware Cluster components, select the required field, and then click Save.

7. Click Publish Template.

8. Deploy the service:

a. On the menu bar, click Templates > Deploy New Service.
b. Provide the name of the service to be deployed and click Next.
c. On the Deployment Settings page, configure the required settings and click Next.
d. Select Deploy Now and click Next.
e. Review the Summary page and click Finish.

9. Once deployed, access the presentation server using https://Presentation_Server_IP_Address:8443/.

NOTE: In PowerFlex Manager 3.7, the presentation server is auto discovered on the Resource page after the successful deployment. For PowerFlex Manager 3.6 or prior, the presentation server is not auto discovered and is not available on the Resource page.

10. To link the presentation server to PowerFlex, open a browser and enter https://Presentation_Server_IP_Address:8443/ to reach the login screen. Complete the one-time setup wizard. Once complete, the link goes directly to the login screen. Do the following:

a. Enter the IP address or hostname of the MDM server and click Next.
b. Click Agree to approve certificates.
c. Enter the username and password of the primary MDM.

Upgrade CloudLink Center

PowerFlex Manager upgrades the CloudLink Center version of the deployed service. Then, you must upgrade any CloudLink Agent that is not running the same version as the CloudLink Center to the same version as the CloudLink Center.

About this task

After the CloudLink Center upgrade completes successfully, the PowerFlex nodes require additional maintenance because the CloudLink Agent is not in compliance with the new version for the service. In this case, you need to update any services that are non-compliant from the Services page.

PowerFlex Manager does not allow you to downgrade the CloudLink Center.

Prerequisites

You must be running CloudLink version 6.9 to upgrade to CloudLink version 7.0. You must be running CloudLink version 7.0 to upgrade to CloudLink version 7.1.

Steps

1. Log in to PowerFlex Manager.

2. Go to the Resources tab and select All Resources, and then select a CloudLink Center to upgrade.

3. Click Update Resources.

4. Choose Allow PowerFlex Manager to perform firmware and software updates now or Schedule firmware and software updates.

5. Click Apply.

6. Go to the Services page to update any services that are not in compliance with the new version of the CloudLink Center.


Validate SNMP in CloudLink Center

Use this procedure to validate SNMP in CloudLink Center.

About this task

This procedure is valid only if SNMP is configured in CloudLink.

Steps

1. Log in to the CloudLink Center using secadmin credentials.

2. Click SERVER and select SNMP.

3. Click SNMP Test Trap to check for SNMP notifications.

4. If you receive a failed message, Failed to SEND TEST SNMP TRAP, perform the following steps:

a. Select the Host and click Modify.
b. Correct the Target Version, Port, or Community string and click Modify.
c. Click SNMP Test Trap.

Validate the syslog status in CloudLink Center

Use this procedure to validate the syslog status in CloudLink Center.

About this task

This procedure is valid only if syslog is configured in CloudLink.

Steps

1. Log in to the CloudLink Center using secadmin credentials.

2. Click SERVER and select Syslog.

Ensure that the service status is Active.

3. If the service status is Postponed, click Resume.

Upgrading PowerFlex

Use this procedure to upgrade PowerFlex (including the PowerFlex Gateway) with PowerFlex Manager. This procedure is supported on PowerFlex versions 3.0.0 to 3.x.

About this task

Upgrading PowerFlex is a two-step process. First, you upgrade a deployed service's PowerFlex Gateway from the Resources page. This will allow PowerFlex Manager to upgrade the PowerFlex Gateway version. Then it automatically upgrades all the SDSs for all the services that are tied to the Gateway.

The upgrade process performs some health prechecks to confirm that the service is healthy before the upgrade. If the service is not healthy, the PowerFlex Gateway upgrade is not successful.

Any nodes that require reconfiguration prior to an upgrade to PowerFlex 3.x are shown in the Needs Attention section of the wizard. If you choose to reconfigure all of the nodes, you can proceed with the upgrade process. If you select only a few of the nodes, PowerFlex Manager will reconfigure these nodes, but not proceed with the upgrade process until you have reconfigured the remaining nodes. You need to reconfigure all of the nodes before you can complete the upgrade process.

After the PowerFlex upgrade completes successfully, the storage-only services are in compliance with the new version. However, the nodes in the services may require additional maintenance. In this case, you must update any services that are non-compliant from the Services page. For more information (including SDC upgrades), see Update PowerFlex appliance nodes.

NOTE: The PowerFlex Gateway root file system may over-fill due to large Localhost_access.log files. To prevent this, refer to https://www.dell.com/support/kbdoc/en-us/541865.

In PowerFlex Manager 3.7, the PowerFlex Gateway upgrade includes the OS patch and java patches for the PowerFlex Gateway VM.


The compatibility management file facilitates the valid upgrade path. To check the recommended upgrade path:

1. Click Service, and click View Compliance Report.
2. From the Node Compliance Report page, from the Compliance Status, click Change.
3. From the Change Compliance File page, from Preferred Compliance, select the Intelligent Catalog file from the drop-down menu. If the upgrade path is valid, compatibility is displayed as Recommended. If the selected Intelligent Catalog is not on a valid path, the compatibility status displays Not allowed, along with details on the recommended and supported Intelligent Catalog versions allowed for the upgrade.
4. Click Cancel > Close. Do not change the Intelligent Catalog compliance on the Service page.
5. Go to Settings > Compliance and OS Repositories. Change the Default version to the Intelligent Catalog recommended in the service.

Prerequisites

Your system must be at version 3.0.x.x.
Ensure all nodes and VMs have working NTP with time synchronized.
Ensure there are no rebuilds and rebalances.
Verify that there are no high severity alerts.
Ensure that SVMs have sufficient memory and CPU. The PowerFlex Gateway requires a minimum of 8 GB of memory and 2 vCPUs for upgrading to and running PowerFlex versions 3.0.x.x and later. Verify the PowerFlex Gateway memory and update it if required before performing this task.
To upgrade the PowerFlex Gateway using PowerFlex Manager, manually uninstall any xcache packages currently on the SDS nodes by entering rpm -e EMC-ScaleIO-xcache-3.xxxx (see the example after this list).
A minimum of 16 GB SVM disk space is required for RCM 3.5.
In PowerFlex Manager 3.7, the compatibility management file facilitates the valid upgrade path. Ensure the valid Intelligent Catalog file is selected in the Default version on Compliance and OS Repositories before initiating the upgrade.
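A sketch of locating and removing the xcache package on an SDS node; the package version string below is the placeholder form used in this guide, so match it against what rpm -qa actually reports on your system:

rpm -qa | grep -i xcache
rpm -e EMC-ScaleIO-xcache-3.xxxx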

NOTE: PowerFlex Manager expects the LIA password to be the same as the MDM admin and PowerFlex Gateway administrator password. PowerFlex Manager will automatically change the LIA password to match the defined MDM/PowerFlex Gateway administrator password during future RCM upgrade operations.

Steps

1. Log in to PowerFlex Manager.

2. From the menu, click Resources.

3. On the Resources page, select the PowerFlex Gateway resource and click Update Resources.

4. On the Update Details page, check the Needs Attention section to see whether any of the nodes need to be reconfigured before upgrade. Select any nodes that you want to reconfigure. To select all nodes, click the box to the left of SDS Name.

5. Click Next.

6. On the Summary page, choose Allow PowerFlex Manager to perform non-disruptive updates now or Schedule non-disruptive updates to run later.

Specify the type of update you want to perform by selecting one of the following options:

Instant Maintenance Mode enables you to perform updates quickly. PowerFlex Manager does not migrate the data.

Protected Maintenance Mode (PowerFlex 3.5 or later) enables you to perform updates that require longer than 30 minutes in a safe and protected manner.

7. On the Summary page, choose Allow PowerFlex Manager to perform non-disruptive updates now or Schedule non-disruptive updates to run later.

NOTE: To verify that the appliance node is ready for a PowerFlex upgrade, check the Needs attention tab. If a node appears in this tab, select the node and click Finish. This ensures that the SVM has the required CPU and RAM capacity.

8. If you only selected a subset of the nodes for reconfiguration, confirm the reconfiguration by typing RECONFIGURE NODES. Otherwise, confirm the update action by typing UPDATE POWERFLEX.

If you reconfigured only a subset of the nodes, you need to restart the wizard later to reconfigure the remaining nodes before you can complete the upgrade process.

9. Click Finish.


10. Click Yes to confirm. Monitor the job progress. When the upgrade is completed, the PowerFlex Gateway appears as Compliant with IC. If you did not uninstall the xcache packages before beginning the upgrade process, the PowerFlex upgrade fails with this error: Command failed: An installation package of type xcache for rhel7 was not found.

11. From the Resources page, click Compliant with Default Catalog Report and verify the version.

12. Go to the Services page to update any nodes that are not in compliance with the new version of PowerFlex. Update nodes in a service from the Service page instead of the Resources page.

Upgrading Java on the PowerFlex Gateway and PowerFlex GUI presentation server

Upgrade Java to OpenJDK on the PowerFlex Gateway and PowerFlex GUI presentation server.

Prerequisites

CAUTION: Skip this task if using PowerFlex Manager 3.7 or greater. In PowerFlex Manager 3.7, Java gets updated as part of the PowerFlex Gateway upgrade using PowerFlex Manager. In PowerFlex Manager 3.6 or prior, Java is updated manually by this task.

Download OpenJDK and its dependency packages from the release repository. Copy the downloaded files (JavaPackages.tar.gz and java-1.8.0-openjdk-headless-1.8.0.xxx.bxx-x.el7_9.x86_64.rpm) to /root/install on the PowerFlex Gateway VM and PowerFlex GUI presentation server using WinSCP.

On the PowerFlex Gateway and PowerFlex GUI presentation server VM, verify the version, type: # java -version.

Steps

1. For the PowerFlex Gateway, shut down the gateway service, type: # systemctl stop scaleio-gateway.

2. Install or upgrade the OpenJDK dependencies on the PowerFlex Gateway and PowerFlex GUI presentation server.

a. Change directory to /root/install, type: # cd /root/install.

b. List the dependencies, type: ls
c. Check for the copied OpenJDK dependency file JavaPackages.tar.gz.

d. Decompress the file, type: tar -zxf JavaPackages.tar.gz.

e. Install or upgrade the OpenJDK dependency packages, type:

cd JavaPackages/
rpm -Uvh *.rpm

3. Remove the existing Oracle Java. First, query the Java package information, type: rpm -qa | grep -i jre.

4. Capture the existing Java package name from the previous step and delete it, type: rpm -e <package name>.
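For illustration only, if the query in the previous step returned a package named jre1.8.0_202-1.8.0_202-fcs.x86_64 (a hypothetical name; use whatever rpm -qa reports on your system), the removal would be:

rpm -e jre1.8.0_202-1.8.0_202-fcs.x86_64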

5. Install the new OpenJDK, type: rpm -ivh /root/install/java-1.8.0-openjdk-headless-1.8.0.xxx.bxx-x.el7_9.x86_64.rpm.

6. Verify the version, type: # java -version.

The version is OpenJDK 64-bit server VM build 25.XXX-b09, mixed mode.

NOTE: The upgrade will automatically upgrade LockBox and restart the service.

7. Validate the gateway service is running, type: # systemctl status scaleio-gateway.service.

8. Validate the LockBox credentials, type:

cd /opt/emc/scaleio/gateway/bin
./FOSGWTool.sh --query_esx_credentials

Example of the command output:


Default ESX credentials exist.
Specific ESX credentials configuration:
Ip: 192.168.100.1-192.168.100.20

9. Validate the gateway is working:

a. Log in to the PowerFlex Gateway GUI.
b. Retrieve the system topology and enter the MDM details.
c. Verify that all PowerFlex cluster details are listed.

Update PowerFlex appliance nodes

You can update a PowerFlex appliance node using PowerFlex Manager.

Steps

1. Log in to PowerFlex Manager and select Services.

2. Select the existing deployment that you are upgrading to view its details.

3. Click Services. On the Services page, select a service. To change the IC, select View Compliance Report.

4. On the Node Compliance Report page, click Change on the Compliance status. Select the RCM as the preferred compliance file. Confirm the change by typing CHANGE COMPLIANCE FILE and click Save and Close.

NOTE: In earlier versions of PowerFlex Manager, on the Details page, see the Target version/Target IC version at the top right. To change the IC, click Change Target/Change Target IC. You can set it to the default IC or a different IC.

5. On the Service Details page, in the right pane under Service Actions, click View Compliance Report.

6. From the compliance report, view the firmware or software components, select the specific nodes that are non-compliant, and click Update Resources.

a. To perform a non-disruptive update right away, select Allow PowerFlex Manager to perform firmware and software updates now. Select one of the following to specify the type of update:

Instant Maintenance Mode - provides quick updates. PowerFlex does not migrate the data.

Protected Maintenance Mode - provides updates that require longer than 30 minutes in a safe and protected manner.

b. To perform a non-disruptive update at a later time, select Schedule firmware and software updates.
c. To perform a disruptive update right away for a full upgrade, select Allow PowerFlex Manager to perform disruptive updates now.

The full system upgrade process is faster. However, the nodes, as well as all of the data, are unavailable while the upgrade is in process. If you are certain that you want to proceed, type REBOOT ALL NODES AT ONCE.

d. Click Apply and click Yes to confirm.

The update process handles node, BIOS, firmware, VMware ESXi driver updates, and VMware ESXi major version upgrades automatically. For PowerFlex, the update process also updates SDCs in any hyperconverged and compute-only services, if these SDCs are not in compliance with the new version for the service. For CloudLink, PowerFlex Manager automatically updates the CloudLink Agent.

PowerFlex Manager does not upgrade VMware vCenter itself. However, it does check the VMware vCenter version to determine if it matches the VMware ESXi version. If the VMware ESXi version is greater than the VMware vCenter version, PowerFlex Manager blocks the VMware ESXi host upgrade and displays an error. PowerFlex Manager instructs you to upgrade VMware vCenter first, or use a different compliance version that is compatible with the installed VMware vCenter version.

7. If you encounter any errors while performing firmware or software updates, you can view the PowerFlex Manager logs for the service to see where the error might have occurred.

a. On the Service Details page, in the right pane, under Service Actions, click Generate Troubleshooting Bundle.

This creates a compressed file that contains PowerFlex Manager application logs, PowerFlex Gateway logs, iDRAC lifecycle logs, Dell EMC PowerSwitch switch logs, Cisco Nexus switch logs, and VMware ESXi logs. The logs are for the current service only.


Alternatively, you can access the logs from a VMware console, or by using SSH to log in to PowerFlex Manager, if you have SSH enabled.

Migrating VMware vSphere Cluster Services (vCLS) VMs

This task helps migrate the VMware vCLS VMs to a service datastore using the Migrate vCLS VM wizard in PowerFlex Manager and bring the service into managed mode.

About this task

VMware vCLS is a new feature in VMware vSphere 7.0 Update 2a. This feature ensures cluster services such as vSphere DRS and vSphere HA are available to maintain the resources and health of the workloads running in the clusters independent of the VMware vCenter server instance availability.

Steps

1. Log in to PowerFlex Manager and select the Service tab.

2. Select the existing hyperconverged cluster.

3. Go to the Service page and click the Migrate vCLS wizard.

4. Select the volume and datastore to migrate the vCLS VMs.

NOTE: For example, the volumes could be named powerflex-service-vol-1 and powerflex-service-vol-2, and the datastores powerflex-esxclustershortname-ds1 and powerflex-esxclustershortname-ds2.

5. Click Finish. This action creates two volumes and two datastores of 16 GB each, and the VMs are migrated to the service datastores.

Upgrading Cisco NX-OS 7.x to Cisco NX-OS 9.x

Use this procedure to upgrade Cisco NX-OS 7.x to Cisco NX-OS 9.x.

About this task

Extra steps are required to compact the Cisco NX-OS image files for the upgrade to complete successfully.

Steps

1. Start an SSH session to the switch.

2. Enter the following command to commit to persistent storage. In addition, copy the config to a remote server (jump server): copy running-config startup-config.
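A sketch of the off-box copy over SCP, with a hypothetical jump server address, user, and filename (NX-OS prompts for the VRF and password):

copy running-config scp://admin@192.168.100.50/switch-backup.cfg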

3. Determine the current running version, type: show version.

NOTE: The output from running the show version command displays the running firmware version. Depending on your switch model, near the bottom of the display, the previous running version may also display and should not be confused with the current running version.

Software
  BIOS: version 07.61
  NXOS: version 7.0(3)I7(3)

4. Check the contents of the bootflash directory to verify that enough free space is available for the new Cisco NX-OS software image.

a. To check the free space on the flash, type: dir bootflash:.

For example:


Usage for bootflash://
1275002880 bytes used
375902208 bytes free
1650905088 bytes total

b. Delete older firmware files to make additional space, if needed, type: delete bootflash:nxos.7.0.2.I7.6.bin.

NOTE: Do not delete the current running version of the firmware files, as shown in the previous show version output. The Cisco Nexus 3000 and Cisco Nexus 9000 switches do not provide a confirmation prompt before deleting them.

5. If upgrading a Cisco Nexus 3000 series switch, type the following to compact the current running image file: switch# install all nxos bootflash:nxos.7.0.3.I7.bin compact

6. Using an SCP, FTP, or TFTP server, copy the firmware file to local storage on the Cisco Nexus switch.

Use the following TFTP command to copy the image:

copy tftp://XXX.XXX.XXX.XXX/nxos.9.3.3.bin bootflash:

Use SCP to copy the image:

copy scp://filescp@x.x.x.x//home/filescp/image/nxos.9.3.3.bin bootflash:

The firmware files are hardware model-specific. The firmware follows the same naming convention as the current running firmware files (show version).

NOTE: For Cisco Nexus 3000 series switches, use the following command to copy the image:

copy scp://filescp@x.x.x.x//home/filescp/image/nxos.9.3.3.bin bootflash:nxos.9.3.3 compact

NOTE: If warnings of not enough space to copy files continue, perform an SCP copy with the compact option to compact the file as it is copied over. Doing this may trigger a known defect. The workaround for this defect requires cabling the management port and configuring its IP address on a shared network with the SCP server, allowing the copy to take place across that management port. Once complete, go to Step 7.

7. Identify the upgrade impact, type: show install all impact.

switch# show install all impact nxos bootflash:nxos.9.3.3.bin

Review the output to confirm that the image is compatible for an upgrade.

8. Start the upgrade process, type: install all nxos bootflash:nxos.9.3.3.bin.

NOTE: For Cisco Nexus 3000 series switches, use the following command for the install process:

install all nxos bootflash:nxos.9.3.3.bin compact

NOTE: If you receive errors regarding free space on the bootflash, go to Step 4 and ensure you removed older firmware files to free additional disk space for the upgrade to take place. Check all subdirectories on bootflash when searching for older bootflash files.

An example failure message:

Installer is forced disruptive.
Pre-upgrade check failed. Return code 0x40930062 (free space in the filesystem is below threshold).

After the upgrade, the switch reboot could take 5 to 10 minutes. Use a continuous ping command from the jump server to validate when the switch is back online.

Installer will perform compatibility check first. Please wait.
Installer is forced disruptive

Verifying image bootflash:/nxos.9.3.3.bin for boot variable "nxos".
[###############################] 100% -- SUCCESS
Verifying image type.
[###############################] 100% -- SUCCESS
Preparing "nxos" version info using image bootflash:/nxos.9.3.3.bin
[###############################] 100% -- SUCCESS
Preparing "bios" version info using image bootflash:/nxos.9.3.3.bin
[###############################] 100% -- SUCCESS
Performing module support checks.
[###############################] 100% -- SUCCESS
Notifying services about system upgrade.
[###############################] 100% -- SUCCESS

Switch will be reloaded for disruptive upgrade.
Do you want to continue with the installation (y/n)? [n] y

Install is in progress, please wait.
Performing runtime checks.
[###############################] 100% -- SUCCESS
Setting boot variables.
[###############################] 100% -- SUCCESS
Performing configuration copy.
[###############################] 100% -- SUCCESS
Module 1: Refreshing compact flash and upgrading bios/loader/bootrom.
Warning: please do not remove or power off the module at this time.
[###############################] 100% -- SUCCESS

Finishing the upgrade, switch will reboot in 10 seconds.

For a continuous ping, type: ping 1.1.1.1 -t, replacing 1.1.1.1 with the switch management IP address.

9. Using SSH, log back in to the switch with username and password.

10. Display the entire upgrade process, type: switch# show install all status.

11. Verify that the switch is running the correct new version, type: switch# show version.

For example:

Software
  BIOS: version 07.66
  NXOS: version 9.3(3)
BIOS compile time: 06/11/2019
NXOS image file is: bootflash://nxos.9.3.3.bin
NXOS compile time: 12/22/2019 2:00:00 [12/22/2019 09:00:37]

Upgrading the electronic programmable logic device (EPLD)

Steps

1. Start an SSH session to the switch.

2. Enter the following command to commit to persistent storage. In addition, copy the configuration to a jump server: copy running-config startup-config.

3. Determine the current running version, type: show version module 1 epld.

Wasps-N93180YC-TOR1-A# show version module 1 epld

EPLD Device    Version
-------------------------------
MI FPGA        0x4
IO FPGA        0x9

4. Check the contents of the bootflash directory to verify that enough free space is available for the image.

a. Check the free space on the flash, type: dir bootflash:.

Example command output:

Usage for bootflash://
1275002880 bytes used
375902208 bytes free
1650905088 bytes total

b. Delete older firmware files to make additional space, if needed.

NOTE: The Cisco Nexus 3000 and Cisco Nexus 9000 switches do not provide a confirmation prompt before deleting them.

NOTE: The Cisco Nexus 3172 switch and Cisco Nexus 3132 switch do not require an EPLD upgrade.


5. Using an SCP, FTP, or TFTP server, copy the firmware file to local storage on the Cisco Nexus switch.

Use the following TFTP command to copy the image:

copy tftp://XXX.XXX.XXX.XXX/n9000-epld.9.3.3.img bootflash:

Use SCP to copy the image:

copy scp://filescp@x.x.x.x//home/filescp/image/n9000-epld.9.3.3.img bootflash:

6. To determine if you must upgrade, type: show install all impact epld bootflash:n9000-epld.9.3.3.img.

Wasps-N93180YC-TOR1-A# show install all impact epld bootflash:n9000-epld.9.3.3.img

Retrieving EPLD versions.... Please wait.
Images will be upgraded according to following table:
Module  Type  EPLD      Running-Version  New-Version  Upg-Required
------  ----  --------  ---------------  -----------  ------------
1       SUP   MI FPGA   0x04             0x04         No
1       SUP   IO FPGA   0x09             0x15         Yes
Compatibility check:
Module  Type  Upgradable  Impact      Reason
------------------------------------------------
1       SUP   Yes         disruptive  Module Upgradable

7. Start the upgrade process, type: install epld bootflash:n9000-epld.9.3.3.img module all.

Wasps-N93180YC-TOR1-A# install epld bootflash:n9000-epld.9.3.3.img module all
Digital signature verification is successful
Compatibility check:
Module  Type  Upgradable  Impact      Reason
------------------------------------------------
1       SUP   Yes         disruptive  Module Upgradable
Retrieving EPLD versions... Please wait
Images will be upgraded according to following table:
Module  Type  EPLD      Running-Version  New-Version  Upg-Required
------  ----  --------  ---------------  -----------  ------------
1       SUP   MI FPGA   0x04             0x04         No
1       SUP   IO FPGA   0x09             0x15         Yes
The above modules require upgrade.
The switch will be reloaded at the end of the upgrade
Do you want to continue (y/n) ? [n] y

Proceeding to upgrade Modules.

Starting Module 1 EPLD Upgrade

Module 1 : IO FPGA [Programming] : 100.00% ( 64 of 64 sectors)
Module 1 EPLD upgrade is successful.
Module  Type  EPLD      Running-Version  New-Version  Upg-Required
------  ----  --------  ---------------  -----------  ------------
1       SUP   MI FPGA   0x04             0x04         No
Module 1 EPLD upgrade is successful.

NOTE: After the upgrade, the switch reboot could take 5 to 10 minutes. Use a continuous ping command from the jump server to validate when the switch is back online.

8. Using SSH, log back in to the switch with username and password.

9. Verify that the switch is running the correct new version, type: show install epld status.


Wasps-N93180YC-TOR1-A# show install epld status

1) Module 1 upgraded on Wed Apr 8 02:26:31 2020 (545665 us)
EPLD Install Image: EPLD image file 9.3.3. built on Sun Dec 22 02:25:45 2019

Status: EPLD Upgrade was Successful

EPLD      Curr Ver  Old Ver
------------------------------------------------------
IO FPGA   0x15      0x9

2) Module 1 upgraded on Wed Apr 8 02:23:31 2020 (545546 us)
EPLD Install Image: EPLD image file 9.3.3. built on Sun Dec 22 02:25:45 2019

Status: EPLD Upgrade was Successful

The Golden (primary backup) copy of the EPLD now needs to be updated.

10. Type show version module 1 epld.

Vikings-N93180YC-A# sh version module 1 epld

EPLD Device    Version
---------------------------------------
MI FPGA        0x10
IO FPGA        0x17

11. Update the Golden EPLD image, type: install epld bootflash:n9000-epld.9.3.3.img module 1 golden.

Vikings-N93180YC-A# install epld bootflash:n9000-epld.9.3.3.img module 1 golden
Digital signature verification is successful
Compatibility check:
Module  Type  Upgradable  Impact      Reason
------  ----  ----------  ----------  ------
1       SUP   Yes         disruptive  Module Upgradable

Retrieving EPLD versions.... Please wait.
Images will be upgraded according to following table:
Module  Type  EPLD      Running-Version  New-Version  Upg-Required
------  ----  --------  ---------------  -----------  ------------
1       SUP   MI FPGA   0x10             0x10         Yes
1       SUP   IO FPGA   0x17             0x20         Yes
The above modules require upgrade.
The switch will be reloaded at the end of the upgrade
Do you want to continue (y/n) ? [n] y

Proceeding to upgrade Modules.

Starting Module 1 EPLD Upgrade

Module 1 : MI FPGA [Programming] : 100.00% ( 64 of 64 sectors)
Module 1 : IO FPGA [Programming] : 100.00% ( 64 of 64 sectors)
Module 1 EPLD upgrade is successful.
Module  Type   Upgrade-Result
------  ----   --------------
     1  SUP    Success

Module 1 EPLD upgrade is successful.

Reseting Active SUP (Module 1) FPGAs. Please wait...

NOTE: After the upgrade, the switch reboot could take 5 to 10 minutes. Use a continuous ping command from the jump server to validate when the switch is back online.

NOTE: Do not upgrade the Golden EPLD image for NX-OS version 9.3(5) and later unless otherwise specified.


12. Using SSH, log back in to the switch with username and password.

13. Verify that the switch is running the correct new version, type show version module 1 epld.

Vikings-N93180YC-A# sh version module 1 epld

EPLD Device       Version
---------------------------------------
MI FPGA           0x10
IO FPGA           0x20

Upgrade firmware for IPI G5 network controller

Upgrade the IPI G5 network controller firmware to a version consistent with the RCM using the IPI G5 network controller web UI.

Prerequisites

Log in to the Dell Technologies Support site.
Download the latest Panduit IPI G5 firmware.

Steps

1. Log in to the IPI G5 network controller.

2. Click the gear icon to access Settings, then select System Management.

3. Select the Actions menu and select Upload Firmware.

4. Click Choose File and select the firmware file to upload.

After the system updates to the new firmware version, the IPI G5 network controller reboots automatically.


Upgrading VMware NSX-T Edge nodes

This section describes how to upgrade the VMware NSX-T Edge nodes to the latest Intelligent Catalog when available.

To upgrade the VMware NSX-T Edge nodes to the latest Intelligent Catalog, you or VMware Services must upgrade the NSX-T Data Center before upgrading VMware vSphere ESXi on the NSX-T Edge nodes.

Upgrade one VMware NSX-T Edge node fully before proceeding to upgrade the next node.

Ideally, all the NSX-T Edge nodes should run the same Intelligent Catalog version. The recommendation is to upgrade all of them together when logistically possible.

Use the following workflow to complete the upgrade:
1. Stage and upgrade the iDRAC and firmware.
2. Validate that the vSAN is error free (vSAN storage option only).
3. Shut down all the VMs running on the NSX-T Edge Gateway hosts.
4. Place the NSX-T Edge Gateway ESXi host in maintenance mode.
5. Upgrade VMware vSphere ESXi.
6. Power on all the VMs running on the NSX-T Edge Gateway hosts.
7. Upgrade the iDRAC service module.
8. Upgrade the VMware distributed switches.
9. Upgrade the VMware vSAN disk format (vSAN storage option only).
10. Verify the VMware vSAN health (vSAN storage option only).
11. Migrate the vCLS VM to NSX-T Edge nodes (vSAN storage option only).

Stage and upgrade the iDRAC and firmware

Stage the iDRAC and firmware for the VMware NSX-T Edge nodes.

Prerequisites

The iDRAC firmware upgrade must be done before any other upgrades. After upgrading the iDRAC firmware, upgrade the other component firmware.

Steps

1. Log in to the iDRAC web interface by opening a Mozilla Firefox or Google Chrome browser and going to https://<iDRAC IP address>.

NOTE: Under Server Information, review the System Host Name and verify that you have connected to the correct hostname.

2. Select Maintenance > System Update > Manual Update and click Choose File.

3. Go to the Intelligent Catalog folder /shares/xxxxx and select the component update file. The components to update include:

iDRAC service module
Dell BIOS
Dell BOSS controller
Dell iDRAC/Lifecycle controller
Dell Intel X550/X540/i350
Dell Mellanox ConnectX-4 LX
Dell PERC H740P mini RAID controller

4. Click Upload.

5. Select the firmware that you uploaded and click Install Next Reboot.



CAUTION: Do NOT click Install and Reboot, as it could cause a system outage.

NOTE: The installation will be in the job queue for the next reboot. Click Job Queue from the prompted information message to monitor the progress of the installation.

Validate the vSAN health

Validate that the vSAN is error free only if the vSAN is configured as the storage option on the NSX-T Edge Gateway nodes.

Steps

1. From the VMware vSphere Client, click Cluster > Monitor > vSAN > Skyline Health.

2. Ensure the vSAN is healthy.

If the vSAN is not healthy, address the issues before continuing with the upgrade.

Shut down all the VMs on the NSX-T Edge Gateway host

Use this procedure to shut down all the VMs running on the NSX-T Edge Gateway node.

Steps

1. Log in to the web UI of the controller VMware ESXi host directly.

2. Select Virtual Machines.

3. Shut down all the VMs except the jump server running on the NSX-T Edge Gateway host.

Put VMware NSX-T Edge Gateway host into maintenance mode

Place the VMware NSX-T Edge Gateway host into maintenance mode.

Prerequisites

Migrate the online VMs before putting the host into maintenance mode.

Steps

1. On the VMware vSphere Client, click Hosts and Clusters.

2. Right-click the host and select Maintenance Mode > Enter Maintenance Mode.

3. Verify Move powered-off and suspended virtual machines to other hosts in the cluster is not selected.

4. Verify Ensure data accessibility is selected.

5. Click OK to put the host into maintenance mode.

Upgrade VMware vSphere ESXi

Use this task to upgrade VMware vSphere ESXi.

Steps

1. Use WinSCP to copy the ESXi-6.x.0-xxxxxx.zip patch file to the /vmfs/volumes/vsanDatastore/ISO folder on the VMware ESXi server (where XX is unique for each host).


2. Using the SSH shell, connect to the VMware ESXi host and check for the uploaded file, type cd /vmfs/volumes/vsanDatastore/ISO and then ls to list the directory contents.

3. To update the profile image on the host:

a. To optionally list the profiles in the ESXi zip archive, type esxcli software sources profile list -d /vmfs/volumes/vsanDatastore/ISO/Esxi-6.7.0-XXXXXXXX-X.X.X.X_Dell_XXG.zip. The following output appears:

Name                                         Vendor        Acceptance Level
------------------------------------------   ------------  ----------------
ESXi-6.7.0-20200804001-standard-customized   VMware, Inc.  PartnerSupported

b. To upgrade the VMware ESXi version, type esxcli software profile update -p ESXi-6.7.0-20200804001-standard-customized -d /vmfs/volumes/vsanDatastore/ISO/Esxi-6.7.0-XXXXXXXX-X.X.X.X_Dell_XXG.zip

When the upgrade completes successfully, the following message displays, followed by the list of upgraded packages:

Update Result
Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.
Reboot Required: true

4. To upgrade the iDRAC service module:

a. Use WinSCP to upload ISM-Dell-Web-3.x.x-xxxx.VIB-ESX6i-Live_A00.zip to the /vmfs/volumes/vsanDatastore/ISO folder.

b. Use SSH to access the VMware ESXi nodes and type esxcli software vib install -d /vmfs/volumes/vsanDatastore/ISO/ISM-Dell-Web-3.x.x-xxxx.VIB-ESX6i-Live_A00.zip.

5. Reboot the ESXi host. Select Power > Reset (Warm Boot).

6. Press F2 to enter system setup.

7. Under System BIOS > Boot Settings, set the boot mode to UEFI.

NOTE: Ensure that the BOSS card is set as the primary boot device from the UEFI Device Path under the Boot tab. If the BOSS card is not set as the primary boot device, reboot the server and change the UEFI boot sequence from System BIOS > Boot Settings > UEFI Boot Settings.

8. Click Back > Back > Finish > Yes > Finish > OK > Finish > Yes. The node reboots. Proceed to the Exit maintenance mode section.

9. Repeat these steps on all VMware ESXi servers.
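After each host reboots and you log back in over SSH, you can optionally confirm it is running the expected build before exiting maintenance mode. A minimal check using a standard esxcli command:

esxcli system version get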

Next steps

You must complete the upgrade for all hosts before proceeding to the Distributed Virtual Switch upgrade.

Exit maintenance mode

Use this procedure to take the VMware NSX-T Edge Gateway host out of maintenance mode.

Steps

1. From the VMware vSphere Client Home screen, select Hosts and Clusters.

2. Right-click the host and select Exit Maintenance Mode.


Power on all VMs running on the VMware NSX-T Edge Gateway host

Use this procedure to power on all the VMs running on the VMware NSX-T Edge Gateway node.

Steps

1. Log in to the VMware NSX-T Edge Gateway host.

2. Select Virtual Machines.

3. Power on all the VMs running on the VMware NSX-T Edge Gateway host.

Upgrade the iDRAC service module

Use this procedure to upgrade the iDRAC Service Module (iSM).

Steps

1. Use WinSCP to upload ISM-Dell-Web-3.x.x-xxxx.VIB-ESX6i-Live_A00.zip to the /vmfs/volumes/vsanDatastore/ISO folder.

2. Use SSH to access the VMware ESXi nodes and type esxcli software vib install -d /vmfs/volumes/vsanDatastore/ISO/ISM-Dell-Web-3.x.x-xxxx.VIB-ESX6i-Live_A00.zip.
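To optionally confirm the iSM VIB is present after the install, you can list the installed VIBs and filter for Dell entries; a minimal sketch:

esxcli software vib list | grep -i dell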

Upgrade the VMware vSphere Distributed Switch

Use this procedure to upgrade the VMware vSphere distributed switch.

Steps

1. Connect to the VMware vCenter server using the VMware vSphere Client.

2. Click Networking and select the VMware Distributed Switch you want to upgrade.

3. Right-click the dvSwitch and select Settings > Export Configuration.

4. Select Distributed switch and all port groups as the configuration to export.

5. Enter a description and click OK and Yes.

6. Select the location, enter the file name, and click Save.

NOTE: For VMware vSphere 6.7, from the vSphere Client HTML5 and Mozilla Firefox, click OK twice. There is no prompt for filename or save. With Google Chrome, click OK once. There is no prompt for filename or save.

7. To upgrade VMware vSphere Distributed Switch, right-click Distributed Switch > Upgrade > Upgrade Distributed Switch.

NOTE: There are two upgrade options available, Upgrade Network I/O Control and Enhanced LACP Support. The Network I/O Control upgrade is required. The Enhanced LACP Support option is required only if it is enabled.

8. On the first screen, select Next to confirm the upgrade.

9. VMware vCenter checks whether the connected hosts are compatible. Click Next to continue. The last screen summarizes the steps of the upgrade to the VMware Distributed Switch.

10. Click Finish.

11. Repeat the steps for all VMware vSphere Distributed Switches.


Upgrade the VMware vSAN disk format (vSAN storage option only)

Use this procedure to upgrade the VMware vSAN disk format.

Prerequisites

Verify that you are using the updated version of VMware vCenter Server.
Verify that you are using the latest version of VMware NSX-T Edge Gateway hosts.
Verify that the disks are in a healthy state. (In the vSphere Client, navigate to Host and Clusters, highlight your PowerFlex management controller cluster, click the vSAN tab, and click Physical Disks to verify the object status in the right-hand column.)

Verify that your hosts are not in maintenance mode. When upgrading the disk format, do not place the hosts in maintenance mode. When any member host of a vSAN cluster enters maintenance mode, the member host no longer contributes capacity to the cluster. The cluster capacity is reduced and the cluster upgrade might fail.

Steps

1. Navigate to the vSAN cluster in the VMware vSphere Client.

2. From Host and Clusters, highlight your PowerFlex management controller cluster and click the Configure tab on the right hand pane.

3. Under vSAN, select General.

4. Under On-Disk Format Version, click Pre-Check Upgrade.

The upgrade pre-check analyzes the cluster to uncover any issues that might prevent a successful upgrade. Some of the items checked are host status, disk status, network status, and object status. Upgrade issues appear in the disk pre-check status text box.

Run the pre-check before initiating the on-disk format upgrade task.

5. Under On-Disk Format Version, click Upgrade.

6. Verify the check box beside Allow Reduced Redundancy is cleared.

7. Click Yes on the Upgrade box to perform the upgrade of the on-disk format.

Verifying VMware vSAN health (vSAN storage option only)

Use this procedure to verify VMware vSAN health.

Steps

1. From the VMware vSphere Client, navigate to the vSAN cluster.

2. Navigate to Home > Host and Clusters, and highlight the PowerFlex management controller cluster.

3. Click Monitor > vSAN > Skyline Health.

4. Verify that all the tests have passed.


Enable replication on existing PowerFlex hyperconverged nodes

This section describes how to convert existing non-replication PowerFlex hyperconverged nodes to replication enabled PowerFlex hyperconverged nodes.

This guide assumes you have two standard PowerFlex appliances (source and target) deployed, each having a separate MDM cluster. Networking must be in place between the two sites before proceeding with replication.

It is also possible to create replication between PowerFlex hyperconverged nodes and PowerFlex storage-only nodes.

Prerequisites

The following requirements are needed before proceeding with enabling replication:

LACP bonded NIC port design
PowerFlex node with PowerFlex 3.6
PowerFlex hyperconverged node with a minimum of 2 sockets x 12 cores each
Journal capacity (sized on delta change rate for each replicated volume)
Additional external VLANs for replication must be added (flex-rep1- , flex-rep2- ), used for Storage Data Replication (SDR) to SDR communication between source and destination sites for replicating data
At least one protection domain (source and destination)
At least one storage pool (source and destination)
SDS devices have been added to the appropriate storage pool (source and destination)
PowerFlex nodes installed at the source and destination sites with communication between them (MDM to MDM communication required in addition to external networks)
At least one identical size volume on both source and destination sites. The volume at the source site must be mapped; the volume on the destination site is used for replication and must be unmapped.

Workflow

Here is the workflow for removing the existing PowerFlex hyperconverged nodes from PowerFlex Manager and enabling replication on the existing PowerFlex hyperconverged nodes:

1. Remove the existing PowerFlex hyperconverged nodes from PowerFlex Manager.
2. Create and configure replication port groups (flex-rep1- and flex-rep2- ) in flex_dvswitch.
3. Prepare SVM for replication, as follows:
   a. Enter the Storage Data Server (SDS) node (SVMs) into maintenance mode.
   b. Add the virtual NICs to the SVMs for Storage Data Replication (SDR) external communication.
   c. Modify vCPU, memory, virtual Non-Uniform Memory Access (vNUMA), and CPU reservation settings on SVMs.
4. Power on the SVM and configure the network interfaces.
5. Install the SDR on the SDS nodes (SVMs).
6. Exit SDS maintenance mode.
7. Add journal capacity percentage. The recommended starting value is 10%.
8. Add the Storage Data Replicator (SDR) to the PowerFlex nodes.
9. Create the peer system between the source and destination site.
10. Add the peer system.
11. Create the replication consistency group (RCG).
12. Define network for replication in PowerFlex Manager. Do not define the gateway.
13. Add an existing service to PowerFlex Manager.



Remove an existing PowerFlex hyperconverged service from PowerFlex Manager

Remove an existing PowerFlex hyperconverged service from PowerFlex Manager to install replication components.

About this task

You must remove the service so that it can be added back to PowerFlex Manager after the replication components are installed. Remove the hyperconverged service at both the source and destination sites only if both sites are hyperconverged; otherwise, remove it only at the hyperconverged site.

Steps

1. Log in to PowerFlex Manager.

2. On the menu bar, click Services.

3. On the Services page, click the service and in the right pane, click View Details.

4. On the Service Details page, in the right pane, under Service Actions, click Remove Service.

5. In the Remove Service dialog box, select Remove Service.

6. Select Leave nodes in PowerFlex Manager inventory and set the state to Managed.

7. Click Remove.

Create and configure replication port groups

Use this task to create and configure replication port groups (flex-rep1- and flex-rep2- ) in flex_dvswitch.

Steps

1. Log in to the VMware vSphere client and select the Networking inventory view.

2. Select Inventory, right-click flex_dvswitch and select New Port Group.

3. Type flex-rep1 and click Next.

4. From VLAN type menu, select VLAN and in VLAN ID enter 161 (as per the Logical Configuration Survey (LCS)).

5. Select Customize default policies configuration under Advanced option.

6. Click Next > Next > Next.

7. From the Teaming and failover tab:

a. Change Load Balancing to Route based on IP hash.
b. Move the LACP-Lag uplink up under Active uplinks.
c. Move uplink1 and uplink2 down under Unused uplinks.
d. Click Next.

8. Click Next > Next > Finish.

9. Repeat Steps 2 through 8 to create the following port group: flex-rep2 (VLAN ID as per the LCS).

Preparing the SVMs for replication

Use the following procedures to prepare the SVMs for replication:

Set the SDS NUMA
Enable replication on PowerFlex nodes with FG pool
Verify Network Manager is disabled
Update the network configuration
Update the grub configuration file


Set the SDS NUMA

Use this task to allow the SDS to use the memory from the other NUMA.

Steps

1. Log in to the SDS (SVMs) using PuTTY.

2. Append the line numa_memory_affinity=0 to the SDS configuration file /opt/emc/scaleio/sds/cfg/conf.txt, type: # echo numa_memory_affinity=0 >> /opt/emc/scaleio/sds/cfg/conf.txt.

3. Verify that the line is appended, type: # cat /opt/emc/scaleio/sds/cfg/conf.txt.

Enabling replication on a PowerFlex appliance with FG Pool

Use this task to enable replication on a PowerFlex appliance with FG Pool.

About this task

If the PowerFlex appliance has an FG pool and you want to enable replication, increase the SDS thread count from the default of eight to ten.

Steps

1. SSH to primary MDM, then log in to PowerFlex cluster, using #scli --login --username admin.

2. Query the current value, type: # scli --query_performance_parameters --print_all --tech --all_sds | grep -i SDS_NUMBER_OS_THREADS.

3. Set the value of SDS_number_OS_threads to 10, type: # scli --set_performance_parameters --sds_id <SDS ID> --tech --sds_number_os_threads 10.

NOTE: Do not set the SDS threads globally, set the SDS threads per SDS.

Verify Network Manager is disabled

Use this task to ensure that Network Manager is disabled.

Steps

1. Log in to the SDS (SVMs) using PuTTY.

2. Run # systemctl status NetworkManager to ensure that Network Manager is not running.

Output must display disabled and inactive.

3. If it is enabled and active, stop and disable the service, run:

# systemctl stop NetworkManager
# systemctl disable NetworkManager

Update the network configuration

Use this task to update the network configuration file for all the network interfaces.

Steps

1. Log in to SDS (SVMs) using PuTTY.

2. Make a note of MAC addresses of all the interfaces, using: #ifconfig or #ip a.

198 Enable replication on existing PowerFlex hyperconverged nodes

3. Edit all the interface configuration files (ifcfg-eth0, ifcfg-eth1, ifcfg-eth2, ifcfg-eth3, ifcfg-eth4) and update the NAME, DEVICE, and HWADDR entries to ensure the correct MAC address and name are assigned.

NOTE: Ignore the entries with correct values.

Use the vi editor to update the file:

# vi /etc/sysconfig/network-scripts/ifcfg-ethX

or append the lines using the following commands:

# echo NAME=ethX >> /etc/sysconfig/network-scripts/ifcfg-ethX
# echo HWADDR=xx:xx:xx:xx:xx:xx >> /etc/sysconfig/network-scripts/ifcfg-ethX

Example file:

BOOTPROTO=none
ONBOOT=yes
HOTPLUG=yes
TYPE=Ethernet
DEVICE=eth2
IPADDR=192.168.155.46
NETMASK=255.255.254.0
DEFROUTE=no
MTU=9000
PEERDNS=no
NM_CONTROLLED=no
NAME=eth2
HWADDR=00:50:56:80:fd:82
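To cross-check the HWADDR entries against the interfaces the SVM actually sees, a minimal sketch run on the SVM (reads the standard sysfs network entries):

# Print each interface name with its MAC address.
for nic in /sys/class/net/eth*; do
    echo "$(basename "$nic"): $(cat "$nic"/address)"
done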

Update the grub configuration file

Use this task to update the grub configuration file.

About this task

Remove net.ifnames=0 and biosdevname=0 from the /etc/default/grub file to avoid the interface name issue when you add virtual NICs to SVM for SDR communication.

Steps

1. Log in to the SVM using PuTTY.

2. Edit the grub configuration file located in /etc/default/grub, type: # vi /etc/default/grub.

3. From the last line, remove net.ifnames=0 and biosdevname=0, and save the file.

4. Rebuild the grub configuration file, using: # grub2-mkconfig -o /boot/grub2/grub.cfg
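To optionally confirm the parameters were removed from the rebuilt configuration, a minimal check (no output means the parameters are gone):

grep -E 'net.ifnames|biosdevname' /boot/grub2/grub.cfg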


Enter the SDS nodes into maintenance mode and power off

Use this task to put the SDS into maintenance mode.

About this task

If the SDS node is also the primary MDM, switch the MDM role to another node before placing the SDS node into maintenance mode.

NOTE: Place one SDS node into maintenance mode at a time.

Steps

1. Log in to the PowerFlex GUI presentation server: https://<presentation server IP>:8443.

2. In the left pane, click Configuration > SDSs.

3. In the right pane, select the relevant SDS and click More > Enter Maintenance Mode.

4. In the Enter SDS into Maintenance Mode dialog box, select Instant (if maintenance mode takes more than 30 minutes, then select Protected).

5. Click Enter Maintenance Mode.

6. Verify that the operation is completed successfully and click Dismiss.

7. Shut down the appropriate SVM:

a. Log in to VMware vCenter using VMware vSphere Client.
b. Select the SVM, right-click Power > Shut Down Guest OS.

Add virtual NICs to SVMs

Use this task to add two more NICs to each SVM for SDR external communication.

Steps

1. Log in to the VMware vCenter vSphere client and go to Host and Clusters.

2. Right-click the SVM and click Edit Settings.

3. Click Add new device, select Network Adapter from the list.

4. Select the appropriate port group created for SDR external communication, click OK.

5. Repeat steps 2 to 4 to create additional NICs.

Record the MAC address of the newly added network interface controllers

Use this task to record the MAC addresses of the newly added adapters from the VMware vCenter.

Steps

1. Right-click the SVM and click Edit Settings.

2. Click the newly added network interface controllers from Virtual Hardware list and make note of the MAC address.


Modifying the vCPU, memory, vNUMA and CPU reservation settings on SVMs

There are specific memory and CPU settings that must be updated when you enable replication on your PowerFlex appliance with PowerFlex hyperconverged nodes.

Modify the memory size

Use this task to modify the memory size according to the SDR requirements on a replication-enabled PowerFlex node.

About this task

NOTE: 12 GB of additional memory is required for SDR. For example, if you have 24 GB memory existing in the SVM for an MG pool, add 12 GB for enabling replication, 24 + 12 = 36 GB. If you have 32 GB memory existing in the SVM for an FG pool, add 12 GB for enabling replication, so it would be 32 + 12 = 44 GB.

Steps

1. Log in to the VMware vCenter vSphere client.

2. Right-click the VM you want to change and select Edit Settings.

3. Under the Virtual Hardware tab, expand Memory and modify the memory size according to the SDR requirement.

4. Click OK.

Increase the vCPU count

Use this task to increase the vCPU count according to the SDR requirement.

About this task

The physical core requirement is two sockets with ten cores each (the vCPU count per NUMA domain cannot exceed the physical core count).

Consider the following examples for the vCPU count:

Total number of vCPUs for an MG pool: 8 (SDS) + 8 (SDR) + 2 (MDM/TB) + 2 (CloudLink) = 20 vCPUs.
Total number of vCPUs for an FG pool: 10 (SDS) + 10 (SDR) + 2 (MDM/TB) + 2 (CloudLink) = 24 vCPUs.

Steps

1. Log in to VMware vCenter vSphere client.

2. Right-click the virtual machine that you want to change, then select Edit Settings.

3. Under the Virtual Hardware tab, expand CPU and increase the vCPU count according to the SDR requirement.

4. Click OK.

Setting the vNUMA advanced option

Use this task to set numa.vcpu.maxPerVirtualNode.

About this task

Ensure that the CPU hot plug feature is disabled. If it is enabled, disable it before configuring the vNUMA parameter.

Steps

1. Log in to the production VMware vCenter using vSphere client.

2. Right-click the VM that you want to change and select Edit Settings.

3. Under the Virtual Hardware tab, expand CPU, ensure CPU Hot Plug option is unchecked.


Set the vNUMA advanced option

Use this task to set the SVM numa.vcpu.maxPerVirtualNode value to half the vCPUs assigned to the SVM.

About this task

For example, if the SVM for an MG pool has 20 vCPUs, set numa.vcpu.maxPerVirtualNode=10. If the SVM for an FG pool has 24 vCPUs, set numa.vcpu.maxPerVirtualNode = 12.

Prerequisites

Ensure that the CPU hot plug is disabled. Do the following to disable the CPU hot plug feature before configuring the vNUMA parameter:

1. Log in to the VMware vCenter vSphere client.
2. Right-click the VM that you want to change and select Edit Settings.
3. Under the Virtual Hardware tab, expand CPU and verify that the CPU Hot Plug option is cleared.

Steps

1. Go to the SVM in the VMware vSphere client.

2. Select a data center, folder, cluster, resource pool, or host to find a VM.

3. Click the VMs tab.

4. Right-click the VM and select Edit Settings.

5. Click VM Options and expand Advanced.

6. Under Configuration Parameters, click Edit Configuration.

7. In the dialog box that appears, click Add Configuration Params to enter a new parameter name and its value.

For example, if the SVM for an MG pool has 20 vCPUs, set numa.vcpu.maxPerVirtualNode = 10. If the SVM for an FG pool has 24 vCPUs, set numa.vcpu.maxPerVirtualNode = 12.

8. Click OK twice.

Ensure the following:

Under CPU, Shares are set to High.
50% of the vCPUs are reserved on the SVM. For example, if the SVM for an MG pool is configured with 20 vCPUs and the CPU speed is 2.8 GHz, set a reservation of 28 GHz (20 x 2.8 / 2). If the SVM for an FG pool is configured with 24 vCPUs and the CPU speed is 3 GHz, set a reservation of 36 GHz (24 x 3 / 2). (A quick scripted check of this arithmetic follows this procedure.)

9. Right-click the VM you want to change and select Edit Settings.

10. Under the Virtual Hardware tab, expand CPU, verify Reservation and Shares as mentioned.
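The reservation arithmetic can be scripted as a quick check. A minimal sketch using the MG pool example values (20 vCPUs at 2.8 GHz; substitute your own figures):

awk -v vcpus=20 -v ghz=2.8 'BEGIN { printf "CPU reservation: %.1f GHz\n", vcpus * ghz / 2 }'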

Modifying the memory size according to the SDR requirements for FG pool-based PowerFlex systems with replication

Use this task to add additional memory required for SDR.

About this task

NOTE: 12 GB of additional memory is required for SDR. For example, if you have 32 GB memory existing in the SVM, add 12 GB for enabling replication, so it would be 32 + 12 = 44 GB.


Steps

1. Log in to the production VMware vCenter using vSphere client.

2. Right-click the VM you want to change and select Edit Settings.

3. Under the Virtual Hardware tab, expand Memory, modify the memory size according to SDR requirement.

4. Click OK.

202 Enable replication on existing PowerFlex hyperconverged nodes

Increasing the vCPU count according to the SDR requirement

Use this task to increase the vCPU count according to the SDR requirement.

About this task

The physical core requirement is two sockets with ten cores each (the vCPU count per NUMA domain cannot exceed the physical core count).

vCPU total: 10 (SDS) + 10 (SDR) + 2 (MDM/TB) + 2 (CloudLink) = 24 vCPUs.

Steps

1. Log in to the production VMware vCenter using VMware vSphere client.

2. Right-click the virtual machine that you want to change, then select Edit Settings.

3. Under the Virtual Hardware tab, expand CPU, increase the vCPU count according to SDR requirement.

4. Click OK.

Setting the vNUMA advanced option

Use this task to set numa.vcpu.maxPerVirtualNode.

About this task

Ensure that the CPU hot plug feature is disabled. If it is enabled, disable it before configuring the vNUMA parameter.

Steps

1. Log in to the production VMware vCenter using vSphere client.

2. Right-click the VM that you want to change and select Edit Settings.

3. Under the Virtual Hardware tab, expand CPU, ensure CPU Hot Plug option is unchecked.

Editing the SVM configuration

Use this task to set the SVM numa.vcpu.maxPerVirtualNode to half the vCPUs assigned to the SVM.

About this task

For example, if the SVM has 24 vCPUs, set numa.vcpu.maxPerVirtualNode = 12.

Steps

1. Browse to the SVM in the VMware vSphere client.

2. To find a VM select a data center, folder, cluster, resource pool, or host.

3. Click the VMs tab.

4. Right-click the VM and select Edit Settings.

5. Click VM Options and expand Advanced.

6. Under Configuration Parameters, click Edit Configuration.

7. In the dialog box that appears, click Add Configuration Params to enter a new parameter name and its value.

Example: numa.vcpu.maxPerVirtualNode = 12

8. Click OK > OK.

Ensure the following:

CPU shares are set to High.
50% of the vCPUs are reserved on the SVM. For example, if the SVM is configured with 24 vCPUs and the CPU speed is 3 GHz, set a reservation of 36 GHz (24 x 3 / 2).

9. Right-click the VM you want to change and select Edit Settings.

10. Under the Virtual Hardware tab, expand CPU, verify Reservation and Shares as mentioned.


Powering on the SVM and configuring network interfaces

Use the following procedures to power on the SVMs and create interface configuration files for the newly added network adapters:

Configure the newly added network interface controllers for the SVMs
Add a permanent static route for replication external networks

Configure the newly added network interface controllers for SVMs

Use this procedure to configure the newly added network interface controllers for the SVMs.

Steps

1. Log in to VMware vCenter using VMware vSphere client.

2. Select the SVM, right-click Power > Power on.

3. Log in to SVM using PuTTY.

4. Create the rep1 network interface, type: cp /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-eth5.

5. Create the rep2 network interface, type: cp /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-eth6.

6. Edit the newly created configuration files (ifcfg-eth5, ifcfg-eth6) using the vi editor and modify the entries for IPADDR, NETMASK, GATEWAY, DEFROUTE, DEVICE, NAME, and HWADDR, where:

DEVICE is the newly created device of eth5 and eth6

IPADDR is the IP address of the rep1 and rep2 networks

NETMASK is the subnet mask

GATEWAY is the gateway for the SDR external communication

DEFROUTE is set to no

NAME is the newly created device name for eth5 and eth6

HWADDR is the MAC address collected from the topic Add virtual NICs to SVMs

NOTE: Ensure that the MTU value is set to 9000 for the SDR interfaces on both the primary and secondary site, and also on the end-to-end devices. Confirm the existing MTU values with the customer or see the Logical Configuration Survey (LCS), and configure accordingly.
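Once the interfaces are up, you can optionally verify that a 9000-byte MTU path works end to end with a don't-fragment ping. A minimal sketch, assuming a hypothetical remote SDR address of 10.0.10.21 (8972 bytes of payload plus 28 bytes of headers equals 9000):

ping -M do -s 8972 -c 3 10.0.10.21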

Add a permanent static route for replication external networks

Use this task to create a permanent route.

Steps

1. Go to /etc/sysconfig/network-scripts and create a route-<interface> file for each replication interface, type:

#touch /etc/sysconfig/network-scripts/route-eth5
#touch /etc/sysconfig/network-scripts/route-eth6

2. Edit each file and add the appropriate network information.

For example, 10.0.10.0/23 via 10.0.30.1, where 10.0.10.0/23 is the network address and prefix length of the remote or destination network. The IP address 10.0.30.1 is the gateway address leading to the remote network.

Sample file

/etc/sysconfig/network-scripts/route-eth5
10.0.10.0/23 via 10.0.30.1


/etc/sysconfig/network-scripts/route-eth6
10.0.20.0/23 via 10.0.40.1

3. Reboot the SVM, type: #reboot.

4. Ensure all the changes are persistent after reboot.

5. Once the SVM has come up, ensure all the interfaces are configured properly, type: #ifconfig or #ip a.

6. Verify the new route added to the system, type: #netstat -rn.

Install SDR RPMs on the SDS nodes (SVMs)

About this task

The SDR RPM must be installed on all SVMs, both at the source and destination sites, only if both sites have PowerFlex hyperconverged nodes. Storage Data Replicators (SDRs) are responsible for processing all I/Os of replication volumes. All application I/Os of replicated volumes are processed by the source SDRs. At the source, application I/Os are sent by the SDC to the SDR. The I/Os are sent to the target SDRs and stored in their journals. The target SDRs' journals apply the I/Os to the target volumes. A minimum of two SDRs are deployed at both the source and target systems to maintain high availability. If one SDR fails, the MDM directs the SDC to send the I/Os to an available SDR.

Steps

1. Use WinSCP or SCP to copy the SDR package to the tmp folder.

2. SSH to the SVM and run the following to install the SDR package: # rpm -ivh /tmp/EMC-ScaleIO-sdr-3.6-x.xxx.el7.x86_64.rpm.
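To optionally confirm the package installed, you can query the RPM database; a minimal check:

rpm -qa | grep -i sdr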

Exit SDS maintenance mode

Steps

1. Log in to the source site presentation server: https://presentation_server_IP:8443.

2. In the left pane, click Configuration > SDSs.

3. In the right pane, select the relevant SDS and click More > Exit maintenance mode.

4. Select Exit Maintenance Mode.

5. Verify that the operation completed successfully and click Dismiss.

6. Wait for the rebuild and rebalance operation to finish before starting activity on the next SVM.

7. Repeat the following tasks on all the SVMs in source and destination sites:

Prepare SVMs for replication
Enter the SDS node (SVMs) into maintenance mode and power off the SVM
Add virtual NICs to SVMs
Modify memory and CPU settings on SVMs
Power on the SVM and configure network interfaces
Exit SDS maintenance mode

Verify communication between the source and destination

Steps

1. Log in to all the SVMs and PowerFlex nodes in source and destination sites.

2. Ping the following IP addresses from each of the SVMs and PowerFlex nodes in the source site:

Management IP addresses of the primary and secondary MDMs
External IP addresses configured for SDR-SDR communication

3. Ping the following IP addresses from each of the SVMs and PowerFlex nodes in the destination site:


Management IP addresses of the primary and secondary MDMs
External IP addresses configured for SDR-SDR communication
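These reachability checks can be scripted. A minimal sketch, with hypothetical addresses to be replaced by the MDM management and SDR external IPs from your environment:

# Ping each address three times and report reachability.
for ip in 10.0.30.11 10.0.30.12 10.0.10.21 10.0.20.21; do
    if ping -c 3 -W 2 "$ip" > /dev/null 2>&1; then
        echo "$ip reachable"
    else
        echo "$ip UNREACHABLE"
    fi
done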

Add journal capacity percentage

The journal is a component of the SDR. It stores the data at the source before it is sent to the destination. At the destination, the journal stores the data before it is applied to the destination volumes. At the source, application I/Os are sent by the SDC to the SDR. The SDR packages I/Os in bundles and sends them to the target journal. Once the I/Os are sent to the destination journal, they are cleared from the source journal. Once the I/Os are applied to the target volumes, they are cleared from the destination journal.

Journal capacity is defined as a percentage of the total storage capacity (usable capacity) in the storage pool and must equal at least 28 GB per SDR. The journal capacity is allocated from every pool where there are replicated volumes. The capacity allocated from each pool is at least 5% of the usable capacity of the replicated volumes. The total allocated journal capacity from all the pools in the PD must be at least equal to number of SDRs x 28 GB.

Example of a PowerFlex system with 1 PD and 1 SP: If you have four SDRs in your PD and an SP with 36 TB usable capacity of replicated volumes, then the minimum journal capacity is max(5% of 36 TB, 4 x 28 GB) = max(1.8 TB, 112 GB) = 1.8 TB.

NOTE: The journal capacity is defined as a percentage of the total storage capacity in the storage pool, so increasing the total storage capacity by adding devices will increase the journal capacity. Similarly, if you decrease the total storage capacity by removing devices from the storage pool, the journal capacity will automatically decrease.

Calculate journal capacity to allocate

The journal is shared between all of the replicated RCGs in the protection domain.

About this task

Journal capacity should be allocated from storage pools as fast as (or faster than) the storage pool of the fastest replicated application in the protection domain. It should use the same drive technology and about the same drive count and distribution in nodes.

Steps

1. Select the storage pool from which to allocate the journal capacity.

2. Consider the minimal requirement (28 GB multiplied by the number of SDR sessions). The journal capacity will be the maximum of this floor and the capacity calculated in the following steps.

Consider the expected outage time. The minimal outage allowance is one hour, but at least three hours are recommended.

3. Calculate the journal capacity needed per application: maximal application throughput x maximum outage interval.

4. Calculate the percentage of capacity based on the previously calculated needs as journal capacity is defined as a percentage of storage pool capacity.

For example, an application generates 1 GB/s of writes. The maximal supported outage is three hours (3 hours x 3600 seconds = 10800 seconds). The journal capacity needed for this application is 1 GB/s x 10800 s = ~10.547 TB. Since the journal capacity is expressed as a percentage of the storage pool capacity, divide the 10.547 TB by the storage pool usable capacity, which is 200 TB: 100 x 10.547 TB / 200 TB = 5.27%; round this up to 6%. (A scripted version of this calculation appears after this procedure.)

5. Repeat this for each application being replicated.

NOTE: When the storage pool capacity is critical, capacity cannot be allocated for new volumes or for expanding existing volumes. This behavior must be considered when planning the capacity available for journal usage. The volume usage must leave enough capacity available in the storage pool to allow provisioning of journal volumes. The plan should account for the storage pool staying below critical capacity even when the journal capacity is almost fully utilized.
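The journal sizing arithmetic from the preceding steps can be scripted. A minimal sketch using awk, with the same example values (1 GB/s write rate, 3-hour outage, 200 TB pool, four SDRs; substitute your own figures):

# Journal need = max(app throughput x outage window, 28 GB x SDR count),
# expressed as a percentage of the storage pool usable capacity.
awk -v rate_gbps=1 -v outage_s=10800 -v pool_tb=200 -v sdrs=4 'BEGIN {
    app_tb   = rate_gbps * outage_s / 1024        # application need, TB
    floor_tb = sdrs * 28 / 1024                   # 28 GB per SDR minimum, TB
    need_tb  = (app_tb > floor_tb) ? app_tb : floor_tb
    printf "Journal: %.3f TB = %.2f%% of pool (round up)\n", need_tb, 100 * need_tb / pool_tb
}'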


Add allocated journal capacity

Add allocated journal capacity from the storage pool.

Steps

1. In the left pane, click Protection > Journal Capacity.

2. In the right pane, click Add.

3. In the Add Journal Capacity dialog box, select the relevant storage pool and add the percentage for journal capacity.

4. Click Add to allocate journal capacity from the storage pool.

5. Verify the operation completed successfully and click Dismiss.

Adding the Storage Data Replicator to a PowerFlex appliance

Use this task to add the SDR to the PowerFlex appliance.

Prerequisites

The IP address of the node must be configured for SDR. The SDR communicates with several components:

SDC (application)
SDS (storage)
Remote SDR (external)

Steps

1. In the left pane, click Protection > SDRs.

2. In the right pane, click Add.

3. In the Add SDR dialog box, enter the connection information of the SDR:

a. Enter the SDR name.
b. Update the SDR Port, if required (default is 11088).
c. Select the relevant Protection Domain.
d. Enter the IP Address of the node that is configured for SDR.
e. Select Role External for the SDR to SDR external communication.
f. Select Role Application and Storage for the SDR to SDC and SDR to SDS communication.
g. Click ADD SDR to initiate a connection with the peer system.

4. Verify that the operation completed successfully and click Dismiss.

5. Modify the IP address role if required:

a. From the PowerFlex GUI, in the left pane, click Protection > SDRs.
b. In the right pane, select the relevant SDR check box, and click Modify > Modify IP Role.
c. In the Modify IPs Role dialog box, select the relevant role for the IP address.
d. Click Apply.
e. Verify that the operation completed successfully and click Dismiss.

6. Repeat both tasks, Add allocated journal capacity and Adding the Storage Data Replicator to a PowerFlex appliance, for the source and destination PowerFlex appliances.


Create the peer system between the source and destination site

Use this task to create the peer system between the source and destination site.

Steps

1. Log in to the primary MDM using SSH on the source and destination to extract and add the MDM certificate.

2. Type # scli --login --username admin and enter the MDM cluster password at the prompt.

3. Extract the certificate on the source and destination primary MDM, type:

For the source: # scli --extract_root_ca --certificate_file /tmp/source.crt
For the destination: # scli --extract_root_ca --certificate_file /tmp/destination.crt

4. Copy the extracted certificate of the source (primary MDM) to the destination (primary MDM) using SCP and conversely.

From the source MDM: # scp /tmp/source.crt <destination MDM>:/tmp/
From the destination MDM: # scp /tmp/destination.crt <source MDM>:/tmp/

5. Add the copied certificate, type:

For source: # scli --add_trusted_ca --certificate_file /tmp/destination.crt --comment destination_crt

For destination: # scli --add_trusted_ca --certificate_file /tmp/source.crt --comment source_crt

6. Verify the new certificate by typing: # scli --list_trusted_ca.

7. Click Journal Capacity in the left pane from the Replication tab to verify that the journal capacity is set according to the requirement.

Add the peer system

Use this procedure to add the peer system.

Steps

1. Type # scli --login --username admin and enter the MDM cluster password at the prompt.

NOTE: From the output, obtain the system ID. It is used in the following step to add a peer system on the primary site.

Example output: Logged in. User role is SuperUser. System ID is 2e6ccfd208ef120f

2. Add the peer system to the primary site, type: # scli --add_replication_peer_system --peer_system_ip (remote system MDM management IPs) --peer_system_id (system ID of remote site) --peer_system_name (remote site name)

3. Add the peer system to the remote site, type: # scli --add_replication_peer_system --peer_system_ip (primary system MDM management IPs) --peer_system_id (system ID of primary site) --peer_system_name (primary site name)

NOTE: For a three-node cluster, you need two IP addresses, comma separated (primary, secondary). For a five-node cluster, you need three IP addresses, comma separated (primary, secondary1, secondary2).


Create the replication consistency group

Use this task to create the RCG, only when the remote site is up and running.

About this task

The RCG is a logical container for volumes whose application data must be replicated consistently with each other. It includes a set of consistent volume pairs. The volume on the source from a single protection domain is replicated to a remote volume from a single protection domain on the target. This creates a consistent pair of volumes. You can add and manage RCGs on both the source and target systems.

Before proceeding, create source and destination volumes of the same size. It is recommended, but not mandatory, that the volumes in the volume pair have the same attributes (including zero padding and granularity); not doing so can impact performance and capacity.

If you already have a volume at the source site, create a volume of the same size at the destination site.

NOTE: Do not map the volume that is created on target system to SDC.

Steps

1. Log in to the source site presentation server: https://<presentation server IP>:8443.

NOTE: Use the primary MDM IP address and credentials to log in to the PowerFlex cluster.

2. In the left pane, click Protection > RCGs.

3. In the right pane, click Add.

4. In the Add RCG wizard, enter the following on the General page:

a. Enter the RCG Name.
b. Enter the number of RPO (recovery point objective) minutes. This is the amount of time of data loss that is tolerated if replication between the systems is compromised.
c. Select Source Protection Domain.
d. Select Target System.
e. Select Target Protection Domain.

5. Click Next.

6. On the Add Replication Pairs page:

a. Click the volume from the Source column and then click the same size volume from the Target column.
b. Click Add Pair. The volume pair is added.
c. Click Next.

7. On the Review Pairs page:

a. Ensure that the correct source and volume pair are selected and click ADD RCG & START REPLICATION.
b. Verify that the operation completed successfully and click Dismiss.

The RCG is added to both the source and target systems.

Wait for the initial copy to complete before using the replicated volumes.

Find the current copy status

Use this task to find the current copy status.

Steps

1. Log in to the primary MDM using SSH and log in to scli, type # scli --login --username admin and enter the MDM cluster password at the prompt.

2. Verify the replication status, type: # scli --query_all_replication_pairs.

Once the initial copy is complete, PowerFlex replication is ready for use.


Modify the recovery point objective

Use this task to update the recovery point objective (RPO) time as required.

Steps

1. From https://Presentation_Server_IP:8443 (PowerFlex GUI), in the left pane, click Protection > RCGs.

2. In the right pane, select the relevant RCG check box, and click Modify > Modify RPO.

3. In the Modify RPO for RCG dialog box, enter the updated RPO time and click Apply.

4. Verify that the operation completed successfully and click Dismiss.

Define the network for replication in PowerFlex Manager

Use this procedure to define the network for SDR external communication.

Steps

1. On the menu bar, click Settings > Networks. The Networks page opens.

2. Click Define. The Define Network page opens.

3. In the Name field, enter the name of the network. Optionally, in the Description field, enter a description for the network.

4. From the Network Type drop-down menu, select PowerFlex Replication.

5. In the VLAN ID field, enter a VLAN ID between 1 and 4094.

6. Select the Configure Static IP Address Ranges check box, and then do the following:

a. In the Subnet box, enter the IP address for the subnet. The subnet is used to support static routes for data and replication networks.

b. In the Subnet Mask box, enter the subnet mask.

NOTE: Do not define the gateway when you define the network for PowerFlex replication and PowerFlex data.

c. Optionally, in the Primary DNS and Secondary DNS fields, enter the IP addresses of the primary DNS and secondary DNS.
d. Optionally, in the DNS Suffix field, enter the DNS suffix to append to the hostname resolution.
e. To add an IP address range, click Add IP Address Range. In the row, specify a starting and ending IP address for the range.

Repeat this step to add IP address ranges based on the requirement. For example, you can use one range for the flex-rep1- network and the second range for the flex-rep2- network.

7. Click Save.

Add an existing service to PowerFlex Manager

Use this procedure to add an existing service to discover and import hardware resources that were not originally deployed with PowerFlex Manager.

Prerequisites

Ensure the following conditions are met before you add an existing service:

The vCenter, PowerFlex Gateway, CloudLink Center, and hosts must be discovered in the resource list.
The PowerFlex Gateway must be in the service.

Steps

1. On the menu bar, click Services and then click + Add Existing Service.

2. On the Add Existing Service page, enter a service name in the Name field.


3. Enter a description in the Description field.

4. Select the Type for the service.

The choices are Hyperconverged, Compute Only, and Storage Only.

PowerFlex Manager checks to see whether there are any vCLS VMs on local storage. If it finds any, it puts the service in lifecycle mode and gives you the opportunity to migrate these to shared storage.

5. To specify the compliance version to use for compliance, select the version from the Firmware and Software Compliance list or choose Use PowerFlex Manager appliance default catalog.

You cannot specify a minimal compliance version when you add an existing service, since it only includes server firmware updates. The compliance version for an existing service must include the full set of compliance update capabilities. PowerFlex Manager does not show any minimal compliance versions in the Firmware and Software Compliance list.

NOTE: Changing the compliance version might update the firmware level on nodes for this service. Firmware on shared devices is maintained by the global default firmware repository.

6. Specify the service permissions under Who should have access to the service deployed from this template? by performing one of the following actions:

To restrict access to administrators, select the Only PowerFlex Manager Administrators option.
To grant access to administrators and specific standard users, select the PowerFlex Manager Administrators and Specific Standard and Operator Users option, and perform the following tasks:
  a. Click Add User(s) to add one or more standard or operator users to the list.
  b. To delete a standard or operator user from the list, select the user and click Remove User(s).
  c. After adding the standard or operator users, select or clear the check box next to each user to grant or block access to use this template.
To grant access to administrators and all standard users, select the PowerFlex Manager Administrators and All Standard and Operator Users option.

7. Click Next.

8. Choose one of the following network automation types:

Full Network Automation
Partial Network Automation

When you choose Partial Network Automation, PowerFlex Manager skips the switch configuration step, which is normally performed for a service with Full Network Automation. Partial network automation allows you to work with unsupported switches. However, it also requires more manual configuration before a deployment can proceed successfully. If you choose to use partial network automation, you give up the error handling and network automation features that are available with a full network configuration that includes supported switches.

In the Number of Instances box, provide the number of component instances that you want to include in the template.

9. On the Cluster Information page, enter a name for the cluster component in the Component Name field.

10. Select values for the cluster settings:

For a hyperconverged or compute-only service, select values for these cluster settings:

a. Target Virtual Machine Manager - Select the vCenter name where the cluster is available.
b. Data Center Name - Select the data center name where the cluster is available.

NOTE: Ensure that the selected vCenter has unique names for clusters in case there are multiple clusters in the vCenter.

c. Cluster Name - Select the name of the cluster you want to discover.
d. OS Image - Select the image or choose Use Compliance File ESXi image if you want to use the image provided with the target compliance version. PowerFlex Manager filters the operating system image choices to show only ESXi images for a hyperconverged or compute-only service.

For a storage-only service, select values for these cluster settings:

a. Target PowerFlex Gateway - Select the gateway where the cluster is available.
b. Protection Domain - Select the name of the protection domain in PowerFlex.
c. OS Image - Select the image or choose Use Compliance File Linux image if you want to use the image provided with the target compliance version. PowerFlex Manager filters the operating system image choices to show only Linux images for a storage-only service.

11. Click Next.


12. On OS Credentials page, select the OS credential that you want to use for each node and SVM.

You can select one credential for all nodes (or SVMs), or choose credentials for each item separately. You can create the operating system credentials on the Credentials Management page under Settings.

PowerFlex Manager validates the credentials for the nodes and SVMs before it creates the service. This validation makes it possible for PowerFlex Manager to run a full inventory on all nodes and SVMs before creating the service. The process of running the inventory can take five to ten seconds to complete.

To import a VMware NSX-T or NSX-V configuration, PowerFlex Manager must have the operating system inventory to recognize that NSX VIBs are on the node. Without the inventory, it is unable to tell if a node has NSX-T or NSX-V.

PowerFlex Manager runs the inventory on all nodes and SVMs for which the credentials are valid. The service uses any nodes and SVMs for which it has a successful inventory. For example, if you have four nodes, and one node has an invalid operating system password, PowerFlex Manager adds the three nodes for which the credentials are valid and ignores the one with the invalid password.

13. Click Next.

The list of resources available in the cluster is displayed on the Inventory Summary page.

14. Review the inventory on the Inventory Summary screen.

The summary shows all nodes that are available. If a node is not available, it might be because this node does not match the Type you selected for the service (Hyper-converged, Compute only, or Storage only).

Depending on how the node is configured, the summary might show additional inventory information. For example, for a node that has NVDIMM compression, the summary shows additional information about the acceleration pool and compression settings.

If the resources are discovered and in an available state, the Available Inventory displays the components as Yes. An unavailable PowerFlex Gateway is shown as No.

If the credentials are invalid for a node or SVM, or if you have a network connectivity problem, PowerFlex Manager displays No in the Available Inventory column for the node, and displays an error message to notify you about the problem.

PowerFlex Manager cannot update firmware and software versions for PowerFlex clusters that do not have available PowerFlex Gateways. If expected PowerFlex Gateways are not shown as available, you can discover the gateways and run the wizard again.

NOTE: PowerFlex Manager retrieves the hostname value from iDRAC and not the operating system. If the hostname field is not updated in iDRAC, an incorrect value can be displayed in PowerFlex Manager. Certain operating systems require extra packages to be installed for iDRAC to update the correct hostname.

15. Click Next.

16. On the Network Mapping page, review the networks that are mapped to port groups and make any required edits.

PowerFlex Manager attempts to select the correct network based on the VLAN ID, subnet, or IP ranges entered in PowerFlex Manager. If PowerFlex Manager finds only one network for a given network type, it selects the network automatically. If it finds more than one, you must select the network from the Network drop-down list. The OS Installation network does not get a VLAN ID.

NOTE: If the OS Install VLAN is not already configured in your environment, add it. This network is required to perform node expansions. This network is typically added during PowerFlex Manager configuration.

If there are any port groups for which you do not want PowerFlex Manager to manage access, leave those port groups cleared. If no network is selected for a particular port group, PowerFlex Manager leaves it out of the deployment data and does not add it to the nodes.

For an existing service that supports NSX-T, PowerFlex Manager shows VDS switches that are sharing uplinks.

17. To import a large number of general-purpose VLANs from vCenter, perform these steps:

a. Click Import Networks on the Network Mapping page. PowerFlex Manager displays the Import Networks wizard. In the Import Networks wizard, PowerFlex Manager lists the port groups that are defined on the vCenter as Available Networks. You can see the port groups and the VLAN IDs.

b. Optionally, search for a VLAN name or VLAN ID. PowerFlex Manager filters the list of available networks to include only those networks that match your search.

c. Click each network that you want to add under Available Networks. If you want to add all the available networks, click the check box to the left of the Name column.

d. Click the double arrow (>>) to move the networks you chose to Selected Networks. PowerFlex Manager updates the Selected Networks to show the ones you have chosen.

e. Click Save.


18. Click Next.

19. Review the Summary page and click Finish when you are ready to add the service.

The process of adding an existing service causes no disruption to the underlying hardware resources. It does not shut down any of the nodes or the vCenter.

For an existing service, the Reference Template field shows Generated Existing Service Template on the Service Details page. You can distinguish existing services from new services that were deployed with PowerFlex Manager.

When PowerFlex Manager must put a service in lifecycle mode, the Summary page for the Add Existing Service wizard displays a warning message indicating the reason.

In some situations, an imported configuration might not meet the minimal requirements for lifecycle mode. In this case, PowerFlex Manager does not allow you to add the service.

Next steps

When you add an existing service, PowerFlex Manager matches the hosts, vCenter, and other items it finds with discovered resources in the resource list. If you missed a component initially, you can change your resource inventory, and update the service to reflect these changes. Go back to the resources list, select the component, and mark it as Managed by selecting Change resource state to Managed. Then, perform an Update Service Details operation on the service to pull in the missing component.

When you deploy an existing service, PowerFlex Manager reserves any IP addresses from vCenter or the PowerFlex Gateway that it needs. If you later tear down the service, it releases those IP addresses so that they can be reused.

If you add an existing service that supports NSX-T or NSX-V, PowerFlex Manager displays a banner indicating that the service supports a limited set of actions. Most service actions are disabled for an NSX-T or NSX-V configuration, except the ability to update the firmware and software components, remove resources (or the service as a whole), and update service details.

When you add an existing service, PowerFlex Manager checks to see whether there are any vCLS VMs on local storage. If it finds any, it displays a banner on the Service Details page indicating that it has put the service in lifecycle mode and gives you the opportunity to migrate the VMs to shared storage.


Retrieving PowerFlex performance metrics

Retrieving PowerFlex performance metrics using the PowerFlex GUI

Use this procedure to retrieve PowerFlex performance metrics using the PowerFlex GUI.

Prerequisites

Use a standard tool to generate simulated IOPS. A simple way to do this is to load a Linux VM and use the Flexible I/O Tester (fio) to generate IOPS. The following fio command line generates random reads and writes:

fio --name=randrw --rw=randrw --direct=1 --ioengine=libaio --bs=16k --numjobs=8 --rwmixread=90 --size=1G --runtime=600 --group_reporting

Steps

1. To retrieve overall performance metrics:

a. Launch the PowerFlex GUI.
b. In the Dashboard, look at the PERFORMANCE data.
c. The Dashboard displays the following:

   Overall system IOPS
   Overall system bandwidth
   Overall system latency

2. To retrieve volume-specific metrics:

a. Launch the PowerFlex GUI.
b. In the Dashboard, select CONFIGURATION > Volumes.

3. To retrieve SDS-specific metrics:

a. Launch the PowerFlex GUI.
b. In the Dashboard, select CONFIGURATION > SDSs.

Retrieving PowerFlex performance metrics using a PowerFlex version prior to 3.5

Use this procedure to retrieve PowerFlex performance metrics for a PowerFlex version prior to 3.5.

Prerequisites

Use a standard tool to generate simulated IOPS. A simple way to do this is to load a Linux VM and use the Flexible I/O Tester (fio) to generate IOPS. The following fio command line generates random reads and writes:

fio --name=randrw --rw=randrw --direct=1 --ioengine=libaio --bs=16k --numjobs=8 --rwmixread=90 --size=1G --runtime=600 --group_reporting

Steps

1. To retrieve overall performance metrics:

a. Launch the PowerFlex GUI.
b. In the Dashboard, look at the IO Workload page.



c. The Dashboard displays the following:

   Overall system IOPS
   Overall system bandwidth
   Read/write statistics
   Average I/O size

2. To retrieve volume-specific metrics:

a. Select Frontend > Volumes.
b. Select a volume and click the Property Sheet icon.
c. The volume performance metrics are displayed in the General section of the Volume Properties pane.

3. To retrieve host-specific metrics:

a. Select Frontend > SDCs.
b. Select a host, and click the Property Sheet icon.
c. The host performance metrics are displayed in the General section of the Host SDC Properties pane.


Performing maintenance activities in a PowerFlex cluster

You place a node in maintenance mode to repair, replace, or upgrade hardware components for the customer and management clusters.

For more information, see Data assurance during maintenance.

When performing maintenance on PowerFlex nodes, there are three maintenance options:

Instant maintenance mode: Perform short-term maintenance that lasts less than 30 minutes. It is designed for quick entry to and exit from a maintenance state. The node is immediately and temporarily removed from active participation. Use it for scenarios such as non-disruptive, rolling upgrades, where the maintenance window is only a few minutes (for example, a reboot) and there are no known hardware issues.

Protected maintenance mode: Perform maintenance or updates that require longer than 30 minutes in a safe and protected manner. PowerFlex makes a temporary copy of the data, providing data availability without the risk of exposing a single accessible copy.

Evacuate the node from the cluster: The default method prior to PowerFlex version 3.5. Data is migrated to other nodes in the cluster.

Instant maintenance mode (IMM)

In instant maintenance mode, the data on the node undergoing maintenance is not removed from the cluster. However, this data is not available for use for the duration of the maintenance activity. Instead, extra copies of data residing on the other nodes are used for application reads.

The existing data on the node being maintained is, in effect, frozen on the node. This is a planned operation that does not trigger a rebuild. Instead, the MDM instructs the SDCs where to read and write IOs intended to be directed at the node in maintenance.

A disadvantage of instant maintenance mode is that only a single copy of some data may be available during the maintenance activity. During instant maintenance mode, two copies of data still exist, but any copy residing on the node in maintenance is unavailable for the maintenance duration.

When exiting instant maintenance mode, you do not need to rehydrate the node completely. You only need to sync back any relevant changes that occurred and reuse all the unchanged data on the node. This results in a quick exit from maintenance mode and a quick return to full capacity and performance.

Protected maintenance mode (PMM)

Protected maintenance mode initiates a many-to-many rebalancing process. Data is preserved on the node entering maintenance, and a temporary copy of the data is created on the sustaining nodes. Data on the node in maintenance is frozen and inaccessible. Protected maintenance mode maintains two copies of data at all times, avoiding the risks from the single copy in instant maintenance mode.

During protected maintenance mode, changes are tracked only for writes that affect the SDS under maintenance. When the SDS exits maintenance mode, only the changes that occurred during maintenance need to be synced back to it.



Due to the creation of a temporary third data copy, protected maintenance mode requires more spare capacity than instant maintenance mode. Account for this spare capacity during deployment if you plan to use protected maintenance mode. There must be enough spare capacity to handle at least one other node failure, as protected maintenance mode cycles might be long and other elements could fail.

Protected maintenance mode makes the best use of all unused, available capacity, as it uses both the allocated spare capacity and any generally free capacity. It does not ignore capacity requirements. Nodes entering protected maintenance mode, or nodes in the same fault set, may have degraded capacity.

The following equation summarizes the minimum requirement:

free + spare - (5% of the storage pool) >= protected maintenance mode node size

Use the following command to get the system information for this calculation: scli --query_all
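The following is a minimal sketch of this capacity check, assuming hypothetical capacity figures read from the scli query output; substitute the real values for your storage pool:

# Hypothetical capacities, in GB, taken from the scli output
pool_total_gb=100000   # total storage pool capacity
free_gb=9000           # unallocated free capacity
spare_gb=10000         # configured spare capacity
pmm_node_gb=12000      # capacity of the node entering protected maintenance mode

# free + spare - (5% of the storage pool) >= PMM node size
available=$(( free_gb + spare_gb - pool_total_gb * 5 / 100 ))
if [ "$available" -ge "$pmm_node_gb" ]; then
    echo "OK: ${available} GB available, protected maintenance mode is safe"
else
    echo "WARNING: only ${available} GB available, need ${pmm_node_gb} GB"
fi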

Eject the node from the cluster

When a node is gracefully removed using the UI or CLI, a many-to-many rebalance operation between nodes begins. This ensures that there are two copies of all data on all other nodes before the node being maintained is dropped from the cluster. Data is fully protected, as there are always two available copies of the data.

You may need to adjust the spare capacity assigned to the cluster overall, as the data rebalancing uses up free spare capacity on the other nodes. For example, if you start with 10 nodes and 10% spare capacity, running with nine nodes requires 12% spare capacity to avoid an insufficient spare capacity alert. Spare capacity must be equal to or greater than the capacity of the smallest unit (node).
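The required spare fraction is one node's share of the cluster. As a minimal sketch, assuming equally sized nodes, the minimum percentage can be computed as follows:

# Minimum spare capacity when running with N equally sized nodes:
# spare must cover at least one node, i.e., ceil(100/N) percent
nodes=9
min_spare_pct=$(( (100 + nodes - 1) / nodes ))
echo "With ${nodes} nodes, configure at least ${min_spare_pct}% spare capacity"

For nodes=9, this prints 12%, matching the example above.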

During maintenance, the cluster functions normally, but with one less node and therefore less capacity and lower performance. Data writes are sent to and mirrored on the other nodes. It does not matter how long the maintained node is offline, as it is no longer a part of the cluster. There is no exposure or risk of data unavailability if a problem arises that prohibits the node from being re-added.

General restrictions and limitations:

- Do not put two nodes from the same protection domain into instant maintenance mode or protected maintenance mode simultaneously.
- You cannot mix protected maintenance mode and instant maintenance mode in the same protection domain.
- For each protection domain, all SDSs concurrently in protected maintenance mode must belong to the same fault set. There are no inter-protection domain dependencies for protected maintenance mode.
- You can take down one SDS or a full fault set in protected maintenance mode.

Data assurance during maintenance

Use these guidelines to guarantee your data is safe during maintenance operations.

The following table provides guidance for the available data assurance mechanisms when performing maintenance operations. It also indicates whether the option is available for PowerFlex management controller 2.0.

NOTE: If using a version of PowerFlex prior to 3.5, protected maintenance mode is not available. If the maintenance window is greater than 30 minutes, use the eject node option.

Maintenance operation | Considerations | Higher risk - IMM | Lower risk - PMM | Eject the node from the cluster | Applicable for PFMC 2.0
Node reboot | Generally quick and does not involve risk | Acceptable option | Conservative approach | Unnecessary | Yes
Node upgrade: firmware | Varies in time depending on the components being upgraded | Use for brief upgrades of single components needing a reboot (BIOS) | Recommended; use if upgrading multiple components, including long-running firmware | Unnecessary | Yes
Node upgrade: OS and SDC | Upgrades and/or patches can be applied in under 30 minutes per node | Acceptable for brief patch applications | Recommended in most situations to provide additional protection | Unnecessary | Yes
Node upgrade: firmware, OS, SDC, and CloudLink agent | Mixed upgrade approach combining all updates with a single reboot (the CloudLink agent is not applicable for controller nodes) | Not recommended, as most upgrades will not complete within 30 minutes | Recommended | Unnecessary | Yes
Network changes: restart network, adjust MTU, add/remove VLANs | Network changes are typically quick but interrupt connectivity to nodes | Acceptable for quick updates | Recommended in most cases; provides additional protection | Unnecessary | Yes
PowerFlex software upgrade | Upgrade of components should take less than one minute per node, if no issues occur (the SDC is upgraded as part of the node OS upgrade) | Acceptable for software component update | Provides additional protection to handle any hardware failure | Unnecessary | Yes
Other operations | Use the expected activity time to guide the decision of which mode to use | Use if under 30 minutes | Use if greater than 30 minutes | Use if there are other considerations or you do not expect to return the node to the cluster | Yes

Entering protected maintenance mode

Use this procedure to enter protected maintenance mode using PowerFlex Manager.

About this task

When you put a node into service mode, you specify whether you are performing short-term or long-term maintenance work. The option that you use for long-term maintenance depends on the PowerFlex version you are using.

PowerFlex Manager does not allow a node to enter service mode in the following scenarios:

- VMware NSX-T or NSX-V is configured.
- The PowerFlex Gateway used in the service is being updated on the Resources page.
- The service does not have switches discovered.

Steps

1. Log in to PowerFlex Manager.

2. On the Services page, select a service, and click View Details in the right pane.

3. Click Enter Service mode under Service Actions.

NOTE: The service should have at least three nodes to enter into protected performance maintenance mode using

PowerFlex Manager.

4. Select one or more nodes on the Node Lists page, and click Next.


NOTE: For an environment with fault sets, PowerFlex Manager can put a single node or a full fault set into protected maintenance mode. For an environment without fault sets, a minimum of four nodes is required to use protected maintenance mode.

5. Select Protected Maintenance Mode.

6. Click Enter Service Mode.

7. Verify that the node shows as Service Mode (Protected Maintenance) in PowerFlex Manager.

Exiting protected maintenance mode

Use this procedure to exit protected maintenance mode using PowerFlex Manager.

Steps

1. Log in to PowerFlex Manager.

2. On the Services page, select the service.

3. Click Exit Service Mode.


Administering the CloudLink Center

Adding and managing CloudLink Center licenses

Perform the following procedures to add and manage CloudLink Center licenses through PowerFlex Manager.

License CloudLink Center

Use this procedure to add licenses to CloudLink Center.

About this task

CloudLink license files determine the number of machine instances, CPU sockets, encrypted storage capacity, or physical machines with self-encrypting drives (SEDs) that your organization can manage using CloudLink Center. License files also define the CloudLink Center usage duration.

NOTE: CloudLink Center can act as a Key Management Interoperability Protocol (KMIP) server if you upload a KMIP license to it.

Steps

1. Log in to CloudLink Center.

2. Select System > License.

3. Click Upload License.

4. Browse to the license file and click Upload.

NOTE: If the CloudLink environment is managed by PowerFlex Manager, after you update the license, go to the Resources page, select the CloudLink VMs, and click Run Inventory.

Add the CloudLink Center license in PowerFlex Manager

Use this procedure to add the CloudLink Center license in PowerFlex Manager.

Steps

1. Log in to PowerFlex Manager.

2. Click Settings > Software Licenses, and click Add.

3. Click Choose File, and browse to the license file.

4. Select Type as CloudLink, and click Save.

5. From the Resources page, select the CloudLink VMs, and click Run Inventory.

Delete expired or unused CloudLink Center licenses from PowerFlex Manager

Use this procedure to delete expired or unused CloudLink Center licenses from PowerFlex Manager.

Steps

1. Log in to PowerFlex Manager.



2. Click Settings > Software Licenses.

3. Select the license you want to delete, and click Delete.

4. From the Resources page, select the CloudLink VMs, and click Run Inventory.

Configure custom syslog message format

Use this procedure to configure the custom syslog message format.

Steps

1. Log in to CloudLink Center.

2. Click Server > Change Syslog Format. The Change Syslog Format dialog box is displayed.

3. From the Syslog Format list, select Custom.

4. Enter the string for the syslog entry, and click Change.

Registering KMIP on CloudLink Center

Prerequisites

Ensure you have KMIP server details and the required KMIP server permission files (key.pem, cert.pem, ca.pem). If these files are not available, log in to the KMIP server and download the certificate ZIP file.

About this task

This procedure explains how to add CloudLink Center to a KMIP server and create a KMIP keystore.

Steps

1. Log in to the CloudLink Center.

2. Go to System > Keystore > Add.

3. Provide any name and description and click Next.

4. Select Key Location Type as Local Database.

5. Select the Protector Type as KMIP.

6. Enter the following information:

   KMIP server address
   Username (secadmin)
   Password
   The three permission files downloaded from the KMIP server (key.pem, cert.pem, ca.pem)

7. Click Test. A success message is displayed indicating that the protector is accessible.

8. Click Add. The KMIP keystore is available under the CloudLink keystore.

To use this KMIP keystore for a new service, select CloudLink Center Settings > KMIP keystore while creating the template in PowerFlex Manager. For an existing service, edit the machine group used by the service:

a. Go to CloudLink Center > Agents > Machine Group > Actions > Modify.
b. Change the Keystore to KMIP Keystore and click Modify.
c. Once the keystore is changed, remove the service from PowerFlex Manager and add the existing service from the Services page.


Manage a self-encrypting drive (SED) from CloudLink Center

Use this procedure to manage an SED device through CloudLink Center.

About this task

When managing SEDs from CloudLink Center, be aware of the following:

- CloudLink Center can manage encryption keys for self-encrypting drives (SEDs).
- Managing SEDs with CloudLink Center requires the CloudLink agent to be installed on the machines with SEDs.
- When managed by CloudLink Center, SED encryption keys are stored in the current keystore for the machine group the machines are in.
- The functionality for managing SEDs requires a separate SED license.
- If the SED cannot retrieve the key from CloudLink Center, the SED remains locked.

Steps

From the CloudLink Center, select Agents > Machines, click Actions, and select Manage SED. Ownership of the encryption key is enabled.

NOTE: This option is only available if an SED license is uploaded and an SED is detected in the physical machine managed by CloudLink Center. The Manage SED option does not change data on an SED; it only takes ownership of the encryption key.

Manage a self-encrypting drive from the command line

As an alternative to CloudLink Center, use the command line to manage an SED.

Steps

1. Log in to the Storage Data Servers (SDS).

2. To manage the SED from the command line, type svm manage [device name].

For example, svm manage /dev/sdb.


Release a self-encrypting drive

Use this procedure to release an SED that is managed by CloudLink.

About this task

This option allows you to release ownership of an SED that is managed by CloudLink. This option is only available if an SED license is uploaded and an SED is detected in the physical machine managed by CloudLink Center.

When CloudLink releases an SED, the encryption key is released in CloudLink Center.

Steps

1. From CloudLink Center, go to Agents > Machines and select SDS Machine. Click Release SED.

2. From RELEASE SED, use the menu to select the SED drive that you want to release and click Release.

The status of the SED drive changes to Releasing Control.


Once CloudLink releases control, the SED device status shows as Unmanaged.

NOTE: The Release SED option does not change any data on the SED.

Release management of a self-encrypting drive from the command line

Use this procedure to release an SED using the command line.

Steps

1. Log in to the Storage Data Server (SDS).

2. To release the SED from the command line, type svm release [device name].

For example, svm release /dev/sdb.

Changing the CloudLink secadmin user password

Use this procedure to change the password of the CloudLink secadmin user.

About this task

During predeployment, the administrator completing the CloudLink Center VM deployment sets the password for the CloudLink secadmin user.

During the deployment of PowerFlex Manager, the CloudLink secadmin user password in PowerFlex Manager is set. If the CloudLink secadmin user password is changed after deployment, you must also change the CloudLink secadmin user password within PowerFlex Manager to maintain manageability by PowerFlex Manager.

Steps

1. Open a web browser and log in to either CloudLink VM.

2. Log in with the secadmin username and password (VMwar3123!!).

3. On the upper right corner, click secadmin, and click Change Password.


4. On the CHANGE PASSWORD screen, type the Current Password and New Password in the respective fields, and click Change.

5. On the upper right corner click secadmin, and then select Logout.

6. Log in with secadmin username and the new password.

7. Change the CloudLink password in PowerFlex Manager by doing the following steps:

a. In PowerFlex Manager, go to Settings > Credentials management, select the CloudLink credential, click Edit, change the Password, and click Save. See Credentials management for more information.

8. Test the changes:

a. In the PowerFlex Manager GUI, go to the Resources page, select the CloudLink Center, and click Run Inventory.
b. To confirm that the process completes with no errors, check Settings > Logs.

Unlocking the CloudLink secadmin user

Use this procedure to unlock the CloudLink secadmin user.

About this task

The CloudLink secadmin user account gets locked after three unsuccessful login attempts (by default).

Steps

1. Log in to the VMware vCenter that manages CloudLink center VMs and launch the CloudLink VM Console.

2. Log in with CloudLink user credentials. The Summary page displays.

3. Click OK.

4. Type the CloudLink user password on the Re-enter password page.

5. On Update Menu, select Unlock User, and click OK. The User secadmin has been unlocked message is displayed.

6. Click OK.

7. To test the changes, log in to CloudLink VM IP using the secadmin user, and the correct password.

Setting CloudLink Vault passcodes

During the initial server configuration, the vault passcodes are set.

Steps

1. To set or change the passcode, log in to the CloudLink Center.

2. Go to System > Vault > Actions > Set passcodes.

3. Update passcodes and click Set passcodes.

NOTE: You can change passcodes at any time.

Back up and restore CloudLink Center

Viewing backup information

Use this procedure to view the Backup page information.

Steps

To view the backup information, log in to the CloudLink Center, and click System > Backup. The Backup page lists the following information:


Backup File Prefix: The prefix used for the backup files.
Current Key ID: The identifier for the current RSA-2048 key pair.
Current Backup File: The name of the current backup file.
Current Backup Time: The date and time that the current backup file was generated.
Backup Schedule: The schedule for generating automatic backups.
Next Backup In: The time remaining before the next automatic backup is generated.

When a backup file is downloaded, the Backup page lists the following additional information:

Last Downloaded File: The name of the backup file that was last downloaded. Only shown when a backup file has been downloaded.
Last Downloaded Time: The date and time of the last backup file download. Only shown when a backup file has been downloaded.
Backup Store: The backup store configuration type. If you have not configured a backup store, the value is Local, and backups are stored on the local desktop.

You can also use FTP or SFTP servers as backup stores. To change the backup store, click System > Backup > Actions > Change Backup Store.

If you have configured an FTP or SFTP backup store, the following additional information is available:

Host: The remote FTP, SFTP, or FTPS host where you saved the CloudLink Center backups. You can set this value to the host IP address or hostname (if DNS is configured).
Port: The port used to access the backup store.
User: The user with permission to access the backup store.
Directory: The directory in the backup store where backup files are available.

Changing the schedule for automatic backups

CloudLink Center automatically generates a backup file each day at midnight (UTC time).

Steps

To change the schedule for generating automatic backups, click System > Backup > Actions > Change Backup Schedule.

Generating a backup file manually

If you want to preserve CloudLink Center before the next automatic backup, you can generate a backup manually.

Steps

In the CloudLink Center, click System > Backup > Actions > Generate new backup.


Generating a backup key pair

Use this procedure to generate a new backup key pair.

About this task

For example, if the private key for a backup key pair is lost, you can generate a new key pair. You cannot access your backup files without the associated private key. When you generate a new key pair, CloudLink Center automatically generates a new backup file to ensure that the current backup can be opened with the private key of the current key pair.

Dell EMC recommends the following practices when you generate a new backup key pair.

Steps

1. Download the private key to the Downloads folder for the current user account. For example, C:\Users\Administrator\Downloads.

NOTE: The previously generated backup key will not open backup files created after a new key is generated.

2. Click System > Backup > Actions > Generate And Download New Key.

Downloading the current backup file

You can download the current backup file at any time.

About this task

The current backup file is either:

- The last backup file that CloudLink Center automatically created.
- The last backup file that you manually generated after the last automatic backup.

Steps

1. Click System > Backup > Actions > Download Backup.

2. In the Download Current Backup dialog box, click Download.

When you download the current backup file, CloudLink Center shows the age of the backup file.


Restoring the CloudLink backup

Restore the CloudLink backup.

Steps

1. Log in to the CloudLink Center.

2. Click System > Backup > Actions > Restore keystores.

3. In the Restore Keystores dialog box, complete the following steps:

a. In the Key box, browse to the private key file.
b. In the Backup box, browse to the backup file.
c. In the Unlock box, type the passcode that was set during the initial configuration of the CloudLink Center.
d. Click Restore.

A Restore Keystores succeeded message is displayed.

NOTE: If the CloudLink backup is not associated with a key pair, a "file is corrupted or key mismatch" error message is displayed. In this scenario, see Generating a backup key pair and Downloading the current backup file.


Powering off and on the PowerFlex appliance cluster

Power off the PowerFlex management controller 2.0

Power off the PowerFlex management controller 2.0 on each of the PowerFlex management controller ESXi hosts.

Steps

1. Determine the primary MDM IP and the protection domain name:

a. Log in to PowerFlex Manager to determine the primary MDM.
b. To view the details of a service, select the component. Scroll down on the Service Details page; the following information is displayed based on the resource types in the service:

   Primary MDM IP
   Protection Domain

2. Power off the VMs except for the PowerFlex SVMs:

a. Log in to the PowerFlex management controller 2.0 ESXi hosts.
b. Click Virtual Machines.
c. Power off all the VMs, except the PowerFlex SVMs.

3. Inactivate the protection domain:

a. Log in to the primary MDM.
b. Type scli --inactivate_protection_domain --protection_domain_name <protection domain name> to inactivate the protection domain.
c. Enter Y to confirm.
d. Type scli --query_protection_domain --protection_domain_name <protection domain name> to verify that the operational state of the protection domain is inactive.
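For example, assuming a protection domain named PFMC-PD01 (a hypothetical name; see the Logical Configuration Survey for the actual name):

scli --inactivate_protection_domain --protection_domain_name PFMC-PD01
scli --query_protection_domain --protection_domain_name PFMC-PD01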

4. Power off the PowerFlex SVMs:

a. Log in to each of the PowerFlex management controller ESXi hosts.
b. Click Virtual Machines.
c. Power off the PowerFlex SVM.

5. Enter maintenance mode for the PowerFlex management controller 2.0:

a. Log in to the PowerFlex management controller 2.0 ESXi hosts.
b. Place each host in maintenance mode.

6. Power off the PowerFlex management controller 2.0:

a. Log in to the PowerFlex management controller 2.0 ESXi hosts.
b. Power off the PowerFlex management controller 2.0.
c. Verify that the hosts are shut down using the iDRAC.

Power on the PowerFlex management controller 2.0

Power on the PowerFlex management controller 2.0 on each of the PowerFlex management controller ESXi hosts.

Steps

1. Power on the PowerFlex management controller 2.0:



a. Log in to the iDRACs of the PowerFlex management controller 2.0 nodes.
b. Power on the PowerFlex management controller 2.0 nodes.
c. Verify that VMware ESXi boots and that you can ping the management IP address. Allow up to 20 minutes for the PowerFlex management controller 2.0 to boot after VMware ESXi loads.

2. Exit maintenance mode on the PowerFlex management controller 2.0:

a. Log in to the ESXi hosts on the PowerFlex management controller 2.0.
b. Exit maintenance mode on the PowerFlex management controller 2.0.

3. Power on the PowerFlex SVMs:

a. Log in to the ESXi hosts on the PowerFlex management controller 2.0.
b. Click Virtual Machines.
c. Power on the SVM.

4. Activate the protection domain:

a. Log in to the primary MDM.
b. Type scli --activate_protection_domain --protection_domain_name <protection domain name> to activate the protection domain (see the Logical Configuration Survey for the name of the PowerFlex management controller 2.0 protection domain).
c. Type scli --query_protection_domain --protection_domain_name <protection domain name> to verify that the operational state of the protection domain is active.
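Again using the hypothetical protection domain name PFMC-PD01:

scli --activate_protection_domain --protection_domain_name PFMC-PD01
scli --query_protection_domain --protection_domain_name PFMC-PD01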

5. Rescan the storage devices for datastores:

a. Log in to the ESXi hosts on the PowerFlex management controller 2.0.
b. Click Storage > Devices > Rescan.

6. Power on the VCSA, DNS, and jump server virtual machines for the PowerFlex management controller 2.0:

a. Log in to PowerFlex management controller A through the VMware host client and verify that the DNS, VMware vCenter, and jump VMs are installed. If the components are not on PowerFlex management controller A, verify that the components exist on PowerFlex management controller B or C and power them on.
b. Verify that the VMs started and have network connectivity.
c. Log in to the vCSA.
d. Power on the remaining management VMs in the following order:

   i. PowerFlex management controller Gateway
   ii. PowerFlex Manager
   iii. CloudLink Center VMs (if applicable)
   iv. PowerFlex GUI presentation server
   v. vCSA for the customer (if applicable)
   vi. Secure Remote Services

e. Verify HA, DRS, and the affinity rules: log in to the VMware vSphere Client and browse to the cluster. Click Configure. Under Services, verify that vSphere DRS and vSphere Availability are on. Under Configuration, verify that the VM and/or Host rules are added.

Powering off a PowerFlex appliance hyperconverged cluster

To safely power off the PowerFlex appliance cluster, power off one component at a time in the order specified in this procedure. This procedure applies to PowerFlex appliance nodes with VMware ESXi.

Prerequisites

Verify that all startup configurations for the network switches are saved.


Steps

1. Launch the PowerFlex GUI and log in to the primary PowerFlex MDM. Verify the PowerFlex cluster is healthy and no rebuild or rebalances are running by observing the Rebuild and Rebalance widgets on the dashboard.

2. Log in to the VMware vSphere Client of the vCenter that manages the PowerFlex appliance cluster.

a. Expand the clusters.
b. Shut down all customer/application VMs (not SVMs) running on the PowerFlex storage datastores.
c. Disable DRS and HA on the PowerFlex appliance cluster and put the nodes into maintenance mode.

CAUTION: Do not shut down the SVMs, as this can cause data loss.

3. In the PowerFlex GUI, inactivate the PowerFlex protection domains (both source and destination protection domains if asynchronous replication is enabled):

For PowerFlex 3.5 and later:
a. In Configuration, select the Protection Domains and click More > Inactivate.
b. Click Inactivate in the pop-up.
c. Verify that the operation completed successfully and click Dismiss.
d. Click OK, and then type the administrator password when prompted.
e. Repeat for each protection domain and verify that each is deactivated.
f. Exit the PowerFlex GUI presentation server.

For PowerFlex versions prior to 3.5:
a. Click Backend > Storage and change the view to By SDSs.
b. Right-click a protection domain, and select Inactivate.
c. Click OK, and then type the administrator password when prompted.
d. Repeat for each protection domain and verify that each is deactivated.
e. Exit the PowerFlex GUI.

4. Shut down all the SVMs.

5. From the VMware vSphere Client of the vCenter that manages the PowerFlex Gateway VM and CloudLink center VM:

a. Shut down the PowerFlex Gateway VM.
b. Shut down both CloudLink Center VMs.

6. Use iDRAC to do a Graceful Shutdown on the PowerFlex appliance nodes.

7. Using the appropriate VMware vSphere Client:

a. Shut down the PowerFlex Manager VM.

NOTE: If you shut down the PowerFlex Manager VM while a job (such as a service deployment) is still in progress, the job will not complete successfully.

8. If required, power off the access switches first and then the management switch.

Powering on a PowerFlex appliance hyperconverged cluster

To safely power on the PowerFlex appliance, power on one component at a time in the order specified in this procedure.

About this task

This procedure applies to PowerFlex appliance nodes with VMware hypervisors (ESXi).

Prerequisites

Verify that all connections are correct and seated properly.

Steps

1. Power on the network components in the following order:


NOTE: Network components take about 10 minutes to power on.

a. Management switch
b. Access switches

NOTE: Ping the management IP address of the switches to verify power on is complete.

2. Using the appropriate VMware vSphere Client, power on these VMs in the following order:

a. PowerFlex Gateway presentation server
b. Both CloudLink Center VMs
c. PowerFlex Manager

3. Power on the PowerFlex appliance nodes and do the following:

a. Use SSH to connect to all network switches.
b. Verify that connected interfaces are not in a not connected/down state, using the command show interface status.
c. Use iDRAC to power on all the PowerFlex appliance compute nodes and verify that they are fully booted to the ESXi screen.
d. Using the VMware vSphere Client of the vCenter that manages the PowerFlex appliance cluster, take each PowerFlex appliance node out of maintenance mode:

   i. Power on all SVMs.
   ii. Enable DRS and HA on the PowerFlex appliance cluster.

e. Log in to PowerFlex.

For PowerFlex 3.5 and later:

i. Verify that all software-defined storage (SDS) components are online. Verify that all disks are online.
ii. In Configuration > Protection Domain, select the protection domain, click More > Activate, and repeat for each protection domain.
iii. Repeat the steps for source and destination, if asynchronous replication is enabled.
iv. Verify the following if asynchronous replication is enabled:
v. Click Protection > SDR. Verify that all the SDRs are healthy.
vi. Click Protection > Journal Capacity. Ensure that journal capacity has already been added.
vii. Click Protection > RCGs. Verify that the RCG in the replication cluster returns to a working state.

For PowerFlex versions prior to 3.5:

i. Verify that all software-defined storage (SDS) components are online. Verify that all disks are online.
ii. Select Backend > Storage > Protection Domain > Activate and repeat for each protection domain.

f. From the VMware vSphere client that manages the PowerFlex appliance cluster, do the following:

i. Rescan to rediscover PowerFlex storage datastores.
ii. Power on the customer VMs. VMs might be displayed as inaccessible because PowerFlex storage is not available until all the SVMs complete initialization.

Powering off PowerFlex appliance two-layer cluster

This procedure applies to a PowerFlex appliance two-layer cluster with VMware ESXi for compute nodes and the embedded operating system based on CentOS for storage-only nodes.

About this task

To safely power off the PowerFlex appliance two-layer cluster, power off one component at a time in the order specified in this procedure.


Prerequisites

Verify that all startup configurations for the network switches are saved.

Steps

1. Launch the PowerFlex GUI and log in to the primary PowerFlex MDM. Verify the PowerFlex cluster is healthy and no rebuild or rebalances are running by noting the Rebuild and the Rebalance widgets on the dashboard.

2. In the VMware vSphere Web Client that manages the PowerFlex appliance cluster compute-only nodes:

a. Expand the clusters and shut down all application VMs running on the PowerFlex storage datastores.
b. Disable DRS and HA on the customer compute cluster.
c. Put the PowerFlex appliance compute nodes into maintenance mode.

3. Use iDRAC to do a Graceful Shutdown on the PowerFlex appliance compute nodes.

4. In the PowerFlex GUI, inactivate the PowerFlex protection domains (both source and destination protection domains if asynchronous replication is enabled):

For PowerFlex 3.5 and later, using the PowerFlex GUI presentation server:
a. In Configuration, select the Protection Domains and click More > Inactivate.
b. Click Inactivate in the pop-up.
c. Verify that the operation completed successfully and click Dismiss.
d. Click OK, and then type the administrator password when prompted.
e. Repeat for each protection domain and verify that each is deactivated.
f. Exit the PowerFlex GUI presentation server.

Or, in the PowerFlex GUI:
a. Click Configuration > Protection Domain.
b. For each protection domain, click More > Inactivate.
c. Click OK and type the administrator password when prompted.
d. Repeat for each protection domain and verify that each is deactivated.
e. Exit the PowerFlex GUI.

For PowerFlex versions prior to 3.5:
a. Click Backend > Storage and change the view to By SDSs.
b. Right-click a protection domain and select Inactivate.
c. Click OK and type the administrator password when prompted.
d. Repeat for each protection domain and verify that each is deactivated.
e. Exit the PowerFlex GUI.

5. SSH to each of the PowerFlex appliance storage-only nodes and shut down the nodes by typing shutdown -h.

6. Use iDRAC to confirm the PowerFlex appliance storage nodes have been powered off.

7. In the VMware vSphere Web Client that manages the PowerFlex Gateway VM:

a. Shut down the PowerFlex Gateway by running the command shutdown -h in the console.

b. Confirm that the PowerFlex Gateway VM is shut down by verifying that vSphere shows the VM as Powered Off.

8. Using the appropriate VMware vSphere web client, shut down both CloudLink center VMs.

9. Using the appropriate VMware vSphere web client, shut down the PowerFlex Manager VM.

NOTE: If you shut down the PowerFlex Manager VM while a job (such as a service deployment) is still in progress, the job will not complete successfully.

10. Power off the access switches first and then the management switch.


Powering on PowerFlex appliance two-layer cluster

To safely power on the PowerFlex appliance cluster, power on one component at a time in the order specified in this procedure.

About this task

This procedure applies to PowerFlex appliance two-layer cluster with ESXi for compute nodes and the embedded operating system based on CentOS for storage only nodes.

Prerequisites

Verify that all connections are correct and properly seated.

Steps

1. Power on the network components in the following order:

NOTE: Network components take about 10 minutes to power on.

a. Management switch
b. Access switches

NOTE: Ping the management IP address of the switches to verify power on is complete.

2. Using the appropriate VMware vSphere web client, power on these VMs in the following order:

a. PowerFlex Gateway
b. Both CloudLink Center VMs
c. PowerFlex Manager

3. Power on the PowerFlex appliance nodes by doing the following:

a. Use SSH to connect to all network switches. To verify that connected interfaces are not in a "not connected/down" state, use the command show interface status.

b. Use iDRAC to power on all the PowerFlex appliance storage nodes and verify that they are fully booted to the Linux prompt.

c. Log in to the PowerFlex GUI.

For PowerFlex 3.5 and later:

i. Verify that all software-defined storage (SDS) components are online. Verify that all disks are online.
ii. In Configuration > Protection Domain, select the protection domain, click More > Activate, and repeat for each protection domain.
iii. Repeat the steps for source and destination, if asynchronous replication is enabled.
iv. Verify the following if asynchronous replication is enabled:
v. Click Protection > SDR. Verify that all the SDRs are healthy.
vi. Click Protection > Journal Capacity. Ensure that journal capacity has already been added.
vii. Click Protection > RCGs. Verify that the RCG in the replication cluster returns to a working state.

For PowerFlex versions prior to 3.5:

i. Verify that all software-defined storage (SDS) components are online by noting the SDSs widget on the Dashboard.
ii. Verify that all disks are online by looking at Backend > Property Sheet > General for each SDS node.
iii. Select Backend > Storage > Protection Domain > Activate and repeat for each protection domain.
iv. Verify that there are no errors, warnings, or alerts on the system.
v. Exit the PowerFlex GUI.

d. Use iDRAC to power on all the PowerFlex appliance compute nodes and verify that they are fully booted to the VMware ESXi console screen.

e. Using the VMware vSphere Web Client of the vCenter that manages the PowerFlex appliance cluster:

i. Take each PowerFlex appliance compute-only node out of maintenance mode.
ii. Enable DRS and HA on the PowerFlex appliance compute-only cluster.
iii. Rescan to rediscover PowerFlex storage datastores.


iv. Power on the customer VMs.

Powering off PowerFlex compute-only nodes with Windows Server 2016 or 2019

Use this procedure to power off PowerFlex compute-only nodes running Windows Server.

Steps

1. Connect to the Windows Server system from Remote Desktop with an account that has administrator privileges.

2. Power off through any one of the following modes:

a. GUI: Click Start > Power > Shutdown.
b. Command line using PowerShell: Run the Stop-Computer cmdlet.

Powering off PowerFlex compute-only nodes with Red Hat

Use this procedure to power off PowerFlex compute-only nodes with Red Hat.

Steps

SSH to the PowerFlex appliance Red Hat compute-only nodes and shut down the nodes by using the command:

shutdown -h
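As a minimal sketch, the shutdown can be scripted over SSH; the hostnames below are hypothetical placeholders for your compute-only nodes:

# Shut down several Red Hat compute-only nodes in sequence
for node in co-node1 co-node2 co-node3; do
    ssh root@"${node}" 'shutdown -h now'
done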


Ports and authentication protocols

PowerFlex Manager ports and protocols

PowerFlex Manager uses the following ports and protocols for data communication:

Port | Protocol | Port type | Direction | Use
22 | SSH | TCP | Inbound/outbound | I/O module (SSH with the root account is disabled by default)
22, 80, 135 | N/A | TCP/IP | Outbound | Duplicate IP detection
53 | DNS | UDP | Outbound | DNS server
67, 68 | DHCP | UDP | Outbound | DHCP server
69 | TFTP | UDP | Inbound | Firmware updates (TFTP is used only for operating system installation (PXE) boot when provisioning servers)
80, 8080 | HTTP | TCP | Inbound/outbound | HTTP communication (all traffic is redirected to HTTPS)
111 | rpcbind | TCP | Inbound/outbound | NFS
123 | NTP | UDP | Outbound | Time synchronization
162, 11620 | SNMP | UDP | Inbound | SNMP synchronization
389 | LDAP | TCP/UDP | Inbound/outbound | Unsecured and unencrypted LDAP transmission
443 | HTTPS | TCP | Inbound/outbound | Secure HTTP communication (SSL v3, TLS v1.0, and TLS v1.1 are disabled)
443, 4433 | WS-MAN | TCP | Outbound | iDRAC and CMC communication
139, 445 | CIFS | TCP | Inbound/outbound | Back up to CIFS share
514 | rsyslog | TCP | Outbound | Remote syslog server communication
636 | LDAPS | TCP/UDP | Inbound/outbound | LDAPS, a secure version of LDAP where communication is transmitted over an SSL tunnel
2049 | NFS | TCP/UDP | Inbound/outbound | Back up to NFS share
4002, 4003 | NFS | TCP/UDP | Inbound/outbound | nlockmgr and mountd
8140 | Puppet over HTTPS | TCP | Inbound | New node provisioning
9443 | HTTPS | TCP | Outbound | Secure Remote Services gateway communication
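A quick way to confirm that one of these TCP ports is reachable from a management host is a simple connection test. The following is a minimal sketch, assuming the nc (netcat) utility is available and using a hypothetical hostname:

# Test TCP connectivity to the secure HTTP port (443) listed above
nc -zv powerflex-manager.example.local 443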



PowerFlex ports and authentication

For information about the ports and protocols used by PowerFlex components, see the Dell EMC PowerFlex Security Configuration Guide. You can find this guide at the Dell Technologies Technical Resource Center.

VMware vSphere ports and protocols

This section contains information for VMware vSphere ports and protocols.

VMware vSphere 7.0

For information about ports and protocols for VMware vCenter Server and VMware ESXi hosts, see VMware Ports and Protocols.

VMware vSphere 6.7

For information about ports and protocols for VMware vCenter Server and Platform Services Controller, see Required Ports for vCenter Server and Platform Services Controller or Additional vCenter Server TCP and UDP Ports.

For information about ports and protocols for VMware ESXi hosts, see Incoming and Outgoing Firewall Ports for ESXi Hosts.

VMware vSphere 6.5

For information about ports and protocols for VMware vCenter Server and Platform Services Controller, see Required Ports for vCenter Server and Platform Services Controller.

For information about ports and protocols for VMware ESXi hosts, see Incoming and Outgoing Firewall Ports for ESXi Hosts.

CloudLink Center ports and protocols

CloudLink Center uses the following ports and protocols for data communication:

Port | Protocol | Port type | Direction | Use
80 | HTTP | TCP | Inbound/outbound | CloudLink agent download and cluster communication
443 | HTTPS | TCP | Inbound/outbound | CloudLink Center web access and cluster communication
1194 | Proprietary over TLS 1.2 | TCP, UDP | Inbound | CloudLink agent communication
5696 | KMIP | TCP | Inbound | KMIP service
123 | NTP | UDP | Outbound | NTP traffic
162 | SNMP | UDP | Outbound | SNMP traffic
514 | syslog | UDP | Outbound | Remote syslog server communication


Additional documentation

The following documentation resources support administrative procedures on PowerFlex appliance, along with general resources:

Dell EMC PowerFlex: Access PowerFlex documentation at docs.delltechnologies.com/.

VMware vSphere: Refer to the VMware documentation and select the appropriate version for detailed information on completing administrative procedures for vSphere Server on PowerFlex appliance, including:

- Changing the vCenter administrative password
- Adding or replacing NTP servers in the VMware vCenter Server Appliance configuration
- Configuring the DNS, IP address, and proxy settings
- Joining the VMware vCenter Server Appliance to the Active Directory domain
- Leaving an Active Directory domain
- Setting an alarm
- Migrating VMs
- Using and migrating vSphere Update Manager
- Configuring VMware vCenter high availability

Secure Remote Services: Access Secure Remote Services technical documentation and downloads at support.emc.com/products/37716_EMC-Secure-Remote-Services-Virtual-Edition.

Related information:

- VMware documentation: docs.vmware.com/en/VMware-vSphere/index.html
- CloudLink: docs.delltechnologies.com/

Configure VMware vCenter high availability

Use this procedure to enable VMware vCenter high availability.

About this task

VMware vCenter high availability (vCenter HA) protects the VMware vCenter Server against host and hardware failures. The active-passive architecture of the solution can also help reduce downtime significantly when you patch the vCenter Server.

Steps

Create a three-node vCenter HA cluster that contains active, passive, and witness nodes. Different configuration paths are available; your selection depends on your existing configuration.


To be able to print Dell PowerFlex Appliance R6525 Solution Administration Guide, simply download the document to your computer. Once downloaded, open the PDF file and print the Dell PowerFlex Appliance R6525 Solution Administration Guide as you would any other document. This can usually be achieved by clicking on “File” and then “Print” from the menu bar.