
Dell Converged Technology Extensions for Storage Product Guide

June 2022 Rev. 1.18

Notes, cautions, and warnings

NOTE: A NOTE indicates important information that helps you make better use of your product.

CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid the problem.

WARNING: A WARNING indicates a potential for property damage, personal injury, or death.

© 2016 - 2022 Dell Inc. or its subsidiaries. All rights reserved. Dell Technologies, Dell, and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be trademarks of their respective owners.

Contents

Revision history

Chapter 1: Introduction

Chapter 2: Understanding the architecture
  System architecture and components
  Converged Technology Extensions support
  Converged Technology Extensions deployment
  Converged Technology Extensions connectivity

Chapter 3: Administering the system
  Administering a Converged Technology Extension
  Server parameter settings with mixed arrays (VMware vSphere 7.0)
  Advanced Settings for Storage Technology Extensions with VMware vSphere 6.5 and later
  VMware paravirtual SCSI controllers

Chapter 4: Sample configuration
  Sample elevation for Converged Technology Extension for XtremIO storage

Chapter 5: Additional references
  Converged Technology Extension storage components

Revision history

Date | Document revision | Description of changes

June 2022 1.18 Updated to remove references to the VxBlock System 340 and Vblock System 340, which are no longer supported as of June 30, 2022.

January 2022 1.17 Updated Server parameter settings with mixed arrays (VMware vSphere 7.0) and Advanced Settings for Storage Technology Extensions with VMware vSphere 6.5 and later to provide commands for setting parameters.

September 2021 1.16 Added information about Dell Unity XT.

Added the section Server parameter settings with mixed arrays (VMware vSphere 7.0).

Updated the section Converged Technology Extensions support.

December 2020 1.15 Nontechnical updates.

June 2020 1.14 Added information about XtremIO X2 storage.

September 2019 1.13 Added support for Dell Unity XT.

March 2019 1.12 Added support for VMware vSphere 6.7 by merging the following two topics into one that applies to VMware vSphere 6.0, 6.5, and 6.7:
- Advanced Settings for Storage Technology Extensions with VMware vSphere 6.0
- Advanced Settings for Storage Technology Extensions with VMware vSphere 6.5

The new topic is Advanced Settings for Storage Technology Extensions with VMware vSphere 6.x.

August 2018 1.11 Added support for Converged Technology Extension for PowerMax storage.

May 2018 1.10 Added information about XtremIO X2.

December 2017 1.9 Made the following changes:
- Removed the topic Storage Technology Extensions (VMware vSphere 5.5).
- Changed the title of Storage Technology Extensions (VMware vSphere 6.0) to Advanced Settings for Storage Technology Extensions with VMware vSphere 6.0 and updated the third row of the table.
- Added the topic Advanced Settings for Storage Technology Extensions with VMware vSphere 6.5.
- Removed "Interconnect" from the table header in Converged Technology Extension for XtremIO storage.

September 2017 1.8 Updated to introduce support for Dell Unity 350F, 450F, 550F and 650F

August 2017 1.7 Updated to introduce support for VMAX All Flash 950F and 950FX

July 2017 1.6 Updated to introduce support for Symmetrix Remote Data Facility (SRDF)

February 2017 1.5 Add a table describing which technology extensions are supported on which platforms

November 2016 1.4 Add information about the VMAX3 250F All Flash Array

Add Dell Unity Hybrid and Dell Unity All-Flash storage arrays

Add information for connecting XtremIO arrays with any other array type

July 2016 1.3 Add VMAX All Flash storage array


May 2016 1.2 Remove physical planning information from this book and move it to the Converged Systems Physical Planning Guide

March 2016 1.1 Add XtremIO

February 2016 1.0 Initial release


Introduction

This document provides an overview of the Converged Technology Extensions for storage. It also provides the information required for planning the installation of the Converged Technology Extensions.

The target audience for this document includes build teams, deployment and installation personnel, sales engineers, field consultants, and advanced services specialists. This document is designed for people familiar with:

- Storage arrays
- Converged Systems
- Cisco MDS switches
- Cisco Nexus switches

Refer to the Glossary for terms, definitions, and acronyms.


Understanding the architecture

System architecture and components

The following information summarizes the system architecture and components.

Converged Technology Extensions for storage have a number of features, including:
- Standardized Dell cabinets, with multiple North American and international power solutions, for XtremIO, Dell Unity, and Dell Unity XT storage arrays
- Standardized Dell cabinets for the VMAX3 and VMAX All Flash storage arrays
- Standardized VxBlock cabinets for VMAX 250F/FX and PowerMax
- Block (SAN) and unified storage options (SAN and NAS)
- Support for multiple features of the Dell operating environment
- Separated network architecture that provides the option to leverage Cisco Nexus switches to support IP and Cisco MDS switches to support SAN

Each Converged Technology Extension contains the following key storage components:

Dell Unity Hybrid arrays:
- Dell Unity 600
- Dell Unity 500
- Dell Unity 400
- Dell Unity 300

Dell Unity All-Flash arrays:
- Dell Unity 650F
- Dell Unity 600F
- Dell Unity 550F
- Dell Unity 500F
- Dell Unity 450F
- Dell Unity 400F
- Dell Unity 350F
- Dell Unity 300F

Dell Unity XT Hybrid arrays:
- Dell Unity 880
- Dell Unity 680
- Dell Unity 480
- Dell Unity 380

Dell Unity XT All-Flash arrays:
- Dell Unity 880F
- Dell Unity 680F
- Dell Unity 480F
- Dell Unity 380F

XtremIO arrays:
- XtremIO 40 TB - 1, 2, 4, 6, or 8 X-Bricks
- XtremIO 20 TB - 1, 2, 4, 6, or 8 X-Bricks
- XtremIO 10 TB (encryption capable) - 1, 2, or 4 X-Bricks
- XtremIO X2-R - 1, 2, 3, or 4 X-Bricks
- XtremIO X2-S - 1, 2, 3, or 4 X-Bricks

VMAX3 Hybrid arrays:
- VMAX 400K
- VMAX 200K
- VMAX 100K

VMAX All Flash arrays:
- VMAX All Flash 950F and 950FX
- VMAX All Flash 850F and 850FX
- VMAX All Flash 450F and 450FX
- VMAX All Flash 250F and 250FX

PowerMax arrays:
- PowerMax 8000 Essentials and Pro
- PowerMax 2000 Essentials and Pro

The Release Certification Matrix provides a list of the certified versions of components for the Converged System and the Converged Technology Extensions.

Converged Technology Extensions support

For information about Converged Technology Extensions support, see the following resources:
- Converged Systems End of Service Life Dashboard
- Dell Support Service Descriptions

Converged Technology Extensions deployment

Converged Technology Extensions are deployed with new or existing Converged Systems, in various configurations.

See the appropriate Release Certification Matrix (RCM) for a list of Converged Technology Extension configurations that are supported on your Converged System.

The following table contains information about how the Converged Technology Extensions are deployed physically:

Dell Unity storage:
- Can be deployed with the following Converged Technology Extensions: Dell Unity, VMAX3, PowerMax
- Dedicated, IPI-enabled, 42 RU cabinets
- Follows existing VxBlock and Vblock Systems 350 physical build standards

Dell Unity XT storage:
- Can be deployed with the following Converged Technology Extensions: Dell Unity, VMAX3, Dell Unity XT, PowerMax, XtremIO
- Dedicated, IPI-enabled, 40 RU and 42 RU cabinets
- Follows existing VxBlock Systems 1000 physical build standards

XtremIO storage:
- Can be deployed with the following Converged Technology Extensions: XtremIO
- Dedicated, IPI-enabled, 42 RU cabinets
- Follows existing VxBlock and Vblock Systems 540 physical build standards

VMAX3 storage:
- Can be deployed with the following Converged Technology Extensions: VMAX3, VMAX All Flash, PowerMax
- Standard VMAX3 cabinets

VMAX All Flash storage:
- Can be deployed with the following Converged Technology Extensions: Dell Unity, VMAX All Flash, VMAX3, PowerMax
- Standard VxBlock System cabinets

PowerMax storage:
- Can be deployed with the following Converged Technology Extensions: Dell Unity, VMAX3, VMAX All Flash, PowerMax
- Standard VxBlock System cabinets

The following table contains information about how the Converged Technology Extensions are deployed logically:


Converged Technology Extension | Logical configuration build standards
Dell Unity storage | VxBlock and Vblock Systems 350 logical build standards
Dell Unity XT storage | VxBlock Systems 1000 logical build standards
XtremIO storage | VxBlock and Vblock Systems 540 logical build standards
VMAX3 storage | VxBlock and Vblock Systems 740 logical build standards
VMAX All Flash storage | VxBlock and Vblock Systems 740 logical build standards
PowerMax storage | VxBlock Systems 1000 logical build standards

The following table contains information about how the Converged Technology Extensions are managed:

Converged Technology Extension for Dell Unity storage: Unisphere for Dell Unity

Converged Technology Extension for Dell Unity XT storage: Unisphere for Dell Unity XT

Converged Technology Extension for XtremIO storage: XtremIO Management Server (XMS). Each XMS instance can support up to eight XtremIO clusters. XMS version 6.0.1-30 can manage both X1 and X2 clusters. The X1 cluster must be at XIO version 4.0.15-15 or later.

Converged Technology Extension for VMAX3 storage: Unisphere for VMAX3 or Solutions Enabler
- Unisphere for VMAX3 manages multiple VMAX3 storage arrays*
- Solutions Enabler

Converged Technology Extension for VMAX All Flash storage: Unisphere for VMAX3 or Solutions Enabler
- Unisphere for VMAX3 manages multiple VMAX3 storage arrays*
- Solutions Enabler

Converged Technology Extension for PowerMax storage: Unisphere for PowerMax or Solutions Enabler
- Unisphere for PowerMax manages multiple PowerMax storage arrays*
- Solutions Enabler

NOTE: * Multiple arrays can be managed from a single instance of Unisphere when gatekeepers from each array are allocated to the Solutions Enabler server.

Converged Technology Extensions connectivity

To connect a Converged Technology Extension to a Converged System, follow the same procedures as described in the documentation for the host Converged System.

NOTE: The servers should be dedicated to the XtremIO storage array if VMAX storage arrays are also deployed in the same Converged System. Failure to adhere to this practice might result in performance issues with some or all of the storage arrays.

Converged Technology Extensions for VMAX3, VMAX All Flash, and PowerMax

The following table shows the number of FC ports per fabric, for each engine for the following arrays:

- VMAX 100K, 200K, 400K
- VMAX 250F, 250FX
- PowerMax 2000 Essentials and Pro
- PowerMax 8000 Essentials and Pro

Number of engines | FC ports per fabric (two SLICs per director) | FC ports per fabric (four SLICs per director)
1 | 8 | 16
2 | 16 | 32
3 | 24 | 48
4 | 32 | 64
5 | 40 | 80
6 | 48 | 96
7 | 56 | 112
8 | 64 | 128

The following table shows the number of FC ports per fabric, for each engine for the following arrays:

- VMAX 450F, 450FX
- VMAX 850F, 850FX
- VMAX 950F, 950FX
- PowerMax 8000 Essentials and Pro (as a single engine only)

Number of engines | FC ports per fabric (two SLICs per director) | FC ports per fabric (three SLICs per director)
1 | 8 | 12
2 | 16 | 24
3 | 24 | 36
4 | 32 | 48
5 | 40 | 60
6 | 48 | 72
7 | 56 | 84
8 | 64 | 96
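Both tables follow the same pattern: adding one SLIC per director adds four FC ports per fabric for each engine. As a hedged reading of the table values (the per-SLIC port contribution is inferred from the tables above, not stated explicitly in this guide):

\text{FC ports per fabric} = 4 \times N_{\text{engines}} \times N_{\text{SLICs per director}}

For example, six engines with three SLICs per director give 4 x 6 x 3 = 72 FC ports per fabric, which matches the table above.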

For more information about Converged Technology Extensions for:
- VMAX3 or VMAX All Flash storage, see the Dell VxBlock and Vblock Systems 740 Architecture Overview
- PowerMax storage, see the Dell VxBlock System 1000 Architecture Overview

Converged Technology Extension for XtremIO X1 storage

XtremIO X1 arrays contain one, two, four, six, or eight X-Bricks per cluster. Each X-Brick has four 8 Gbps FC ports. Two of the FC ports connect an X-Brick to each fabric. The following table shows the number of ports per fabric, for each X-Brick:

Number of X-Bricks | FC ports per fabric
1 | 2


2 | 4
4 | 8
6 | 12
8 | 16

XtremIO X1 requires IP connectivity to all storage controllers in addition to the XMS.

For more information about a Converged Technology Extension for XtremIO X1 storage, see the Dell VxBlock and Vblock Systems 540 Architecture Overview.

Converged Technology Extension for XtremIO X2 Storage

The XtremIO X2 Storage Technology Extension includes two models. The X2-S is for data that benefits from data deduplication/compression. The X2-R is for larger physical capacity requirements, such as databases.

Parameter | X2-R | X2-S
Maximum capacity per X-Brick | 138.2 TB | 28.8 TB
Maximum capacity per cluster | 552.8 TB | 115.2 TB
Drive size | 1.92 TB | 400 GB

XtremIO X2 includes:
- Top-load 72-drive DAE for higher density
- Scale-up and scale-out architecture
- HTML graphical user interface
- Inline deduplication and compression
- XtremIO Integrated Copy Data Management for snapshots
- Thin provisioning
- Data at Rest Encryption

Each X-Brick is composed of:
- One 2U Disk Array Enclosure (DAE) containing:
  - Up to 72 SSDs
  - Two redundant power supply units
  - Two redundant SAS interconnect modules
- Two 1U storage controllers. Each controller includes:
  - Two redundant power supply units (PSUs)
  - Two 16 Gbps FC ports
  - Two 56 Gbps InfiniBand ports
  - One 10 Gbps management port
  - NVRAM for power loss protection

The following table provides connectivity information:

Number of X-Bricks | 16 Gbps FC ports per fabric
1 | 2
2 | 4


3 | 6
4 | 8

XtremIO X2 requires a management connection to the storage controllers in the first X-Brick and an IP address for the XMS.

For more information about the XtremIO X2 storage, see the Dell VxBlock System 1000 Administration Guide and Dell VxBlock System 1000 Architecture Overview.

Scale-Out Architecture

An XtremIO storage system can include a single X-Brick or multiple X-Bricks. Both the X2-R and X2-S support up to 4 X-Bricks.

With clusters of two or more X-Bricks, XtremIO uses redundant 56 Gbps InfiniBand switches for ultra-low-latency back-end connectivity between storage controllers. The X2-S includes a 12-port InfiniBand switch, while the X2-R includes a 36-port InfiniBand switch for future expansion.

Scale-Up Architecture

More capacity can be added to an existing configuration without adding compute resources. The minimum number of drives in a DAE is 18. Drives can be added in packs of six until a total of 36 is reached. After 36, the next addition must be 18 drives, for a total of 54. Drives may then again be added in packs of six until the DAE is full at 72 drives.
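The allowed DAE populations therefore form a fixed set. A minimal shell sketch that encodes the rule above (the variable names are illustrative only):

   # Supported X2 DAE drive counts: start at 18, grow in packs of 6 up to 36,
   # then jump to 54, and grow again in packs of 6 up to the 72-drive maximum.
   valid_counts="18 24 30 36 54 60 66 72"
   proposed=60
   case " $valid_counts " in
       *" $proposed "*) echo "$proposed drives is a supported DAE population" ;;
       *)               echo "$proposed drives is not a supported DAE population" ;;
   esac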

Management

An XtremIO cluster is managed by an XtremIO Management Server (XMS). A single XMS can manage up to eight clusters. The XMS provides the GUI, CLI, and RESTful API interfaces. The XMS is deployed as a VM in the AMP.

The X2 XMS can be configured in two sizes: regular, for 16,000 volumes or fewer, and expanded, for more than 16,000 volumes. The expanded version has additional vCPU and memory resources.
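As a simple illustration of that sizing rule (the 16,000-volume threshold is the one stated above; the variable is illustrative):

   volumes=20000
   if [ "$volumes" -le 16000 ]; then
       echo "Regular XMS is sufficient"
   else
       echo "Expanded XMS (additional vCPU and memory) is required"
   fi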

For more information about a Converged Technology Extension for XtremIO X2 storage, see the Dell VxBlock System 1000 Administration Guide and Dell VxBlock System 1000 Architecture Overview.

Physical Requirements

Number of X-Bricks | Rack units required
1 | 5
2 | 13
3 | 17
4 | 22

Converged Technology Extension for VMAX3 storage with Symmetrix Remote Data Facility (SRDF)

SRDF is a native remote replication technology that can be bundled with a Converged Technology Extension for VMAX3 storage. Replication connections can be made via Ethernet or Fibre Channel (FC) protocols. These connections are made through ports on dedicated SLICs on the VMAX Directors. A minimum of two 16 Gb FC SLICs for host connections are required on each director of every Engine/V-Brick. The use of dedicated SLICs for SRDF limits the number of slots available for other connectivity options (such as eNAS or additional host connections).


Converged Technology Extension for Dell Unity storage

Each Converged Technology Extension has 2 to 10 FC ports per fabric, depending on the number of hosts. The following table shows the number of FC ports for each platform:

Converged Technology Extension storage array | Smallest configuration of FC ports per fabric | Largest configuration of FC ports per fabric
Dell Unity 300, 300F, and 350F | 2 | 10
Dell Unity 400, 400F, and 450F | 2 | 10
Dell Unity 500, 500F, and 550F | 2 | 10
Dell Unity 600, 600F, and 650F | 2 | 10

Storage connectivity types

Depending on the Converged System configuration, the Converged Technology Extension supports block and file storage.

Block storage: Follows the same configuration options as the VxBlock Systems 1000 and VxBlock and Vblock Systems 350 for block storage.

File storage: Follows the same configuration options as the VxBlock and Vblock Systems 350 for file storage.

Converged Technology Extension for Dell Unity XT storage

Each Converged Technology Extension has 2 to 10 FC ports per fabric, depending on the number of hosts. The following table shows the number of FC ports for each platform:

Converged Technology Extension storage array | Smallest configuration of FC ports per fabric | Largest configuration of FC ports per fabric
Dell Unity XT 380 and 380F | 2 | 10
Dell Unity XT 480 and 480F | 4 | 8
Dell Unity XT 680 and 680F | 4 | 8
Dell Unity XT 880 and 880F | 4 | 8

Storage connectivity types

Depending on the Converged System configuration, the Converged Technology Extension supports block and file storage.

Block storage: Follows the same configuration options as the VxBlock Systems 1000 for block storage.

File storage: Follows the same configuration options as the VxBlock Systems 1000 for file storage.


Administering the system

Administering a Converged Technology Extension

Converged Technology Extension administration procedures are described in the host Converged System documentation.

For more information, refer to the appropriate administration guide for your Converged System.

Server parameter settings with mixed arrays (VMware vSphere 7.0)

Use the advanced settings for optimal operation when deploying an XtremIO array with another array type.

With VMware vSphere 7.0, the number of outstanding IOs with competing worlds parameter is limited to the maximum queue depth of a device.

See the CLI commands below to configure the parameters in the following table:

Parameter values for a VxBlock System 1000 with each of the following array combinations: (1) Dell Unity or Unity XT and XtremIO X2; (2) XtremIO X2 and VMAX All Flash; (3) XtremIO X2 and PowerStore; (4) XtremIO X2 and PowerMax; (5) Dell Unity or Unity XT, XtremIO X2, PowerStore, and VMAX All Flash.

FC Adapter Policy IO Throttle Count: 256 (default) for all combinations

lun_queue_depth_per_path: 32 (default) for all combinations

Disk.SchedNumReqOutstanding: 32 (default) for all combinations

Disk.SchedQuantum: 8 (default) for all combinations

Disk.DiskMaxIOSize: 4 MB for all combinations, except 1 MB for combination (3), XtremIO X2 and PowerStore

XCOPY (primitive): 4 MB for combination (1); for combinations (2) through (5), XtremIO: 4 MB, set through /DataMover/MaxHWTransferSize

XCOPY (claim rule):
- Combination (1): Dell Unity or Unity XT - use a claim rule to set the XCOPY size to 16 MB
- Combination (2): VMAX3 - use a claim rule to set XCOPY to 240 MB
- Combination (3): N/A
- Combination (4): PowerMax - use a claim rule to set XCOPY to 240 MB
- Combination (5): PowerMax - use a claim rule to set XCOPY to 240 MB; Dell Unity or Unity XT - use a claim rule to set the XCOPY size to 16 MB

VMware vCenter Concurrent Clones: 8 (default) for all combinations


Configure advanced settings

To configure any of the advanced settings, perform the following steps:

1. Put the host into maintenance mode.
2. Connect to the ESXi Shell as root.
3. Execute the additional commands below.

Configure lun_queue_depth_per_path

1. To view the current setting, enter:

   esxcli system module parameters list -m nfnic

   The lun_queue_depth_per_path parameter has no value until it has been set explicitly using the command in the next step.

2. To set lun_queue_depth_per_path to the value from the table, enter:

   esxcli system module parameters set -p lun_queue_depth_per_path=128 -m nfnic

   The value 128 is used in the example above.

3. Reboot the host so that the new lun_queue_depth_per_path setting takes effect.

4. To confirm the new setting, enter:

   esxcli system module parameters list -m nfnic

Configure Disk.SchedNumReqOutstanding

1. To view the current setting on a specific device, enter:

   esxcli storage core device list -d naa.514f0c53e4e00005

   Perform the following step for each LUN in a specific array family. XtremIO uses different values from the other array families, as shown in the table.

2. To set Disk.SchedNumReqOutstanding to the value from the table, enter:

   esxcli storage core device set -d naa.514f0c53e4e00005 -O 128

   The value 128 is used in the example above.
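Where many XtremIO LUNs are present, a loop over the device list saves repetition. A minimal sketch, assuming the XtremIO volumes are the devices whose NAA identifiers start with naa.514f0c (as in the example device above); verify the prefix against your own environment before running it:

   # Set Disk.SchedNumReqOutstanding on every device whose identifier matches the prefix.
   for dev in $(esxcli storage core device list | grep '^naa.514f0c'); do
       esxcli storage core device set -d "$dev" -O 128    # 128 shown as an example; use the value from the table
   done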

Configure Disk.SchedQuantum

1. To view the current setting, enter:

   esxcli system settings advanced list -o /Disk/SchedQuantum

2. To set Disk.SchedQuantum to the value from the table, enter:

   esxcli system settings advanced set --int-value=64 -o /Disk/SchedQuantum

   The value 64 is used in the example above.

Configure Disk.DiskMaxIOSize

1. To view the current setting, enter:

   esxcli system settings advanced list -o /Disk/DiskMaxIOSize

2. To set Disk.DiskMaxIOSize to the value from the table, enter:

   esxcli system settings advanced set --int-value=4096 -o /Disk/DiskMaxIOSize

   The value 4096 is used in the example above.

Configure XCOPY

1. To view the current setting, enter:

   esxcli system settings advanced list -o /DataMover/MaxHWTransferSize

2. The default value is 256. If it is different, to set XCOPY (/DataMover/MaxHWTransferSize) to the value from the table, enter:

   esxcli system settings advanced set --int-value=256 -o /DataMover/MaxHWTransferSize

   The value 256 is used in the example above.
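After applying the settings (and rebooting where the nfnic parameter was changed), the values can be reviewed in one pass. A minimal verification sketch using the same commands as above (the device identifier is the example value from this section; substitute your own LUN):

   esxcli system module parameters list -m nfnic | grep lun_queue_depth_per_path
   esxcli storage core device list -d naa.514f0c53e4e00005 | grep -i outstanding
   esxcli system settings advanced list -o /Disk/SchedQuantum
   esxcli system settings advanced list -o /Disk/DiskMaxIOSize
   esxcli system settings advanced list -o /DataMover/MaxHWTransferSize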

Advanced Settings for Storage Technology Extensions with VMware vSphere 6.5 and later

Use the advanced settings for optimal operation when deploying an XtremIO array with another array type.

In VMware vSphere ESXi 6.5 and later versions, the number of outstanding IOs with competing worlds parameter is limited to the maximum queue depth of a device. This requires a change in DSNRO (Disk.SchedNumReqOutstanding) for vSphere 6.5 and later versions on XtremIO.

CAUTION: Changing the following advanced settings to anything other than the values in the tables may cause hosts to overstress other arrays connected to the ESXi host, resulting in performance degradation while communicating with them.

For XtremIO storage with VMware vSphere it is recommended to set the DSNRO parameter to the maximum value of 256.

The following table provides the values for the FC Adapter Policy IO Throttle Count parameter:

Converged Technology Extension | Value
Dell Unity and XtremIO | 256
Dell Unity XT and XtremIO | 256
PowerMax, VMAX, and XtremIO | 256
PowerMax, VMAX, and Unity | 256
PowerMax, Unity XT, and XtremIO | 256
XtremIO only | 1024

VMware vSphere 6.5 uses the FNIC driver, whereas VMware vSphere 6.7 and later versions use the NFNIC driver (see the check after the following table). The following table provides configuration information for both the fnic_max_qdepth parameter (VMware vSphere 6.5) and the lun_queue_depth_per_path parameter (VMware vSphere 6.7 and later versions):

Converged Technology Extension | Recommended value
Dell Unity and XtremIO | 32
Dell Unity XT and XtremIO | 32
PowerMax, VMAX, and XtremIO | 32
PowerMax, VMAX, and Unity | 256
PowerMax, VMAX, and Unity XT | 256
XtremIO only | 128
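To confirm which driver a host is running before applying a value, a quick check such as the following can be used (a sketch; the grep patterns simply filter the module listing):

   # Show whether the fnic (vSphere 6.5) or nfnic (vSphere 6.7 and later) module is present.
   esxcli system module list | grep -i fnic

   # Then list the matching queue-depth parameter for the module that is present:
   esxcli system module parameters list -m fnic | grep fnic_max_qdepth              # vSphere 6.5
   esxcli system module parameters list -m nfnic | grep lun_queue_depth_per_path    # vSphere 6.7 and later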

The following table provides the values for the Disk.SchedNumReqOutstanding (set per device or LUN) parameter:

Converged Technology Extension | Value
Dell Unity and XtremIO | 32
Dell Unity XT and XtremIO | 32
PowerMax, VMAX, and XtremIO | 32
PowerMax, VMAX, and Unity | 32
PowerMax, VMAX, Dell Unity XT, and XtremIO | 32
XtremIO only | 128

The following table provides the values for the Disk.SchedQuantum parameter:

Converged Technology Extension | Value
Dell Unity and XtremIO | 8
Dell Unity XT and XtremIO | 8
PowerMax, VMAX, and XtremIO | 8
PowerMax, VMAX, and Unity | 8
PowerMax, VMAX, Dell Unity XT, and XtremIO | 8
XtremIO only | 64

The following table provides the values for the Disk.DiskMaxIOSize parameter:

Converged Technology Extension | Value
Dell Unity and XtremIO | 4 MB
Dell Unity XT and XtremIO | 4 MB
PowerMax, VMAX, and XtremIO | 4 MB
PowerMax, VMAX, and Unity | 4 MB
PowerMax, VMAX, and Dell Unity XT | 4 MB
XtremIO only | 4 MB

The following table provides the values for the XCOPY parameter:

Converged Technology Extension | Value
Dell Unity and XtremIO | 4 MB
Dell Unity, Dell Unity XT, and XtremIO | 4 MB
PowerMax, VMAX, and XtremIO | PowerMax and VMAX3: 240 MB (claim rule); XtremIO X1: 256 KB (/DataMover/MaxHWTransferSize); XtremIO X2: 4 MB (/DataMover/MaxHWTransferSize)
PowerMax, VMAX, and Unity | PowerMax and VMAX3: 240 MB (claim rule); Unity: 16 MB
XtremIO only | X1 clusters: 256 KB; X2 clusters: 4 MB

For a configuration that includes a mixture of X1 and X2, use the 4 MB value.

The following table provides the values for the vCenter Concurrent Clones parameter:

Converged Technology Extension | Value
Dell Unity and XtremIO | 8
Dell Unity XT and XtremIO | 8
PowerMax, VMAX, and XtremIO | 8
PowerMax, VMAX, and Unity | 8
PowerMax, VMAX, Dell Unity XT, and XtremIO | 8
XtremIO only | 8 per X-Brick

Configure advanced settings

To configure any of the advanced settings, perform the following steps:

1. Put the host into maintenance mode.
2. Connect to the ESXi Shell as root.
3. Execute the additional commands below.

Configure lun_queue_depth_per_path

1. To view the current setting, enter:

   esxcli system module parameters list -m nfnic

   The lun_queue_depth_per_path parameter has no value until it has been set explicitly using the command in the next step.

2. To set lun_queue_depth_per_path to the value from the table, enter:

   esxcli system module parameters set -p lun_queue_depth_per_path=128 -m nfnic

   The value 128 is used in the example above.

3. Reboot the host so that the new lun_queue_depth_per_path setting takes effect.

4. To confirm the new setting, enter:

   esxcli system module parameters list -m nfnic

Configure Disk.SchedNumReqOutstanding

1. To view the current setting on a specific device, enter:

   esxcli storage core device list -d naa.514f0c53e4e00005

   Perform the following step for each LUN in a specific array family. XtremIO uses different values from the other array families, as shown in the table.

2. To set Disk.SchedNumReqOutstanding to the value from the table, enter:

   esxcli storage core device set -d naa.514f0c53e4e00005 -O 128

   The value 128 is used in the example above.

Configure Disk.SchedQuantum

1. To view the current setting, enter:

   esxcli system settings advanced list -o /Disk/SchedQuantum

2. To set Disk.SchedQuantum to the value from the table, enter:

   esxcli system settings advanced set --int-value=64 -o /Disk/SchedQuantum

   The value 64 is used in the example above.

Configure Disk.DiskMaxIOSize

1. To view the current setting, enter:

   esxcli system settings advanced list -o /Disk/DiskMaxIOSize

2. To set Disk.DiskMaxIOSize to the value from the table, enter:

   esxcli system settings advanced set --int-value=4096 -o /Disk/DiskMaxIOSize

   The value 4096 is used in the example above.

Configure XCOPY

1. To view the current setting, enter:

   esxcli system settings advanced list -o /DataMover/MaxHWTransferSize

2. The default value is 256. If it is different, to set XCOPY (/DataMover/MaxHWTransferSize) to the value from the table, enter:

   esxcli system settings advanced set --int-value=256 -o /DataMover/MaxHWTransferSize

   The value 256 is used in the example above.
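The claim-rule based XCOPY values in the tables above (240 MB for PowerMax and VMAX3, 16 MB for Dell Unity) are applied through VAAI claim rules rather than /DataMover/MaxHWTransferSize. Before changing them, the existing VAAI claim rules can be reviewed; a minimal sketch (listing only, it does not modify anything):

   # Show the currently defined VAAI claim rules on the host.
   esxcli storage core claimrule list --claimrule-class=VAAI

Consult the administration guide for your host Converged System for the specific claim rules to add for your arrays.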

VMware paravirtual SCSI controllers

Paravirtual SCSI controllers

VMware vSphere 6.x selects the virtual SCSI controller that is recommended for the operating system installed on the VM. To increase I/O performance and reduce CPU use on the VM, change to the VMware paravirtual SCSI controller. To optimize use of VMs with XtremIO, configure VMs with VMware paravirtual SCSI controllers.

See About VMware Paravirtual SCSI Controllers in the vSphere Virtual Machine Administration Guide for more information about the VMware paravirtual adapter.


Sample configuration

Sample elevation for Converged Technology Extension for XtremIO storage

The following sample Converged Technology Extension for XtremIO storage cabinet elevation varies based on the specific configuration.

Cabinet 1 (sample cabinet elevation diagram)

This elevation is provided for sample purposes only. For specifications on a specific Converged Technology Extension design, consult your Dell Technologies Sales Engineer.


Additional references

Converged Technology Extension storage components

Storage component information and links to documentation are provided.

For more information, see the manufacturer documentation.

VMAX3: Built for reliability, availability, and scalability; delivers infrastructure services in the next-generation data center.
https://www.dell.com/support/home/en-us/product-support/product/vmax3-series/overview (an authorized account is required)

VMAX All Flash: Provides high-density flash storage.
https://www.dell.com/support/home/en-us/product-support/product/vmax-all-flash/overview (an authorized account is required)

PowerMax: Delivers technology leadership in hardware and software: the fastest NVMe enterprise array, future-proofed for end-to-end NVMe, with a built-in machine learning engine.
https://www.dell.com/support/home/en-us/product-support/product/powermax/overview (an authorized account is required)

XtremIO X1: Delivers industry-leading performance, scale, and efficiency for hybrid cloud environments.
https://www.corporatearmor.com/documents/Technical_Intro_to_EMC_XtremIO_Array_Whitepaper.pdf (an authorized account is required)

XtremIO X2: Delivers high levels of performance and scalability and new levels of ease of use to SAN storage, while offering advanced features.
https://www.delltechnologies.com/asset/en-us/products/storage/industry-market/h16444-introduction-xtremio-x2-storage-array-wp.pdf (an authorized account is required)

Dell Unity Hybrid and All-Flash arrays: Virtually provisioned, flash optimized, small form factor


To be able to print Dell VBlock 540 Converged Infrastructure Extensions for Storage Product Guide, simply download the document to your computer. Once downloaded, open the PDF file and print the Dell VBlock 540 Converged Infrastructure Extensions for Storage Product Guide as you would any other document. This can usually be achieved by clicking on “File” and then “Print” from the menu bar.