Dell EMC PowerFlex Appliance with PowerFlex 4.x Architecture Overview

August 2022 Rev. 1.0

Notes, cautions, and warnings

NOTE: A NOTE indicates important information that helps you make better use of your product.

CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid the problem.

WARNING: A WARNING indicates a potential for property damage, personal injury, or death.

© 2022 Dell Inc. or its subsidiaries. All rights reserved. Dell Technologies, Dell, and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be trademarks of their respective owners.

Contents

Chapter 1: Introduction
    Overview
Chapter 2: Revision history
Chapter 3: Architecture considerations
    System components
    Key architecture considerations
    Network architecture
        Access and aggregation architecture
        Leaf-spine architecture
    PowerFlex storage-only deployment
    PowerFlex two-layer deployment
    PowerFlex hyperconverged deployment
    VMware NSX-T Edge node deployment
Chapter 4: PowerFlex software-defined storage architecture
    PowerFlex components
    Storage schemas
    PowerFlex features
Chapter 5: System hardware
    Storage-providing nodes
    Storage-consuming nodes
    Management controller
Chapter 6: PowerFlex node networking
    Node network requirements
Chapter 7: Management control plane
Chapter 8: PowerFlex file services
    PowerFlex file architecture
Chapter 9: Security considerations
Chapter 10: Additional references


Introduction

The PowerFlex Appliance Architecture Overview describes the high-level architecture and key hardware and software components of the PowerFlex appliance.

The target audience for this document includes customers, sales engineers, field consultants, and advanced services specialists who want to deploy a high-performance, scalable, and flexible infrastructure using PowerFlex appliance.

PowerFlex appliance architecture is based on Dell PowerEdge servers; Cisco Nexus switches, Dell PowerSwitch switches, or customer-provided switches; and PowerFlex software-defined storage. PowerFlex Manager provides the management and orchestration functionality for the PowerFlex appliance. PowerFlex appliance is an engineered system with optional full network automation (when using supported Cisco Nexus or Dell PowerSwitch switches) or partial network automation (when using customer-provided switches). PowerFlex appliance serves as a highly scalable, high-performance hyperconverged infrastructure building block for modern and cloud-native data center workloads.

Overview

PowerFlex appliance is an engineered system designed to meet modern data center needs. You have the flexibility to deploy two-layer (separate compute-only and storage-only nodes), fully converged, storage-only, or hybrid combinations. PowerFlex allows for block and file storage within the same system.

PowerFlex appliance is a modular software-defined compute and storage platform that enables linear performance with scale and flexible deployment options for next-generation cloud applications and mixed workloads. The scale-out architecture of the PowerFlex appliance enables you to add PowerFlex nodes with various CPU, memory, and drive configuration options to meet the business need. PowerFlex appliance is designed for deployments involving large numbers of virtualized and bare-metal workloads. PowerFlex appliance has built-in N+1 redundancy at the component level to deliver high availability.

PowerFlex appliance has many advantages:
- Engineered system with automated end-to-end life cycle management using PowerFlex Manager
- Choice of the following network topologies to meet scale and performance business needs:
  - Access and aggregation
  - Leaf-spine
- Choice of network hardware: Cisco Nexus switches, Dell PowerSwitch switches, or customer-preferred switches
- Multiple PowerFlex node types and node configuration options to meet compute and storage needs:
  - PowerFlex hyperconverged nodes
  - PowerFlex storage-only nodes
  - PowerFlex compute-only nodes
  - PowerFlex file nodes
- Flexible compute and storage resource deployment options:
  - Hyperconverged: compute and storage in the same chassis, allowing proportional scale
  - Two-layer: compute and storage deployed in separate chassis, allowing independent scale of compute and storage resources
  - Storage-only: only storage resources are part of the PowerFlex appliance; compute resides outside the system boundaries
  - Hybrid: combination of two or more of the above deployment options
- Highly available management and orchestration (M&O) control plane that runs on a dedicated cluster of three or more physical nodes
- Cost-effective management and orchestration that runs on a single physical PowerFlex management node
- Use of your existing servers for management and orchestration
- Multiple hypervisors (VMware ESXi, Red Hat Virtualization, or Hyper-V) and bare-metal options in the same cluster
- Software and hardware-based data-at-rest encryption (D@RE) options:
  - Dell CloudLink
  - Optional SEDs
- Supports 25 GbE or 100 GbE port bandwidth for backend connectivity


- Dual network environment using your existing software-defined network (SDN), such as Cisco ACI (optional) or VMware NSX-T
- Supports both block and file storage
- Supports native asynchronous replication between sites
- Built-in component-level redundancy to ensure data availability
- Self-healing architecture with integrated call-home feature
- Storage-only option allows external compute resources to access data in the PowerFlex appliance
- PowerFlex nodes support SSD and NVMe drive technologies


Revision history

Date          Document revision   Description of changes
August 2022   1.0                 Initial release


Architecture considerations

PowerFlex appliance is a modular hyperconverged platform that enables extreme scalability and flexibility for next-generation cloud applications and mixed workloads.

System components

PowerFlex appliance contains compute, network, software-defined storage, virtualization, and a management and orchestration (M&O) control plane.

The following table lists the PowerFlex appliance software components:

PowerFlex Manager — Management and orchestration
    PowerFlex Manager is the management layer of the PowerFlex system. It provides the user interface for both the block storage (PowerFlex software) and file storage services, as well as lifecycle management of the hardware components in the PowerFlex appliance.

PowerFlex — Software-defined storage
    PowerFlex software provides block services to the PowerFlex system and is the software-defined storage layer that forms the core of the offer.

PowerFlex file — Software-defined file storage
    Network-attached storage enables data access through files rather than block devices. This is the software that provides file services to the PowerFlex system.

VMware vSphere — Virtualization
    VMware ESXi: The default supported hypervisor for PowerFlex compute-only and PowerFlex hyperconverged nodes.
    VMware vCenter Server Appliance (VCSA): The VCSA provides management services to the VMware compute environment, including both compute-only and hyperconverged nodes in the PowerFlex system. For PowerFlex appliance deployments, it also manages the virtual machines of the PowerFlex management controller.

Secure Connect Gateway — Call home
    Secure Connect Gateway is an enterprise monitoring technology that monitors your devices and proactively detects hardware issues that may occur. It automates support request creation for issues that are detected on the monitored devices.
    NOTE: Secure Connect Gateway automatically collects the telemetry that is required to troubleshoot the issue that is detected. The collected telemetry helps technical support provide a proactive and personalized support experience.

CloudLink (optional) — Software encryption and key management
    CloudLink is an optional component of the system that provides key management for self-encrypting drives, and software data-at-rest encryption for non-self-encrypting drives.

The following table describes the PowerFlex appliance key hardware components:

Compute (Dell)
    PowerFlex nodes: Dell PowerEdge R650/R750/R6525 servers and/or Dell PowerEdge R640/R740/R840 servers
        PowerFlex hyperconverged nodes
        PowerFlex compute-only nodes
        PowerFlex storage-only nodes
        PowerFlex file nodes
    4 x 25 GbE or 4 x 100 GbE NIC options

Storage (Dell)
    PowerFlex nodes: Dell PowerEdge servers (R650/R750/R6525) and/or Dell PowerEdge servers (R640/R740/R840) with PowerFlex software-defined storage

Network
    (PowerFlex Manager supports full network automation for the listed switches. Customer-preferred switches are supported with partial network automation.)
    Cisco preferred validated switch options:
        Management switch: Cisco Nexus 92348GC-X
        Aggregation switches: Cisco Nexus 9336C-FX2, Cisco Nexus 9364C-GX
        Access switches: Cisco Nexus 93240YC-FX2, Cisco Nexus 93180YC-FX
        Leaf switches: Cisco Nexus 93240YC-FX2, Cisco Nexus 9336C-FX2, Cisco Nexus 9364C-GX
        Border-leaf switch: Cisco Nexus 9336C-FX2
    Dell preferred validated switch options:
        Management switch: Dell PowerSwitch S4148T-ON
        Access switches: Dell PowerSwitch S5248F-ON

Management control plane (Dell)
    PowerFlex management nodes: Dell PowerEdge R650 servers with a custom configuration

Key architecture considerations

Flexible network architecture is a key value proposition of PowerFlex appliance. In addition to switch vendor choice (Cisco Nexus, Dell PowerSwitch, or customer-preferred switches), PowerFlex appliance architecture offers the following network topologies to meet your business needs:
- Access and aggregation
- Leaf-spine

PowerFlex appliance also offers the ability to support both hardware enabled software-defined networking (Cisco Application Centric Infrastructure) and native software-defined networking (VMware NSX-T).

PowerFlex appliance offers four node configuration types to meet performance, scale, and storage and compute capacity business requirements.

- PowerFlex hyperconverged nodes
- PowerFlex compute-only nodes
- PowerFlex storage-only nodes
- PowerFlex file nodes

You can deploy these nodes using one or more of the following resource deployment options:
- Hyperconverged deployment
- Storage-only deployment
- Two-layer deployment with disaggregated compute and storage
- Hybrid deployment as a combination of the above
- PowerFlex file deployment


PowerFlex appliance can be deployed with either full network automation (FNA) or partial network automation (PNA). With full network automation, PowerFlex Manager configures the node-facing ports on the network switches, provided they are Dell-supported switches. Partial network automation is used when the network switches or configuration mode are not supported by Dell; in this case, you are responsible for configuring the node-facing ports along with the rest of your network. The PowerFlex nodes are fully managed by PowerFlex Manager in either mode.

Network architecture

PowerFlex appliance supports two network architectures that meet different performance and scaling requirements.

The network architectures are:
- Access and aggregation
- Leaf-spine

Access and aggregation architecture

The following figure shows the logical layout of the PowerFlex appliance integrated with your access and aggregation network architecture:

NOTE: There is an additional 1 Gb link from the PowerFlex controller nodes to the out-of-band management switch.


NOTE: A PowerFlex management controller is optional in a PowerFlex appliance.

Leaf-spine architecture

The following diagram shows the logical layout of the PowerFlex appliance integrated with your leaf-spine network architecture:

NOTE: There is an additional 1 Gb link from the PowerFlex controller nodes to the out-of-band management switch.


NOTE: A PowerFlex management controller is optional in a PowerFlex appliance.

PowerFlex storage-only deployment

A PowerFlex appliance storage-only deployment has a base configuration that is a minimum set of PowerFlex storage-only nodes and fixed network resources.

Within the base configuration, you can customize the following hardware aspects:

Network:
- One customer-provided management switch
- One pair of access or leaf switches (Dell PowerSwitch switches or customer-provided switches)
- A pair of border-leaf switches (NOTE: only in a leaf-spine configuration)

Storage:
- At least four PowerFlex storage-only nodes are required. However, Dell Technologies recommends using at least six nodes to build a PowerFlex storage pool.
- If storage compression is active, a minimum of two NVDIMM components per PowerFlex node are required. A recommendation is made according to the system sizing calculation.

Management (optional):
- Standalone or multi-node PowerFlex management controller with high availability, or customer-provided management infrastructure

PowerFlex two-layer deployment

A PowerFlex appliance two-layer deployment has a base configuration that is similar to a PowerFlex storage-only node deployment, but adds a minimum set of PowerFlex compute-only nodes. The minimum set of PowerFlex storage-only nodes and fixed network resources are also required.

Within the base configuration, you can customize the following hardware aspects:

Compute:
- At least three PowerFlex compute-only nodes

Network:
- One customer-provided management switch
- One pair of access or leaf switches (Dell PowerSwitch switches or customer-provided switches)
- A pair of border-leaf switches (NOTE: only in a leaf-spine configuration)

Storage:
- At least four PowerFlex storage-only nodes are required. However, Dell Technologies recommends using at least six nodes to build a PowerFlex storage pool.
- Software-defined SAN storage (uses local disks to build a PowerFlex storage pool)
- If storage compression is active, a minimum of two NVDIMM components per PowerFlex node are required. A recommendation is made according to the system sizing calculation.

Management (optional):
- Standalone or multi-node PowerFlex management controller with high availability, or customer-provided management infrastructure

PowerFlex hyperconverged deployment

A PowerFlex appliance hyperconverged deployment has a base configuration that is a minimum set of hyperconverged components and fixed network resources.

Within the base configuration, you can customize the following hardware aspects:

Compute and storage:
- A minimum of four PowerFlex hyperconverged nodes are required; however, six is the recommended minimum. PowerFlex hyperconverged nodes provide both storage and compute resources to the system.
- If storage compression is active, a minimum of two NVDIMM components per PowerFlex node are required. A recommendation is made according to the system sizing calculation.

Network:
- One customer-provided management switch
- One pair of access or leaf switches (Dell PowerSwitch switches or customer-provided switches)
- A pair of border-leaf switches (NOTE: only in a leaf-spine configuration)

Management (optional):
- Standalone or multi-node PowerFlex management controller with high availability, or customer-provided management infrastructure

VMware NSX-T Edge node deployment

The NSX-T ready deployment is a variation of the standard deployment that includes PowerFlex hyperconverged or compute-only nodes.

This includes an additional NSX-T Edge node cluster deployment.

Compute:
- VMware NSX-T transport is configured on PowerFlex compute-only nodes or PowerFlex hyperconverged nodes.

Network:
- Supports either a traditional Ethernet architecture (Cisco Nexus or Dell PowerSwitch) or a leaf-spine topology (Cisco Nexus).
- By default, the NSX-T Edge physical nodes connect directly to either the aggregation or border-leaf switches, depending on the network topology. If there is a limitation because of port capacity or cable distance, the management and transport connections (not Edge/BGP uplinks) are relocated from the aggregation or border-leaf switches to the access or leaf switches.

Storage:
- NSX-T Edge nodes can run on either local RAID 1+0 storage (recommended) or a VMware vSAN storage solution.
- NSX-T Managers run on the general shared datastores provided by PowerFlex within the PowerFlex management controller.
- PowerFlex storage-only nodes are not supported as NSX-T transport nodes.

Management:
- Four PowerFlex controller nodes with high availability. A fourth controller node is included to host NSX-T Manager.

NSX-T Edge:
- A minimum of two NSX-T Edge nodes if using the local RAID storage option; a minimum of four NSX-T Edge nodes if using the vSAN storage option.
- Each VMware NSX-T Edge node uses three dual-port 25 Gb cards to connect to either the border-leaf or aggregation switches. At minimum, four of the six NIC interfaces, used for transport and external edge traffic, must be configured as individual trunks. The other two NIC interfaces, used for VMware ESXi management or vSAN traffic, are configured as a vPC with Link Aggregation Control Protocol (LACP) enabled.

NOTE: Do not deploy non-NSX-T Edge workloads in the NSX-T Edge VMware vSphere cluster.


PowerFlex software-defined storage architecture

PowerFlex applies the principles of server virtualization to standard x86 servers with local disks, creating high-performance, sharable pools of block storage. PowerFlex abstracts the local storage contained within each server.

PowerFlex pools all the storage resources together. In the following figure, there is a global pool of 1 million IOPS and 100 terabytes, instead of 100K IOPS and 10 terabytes available in each server. Applications are not constrained by what is within the local server; these resources are shared across the entire cluster.
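The pooling arithmetic above can be sketched in a few lines (a conceptual illustration, not PowerFlex code; the ten-server figures are the document's example numbers):

```python
# Ten servers, each contributing 100K IOPS and 10 TB, form one global pool.
servers = [{"iops": 100_000, "capacity_tb": 10} for _ in range(10)]

pool_iops = sum(s["iops"] for s in servers)
pool_capacity_tb = sum(s["capacity_tb"] for s in servers)

# Any application can draw on the aggregate, not just its local server's share.
print(f"Global pool: {pool_iops:,} IOPS, {pool_capacity_tb} TB")
```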

PowerFlex automatically maintains balance across all resources, supporting application needs. Storage and/or compute can be added dynamically with no downtime or impact to applications because PowerFlex seamlessly balances the available resources. This enables data center operation in the most efficient and cost-effective way possible, regardless of organization size.

PowerFlex components

Storage data client (SDC)

The storage data client (SDC) is installed on PowerFlex nodes that consume the system's storage volumes. The volume data and copies are spread evenly across the nodes and drives that comprise the pool. The storage data client communicates over multiple pathways to all the nodes. In this multi-point, peer-to-peer fashion, it reads and writes data to and from all points simultaneously, eliminating bottlenecks and quickly routing around failed paths. The storage data client:
- Provides front-end volume access to applications and file systems
- Is installed on servers consuming storage
- Maintains peer-to-peer connections to every storage data server managing a pool of storage
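The multi-point access pattern can be illustrated with a small sketch (hypothetical node names and a simple round-robin placement stand in for PowerFlex's real distribution logic):

```python
# Conceptual model: a volume is split into chunks, each placed on one of the
# storage data servers, so the client reads and writes to all peers at once
# rather than through a single target.
SDS_NODES = ["sds-1", "sds-2", "sds-3", "sds-4"]  # hypothetical SDS peers

def sds_for_chunk(chunk_index: int, nodes=SDS_NODES) -> str:
    """Place chunk i on a node; real systems use hashing/metadata maps."""
    return nodes[chunk_index % len(nodes)]

# An 8-chunk volume spreads evenly across the four peers.
placement = [sds_for_chunk(i) for i in range(8)]
print(placement)
# Every SDS holds part of the volume, so no single path is a bottleneck.
assert set(placement) == set(SDS_NODES)
```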

Storage data server (SDS)

The storage data server (SDS) is installed on every PowerFlex node that contributes its storage to the system. It owns the contributing drives and, together with the other storage data servers, forms a protected mesh from which storage pools are created. Volumes carved out of the pool are presented to the storage data clients for consumption. The storage data server:
- Abstracts local storage, maintains storage pools, and presents volumes to the storage data clients
- Is installed on servers contributing local storage to the cluster


Metadata manager (MDM)

The metadata manager software installs on three or five PowerFlex nodes and forms a cluster that supervises the operations of the entire cluster and its parts, while staying outside of the data path itself. The metadata manager hands out instructions to each storage data client and storage data server about its role and how to perform it, giving each component the information it needs. The metadata manager:
- Oversees storage cluster configuration, monitoring, rebalances, and rebuilds
- Is a highly available, independent cluster installed on three or five different PowerFlex nodes
- Sits outside the data path

Storage data replicator (SDR)

The storage data replicator proxies the I/O of replicated volumes between the storage data client and the storage data servers where data is ultimately stored. It splits writes, sending one copy to the destination storage data servers and another to a replication journal volume. Sitting between the storage data server and storage data client, from the point of view of the storage data server, the storage data replicator appears as if it were a storage data client sending writes (from a networking perspective, however, the storage data replicator to storage data server traffic is still backend/storage traffic). Conversely, to the storage data client, the storage data replicator appears as if it were a storage data server to which writes can be sent. The storage data replicator only mediates the flow of traffic for replicated volumes. Non-replicated volume I/Os flow, as usual, between storage data clients and storage data servers directly. As always, the metadata manager instructs each of the storage data clients where to read and write their data. The volume address space mapping, presented to the storage data client by the metadata manager, determines where the volume's data is sent. The storage data client is not aware of whether the write destination is a storage data server or a storage data replicator; the storage data client is not aware of replication.
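The write-splitting behavior described above can be sketched as follows (an illustration of the data flow only, with invented class names, not PowerFlex internals):

```python
class StorageDataServer:
    """Stands in for an SDS holding the destination copy."""
    def __init__(self):
        self.blocks = {}

    def write(self, offset, data):
        self.blocks[offset] = data

class StorageDataReplicator:
    """Looks like an SDS to the client and like a client to the SDS."""
    def __init__(self, destination_sds, journal):
        self.sds = destination_sds
        self.journal = journal  # replication journal volume

    def write(self, offset, data):
        self.sds.write(offset, data)         # copy 1: destination SDS
        self.journal.append((offset, data))  # copy 2: journal, later shipped to the peer system

sds, journal = StorageDataServer(), []
sdr = StorageDataReplicator(sds, journal)

# The client simply issues a write; it cannot tell an SDR from an SDS.
sdr.write(0, b"app data")
print(sds.blocks, journal)
```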

Storage data target (SDT)

The storage data target (SDT) is installed with the storage data server to connect compute/application clients to storage using NVMe over TCP. NVMe over TCP front-end capability allows you to use an agentless solution (no storage data client), providing more flexible options for operating systems where the storage data client is not supported and reducing the operational complexity of deploying and maintaining a host agent.


Storage schemas

Protection domains

A protection domain (PD) is a group of nodes or storage data servers that provides data isolation, security, and performance benefits. A node participates in only one protection domain at a time. Only nodes in the same protection domain can affect each other; nodes outside the protection domain are isolated. Protection domains enable secure multi-tenancy, since data does not mingle across them. You can create different protection domains for different node types with unequal performance profiles.

Storage pools

Storage pools are a subset of physical storage devices in a protection domain. Each storage device belongs to one (and only one) storage pool. The best practice is to use the same type of storage device (HDD, SSD, or NVMe) within a storage pool to ensure that volumes are distributed over the same type of storage within the protection domain.

PowerFlex supports two types of storage pools, and a system can support both fine granularity (FG) and medium granularity (MG) pools on the same storage data server nodes. Volumes can be non-disruptively migrated between the two layouts. Within an FG pool, you can enable or disable compression on a per-volume basis.
- Medium granularity: Volumes are divided into 1 MB allocation units, distributed, and replicated across all disks contributing to a pool. MG storage pools support either thick or thin-provisioned volumes, and no attempt is made to reduce the size of user data written to disk (except with all-zero data). MG storage pools have higher storage access performance than FG storage pools but use more disk space.
- Fine granularity: A space-efficient layout with an allocation unit of just 4 KB and a physical data placement scheme based on a log-structured array (LSA) architecture. The FG layout requires both flash media (SSD or NVMe) and NVDIMM to create an FG storage pool. The FG layout is thin-provisioned and zero-padded by nature, and enables PowerFlex to support inline compression, more efficient snapshots, and persistent checksums. FG storage pools use less disk space than MG storage pools but have slightly lower storage access performance.
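The difference between the two allocation units can be made concrete with simple arithmetic (an illustration only, ignoring metadata, journaling, and compression):

```python
MG_UNIT = 1024 * 1024  # medium granularity: 1 MB allocation unit
FG_UNIT = 4 * 1024     # fine granularity: 4 KB allocation unit

def allocated_bytes(write_size: int, unit: int) -> int:
    """Pool space touched by a write, rounded up to whole allocation units."""
    units = -(-write_size // unit)  # ceiling division
    return units * unit

# An 8 KB update touches a full 1 MB unit in an MG pool,
# but only two 4 KB granules in an FG pool.
print(allocated_bytes(8 * 1024, MG_UNIT))  # 1048576
print(allocated_bytes(8 * 1024, FG_UNIT))  # 8192
```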

Fault sets

A fault set is a logical entity that contains a group of storage data servers within a protection domain that have a higher chance of going down together; for example, if they are all powered in the same rack. By grouping them into a fault set, PowerFlex mirrors data for a fault set on storage data servers that are outside the fault set. Thus, availability is assured even if all the servers within one fault set fail simultaneously.
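The placement rule above can be sketched as a simple constraint check (hypothetical topology and node names, not Dell's placement code):

```python
# The mirror of any data must land on an SDS outside the primary copy's
# fault set, so losing a whole fault set (e.g. one rack) never loses both copies.
sds_fault_set = {
    "sds-1": "rack-A", "sds-2": "rack-A",
    "sds-3": "rack-B", "sds-4": "rack-B",
}

def mirror_candidates(primary: str, topology=sds_fault_set) -> list:
    """SDS nodes eligible to hold the mirror of data stored on `primary`."""
    return [node for node, fault_set in topology.items()
            if fault_set != topology[primary]]

# Data whose primary copy is in rack-A may only be mirrored in rack-B.
print(mirror_candidates("sds-1"))  # ['sds-3', 'sds-4']
```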


PowerFlex features

PowerFlex appliance is a scale-out solution that enables you to add PowerFlex appliance nodes with various CPU, memory, and drive options.

PowerFlex appliance is designed for deployments involving large numbers of virtualized and bare metal workloads.

Replication

The following figure depicts where the storage data replicator (SDR) fits into the overall PowerFlex replication architecture:

The storage data replicator proxies the I/O of replicated volumes between the storage data client and the storage data servers where data is ultimately stored. Write I/Os are split, sending one copy on to the destination storage data servers and another to a replication journal volume. Sitting between the storage data server and storage data client, from the point of view of the storage data server, the storage data replicator appears as if it were a storage data client sending writes. (From a networking perspective, however, the storage data replicator to storage data server traffic is still backend/storage traffic.) Conversely, to the storage data client, the storage data replicator appears as if it were a storage data server to which writes can be sent. The storage data replicator only mediates the flow of traffic for replicated volumes (in fact, only actively replicating volumes). Non-replicated volume I/Os flow, as usual, between storage data clients and storage data servers directly. As always, the metadata manager instructs each of the storage data clients where to read and write their data. The volume address space mapping, presented to the storage data client by the metadata manager, determines where the volume's data is sent. The storage data client is not aware of whether the write destination is a storage data server or a storage data replicator; the storage data client is not aware of replication.

Compression

The fine granularity (FG) layout requires both flash media (SSD or NVMe) and NVDIMM to create an FG pool. The FG layout is thin-provisioned and zero-padded by nature, and enables PowerFlex to support inline compression, more efficient snapshots, and persistent checksums. FG pools support only thin-provisioned, zero-padded volumes, and whenever possible the actual size of user data stored on disk is reduced. You should expect an average compression ratio of at least 2:1. Because of the 4 KB allocation, FG pools drastically reduce snapshot overhead, because new writes and updates to the volume's data do not each require a 1 MB read/copy action. All data written to an FG pool receives a checksum and is tested for compressibility. The checksum for every write is stored with the metadata and adds an additional layer of data integrity to the system.

PowerFlex offers a distinctive, competitive advantage with the ability to enable compression per-volume versus globally, and the ability to choose the best layout for each individual workload. The MG layout is still the best choice for workloads with high performance requirements. Fine granularity pools offer space-saving services and additional data integrity. Within an FG pool, enabling compression or making heavy use of snapshots has almost zero impact on the performance of the volumes.

Snapshots

Snapshots are a block image in the form of a storage volume or logical unit number (LUN) used to instantaneously capture the state of a volume at a specific point in time. Snapshots can be initiated manually or by automated snapshot policies. Snapshots in fine granularity storage pools are more space efficient and perform better than medium granularity snapshots. PowerFlex supports snapshot policies based on a time-retention mechanism: you can define up to 60 policy-managed snapshots per root volume, and a snapshot policy defines a cadence and the number of snapshots to keep at each level.
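To make the cadence-and-levels idea concrete, here is a small sketch of multi-level retention. The exact promotion rule PowerFlex uses is not specified here, so the rule below is an assumption for illustration: level 0 keeps the newest snapshots, and each higher level keeps snapshots spaced further apart in time.

```python
def retained(num_taken: int, counts: list) -> list:
    """Indices (0 = oldest) of snapshots a policy retains, given how many
    snapshots have been taken so far and how many to keep at each level."""
    assert sum(counts) <= 60, "at most 60 policy-managed snapshots per root volume"
    keep, stride = set(), 1
    for n in counts:
        # Snapshots at this level are spaced `stride` cadence intervals
        # apart, counted back from the newest snapshot.
        level = [i for i in range(num_taken - 1, -1, -1)
                 if (num_taken - 1 - i) % stride == 0]
        keep.update(level[:n])
        stride *= n
    return sorted(keep)
```

With an hourly cadence and `counts=[24, 7]`, for example, the policy keeps the newest 24 hourly snapshots plus up to 7 older ones spaced a day apart.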

Volume migration

Migration is non-disruptive to ongoing I/O and is supported across storage pools within the same protection domain or across protection domains. Migrating a volume from one storage pool moves the volume and all its snapshots together (known as VTree granularity). Volume migration is useful in several cases:
- Migrating volumes between different storage performance tiers
- Migrating volumes to a different storage pool or protection domain to meet multi-tenancy needs
- Extracting volumes from a deprecated storage pool or protection domain to shrink a system
- Changing a volume personality between thick and thin provisioning, or between fine granularity and medium granularity

System hardware

This chapter describes the types of PowerFlex nodes and how they are used.

There are four types of PowerFlex nodes: storage-providing nodes, storage-consuming nodes, management nodes, and VMware NSX-T Edge nodes. PowerFlex hyperconverged nodes both provide and consume storage. The table below shows the combinations of storage provisioning and consumption that PowerFlex allows.

Storage consumed by \ provided by | Hyperconverged | Hyperconverged and storage-only | Storage-only
Compute-only                      | N/A            | Hybrid                          | Two-layer
Hyperconverged and compute-only   | Hybrid         | Hybrid                          | Hybrid
Hyperconverged                    | Hyperconverged | Hybrid                          | N/A
External                          | N/A            | N/A                             | Storage-only

Storage-providing nodes

PowerFlex hyperconverged nodes

PowerFlex hyperconverged nodes are based on Dell PowerEdge R650, R750, R640, R740xd, and R840 servers. PowerFlex is deployed on these nodes in a true hyperconverged form where the PowerFlex SDC and SDS software components are installed on the same PowerFlex node. PowerFlex hyperconverged nodes both provide and consume storage.

PowerFlex storage-only nodes

PowerFlex storage-only nodes are based on Dell PowerEdge R650, R750, R640, R740xd, and R840 servers. PowerFlex storage-only nodes are designed to provide storage capacity, but no compute power, to the compute cluster. Only the SDS component of PowerFlex runs on PowerFlex storage-only nodes. PowerFlex storage-only nodes run an embedded operating system and do not require a VMware ESXi license. PowerFlex storage-only nodes add storage capacity to a PowerFlex cluster without adding compute power.

Storage-consuming nodes

PowerFlex hyperconverged nodes

PowerFlex hyperconverged nodes are based on Dell PowerEdge R650, R750, R640, R740xd, and R840 servers. PowerFlex is deployed on these nodes in a true hyperconverged form where the PowerFlex storage data client and storage data server software components are installed on the same PowerFlex node. PowerFlex hyperconverged nodes both provide and consume storage.

PowerFlex compute-only nodes

PowerFlex compute-only nodes are based on Dell PowerEdge R650, R750, R6525, R640, R740xd, and R840 servers. The PowerFlex compute-only node enables you to deploy PowerFlex in a two-layer architecture that delivers ultimate flexibility in independently scaling compute and storage resources. The PowerFlex SDC software component is installed on PowerFlex compute-only nodes.

PowerFlex file nodes

PowerFlex file is based on the Dell PowerEdge R650 with two third-generation Intel Xeon Scalable processors with up to 24 cores per processor. PowerFlex file nodes are deployed in a cluster of 2 to 16 nodes. The PowerFlex storage data client software component is installed on PowerFlex file nodes.

Management controller

PowerFlex controller nodes

PowerFlex controller nodes are based on the Dell PowerEdge R650 server. PowerFlex controller nodes use PowerFlex to provide a reliable, highly available storage cluster for the management plane. PowerFlex appliance supports standalone and multi-node PowerFlex management controllers.

VMware NSX-T Edge nodes

VMware NSX-T Edge nodes host the VMware NSX-T Edge gateway instances (VMs); two or more VMware NSX-T Edge nodes are provided with the NSX-T ready configuration within the PowerFlex appliance.

PowerFlex node networking

A PowerFlex appliance is based on either an access/aggregation or a customer-provided leaf-spine topology. You also have the option to implement your preferred networking, as long as the connections to and between PowerFlex nodes meet PowerFlex requirements.

General network connectivity descriptions

A pair of access switches is required to handle all inter-cabinet network traffic between the nodes. A standard deployment is one pair of access or leaf switches per cabinet. A management switch is required to support the out-of-band management requirements of the system. Management switches must support network connectivity from the following equipment:
- One connection for each PowerFlex controller to support management traffic
- One connection for each PowerFlex controller iDRAC
- One connection for each PowerFlex node iDRAC
- One connection for each switch for the out-of-band connection, if switches are managed by PowerFlex Manager

Node network requirements

PowerFlex R650 nodes

Standard PowerFlex nodes have five connections. The first four are connected to the access/leaf switches. The fifth connects the iDRAC port to the out-of-band management switch.

Slot layout

PowerFlex R650 node Dual CPU

Slot 0 (OCP) CX5

Slot 1 Empty

Slot 2 Empty

Slot 3 CX5

PowerFlex R750 nodes

Slot layout

PowerFlex R750 nodes Dual CPU/GPU

Slot 0 (OCP) CX5

Slot 1 Empty

Slot 2 Empty

Slot 3 Empty

Slot 4 Empty

Slot 5 CX5

Slot 6 Empty

Slot 7 Empty

Slot 8 Empty

PowerFlex controller nodes

PowerFlex controller nodes have six connections. The first four are connected to the access/leaf switches. The last two connections are to the out-of-band management switch.

The following information describes the cabling requirements for PowerFlex controller node:

Slot layout

PowerFlex R650 node Dual CPU

Slot 0 (OCP) CX5

Slot 1 CX5

Slot 2 Empty

Slot 3 CX5

VMware NSX-T Edge nodes

This solution enables PowerFlex to connect to a software-defined network using NSX-T. NSX-T ready refers to PowerFlex nodes that are configured for NSX-T installation. In addition, NSX-T Edge nodes are added to the PowerFlex appliance. The VMware services team deploys the NSX-T ready data center at the customer site. PowerFlex with VMware NSX-T ready can be used with either network switch design topology: aggregation/access or leaf-spine.

The following diagram provides the server port map for PowerEdge R650:

Slot layout

PowerFlex R650 nodes with dual CPU, 6x25 GbE

Slot 0 (OCP) CX5

Slot 1 CX5

Slot 2 Empty

Slot 3 CX5

Management control plane

The PowerFlex appliance management control plane consists of:
- VMware ESXi to deliver high availability for VMs
- PowerFlex management node: a PowerEdge server with a custom configuration
- One of the following:
  - PowerFlex storage data server for cluster storage high availability
  - Single PowerFlex management controller with RAID for data protection
  - Customer-provided management nodes

The management network connection consists of the following:

PowerFlex management platform - The PowerFlex management platform is the software management and orchestration stack for PowerFlex. It is implemented on the PowerFlex management controller and includes the container environment running on physical or virtual Linux instances, and containers that provide services.

PowerFlex Manager - Provides IT operations management for the PowerFlex appliance. It increases efficiency by reducing the time-consuming manual tasks that are required to manage system operations. Use PowerFlex Manager to deploy and manage new and existing PowerFlex appliance environments.

PowerFlex Manager discovers, deploys, and operates the PowerFlex appliance by using resources, templates, and resource groups.

The following table explains key PowerFlex Manager terminology:

Term Description

Resource The PowerFlex nodes, network switches, and virtual machine managers in the system. When PowerFlex Manager discovers resources, it identifies each component and how to talk to its management interface.

Template Contains the configuration requirements that must be applied to a resource or group of resources during deployment. These requirements include firmware and software, operating system and/or hypervisor, and PowerFlex. A template represents the desired state of your deployed configuration. PowerFlex Manager provides sample templates that represent the standard configuration options. You can clone a sample template and use it for your implementation. You can also define a custom template if necessary.

Resource group A group of resources that are managed within the PowerFlex appliance as a cluster; it is the state of your deployed configuration. PowerFlex Manager provides health monitoring, compliance monitoring and remediation, and the ability to add or remove resources. A resource group is the finished outcome.
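The desired-state relationship between a template and a deployed resource can be sketched as a simple comparison. This is a conceptual model with hypothetical names (`drift_report`), not PowerFlex Manager's actual compliance engine, which covers firmware, software, and network settings:

```python
def drift_report(template: dict, resource: dict) -> dict:
    """Compare a resource's settings against its template (the desired state).

    Returns whether the resource is compliant and which settings drifted,
    as {setting: (desired, actual)}.
    """
    drift = {key: (desired, resource.get(key))
             for key, desired in template.items()
             if resource.get(key) != desired}
    return {"compliant": not drift, "drift": drift}
```

Remediation then amounts to driving each drifted setting back to its desired value, which is what compliance monitoring and remediation automate.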

PowerFlex Manager offers the following features:
- Resource discovery, inventory, and management
- Simplified and efficient day-to-day operations
- Management of block and file storage objects
- Creating template-based configurations for consistent and secure deployment of large numbers of compute, storage, and network resources
- Built-in role-based authorization and identity management
- Comprehensive health alerting, monitoring, reporting, and dashboards
- End-to-end automated life cycle management
- Life-cycle compliance management and reporting

For an in-depth overview of PowerFlex Manager, see the Dell EMC PowerFlex Manager Technical Overview.

VMware vCenter - Used for orchestration, management, monitoring, and reporting of virtual compute resources.

Secure Connect Gateway (SCG) - An enterprise monitoring technology that is delivered as an appliance and a stand-alone application. It monitors your devices and proactively detects hardware issues that may occur.

PowerFlex file services

PowerFlex has optional native file capabilities that are highly scalable, efficient, performance focused, and flexible.

PowerFlex file nodes enable accessing data over file protocols such as Server Message Block (SMB), Network File System (NFS), and Secure File Transfer Protocol (SFTP). PowerFlex file nodes support two primary business cases:
- Traditional NAS: home directories and file sharing
- Transactional NAS: database and VMware workloads

PowerFlex file architecture

PowerFlex file is deployed on PowerFlex file nodes to provide file services to applications.

PowerFlex file nodes provide compute capabilities (CPU and memory) and consume storage from PowerFlex block (SDS), providing highly scalable performance for transactional and traditional workloads. PowerFlex file can be scaled independently of PowerFlex storage, providing more flexible options for customers.

The following figure highlights applications consuming both PowerFlex file and block storage:

With the native file capabilities available on PowerFlex appliance, administrators can easily implement a highly scalable, efficient, high performance, and flexible solution that is designed for the modern data center. The rich supporting feature set and mature architecture provides the ability to support a wide array of use cases. PowerFlex file uses virtualized NAS servers to enable access to file systems, provide data separation, and act as the basis for multitenancy. PowerFlex file services can be accessed through a wide range of protocols and can take advantage of advanced protocol features.

PowerFlex file servers - PowerFlex file uses virtualized file servers that are called NAS servers. A NAS server contains the configuration, interfaces, and environmental information that is used to facilitate access to the file systems. This includes services such as Domain Name System (DNS), Lightweight Directory Access Protocol (LDAP), Network Information Service (NIS), protocols, antivirus, NDMP, and so on.

Multi-tenancy - NAS servers can be used to enforce multi-tenancy. This is useful when hosting multiple tenants on a single system, such as for service providers. Since each NAS server has its own independent configuration, it can be tailored to the requirements of each tenant without impacting the other NAS servers on the same appliance. NAS servers are logically separated from each other, and clients that have access to one NAS server do not inherently have access to the file systems on the other NAS servers. File systems are assigned to a NAS server upon creation and cannot be moved between NAS servers.

High availability - New NAS servers are automatically assigned across the available nodes. The preferred node acts as a marker to indicate the node that the NAS server should be running on. Once provisioned, the preferred node for a NAS server never changes. The current node indicates the node that the NAS server is running on. Changing the current node moves the NAS server to a different node, which can be used for load-balancing purposes. When a NAS server is moved to a new node, all file systems on the NAS server are moved along with it.
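The preferred/current-node distinction can be modeled in a few lines. The structure below is hypothetical; PowerFlex exposes this behavior through its management interfaces, not a Python API:

```python
def move_nas_server(placement: dict, nas: str, new_node: str) -> dict:
    """Move a NAS server to another node (for load balancing or failover).

    Only the current node changes; the preferred node is fixed at
    provisioning time and never changes afterward. All file systems
    follow the NAS server, so a move needs no per-file-system step.
    """
    placement[nas]["current"] = new_node
    return placement
```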

Protocols - PowerFlex file supports SMB1 through 3.1.1. SMB3 enhancements such as continuous availability, offload copy, protocol encryption, multichannel, and shared VHDX in Hyper-V are supported on PowerFlex file. PowerFlex file also supports the Microsoft Distributed File System (DFS) namespace. This ability enables the administrator to present shares from multiple file systems through a single mapped share. PowerFlex file SMB servers can be configured as a stand-alone DFS root node or as a leaf node on an Active Directory DFS root. DFS-R (replication) is not supported on PowerFlex file SMB servers.

PowerFlex file supports NFSv3 through NFSv4.1, as well as Secure NFS. Each NAS server has options to enable NFSv3 and NFSv4 independently. Support for advanced NFS protocol options is also available. NFSv4 is a version of the NFS protocol that differs considerably from previous implementations. Unlike NFSv3, this version is a stateful protocol, meaning that it maintains a session state and does not treat each request as an independent transaction with no preexisting context. NFSv4 also brings support for several new features, including NFS ACLs that expand on the mode-bit-based access control of previous versions of the protocol.

NAS servers and file systems also support access over FTP and SFTP. SFTP is more secure because it does not transmit usernames and passwords in clear text. FTP and SFTP access can be enabled or disabled individually at the NAS server level. Only active-mode FTP and SFTP connections are supported.

Multi-protocol support - When a NAS server has both the SMB and NFS protocols enabled, multi-protocol access is automatically enabled. Multi-protocol access enables accessing a single file system using the SMB and NFS protocols simultaneously.

Naming and directory services - PowerFlex file supports the following naming and directory services:
- DNS: a service that provides translations between hostnames and IP addresses
- LDAP/NIS: services that provide a centralized user directory for username and ID resolution
- Local files: individual files used to provide username and ID resolution

Filesystem - PowerFlex file leverages a 64-bit file system that is highly scalable, efficient, performant, and flexible. The PowerFlex file system is mature and robust, enabling it to be used in many traditional NAS use cases.

Compression - PowerFlex file supports compression using fine granularity storage pools.

Shrink and extend - PowerFlex file provides increased flexibility through the ability to shrink and extend file systems as needed. Shrink and extend operations resize the file system and update the capacity that is seen by the client.

Quotas - PowerFlex file includes quota support that allows administrators to place limits on the amount of space that can be consumed, to regulate file system storage consumption. PowerFlex file supports user quotas, tree quotas, and user quotas on tree quotas. All three types of quotas can coexist on the same file system and can be used together to achieve fine-grained control over storage usage.
- User quotas: User quotas are set at the file system level and limit the amount of space a user may consume on a file system. Quotas are disabled by default.
- Tree quotas: Tree quotas limit the maximum size of a directory on a file system. Unlike user quotas, which are applied and tracked on a user-by-user basis, tree quotas are applied to directories within the file system. Tree quotas can be applied to new or existing directories.
- User quotas on tree quotas: Once a tree quota is created, it is also possible to create additional user quotas within that specific directory by choosing to enforce user quotas. When multiple limits apply, users are bound by the limit that they reach first.
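The "bound by the limit reached first" behavior amounts to taking the smallest remaining allowance across every quota that applies. A minimal sketch (a hypothetical helper, not a PowerFlex API):

```python
def remaining_allowance(quotas: list) -> float:
    """Smallest remaining space across the applicable quotas.

    `quotas` is a list of (used, limit) pairs in bytes; a limit of None
    means that quota is disabled (quotas are disabled by default).
    """
    remaining = [limit - used for used, limit in quotas if limit is not None]
    return min(remaining) if remaining else float("inf")
```

For a user writing under a tree quota with enforced user quotas, all three pairs (user quota, tree quota, user quota on the tree quota) would appear in the list, and the tightest one wins.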

Snapshots - PowerFlex file features pointer-based snapshots. These can be used for restoring individual files or the entire file system to a previous point in time. Since these snapshots leverage redirect-on-write technology, no additional capacity is consumed when the snapshot is first created. Capacity starts to be consumed only as data is written to the file system and changes are tracked.

CAVA - Common Anti-Virus Agent (CAVA) provides an antivirus solution to SMB clients by using third-party antivirus software to identify and eliminate known viruses before they infect files on the storage system. This reduces the chance of Windows clients storing infected files on the file system and protects them if they open an infected file. The CAVA solution is for clients running the SMB protocol only. If clients use the NFS or FTP protocols to create, modify, or move files, the CAVA solution does not scan these files for viruses.

NDMP - PowerFlex file supports three-way Network Data Management Protocol (NDMP) backups, allowing administrators to protect file systems by backing up to a tape library or other backup device. An NDMP configuration has three primary components:
- Primary system: the source system to be backed up, such as PowerFlex file
- Data Management Application (DMA): the backup application that orchestrates the backup sessions, such as NetWorker
- Secondary system: the backup target, such as PowerProtect

Three-way NDMP transfers both the metadata and backup data over the network. The metadata travels from the primary system to the DMA. The data travels from the primary system to the DMA and then finally to the secondary system.

Security considerations

Enterprises have many reasons for encrypting their data, including addressing regulatory compliance and protecting customer data and sensitive intellectual property against theft.

PowerFlex appliance offers numerous built-in security features and capabilities across multiple security domains to help you meet security and compliance requirements. The following is a summary of the PowerFlex appliance security features by security domain.

Asset management
- PowerFlex Manager simplifies asset discovery and system resource inventory management
- Resource deployment services, templates, and resource tagging allow you to efficiently deploy a complex environment with consistency

Identity authentication and authorization
PowerFlex appliance architecture offers built-in security controls to meet authentication and authorization needs. Some of the key security controls are:
- LDAP/Active Directory integration
- Role-based access control (RBAC)
- RSA SecurID MFA option (using Keycloak)

Data confidentiality
Confidentiality is one of the key pillars of the security triad (CIA). PowerFlex appliance offers both software-based and hardware-based FIPS 140-2 compliant data-at-rest encryption (D@RE). For hardware-based D@RE, you can choose self-encrypting drives (SEDs) that meet your business needs and use integrated CloudLink for key management. The integrated CloudLink can also provide software-based encryption for PowerFlex storage data servers (SDS) that is transparent to the features and operation of the PowerFlex solution. CloudLink uses dm-crypt, a native Linux encryption package, to secure SDS devices. A proven high-performance volume encryption solution, dm-crypt is widely implemented on Linux machines.

CloudLink encrypts the storage data server devices with unique keys that are controlled by enterprise security administrators. CloudLink Center provides centralized, policy-based management for these keys, enabling single-screen security monitoring and management across one or more PowerFlex deployments.

System trust
PowerFlex appliance is built with Dell PowerEdge servers that are called PowerFlex nodes. PowerFlex nodes inherit cutting-edge cyber-resiliency and security features such as:
- An immutable silicon-based root of trust to securely boot iDRAC, BIOS, and firmware
- Virtual lock for preventing server configuration/firmware changes, and drift detection
- Rapid recovery to a trusted image when authentication fails
- Rollback to a known good firmware version if firmware is compromised
- Secure system erase of internal server storage devices, including HDD, SSD, and NVMe drives
- An industry-leading secure supply chain
- PowerFlex software integrity checks

Network security
PowerFlex appliance not only offers a built-in access/aggregation or leaf-spine network topology but also incorporates many advanced security features that are available with Cisco and Dell network switches. These security features help you protect your network against data loss or compromise resulting from intentional attacks and from unintended but damaging actions by well-meaning network users. Some of the key security features include:
- Network segmentation with ACLs, firewalls, and VLANs
- TACACS+ security protocol support
- LDAP authentication and authorization support
- Role-based access control (RBAC) to control and limit access to operations on the Cisco NX-OS device
- Authentication, authorization, and accounting (AAA) architectural framework support
- Access control list (ACL) support: IP ACLs, MAC ACLs, and VACLs are available to filter traffic based on IPv4 addresses, the MAC address in the packet header, and VLAN routing
- Simple Certificate Enrollment Protocol (SCEP) support
- Dynamic ARP inspection, DHCP snooping, key-chain management, and control-plane policing can be used to further harden security

Auditing and accountability
The primary objectives of auditing and accountability are to maintain a record of system activities, establish individual accountability, detect system anomalies, and reconstruct system events using audit logs and records. PowerFlex appliance creates and retains system audit logs, event logs, and alert records that can be used for monitoring, trend and behavior analysis, incident investigation, and reporting of unlawful or unauthorized system activities.

Additional references

This section provides references to related documentation for network, storage, and virtualization components.

Product | Description | Link to documentation
PowerFlex | Converges storage and compute resources into a single-layer architecture, aggregating capacity and performance, simplifying management, and scaling to thousands of PowerFlex nodes. | https://www.delltechnologies.com/en-us/Storage/powerflex.htm
VMware vCenter Server | Provides a scalable and extensible platform that forms the foundation for virtualization management. | www.vmware.com/products/vcenter-server/
Virtualized infrastructure for PowerFlex | Virtualized infrastructure for PowerFlex rack and PowerFlex appliance. Virtualizes all application servers and provides VMware High Availability (HA) and Dynamic Resource Scheduling (DRS). | www.vmware.com/products/vs
