Bare Metal Orchestrator 1.3 Installation Guide

Version 1.3

Abstract

This guide describes how to install Bare Metal Orchestrator on a hypervisor. It includes how to scale Bare Metal Orchestrator, modify and delete nodes, set up high availability, and how to upgrade Bare Metal Orchestrator.

Dell Technologies Solutions

September 2022 Rev. 05

Notes, cautions, and warnings

NOTE: A NOTE indicates important information that helps you make better use of your product.

CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid the problem.

WARNING: A WARNING indicates a potential for property damage, personal injury, or death.

© 2021 - 2022 Dell Inc. or its subsidiaries. All rights reserved. Dell Technologies, Dell, and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be trademarks of their respective owners.

Contents

Preface
    Revision history
    Product support
        Contacting Dell Support

Chapter 1: Bare Metal Orchestrator installation overview
    Introduction
    Bare Metal Orchestrator high availability
    Installation workflow
    Access and accounts
    About Ansible

Chapter 2: Installing Bare Metal Orchestrator
    Prerequisites
    Download the OVA
    Deploy the OVA on an ESXi server
    Deploy the OVA image on vCenter
    Configure a single node Bare Metal Orchestrator after deployment
    Configure an HA Bare Metal Orchestrator after deployment
    Configure a secondary interface for DHCP auto-discovery
    Change default CIDR subnets for Bare Metal Orchestrator
    Verify Global Controller partition assignments
    Verify Global Controller node creation
        Viewing nodes
    Uninstall and redeploy Global Controller and HA nodes
    Uninstall Bare Metal Orchestrator
    Create a new user and log in to the web UI

Chapter 3: Scaling Bare Metal Orchestrator
    Scaling overview
    Edit the hosts file
    Create worker nodes
    Verify worker nodes are created

Chapter 4: Deleting Nodes
    Delete worker nodes
    Verify worker nodes are deleted

Chapter 5: Upgrading Bare Metal Orchestrator
    Upgrade overview
    High-level upgrade workflow
    Upgrade the Global Controller and one or more worker nodes

Preface

Purpose

This guide provides instructions to install Bare Metal Orchestrator and create worker nodes for an initial cluster deployment, as well as how to upgrade Bare Metal Orchestrator software and set up high availability.

Audience

This guide is primarily intended for administrators who are responsible for deploying and upgrading Bare Metal Orchestrator nodes.

Disclaimer

This guide may contain language that is not consistent with current Dell Technologies guidelines. Dell Technologies plans to update the guide in future releases to revise the language accordingly.


Revision history

This revision history lists major changes to this document.

Table 1. Revisions

Date            Release   Description
September 2022  1.3       Minor edits and changes to the port requirements. Updates to the Upgrade overview chapter.
May 2022        1.2       High availability updated with distributed storage and deployment process updated. Single node requirements and deployment process updated.
March 2022      1.1       High availability deployment added. Upgrading nodes chapter added. Minor changes across the guide.
November 2021   1.0       Inaugural release.


Product support

Resources to help you provision the infrastructure and fix problems.

Documentation

You can find these Bare Metal Orchestrator documents on the Bare Metal Orchestrator Documentation site:

Bare Metal Orchestrator Release Notes
Bare Metal Orchestrator Installation Guide
Bare Metal Orchestrator Command Line Interface User's Guide
Bare Metal Orchestrator Web User Interface Guide
Bare Metal Orchestrator Command Line Interface Reference Guide
Bare Metal Orchestrator Network Planning Guide
Bare Metal Orchestrator API Guide

The Bare Metal Orchestrator API Guide is on the Dell Technologies Developer Portal site.

Bare Metal Orchestrator product support page

Bare Metal Orchestrator Product Support Overview

Where to get help

The Dell Technologies Support site (https://www.dell.com/support) contains important information about products and services including drivers, installation packages, product documentation, knowledge base articles, and advisories.

A valid support contract and account might be required to access all the available information about a specific Dell Technologies product or service.

Dell Technologies Support contact information

Dell provides several online and telephone-based support and service options. Availability varies by country or region and product, and some services may not be available in your area.

NOTE: If you do not have an active Internet connection, you can find contact information from your purchase invoice, packing slip, bill, or Dell product catalog.

Call 1-800-782-4362 or the support phone number for your country or region. Go to Dell Support to find the support phone number for your country or region. Tell the support person that you want to open a service request for Bare Metal Orchestrator. Give the support person your Product ID and a description of the problem.

You can also go to Dell Support and search for Bare Metal Orchestrator. The product support page requires you to sign in and enter your Product ID.

Contacting Dell Support

How to contact your Dell account representative, Dell technical support, or Dell customer service.

Steps

1. Go to Dell Support and select a support category.

2. From the Choose a Country/Region list, verify your country or region. Then, select the appropriate service or support link.


Bare Metal Orchestrator installation overview

This chapter describes a single node Bare Metal Orchestrator cluster and a five-node high availability (HA) cluster with distributed storage, and provides installation workflows and account information.

Topics:

Introduction
Bare Metal Orchestrator high availability
Installation workflow
Access and accounts
About Ansible

Introduction

The Dell Technologies Bare Metal Orchestrator software is provided as a virtual appliance that can be installed on a hypervisor. The virtual appliance is based on Kubernetes RKE2 and is delivered as an Open Virtual Appliance (OVA) file.

To install Bare Metal Orchestrator, you must download the OVA and deploy it on a hypervisor.

Bare Metal Orchestrator is installed on a single node RKE2 (next-generation) cluster. The node that Bare Metal Orchestrator is installed on is called the Global Controller (GC) node.

The Global Controller is a fully contained management cluster with onboard services and components that function as a site. This cluster is also called the GC site. The GC site simplifies the administration and management of Bare Metal Orchestrator and is the default site that is created during OVA deployment.

You can deploy Bare Metal Orchestrator in one of the following configurations:

A scalable, single node RKE2 (next-generation) cluster
A five-node high availability (HA) cluster with internal or external distributed storage

NOTE: You cannot convert a single node Bare Metal Orchestrator deployment to a five-node high availability deployment. For more information, see Bare Metal Orchestrator high availability.

After a successful deployment, you can scale the Bare Metal Orchestrator node to a multi-node cluster. Scaling is done by adding one or more worker nodes to the Global Controller node. Worker nodes support the creation of remote sites. For more information about sites, see the Bare Metal Orchestrator Command Line Interface User's Guide.

For upgrade instructions, see Upgrade overview.

Bare Metal Orchestrator high availability

With high availability (HA), the Bare Metal Orchestrator OVA is deployed on a five-node HA cluster by default. The Global Controller (GC) services deploy on the first node, which is a fully functional, scalable Bare Metal Orchestrator cluster to which the two HA nodes are added. The two HA nodes function as a redundant pair for HA failover and must be reachable from the GC host.

The Global Controller site data and services are fully replicated on the two HA nodes. A keepalive mechanism monitors the availability of services on each node in the control plane. An automatic failover is triggered if a node failure is detected.

A redundant pair of Load Balancers provides highly reliable management access for the Bare Metal Orchestrator Web UI, CLI, and API through a virtual IP (VIP) address. The VIP must be set to an available IP address on the same subnet as the two Load Balancers.


Each Load Balancer is considered a node in the five-node HA cluster and must be reachable from the GC. These servers must support NGINX.

Load Balancer key tasks:

Setting the virtual IP address (VIP) of the Load Balancers to an available IP address in the same subnet as the two Load Balancers.
Directing front-end traffic to the three control plane nodes for HA redundancy.
Managing load distribution.
Managing control planes.
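The installation playbooks referenced later in this guide configure the Load Balancer hosts for you; the fragment below is only an illustrative sketch of how a VRRP-managed VIP of this kind is typically declared in keepalived. The interface name ens33 and the example VIP 192.168.10.50 are assumptions rather than values from this guide:

# Illustrative keepalived VRRP sketch (not taken from the product playbooks).
# Assumptions: ens33 faces the management subnet and 192.168.10.50 is an
# unused address on the same subnet as the two Load Balancers.
vrrp_instance BMO_VIP {
    state MASTER              # use BACKUP on the second Load Balancer
    interface ens33
    virtual_router_id 51
    priority 150              # give the BACKUP node a lower priority
    advert_int 1
    virtual_ipaddress {
        192.168.10.50         # the VIP used to reach the web UI, CLI, and API
    }
}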

The following figure shows the architecture of a five-node HA deployment with distributed storage. The three control plane nodes and the redundant pair of Load Balancers comprise the five-node HA cluster. All nodes and the distributed storage volumes are active.

Figure 1. Bare Metal Orchestrator five-node HA cluster with distributed storage

GlusterFS provides distributed file storage for the Global Controller and the two redundant HA nodes in the control plane cluster. The distributed storage volumes replicate the Bare Metal Orchestrator cluster data when using PersistentVolumeClaim (PVC).

Distributed storage can be deployed locally in the three-node control plane cluster or externally. For external storage deployments, the VMs hosting the storage volumes must be reachable by the HA cluster. A minimum of three storage nodes are required.

NOTE: The remote site uses local-path as the storage class.

Observe the following:

You cannot upgrade a single node Bare Metal Orchestrator deployment to a five-node HA deployment.
When using a local copy of the CLI as a remote client, you must specify the virtual IP (VIP) address of the server that is hosting the Load Balancers in the user's config file. For more about using the CLI as a remote client, see the Bare Metal Orchestrator Command Line Interface User's Guide.
If any two control plane nodes in a high availability deployment fail at the same time, you must reboot the Global Controller node before high availability functionality can resume. Using the CLI, log in to the Global Controller as installer and enter reboot.

Installation workflow

You can install Bare Metal Orchestrator as a single node cluster or as a five-node HA cluster with distributed storage.

The following diagram shows the high-level steps to install a single node Bare Metal Orchestrator cluster.


Figure 2. Single node Bare Metal Orchestrator cluster installation flow

The following diagram shows the high-level steps to install a five-node HA cluster that uses either internal or external distributed storage.


Figure 3. High availability Bare Metal Orchestrator cluster installation flow

Access and accounts

Default dell user and installer accounts are available for the initial Dell Technologies Bare Metal Orchestrator Open Virtual Appliance (OVA) deployment.

When you SSH into Bare Metal Orchestrator for an initial OVA deployment using the default dell user or the installer account, you can run CLI commands with elevated levels of administrator access. We recommend that you change the default passwords using the $ passwd Linux command as soon as possible and record the new passwords for future reference.

After Bare Metal Orchestrator is deployed, you can use the CLI or the web UI to continue the setup. We recommend that you create a Global Admin user account and use that instead of the default accounts. To log in and use the Bare Metal Orchestrator web UI, a Global Admin account is required; see Create a new user and log in to the web UI.

For more information about user roles and creating user accounts, see the Bare Metal Orchestrator Command Line Interface User's Guide.

About Ansible

Ansible is an open-source software provisioning and configuration management tool. In Bare Metal Orchestrator, you must use Ansible and a text editor to:

Edit the hosts.ini file. The hosts.ini file lists IP addresses for the Global Controller and all worker nodes, as well as the two high availability (HA) nodes and the Load Balancers for HA configurations.

Run a playbook. Playbooks are the YAML files that you store and manage, passing them to Ansible to run as needed. Every time a playbook is run, Ansible checks for the listed nodes in the hosts file, establishes connections with the nodes, and uses this information to create or delete remote nodes.
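For example, after the node IP addresses are listed in the hosts file, a playbook is passed to ansible-playbook with the -i option. The commands below use the setup.yaml playbook and the inventory path that appear later in this guide; the working directory mw-ova-ansible is the one shipped in the OVA:

cd mw-ova-ansible
sudo ansible-playbook setup.yaml -i inventory/my-cluster/hosts.ini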


Installing Bare Metal Orchestrator

This chapter provides instructions on how to deploy the Bare Metal Orchestrator OVA.

Topics:

Prerequisites
Download the OVA
Deploy the OVA on an ESXi server
Deploy the OVA image on vCenter
Configure a single node Bare Metal Orchestrator after deployment
Configure an HA Bare Metal Orchestrator after deployment
Configure a secondary interface for DHCP auto-discovery
Change default CIDR subnets for Bare Metal Orchestrator
Verify Global Controller partition assignments
Verify Global Controller node creation
Uninstall and redeploy Global Controller and HA nodes
Uninstall Bare Metal Orchestrator
Create a new user and log in to the web UI

Prerequisites

Hardware requirements

The following tables describe the minimum hardware requirements for Bare Metal Orchestrator OVA deployment and for worker nodes installed at remote sites.

NOTE: For Bare Metal Orchestrator to operate properly after OVA deployment, a minimum of 15 GB of free space must be maintained on the Global Controller (GC) and the worker nodes. For high availability (HA) deployments, the two redundant HA nodes must also maintain a minimum of 15 GB free space.

The following table lists the hardware requirements for the Global Controller in a single node Bare Metal Orchestrator cluster deployment and in high availability deployments. For high availability, use the same hardware requirements for the Global Controller and the two redundant HA nodes.

Table 2. Hardware requirements for Global Controller and HA nodes (minimum requirements)

CPU
    Single node cluster: eight CPU cores, physical or virtual
    HA deployment: eight CPU cores, physical or virtual

Memory
    Single node cluster: 32 GB RAM
    HA deployment: 32 GB RAM

Hard Disk
    Single node cluster: 200 GB (partition 1, sda) and 250 GB (partition 2, non-boot partition, sdb)
    HA deployment: 200 GB (partition 1, sda) and 250 GB (partition 2, non-boot partition, sdb)

Network Interface Card (NIC)
    Single node cluster: two NICs. The two NICs are installed by default with the OVA deployment. One NIC is required for network management and the other NIC for the Dynamic Host Configuration Protocol (DHCP) configuration. You can add additional NICs for every DHCP subnet.
    HA deployment: two NICs per VM. Two NICs on the Global Controller node are installed by default with the OVA deployment. One NIC is required for network management and the other NIC for the Dynamic Host Configuration Protocol (DHCP) configuration. You can add additional NICs for every DHCP subnet.

NOTE: Partition 2 is used for GlusterFS storage. By default, the OVA reserves 250 GB of SSD memory for storage. To increase the size of partition 2, consult your Dell representative and the Bare Metal Orchestrator Network Planning Guide.

The following table lists the worker node hardware requirements in a single node Bare Metal Orchestrator cluster deployment and for an HA deployment.

Table 3. Bare Metal Orchestrator cluster worker node hardware requirements

CPU
    Single node cluster: four CPU cores, physical or virtual
    HA deployment: eight CPU cores, physical or virtual

Memory
    Single node cluster: 16 GB RAM
    HA deployment: 32 GB RAM

Hard Disk
    Single node cluster: 100 GB (free space)
    HA deployment: 200 GB (free space)

Network Interface Card (NIC)
    Single node cluster: two NICs. The two NICs are installed by default with the OVA deployment. One NIC is required for network management and the other NIC for the Dynamic Host Configuration Protocol (DHCP) configuration. You can add additional NICs for every DHCP subnet.
    HA deployment: two NICs. The two NICs are installed by default with the OVA deployment. One NIC is required for network management and the other NIC for the Dynamic Host Configuration Protocol (DHCP) configuration. You can add additional NICs for every DHCP subnet.

Firmware recommendations

Ensure you have the latest recommended Dell firmware versions installed on your hardware.

Table 4. Supported firmware

Dell PowerEdge 14th generation servers: iDRAC firmware version 5.00.00.00 or higher
    NOTE: For Dell PowerEdge R740 Rack Servers, the supported iDRAC firmware version is 5.00.10.20 or higher.
Dell PowerEdge 15th generation servers: iDRAC firmware version 5.00.10.20 or higher

Software requirements

The following table lists the supported hypervisor for the OVA deployment. A dedicated server is required.

Table 5. Supported hypervisor

VMware ESXi: 6.7 Update 3, 7.0 Update 3


The following table lists the supported management software for the OVA deployment. A dedicated server is required.

Table 6. Supported management software

VMware vCenter: 6.7 Update 3, 7.0 Update 3

The following table lists the supported distributed file storage system software for a Bare Metal Orchestrator deployment.

Table 7. Supported file storage system software

GlusterFS: 9.2

By default, GlusterFS is installed and running on the Bare Metal Orchestrator host after you import the OVA.

For high availability (HA) deployments, you must have the GlusterFS software installed and running. For HA deployments with external storage, at least three VMs are required. Each VM hosting the distributed storage must have GlusterFS installed and configured.

NOTE: If you choose to set up your own, external GlusterFS storage cluster for use with Bare Metal Orchestrator, you assume responsibility to manage and administer that distributed storage cluster.

Reserved IP addresses and network requirements

Bare Metal Orchestrator reserves IP addresses in subnet ranges 10.42.0.0/16 and 10.43.0.0/16 by default for the Global Controller cluster communications.

CAUTION: Check for potential conflicts before deploying the Global Controller cluster on a VM. The VM will fail to onboard the Global Controller if it is on the same subnet that Bare Metal Orchestrator uses for internal communications.

If you cannot resolve IP address conflicts by changing the subnet of your VM, you can change the default cluster-cidr and service-cidr subnets for Bare Metal Orchestrator; see Change default CIDR subnets for Bare Metal Orchestrator.
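As a quick pre-deployment check, you can compare the address and routes on the VM's management interface against the reserved ranges. This is only a sketch; the interface name ens33 matches the examples later in this guide, and any 10.42.x.x or 10.43.x.x output indicates a potential conflict:

ip -4 addr show ens33
ip route | grep -E '10\.4[23]\.'   # any match here points to a possible conflict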

The following are the network requirements for Bare Metal Orchestrator to be able to connect to the Integrated Dell Remote Access Controller (iDRAC):

Bare Metal Orchestrator and the iDRAC should be Layer 3 reachable.
The OVA must be assigned an IP address that is accessible from the iDRACs of the servers that Bare Metal Orchestrator will manage.
The OVA cannot be behind a Network Address Translation (NAT) unless the iDRACs of the target servers are also in the same NATed network.

Port requirements

If you are using a firewall, you must open all ports that are listed in the following table to ensure that Bare Metal Orchestrator functions correctly.

Table 8. Port requirements

Port 22 (Global Controller (GC) and remote sites): Used for SSH access to run Ansible playbooks and for GlusterFS distributed storage.
Port 67 (GC and remote sites): Used when DHCP is configured. Optionally open on the remote site if PXE is used.
Port 69 (GC and remote sites): Used by the TFTP server. Optionally open on the remote site if PXE is used.
Port 123 (Remote site): Used for NTP synchronization.
Port 441 (GC site): Used by the global NGINX to store operating systems and firmware images.
Port 442 (GC site): Used by the internal NGINX.
Ports 443 (HTTPS) and 80 (HTTP) (GC site): Used by the web user interface.
Port 2379 (TCP) (GC site): Used by the ETCD client for data access and management.
Port 2380 (TCP) (GC site): Used by the ETCD peer for data access and management.
Port 5047 (GC site): Used by localregistry.io as a docker container repository.
Port 6443 (TCP) (GC site): Used for communicating with remote sites and the application programming interface (API).
Port 8081 (GC site): Used for setting up remote sites.
Port 8082 (GC site): Heketi CLI port.
Port 8472 (UDP) (GC and remote sites): Used for Flannel VXLAN.
Port 9345 (TCP) (GC site): Used for API communications.
Port 10250 (GC and remote sites): Used by the kubelet node agent to register the node and manage containers.
Port 30500 (GC site): Used by the global MinIO S3 to store the backups.
Port 32569 (GC site): Used for the Heketi pod to communicate with the server.

Consult the Gluster documentation to configure the firewall on each of the GlusterFS nodes if you are using external distributed storage with Bare Metal Orchestrator. For more information, see the Gluster FS Quick Start Guide on the GlusterFS website.
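This guide does not prescribe a specific firewall tool. If the Global Controller host happens to use ufw, opening a subset of the GC-site ports from the table above might look like the following sketch; adjust the list to the ports your deployment actually needs:

sudo ufw allow 22/tcp
sudo ufw allow 80,441,442,443/tcp
sudo ufw allow 2379,2380,6443,9345,10250/tcp
sudo ufw allow 8472/udp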

Global Controller node requirements

Before deploying the Bare Metal Orchestrator OVA, you must configure the minimum virtual memory map count (vm.max_map_count) to 262144 on the server that is used for the Global Controller node.

CAUTION: If the virtual memory is not properly configured on the Global Controller node, Bare Metal Orchestrator logs do not display in the OpenSearch dashboard.

To set the server's default virtual memory limit to 262144 and make it persistent:

1. Check the current virtual memory limit, run:

$ sudo sysctl vm.max_map_count

2. Change the memory limit in /etc/sysctl.conf and save the file.

$ sudo vi /etc/sysctl.conf
vm.max_map_count=262144

3. Run the following command to apply the change in the current session:

$ sudo sysctl -p

Worker node requirements

Ensure that worker node servers and virtual machines are accessible over the network using the root account.

To manage a server at the remote site, the network that the server is on must be routable to the primary network of the worker node or routable to the primary network of the Global Controller site.
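A simple way to confirm that a managed server's network is routable from the worker node (or from the Global Controller) is to ask the kernel which route it would use and then test reachability; the address 192.0.2.10 below is only a placeholder for a server's iDRAC IP:

ip route get 192.0.2.10
ping -c 3 192.0.2.10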

The following table lists the worker node requirements. For worker node hardware requirements, see Hardware requirements.


Table 9. Worker node requirements

Linux systemd distribution: Ubuntu 19.10 LTS, Ubuntu 20.04 LTS, or Debian 11

NTP: Install NTP on the worker node server before adding the node to the Bare Metal Orchestrator cluster. In an Ubuntu or Debian Linux distribution, you can run apt-get install ntp on the worker node as root to install NTP.
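After installing the classic ntp package, you can optionally confirm that the worker node is synchronizing before you add it to the cluster; ntpq is installed together with that package:

ntpq -p    # lists the configured time sources with their reach and offset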

HA and Load Balancer requirements

For the two redundant HA nodes and Load Balancers, you must provide four VMs based on either Debian 11 or Ubuntu 20 Linux.

Ensure the VMs that host the two redundant HA nodes and Load Balancers are reachable from the Global Controller host over the network. From the Global Controller host, you should be able to ssh root@<VM-IP> to each of the four VMs, where <VM-IP> is the IP address of the VM.
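A quick way to confirm reachability from the Global Controller host is to attempt an SSH login to each of the four VMs in turn; the addresses in this sketch are placeholders for your two HA nodes and two Load Balancers:

for ip in <ha-node-1-IP> <ha-node-2-IP> <lb-1-IP> <lb-2-IP>; do
    ssh root@"$ip" hostname    # should print the remote hostname without errors
done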

The two redundant HA nodes have the same hardware requirements as the Global Controller (GC). Set up both HA VMs as described in the following table:

Table 10. HA node requirements

Set hostnames.
    bmo-manager-2 and bmo-manager-3 (respectively)

Install NTP, Python 3, and Logical Volume Manager (LVM).
    Run apt-get install ntp python3 lvm2 as root or sudo.

Set up two hard disks on each VM and partition.
    Each HA node (bmo-manager-2 and 3) requires two separate hard disks: /dev/sda and /dev/sdb. These partition names must exactly match the default partitions that are created on the Global Controller when the OVA is deployed.
    Configure a 200 GB partition (free space) on each hard disk. The partition on disk sdb is used for the GlusterFS storage system and must be a non-boot partition.

Install GlusterFS.
    Run the following commands, where GlusterFS version 9.2 and above are supported. For example:

    apt update
    add-apt-repository ppa:gluster/glusterfs-9
    apt install --assume-yes glusterfs-server
    gluster --version

Set the OpenSearch minimum virtual memory limit, and the maximum number of inotify watches and user instances.
    Edit /etc/sysctl.conf to change the default parameters to the following values and then save the file.

    $ vi /etc/sysctl.conf
    vm.max_map_count=262144
    fs.inotify.max_user_watches=1048576
    fs.inotify.max_user_instances=256

    To enable the change in the current session, run:

    $ sysctl -p
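Once both HA VMs are prepared, you can optionally confirm that the GlusterFS service is active on each of them; glusterd is the service installed by the glusterfs-server package:

systemctl status glusterd
gluster --version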

Set up both Load Balancer VMs as described in the following table:


Table 11. Load Balancer requirements

Set hostnames.
    bmo-manager-lb-1 and bmo-manager-lb-2 (respectively)

Install NTP and Python 3.
    Run apt-get install ntp python3 as root or sudo.

Install NGINX and keepalived. Then stop NGINX and disable the process.
    Run apt-get install nginx keepalived as root.
    Then run the following commands to stop and disable NGINX:

    systemctl stop nginx
    systemctl disable nginx

Add dell and installer users.
    Run the following commands to add the dell and installer users without assigning passwords:

    useradd dell
    useradd installer

Distributed storage requirements

By default, an internal GlusterFS storage is already included in the Bare Metal Orchestrator OVA. This is the second 200 GB partition on the Global Controller node.

For high availability (HA) deployments with internal distributed storage (default), each VM that is hosting the three nodes in the control plane cluster must have a second, non-boot partition of at least 200 GB of free space available for storage.

CAUTION: The second, non-boot partition that is used for GlusterFS distributed storage is wiped if Bare Metal Orchestrator is uninstalled.

For high availability (HA) deployments with external distributed storage, the external GlusterFS storage nodes must have a second, non-boot partition of at least 200 GB of free space available for storage. Use the same partition name for the second partition on each external GlusterFS storage node.

To optionally use external distributed storage with Bare Metal Orchestrator, observe the following:

Each VM hosting the distributed storage must have GlusterFS installed on a non-boot partition with at least 200 GB of available storage. Each VM requires two separate hard disks to match the default partitions on the Global Controller (/dev/sda and /dev/sdb). On each hard disk, configure a single partition with 200 GB of free space. The partition on the second disk, sdb, is used for the GlusterFS storage system and must be a non-boot partition.

NOTE: The partition names must exactly match the default partitions that are created on the Global Controller when the OVA is deployed. The first partition on the first disk is sda and the second partition on the second disk is sdb.

A minimum of three GlusterFS storage nodes are required.
Each partition used for external storage must be assigned the same partition name.
The external data storage volumes must be reachable from the HA cluster.

Consult the Gluster documentation to configure the firewall on each of the GlusterFS nodes if you are using external distributed storage with Bare Metal Orchestrator. For more information, see the Gluster FS Quick Start Guide on the GlusterFS website.
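Setting up and administering the external GlusterFS cluster is your responsibility, and the exact steps depend on your environment. As a rough orientation only, forming the trusted storage pool across three nodes typically follows the Gluster quick-start pattern sketched below, where the node names are assumptions:

# run on one storage node after glusterfs-server is installed on all three
gluster peer probe storage-node-2
gluster peer probe storage-node-3
gluster peer status    # all three nodes should appear in the trusted pool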

Download the OVA

You can download and deploy the OVA on an ESXi server or a vCenter. The OVA is prepackaged with the necessary software and system settings.

Download the OVA from Dell Digital Locker.

The key contents of the OVA file are:

Operating system
Ansible playbook
Local registries and sample configuration templates

NOTE: The local registry hosts all Docker images required for the Bare Metal Orchestrator components.


Deploy the OVA on an ESXi server

About this task

Perform this procedure on the Global Controller node to deploy the OVA image for an ESXi server.

Steps

1. Log in to the ESXi server.

2. In the Navigator panel, go to Virtual Machines, right-click, and select Create/Register VM.

The New virtual machine window is displayed.

3. In Select creation type, select Deploy a virtual machine from an OVF or OVA file, and then click Next.

4. In Enter a name for the virtual machine, enter a hostname for the VM.

5. Select the path to the OVA or drag and drop the OVA image into the wizard, and then click Next.

6. Select a datastore to use for your VM, and then click Next.

7. In Network mappings, accept the defaults for the VLAN destination networks.

NOTE: You configure destination networks after the OVA is deployed.

8. In Disk provisioning, leave Thin as the selected value.

9. Clear the Power on automatically check box.

10. Click Next and then click Finish to start the OVA deployment.

11. After the deployment completes, right-click the VM and click Edit Settings.

12. In Select networks, select Adapter 1 as the VLAN adapter for OVA management and then click Ok.

13. Power on the VM.

Results

After successful OVA deployment, the Bare Metal Orchestrator VM starts.

NOTE: An error message may appear during startup that states the virtual device sound cannot connect because no corresponding device is available on the host. Click Answer question to dismiss the message. This improves the time it takes for the VM to start up.

Next steps

Proceed with the post-deployment node setup.

For a single node Bare Metal Orchestrator setup, see Configure a single node Bare Metal Orchestrator after deployment.
For an HA setup, see Configure an HA Bare Metal Orchestrator after deployment.

Deploy the OVA image on vCenter

About this task

Perform this procedure on the Global Controller node to deploy the OVA image in vCenter.

Steps

1. Launch the VMware vSphere web client.

2. In the navigator, under the Hosts and Clusters icon, right-click the ESXi cluster, and select Deploy OVF Template.

3. In Select an OVF template, go to Local file and choose the OVA you have downloaded.

4. Click Next.

5. In Select a name and folder, specify a unique hostname and target location for the VM. Click Next.

6. In Select a compute resource, select a destination compute resource to deploy the OVF template and then click Next.


7. In Review details, accept the template defaults and click Next.

8. In Select storage, select a datastore to store the deployed OVF template and then click Next.

9. In Select networks, accept the defaults for the VLAN destination networks and then click Next.

NOTE: You configure destination networks after the OVA is deployed.

10. In Ready to complete, click Finish to start the OVF deployment.

11. After the deployment completes, right-click the VM and click Edit Settings.

12. In Select networks, select Adapter 1 as the VLAN adapter for OVA management and then click Ok.

13. Power on the VM.

Results

After successful OVA deployment, the Bare Metal Orchestrator VM starts.

Next steps

Proceed with the post-deployment node setup.

For a single-node Bare Metal Orchestrator setup, see Configure a single node Bare Metal Orchestrator after deployment.
For an HA setup, see Configure an HA Bare Metal Orchestrator after deployment.

Configure a single node Bare Metal Orchestrator after deployment

Prerequisites

Check the subnet of the VM for potential conflicts before proceeding to configure Bare Metal Orchestrator and deploy the Global Controller.

CAUTION: A VM that has an IP address within the reserved subnet ranges 10.42.0.0/16 and 10.43.0.0/16 will fail to onboard when deploying the Global Controller. For more information, see Prerequisites.

About this task

After you deploy the OVA, use the following procedure to set up Bare Metal Orchestrator as a single node.

Steps

1. Connect to the ESXi server or the vCenter.

2. Go to the VM and launch the console.

3. Log in as the installer user with the password Dell1234.

4. The OVA is configured to use a static IP address by default. Edit /etc/network/interfaces as follows to configure the static IP address. Then save the file.

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
allow-hotplug ens33
iface ens33 inet static
  address <static-IP-address>
  netmask <netmask>
  gateway <gateway-IP-address>

where <static-IP-address> is your static IP address, <netmask> is the netmask, and <gateway-IP-address> is the IP address of your gateway.


NOTE: Optionally, if you are using DHCP auto-discovery on the Global Controller, you must configure a secondary interface and ensure that it is not routable. For instructions, see Configure a secondary interface for DHCP auto-discovery.

5. Do the following to configure a DNS IP address:

a. Edit /etc/resolv.conf to look like this:

domain localdomain
search localdomain
nameserver 8.8.8.8

NOTE: For air-gapped environments, use 127.0.0.1 as the nameserver IP address.

b. If required, update the nameserver IP address and save the file.

6. Edit /etc/hosts and add the following line to point to the localregistry.io on the Global Controller, and then save the file.

<GC-IP-address> localregistry.io

where <GC-IP-address> is the IP address of the Global Controller node.

7. Enter reboot to reboot the system.

8. Log back into the VM as the installer user with the password Dell1234.

9. Change directory to mw-ova-ansible.

10. Update the IP address of the Global Controller node and the internal gluster_nodes in inventory/my-cluster/hosts.ini and optionally add worker node IP addresses.

The following is an example hosts.ini file:

[global_controller]
<GC-node-IP>

[ha]

[loadbalancer]

[gluster_nodes]
<GC-node-IP>

[node]
<optional-worker-node-IPs>

[node-remove]

[hosts]
<GC-node-IP> ansible_python_interpreter=/usr/bin/python3
<worker-node-1-IP> ansible_python_interpreter=/usr/bin/python3
<worker-node-2-IP> ansible_python_interpreter=/usr/bin/python3

NOTE: When you are ready to scale up the Bare Metal Orchestrator cluster and run the add_workernode.yaml playbook, the worker nodes entered below [node] are added. The worker nodes you enter below [node-remove] are removed from the cluster if you run remove_workernode.yaml. For more information, see Scaling Bare Metal Orchestrator.

11. Update the storage_volume attribute in the file inventory/my-cluster/group_vars/all.yml as shown. The gluster_volume_type attribute appears further down in the file and should be set to none for a single node deployment.

# Gluster storage partition
# use a non boot secondary partition
# WARNING: this partition will be wiped before installation and after uninstallation
# The same storage partition name must be configured on each node.
storage_volume: "/dev/sdb1"
.
.
.
gluster_volume_type: "none"

NOTE: Optionally, if a docker-based external registry is used with Bare Metal Orchestrator, update the external_registry attribute with the ipaddress:port of the external registry. If the value is left blank, the internal registry is used.

12. Run the lsblk command and confirm the correct mounted partition assignments on the Global Controller.

The following is an example of the correct mounted partition assignments, where sda is the first partition with sda1, sda2, and sda3. The second partition is sdb.

sda
  sda1
  sda2
  sda3
sdb
  sdb1

CAUTION: Incorrect mounted partition assignments result in the loss of the Bare Metal Orchestrator cluster when you upgrade the system. Reboot the VM to reorder the mounted partition assignments until partitions sda and sdb are correctly mounted as shown.

13. Update the Common Language Location Identifier (CLLI) information for the Global Controller site in the file singlenode-site.yaml and add a unique location identifier plus the mandatory location parameters shown in the following example.

For example:

metadata:
  id: MiamiFL-1
  city: Miami
  state: Florida
  address: "123 Main Street, FL"
  country: USA
  latLong: "37.404882, -121.978486"

14. Run the following commands to deploy the Global Controller cluster:

sudo ansible-playbook ssh-copy-heketi.yaml -i inventory/my-cluster/hosts.ini
sudo ansible-playbook setup.yaml -i inventory/my-cluster/hosts.ini

15. When prompted, enter the sudo password Dell1234 for the Global Controller.

Next steps

Proceed to verify that the nodes were created; see Viewing nodes.

If the cluster installation fails or you need to reinstall the cluster, you can uninstall the cluster and then redeploy it; see Uninstall and redeploy Global Controller and HA nodes.

If server onboarding fails because of an IP address conflict, see Change default CIDR subnets for Bare Metal Orchestrator.


Configure an HA Bare Metal Orchestrator after deployment

Prerequisites

Check the subnet of your servers for potential conflicts before proceeding to configure Bare Metal Orchestrator.

CAUTION: A VM that has an IP address within the reserved subnet ranges 10.42.0.0/16 and 10.43.0.0/16 will fail to onboard when deploying the Global Controller. For more information, see Prerequisites.

For a high availability (HA) Bare Metal Orchestrator configuration, the two redundant HA nodes and the Load Balancers must be set up and reachable from the GC host. For more information, see the HA and Load Balancer setup requirements in the Prerequisites.

Ensure that each server in the five-node HA cluster is assigned a unique hostname. For example: bmo-manager-1 (for the Global Controller node), bmo-manager-2, bmo-manager-3, bmo-manager-lb-1, and bmo-manager-lb-2.

About this task

After you deploy the OVA and have prepared the required VMs for the HA deployment, use the following procedure to set up the Bare Metal Orchestrator nodes.

Steps

1. Connect to the ESXi server or the vCenter.

2. Go to the VM and launch the console.

3. Log in as the installer user with the password Dell1234.

4. The OVA is configured to use a static IP address by default. Edit /etc/network/interfaces as follows to configure the static IP address. Then save the file.

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
allow-hotplug ens33
iface ens33 inet static
  address <static-IP-address>
  netmask <netmask>
  gateway <gateway-IP-address>

where <static-IP-address> is your static IP address, <netmask> is the netmask, and <gateway-IP-address> is the IP address of your gateway.

NOTE: Optionally, if you are using DHCP auto-discovery on the Global Controller, you must configure a secondary interface and ensure that it is not routable. For instructions, see Configure a secondary interface for DHCP auto-discovery.

5. Do the following to configure a DNS IP address:

a. Edit /etc/resolv.conf to look like this:

domain localdomain
search localdomain
nameserver 8.8.8.8

NOTE: For air-gapped environments, use 127.0.0.1 as the nameserver IP address.

b. If required, update the nameserver IP address and save the file.

6. Edit /etc/hosts on the Global Controller to include the following, and then save the file:

IP addresses of the Global Controller and localhost
IP address of localregistry.io on the Global Controller

The following is an example /etc/hosts file for the Global Controller bmo-manager-1.

## this is an example /etc/hosts for bmo-manager-1
127.0.0.1 localhost
127.0.1.1 bmo-manager-1
<GC-IP-address> localregistry.io

where <GC-IP-address> is the IP address of the Global Controller. In this example, enter 127.0.1.1 as the value.

7. Enter reboot to reboot the system.

8. Reconnect to the ESXi server or the vCenter and launch the console.

9. Log in as the installer user with the password Dell1234.

10. Change directory to mw-ova-ansible.

11. Update the IP address of the Global Controller node, the two redundant HA nodes, and the load balancer in the inventory/my-cluster/hosts.ini file on the Global Controller node. Optionally, you can add worker node IP addresses.

The following is an example hosts.ini file for an HA deployment that uses internal distributed storage, where:

The GC node is bmo-manager-1.
The two redundant HA node hostnames are bmo-manager-2 and bmo-manager-3.
The load balancer hostnames in this example are bmo-manager-lb1 and bmo-manager-lb2.
The nodes to associate with the local GlusterFS storage system are listed under [gluster_nodes]. The three nodes in the HA control plane cluster when using internal GlusterFS storage are bmo-manager-1, bmo-manager-2, and bmo-manager-3.

[global_controller]
<bmo-manager-1-IP>

[ha]
<bmo-manager-2-IP>
<bmo-manager-3-IP>

[loadbalancer]
<bmo-manager-lb1-IP>
<bmo-manager-lb2-IP>

[gluster_nodes]
<bmo-manager-1-IP>
<bmo-manager-2-IP>
<bmo-manager-3-IP>

[node]
<worker-node-1-IP>
<worker-node-2-IP>

[node-remove]

[hosts]
;; this is IP for a five-node HA with internal storage and two worker nodes
<bmo-manager-1-IP> ansible_python_interpreter=/usr/bin/python3
<bmo-manager-2-IP> ansible_python_interpreter=/usr/bin/python3
<bmo-manager-3-IP> ansible_python_interpreter=/usr/bin/python3
<bmo-manager-lb1-IP> ansible_python_interpreter=/usr/bin/python3
<bmo-manager-lb2-IP> ansible_python_interpreter=/usr/bin/python3
<worker-node-1-IP> ansible_python_interpreter=/usr/bin/python3
<worker-node-2-IP> ansible_python_interpreter=/usr/bin/python3

If your deployment uses external distributed storage, then the hosts.ini might appear as follows:

[global_controller]
<bmo-manager-1-IP>

[ha]
<bmo-manager-2-IP>
<bmo-manager-3-IP>

[loadbalancer]
<bmo-manager-lb1-IP>
<bmo-manager-lb2-IP>

[gluster_nodes]
<external-gluster-node-1-IP>
<external-gluster-node-2-IP>
<external-gluster-node-3-IP>

[node]
<worker-node-1-IP>
<worker-node-2-IP>

[node-remove]

[hosts]
;; this is IP for a five-node HA with external storage and two worker nodes
<bmo-manager-1-IP> ansible_python_interpreter=/usr/bin/python3
<bmo-manager-2-IP> ansible_python_interpreter=/usr/bin/python3
<bmo-manager-3-IP> ansible_python_interpreter=/usr/bin/python3
<bmo-manager-lb1-IP> ansible_python_interpreter=/usr/bin/python3
<bmo-manager-lb2-IP> ansible_python_interpreter=/usr/bin/python3
<external-gluster-node-1-IP> ansible_python_interpreter=/usr/bin/python3
<external-gluster-node-2-IP> ansible_python_interpreter=/usr/bin/python3
<external-gluster-node-3-IP> ansible_python_interpreter=/usr/bin/python3
<worker-node-1-IP> ansible_python_interpreter=/usr/bin/python3
<worker-node-2-IP> ansible_python_interpreter=/usr/bin/python3

NOTE: When you are ready to scale up the Bare Metal Orchestrator cluster and run the add_workernode.yaml playbook, the worker nodes entered below [node] are added. The worker nodes you enter below [node-remove] are removed from the cluster if you run remove_workernode.yaml. For more information, see Scaling Bare Metal Orchestrator.

12. Log in to the VM that is hosting the Load Balancer node and update the /etc/hosts file on the Load Balancer as shown. Do this for each Load Balancer node.

127.0.0.1 localhost
127.0.1.1 bmo-manager-lb1
<bmo-manager-1-IP> bmo-manager-1
<bmo-manager-2-IP> bmo-manager-2
<bmo-manager-3-IP> bmo-manager-3

13. For each of the redundant HA nodes, log in to the VM that is hosting the node and update the /etc/hosts file on that node as follows:

## this is an example /etc/hosts for bmo-manager-2
127.0.0.1 localhost
127.0.1.1 bmo-manager-2
<GC-IP-address> localregistry.io

## this is an example /etc/hosts for bmo-manager-3
127.0.0.1 localhost
127.0.1.1 bmo-manager-3
<GC-IP-address> localregistry.io

14. Reconnect to the ESXi server or the vCenter and relaunch the console (if necessary) and log in as the installer user if your previous session closed. Then, continue to edit the HA section of the file inventory/my-cluster/group_vars/all.yml as follows:

Set the rke2_ha_mode attribute to true.
Uncomment the lines needed to deploy a multi-node control plane.
Add the Load Balancer virtual IP (VIP) address.
Ensure the gluster_user attribute is set to admin.
Optionally, if a docker-based external registry is used with Bare Metal Orchestrator, update the external_registry attribute with the ipaddress:port of the external registry. If the value is left blank, the internal registry is used.
Do not change the default gluster_volume_type replica count of two replications.

The following is an example of the HA section of the all.yml file that is set for a high availability configuration.

# Deploy the cluster in HA mode
rke2_ha_mode: true
# Uncomment values to deploy multi node control-plane after setting rke2_ha_mode: true
ha_worker_ip: "{{ hostvars[groups]['ha'] | default(groups['ha']) }}"
lb_ip_1: "{{ hostvars[groups['loadbalancer'][0]]['ansible_host'] | default(groups['loadbalancer'][0]) }}"
lb_ip_2: "{{ hostvars[groups['loadbalancer'][1]]['ansible_host'] | default(groups['loadbalancer'][1]) }}"
# Uncomment and set the hostname of the loadbalancers
lb_hostname_1: "bmo-manager-lb-1"
lb_hostname_2: "bmo-manager-lb-2"
lb_vip_ip: "<load-balancer-VIP-address>"
# Gluster storage partition
# use a non boot secondary partition
# WARNING: this partition will be wiped before installation and after uninstallation
# The same storage partition name must be configured on each node.
storage_volume: "/dev/sdb1"
.
.
.
#Set gluster volume type as replicate: or none.
gluster_volume_type: "replicate:2"

NOTE: The GlusterFS storage partition name is /dev/sdb1 by default in the OVA. The same storage partition name must be set on each HA node to match the Global Controller.

15. Run the lsblk command and confirm the correct mounted partition assignments on the Global Controller, and on the two redundant HA nodes.

The following is an example of the correct mounted partition assignments, where sda is the first partition with sda1, sda2, and sda3. The second partition is sdb.

sda
  sda1
  sda2
  sda3
sdb
  sdb1

CAUTION: Incorrect mounted partition assignments result in the loss of the Bare Metal Orchestrator cluster when you reinstall the system. Reboot the VM to reorder the mounted partition assignments until partitions sda and sdb are correctly mounted as shown.

16. Update the Common Language Location Identifier (CLLI) information for the Global Controller site in the file singlenode-site.yaml and add a unique location identifier plus the mandatory location parameters shown in the following example.

For example:

metadata:
  id: MiamiFL-1
  city: Miami
  state: Florida
  address: "123 Main Street, FL"
  country: USA
  latLong: "37.404882, -121.978486"

17. Run the following commands in the order listed to deploy the cluster. This step also deploys the SSH keys required for high availability. When prompted, enter the sudo password Dell1234 for the Global Controller node, as well as the root passwords for the HA nodes and Load Balancer hosts.

sudo ansible-playbook ssh-copy-heketi.yaml -i inventory/my-cluster/hosts.ini
sudo ansible-playbook setup.yaml -i inventory/my-cluster/hosts.ini
sudo ansible-playbook ssh-copy-ha.yaml -i inventory/my-cluster/hosts.ini
sudo ansible-playbook add-ha-node.yaml -i inventory/my-cluster/hosts.ini
sudo ansible-playbook setup-lb.yaml -i inventory/my-cluster/hosts.ini

Next steps

Proceed to verify that the nodes were created; see Viewing nodes.

If the cluster installation fails or you need to reinstall the cluster, you can uninstall the cluster and then redeploy it; see Uninstall and redeploy Global Controller and HA nodes.

If server onboarding fails because of an IP address conflict, see Change default CIDR subnets for Bare Metal Orchestrator.

Configure a secondary interface for DHCP auto-discovery

About this task

If you are using DHCP auto-discovery on the Global Controller, you must configure a secondary interface on Bare Metal Orchestrator using this procedure. For instructions to configure DHCP, see the Bare Metal Orchestrator Command Line Interface User's Guide.

To configure a secondary interface for DHCP auto-discovery:

Steps

1. Log in to the Global Controller node console as the installer user with the password Dell1234.

2. Edit /etc/network/interfaces.d/* to add the secondary interface, then save the file.

The following example shows a secondary interface of ens37.

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
allow-hotplug ens33
iface ens33 inet static
  address <static-IP-address>
  netmask <netmask>
  gateway <gateway-IP-address>

allow-hotplug ens37
iface ens37 inet static
  address <DHCP-server-IP-address>
  netmask <netmask>

where <DHCP-server-IP-address> is the IP address of your DHCP server and <netmask> is the netmask.

NOTE: Ensure that the primary interface ens33 is routable and that the second interface is not routable.
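One way to confirm that the secondary interface is not routable is to check the routing table: ens37 should carry only its local subnet and the default route should remain on ens33, matching the example above:

ip route show dev ens37    # should list only the local DHCP subnet
ip route show default      # the default route should point at ens33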

3. Change the directory to mw-ova-ansible.

cd mw-ova-ansible

4. Using an editor such as Vim, edit the hosts.ini file.

vim inventory/my-cluster/hosts.ini

5. Add the secondary interface IP address for the Global Controller. If this is a high availability (HA) deployment, you must also add the secondary interface IP address for the two redundant HA nodes.


The following is an example hosts.ini file for a high availability deployment that has secondary interface IP addresses configured, where the GC node is bmo-manager-1 and the two redundant HA nodes are bmo-manager-2 and bmo- manager-3:

[global_controller]
<bmo-manager-1-IP>

[ha]
<bmo-manager-2-IP>
<bmo-manager-3-IP>

[loadbalancer]
<bmo-manager-lb1-IP>
<bmo-manager-lb2-IP>

[gluster_nodes]
<bmo-manager-1-IP>
<bmo-manager-2-IP>
<bmo-manager-3-IP>

[secondary_ip]
<bmo-manager-1-secondary-IP>
<bmo-manager-2-secondary-IP>
<bmo-manager-3-secondary-IP>

[node]
<worker-node-1-IP>
<worker-node-2-IP>

[node-remove]

[hosts]
;; this is IP for a five-node HA with internal storage and two worker nodes
<bmo-manager-1-IP> ansible_python_interpreter=/usr/bin/python3
<bmo-manager-2-IP> ansible_python_interpreter=/usr/bin/python3
<bmo-manager-3-IP> ansible_python_interpreter=/usr/bin/python3
<bmo-manager-lb1-IP> ansible_python_interpreter=/usr/bin/python3
<bmo-manager-lb2-IP> ansible_python_interpreter=/usr/bin/python3
<worker-node-1-IP> ansible_python_interpreter=/usr/bin/python3
<worker-node-2-IP> ansible_python_interpreter=/usr/bin/python3

6. Save the file.

7. Edit the file inventory/my-cluster/group_vars/all.yaml and add the secondary IP address for the Global Controller. For HA deployments, add secondary IP addresses for the Global Controller and the two redundant HA nodes.

The following is an example of the lines in the all.yaml file to update and remove the comments, where cp1 is Global Controller, and cp2 and cp3 are the two redundant HA nodes.

# Add Secondary IPs for Certificate Generation. Uncomment cp1_secondary_ip for singlenode only. Uncomment all 3 for HA
#cp1_secondary_ip: "{{ hostvars[groups['secondary_ip'][0]]['ansible_host'] | default(groups['secondary_ip'][0]) }}"
#cp2_secondary_ip: "{{ hostvars[groups['secondary_ip'][1]]['ansible_host'] | default(groups['secondary_ip'][1]) }}"
#cp3_secondary_ip: "{{ hostvars[groups['secondary_ip'][2]]['ansible_host'] | default(groups['secondary_ip'][2]) }}"

8. When you're done, save the file and exit the editor.


Change default CIDR subnets for Bare Metal Orchestrator

Bare Metal Orchestrator reserves IP addresses in subnet ranges 10.42.0.0/16 and 10.43.0.0/16 for Global Controller cluster communication. If your server workload for Bare Metal Orchestrator uses IP addresses in those subnets and you cannot change them to a different subnet, then you can change the default CIDR subnets for Bare Metal Orchestrator.

Prerequisites

If you need to change the default cluster-cidr and service-cidr subnets for Bare Metal Orchestrator, do that before you configure Bare Metal Orchestrator and deploy the Global Controller.

If you deployed the Global Controller and onboarding failed because of an IP address conflict, then you must do the following:

1. Change the default CIDR subnets for Bare Metal Orchestrator.
2. Perform the procedure to Uninstall and redeploy Global Controller and HA nodes.

About this task

To change the default CIDR subnets for Bare Metal Orchestrator, do the following:

Steps

1. Establish a CLI session on the VM that has Bare Metal Orchestrator installed and log in as the installer user with the password Dell1234.

2. Change directory:

cd mw-ova-ansible

3. Using an editor such as vim, edit the file rke2-server.service and change the line ExecStart=/usr/local/bin/rke2 server to the following:

ExecStart=/usr/local/bin/rke2 server --cluster-cidr=<cluster-CIDR> --service-cidr=<service-CIDR>

where <cluster-CIDR> and <service-CIDR> are valid private CIDR subnet values, for example: 172.27.0.0/16
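Editing the service file alone does not apply the change. On a systemd host you would normally reload the unit definition and restart the service afterwards; the rke2-server service name is taken from the file edited above, and this restart step is a sketch rather than part of the original procedure:

sudo systemctl daemon-reload
sudo systemctl restart rke2-server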

Verify Global Controller partition assignments

For Bare Metal Orchestrator to function properly, two partitions are mounted on the Global Controller, as well as on each of the two redundant high availability (HA) nodes in an HA configuration. You must ensure the partitions are mounted correctly before upgrading Bare Metal Orchestrator to avoid critical data loss.

About this task

For example, there are two mounted partitions: sda and sdb. The Bare Metal Orchestrator cluster must run on the first partition (sda) and the GlusterFS distributed storage must run on the second partition (sdb).

If the mounted partitions are incorrectly assigned, you can reboot the nodes to reset the mounted partition assignments.
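A minimal sketch of that recovery step on an affected node, assuming the installer user has sudo rights as used elsewhere in this guide:

sudo reboot
# After the node is back up, log in again and re-check the assignments:
lsblk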

CAUTION: Incorrect mounted partition assignments result in the loss of the Bare Metal Orchestrator cluster when you reinstall the system.

Steps

1. Establish a CLI session on the VM that has Bare Metal Orchestrator installed and log in as the installer user with the password Dell1234.

2. Run the lsblk command and confirm the correct mounted partition assignments on the Global Controller.


The following is an example of the correct mounted partition assignments, where sda is the first partition with sda1, sda2, and sda3. The second partition is sdb.

sda
  sda1
  sda2
  sda3
sdb
  sdb1
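For orientation, a fuller lsblk listing on a correctly assigned node might look roughly like the following; the sizes and mount points shown are illustrative only and will differ in your deployment:

NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0  200G  0 disk
├─sda1   8:1    0    1M  0 part
├─sda2   8:2    0    1G  0 part /boot
└─sda3   8:3    0  199G  0 part /
sdb      8:16   0  500G  0 disk
└─sdb1   8:17   0  500G  0 part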

For an HA configuration, repeat this step for each of the two redundant HA nodes.

Verify Global Controller node creation

To verify that the Global Controller node was successfully created, follow the procedure in Viewing nodes. If successfully created, the Global Controller node is displayed in the output.

Viewing nodes

View nodes to verify they are deployed.

Prerequisites

Bare Metal Orchestrator is deployed.

About this task

Use the following procedure to verify that the Global Controller node is deployed.

Steps

1. Open a CLI session on the Bare Metal Orchestrator VM.

2. Run the following command:

bmo get node

Results

The node details are displayed. For more information about the fields, see the node field definitions in the Bare Metal Orchestrator Command Line Interface User's Guide. The following example output shows deployed nodes.

NAME            ON-BOARDED   SITE     AGE   INTERNAL-IP
bmo-manager-1                gc       13d   111.11.0.11
worker1                      austin   13d   100.10.0.10

Uninstall and redeploy Global Controller and HA nodes

If the Global Controller cluster or one of the high availability (HA) nodes fails to deploy during the Bare Metal Orchestrator deployment, you can uninstall the nodes and redeploy them.

About this task

Use this procedure to uninstall the Global Controller node for a single node Bare Metal Orchestrator deployment or all nodes for an HA deployment, and then redeploy the nodes.


Steps

1. Establish a CLI session on the VM that has Bare Metal Orchestrator installed and log in as the installer user with the password Dell1234.

2. For a single node deployment, do the following:

a. Run:

sudo ansible-playbook uninstall.yaml -i inventory/my-cluster/hosts.ini

b. After the system reboots, run:

sudo ansible-playbook cleanup.yaml -i inventory/my-cluster/hosts.ini

c. Run the following commands to redeploy the Global Controller cluster and enter the sudo password Dell1234 for the Global Controller when prompted:

sudo ansible-playbook ssh-copy-heketi.yaml -i inventory/my-cluster/hosts.ini
sudo ansible-playbook setup.yaml -i inventory/my-cluster/hosts.ini

3. For an HA deployment, do the following:

a. Uninstall each of the nodes in the following order:

sudo ansible-playbook uninstall-lb.yaml -i inventory/my-cluster/hosts.ini
sudo ansible-playbook uninstall-ha-nodes.yaml -i inventory/my-cluster/hosts.ini
sudo ansible-playbook uninstall.yaml -i inventory/my-cluster/hosts.ini

b. After the system reboots, run:

sudo ansible-playbook cleanup.yaml -i inventory/my-cluster/hosts.ini

c. Run the following commands in the order listed to redeploy the cluster. When prompted, enter the sudo password Dell1234 for the Global Controller node, as well as the root passwords for the HA nodes and Load Balancer hosts.

sudo ansible-playbook ssh-copy-heketi.yaml -i inventory/my-cluster/hosts.ini
sudo ansible-playbook setup.yaml -i inventory/my-cluster/hosts.ini
sudo ansible-playbook ssh-copy-ha.yaml -i inventory/my-cluster/hosts.ini
sudo ansible-playbook add-ha-node.yaml -i inventory/my-cluster/hosts.ini
sudo ansible-playbook setup-lb.yaml -i inventory/my-cluster/hosts.ini

This step also redeploys the SSH keys for high availability.

Next steps

Proceed to verify node creation, see Viewing nodes.

Uninstall Bare Metal Orchestrator

About this task

Use the following procedure to uninstall Bare Metal Orchestrator from the Global Controller node and all worker nodes, as well as the two HA nodes and the Load Balancer for HA configurations.

We recommend performing a backup to an external MinIO S3 storage location before uninstalling Bare Metal Orchestrator. For information about backups, see the Bare Metal Orchestrator Command Line Interface User's Guide.

CAUTION: Uninstalling completely removes all Bare Metal Orchestrator components from the nodes.


Steps

1. Establish a CLI session on the server that has Bare Metal Orchestrator installed and log in as the installer user with the password Dell1234.

2. Change directory to mw-ova-ansible.

cd mw-ova-ansible

3. Do one of the following:

For a single node setup, run:

sudo ansible-playbook uninstall.yaml -i inventory/my-cluster/hosts.ini

For an HA setup, do the following to uninstall the Load Balancer, the two HA nodes, and the Global Controller. Run:

sudo ansible-playbook uninstall-lb.yaml -i inventory/my-cluster/hosts.ini
sudo ansible-playbook uninstall-ha-nodes.yaml -i inventory/my-cluster/hosts.ini
sudo ansible-playbook uninstall.yaml -i inventory/my-cluster/hosts.ini

After the system reboots, run:

sudo ansible-playbook cleanup.yaml -i inventory/my-cluster/hosts.ini

4. When prompted, enter the sudo password Dell1234 for the Global Controller.

Create a new user and log in to the web UI

After Bare Metal Orchestrator is deployed, you can use the CLI or the web UI to continue the setup. To use the web UI, you need to create a user with the Global Admin role.

Prerequisites

You must create a user-profile YAML file for the new admin user.

About this task

Use the default dell user account to create the new admin user YAML file. This new admin user account must have the Global Admin role. See Role-Based Access Control in the Bare Metal Orchestrator Command Line Interface User's Guide for additional guidance.

Refer to the following example user YAML file content to help you create the admin user YAML file with the Global Admin role.

name: Admin
email: admin@dell.com
country: USA
city: Denver
organization: Dell
orgUnit: BDC
province: Co
roles:
- global-admin

Steps

1. Establish a CLI session on the Bare Metal Orchestrator virtual appliance.

2. Run the following command to create a user config file that contains your access credentials:

bmo create user -f <user-profile>.yaml > <user-config>.yaml

For example:

bmo create user -f admin.yaml > adminconfig.yaml

The adminconfig.yaml file is saved locally on the Dell Technologies Bare Metal Orchestrator virtual appliance.

3. Copy the generated user config file (adminconfig.yaml in this example) to your local machine.
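One way to do this from a Linux or macOS workstation is with scp; the source path below assumes the file was generated in the installer user's home directory, so adjust it to wherever you ran the previous command:

scp installer@<Bare Metal Orchestrator IP>:/home/installer/adminconfig.yaml .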

4. Open a web browser and enter https://[Bare Metal Orchestrator IP address or hostname] in the address bar to display the Login screen.


NOTE: For high availability configurations, enter the virtual IP (VIP) address for the Load Balancers.

Figure 4. Bare Metal Orchestrator Login screen

5. Click Select User Config File.

6. Select your user config file and click Open to log in to the web UI.

The user config file contains your access credentials and assigned role.


Scaling Bare Metal Orchestrator

Scale the Bare Metal Orchestrator cluster.

Topics:

Scaling overview
Edit the hosts file
Create worker nodes
Verify worker nodes are created

Scaling overview

After the Global Controller node is created, you can scale the cluster by adding worker nodes. This chapter describes the prerequisites for adding worker nodes, provides information about the Ansible scripts, and explains how to update the hosts file and add worker nodes.

Edit the hosts file

Prerequisites

You require the IP addresses of the nodes you want to edit, including the Global Controller and worker nodes. For high availability (HA) deployments, this includes the two HA nodes, the Load Balancer virtual IP address (VIP), and the three GlusterFS nodes.

About this task

To edit the hosts file:

Steps

1. Log in to the Global Controller node console as the installer user with the password Dell1234.

2. Change the directory to mw-ova-ansible.

cd mw-ova-ansible

3. Using an editor such as Vim, edit the hosts.ini file.

vim inventory/my-cluster/hosts.ini

4. Update the IP addresses of the nodes.

The GC node is bmo-manager-1.

The two redundant HA node hostnames are bmo-manager-2 and bmo-manager-3.

The load balancer hostnames in this example are bmo-manager-lb1 and bmo-manager-lb2.

Nodes to associate with the local GlusterFS storage system. Those are the three nodes in the HA control plane cluster when using internal GlusterFS storage: bmo-manager-1, bmo-manager-2 and bmo-manager-3.

The following is an example hosts.ini file for a high availability deployment that uses internal distributed storage:

[global_controller]
<Global Controller IP>

[ha]
<HA node 2 IP>
<HA node 3 IP>

[loadbalancer]
<Load Balancer 1 IP>
<Load Balancer 2 IP>

[gluster_nodes]
<Global Controller IP>
<HA node 2 IP>
<HA node 3 IP>

[node]
<Worker node 1 IP>
<Worker node 2 IP>

[node-remove]

[hosts]
;; this is the IP list for a five-node HA with internal storage and two worker nodes
<Global Controller IP> ansible_python_interpreter=/usr/bin/python3
<HA node 2 IP> ansible_python_interpreter=/usr/bin/python3
<HA node 3 IP> ansible_python_interpreter=/usr/bin/python3
<Load Balancer 1 IP> ansible_python_interpreter=/usr/bin/python3
<Load Balancer 2 IP> ansible_python_interpreter=/usr/bin/python3
<Worker node 1 IP> ansible_python_interpreter=/usr/bin/python3
<Worker node 2 IP> ansible_python_interpreter=/usr/bin/python3

5. Save the file and quit the editor.

6. To copy the SSH credentials to the Global Controller and worker nodes, run:

sudo ansible-playbook ssh-copy.yaml -i inventory/my-cluster/hosts.ini

7. When prompted, enter the sudo password Dell1234 for the Global Controller and then enter the root password for the worker node.

Create worker nodes

Use this procedure to create worker nodes that you can manage with the Global Controller.

Prerequisites

Ensure that bare metal servers and virtual machines are available and reachable from the Global Controller.
Ensure that worker node servers and virtual machines are accessible over the network using the root account.
Gather the IPv4 addresses and administrator credentials of the bare metal servers and virtual machines.
Install NTP on the worker node server before adding the node to the Bare Metal Orchestrator cluster. In an Ubuntu or Debian Linux distribution, you can run apt-get install ntp on the worker node as root to install NTP (see the sketch after this list).
The following systemd Linux distributions were tested on the bare metal servers and virtual machines: Ubuntu 19.10 LTS, Ubuntu 20.04 LTS, and Debian 11.
To manage a server at the remote site, the network that the server is on must be routable to the primary network of the worker node or routable to the primary network of the Global Controller site.
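A minimal sketch of that NTP preparation on an Ubuntu or Debian worker node, assuming root access and the stock ntp package and service names:

apt-get update
apt-get install -y ntp
systemctl enable --now ntp   # ensure the time service is running
timedatectl                  # optional: confirm the clock is synchronized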

Steps

1. Edit the inventory/my-cluster/hosts.ini file to add the IP address of the worker node to be created. For more information about how to edit the hosts file, see Edit the hosts file.

2. Enter the following commands to run the Ansible playbooks and pass the YAML and hosts files needed to create a worker node:

sudo ansible-playbook ssh-copy.yaml -i inventory/my-cluster/hosts.ini
sudo ansible-playbook add_workernode.yaml -i inventory/my-cluster/hosts.ini

3. Enter exit to end the SSH session.


Verify worker nodes are created

To verify a worker node is created, see the View nodes section in the Bare Metal Orchestrator Command Line Interface User's Guide. If a node is successfully created, the new node is displayed in the output.

After the worker nodes are successfully created, you must create sites to manage your infrastructure. For more information, see the Managing Sites section in the Bare Metal Orchestrator Command Line Interface User's Guide.


Deleting Nodes

This chapter provides instructions on how to delete worker nodes and verify that worker nodes are deleted.

Topics:

Delete worker nodes
Verify worker nodes are deleted

Delete worker nodes

Use this procedure to delete worker nodes.

About this task

Before you can delete a worker node, you must manually add the IP addresses of the worker nodes to delete in the [node-remove] section of the inventory/my-cluster/hosts.ini file. We recommend creating a backup before deleting a worker node and its associated sites. For information about backups, see the Bare Metal Orchestrator Command Line Interface User's Guide.

CAUTION: Deleting the Global Controller node uninstalls Bare Metal Orchestrator.

Steps

1. Log in to the Global Controller node console as the installer user with the password Dell1234.

2. Change the directory to mw-ova-ansible.

cd mw-ova-ansible

3. Using an editor such as Vim, add the IP addresses of the worker nodes to delete in the [node-remove] section of the hosts.ini file.

For example:

# vim inventory/my-cluster/hosts.ini

[node-remove]
< Worker node 1 IP >
< Worker node 2 IP >

4. Save the file and quit the editor.

5. Run the Ansible playbook and pass the YAML and hosts files needed to delete the worker node. Do the following:

a. Run:

sudo ansible-playbook remove_workernode.yaml -i inventory/my-cluster/hosts.ini

b. If you get a warning that Bare Metal Orchestrator sites exist on the worker node and that removal of the worker node failed, run the following command to force the deletion:

sudo ansible-playbook remove_workernode.yaml -i inventory/my-cluster/hosts.ini -e param=force_delete

6. Enter exit to end the SSH session.

Verify worker nodes are deleted

To verify worker nodes are deleted, see the View nodes section in the Bare Metal Orchestrator Command Line Interface User's Guide. If a node is successfully deleted, the deleted node is not displayed in the output.


Upgrading Bare Metal Orchestrator

You can upgrade Bare Metal Orchestrator on the Global Controller cluster and all worker nodes to the latest available version. For high availability, all nodes are upgraded.

Topics:

Upgrade overview
High-level upgrade workflow
Upgrade the Global Controller and one or more worker nodes

Upgrade overview

You can upgrade all Bare Metal Orchestrator components for one or more sites in the cluster (including the Global Controller and worker nodes). Upgrading the CRD component is a required step in the process.

All necessary images, binaries, and version components needed for the upgrade are included in the upgrade bundle. The amount of time it takes to upgrade a cluster depends on the number of worker nodes installed at each site and the number of sites.

The CRD custom resources and site component upgrade step upgrades the API version of all components in all sites. The components that are upgraded include:

Firmware media
Hardware profiles
Media
License media
Profile telemetries
SDN controllers
Servers
Server telemetries
Sites
Stack deployers
Switches
Switch port configurations
Tenants

Perform a Bare Metal Orchestrator upgrade only along a sequential upgrade path; for example, from release 1.2.0 to release 1.2.1, or from release 1.3 to 1.3.1. Contact Dell Support or your Dell representative for information on how to obtain an upgrade bundle and the supported upgrade paths.

For instructions to perform an upgrade, see Upgrade the Global Controller and one or more worker nodes.

CAUTION: Service disruptions are possible during the upgrade process. We recommend scheduling the update when traffic at the affected sites is low.

A high-level workflow showing how sites are updated in the Bare Metal Orchestrator cluster is available in High-level upgrade workflow.

For help troubleshooting an upgrade, contact your Dell Support representative.

High-level upgrade workflow

Initiate the Bare Metal Orchestrator upgrade from the Global Controller node. The Site Controller is the first component that is upgraded on the Global Controller.


The Site Controller upgrades the Site Manager for the Global Controller, and then updates the Site Managers for the remote sites. Each Site Manager upgrades the local Bare Metal Orchestrator components for their individual node.

NOTE: You must update the YAML definition files before you update the sites; otherwise, new functions introduced in the upgrade may not work properly.

Figure 5. Bare Metal Orchestrator upgrade workflow

A general high-level workflow to upgrade one or multiple sites in a cluster is as follows:

1. Optional: Before upgrading the system, create a backup to recover the cluster in case of a catastrophic failure; see the Bare Metal Orchestrator Command Line Interface User's Guide.

2. Obtain the upgrade bundle. For information on how to obtain an upgrade bundle and the supported upgrade paths, contact Dell Support or your Dell representative.

3. Upload the upgrade bundle file to a folder that you create on the VM running the Bare Metal Orchestrator software that you want to update.

4. Extract the upgrade orchestrator.

5. Update the CRD component to update the YAML definition files.

6. Initiate the upgrade process for one or multiple sites.

The Site Controller on the Global Controller node updates. The Site Controller updates the Site Manager for the Global Controller and for each site. Each Site Manager updates the components for their respective site.

7. Verify that the sites have been updated successfully.

NOTE: You must update the YAML definition files before you update the sites; otherwise, new functions introduced in the upgrade may not work properly.

Upgrade the Global Controller and one or more worker nodes

Use this procedure to update Bare Metal Orchestrator from the last release of Bare Metal Orchestrator to the next release in sequence. All Bare Metal Orchestrator components are updated to the most recent release.

Prerequisites

Observe the following:

Bare Metal Orchestrator is installed, and the Global Controller and worker nodes are configured.
If you are not upgrading all sites in the cluster, then you need to know the site names of the specific sites you want to upgrade; for example, if you want to test the upgrade on only a few sites.


Obtain the upgrade bundle. For information on how to obtain an upgrade bundle and the supported upgrade paths, contact Dell Support or your Dell representative.

An FTP client, such as WinSCP, is required to upload the upgrade bundle file to the OVA.

Before upgrading the system, we recommend that you back up the cluster in case the cluster needs to be recovered; see the Bare Metal Orchestrator Command Line Interface User's Guide.

CAUTION: To avoid inadvertently overwriting the Bare Metal Orchestrator cluster, verify the configuration of the mounted partitions on the Global Controller, as well as on the two redundant high availability (HA) nodes for an HA configuration, before upgrading Bare Metal Orchestrator. See Verify Global Controller partition assignments.

About this task

The upgrade bundle file contains all the necessary images, binaries, and the version file you need for the upgrade installation. All sites are upgraded, starting with the Global Controller and followed by the remote sites.

To upgrade Bare Metal Orchestrator components on a VM:

Steps

1. Log in to the Global Controller node console as the installer user with the password Dell1234.

2. Create a directory for the upgrade bundle file and change to that directory.

mkdir <directory-name>
cd <directory-name>

For example:

mkdir upgrade
cd upgrade

3. Using an FTP client tool such as WinSCP, connect to your OVA server as the installer user and copy the upgrade bundle .tar file to the upgrade folder you created.
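If you prefer a command-line transfer from a Linux or macOS workstation, an scp copy would look roughly like the following; the destination path assumes the folder from step 2 was created under /home/installer/mw-ova-ansible (matching step 4), and the filename is the one from the example in step 5:

scp bmo_bundle-v0.3.555_TAG.tar.gz installer@<Bare Metal Orchestrator IP>:/home/installer/mw-ova-ansible/upgrade/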

4. Using the Global Controller node console, go to /home/installer/mw-ova-ansible/upgrade.

cd /home/installer/mw-ova-ansible/upgrade

5. Extract the upgrade bundle file, where <bundle-filename> is the upgrade bundle tar filename. Run:

tar -xvzf <bundle-filename>

For example:

tar -xvzf bmo_bundle-v0.3.555_TAG.tar.gz

6. Optional: From the directory where the extracted bundle files are located, run the following command to list the upgrade components and record the versions for comparison after the upgrade:

bmo version

7. Change directory to mw_bundle.

cd mw_bundle

8. Update the CRD component to update the YAML definition files. Run:

./mw-install -i upgrade crd

9. Upgrade one or multiple sites.

To upgrade all sites in the cluster, run:

./mw-install -i upgrade site --all

To upgrade one or more sites in the cluster, run the following command and enter the individual site names separated by a space:

./mw-install -i upgrade site sitename1 sitename2 sitename3

The upgrade images upload to the local registry that is running in the cluster for each site you specify.

NOTE: The upgrade process can take some time, depending on the size and number of sites. Do not interrupt the upgrade process.
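After the upgrade completes, a quick sanity check is to run the version command again, as in step 6, and compare the output with the component versions you recorded before the upgrade:

bmo version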


To be able to print Dell Bare Metal Orchestrator 1.3 Software Installation Guide, simply download the document to your computer. Once downloaded, open the PDF file and print the Dell Bare Metal Orchestrator 1.3 Software Installation Guide as you would any other document. This can usually be achieved by clicking on “File” and then “Print” from the menu bar.