
Bare Metal Orchestrator 1.4 Installation Guide

Version 1.4

Abstract

This guide describes how to install Bare Metal Orchestrator on a hypervisor. It also explains how to scale Bare Metal Orchestrator, modify and delete nodes, set up high availability, and upgrade Bare Metal Orchestrator.

Dell Technologies Solutions

December 2022 Rev. 06

Notes, cautions, and warnings

NOTE: A NOTE indicates important information that helps you make better use of your product.

CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid the problem.

WARNING: A WARNING indicates a potential for property damage, personal injury, or death.

© 2021 - 2022 Dell Inc. or its subsidiaries. All rights reserved. Dell Technologies, Dell, and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be trademarks of their respective owners.

Contents

Preface
Revision history
Product support
    Contacting Dell Support

Chapter 1: Bare Metal Orchestrator installation overview
    Introduction
    Bare Metal Orchestrator high availability
    Installation workflow
    Access and accounts
    About Ansible

Chapter 2: Installing Bare Metal Orchestrator
    Prerequisites
    Download the OVA
    Deploy the OVA on an ESXi server
    Deploy the OVA image on vCenter
    Configure a single node Bare Metal Orchestrator after deployment
    Configure an HA Bare Metal Orchestrator after deployment
    Configure a secondary interface for DHCP auto-discovery
    Change default CIDR subnets for Bare Metal Orchestrator
    Verify Global Controller partition assignments
    Verify Global Controller node creation
        Viewing nodes
    Uninstall and redeploy Global Controller and HA nodes
    Uninstall Bare Metal Orchestrator
    Log in to the web UI

Chapter 3: Scaling Bare Metal Orchestrator
    Scaling overview
    Edit the hosts file
    Create worker nodes
    Verify worker nodes are created
    Delete worker nodes
    Verify worker nodes are deleted

Chapter 4: SSO integration
    Single sign-on integration overview
    Integrate Microsoft Azure

Chapter 5: Upgrading Bare Metal Orchestrator
    Upgrade overview
    High-level upgrade workflows
    Upgrade the Global Controller and one or more worker nodes
    Use maintenance mode


Preface

Purpose

This guide provides instructions to install Bare Metal Orchestrator and create worker nodes for an initial cluster deployment, as well as how to upgrade Bare Metal Orchestrator software and set up high availability.

Audience

This guide is primarily intended for administrators who are responsible for deploying and upgrading Bare Metal Orchestrator nodes.

Disclaimer

This guide may contain language that is not consistent with current Dell Technologies guidelines. Dell Technologies plans to update the guide in subsequent releases to revise the language accordingly.


Revision history

This revision history lists major changes to this document.

Table 1. Revisions

Date | Release | Description
December 2022 | 1.4 | Single sign-on integration for Microsoft Azure added; maintenance mode to safeguard the cluster during updates; changing the default Bare Metal Orchestrator hostname and accessing the web UI using the hostname; minor edits across the guide
September 2022 | 1.3 | Minor edits and changes to the port requirements; updates to the Upgrade overview chapter
May 2022 | 1.2 | High availability updated with distributed storage; single node requirements and deployment process updated
March 2022 | 1.1 | High availability deployment added; Upgrading nodes chapter added; minor changes across the guide
November 2021 | 1.0 | Inaugural release


Product support

Resources to help you provision the infrastructure and fix problems.

Documentation

You can find these Bare Metal Orchestrator documents on the Bare Metal Orchestrator Documentation site:

Bare Metal Orchestrator Release Notes
Bare Metal Orchestrator Installation Guide
Bare Metal Orchestrator Command Line Interface User's Guide
Bare Metal Orchestrator Web User Interface Guide
Bare Metal Orchestrator Command Line Interface Reference Guide
Bare Metal Orchestrator Network Planning Guide
Bare Metal Orchestrator API Guide

The Bare Metal Orchestrator API Guide is on the Dell Technologies Developer Portal site.

Bare Metal Orchestrator product support page

Bare Metal Orchestrator Product Support Overview

Where to get help

The Dell Technologies Support site (https://www.dell.com/support) contains important information about products and services, including drivers, installation packages, product documentation, knowledge base articles, and advisories.

A valid support contract and account might be required to access all the available information about a specific Dell Technologies product or service.

Dell Technologies Support contact information

Dell provides several online and telephone-based support and service options. Availability varies by country or region and product, and some services may not be available in your area.

NOTE: If you do not have an active Internet connection, you can find contact information from your purchase invoice, packing slip, bill, or Dell product catalog.

Call 1-800-782-4362 or the support phone number for your country or region. Go to Dell Support to find the support phone number for your country or region. Tell the support person that you want to open a service request for Bare Metal Orchestrator. Give the support person your Product ID and a description of the problem.

You can also go to Dell Support and search for Bare Metal Orchestrator. The product support page requires you to sign in and enter your Product ID.

Contacting Dell Support

How to contact your Dell account representative, Dell technical support, or Dell customer service.

Steps

1. Go to Dell Support and select a support category.

2. From the Choose a Country/Region list, verify your country or region. Then, select the appropriate service or support link.


Bare Metal Orchestrator installation overview

This chapter describes a single node Bare Metal Orchestrator cluster and a five-node high availability (HA) cluster with distributed storage, and provides installation workflows and account information.

Topics:

Introduction
Bare Metal Orchestrator high availability
Installation workflow
Access and accounts
About Ansible

Introduction

The Dell Technologies Bare Metal Orchestrator software is provided as a virtual appliance that can be installed on a hypervisor. The virtual appliance is based on Kubernetes RKE2 and is delivered as an Open Virtual Appliance (OVA) file.

To install Bare Metal Orchestrator, you must download the OVA and deploy it on a hypervisor.

Bare Metal Orchestrator is installed on a single node RKE2 (next-generation) cluster. The node that Bare Metal Orchestrator is installed on is called the Global Controller (GC) node.

The Global Controller is a fully contained management cluster with onboard services and components that function as a site. This cluster is also called the GC site. The GC site simplifies the administration and management of Bare Metal Orchestrator and is the default site that is created during OVA deployment.

You can deploy Bare Metal Orchestrator in one of the following configurations:

A scalable, single node RKE2 (next-generation) cluster
A five-node high availability (HA) cluster with internal or external distributed storage

NOTE: You cannot convert a single node Bare Metal Orchestrator deployment to a five-node high availability deployment. For more information, see Bare Metal Orchestrator high availability.

After a successful deployment, you can scale the Bare Metal Orchestrator node to a multi-node cluster. Scaling is done by adding one or more worker nodes to the Global Controller node. Worker nodes support the creation of remote sites. For more information about sites, see the Bare Metal Orchestrator Command Line Interface User's Guide.

For upgrade instructions, see Upgrade overview.

Bare Metal Orchestrator high availability

With high availability (HA), the Bare Metal Orchestrator OVA is deployed on a five-node HA cluster by default. The Global Controller (GC) services deploy on the first node, which is a fully functional, scalable Bare Metal Orchestrator cluster to which the two HA nodes are added. The two HA nodes function as a redundant pair for HA failover and must be reachable from the GC host.

The Global Controller site data and services are fully replicated on the two HA nodes. A keepalive is used to monitor the availability of services on each node in the control plane. An automatic failover is triggered if a node failure is detected.

A redundant pair of Load Balancers provides highly reliable management access to the Bare Metal Orchestrator web UI, CLI, and API using a virtual IP (VIP) address. The VIP must be set to an available IP address on the same subnet as the two Load Balancers.


Each Load Balancer is considered a node in the five-node HA cluster and must be reachable from the GC. These servers must support NGINX.

Load Balancer key tasks:

Setting the virtual IP (VIP) address of the Load Balancers to an available IP address in the same subnet as the two Load Balancers
Directing front-end traffic to the three control plane nodes for HA redundancy
Managing load distribution
Managing control planes

The following figure shows the architecture of a five-node HA deployment with distributed storage. The three control plane nodes and the redundant pair of Load Balancers comprise the five-node HA cluster. All nodes and the distributed storage volumes are active.

Figure 1. Bare Metal Orchestrator five-node HA cluster with distributed storage

GlusterFS provides distributed file storage for the Global Controller and the two redundant HA nodes in the control plane cluster. The distributed storage volumes replicate the Bare Metal Orchestrator cluster data when using PersistentVolumeClaim (PVC).

Distributed storage can be deployed locally in the three-node control plane cluster or externally. For external storage deployments, the VMs hosting the storage volumes must be reachable by the HA cluster. A minimum of three storage nodes are required.

NOTE: The remote site uses local-path as the storage class.

Observe the following:

You cannot upgrade a single node Bare Metal Orchestrator deployment to a five-node HA deployment.
When using a local copy of the CLI as a remote client, you must specify the virtual IP (VIP) address of the server that is hosting the Load Balancers in the user's config file. For more about using the CLI as a remote client, see the Bare Metal Orchestrator Command Line Interface User's Guide.
If any two control plane nodes in a high availability deployment fail at the same time, you must reboot the Global Controller node before high availability functionality can resume. Using the CLI, log in to the Global Controller as installer and enter reboot.

Installation workflow

You can install Bare Metal Orchestrator as a single node cluster or as a five-node HA cluster with distributed storage.

The following diagram shows the high-level steps to install a single node Bare Metal Orchestrator cluster.


Figure 2. Single node Bare Metal Orchestrator cluster installation flow

The following diagram shows the high-level steps to install a five-node HA cluster that uses either internal or external distributed storage.


Figure 3. High availability Bare Metal Orchestrator cluster installation flow

Access and accounts

Default dell user and installer accounts are available for the initial Dell Technologies Bare Metal Orchestrator Open Virtual Appliance (OVA) deployment.

When you SSH into Bare Metal Orchestrator for an initial OVA deployment using the default dell user or the installer account, you can run CLI commands with elevated levels of administrator access. We recommend that you change the default passwords using the $ passwd Linux command as soon as possible and record the new passwords for future reference.
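For example, the following illustrates changing the password for the account that you are currently logged in as; the exact prompts vary by Linux distribution:

$ passwd
Current password:
New password:
Retype new password:
passwd: password updated successfully

Repeat this for the other default account, or run sudo passwd <username> as a user with sudo privileges to change another account's password.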

At the time of the OVA deployment, an Identity and Access Management (IAM) admin user is created by default. You will be prompted to provide a password for this user while configuring Bare Metal Orchestrator after deployment. For more information, see Configure a single node Bare Metal Orchestrator after deployment or Configure an HA Bare Metal Orchestrator after deployment.

After Bare Metal Orchestrator is deployed, you can use the CLI or the web UI to continue the setup. To log in to the Bare Metal Orchestrator CLI or the web UI for the first time, use the credentials of the default admin user created. For information about logging in to the web UI, see Log in to the web UI.

For information about user roles and creating user accounts, see the Bare Metal Orchestrator Command Line Interface User's Guide.

Adding the Bare Metal Orchestrator hostname before logging in to the web UI

The default hostname for Bare Metal Orchestrator is bmo-globalcontroller. Optionally, you can change the hostname of Bare Metal Orchestrator in the all.yml file when you set up the Bare Metal Orchestrator cluster after installing the OVA.


You must use the Bare Metal Orchestrator hostname when accessing Bare Metal Orchestrator using the web user interface.

For Windows-based management consoles, add an entry for the Bare Metal Orchestrator hostname in the hosts file. For example:

100.10.0.1 bmo-globalcontroller

where 100.10.0.1 is an example IP address for Bare Metal Orchestrator. For high availability configurations, enter the VIP of the Load Balancer.

Now enter the Bare Metal Orchestrator hostname in your web browser to access the web UI login page. For example:

https://bmo-globalcontroller

For Linux-based management consoles, add the Bare Metal Orchestrator hostname in the /etc/hosts file.
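For example, on a Linux management console you could append an entry such as the following to /etc/hosts, where 100.10.0.1 is an example IP address (use the Bare Metal Orchestrator IP address or, for high availability configurations, the VIP of the Load Balancer):

echo "100.10.0.1 bmo-globalcontroller" | sudo tee -a /etc/hosts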

About Ansible

Ansible is an open-source software provisioning and configuration management tool. In Bare Metal Orchestrator, you must use Ansible and a text editor to:

Edit the hosts.ini file. The hosts.ini file lists IP addresses for the Global Controller and all worker nodes, as well as the two high availability (HA) nodes and the Load Balancer for HA configurations.

Run a playbook. Playbooks are the YAML files that you store and manage, and pass to Ansible to run as needed. Every time a playbook is run, Ansible checks for the listed nodes in the hosts file, establishes a connection with the nodes, and uses this information to create or delete remote nodes.
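For example, playbook runs in this guide take the following general form, where setup.yaml is the playbook and inventory/my-cluster/hosts.ini is the hosts file; the installation procedures later in this guide list the exact playbooks to run and the order to run them in:

sudo ansible-playbook setup.yaml -i inventory/my-cluster/hosts.ini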


Installing Bare Metal Orchestrator

This chapter provides instructions on how to deploy the Bare Metal Orchestrator OVA.

Topics:

Prerequisites
Download the OVA
Deploy the OVA on an ESXi server
Deploy the OVA image on vCenter
Configure a single node Bare Metal Orchestrator after deployment
Configure an HA Bare Metal Orchestrator after deployment
Configure a secondary interface for DHCP auto-discovery
Change default CIDR subnets for Bare Metal Orchestrator
Verify Global Controller partition assignments
Verify Global Controller node creation
Uninstall and redeploy Global Controller and HA nodes
Uninstall Bare Metal Orchestrator
Log in to the web UI

Prerequisites

Hardware requirements

The following tables describe the minimum hardware requirements for Bare Metal Orchestrator OVA deployment and for worker nodes installed at remote sites.

NOTE: For Bare Metal Orchestrator to operate properly after OVA deployment, a minimum of 15 GB of free space must be maintained on the Global Controller (GC) and the worker nodes. For high availability (HA) deployments, the two redundant HA nodes must also maintain a minimum of 15 GB of free space.

The following table lists the hardware requirements for the Global Controller in a single node Bare Metal Orchestrator cluster deployment and in high availability deployments. For high availability, use the same hardware requirements for the Global Controller and the two redundant HA nodes.

Table 2. Hardware requirements for Global Controller and HA nodes

Resource: CPU
  Single node cluster: eight CPU cores, physical or virtual
  HA deployment: eight CPU cores, physical or virtual

Resource: Memory
  Single node cluster: 32 GB RAM
  HA deployment: 32 GB RAM

Resource: Hard Disk
  Single node cluster: 200 GB (partition 1, sda) and 250 GB (partition 2, non-boot partition, sdb)
  HA deployment: 200 GB (partition 1, sda) and 250 GB (partition 2, non-boot partition, sdb)

Resource: Network Interface Card (NIC)
  Single node cluster: two NICs. The two NICs are installed by default with the OVA deployment. One NIC is required for network management and the other NIC for the Dynamic Host Configuration Protocol (DHCP) configuration. You can add additional NICs for every DHCP subnet.
  HA deployment: two NICs per VM. Two NICs on the Global Controller node are installed by default with the OVA deployment. One NIC is required for network management and the other NIC for the Dynamic Host Configuration Protocol (DHCP) configuration. You can add additional NICs for every DHCP subnet.

NOTE: Partition 2 is used for GlusterFS storage. By default, the OVA reserves 250 GB of SSD memory for storage. To increase the size of partition 2, consult your Dell representative and the Bare Metal Orchestrator Network Planning Guide.

The following table lists the worker node hardware requirements in a single node Bare Metal Orchestrator cluster deployment and for an HA deployment.

Table 3. Bare Metal Orchestrator cluster worker node hardware requirements

Resource: CPU
  Single node cluster: four CPU cores, physical or virtual
  HA deployment: eight CPU cores, physical or virtual

Resource: Memory
  Single node cluster: 16 GB RAM
  HA deployment: 32 GB RAM

Resource: Hard Disk
  Single node cluster: 100 GB (free space)
  HA deployment: 200 GB (free space)

Resource: Network Interface Card (NIC)
  Single node cluster: two NICs. The two NICs are installed by default with the OVA deployment. One NIC is required for network management and the other NIC for the Dynamic Host Configuration Protocol (DHCP) configuration. You can add additional NICs for every DHCP subnet.
  HA deployment: two NICs. The two NICs are installed by default with the OVA deployment. One NIC is required for network management and the other NIC for the Dynamic Host Configuration Protocol (DHCP) configuration. You can add additional NICs for every DHCP subnet.

Firmware recommendations

Ensure you have the latest recommended Dell firmware versions installed on your hardware. Consult the documentation for your iDRAC device.

The following table describes the validated Dell PowerEdge 15th generation servers.

Table 4. Dell PowerEdge 15th generation servers

Validated models                   Supported BIOS versions   Supported iDRAC firmware versions
PowerEdge R6515 Rack Server (a)    2.3.6                     5.00.10.20, 5.10.30.00
PowerEdge XE2420 Edge Server       2.12.3                    5.00.10.20, 5.10.30.00
PowerEdge R650 Rack Server         1.3.8                     5.00.10.20, 5.10.30.00
PowerEdge R750 Rack Server         1.3.8                     5.00.10.20, 5.10.30.00
PowerEdge XR11 Rack Server (b)     1.3.8                     5.00.10.20, 5.10.30.00
PowerEdge XR12 Rack Server (b)     1.0.2                     5.00.10.20, 5.10.30.00

a. Only AMD MILAN CPUs are supported.
b. Excluding HBA series controllers.

The following table describes the validated Dell PowerEdge 14th generation servers.


Table 5. Dell PowerEdge 14th generation servers

Validated models               Supported BIOS version   Supported iDRAC firmware versions
PowerEdge R640 Rack Server     2.12.2                   5.00.10.20, 5.10.30.00
PowerEdge R740 Rack Server     2.12.2                   5.00.10.20, 5.10.30.00
PowerEdge R740xd Rack Server   2.12.2                   5.00.10.20, 5.10.30.00

Software requirements

The following table lists the supported hypervisor for the OVA deployment. A dedicated server is required.

Table 6. Supported hypervisor

Hypervisor      Supported versions
VMware ESXi     6.7 Update 3, 7.0 Update 3

The following table lists the supported management software for the OVA deployment. A dedicated server is required.

Table 7. Supported management software

Management software   Supported versions
VMware vCenter        6.7 Update 3, 7.0 Update 3

The following distributed file storage system software is supported for a Bare Metal Orchestrator deployment.

GlusterFS version 9.2

By default, GlusterFS is installed and running on the Bare Metal Orchestrator host after you import the OVA.

For high availability (HA) deployments, you must have the GlusterFS software installed and running. For HA deployments with external storage, at least three VMs are required. Each VM hosting the distributed storage must have GlusterFS installed and configured.

NOTE: If you choose to set up your own, external GlusterFS storage cluster for use with Bare Metal Orchestrator, you assume responsibility to manage and administer that distributed storage cluster.

Reserved IP addresses and network requirements

Bare Metal Orchestrator reserves IP addresses in subnet ranges 10.42.0.0/16 and 10.43.0.0/16 by default for the Global Controller cluster communications.

CAUTION: Check for potential conflicts before deploying the Global Controller cluster on a VM. The VM will fail to onboard the Global Controller if it is on the same subnet that Bare Metal Orchestrator uses for internal communications.

If you cannot resolve IP address conflicts by changing the subnet of your VM, you can change the default cluster-cidr and service-cidr subnets for Bare Metal Orchestrator; see Change default CIDR subnets for Bare Metal Orchestrator.
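As a quick check, you can list the IPv4 addresses on the VM and confirm that none of them fall within the reserved 10.42.0.0/16 and 10.43.0.0/16 ranges. For example:

ip -4 addr show

Verify that no interface address begins with 10.42. or 10.43. before deploying the Global Controller cluster.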

The following are the network requirements for Bare Metal Orchestrator to be able to connect to the Integrated Dell Remote Access Controller (iDRAC):

Bare Metal Orchestrator and the iDRAC should be Layer 3 reachable.
The OVA must be assigned an IP address that is accessible from the iDRACs of the servers that Bare Metal Orchestrator will manage.
The OVA cannot be behind a Network Address Translation (NAT) unless the iDRACs of the target servers are also in the same NATed network.
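A simple way to confirm reachability is to ping the iDRAC and test HTTPS connectivity from the Bare Metal Orchestrator VM, where <iDRAC-IP-address> is a placeholder for the iDRAC IP address of a server you plan to manage:

ping -c 3 <iDRAC-IP-address>
curl -k -s -o /dev/null -w "%{http_code}\n" https://<iDRAC-IP-address>

An HTTP response code such as 200 or 302 from the curl command indicates that the iDRAC web interface is reachable.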


Port requirements

If you are using a firewall, you must open all ports that are listed in the following table to ensure that Bare Metal Orchestrator functions correctly. The following table lists the ports that Bare Metal Orchestrator uses.

Table 8. Port requirements

Port | Required on | Description
22 | Global Controller (GC) and remote sites | Used for SSH access to run Ansible playbooks and for GlusterFS distributed storage.
67 | Global Controller (GC) and remote sites | Used when DHCP is configured. Optionally open on the remote site if PXE is used.
69 | Global Controller (GC) and remote sites | Used by the TFTP server. Optionally open on the remote site if PXE is used.
TCP/81 (HTTP) | GC site | Used for downloading the ESXi driver into the endpoint.
123 | Remote site | Used for NTP synchronization.
441 | GC site | Used by the global web server to store operating systems and firmware images.
442 | GC site | Used by the internal web server.
TCP/442 (HTTPS) | GC site | Used for downloading firmware/ESXi images.
443 (HTTPS) and 80 (HTTP) | GC site | Used by the web user interface.
2379 (TCP) | GC site | Used by the ETCD client for data access and management.
2380 (TCP) | GC site | Used by the ETCD peer for data access and management.
5047 | GC site | Used by localregistry.io as a Docker container repository.
6443 (TCP) | GC site | Used for communicating with remote sites and the application programming interface (API).
8081 | GC site | Used for setting up remote sites.
8082 | GC site | Heketi CLI port.
8472 (UDP) | GC and remote sites | Used for Flannel VXLAN.
9345 (TCP) | GC site | Used for API communications.
10250 | GC and remote sites | Used by the kubelet node agent to register the node and manage containers.
30500 | GC site | Used by the global MinIO S3 to store the backups.
32569 | GC site | Used for the Heketi pod to communicate with the server.

Consult the Gluster documentation to configure the firewall on each of the GlusterFS nodes if you are using external distributed storage with Bare Metal Orchestrator. For more information, see the Gluster FS Quick Start Guide on the GlusterFS website.
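How you open these ports depends on the firewall that you use. The following is only an illustrative sketch for a Debian or Ubuntu host using iptables, allowing the API port 6443; adapt the ports, protocols, and source ranges to your own security policy:

sudo iptables -A INPUT -p tcp --dport 6443 -j ACCEPT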

Global Controller node requirements

Before deploying the Bare Metal Orchestrator OVA, you must set the virtual memory map count (vm.max_map_count) to a minimum of 262144 on the server that is used for the Global Controller node.

CAUTION: If the virtual memory is not properly configured on the Global Controller node, Bare Metal Orchestrator logs do not display in the OpenSearch dashboard.

To set the server's default virtual memory limit to 262144 and make it persistent:

1. Check the default virtual memory limit, run:

$ sudo sysctl vm.max_map_count

2. Change the memory limit in the sysctl.conf file and save the file:

$ sudo vi /etc/sysctl.conf
vm.max_map_count=262144

3. Run the following command to apply the change in the current session:

$ sudo sysctl -p

Worker node requirements

Ensure that worker node servers and virtual machines are accessible over the network using the root account.

To manage a server at the remote site, the network that the server is on must be routable to the primary network of the worker node or routable to the primary network of the Global Controller site.

The following table lists the worker node requirements. For worker node hardware requirements, see Hardware requirements.

Table 9. Worker node requirements

Software: Linux systemd distribution
  Supported versions/requirements: Ubuntu 19.10 LTS, Ubuntu 20.04 LTS, or Debian 11

Software: NTP
  Supported versions/requirements: Install NTP on the worker node server before adding the node to the Bare Metal Orchestrator cluster. In an Ubuntu or Debian Linux distribution, you can run apt-get install ntp on the worker node as root to install NTP.

HA and Load Balancer requirements

For the two redundant HA nodes and Load Balancers, you must provide four VMs based on either Debian 11 or Ubuntu 20 Linux.

Ensure the VMs that host the two redundant HA nodes and Load Balancers are reachable from the Global Controller host over the network. From the Global Controller host, you should be able to run ssh root@<VM-IP-address> to each of the four VMs, where <VM-IP-address> is the IP address of the VM.
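For example, you can quickly confirm SSH reachability from the Global Controller host with a loop such as the following, where the angle-bracket values are placeholders for the IP addresses of your two HA node VMs and two Load Balancer VMs:

for ip in <HA-node-2-IP> <HA-node-3-IP> <load-balancer-1-IP> <load-balancer-2-IP>; do
  ssh root@$ip hostname
done

Each command should return the hostname of the corresponding VM without errors.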

The two redundant HA nodes have the same hardware requirements as the Global Controller (GC). Set up both HA VMs as described in the following table:

Table 10. HA node requirements

Item Details

Set hostnames. bmo-manager-2 and bmo-manager-3 (respectively)

Install NTP, Python 3, and Logical Volume Manager (LVM).

Run apt-get install ntp python3 lvm2 as root or sudo.

Set up two hard disks on each VM and partition.

Each HA node (bmo-manager-2 and 3) requires two separate hard disks: /dev/sda and /dev/sdb. These partition names must exactly match the default partitions that are created on the Global Controller when the OVA is deployed.

Configure the first partition to have 200 GB (free space) on /dev/sda. Configure the second partition as a non-boot partition with 250 GB (free space) on /dev/sdb for the GlusterFS storage system.


Install GlusterFS. Run the following commands, where GlusterFS version 9.2 and above are supported. For example:

apt update
add-apt-repository ppa:gluster/glusterfs-9
apt install --assume-yes glusterfs-server
gluster --version

Set the Opensearch minimum virtual memory limit, and the maximum number of watchers and user instances.

Edit /etc/sysctl.conf to change the default parameters to the following values and then save the file.

$ vi /etc/sysctl.conf
vm.max_map_count=262144
fs.inotify.max_user_watches=1048576
fs.inotify.max_user_instances=256

To enable the change in the current session, run:

$ sysctl -p
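After completing the HA node setup described above, you can optionally confirm on each HA node that GlusterFS is installed and that the glusterd service is running. For example:

gluster --version
systemctl status glusterd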

Set up both Load Balancer VMs as described in the following table:

Table 11. Load Balancer requirements

Item Details

Set hostnames. bmo-manager-lb-1 and bmo-manager-lb-2 (respectively)

Install NTP and python 3. Run apt-get install ntp python3 as root or sudo.

Install NGINX and keepalived. Then stop NGINX and disable the process.

Run apt-get install nginx keepalived as root.

Then run the following commands to stop and disable the NGINX web server:

systemctl stop nginx
systemctl disable nginx

Add dell and installer users. Run the following commands to add the dell and installer users without assigning passwords:

useradd dell
useradd installer
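After completing the Load Balancer setup described above, you can optionally confirm on each Load Balancer VM that NGINX is stopped and disabled. For example:

systemctl is-active nginx
systemctl is-enabled nginx

The commands should report inactive and disabled, respectively.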

Distributed storage requirements

By default, an internal GlusterFS storage is already included in the Bare Metal Orchestrator OVA. This is the second 250 GB partition on the Global Controller node.

For high availability (HA) deployments with internal distributed storage (default), each VM that is hosting the three nodes in the control plane cluster must have a second, non-boot partition of at least 250 GB of free space available for storage.

CAUTION: The second, non-boot partition that is used for GlusterFS distributed storage is wiped if Bare Metal Orchestrator is uninstalled.

For high availability (HA) deployments with external distributed storage, the external GlusterFS storage nodes must have a second, non-boot partition of at least 250 GB of free space available for storage. Use the same partition name sdb for the second partition on each external GlusterFS storage node.

To optionally use external distributed storage with Bare Metal Orchestrator, observe the following:

Each VM hosting the distributed storage must have GlusterFS installed on a non-boot partition with at least 200 GB of available storage. Each VM requires two separate hard disks to match the default partitions on the Global Controller (/dev/sda and /dev/sdb). Configure the first hard disk to have a single partition with 200 GB of free space. The partition on the second disk sdb that is used for the GlusterFS storage system must be a non-boot partition with at least 250 GB of free space available.

NOTE: The partition names must exactly match the default partitions that are created on the Global Controller when the OVA is deployed. The first partition on the first disk is sda and the second partition on the second disk is sdb.

A minimum of three GlusterFS storage nodes are required. Each partition used for external storage must be assigned the same partition name. The external data storage volumes must be reachable from the HA cluster.

Consult the Gluster documentation to configure the firewall on each of the GlusterFS nodes if you are using external distributed storage with Bare Metal Orchestrator. For more information, see the Gluster FS Quick Start Guide on the GlusterFS website.

Download the OVA

You can download and deploy the OVA on an ESXi server or a vCenter. The OVA is prepackaged with the necessary software and system settings.

Download the OVA from Dell Digital Locker.

The key contents of the OVA file are:

Operating system
Ansible playbook
Local registries and sample configuration templates

NOTE: The local registry hosts all Docker images required for the Bare Metal Orchestrator components.

Deploy the OVA on an ESXi server

About this task

Perform this procedure on the Global Controller node to deploy the OVA image for an ESXi server.

Steps

1. Log in to the ESXi server.

2. In the Navigator panel, go to Virtual Machines, right-click, and select Create/Register VM.

The New virtual machine window is displayed.

3. In Select creation type, select Deploy a virtual machine from an OVF or OVA file, and then click Next.

4. In Enter a name for the virtual machine, enter a hostname for the VM.

5. Select the path to the OVA or drag and drop the OVA image into the wizard, and then click Next.

6. Select a datastore to use for your VM, and then click Next.

7. In Network mappings, accept the defaults for the VLAN destination networks.

NOTE: You configure destination networks after the OVA is deployed.

8. In Disk provisioning, leave Thin as the selected value.

9. Clear the Power on automatically check box.

10. Click Next and then click Finish to start the OVA deployment.

11. After the deployment completes, right-click the VM and click Edit Settings.

12. In Select networks, select Adapter 1 as the VLAN adapter for OVA management and then click Ok.

13. Power on the VM.

Results

After successful OVA deployment, the Bare Metal Orchestrator VM starts.


NOTE: An error message may appear during startup that states the virtual device sound cannot connect because no corresponding device is available on the host. Click Answer question to dismiss the message. This improves the time it takes for the VM to start up.

Next steps

Proceed with the post-deployment node setup.

For a single node Bare Metal Orchestrator setup, see Configure a single node Bare Metal Orchestrator after deployment. For an HA setup, see Configure an HA Bare Metal Orchestrator after deployment.

Deploy the OVA image on vCenter

About this task

Perform this procedure on the Global Controller node to deploy the OVA image in vCenter.

Steps

1. Launch the VMware vSphere web client.

2. In the navigator, under the Hosts and Clusters icon, right-click the ESXi cluster, and select Deploy OVF Template.

3. In Select an OVF template, go to Local file and choose the OVA you have downloaded.

4. Click Next.

5. In Select a name and folder, specify a unique hostname and target location for the VM. Click Next.

6. In Select a compute resource, select a destination compute resource to deploy the OVF template and then click Next.

7. In Review details, accept the template defaults and click Next.

8. In Select storage, select a datastore to store the deployed OVF template and then click Next.

9. In Select networks, accept the defaults for the VLAN destination networks and then click Next.

NOTE: You configure destination networks after the OVA is deployed.

10. In Ready to complete, click Finish to start the OVF deployment.

11. After the deployment completes, right-click the VM and click Edit Settings.

12. In Select networks, select Adapter 1 as the VLAN adapter for OVA management and then click Ok.

13. Power on the VM.

Results

After successful OVA deployment, the Bare Metal Orchestrator VM starts.

Next steps

Proceed with the post-deployment node setup.

For a single-node Bare Metal Orchestrator setup, see Configure a single node Bare Metal Orchestrator after deployment. For an HA setup, see Configure an HA Bare Metal Orchestrator after deployment.

Configure a single node Bare Metal Orchestrator after deployment

Prerequisites

Check the subnet of the VM for potential conflicts before proceeding to configure Bare Metal Orchestrator and deploy the Global Controller.


CAUTION: A VM that has an IP address within the reserved subnet ranges 10.42.0.0/16 and 10.43.0.0/16 will fail to onboard when deploying the Global Controller. For more information, see Prerequisites.

About this task

After you deploy the OVA, use the following procedure to set up Bare Metal Orchestrator as a single node.

Steps

1. Connect to the ESXi server or the vCenter.

2. Go to the VM and launch the console.

3. Log in as the installer user with the password Dell1234.

4. The OVA is configured to use a static IP address by default. Edit /etc/network/interfaces (for example, with vi) as follows to configure the static IP address. Then save the file.

source /etc/network/interfaces.d/*
# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
allow-hotplug ens33
iface ens33 inet static
address <static-IP-address>
netmask <netmask>
gateway <gateway-IP-address>

where <static-IP-address> is your static IP address, <netmask> is the netmask, and <gateway-IP-address> is the IP address of your gateway.

NOTE: Optionally, if you are using DHCP auto-discovery on the Global Controller, you must configure a secondary interface and ensure that it is not routable. For instructions, see Configure a secondary interface for DHCP auto-discovery.

5. Do the following to configure a DNS IP address:

a. Edit /etc/resolv.conf (for example, with vi) as follows, then save the file:

domain localdomain
search localdomain
nameserver 8.8.8.8
nameserver 8.8.4.4

The nameserver must point to the IP address of a valid and working DNS server. Change the default nameserver IP addresses of 8.8.8.8 and 8.8.4.4 as required.

NOTE: For air-gapped environments, use 127.0.0.1 as the nameserver IP address.

6. Edit /etc/hosts and add the following line to point to localregistry.io on the Global Controller, and then save the file.

<GC-IP-address> localregistry.io

where <GC-IP-address> is the IP address of the Global Controller node.

7. Enter reboot to reboot the system.

8. Log back into the VM as the installer user with the password Dell1234.

9. Change directory to mw-ova-ansible.

10. Update the IP address of the Global Controller node and the internal gluster_nodes in inventory/my-cluster/hosts.ini and optionally add worker node IP addresses.

vi hosts.ini

The following is an example hosts.ini file:

[global_controller]
<GC-IP-address>

[ha]

[loadbalancer]

[gluster_nodes]
<GC-IP-address>

[secondary_ip]
;; This section is optional.
;<GC-secondary-IP>         ;; set for single node and HA cluster
;<HA-node-2-secondary-IP>  ;; set for HA cluster
;<HA-node-3-secondary-IP>  ;; set for HA cluster

[node]

[node-remove]

[hosts]
<GC-IP-address> ansible_python_interpreter=/usr/bin/python3
<worker-node-1-IP> ansible_python_interpreter=/usr/bin/python3
<worker-node-2-IP> ansible_python_interpreter=/usr/bin/python3

The IP address values shown in angle brackets are placeholders; replace them with the actual IP addresses for your deployment.

NOTE: When you are ready to scale up the Bare Metal Orchestrator cluster and run the playbook, the worker nodes entered below [node] are added. The worker nodes you enter below [node-remove] are removed from the cluster if you run remove_workernode.yaml. For more information, see Scaling Bare Metal Orchestrator.

11. Update the storage_volume attribute in the file inventory/my-cluster/group_vars/all.yml as shown. The gluster_volume_type attribute appears further down in the file and should be set to none for a single node deployment.

cd inventory/my-cluster/group_vars
vi all.yml

The following shows an example all.yml file.

# Gluster storage partition
# use a non boot secondary partition
# WARNING: this partition will be wiped before installation and after uninstallation
# The same storage partition name must be configured on each node.
storage_volume: "/dev/sdb1"
.
.
.
gluster_volume_type: "none"

NOTE: Optionally, if a docker-based external registry is used with Bare Metal Orchestrator, update the external_registry attribute with the ipaddress:port of the external registry. If the value is left blank, the internal registry is used.

12. Optionally, change the default Bare Metal Orchestrator hostname in the inventory/my-cluster/group_vars/all.yml file to a hostname of your choice.

The following is an example of the hostname entry in the all.yml file, where the default hostname is bmo-globalcontroller.

# If the default setting is used, then access BMO web UI from a web browser using
# https://bmo-globalcontroller
# If using a web browser on a Windows PC, update C:\windows\system32\drivers\etc\hosts
# with the Bare Metal Orchestrator hostname.
keycloak_access_hostname: "bmo-globalcontroller"


For instructions to add the Bare Metal Orchestrator hostname to the hosts file on your management console, see Access and accounts.

13. Run the lsblk command and confirm the correct mounted partition assignments on the Global Controller.

The following is an example of the correct mounted partition assignments, where sda is the first partition with sda1, sda2, and sda5. The second partition is sdb.
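As an illustration only, a correctly ordered lsblk output resembles the following; the device sizes and mount points shown are assumptions that differ between deployments:

$ lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0  200G  0 disk
  sda1   8:1    0  199G  0 part /
  sda2   8:2    0    1K  0 part
  sda5   8:5    0  975M  0 part [SWAP]
sdb      8:16   0  250G  0 disk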

CAUTION: Incorrect mounted partition assignments result in the loss of the Bare Metal Orchestrator cluster when you upgrade the system. Reboot the VM to reorder the mounted partition assignments until partitions sda and sdb are correctly mounted as shown.

14. Update the Common Language Location Identifier (CLLI) information for the Global Controller site in the file singlenode-site.yaml and add a unique location identifier plus the mandatory location parameters shown in the following example.

cd mw-ova-ansible
vi singlenode-site.yaml

For example:

metadata:
  id: MiamiFL-1
  city: Miami
  state: Florida
  address: "123 Main Street, FL"
  country: USA
  latLong: "37.404882, -121.978486"

15. Run the following commands to deploy the Global Controller cluster:

sudo ansible-playbook ssh-copy-heketi.yaml -i inventory/my-cluster/hosts.ini
sudo ansible-playbook setup.yaml -i inventory/my-cluster/hosts.ini

16. When prompted, enter the sudo password Dell1234 for the Global Controller.

17. When prompted, you must create a temporary Identity and Access Management (IAM) password for a default admin user to complete the installation.

Record the temporary IAM admin user password for future reference. The first time you log in to Bare Metal Orchestrator using the CLI, API, or the web UI, you must enter the default credentials and change the password. For more information about Identity and Access Management, see Access and accounts.

Next steps

Proceed to verify that the nodes were created; see Viewing nodes.

If the cluster installation fails or you need to reinstall the cluster, you can uninstall the cluster and then redeploy it; see Uninstall and redeploy Global Controller and HA nodes.

If server onboarding fails because of an IP address conflict, see Change default CIDR subnets for Bare Metal Orchestrator.


Configure an HA Bare Metal Orchestrator after deployment

Prerequisites

Check the subnet of your servers for potential conflicts before proceeding to configure Bare Metal Orchestrator.

CAUTION: A VM that has an IP address within the reserved subnet ranges 10.42.0.0/16 and 10.43.0.0/16 will fail to onboard when deploying the Global Controller. For more information, see Prerequisites.

For a high availability (HA) Bare Metal Orchestrator configuration, the two redundant HA nodes and the Load Balancers must be set up and reachable from the GC host. For more information, see the HA and Load Balancer setup requirements in the Prerequisites.

Ensure that each server in the five-node HA cluster is assigned a unique hostname. For example: bmo-manager-1 (for the Global Controller node), bmo-manager-2, bmo-manager-3, bmo-manager-lb-1, and bmo-manager-lb-2.

About this task

After you deploy the OVA and have prepared the required VMs for the HA deployment, use the following procedure to set up the Bare Metal Orchestrator nodes.

Steps

1. Connect to the ESXi server or the vCenter.

2. Go to the VM and launch the console.

3. Log in as the installer user with the password Dell1234.

4. The OVA is configured to use a static IP address by default. Edit /etc/network/interfaces as follows to configure the static IP address. Then save the file.

source /etc/network/interfaces.d/*
# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
allow-hotplug ens33
iface ens33 inet static
address <static-IP-address>
netmask <netmask>
gateway <gateway-IP-address>

where <static-IP-address> is your static IP address, <netmask> is the netmask, and <gateway-IP-address> is the IP address of your gateway.

NOTE: Optionally, if you are using DHCP auto-discovery on the Global Controller, you must configure a secondary interface and ensure that it is not routable. For instructions, see Configure a secondary interface for DHCP auto-discovery.

5. Do the following to configure a DNS IP address:

a. Edit /etc/resolv.conf as follows, then save the file:

domain localdomain
search localdomain
nameserver 8.8.8.8
nameserver 8.8.4.4

The nameserver must point to the IP address of a valid and working DNS server. Change the default nameserver IP addresses of 8.8.8.8 and 8.8.4.4 as required.

NOTE: For air-gapped environments, use 127.0.0.1 as the nameserver IP address.

6. Edit /etc/hosts on the Global Controller to include the following, and then save the file:


IP addresses of the Global Controller and localhost
IP address of localregistry.io on the Global Controller

The following is an example /etc/hosts file for the Global Controller bmo-manager-1.

## this is example /etc/hosts for bmo-manager-1
127.0.0.1 localhost
127.0.1.1 bmo-manager-1
<GC-IP-address> localregistry.io

where <GC-IP-address> is the IP address of the Global Controller. In this example, enter 127.0.1.1 as the value.

7. Enter reboot to reboot the system.

8. Reconnect to the ESXi server or the vCenter and launch the console.

9. Log in as the installer user with the password Dell1234.

10. Change directory to mw-ova-ansible.

11. Update the IP address of the Global Controller node, the two redundant HA nodes, and the load balancer in the inventory/my-cluster/hosts.ini file on the Global Controller node. Optionally, you can add worker node IP addresses.

The following is an example hosts.ini file for an HA deployment that uses internal distributed storage, where:

The GC node is bmo-manager-1.

The two redundant HA node hostnames are bmo-manager-2 and bmo-manager-3.

The load balancer hostnames in this example are bmo-manager-lb-1 and bmo-manager-lb-2.

Nodes to associate with the local GlusterFS storage system. The three nodes in the HA control plane cluster when using internal GlusterFS storage are: bmo-manager-1, bmo-manager-2 and bmo-manager-3.

[global_controller]
<GC-IP-address>

[ha]
<HA-node-2-IP>
<HA-node-3-IP>

[loadbalancer]
<load-balancer-1-IP>
<load-balancer-2-IP>

[gluster_nodes]
<GC-IP-address>
<HA-node-2-IP>
<HA-node-3-IP>

[secondary_ip]
;; This section is optional.
;<GC-secondary-IP>         ;; set for single node and HA cluster
;<HA-node-2-secondary-IP>  ;; set for HA cluster
;<HA-node-3-secondary-IP>  ;; set for HA cluster

[node]
<worker-node-1-IP>
<worker-node-2-IP>

[node-remove]

[hosts]
;; this is IP for a five-node HA with internal storage and two worker nodes
<GC-IP-address> ansible_python_interpreter=/usr/bin/python3
<HA-node-2-IP> ansible_python_interpreter=/usr/bin/python3
<HA-node-3-IP> ansible_python_interpreter=/usr/bin/python3
<load-balancer-1-IP> ansible_python_interpreter=/usr/bin/python3
<load-balancer-2-IP> ansible_python_interpreter=/usr/bin/python3
<worker-node-1-IP> ansible_python_interpreter=/usr/bin/python3
<worker-node-2-IP> ansible_python_interpreter=/usr/bin/python3

The IP address values shown in angle brackets are placeholders; replace them with the actual IP addresses for your deployment.


If your deployment uses external distributed storage, then the hosts.ini might appear as follows:

[global_controller]
<GC-IP-address>

[ha]
<HA-node-2-IP>
<HA-node-3-IP>

[loadbalancer]
<load-balancer-1-IP>
<load-balancer-2-IP>

[gluster_nodes]
<external-storage-node-1-IP>
<external-storage-node-2-IP>
<external-storage-node-3-IP>

[node]
<worker-node-1-IP>
<worker-node-2-IP>

[node-remove]

[hosts]
;; this is IP for a five-node HA with external storage and two worker nodes
<GC-IP-address> ansible_python_interpreter=/usr/bin/python3
<HA-node-2-IP> ansible_python_interpreter=/usr/bin/python3
<HA-node-3-IP> ansible_python_interpreter=/usr/bin/python3
<load-balancer-1-IP> ansible_python_interpreter=/usr/bin/python3
<load-balancer-2-IP> ansible_python_interpreter=/usr/bin/python3
<external-storage-node-1-IP> ansible_python_interpreter=/usr/bin/python3
<external-storage-node-2-IP> ansible_python_interpreter=/usr/bin/python3
<external-storage-node-3-IP> ansible_python_interpreter=/usr/bin/python3
<worker-node-1-IP> ansible_python_interpreter=/usr/bin/python3
<worker-node-2-IP> ansible_python_interpreter=/usr/bin/python3

NOTE: When you are ready to scale up the Bare Metal Orchestrator cluster and run the add_workernode.yaml playbook, the worker nodes entered below [node] are added. The worker nodes you enter below [node-remove] are removed from the cluster if you run remove_workernode.yaml. For more information, see Scaling Bare Metal Orchestrator.

12. Log in to the VM that is hosting the Load Balancer node and update the /etc/hosts file on the Load Balancer as shown. Do this for each Load Balancer node.

127.0.0.1 localhost
127.0.1.1 bmo-manager-lb-1
<GC-IP-address> bmo-manager-1
<HA-node-2-IP> bmo-manager-2
<HA-node-3-IP> bmo-manager-3

13. For each of the redundant HA nodes, log in to the VM that is hosting the node and update the /etc/hosts file on the node as follows:

## this is an example /etc/hosts for bmo-manager-2
127.0.0.1 localhost
127.0.1.1 bmo-manager-2
<GC-IP-address> localregistry.io

## this is an example /etc/hosts for bmo-manager-3
127.0.0.1 localhost
127.0.1.1 bmo-manager-3
<GC-IP-address> localregistry.io


14. Reconnect to the ESXi server or the vCenter and relaunch the console (if necessary) and log in as the installer user if your previous session closed. Then, continue to edit the HA section of the file inventory/my-cluster/group_vars/all.yml as follows:

Set the rke2_ha_mode attribute to true.

Uncomment the lines needed to deploy a multi-node control plane.
Add the Load Balancer virtual IP (VIP) address.
Ensure the gluster_user attribute is set to admin.

Optionally, if a docker-based external registry is used with Bare Metal Orchestrator, update the external_registry attribute with the ipaddress:port of the external registry. If the value is left blank, internal registry is used.

Do not change the default gluster_volume_type replica count of two replications.

The following is an example of the HA section of the all.yml file that is set for a high availability configuration.

# Deploy the cluster in HA mode
rke2_ha_mode: true
# Uncomment values to deploy multi node control-plane after setting rke2_ha_mode: true
ha_worker_ip: "{{ hostvars[groups]['ha'] | default(groups['ha']) }}"
lb_ip_1: "{{ hostvars[groups['loadbalancer'][0]]['ansible_host'] | default(groups['loadbalancer'][0]) }}"
lb_ip_2: "{{ hostvars[groups['loadbalancer'][1]]['ansible_host'] | default(groups['loadbalancer'][1]) }}"
# Uncomment and set the hostname of the loadbalancers
lb_hostname_1: "bmo-manager-lb-1"
lb_hostname_2: "bmo-manager-lb-2"
lb_vip_ip: "<load-balancer-VIP-address>"
# Gluster storage partition
# use a non boot secondary partition
# WARNING: this partition will be wiped before installation and after uninstallation
# The same storage partition name must be configured on each node.
storage_volume: "/dev/sdb1"
.
.
.
# Set gluster volume type as replicate:<n> or none.
gluster_volume_type: "replicate:2"

NOTE: The GlusterFS storage partition name is /dev/sdb1 by default in the OVA. The same storage partition name must be set on each HA node to match the Global Controller.

15. Optionally, change the default Bare Metal Orchestrator hostname in the inventory/my-cluster/group_vars/all.yml file to a hostname of your choice.

The following is an example of the hostname entry in the all.yml file, where the default hostname is bmo-globalcontroller.

# If the default setting is used, then access the web UI from a web browser using
# https://bmo-globalcontroller
# If using a web browser on a Windows PC, update C:\windows\system32\drivers\etc\hosts
# with the Bare Metal Orchestrator hostname.
keycloak_access_hostname: "bmo-globalcontroller"

For instructions to add the Bare Metal Orchestrator hostname to the hosts file on your management console, see Access and accounts.

16. Run the lsblk command and confirm the correct mounted partition assignments on the Global Controller, and on the two redundant HA nodes.

The following is an example of the correct mounted partition assignments, where sda is the first partition with sda1, sda2, and sda5. The second partition is sdb.


CAUTION: Incorrect mounted partition assignments result in the loss of the Bare Metal Orchestrator cluster when you reinstall the system. Reboot the VM to reorder the mounted partition assignments until partitions sda and sdb are correctly mounted as shown.

17. Update the Common Language Location Identifier (CLLI) information for the Global Controller site in the file singlenode-site.yaml and add a unique location identifier plus the mandatory location parameters shown in the following example.

For example:

metadata:
  id: MiamiFL-1
  city: Miami
  state: Florida
  address: "123 Main Street, FL"
  country: USA
  latLong: "37.404882, -121.978486"

18. Run the following commands in the order listed to deploy the cluster. This step also deploys the SSH keys required for high availability. When prompted, enter the sudo password Dell1234 for the Global Controller node, as well as the root passwords for the HA nodes and Load Balancer hosts.

sudo ansible-playbook ssh-copy-heketi.yaml -i inventory/my-cluster/hosts.ini
sudo ansible-playbook setup.yaml -i inventory/my-cluster/hosts.ini
sudo ansible-playbook ssh-copy-ha.yaml -i inventory/my-cluster/hosts.ini
sudo ansible-playbook add-ha-node.yaml -i inventory/my-cluster/hosts.ini
sudo ansible-playbook setup-lb.yaml -i inventory/my-cluster/hosts.ini

19. When prompted, you must create a temporary Identity and Access Management (IAM) password for a default admin user to complete the installation.

Record the temporary IAM admin user password for future reference. The first time you log in to Bare Metal Orchestrator using the CLI, API, or the web user interface, you must enter the default credentials and change the password. For more information about Identity and Access Management, see Access and accounts.

Next steps

Proceed to verify that the nodes were created, see Viewing nodes.

If the cluster installation fails or you need to reinstall the cluster, you can uninstall the cluster and then redeploy it, see Uninstall and redeploy Global Controller and HA nodes.

If server onboarding fails because of an IP address conflict, see Change default CIDR subnets for Bare Metal Orchestrator.

Configure a secondary interface for DHCP auto-discovery

About this task

If you are using DHCP auto-discovery on the Global Controller, you must configure a secondary interface on Bare Metal Orchestrator using this procedure. For instructions to configure DHCP, see the Bare Metal Orchestrator Command Line Interface User's Guide.

To configure a secondary interface for DHCP auto-discovery:

Steps

1. Log in to the Global Controller node console as the installer user with the password Dell1234.

2. Edit /etc/network/interfaces.d/* to add the secondary interface, then save the file.

The following example shows a secondary interface of ens37.

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
allow-hotplug ens33
iface ens33 inet static
    address <IP address>
    netmask <netmask>
    gateway <gateway>

allow-hotplug ens37
iface ens37 inet static
    address <IP address>
    netmask <netmask>

where <IP address> is the IP address of your DHCP server and <netmask> is the netmask.

NOTE: Ensure that the primary interface ens33 is routable and that the second interface is not routable.
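One way to confirm this is to check that the default route uses the primary interface; this is a generic Linux check, not a command from this guide:

ip route show default
# Expected output similar to: default via <gateway IP> dev ens33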

3. Change the directory to mw-ova-ansible.

cd mw-ova-ansible

4. Using an editor such as Vim, edit the hosts.ini file.

vim inventory/my-cluster/hosts.ini

5. Add the secondary interface IP address for the Global Controller. If this is a high availability (HA) deployment, you must also add the secondary interface IP address for the two redundant HA nodes.

The following is an example hosts.ini file for a high availability deployment that has secondary interface IP addresses configured, where the GC node is bmo-manager-1 and the two redundant HA nodes are bmo-manager-2 and bmo-manager-3:

[global_controller]

[ha]

[loadbalancer]

[gluster_nodes]

[secondary_ip]

[node]

[node-remove]

[hosts]
;; this is IP for a five-node HA with internal storage and two worker nodes
ansible_python_interpreter=/usr/bin/python3
ansible_python_interpreter=/usr/bin/python3
ansible_python_interpreter=/usr/bin/python3
ansible_python_interpreter=/usr/bin/python3
ansible_python_interpreter=/usr/bin/python3
ansible_python_interpreter=/usr/bin/python3
ansible_python_interpreter=/usr/bin/python3
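For illustration only, a populated hosts.ini for this scenario might look like the following sketch; every IP address shown is a hypothetical placeholder (documentation ranges 192.0.2.x and 198.51.100.x) that you must replace with the addresses of your own nodes, and the exact layout of the shipped template may differ:

[global_controller]
192.0.2.11    ; bmo-manager-1

[ha]
192.0.2.12    ; bmo-manager-2
192.0.2.13    ; bmo-manager-3

[loadbalancer]
192.0.2.21    ; bmo-manager-lb-1
192.0.2.22    ; bmo-manager-lb-2

[gluster_nodes]
192.0.2.11
192.0.2.12
192.0.2.13

[secondary_ip]
198.51.100.11    ; secondary interface of the Global Controller
198.51.100.12    ; secondary interface of the first HA node
198.51.100.13    ; secondary interface of the second HA node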

6. Save the file.

7. Edit the file inventory/my-cluster/group_vars/all.yml and add the secondary IP address for the Global Controller. For HA deployments, add secondary IP addresses for the Global Controller and the two redundant HA nodes.

The following is an example of the lines to uncomment and update in the all.yml file, where cp1 is the Global Controller, and cp2 and cp3 are the two redundant HA nodes.

# Add Secondary IPs for Certificate Generation.
# Uncomment cp1_secondary_ip for singlenode only. Uncomment all 3 for HA.
#cp1_secondary_ip: "{{ hostvars[groups['secondary_ip'][0]]['ansible_host'] | default(groups['secondary_ip'][0]) }}"
#cp2_secondary_ip: "{{ hostvars[groups['secondary_ip'][1]]['ansible_host'] | default(groups['secondary_ip'][1]) }}"
#cp3_secondary_ip: "{{ hostvars[groups['secondary_ip'][2]]['ansible_host'] | default(groups['secondary_ip'][2]) }}"

8. When you're done, save the file and exit the editor.

Change default CIDR subnets for Bare Metal Orchestrator

Bare Metal Orchestrator reserves IP addresses in subnet ranges 10.42.0.0/16 and 10.43.0.0/16 for Global Controller cluster communication. If your server workload for Bare Metal Orchestrator uses IP addresses in those subnets and you cannot change them to a different subnet, then you can change the default CIDR subnets for Bare Metal Orchestrator.

Prerequisites

If you need to change the default cluster-cidr and service-cidr subnets for Bare Metal Orchestrator, do that before you configure Bare Metal Orchestrator and deploy the Global Controller.

If you deployed the Global Controller and onboarding failed because of an IP address conflict, then you must do the following:

1. Change the default CIDR subnets for Bare Metal Orchestrator.
2. Perform the procedure to Uninstall and redeploy Global Controller and HA nodes.

About this task

To change the default CIDR subnets for Bare Metal Orchestrator, do the following:

Steps

1. Establish a CLI session on the VM that has Bare Metal Orchestrator installed and log in as the installer user with the password Dell1234.

2. Change directory:

cd mw-ova-ansible

3. Using an editor such as Vim, edit the file rke2-server.service and change the line ExecStart=/user/local/bin.rke2.server to the following:

ExecStart=/user/local/bin.rke2.server --cluster-cidr=<cluster CIDR> --service-cidr=<service CIDR>

where <cluster CIDR> and <service CIDR> are valid private CIDR subnet values, for example: 172.27.0.0/16
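A hypothetical filled-in line is shown below; the two subnet values are illustrative assumptions only, and you should choose private ranges that do not conflict with your network:

ExecStart=/user/local/bin.rke2.server --cluster-cidr=172.27.0.0/16 --service-cidr=172.28.0.0/16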


Verify Global Controller partition assignments

For Bare Metal Orchestrator to function properly, two partitions are mounted on the Global Controller, as well as on each of the two redundant high availability (HA) nodes in an HA configuration. You must ensure the partitions are mounted correctly before upgrading Bare Metal Orchestrator to avoid critical data loss.

About this task

For example, there are two mounted partitions: sda and sdb. The Bare Metal Orchestrator cluster must run on the first partition (sda) and the GlusterFS distributed storage must run on the second partition (sdb).

If the mounted partitions are incorrectly assigned, you can reboot the nodes to reset the mounted partition assignments.

CAUTION: Incorrect mounted partition assignments result in the loss of the Bare Metal Orchestrator cluster when you reinstall the system.

Steps

1. Establish a CLI session on the VM that has Bare Metal Orchestrator installed and log in as the installer user with the password Dell1234.

2. Run the lsblk command and confirm the correct mounted partition assignments on the Global Controller.

The following is an example of the correct mounted partition assignments, where sda is the first partition with sda1, sda2, and sda3. The second partition is sdb.

sda
├─sda1
├─sda2
└─sda3
sdb
└─sdb1

For an HA configuration, repeat this step for each of the two redundant HA nodes.

Verify Global Controller node creation

To verify that the Global Controller node was successfully created, follow the procedure in Viewing nodes. If successfully created, the Global Controller node is displayed in the output.

Viewing nodes

View nodes to verify they are deployed.

Prerequisites

Bare Metal Orchestrator is deployed.

About this task

Use the following procedure to verify that the Global Controller node is deployed.

Steps

1. Open a CLI session on the Bare Metal Orchestrator VM.

2. Run the following command:

bmo get node


Results

The node details are displayed. For more information about the fields, see the node field definitions in the Bare Metal Orchestrator Command Line Interface User's Guide. The following example output shows deployed nodes.

NAME            ON-BOARDED   SITE     AGE   INTERNAL-IP
bmo-manager-1                gc       13d   111.11.0.11
worker1                      austin   13d   100.10.0.10

Uninstall and redeploy Global Controller and HA nodes

If the Global Controller cluster or one of the high availability (HA) nodes fails to deploy during the Bare Metal Orchestrator deployment, you can uninstall the nodes and redeploy them.

About this task

Use this procedure to uninstall the Global Controller node for a single node Bare Metal Orchestrator deployment or all nodes for an HA deployment, and then redeploy the nodes.

Steps

1. Establish a CLI session on the VM that has Bare Metal Orchestrator installed and log in as the installer user with the password Dell1234.

2. For a single node deployment, do the following:

a. Run:

sudo ansible-playbook uninstall.yaml -i inventory/my-cluster/hosts.ini

b. After the system reboots, run:

sudo ansible-playbook cleanup.yaml -i inventory/my-cluster/hosts.ini

c. Run the following command to redeploy the Global Controller cluster and enter the sudo password Dell1234 for the Global Controller when prompted:

sudo ansible-playbook ssh-copy-heketi.yaml -i inventory/my-cluster/hosts.ini
sudo ansible-playbook setup.yaml -i inventory/my-cluster/hosts.ini

3. For an HA deployment, do the following:

a. Uninstall each of the nodes in the following order:

sudo ansible-playbook uninstall-lb.yaml -i inventory/my-cluster/hosts.ini
sudo ansible-playbook uninstall-ha-nodes.yaml -i inventory/my-cluster/hosts.ini
sudo ansible-playbook uninstall.yaml -i inventory/my-cluster/hosts.ini

b. After the system reboots, run:

sudo ansible-playbook cleanup.yaml -i inventory/my-cluster/hosts.ini

c. Run the following commands in the order listed to redeploy the cluster. When prompted, enter the sudo password Dell1234 for the Global Controller node, as well as the root passwords for the HA nodes and Load Balancer hosts.

sudo ansible-playbook ssh-copy-heketi.yaml -i inventory/my-cluster/hosts.ini
sudo ansible-playbook setup.yaml -i inventory/my-cluster/hosts.ini
sudo ansible-playbook ssh-copy-ha.yaml -i inventory/my-cluster/hosts.ini
sudo ansible-playbook add-ha-node.yaml -i inventory/my-cluster/hosts.ini
sudo ansible-playbook setup-lb.yaml -i inventory/my-cluster/hosts.ini


This step also redeploys the SSH keys for high availability.

Next steps

Proceed to verify node creation, see Viewing nodes.

Uninstall Bare Metal Orchestrator

About this task

Use the following procedure to uninstall Bare Metal Orchestrator from the Global Controller node and all worker nodes, as well as the two HA nodes and the Load Balancer for HA configurations.

We recommend performing a backup to an external MinIO S3 storage location before uninstalling Bare Metal Orchestrator. For information about backups, see the Bare Metal Orchestrator Command Line Interface User's Guide.

CAUTION: Uninstalling completely removes all Bare Metal Orchestrator components from the nodes.

Steps

1. Establish a CLI session on the server that has Bare Metal Orchestrator installed and log in as the installer user with the password Dell1234.

2. Change directory to mw-ova-ansible.

cd mw-ova-ansible

3. Do one of the following:

For a single node setup, run:

sudo ansible-playbook uninstall.yaml -i inventory/my-cluster/hosts.ini

For an HA setup, do the following to uninstall the Load Balancer, the two HA nodes, and the Global Controller. Run:

sudo ansible-playbook uninstall-lb.yaml -i inventory/my-cluster/hosts.ini
sudo ansible-playbook uninstall-ha-nodes.yaml -i inventory/my-cluster/hosts.ini
sudo ansible-playbook uninstall.yaml -i inventory/my-cluster/hosts.ini

After the system reboots, run:

sudo ansible-playbook cleanup.yaml -i inventory/my-cluster/hosts.ini

4. When prompted, enter the sudo password Dell1234 for the Global Controller.

Log in to the web UI

After Bare Metal Orchestrator is deployed, you can use the CLI or the web UI to continue the setup. To use the web UI for the first time, use the default IAM admin user account.

Prerequisites

You require the user credentials of the default IAM admin user.
In a single node deployment, the Bare Metal Orchestrator hostname must be mapped to the IP address of the Bare Metal Orchestrator VM. In an HA deployment, the Bare Metal Orchestrator hostname must be mapped to the Virtual IP (VIP) address of the Load Balancers.
You must add the Bare Metal Orchestrator hostname to the hosts file on your management console before you can access Bare Metal Orchestrator using the web UI. Use the default hostname bmo-globalcontroller or the hostname that was assigned during Bare Metal Orchestrator installation.
You also need the IP address of the Global Controller node. For high availability deployments, you need the VIP of the Load Balancer. For more information and how to update the hosts file, see Access and accounts.


About this task

To log in to the web UI:

Steps

1. Open a web browser and enter https://<hostname> in the address bar to display the Login screen.

where <hostname> is the hostname assigned to Bare Metal Orchestrator.
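For example, if the default hostname is in use:

https://bmo-globalcontroller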

Figure 4. Bare Metal Orchestrator Login screen

2. Enter the admin username and the password.

The admin user can log in to the web UI with the user credentials provided.

3. Click Log in to access the Web UI dashboard.


Scaling Bare Metal Orchestrator

Scale the Bare Metal Orchestrator cluster.

Topics:

Scaling overview
Edit the hosts file
Create worker nodes
Verify worker nodes are created
Delete worker nodes
Verify worker nodes are deleted

Scaling overview

After the Global Controller node is created, you can scale it up by adding additional worker nodes. You must first update the hosts.ini file with the IP addresses of the worker nodes. Then, you can create worker nodes. To delete worker nodes, you must add the IP addresses of the worker nodes in the [node-remove] section of the hosts.ini file. After that, you can run the Ansible playbook to delete worker nodes.
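As a hypothetical illustration of these two operations, the relevant hosts.ini sections might look like the following sketch; the IP addresses are placeholders, not values from this guide:

[node]
; worker nodes to create
192.0.2.31
192.0.2.32

[node-remove]
; worker nodes to delete
192.0.2.32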

Edit the hosts file

Prerequisites

You require the IP addresses of the nodes you want to edit, including the Global Controller and worker nodes. For high availability (HA) deployments, this includes the two HA nodes, the Load Balancer virtual IP address (VIP), and the three GlusterFS nodes.

About this task

To edit the hosts file:

Steps

1. Log in to the Global Controller node console as the installer user with the password Dell1234.

2. Change the directory to mw-ova-ansible.

cd mw-ova-ansible

3. Using an editor such as Vim, edit the hosts.ini file.

vim inventory/my-cluster/hosts.ini

4. Update the IP address of the nodes.

The GC node is bmo-manager-1.

The two redundant HA node hostnames are bmo-manager-2 and bmo-manager-3.

The load balancer hostnames in this example are bmo-manager-lb1 and bmo-manager-lb2.

The gluster_nodes entries are the nodes to associate with the local GlusterFS storage system. When internal GlusterFS storage is used, these are the three nodes in the HA control plane cluster: bmo-manager-1, bmo-manager-2, and bmo-manager-3.

The following is an example hosts.ini file for a high availability deployment that uses internal distributed storage:

[global_controller]

[ha]

[loadbalancer]

[gluster_nodes]

[node]

[node-remove]

[hosts]
;; this is IP for a five-node HA with internal storage and two worker nodes
ansible_python_interpreter=/usr/bin/python3
ansible_python_interpreter=/usr/bin/python3
ansible_python_interpreter=/usr/bin/python3
ansible_python_interpreter=/usr/bin/python3
ansible_python_interpreter=/usr/bin/python3
ansible_python_interpreter=/usr/bin/python3
ansible_python_interpreter=/usr/bin/python3

5. Save the file and quit the editor.

6. To copy the SSH credentials to the Global Controller and worker nodes, run:

sudo ansible-playbook ssh-copy.yaml -i inventory/my-cluster/hosts.ini

7. When prompted, enter the sudo password Dell1234 for the Global Controller and then enter the root password for the worker node.

Create worker nodes

Use this procedure to create worker nodes that you can manage with the Global Controller.

Prerequisites

Ensure that bare metal servers and virtual machines are available and reachable from the Global Controller.
Ensure that worker node servers and virtual machines are accessible over the network using the root account.
Gather the IPv4 addresses and administrator credentials of the bare metal servers and virtual machines.
Install NTP on the worker node server before adding the node to the Bare Metal Orchestrator cluster. In an Ubuntu or Debian Linux distribution, you can run apt-get install ntp on the worker node as root to install NTP (see the sketch after this list).
The following systemd Linux distributions were tested on the bare metal servers and virtual machines: Ubuntu 19.10 LTS, Ubuntu 20.04 LTS, and Debian 11.
To manage a server at the remote site, the network that the server is on must be routable to the primary network of the worker node or routable to the primary network of the Global Controller site.
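A minimal sketch of the NTP installation on a Debian or Ubuntu worker node, run as root; this assumes the node has access to its package repositories:

apt-get update
apt-get install -y ntp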

Steps

1. Edit the inventory/my-cluster/hosts.ini file to add the IP address of the worker node to be created. For more information about how to edit the hosts file, see Edit the hosts file.

2. Enter the following commands to run the Ansible playbooks and pass the YAML and hosts files needed to create a worker node:

sudo ansible-playbook ssh-copy.yaml -i inventory/my-cluster/hosts.ini
sudo ansible-playbook add_workernode.yaml -i inventory/my-cluster/hosts.ini


3. Enter exit to end the SSH session.

Verify worker nodes are created

To verify a worker node is created, see the View nodes section in the Bare Metal Orchestrator Command Line Interface User's Guide. If a node is successfully created, the new node is displayed in the output.

After the worker nodes are successfully created, you must create sites to manage your infrastructure. For more information, see the Managing Sites section in the Bare Metal Orchestrator Command Line Interface User's Guide.

Delete worker nodes

Use this procedure to delete worker nodes.

About this task

Before you can delete a worker node, you must manually add the IP addresses of the worker nodes to delete in the [node-remove] section of the inventory/my-cluster/hosts.ini file. We recommend creating a backup before deleting a worker node and its associated sites. For information about backups, see the Bare Metal Orchestrator Command Line Interface User's Guide.

CAUTION: Deleting the Global Controller node uninstalls Bare Metal Orchestrator.

Steps

1. Log in to the Global Controller node console as the installer user with the password Dell1234.

2. Change the directory to mw-ova-ansible.

cd mw-ova-ansible

3. Using an editor such as Vim, add the IP addresses of the worker nodes to delete in the [node-remove] section of the hosts.ini file.

For example:

# vim inventory/my-cluster/hosts.ini
[node-remove]
< Worker node 1 IP >
< Worker node 2 IP >

4. Save the file and quit the editor.

5. Run the Ansible playbook and pass the YAML and hosts files needed to delete the worker node. Do the following:

a. Run:

sudo ansible-playbook remove_workernode.yaml -i inventory/my-cluster/hosts.ini

b. If you get a warning that Bare Metal Orchestrator sites exist on the worker node and removal of the worker node failed, run the following command to force the deletion:

sudo ansible-playbook remove_workernode.yaml -i inventory/my-cluster/hosts.ini -e param=force_delete

6. Enter exit to end the SSH session.

Verify worker nodes are deleted

To verify worker nodes are deleted, see the View nodes section in the Bare Metal Orchestrator Command Line Interface User's Guide. If a node is successfully deleted, the deleted node is not displayed in the output.


SSO integration

This chapter provides instructions on how to integrate Bare Metal Orchestrator with the Microsoft Azure cloud platform and enable single sign-on (SSO).

Topics:

Single sign-on integration overview
Integrate Microsoft Azure

Single sign-on integration overview

Bare Metal Orchestrator supports integration with the Microsoft Azure cloud platform to enable single sign-on (SSO). Contact your Dell Support representative for assistance to enable this feature on your system.

Integrate Microsoft Azure

Integrate Bare Metal Orchestrator with the Microsoft Azure cloud platform to use single sign-on (SSO) when logging in to the Bare Metal Orchestrator web user interface.

Prerequisites

Before you can integrate Bare Metal Orchestrator with Microsoft Azure and enable single sign-on (SSO):

Ensure Bare Metal Orchestrator is deployed.
Contact Dell Support for assistance to fetch the Bare Metal Orchestrator secret that is required for this process.

About this task

Use the following procedure to integrate Bare Metal Orchestrator with the Microsoft Azure cloud platform and enable single sign-on (SSO).

High-level steps are provided for registering Bare Metal Orchestrator as an application using Microsoft Azure. For detailed instructions, consult the latest Microsoft Azure documentation.

Steps

1. Log in to your Microsoft Azure Home portal and do the following to register Bare Metal Orchestrator as an application:

a. From the Register an application page, edit the fields for the Bare Metal Orchestrator application.

For the account type, select your organization's directory account. For example: Account in this organizational directory only (<your directory name> - Single tenant)

Leave the Redirect URI empty for now. You will set this later.


b. From the Certificates and secrets page, select the Client secret tab and add a new client secret for the Bare Metal Orchestrator application.

Be sure to copy the secret value shown on the pop-up window and save it for reference later in this process. If you close the pop-up window or lose the secret, you must create another one.

c. From the Overview page for the Bare Metal Orchestrator application registration, configure the redirect URI.

Set the Redirect URI as a Web platform and configure the redirect URI as follows:

https://<hostname>/keycloak/realms/bmo/broker/oidc/endpoint

where <hostname> is either the user-defined hostname you configured during Bare Metal Orchestrator installation, or the default hostname bmo-globalcontroller.

NOTE: This is the same hostname that you use to access the Bare Metal Orchestrator web user interface.
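For example, assuming the default hostname is in use, the redirect URI would be:

https://bmo-globalcontroller/keycloak/realms/bmo/broker/oidc/endpoint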

d. From the Endpoints tab on the Overview page, copy the OpenID Connect metadata document link for the endpoint for future reference.

2. Log in to Bare Metal Orchestrator using the CLI and run the following command to fetch the secret:

kubectl get secret -n iam keycloak -o json | grep \"admin-password | awk '{print $2}' | tr -d '"' | base64 --decode # This is password for Keycloak default super admin "user"

NOTE: Request assistance from your Dell Support representative to perform this step and the steps that follow.

3. Add the Bare Metal Orchestrator hostname to the hosts file on your management console, see Access and accounts.

4. From your web browser, enter the following URL to open the Bare Metal Orchestrator web UI, and then click Administration Console.

http://<hostname>/keycloak/

where <hostname> is either the user-defined hostname you configured during Bare Metal Orchestrator installation, or the default hostname bmo-globalcontroller.
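For example, assuming the default hostname:

http://bmo-globalcontroller/keycloak/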

The Bare Metal Orchestrator web UI opens as the super admin.

5. Enter user as the username and enter the Bare Metal Orchestrator secret password that you fetched in an earlier step.

You are logged in.

6. Go to Admin Console > Identity Providers > OpenID Connect v1.0 and fill in the details. Click Add when you're done.


Under OpenID Connect settings, the Client ID is the Application ID of the OIDC APP and the Client Secret is the Bare Metal Orchestrator secret password that you fetched in an earlier step.

7. Go to Authentication > Flows > Browser and set Default Identity Provider to microsoft. Then click Save.

8. Log off from the Administration Console.

Results

The next time you log in to the Bare Metal Orchestrator web user interface, you will see an option to sign in using SSO.


Upgrading Bare Metal Orchestrator

You can upgrade Bare Metal Orchestrator on the Global Controller cluster and all worker nodes to the latest available version. For high availability, all nodes are upgraded.

Topics:

Upgrade overview
High-level upgrade workflows
Upgrade the Global Controller and one or more worker nodes
Use maintenance mode

Upgrade overview

You can upgrade all Bare Metal Orchestrator components for one or more sites in the cluster (including the Global Controller and worker nodes). Upgrading the CRD component is a required step in the process.

All necessary images, binaries, and version components needed for the upgrade are included in the upgrade bundle. The amount of time it takes to upgrade a cluster depends on the number of worker nodes installed at each site and the number of sites.

The CRD custom resources and site component upgrade step upgrades the API version of all components in all sites. The components that are upgraded include:

Firmware media
Hardware profiles
Media
License media
Profile telemetries
SDN controllers
Servers
Server telemetries
Sites
Stack deployers
Switches
Switch port configurations
Tenants

Do not configure Bare Metal Orchestrator or perform management tasks on the cluster during an upgrade. Put Bare Metal Orchestrator into maintenance mode before upgrading the cluster, see Use maintenance mode.

CAUTION: Service disruptions are possible during the upgrade process. We recommend scheduling the upgrade when traffic at the affected sites is low.

Contact Dell Support or your Dell representative for information on how to obtain an upgrade bundle and the supported upgrade paths.

For a high-level workflow showing how sites are updated in the Bare Metal Orchestrator cluster, see High-level upgrade workflows.

For help troubleshooting an upgrade, contact your Dell Support representative.


High-level upgrade workflows

Cluster upgrade workflow

When you initiate a Bare Metal Orchestrator upgrade from the Global Controller node, the Site Controller is the first component that is upgraded on the Global Controller.

The Site Controller upgrades the Site Manager for the Global Controller, and then updates the Site Managers for the remote sites. Each Site Manager upgrades the local Bare Metal Orchestrator components for their individual node. You can upgrade one or multiple sites at the same time.

NOTE: You must update the YAML definition files before you update the sites; otherwise, new functions introduced in the upgrade may not work properly.

Figure 5. Bare Metal Orchestrator upgrade workflow

CAUTION: Before upgrading the system, create a backup to recover the cluster data in case of a catastrophic failure, see Bare Metal Orchestrator Command Line Interface User's Guide. Follow all instructions and prerequisites specified in the upgrade procedures before you upgrade.

General workflow for a minor upgrade

The following is a general high-level workflow for a minor upgrade and is not intended to replace the detailed procedures that are documented in this guide:

1. Obtain the upgrade bundle. For information on how to obtain an upgrade bundle and the supported upgrade paths, contact Dell Support or your Dell representative.

2. Optional: Put Bare Metal Orchestrator into maintenance mode to prevent other users from making updates while you upgrade the system, see Use maintenance mode.

3. Upload the upgrade bundle file to a folder you create on the GC node VM where the Bare Metal Orchestrator software is installed.

4. Extract the upgrade orchestrator.
5. Update the CRD component to update the YAML definition files.
6. Initiate the upgrade process for one or multiple sites.

The Site Controller on the Global Controller node updates. The Site Controller updates the site manager for the Global Controller and for each site. Each site manager updates the components for their respective site.

7. Verify that the sites have been updated successfully.


NOTE: You must update the YAML definition files before you update the sites; otherwise, new functions introduced in the upgrade may not work properly.

Upgrade the Global Controller and one or more worker nodes

Use this procedure to update Bare Metal Orchestrator from the last release of Bare Metal Orchestrator to the next release in sequence. All Bare Metal Orchestrator components are updated to the most recent release.

Prerequisites

Observe the following:

Bare Metal Orchestrator is installed and the Global Controller and worker nodes are configured.
If you are not upgrading all sites in the cluster, then you need to know the site name of the specific sites you want to upgrade. For example, if you want to test the upgrade on only a few sites.
Obtain the upgrade bundle. For information on how to obtain an upgrade bundle and the supported upgrade paths, contact Dell Support or your Dell representative.
An FTP client such as WinSCP to upload the file to the OVA.
Bare Metal Orchestrator is in maintenance mode.

Do not configure Bare Metal Orchestrator or perform management tasks on the cluster during an upgrade. Put Bare Metal Orchestrator into maintenance mode before upgrading the cluster, see Use maintenance mode.

Before upgrading the system, we recommend that you back up the cluster in case the cluster needs to be recovered, see Bare Metal Orchestrator Command Line Interface User's Guide.

CAUTION: To avoid inadvertently overwriting the Bare Metal Orchestrator cluster, verify the configuration of the mounted partitions on the Global Controller, as well as on the two redundant high availability (HA) nodes for an HA configuration, before upgrading Bare Metal Orchestrator, see Verify Global Controller partition assignments.

About this task

The upgrade bundle file contains all the necessary images, binaries, and the version file you need for the upgrade installation. All sites are upgraded, starting with the Global Controller and followed by the remote sites.

To upgrade Bare Metal Orchestrator components on a VM:

Steps

1. Log in to the Global Controller node console as the installer user with the password Dell1234.

2. Create a directory for the upgrade bundle file and change to that directory.

mkdir <directory name>
cd <directory name>

For example:

mkdir upgrade
cd upgrade

3. Using an FTP client tool such as WinSCP, connect to your OVA server as the installer user and copy the upgrade bundle .tar file to the upgrade folder you created.

4. Using the Global Controller node console, go to /home/installer/mw-ova-ansible/upgrade.

cd /home/installer/mw-ova-ansible/upgrade

5. Extract the upgrade bundle file, where <filename> is the upgrade bundle tar filename. Run:

tar -xvzf <filename>


For example:

tar -xvzf bmo_bundle-v0.3.555_TAG.tar.gz

6. Optional: From the directory where the extracted bundle files are located, run the following command to list the upgrade components and record the versions for comparison after the upgrade:

bmo version

7. Change directory to mw_bundle.

cd mw_bundle

8. Update the CRD component to update the YAML definition files. Run:

./mw-install -i upgrade crd

9. Upgrade one or multiple sites.

To upgrade all sites in the cluster, run:

./mw-install -i upgrade site --all

To upgrade one or more sites in the cluster, run the following command and enter individual site names separated by a space:

./mw-install -i upgrade site sitename1 sitename2 sitename3

The upgrade images upload to the localregistry that is running in the cluster for each site you specify.

NOTE: The upgrade process can take some time, depending on the size and number of sites. Do not interrupt the upgrade process. Contact Dell Support if you are experiencing issues upgrading your cluster.

10. Optional: To monitor the upgrade progress, run:

bmo get sites

11. Optional: Run the following command to confirm the version of the Bare Metal Orchestrator components upgraded at the site.

bmo version

Use maintenance mode

About this task

Using maintenance mode prevents other users from making updates to Bare Metal Orchestrator while carrying out software updates and upgrades. Only users with a Support Admin role can enable and disable maintenance mode. In addition, only users with a Support Admin role can perform operations on Bare Metal Orchestrator while it is in maintenance mode.

Steps

1. Establish a CLI session on the Bare Metal Orchestrator VM. For high availability configurations, establish a CLI session using the virtual IP (VIP) of the Load Balancers for the Bare Metal Orchestrator cluster.

2. Create token. Run the following command:

bmo create token --username=<username>

where <username> is the name of the Support Admin user.

NOTE: The create token command creates three tokens:
Access token
ID token
Refresh token

3. Export the ID token. Run the following command:

export id_token=<ID token>

4. Run the following command:

bmo get maintenancemode --token $id_token --tenant <tenant name>

NOTE: The tenant name is optional. By default, the tenant is Bare Metal Orchestrator.
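A hypothetical end-to-end sketch of steps 2 through 4, assuming a Support Admin user named support-admin (the username and token value are placeholders, not values from this guide):

bmo create token --username=support-admin
export id_token=<ID token value copied from the create token output>
bmo get maintenancemode --token $id_token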


5. Export the access token. Run the following command:

export access_token=<access token>

6. Run the following command:


To be able to print Dell Bare Metal Orchestrator 1.4 Software Installation Guide, simply download the document to your computer. Once downloaded, open the PDF file and print the Dell Bare Metal Orchestrator 1.4 Software Installation Guide as you would any other document. This can usually be achieved by clicking on “File” and then “Print” from the menu bar.