EMC VMAX
eNAS CLI Reference Guide Version 8.1.12.27
For: VMAX3 Family: VMAX 100K, 200K, 400K VMAX All Flash: 250F, 450F, 850F, 950F REVISION 01
Copyright 2016-2017 EMC Corporation All rights reserved.
Published May 2017
Dell believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED AS-IS. DELL MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND
WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. USE, COPYING, AND DISTRIBUTION OF ANY DELL SOFTWARE DESCRIBED
IN THIS PUBLICATION REQUIRES AN APPLICABLE SOFTWARE LICENSE.
Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be the property of their respective owners.
Published in the USA.
EMC Corporation Hopkinton, Massachusetts 01748-9103 1-508-435-1000 In North America 1-866-464-7381 www.EMC.com
2 eNAS CLI Reference Guide 8.1.12.27 For: VMAX3 Family: VMAX 100K, 200K, 400K VMAX All Flash: 250F,
450F, 850F, 950F
PREFACE
As part of an effort to improve its product lines, EMC periodically releases revisions of its software and hardware. Therefore, some functions described in this document might not be supported by all versions of the software or hardware currently in use. The product release notes provide the most up-to-date information on product features.
Contact your EMC representative if a product does not function properly or does not function as described in this document.
Note
This document was accurate at publication time. New versions of this document might be released on EMC Online Support (https://support.emc.com). Check to ensure that you are using the latest version of this document.
Purpose This reference guide provides man pages for all the eNAS CLI commands.
Audience This manual provides reference information for command-line users and script programmers who focus on configuring and managing eNAS on VMAX arrays.
Related documentation The following documents provide additional eNAS information:
l VMAX eNAS Release Notes Describes new features and identifies any known functionality restrictions and performance issues that may exist with the current version and your specific storage environment.
l VMAX eNAS File Auto Recovery with SRDF/S Describes how to install and use File Auto Recovery to failover/move eNAS Virtual Data Movers from source eNAS systems to destination eNAS systems using SRDF/S.
l Using SRDF/S with VNX for Disaster Recovery Explains how to configure and manage SRDF/S.
l EMC VNX Command Line Interface Reference for File Explains the command used to configure and manage an EMC file storage system.
l Managing Volumes and File Systems on VNX Manually Explains how to create and aggregate different volume types into usable file system storage.
l Using VNX SnapSure Explains how to use EMC SnapSure to create and manage checkpoints.
l Configuring Virtual Data Movers on VNX Explains how to configure and manage VDMs on a file storage system.
l Configuring CIFS on VNX Explains how to configure and manage CIFS.
l Parameters Guide for VNX for File Explains how to view and modify parameters and system settings.
Where to get help EMC support, product, and licensing information can be obtained as follows:
Note
To open a service request through EMC Online Support (https://support.emc.com), you must have a valid support agreement. Contact your EMC sales representative for details about obtaining a valid support agreement or to answer any questions about your account.
Product information
For documentation, release notes, software updates, or information about EMC products, go to EMC Online Support at https://support.emc.com.
Technical support
EMC offers a variety of support options.
l Support by Product EMC offers consolidated, product-specific information on the Web at: https://support.EMC.com/products The Support by Product web pages offer quick links to Documentation, White Papers, Advisories (such as frequently used Knowledgebase articles), and Downloads, as well as more dynamic content, such as presentations, discussion, relevant Customer Support Forum entries, and a link to EMC Live Chat.
l EMC Live Chat Open a Chat or instant message session with an EMC Support Engineer.
eLicensing support
To activate your entitlements and obtain your VMAX license files, visit the Service Center on https://support.EMC.com, as directed on your License Authorization Code (LAC) letter emailed to you.
l For help with missing or incorrect entitlements after activation (that is, expected functionality remains unavailable because it is not licensed), contact your EMC Account Representative or Authorized Reseller.
l For help with any errors applying license files through Solutions Enabler, contact the EMC Customer Support Center.
l If you are missing a LAC letter, or require further instructions on activating your licenses through the Online Support site, contact EMC's worldwide Licensing team at licensing@emc.com or call:
n North America, Latin America, APJK, Australia, New Zealand: SVC4EMC (800-782-4362) and follow the voice prompts.
n EMEA: +353 (0) 21 4879862 and follow the voice prompts.
Your comments Your suggestions help us improve the accuracy, organization, and overall quality of the documentation. Send your comments and feedback to: VMAXContentFeedback@emc.com
eNAS components
The following terminology is used throughout this document:
l Management Module Control Station (MMCS): Used by EMC Customer Support to configure eNAS, if necessary.
l Network Address Translation (NAT) Gateway: Used to configure the external IP address of the Control Station.
l Control Station (CS): Provides management functions to the file-side components referred to as Data Movers.
l Data Mover (DM): Clients communicate with a Data Mover using the NFS and/or CIFS/SMB protocols. Clients are physically connected to the Data Mover through I/O modules on the storage array that are assigned to the Data Mover. The Data Mover accesses the client data by way of an internal interface to the storage array on which the Data Mover resides.
Control station
The Control Station provides utilities for managing, configuring, and monitoring of the Data Movers in the eNAS system.
As the system administrator, you may type commands through the Control Station to perform tasks that include the following:
l Managing and configuring the database and Data Movers
l Monitoring statistics of the eNAS components
Accessing the Control Station You may use either local or remote access to the Control Station.
Note
For local access, a connection to the Control Station serial port must be established.
l Local access to the command line interface is available directly at the Control Station console.
l Remote access to the command line interface by using a secure, encrypted login application allows the use of the eNAS command set.
Accessing the command line interface A description of how to gain local or remote access to the eNAS command line interface follows.
Note
For a local connection, connect a client to the Control Station serial port.
l For local access to the command line interface, establish the connection to the Control Station with the following settings, then log in at the prompt with your administrative username and password:
Table 1 Control Station serial port connection settings
Setting Value
Bits per second 19200
Data bits 8
Parity None
Stop bits 1
Flow control None
Emulation Auto Detect
Telnet terminal ID ANSI
l For remote access to the command line interface:
1. Use a secure, encrypted, remote login application capable of SSH. Type the IP address of the Control Station.
2. Log in with your administrative username and password.
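For example, a remote session might look like the following sketch. The IP address and account name are illustrative placeholders, not values from this guide; substitute your Control Station management IP and administrative account.

```shell
# Connect to the Control Station with an SSH-capable client.
# 10.0.0.10 and nasadmin are hypothetical placeholders.
ssh nasadmin@10.0.0.10

# After authenticating, the eNAS command set is available at the
# Control Station prompt, for example:
#   nas_server -list
```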
Role-Based access
The administrative user account you use to access the command line interface is associated with specific privileges, also referred to as roles. A role defines the privileges (operations) a user can perform on a particular eNAS object. The ability to select a predefined role or define a custom role that gives a user certain privileges is supported for users who access eNAS through the CLI, EMC Unisphere, and the XML API.
The Security Configuration Guide for VNX provides detailed information about how role-based access is used to determine the commands a particular user can execute. You create and manage user accounts and roles in Unisphere by using Settings > User Management.
Command set conventions
This manual uses commonly known command set conventions for the eNAS for file man pages. Each man page presents the command name at the top of the man page followed by a brief overview of what the command does. The synopsis contains the actual command usage. The description contains a more detailed breakdown of the features of the command, and the options describe what each switch or option does specifically.
The See Also section refers to the technical modules that support the feature, in addition to any other commands that interact with the command.
The examples are at the end of each command.
The naming convention for the Data Mover variable in the command line interface is <movername>.
The commands are prefixed, and appear in alphabetical order.
Synopsis The synopsis shows the usage of each command. The synopsis appears in courier typeface, with variables such as movername, filename, and device name enclosed by angle brackets, and with the command name appearing in bold. The switches and other options also appear in bold and, in most cases, are prefixed by a minus sign:
server_umount {<movername>|ALL} [-perm|-temp] {-all|<fs_name>|<pathname>}
Required entries A switch or variable enclosed with curly brackets, or not enclosed at all, indicates a required entry:
{<movername>|ALL}
Optional entries A switch or variable enclosed with square brackets indicates an optional entry:
[-perm|-temp]
Formatting The variable name enclosed by angle brackets indicates the name of a specified object:
<fs_name>
Options An option is prefixed with a minus (-) sign: -perm
If the option is spelled out, for example, -perm, in the command syntax, you may use just the first letter: -p
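As an illustration of this abbreviation rule, and assuming the server_umount synopsis shown above with hypothetical object names (server_2 and /fs1 are not values from this guide), the following two commands are equivalent:

```shell
# -p is accepted as an abbreviation of -perm.
# server_2 and /fs1 are hypothetical names used only for illustration.
server_umount server_2 -perm /fs1
server_umount server_2 -p /fs1
```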
Options and names are case-sensitive. If an uppercase letter is specified in the syntax, a lowercase letter is not accepted.
The vertical bar symbol ( | ) represents or, meaning an alternate selection:
{-all|<fs_name>|<pathname>}
Command prefixes Commands are prefixed depending on what they are administering. For example, commands prefixed with:
l cel_ execute to the remotely linked eNAS system.
l cs_ execute to the Control Station.
l fs_ execute to the specified file system.
l nas_ execute directly to the Control Station database.
l server_ require a movername entry and execute directly to a Data Mover.
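The prefixes above can be seen in a short sketch (illustrative only; actual output depends on your configuration, and server_2 is a hypothetical Data Mover name):

```shell
# nas_ commands execute directly against the Control Station database:
nas_server -list

# server_ commands require a movername entry and execute on that
# Data Mover (server_2 is an assumed example name):
server_date server_2
```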
General notes
Note the following:
l If a command is interrupted by using Ctrl-C, then the following messages or traces on the console are expected:
n nas_cmd: system execution failed.
n nas_cmd: PANIC: caught signal #11 (Segmentation fault) -- Giving up
l Use the eNAS CLI for file to add IPv6 addresses to the NFS export host list. Enclose the IPv6 address in curly brackets ({ }) or square brackets ([ ]) in the CLI. IPv6 addresses added to the NFS export list by using the CLI are displayed as read-only fields in the Unisphere software.
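A sketch of such an export follows, assuming the server_export syntax from the VNX CLI reference for file; the Data Mover name (server_2), path (/fs1), and the documentation address 2001:db8::10 are hypothetical:

```shell
# Add an IPv6 host to an NFS export access list; note the square
# brackets around the address. All names here are illustrative.
server_export server_2 -Protocol nfs -option access=[2001:db8::10] /fs1
```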
NASCLI Commands
This chapter lists the eNAS Command Set provided for managing, configuring, and monitoring of the NAS database. The commands are prefixed with nas and appear alphabetically. The command line syntax (Synopsis), a description of the options, and an example of usage are provided for each command.
nas_acl nas_autodiskmark nas_automountmap
nas_ca_certificate nas_cel nas_checkup
nas_ckpt_schedule nas_config nas_connecthome
nas_copy nas_cs nas_dbtable
nas_devicegroup nas_disk nas_diskmark
nas_emailuser nas_environment nas_event
nas_fs nas_fsck nas_halt
nas_inventory nas_license nas_logviewer
nas_message nas_migrate nas_mview
nas_pool nas_quotas nas_rdf
nas_replicate nas_server nas_stats
nas_storage nas_syncrep nas_task
nas_version nas_volume
nas_acl

Manages the access control level table.

SYNOPSIS
--------
nas_acl -list | -info {-user|-group|-owner}

OPTIONS
-------
-modify
    Modifies the access control level entry for the specified user or group.

name
    Name given for the entry.
level
    Level of access permitted.
user_id
    Also known as the num_id.

EXAMPLE #4
----------
To modify an access control level entry, type:

# nas_acl -modify -user 211 level=7
done

EXAMPLE #5
----------
To delete an access control level entry, type:

# nas_acl -delete -user 211
done

--------------------------------------
Last Modified: March 3, 2011 12:05 pm
nas_autodiskmark

Enables or disables the autodiskmark feature.

SYNOPSIS
--------
nas_autodiskmark -info | -modify -enabled { yes | no }

DESCRIPTION
-----------
This command is used to enable or disable the autodiskmark feature.

OPTIONS
-------
-info
    Displays whether the autodiskmark feature is enabled or not.
-modify -enabled { yes | no }
    Enables or disables the autodiskmark feature.

EXAMPLE #1
----------
To check whether the autodiskmark feature is enabled, type:

$ nas_autodiskmark -info
Feature Enabled = No

$ nas_autodiskmark -info
Feature Enabled = Yes

EXAMPLE #2
----------
To enable or disable the autodiskmark feature, type:

$ nas_autodiskmark -modify -enabled yes
OK
nas_automountmap

Manages the automount map file.

SYNOPSIS
--------
nas_automountmap -list_conflict

EXAMPLE
-------
To merge an automount map file with an existing map file, type:

$ nas_automountmap -create -in automountmap -out automountmap1

--------------------------------------
Last Modified: March 3, 2011 12:10 pm
nas_ca_certificate

Manages the Control Station as a Certificate Authority (CA) for the VNX Public Key Infrastructure (PKI).

SYNOPSIS
--------
nas_ca_certificate -display | -generate

DESCRIPTION
-----------
nas_ca_certificate generates a public/private key set and a CA certificate for the Control Station. When the Control Station is serving as a CA, it must have a private key with which to sign the certificates it generates for the Data Mover. The Control Station CA certificate contains the corresponding public key, which is used by clients to verify the signature on a certificate received from the Data Mover. nas_ca_certificate also displays the text of the CA certificate so you can copy it and distribute it to network clients.

In order for a network client to validate a certificate sent by a Data Mover that has been signed by the Control Station, the client needs the Control Station CA certificate (specifically, the public key from the CA certificate) to verify the signature of the Data Mover's certificate.

The initial Control Station public/private key set and CA certificate are generated automatically during a VNX software 5.6 install or upgrade. A new Control Station public/private key set and CA certificate is not required unless the CA key set is compromised or the CA certificate expires. The Control Station CA certificate is valid for 5 years. You must be root to execute the -generate option from the /nas/sbin directory.

Once a Control Station CA certificate is generated, you must perform several additional tasks to ensure that the new certificate is integrated into the VNX PKI framework. The Security Configuration Guide for File and the Unisphere online help for the PKI interface explain these tasks.

OPTIONS
-------
-display
    Displays the Control Station CA certificate. The certificate text is displayed on the terminal screen. Alternatively, you can redirect it to a file.
-generate
    Generates a new CA public/private key set and certificate for the Control Station. This certificate is valid for 5 years from the date it is generated.

SEE ALSO
--------
server_certificate

EXAMPLE #1
----------
To generate a new Control Station CA certificate, type:

# /nas/sbin/nas_ca_certificate -generate
New keys and certificate were successfully generated.

EXAMPLE #2
----------
To display the Control Station's CA certificate, type:
# /nas/sbin/nas_ca_certificate -display Clients need only the certificate text enclosed by BEGIN CERTIFICATE and END CERTIFICATE although most clients can handle the entire output. Certificate: Data: Version: 3 (0x2) Serial Number: 3 (0x3) Signature Algorithm: sha1WithRSAEncryption Issuer: O=Celerra Certificate Authority, CN=eng173100 Validity Not Before: Mar 23 21:07:40 2007 GMT Not After : Mar 21 21:07:40 2012 GMT Subject: O=Celerra Certificate Authority, CN=eng173100 Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public Key: (2048 bit) Modulus (2048 bit): 00:da:b2:37:86:05:a3:73:d5:9a:04:ba:db:05:97: d2:12:fe:1a:79:06:19:eb:c7:2c:c2:51:93:7f:7a: 93:59:37:63:1e:53:b3:8d:d2:7f:f0:e3:49:42:22: f4:26:9b:b4:e4:a6:40:6d:8d:e7:ea:07:8e:ca:b7: 7e:88:71:9d:11:27:5a:e3:57:16:03:a7:ee:19:25: 07:d9:42:17:b4:eb:e6:97:61:13:54:62:03:ec:93: b7:e6:f1:7f:21:f0:71:2d:c4:8a:8f:20:d1:ab:5a: 6a:6c:f1:f6:2f:26:8c:39:32:93:93:67:bb:03:a7: 22:29:00:11:e0:a1:12:4b:02:79:fb:0f:fc:54:90: 30:65:cd:ea:e6:84:cc:91:fe:21:9c:c1:91:f3:17: 1e:44:7b:6f:23:e9:17:63:88:92:ea:80:a5:ca:38: 9a:b3:f8:08:cb:32:16:56:8b:c4:f7:54:ef:75:db: 36:7e:cf:ef:75:44:11:69:bf:7c:06:97:d1:87:ff: 5f:22:b5:ad:c3:94:a5:f8:a7:69:21:60:5a:04:5e: 00:15:04:77:47:03:ec:c5:7a:a2:bf:32:0e:4d:d8: dc:44:fa:26:39:16:84:a7:1f:11:ef:a3:37:39:a6: 35:b1:e9:a8:aa:a8:4a:72:8a:b8:c4:bf:04:70:12: b3:31 Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Subject Key Identifier: 35:06:F2:FE:CC:21:4B:92:DA:74:C9:47:CE:BB:37:21:5E:04:E2:E6 X509v3 Authority Key Identifier: keyid:35:06:F2:FE:CC:21:4B:92:DA:74:C9:47:CE:BB:37:21:5E:04:E2:E6 DirName:/O=Celerra Certificate Authority/CN=eng173100 serial:00 X509v3 Basic Constraints: CA:TRUE X509v3 Subject Alternative Name: DNS:eng173100 Signature Algorithm: sha1WithRSAEncryption 09:c3:13:26:16:be:44:56:82:5d:0e:63:07:19:28:f3:6a:c4: f3:bf:93:25:85:c3:55:48:4e:07:84:1d:ea:18:cf:8b:b8:2d: 54:13:25:2f:c9:75:c1:28:39:88:91:04:df:47:2c:c0:8f:a4: 
ba:a6:cd:aa:59:8a:33:7d:55:29:aa:23:59:ab:be:1d:57:f6: 20:e7:2b:68:98:f2:5d:ed:58:31:d5:62:85:5d:6a:3f:6d:2b: 2d:f3:41:be:97:3f:cf:05:8b:7e:f5:d7:e8:7c:66:b2:ea:ed: 58:d4:f0:1c:91:d8:80:af:3c:ff:14:b6:e7:51:73:bb:64:84: 26:95:67:c6:60:32:67:c1:f7:66:f4:79:b5:5d:32:33:3c:00: 8c:75:7d:02:06:d3:1a:4e:18:0b:86:78:24:37:18:20:31:61: 59:dd:78:1f:88:f8:38:a0:f4:25:2e:c8:85:4f:ce:8a:88:f4: 4f:12:7e:ee:84:52:b4:91:fe:ff:07:6c:32:ca:41:d0:a6:c0: 9d:8f:cc:e8:74:ee:ab:f3:a5:b9:ad:bb:d7:79:67:89:34:52: b4:6b:39:db:83:27:43:84:c3:c3:ca:cd:b2:0c:1d:f5:20:de: 7a:dc:f0:1f:fc:70:5b:71:bf:e3:14:31:4c:7e:eb:b5:11:9c: 96:bf:fe:6f -----BEGIN CERTIFICATE----- MIIDoDCCAoigAwIBAgIBAzANBgkqhkiG9w0BAQUFADA8MSYwJAYDVQQKEx1DZWxl cnJhIENlcnRpZmljYXRlIEF1dGhvcml0eTESMBAGA1UEAxMJZW5nMTczMTAwMB4X DTA3MDMyMzIxMDc0MFoXDTEyMDMyMTIxMDc0MFowPDEmMCQGA1UEChMdQ2VsZXJy YSBDZXJ0aWZpY2F0ZSBBdXRob3JpdHkxEjAQBgNVBAMTCWVuZzE3MzEwMDCCASIw DQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBANqyN4YFo3PVmgS62wWX0hL+GnkG
GevHLMJRk396k1k3Yx5Ts43Sf/DjSUIi9CabtOSmQG2N5+oHjsq3fohxnREnWuNX FgOn7hklB9lCF7Tr5pdhE1RiA+yTt+bxfyHwcS3Eio8g0ataamzx9i8mjDkyk5Nn uwOnIikAEeChEksCefsP/FSQMGXN6uaEzJH+IZzBkfMXHkR7byPpF2OIkuqApco4 mrP4CMsyFlaLxPdU73XbNn7P73VEEWm/fAaX0Yf/XyK1rcOUpfinaSFgWgReABUE d0cD7MV6or8yDk3Y3ET6JjkWhKcfEe+jNzmmNbHpqKqoSnKKuMS/BHASszECAwEA AaOBrDCBqTAdBgNVHQ4EFgQUNQby/swhS5LadMlHzrs3IV4E4uYwZAYDVR0jBF0w W4AUNQby/swhS5LadMlHzrs3IV4E4uahQKQ+MDwxJjAkBgNVBAoTHUNlbGVycmEg Q2VydGlmaWNhdGUgQXV0aG9yaXR5MRIwEAYDVQQDEwllbmcxNzMxMDCCAQAwDAYD VR0TBAUwAwEB/zAUBgNVHREEDTALggllbmcxNzMxMDAwDQYJKoZIhvcNAQEFBQAD ggEBAAnDEyYWvkRWgl0OYwcZKPNqxPO/kyWFw1VITgeEHeoYz4u4LVQTJS/JdcEo OYiRBN9HLMCPpLqmzapZijN9VSmqI1mrvh1X9iDnK2iY8l3tWDHVYoVdaj9tKy3z Qb6XP88Fi3711+h8ZrLq7VjU8ByR2ICvPP8UtudRc7tkhCaVZ8ZgMmfB92b0ebVd MjM8AIx1fQIG0xpOGAuGeCQ3GCAxYVndeB+I+Dig9CUuyIVPzoqI9E8Sfu6EUrSR /v8HbDLKQdCmwJ2PzOh07qvzpbmtu9d5Z4k0UrRrOduDJ0OEw8PKzbIMHfUg3nrc 8B/8cFtxv+MUMUx+67URnJa//m8= -----END CERTIFICATE----- ---------------------------------------------- Last Modified: March 3, 2011 12:37 pm
nas_cel

Manages a remotely linked VNX or a linked pair of Data Movers.

SYNOPSIS
--------
nas_cel -list | -delete {
For the remote VNX, updates all Data Movers that were down or experiencing errors during the -create or -modify and restores them to service by using the configuration required for Data Mover authentication. Data Mover authentication is used in iSCSI replication as the mechanism enabling two Data Movers (local or remote) to authenticate themselves and perform the requested operations. The -update option communicates with each Data Mover and either updates the configuration, or creates the configuration if it is being done for the first time. -modify {
twice, once from each side (the local side and its peer side). Both sides of the interconnect must exist before VNX Replicator sessions for local or remote replication can use the interconnect. Only the local side of an interconnect on which the source replication object resides is specified when creating the replication session. Loopback interconnects are created and named automatically and can be viewed using nas_cel -interconnect -list. You cannot create, modify, or delete loopback interconnects. -create
[,{
Note: To avoid problems with interface selection, any changes made to the interface lists should be reflected on both sides of an interconnect. [-destination_interfaces{
-local_fsidrange
messages to the remote (wdev), the other to read messages from them. This value is unique to the Symmetrix storage system.
net_path
    IP address of the remote VNX.
VNX_id
    Unique VNX ID number.
passphrase
    Used for authentication with a remote VNX.

EXAMPLE #2
----------
For the VNX for block, to list all remote VNXs, type:

$ nas_cel -list
id name owner mount_dev channel net_path CMU
0 cs100 0 172.24.102.236 APM00042000818000 0
3 cs110 0 172.24.102.240 APM00043807043000 0

For the VNX with a Symmetrix storage system, to list all remote VNXs, type:

$ nas_cel -list
id name owner mount_dev channel net_path CMU
0 cs30 0 172.24.172.152 0028040001900006
1 cs40 500 /dev/sdj1 /dev/sdg 172.24.172.151 0028040002180000

Where:
Value      Definition
-----      ----------
id         ID of the remote VNX on the local VNX.
name       Name assigned in the local view to the remote VNX.
owner      ACL ID assigned automatically.
mount_dev  Mounted database from the remote VNX in the SRDF environment. This value is unique to the Symmetrix storage system.
channel    RDF channel from where information is read and written. This value is unique to the Symmetrix storage system.
net_path   IP address of the remote VNX.
CMU        VNX Management Unit (unique VNX ID number).

EXAMPLE #3
----------
To display information for the remote VNX, cs110, type:

$ nas_cel -info cs110
id         = 3
name       = cs110
owner      = 0
device     =
channel    =
net_path   = 172.24.102.240
VNX_id     = APM000438070430000
passphrase = nasdocs

EXAMPLE #1 provides information for a description of command outputs.

EXAMPLE #4
----------
To update the Control Station entry for cs110, type:

$ nas_cel -update cs110
operation in progress (not interruptible)...
id = 3 name = cs110 owner = 0 device = channel = net_path = 172.24.102.240 VNX_id = APM000438070430000 passphrase = nasdocs EXAMPLE #1 provides information for a description of command outputs. EXAMPLE #5 ---------- To modify the passphrase and name for the remote Control Station cs110, type: $ nas_cel -modify cs110 -passphrase nasdocs_replication -name cs110_target operation in progress (not interruptible)... id = 3 name = cs110_target owner = 0 device = channel = net_path = 172.24.102.240 VNX_id = APM000438070430000 passphrase = nasdocs_replication EXAMPLE #1 provides information for a description of command outputs. EXAMPLE #6 ---------- To delete the Control Station entry of the remote VNX, cs110_target, type: $ nas_cel -delete cs110_target operation in progress (not interruptible)... id = 3 name = cs110_target owner = 0 device = channel = net_path = 172.24.102.240 VNX_id = APM000438070430000 passphrase = nasdocs_replication EXAMPLE #1 provides information for a description of command outputs. EXAMPLE #7 ---------- To create an interconnect NYs3_LAs2 between Data Mover server_3 and remote Data Mover server_2, and use a bandwidth limit of 2000 Kb/s from 7 A.M. to 6 P.M. Monday through Friday; otherwise, use a bandwidth limit of 8000 Kb/s, type: $ nas_cel -interconnect -create NYs3_LAs2 -source_server server_3 -destination_system cs110 -destination_server server_2 -source_interfaces ip=10.6.3.190 -destination_interfaces ip=10.6.3.173 -bandwidth MoTuWeThFr07:00-18:00/2000,/8000 operation in progress (not interruptible)... id = 30003 name = NYs3_LAs2 source_server = server_3 source_interfaces = 10.6.3.190
destination_system     = cs110
destination_server     = server_2
destination_interfaces = 10.6.3.173
bandwidth schedule     = MoTuWeThFr07:00-18:00/2000,/8000
crc enabled            = yes
number of configured replications = 0
number of replications in transfer = 0
status                 = The interconnect is OK.

Where:
Value                   Definition
-----                   ----------
id                      ID of the interconnect.
name                    Name of the interconnect.
source_server           Name of an available local Data Mover to use for the local side of the interconnect.
source_interfaces       IP addresses available for the local side of the interconnect (at least one, or a name service interface name).
destination_system      Control Station names of the VNX systems available for use in a remote replication session. Local System is the default.
destination_server      Name of an available peer Data Mover to use for the peer side of the interconnect.
destination_interfaces  IP addresses available for the peer side of the interconnect (at least one, or a name service interface name). For loopback interconnects, the interface is fixed at 127.0.0.1.
bandwidth schedule      Bandwidth schedule with one or more comma-separated entries, most specific to least specific.
crc enabled             Indicates that the Cyclic Redundancy Check (CRC) method is in use for verifying the integrity of data sent over the interconnect. CRC is automatically enabled and cannot be disabled.
number of configured replications   Number of replication sessions currently configured.
number of replications in transfer  Number of replications currently in transfer.
status                  Status of the interconnect.

EXAMPLE #8
----------
To modify the bandwidth schedule of the interconnect NYs3_LAs2, type:

$ nas_cel -interconnect -modify NYs3_LAs2 -bandwidth MoWeFr07:00-18:00/2000,TuTh07:00-18:00/4000,/8000
operation in progress (not interruptible)...
id = 30003 name = NYs3_LAs2 source_server = server_3 source_interfaces = 10.6.3.190 destination_system = cs110 destination_server = server_2 destination_interfaces = 10.6.3.173 bandwidth schedule = MoWeFr07:00-18:00/2000,TuTh07:00-18:00/4000,/8000 crc enabled = yes number of configured replications = 0 number of replications in transfer = 0 status = The interconnect is OK. EXAMPLE #7 provides a description of the command outputs. EXAMPLE #9 ----------
To list available interconnects, type: $ nas_cel -interconnect -list id name source_server destination_system destination_server 20001 loopback server_2 cs100 server_2 30001 loopback server_3 cs100 server_3 30003 NYs3_LAs2 server_3 cs110 server_2 EXAMPLE #10 ----------- To pause the interconnect with id=30003, type: $ nas_cel -interconnect -pause id=30003 done EXAMPLE #11 ----------- To resume the interconnect NYs3_LAs2, type: $ nas_cel -interconnect -resume NYs3_LAs2 done EXAMPLE #12 ----------- To validate the interconnect NYs3_LAs2, type: $ nas_cel -interconnect -validate NYs3_LAs2 NYs3_LAs2: validating 9 interface pairs: please wait...ok EXAMPLE #13 ----------- To display the detailed information about the interconnect NYs3_LAs2, type: $ nas_cel -interconnect -info NYs3_LAs2 id = 30003 name = NYs3_LAs2 source_server = server_3 source_interfaces = 10.6.3.190 destination_system = cs110 destination_server = server_2 destination_interfaces = 10.6.3.173 bandwidth schedule = MoWeFr07:00-18:00/2000,TuTh07:00-18:00/4000,/8000 crc enabled = yes number of configured replications = 0 number of replications in transfer = 0 status = The interconnect is OK. EXAMPLE #7 provides a description of the command outputs. EXAMPLE #14 ----------- To delete interconnect NYs3_LAs2, type: $ nas_cel -interconnect -delete NYs3_LAs2 operation in progress (not interruptible)... id = 30003 name = NYs3_LAs2
source_server = server_3 source_interfaces = 10.6.3.190 destination_system = cs110 destination_server = server_2 destination_interfaces = 10.6.3.173 bandwidth schedule = MoWeFr07:00-18:00/2000,TuTh07:00-18:00/4000,/8000 crc enabled = no number of configured replications = 0 number of replications in transfer = 0 status = The interconnect is OK. EXAMPLE #7 provides a description of the command outputs. --------------------------------------------- Last Modified Date: December 3, 2014 1:15 pm EXAMPLE #15 ----------- To enable VDM syncrep service on local and remote Embedded NAS systems. $ nas_cel -syncrep -enable L9C26_CS0 -local_fsidrange 4096,12287 -remote_fsidrange 12288,24575 -local_storage 000196700261 sym_dir=1E:27 rdf_group=99 -remote_storage 000197100007 sym_dir=1E:27 rdf_group=99 Now saving FSID range [12288,24575] on remote system... done Now saving FSID range [4096,12287] on local system... done Now creating LUN mappings (may take several minutes)... done Now adding CTD access to local server server_2... done Now adding CTD access to local server server_3... done Now creating mountpoint for sync replica of NAS database... done Now mounting sync replica of NAS database... done Now enabling sync replication service on remote system... done done EXAMPLE #16 ----------- To disable a VDM syncrep service on local and remote Embedded NAS systems. $ nas_cel -syncrep -disable L9C26_CS0 Now unmounting sync replica of NAS database... done Now deleting mountpoint for sync replica of NAS database... done Now removing CTD access to local server server_2... done Now removing CTD access to local server server_3... done Now deleting local LUN mapping... done Now disabling service (including deleting LUN mapping) on remote system... done Now removing FSID range [12288,24575] on remote system... done Now removing FSID range [4096,12287] on local system... done Now removing other sync replication service settings on local system... done done
nas_checkup

Provides a system health checkup for the VNX.

SYNOPSIS
--------
nas_checkup [-version|-help|-rerun]

DESCRIPTION
-----------
nas_checkup runs scheduled and unscheduled health checks on the VNX, reports problems that are found and the actions needed to fix them, and acts as a system health monitor. The scheduled run time for the nas_checkup command is every 2 weeks by default. If a warning or error is discovered during this time, an alert is posted in Unisphere. Set up email notification for warnings or errors in the Unisphere Notifications page, or modify and load the sample nas_checkup event configuration file. If a problem is discovered that requires EMC Service Personnel assistance, nas_checkup notifies EMC.

OPTIONS
-------
No arguments
    Runs a series of system health checks on the VNX and reports the problems that are found and the actions needed to fix them. No email, callhome, or Unisphere alert is posted when the health check is run unscheduled.
-version
    Displays the version of health check that is run on the VNX.
-help
    Provides help.
-rerun
    Reruns the checks that produced error messages in the previous health checkup. It does not rerun the checks that produced warning or information messages. If there are no checks that produced error messages, the -rerun switch generates a message that there is nothing to rerun.

CHECKS
------
nas_checkup runs a subset of the available checks based on the configuration of your system.
The complete list of available checks is:

Control Station Checks:
  Check if minimum free space exists
  Check if minimum free space exists ns
  Check if enough free space exists
  Check if enough free space exists ns
  Check if NAS Storage API is installed correctly
  Check if NAS Storage APIs match
  Check if NBS clients are started
  Check if NBS configuration exists
  Check if NBS devices are accessible
  Check if NBS service is started
  Check if standby is up
  Check if Symapi data is present
  Check if Symapi is synced with Storage System
  Check integrity of NASDB
  Check if primary is active
  Check all callhome files delivered
  Check if NAS partitions are mounted

Data Mover Checks:
  Check boot files
  Check if hardware is supported
  Check if primary is active
  Check if root filesystem has enough free space
  Check if using standard DART image
  Check MAC address
  Check network connectivity
  Check status

Storage System Checks:
  Check disk emulation type
  Check disk high availability access
  Check disks read cache enabled
  Check disks and storage processors write cache enabled
  Check if access logix is enabled
  Check if FLARE is committed
  Check if FLARE is supported
  Check if microcode is supported
  Check no disks or storage processors are failed over
  Check that no disks or storage processors are faulted
  Check that no hot spares are in use
  Check that no hot spares are rebuilding
  Check control lun size
  Check if storage processors are read cache enabled

FILES
-----
The files associated with system health checkups are:
/nas/log/nas_checkup-run.
Control Station: Checking if NBS configuration exists......................Pass
Control Station: Checking if NBS devices are accessible....................Pass
Control Station: Checking if NBS service is started........................Pass
Control Station: Checking if standby is up.................................N/A
Control Station: Checking if Symapi data is present........................Pass
Control Station: Checking if Symapi is synced with Storage System..........Pass
Control Station: Checking integrity of NASDB...............................Pass
Control Station: Checking all callhome files delivered.....................Pass
Control Station: Checking resolv conf......................................Pass
Control Station: Checking if NAS partitions are mounted....................Pass
Control Station: Checking ipmi connection..................................Pass
Control Station: Checking nas site eventlog configuration..................Pass
Control Station: Checking nas sys mcd configuration........................Pass
Control Station: Checking nas sys eventlog configuration...................Pass
Control Station: Checking logical volume status............................Pass
Control Station: Checking ups is available.................................Fail
Data Movers    : Checking boot files.......................................Pass
Data Movers    : Checking if primary is active.............................Pass
Data Movers    : Checking if root filesystem has enough free space.........Pass
Data Movers    : Checking if using standard DART image.....................Pass
Data Movers    : Checking network connectivity.............................Pass
Data Movers    : Checking status...........................................Pass
Data Movers    : Checking dart release compatibility.......................Pass
Data Movers    : Checking dart version compatibility.......................Pass
Data Movers    : Checking server name......................................Pass
Data Movers    : Checking unique id........................................Pass
Data Movers    : Checking CIFS file server configuration...................N/A
Data Movers    : Checking domain controller connectivity and configuration.N/A
Data Movers    : Checking DNS connectivity and configuration...............N/A
Data Movers    : Checking connectivity to WINS servers.....................N/A
Data Movers    : Checking connectivity to NTP servers......................N/A
Data Movers    : Checking connectivity to NIS servers......................Pass
Data Movers    : Checking virus checker server configuration...............N/A
Data Movers    : Checking if workpart is OK................................Pass
Data Movers    : Checking if free full dump is available...................?
Data Movers    : Checking if each primary data mover has standby...........Fail
Storage System : Checking disk emulation type..............................Pass
Storage System : Checking disk high availability access....................Pass
Storage System : Checking disks read cache enabled.........................Pass
Storage System : Checking disks and storage processors write cache enabled.Pass
Storage System : Checking if access logix is enabled.......................Pass
Storage System : Checking if FLARE is committed............................Pass
Storage System : Checking if FLARE is supported............................Pass
Storage System : Checking if microcode is supported........................Pass
Storage System : Checking no disks or storage processors are failed over...Pass
Storage System : Checking that no disks or storage processors are faulted..Pass
Storage System : Checking that no hot spares are in use....................Pass
Storage System : Checking that no hot spares are rebuilding................Pass
Storage System : Checking minimum control lun size.........................Pass
Storage System : Checking maximum control lun size.........................Fail
Storage System : Checking system lun configuration.........................Pass
Storage System : Checking if storage processors are read cache enabled.....Pass
Storage System : Checking if auto assign are disabled for all luns.........Pass
Storage System : Checking if auto trespass are disabled for all luns.......Pass
Storage System : Checking backend connectivity.............................Pass
--------------------------------------------------------------------------------
One or more warnings are shown below. It is recommended that you follow the
instructions below to correct the problem then try again.

-----------------------------------Information----------------------------------
Control Station: Check ups is available
Symptom: The following UPS emcnasUPS_i0 emcnasUPS_i1 is(are) not available

Data Movers: Check if each primary data mover has standby
Symptom: The following primary Data Movers server_2, server_3 does not have a
standby Data Mover configured. It is recommended that each primary Data Mover
have a standby configured for it with automatic failover policy for high
availability.

Storage System: Check maximum control lun size
Symptom: * The size of control LUN 5 is 32 GB. It is larger than the
recommended size of 14 GB. The additional space will be reserved by the system.
--------------------------------------------------------------------------------

------------------------------------Warnings------------------------------------
Data Movers: Check if free full dump is available
Symptom: Cannot get workpart structure. Command failed.
* Command: /nas/sbin/workpart -r
* Command output: open: Permission denied
* Command exit code: 2
Action : Contact EMC Customer Service and refer to EMC Knowledgebase emc146016.
Include this log with your support request.
--------------------------------------------------------------------------------

EXAMPLE #2
----------
To display help for nas_checkup, type:

$ nas_checkup -help
Check Version: 5.6.23.1
Check Command: /nas/bin/nas_checkup
usage: nas_checkup [ -help | -version ]

EXAMPLE #3
----------
To display the version of the nas_checkup utility, type:

$ nas_checkup -version
Check Version: 5.6.23.1
Check Command: /nas/bin/nas_checkup

DIAGNOSTICS
-----------
nas_checkup returns one of the following exit statuses:

0   - No problems found
1   - nas_checkup posted information
2   - nas_checkup discovered a warning
3   - nas_checkup discovered an error
255 - Any other error

Examples of errors that could cause a 255 exit status include, but are not
limited to:
- nas_checkup is run when another instance of nas_checkup is running
- nas_checkup is run by someone other than root or the administrator group
  (generally nasadmin)
- nas_checkup is run on the standby Control Station

--------------------------------------
Last Modified: March 3, 2011 1:30 pm
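The exit statuses listed under DIAGNOSTICS lend themselves to scripted monitoring. The helper below is a minimal sketch, not part of the product: the function name and severity labels are illustrative, and only the mapping itself comes from the DIAGNOSTICS table above.

```shell
# Hypothetical wrapper helper: map a nas_checkup exit status to a severity
# label, following the DIAGNOSTICS table. Labels are illustrative only.
checkup_severity() {
  case "$1" in
    0)   echo "ok" ;;        # No problems found
    1)   echo "info" ;;      # nas_checkup posted information
    2)   echo "warning" ;;   # nas_checkup discovered a warning
    3)   echo "error" ;;     # nas_checkup discovered an error
    255) echo "failed" ;;    # could not run (already running, wrong user,
                             # or run on the standby Control Station)
    *)   echo "unknown" ;;
  esac
}

# Typical use (sketch): run the check, then branch on its status:
#   /nas/bin/nas_checkup; checkup_severity $?
checkup_severity 3   # prints "error"
```

A cron job could use this to decide whether to page an operator only on "error" or "failed", while logging "warning" and "info" for later review.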
nas_ckpt_schedule Manages SnapSure checkpoint scheduling for the VNX. SYNOPSIS -------- nas_ckpt_schedule -list | -info {-all|
If once is specified, the hours and minutes for the snapshot to be run must be specified. A start date and name may be optionally assigned to the checkpoint. For a one-time checkpoint schedule, only one runtime can be provided. For one-time schedules, the option -ckpt_name can specify a name for the single checkpoint; if omitted, the default naming is used (
-modify {
ufs1 once at 3:09 P.M., type: $ nas_ckpt_schedule -create ufs1_ckpt_sched4 -filesystem ufs1 -description "One-time Checkpoint Schedule for ufs1" -recurrence once -runtimes 15:09 This command returns no output. EXAMPLE #5 ---------- To list all checkpoint schedules, type: $ nas_ckpt_schedule -list id = 6 name = ufs1_ckpt_sched2 description = Weekly Checkpoint schedule for ufs1 state = Pending next run = Mon Nov 13 18:00:00 EST 2006 id = 80 name = ufs1_ckpt_sched4 description = One-time Checkpoint Schedule for ufs1 state = Pending next run = Tue Nov 14 15:09:00 EST 2006 id = 5 name = ufs1_ckpt_sched1 description = Daily Checkpoint schedule for ufs1 state = Pending next run = Mon Nov 13 20:00:00 EST 2006 id = 7 name = ufs1_ckpt_sched3 description = Monthly Checkpoint schedule for ufs1 state = Pending next run = Wed Nov 15 19:00:00 EST 2006 EXAMPLE #6 ---------- To modify the recurrence of the checkpoint schedule ufs1_ckpt_sched3 to run every 10th of the month, type: $ nas_ckpt_schedule -modify ufs1_ckpt_sched3 -recurrence monthly -every 1 -days_of_month 10 This command returns no output. EXAMPLE #7 ---------- To get detailed information about checkpoint schedule, type: $ nas_ckpt_schedule -info ufs1_ckpt_sched3 id = 7 name = ufs1_ckpt_sched3 description = Monthly Checkpoint schedule for ufs1 CVFS name prefix = monthly tasks = Checkpoint ckpt_ufs1_ckpt_sched3_001 on filesystem id=25, Checkpoint ckpt_ufs1_ckpt_sched3_002 on filesystem id=25, Checkpoint ckpt_ufs1_ckpt_sched3_003 on filesystem id=25, Checkpoint ckpt_ufs1_ckpt_sched3_004 on filesystem id=25, Checkpoint ckpt_ufs1_ckpt_sched3_005 on filesystem id=25, Checkpoint ckpt_ufs1_ckpt_sched3_006 on filesystem id=25, Checkpoint ckpt_ufs1_ckpt_sched3_007 on filesystem id=25, Checkpoint ckpt_ufs1_ckpt_sched3_008 on filesystem id=25, Checkpoint ckpt_ufs1_ckpt_sched3_009 on filesystem id=25, Checkpoint ckpt_ufs1_ckpt_sched3_010 on filesystem id=25, Checkpoint ckpt_ufs1_ckpt_sched3_011 on filesystem id=25, Checkpoint
ckpt_ufs1_ckpt_sched3_012 on filesystem id=25 next run = Sun Dec 10 19:00:00 EST 2006 state = Pending recurrence = every 1 months start on = Mon Nov 13 16:47:51 EST 2006 end on = at which times = 19:00 on which days of week = on which days of month = 10 EXAMPLE #8 ---------- To pause a checkpoint schedule, type: $ nas_ckpt_schedule -pause ufs1_ckpt_sched1 This command returns no output. EXAMPLE #9 ---------- To resume a checkpoint schedule, type: $ nas_ckpt_schedule -resume ufs1_ckpt_sched1 This command returns no output. EXAMPLE #10 ----------- To delete a checkpoint schedule, type: $ nas_ckpt_schedule -delete ufs1_ckpt_sched2 This command returns no output. ---------------------------------------------------------------------- Last Modified: March 4 2011, 11:20 am
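Listings such as the one in EXAMPLE #5 can be post-processed from a script. The sketch below pairs up the `id =` and `name =` fields of the `-list` output; the sample text is abridged from EXAMPLE #5, and the awk pattern is an illustrative assumption, not a product feature. Live use would pipe the real command instead: `nas_ckpt_schedule -list | awk ...`.

```shell
# Sample abridged from the nas_ckpt_schedule -list output shown above.
list='id = 6
name = ufs1_ckpt_sched2
id = 80
name = ufs1_ckpt_sched4'

# Remember the last "id" seen, print it alongside each "name".
echo "$list" | awk '$1=="id"{id=$3} $1=="name"{print id, $3}'
# prints:
# 6 ufs1_ckpt_sched2
# 80 ufs1_ckpt_sched4
```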
nas_config Manages a variety of configuration settings on the Control Station, some of which are security based. SYNOPSIS -------- nas_config -IPalias {-list | -create [-name
value is 60 minutes. Session timeout is enabled by default. To disable session timeout, type off or 0 to indicate zero minutes. The -sessiontimeout option enables the native timeout properties of the underlying shells on the Control Station. The relevant shell man page provides a description of how the mechanism works. -password Prompts for specific password policy definitions. The current value for each policy definition is shown in brackets. [-min <6..15>] defines the minimum length of the new password. The default length is eight characters. The length has to be a value between 6 and 15 characters. [-retries
# /nas/sbin/nas_config -IPalias -delete 0
All current sessions using alias eth2:0 will terminate
Do you want to continue [yes or no]: yes
done

EXAMPLE #4
----------
To generate and install a certificate for the Apache Web server on the Control
Station, type:

# /nas/sbin/nas_config -ssl
Installing a new SSL certificate requires restarting the Apache web server.
Do you want to proceed? [y/n]: y
New SSL certificate has been generated and installed successfully.

EXAMPLE #5
----------
To change the session timeout value from the default value of 60 minutes to
100 minutes, type:

# /nas/sbin/nas_config -sessiontimeout 100
done

EXAMPLE #6
----------
To disable session timeout, type:

# /nas/sbin/nas_config -sessiontimeout 0
done

or

# /nas/sbin/nas_config -sessiontimeout off
done

EXAMPLE #7
----------
To set specific password policy definitions, type:

# /nas/sbin/nas_config -password
Minimum length for a new password (Between 6 and 15): [8]
Number of attempts to allow before failing: [3]
Number of new characters (not in the old password): [3]
Number of digits that must be in the new password: [1]
Number of special characters that must be in a new password: [0]
Number of lower case characters that must be in password: [0]
Number of upper case characters that must be in password: [0]

EXAMPLE #8
----------
To set the minimum length of a new password to 10 characters, type:

# /nas/sbin/nas_config -password -min 10

EXAMPLE #9
----------
To reset the current password policy definitions to their default values, type:
# /nas/sbin/nas_config -password -default --------------------------------------------- Last Modified: March 4, 2011 12:45 pm
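The defaults shown in EXAMPLE #7 (minimum length 8, at least 1 digit) can be pre-checked before handing a candidate password to the interactive prompt. The helper below is a sketch under those assumptions: the function name is hypothetical, and it deliberately checks only the two policy fields named here, not retries or new-character counts.

```shell
# Hypothetical pre-check against the default policy shown in EXAMPLE #7:
# minimum length 8 and at least one digit. Other policy fields are not checked.
meets_default_policy() {
  p="$1"
  [ "${#p}" -ge 8 ] || return 1          # default minimum length is 8
  case "$p" in
    *[0-9]*) : ;;                        # default requires at least 1 digit
    *) return 1 ;;
  esac
  return 0
}

meets_default_policy "abc12345" && echo "ok" || echo "rejected"   # prints "ok"
```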
nas_connecthome

Configures email, FTP, modem, HTTPS, and ESRS transport mechanisms for
transporting Callhome event files to user-configured destinations.

SYNOPSIS
--------
nas_connecthome
-info
| -test {-email_1|-email_2|-ftp_1|-ftp_2|-modem_1|-modem_2|-https|-esrs}
| -modify
  [-modem_priority {Disabled|1|2|3}]
  [-modem_number
Displays the enabled and disabled configuration parameters for all transport mechanisms. -test {-email_1|-email_2|-ftp_1|-ftp_2|-modem_1|-modem_2|-https|-esrs} Tests connectivity to the destination configured and enabled for the specified transport mechanism. -modify Modifies the following configuration parameters for any or all transport mechanisms: [-modem_priority {Disabled|1|2|3}] Enables modem as a Primary, Secondary, or Tertiary transport mechanism. Specifying Disabled removes modem as a transport mechanism. [-modem_number
[-ftp_mode {active|passive}] Sets or modifies the transfer mode of the primary FTP transport mechanism. Note: Specifying "" (empty double quotes) reverts to the default value of active. [-ftp_server_2
[-email_subject
of yes. [-dial_in_enabled {yes|no}] Enables or disables dial-in login sessions. Note: Specifying "" (empty double quotes) reverts to the default value of yes. SEE ALSO -------- Configuring Events and Notifications on VNX for File. EXAMPLE #1 ---------- To display configuration information, type: # /nas/sbin/nas_connecthome -info ConnectHome Configuration: Encryption Enabled = yes Dial In : Enabled = yes Modem phone number = 9123123123 Site ID = MY SITE Serial number = APM00054703223 ESRS : Priority = 1 Email : Priority = 1 Sender Address = admin@yourcompany.com Recipient Address(es) = emailalert@emc.com Subject = CallHome Alert Primary : Email Server = backup.mailhub.company.com Secondary : Email Server = FTP : Priority = 2 Primary : FTP Server = 1.2.3.4 FTP Port = 22 FTP User Name = onalert FTP Password = ********** FTP Remote Folder = incoming FTP Transfer Mode = active Secondary : FTP Server = 1.2.4.4 FTP Port = 22 FTP User Name = onalert FTP Password = ********** FTP Remote Folder = incoming FTP Transfer Mode = active Modem : Priority = Disabled Primary : Phone Number = BT Tymnet = no Secondary : Phone Number = BT Tymnet = no EXAMPLE #2 ---------- To test the primary email server, type: # /nas/sbin/nas_connecthome -test -email_1
--------------------------------------------------------- ConnectEMC 2.0.27-bl18 Wed Aug 22 10:24:32 EDT 2007 RSC API Version: 2.0.27-bl18 Copyright (C) EMC Corporation 2003-2007, all rights reserved. --------------------------------------------------------- Reading configuration file: ConnectEMC.ini. Run Service begin... Test succeeded for Primary Email. EXAMPLE #3 ---------- To modify the configuration information, type: # /nas/sbin/nas_connecthome -modify -esrs_priority 1 --------------------------------------------------------- ConnectEMC 2.0.27-bl18 Wed Aug 22 10:24:32 EDT 2007 RSC API Version: 2.0.27-bl18 Copyright (C) EMC Corporation 2003-2007, all rights reserved. --------------------------------------------------------- Reading configuration file: ConnectEMC.ini. Run Service begin... Modify succeeded for Primary ESRS. -------------------------------------- Last Modified: September 26, 2012 11:15a.m
nas_copy Creates a replication session for a one-time copy of a file system. This command is available with VNX Replicator. SYNOPSIS -------- nas_copy -name
specifies the storage system on which the destination file system resides. Use the nas_storage -list command to obtain the serial number of the storage system. [-from_base {<ckpt_name>|id=
To create a one-time copy of a checkpoint file system with session name
ufs1_replica1, with the source checkpoint ufs1_ckpt1 and destination pool
clar_r5_performance on the interconnect NYs3_LAs2, source interface
10.6.3.190, and destination interface 10.6.3.173, type:

$ nas_copy -name ufs1_replica1 -source -ckpt ufs1_ckpt1 -destination -pool
clar_r5_performance -interconnect NYs3_LAs2 -source_interface 10.6.3.190
-destination_interface 10.6.3.173
OK

EXAMPLE #2
----------
To create a one-time copy of a read-only file system for the session
ufs1_replica1, with source file system ufs1, overwriting an existing
destination file system ufs1 on the interconnect NYs3_LAs2, source interface
10.6.3.190, and destination interface 10.6.3.173, type:

$ nas_copy -name ufs1_replica1 -source -fs ufs1 -destination -fs ufs1
-interconnect NYs3_LAs2 -source_interface 10.6.3.190 -destination_interface
10.6.3.173 -overwrite_destination
OK

EXAMPLE #3
----------
To initiate a differential copy of ufs1_ckpt2 to the ufs1_destination file
system using ufs1_ckpt1 as the common base, using the -from_base option, type:

$ nas_copy -name ufs1_replica1 -source -ckpt ufs1_ckpt2 -destination -fs
ufs1_destination -from_base ufs1_ckpt1 -interconnect NYs3_LAs2
OK

Caution: Using the -from_base option overrides any common base that may exist.
Ensure that the specified checkpoint represents the correct state of the
destination file system.
EXAMPLE #4 ---------- To refresh the destination of the replication session ufs1_replica1 for the source checkpoint ufs1_ckpt1 and destination file system ufs1 on the interconnect NYs3_LAs2, type: $ nas_copy -name ufs1_replica1 -source -ckpt ufs1_ckpt1 -destination -fs ufs1 -interconnect NYs3_LAs2 -refresh OK EXAMPLE #5 ---------- To perform a full copy of the source checkpoint to the destination for the replication session ufs1_replica1 with the source file system ufs1 and destination file system ufs1 on the interconnect NYs3_LAs2, type: $ nas_copy -name ufs1_replica1 -source -fs ufs1 -destination -fs ufs1 -interconnect NYs3_LAs2 -overwrite_destination -full_copy -background Info 26843676673: In Progress: Operation is still running. Check task id 4177 on the Task Status screen for results. --------------------------------------- Last Modified: July 13, 2011 11:00 am
nas_cs Manages the configuration properties of the Control Station. SYNOPSIS -------- nas_cs -info [-timezones] | -set [-hostname
Sets the Domain Name System of which the primary Control Station is a member. It can accept valid domain names. [-search_domain
To set the hostname and nat1_ip4address for the primary Control Station, type:

$ nas_cs -set -hostname Ml9q26-cs0 -nat1_ip4address 10.246.124.63
OK

EXAMPLE #3
----------
To set the nat1_ip6address for the primary Control Station, type:

$ nas_cs -set -nat1_ip6address 2620:0:170:260:16ff:fe5d:535c:2467/64
OK

EXAMPLE #4
----------
To set the DNS domain, search domains, and DNS servers for the primary Control
Station, type:

$ nas_cs -set -dns_domain eng.lss.emc.com -search_domain
lss.emc.com,rtp.lab.emc.com -dns_servers 172.24.175.172,172.24.175.173
OK

EXAMPLE #5
----------
To set the session monitor timeout and session idle timeout for the primary
Control Station, type:

$ nas_cs -set -session_monitor_timeout 2 -session_idle_timeout 30
OK

EXAMPLE #6
----------
To set the date, time, timezone, and NTP servers for the primary Control
Station, type:

$ nas_cs -set -time 200811070205 -timezone America/New_York -ntp_server
128.221.252.0
OK

EXAMPLE #7
----------
To reboot the primary Control Station, type:

$ nas_cs -reboot
OK

-------------------------------------------------------------
Last modified: May 14, 2012 11:45 am
nas_dbtable

Displays the table records of the Control Station.

SYNOPSIS
--------
nas_dbtable

To execute the command against a database that resides in the Data Mover area:
-info -mover
The
===========
origin = Enumeration
    Unknown    : 0
    Secmap     : 16
    Localgroup : 32
    Etc        : 48
    Nis        : 64
    AD         : 80
    Usrmap     : 96
    Ldap       : 112
    Ntx        : 128
xidType = Enumeration
    unknown_name : -2
    unknown_sid  : -1
    unknown_type : 0
    user         : 1
    group        : 2
fxid = Unsigned Integer
    size : 4
cdate = Date
gid = Unsigned Integer
    size : 4
name = String, length container
    size : 2

EXAMPLE #2
----------
To filter the records of the Secmap schema, type:

$ nas_dbtable -query Mapping -mover
nas_devicegroup Manages an established MirrorView/Synchronous (MirrorView/S) consistency group, also known as a device group. SYNOPSIS -------- nas_devicegroup -list | -info {
SEE ALSO
--------
Using MirrorView/Synchronous with VNX for Disaster Recovery, nas_acl, and
nas_logviewer.

STORAGE SYSTEM OUTPUT
---------------------
The number associated with the storage device depends on the storage system
attached to the system; for MirrorView/S, some VNX for block systems display a
prefix of APM before a set of integers, for example, APM00033900124-0019.

The VNX for block supports the following system-defined AVM storage pools for
MirrorView/S only: cm_r1, cm_r5_performance, cm_r5_economy, cmata_archive, and
cmata_r3.

EXAMPLE #1
----------
To list the configured MirrorView/S device groups that are available, type:

$ nas_devicegroup -list
ID  name        owner  storage ID      acl  type
2   mviewgroup  500    APM00053001549  0    MVIEW

EXAMPLE #2
----------
To display detailed information for a MirrorView/S device group, type:

$ nas_devicegroup -info mviewgroup
Sync with CLARiiON backend ...... done
name              = mviewgroup
description       =
uid               = 50:6:1:60:B0:60:27:20:0:0:0:0:0:0:0:0
state             = Synchronized
role              = Primary
condition         = Active
recovery policy   = Automatic
number of mirrors = 16
mode              = SYNC
owner             = 500
mirrored disks    =
local clarid      = APM00053001549
remote clarid     = APM00053001552
mirror direction  = local -> remote

Where:
Value            Definition
-----            ----------
Sync with CLARiiON storage system
                 Indicates that a sync with the VNX for block was performed to
                 retrieve the most recent information. This does not appear if
                 you specify -info -sync no.
name             Name of the device group.
description      Brief description of the device group.
uid              UID assigned, based on the system.
state            State of the device group (for example, Consistent,
                 Synchronized, Out-of-Sync, Synchronizing, Scrambled, Empty,
                 Incomplete, or Local Only).
role             Whether the current system is the Primary (source) or
                 Secondary (destination).
condition        Whether the group is functioning (Active), Inactive, Admin
                 Fractured (suspended), Waiting on Sync, System Fractured
                 (which indicates link down), or Unknown.
recovery policy  Type of recovery policy (Automatic is the default and
                 recommended value for the group during storage system
                 configuration; if Manual is set, use -resume after a link
                 down failure).
number of mirrors
                 Number of mirrors in the group.
mode             MirrorView mode (always SYNC in this release).
owner            User to whom the object is assigned, indicated by the index
                 number in the access control level table. nas_acl provides
                 information.
mirrored disks   Comma-separated list of disks that are mirrored.
local clarid     APM number of the local VNX for block storage array.
remote clarid    APM number of the remote VNX for block storage array.
mirror direction On the primary system, local to remote; on the destination
                 system, local from remote.

EXAMPLE #3
----------
To display detailed information about a MirrorView/S device group without
synchronizing the Control Station's view with the VNX for block, type:

$ nas_devicegroup -info id=2 -sync no
name              = mviewgroup
description       =
uid               = 50:6:1:60:B0:60:27:20:0:0:0:0:0:0:0:0
state             = Consistent
role              = Primary
condition         = Active
recovery policy   = Automatic
number of mirrors = 16
mode              = SYNC
owner             = 500
mirrored disks    =
local clarid      = APM00053001549
remote clarid     = APM00053001552
mirror direction  = local -> remote

EXAMPLE #4
----------
To halt operation of the specified device group, as root user, type:

# nas_devicegroup -suspend mviewgroup
Sync with CLARiiON backend ...... done
STARTING an MV SUSPEND operation.
Device group: mviewgroup ............ done
The MV SUSPEND operation SUCCEEDED.
done

EXAMPLE #5
----------
To resume operations of the specified device group, as root user, type:

# nas_devicegroup -resume mviewgroup
Sync with CLARiiON backend ...... done
STARTING an MV RESUME operation.
Device group: mviewgroup ............ done
The MV RESUME operation SUCCEEDED.
done

----------------------------------------------------------------
Last modified: May 11, 2011 10:00 am.
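For scripted monitoring of a device group, the `state` field of the `-info` output can be extracted with sed. The sample below is abridged from the `nas_devicegroup -info` output above; live use would be `nas_devicegroup -info mviewgroup -sync no | sed -n 's/^state *= *//p'`. The pipeline itself is an illustrative sketch, not a product feature.

```shell
# Sample abridged from nas_devicegroup -info output.
info='name  = mviewgroup
state = Synchronized
role  = Primary'

# Print only the value of the "state" field.
echo "$info" | sed -n 's/^state *= *//p'
# prints: Synchronized
```

A watch loop could compare this value against "Synchronized" or "Consistent" and alert when the group becomes System Fractured.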
nas_disk Manages the disk table. SYNOPSIS -------- nas_disk -list | -delete
EXAMPLE #1
----------
To list the disk table, type:

$ nas_disk -list
id   inuse  sizeMB  storageID-devID     type  name        servers
1    y      22874   000197100127-00001  STD   root_disk   1,2
2    y      11619   000197100127-00002  STD   root_ldisk  1,2
3    y      2077    000197100127-00008  STD   d3          1,2
4    y      2077    000197100127-00009  STD   d4          1,2
5    y      4154    000197100127-00006  STD   d5          1,2
6    y      65542   000197100127-00007  STD   d6          1,2
7    y      17261   000197100127-00021  DSL   d7          1,2
8    n      17261   000197100127-00022  DSL   d8          1,2
9    n      17261   000197100127-00023  DSL   d9          1,2
10   n      17261   000197100127-00024  DSL   d10         1,2
11   n      17261   000197100127-00025  DSL   d11         1,2
12   n      17261   000197100127-00026  DSL   d12         1,2
13   n      17261   000197100127-00027  DSL   d13         1,2
14   n      17261   000197100127-00028  DSL   d14         1,2
15   y      17261   000197100127-00029  DSL   d15         1,2
17   y      17261   000197100127-0002A  DSL   d17         1,2

Where:
Value            Definition
-----            ----------
id               ID of the disk (assigned automatically).
inuse            Whether the disk is used by any type of volume or file
                 system.
sizeMB           Total size of the disk.
storageID-devID  ID of the system and device associated with the disk.
type             Type of disk, contingent on the system attached. CLSTD,
                 CLATA, CMSTD, CLEFD, CMEFD, CMATA, MIXED (indicates tiers
                 used in the pool contain multiple disk types), Performance,
                 Capacity, Extreme_performance, Mirrored_mixed,
                 Mirrored_performance, Mirrored_capacity, and
                 Mirrored_extreme_performance are VNX disk types; STD, BCV,
                 R1BCV, R2BCV, R1STD, R2STD, ATA, R1ATA, R2ATA, BCVA, R1BCA,
                 R2BCA, EFD, FTS, R1FTS, R2FTS, BCVF, R1BCF, R2BCF, BCVMIXED,
                 R1MIXED, R2MIXED, R1BCVMIXED, and R2BCVMIXED are Symmetrix
                 disk types.
name             Name of the disk; dd in a disk name indicates a remote disk.
servers          Servers that have access to this disk.
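The `inuse` column makes it easy to find free disks from a script. The sketch below filters the `nas_disk -list` output for rows whose second column is `n`; the sample rows are copied from EXAMPLE #1, and in practice you would pipe the live command instead: `nas_disk -list | awk 'NR>1 && $2=="n" {print $6}'`.

```shell
# Sample rows copied from the nas_disk -list output in EXAMPLE #1.
sample='id inuse sizeMB storageID-devID type name servers
7 y 17261 000197100127-00021 DSL d7 1,2
8 n 17261 000197100127-00022 DSL d8 1,2
9 n 17261 000197100127-00023 DSL d9 1,2'

# Skip the header (NR>1), keep rows where inuse is "n", print the disk name.
echo "$sample" | awk 'NR>1 && $2=="n" {print $6}'
# prints:
# d8
# d9
```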
EXAMPLE #2 ---------- To list the disk table for the system with a Symmetrix system, type: $ nas_disk -list id inuse sizeMB storageID-devID type name servers 1 y 11507 000190100530-00FB STD root_disk 1,2,3,4,5,6,7,8 2 y 11507 000190100530-00FC STD root_ldisk 1,2,3,4,5,6,7,8 3 y 2076 000190100530-00FD STD d3 1,2,3,4,5,6,7,8 4 y 2076 000190100530-00FE STD d4 1,2,3,4,5,6,7,8 5 y 2076 000190100530-00FF STD d5 1,2,3,4,5,6,7,8 6 y 65536 000190100530-04D4 STD d6 1,2,3,4,5,6,7,8 7 n 28560 000190100530-0102 STD d7 1,2,3,4,5,6,7,8 8 n 28560 000190100530-0103 STD d8 1,2,3,4,5,6,7,8 9 n 28560 000190100530-0104 STD d9 1,2,3,4,5,6,7,8 10 n 28560 000190100530-0105 STD d10 1,2,3,4,5,6,7,8 11 n 28560 000190100530-0106 STD d11 1,2,3,4,5,6,7,8 12 n 28560 000190100530-0107 STD d12 1,2,3,4,5,6,7,8 13 n 28560 000190100530-0108 STD d13 1,2,3,4,5,6,7,8 14 n 28560 000190100530-0109 STD d14 1,2,3,4,5,6,7,8 15 n 28560 000190100530-010A STD d15 1,2,3,4,5,6,7,8 16 n 28560 000190100530-010B STD d16 1,2,3,4,5,6,7,8
17 n 28560 000190100530-010C STD d17 1,2,3,4,5,6,7,8 18 n 28560 000190100530-010D STD d18 1,2,3,4,5,6,7,8 19 n 28560 000190100530-010E STD d19 1,2,3,4,5,6,7,8 20 n 28560 000190100530-010F STD d20 1,2,3,4,5,6,7,8 21 n 28560 000190100530-0110 STD d21 1,2,3,4,5,6,7,8 22 n 28560 000190100530-0111 STD d22 1,2,3,4,5,6,7,8 23 n 28560 000190100530-0112 STD d23 1,2,3,4,5,6,7,8 24 n 28560 000190100530-0113 STD d24 1,2,3,4,5,6,7,8 [....] 155 n 28560 000190100530-0196 STD d155 1,2,3,4,5,6,7,8 156 n 28560 000190100530-0197 STD d156 1,2,3,4,5,6,7,8 157 n 28560 000190100530-0198 BCV rootd157 1,2,3,4,5,6,7,8 158 n 28560 000190100530-0199 BCV rootd158 1,2,3,4,5,6,7,8 159 n 28560 000190100530-019A BCV rootd159 1,2,3,4,5,6,7,8 160 n 28560 000190100530-019B BCV rootd160 1,2,3,4,5,6,7,8 161 n 28560 000190100530-019C BCV rootd161 1,2,3,4,5,6,7,8 162 n 28560 000190100530-019D BCV rootd162 1,2,3,4,5,6,7,8 163 n 28560 000190100530-019E BCV rootd163 1,2,3,4,5,6,7,8 164 n 28560 000190100530-019F BCV rootd164 1,2,3,4,5,6,7,8 165 n 28560 000190100530-01A0 BCV rootd165 1,2,3,4,5,6,7,8 166 n 28560 000190100530-01A1 BCV rootd166 1,2,3,4,5,6,7,8 167 n 28560 000190100530-01A2 BCV rootd167 1,2,3,4,5,6,7,8 168 n 28560 000190100530-01A3 BCV rootd168 1,2,3,4,5,6,7,8 169 n 28560 000190100530-01A4 BCV rootd169 1,2,3,4,5,6,7,8 170 n 28560 000190100530-01A5 BCV rootd170 1,2,3,4,5,6,7,8 171 n 28560 000190100530-01A6 BCV rootd171 1,2,3,4,5,6,7,8 172 n 28560 000190100530-01A7 BCV rootd172 1,2,3,4,5,6,7,8 173 n 28560 000190100530-01A8 BCV rootd173 1,2,3,4,5,6,7,8 174 n 28560 000190100530-01A9 BCV rootd174 1,2,3,4,5,6,7,8 175 n 28560 000190100530-01AA BCV rootd175 1,2,3,4,5,6,7,8 176 n 28560 000190100530-01AB BCV rootd176 1,2,3,4,5,6,7,8 177 n 28560 000190100530-01AC BCV rootd177 1,2,3,4,5,6,7,8 178 n 28560 000190100530-01AD BCV rootd178 1,2,3,4,5,6,7,8 179 n 28560 000190100530-01AE BCV rootd179 1,2,3,4,5,6,7,8 180 n 28560 000190100530-01AF BCV rootd180 1,2,3,4,5,6,7,8 181 n 28560 000190100530-01B0 BCV rootd181 
1,2,3,4,5,6,7,8 182 n 28560 000190100530-01B1 BCV rootd182 1,2,3,4,5,6,7,8 183 n 28560 000190100530-01B2 BCV rootd183 1,2,3,4,5,6,7,8 184 n 28560 000190100530-01B3 BCV rootd184 1,2,3,4,5,6,7,8 185 n 28560 000190100530-01B4 BCV rootd185 1,2,3,4,5,6,7,8 186 n 28560 000190100530-01B5 BCV rootd186 1,2,3,4,5,6,7,8 187 n 11507 000190100530-051D EFD d187 1,2,3,4,5,6,7,8 188 n 11507 000190100530-051E EFD d188 1,2,3,4,5,6,7,8 189 n 11507 000190100530-051F EFD d189 1,2,3,4,5,6,7,8 190 n 11507 000190100530-0520 EFD d190 1,2,3,4,5,6,7,8 191 n 11507 000190100530-0521 EFD d191 1,2,3,4,5,6,7,8 192 n 11507 000190100530-0522 EFD d192 1,2,3,4,5,6,7,8 193 n 11507 000190100530-0523 EFD d193 1,2,3,4,5,6,7,8 194 n 11507 000190100530-0524 EFD d194 1,2,3,4,5,6,7,8 195 n 11507 000190100530-0525 EFD d195 1,2,3,4,5,6,7,8 196 n 11507 000190100530-0526 EFD d196 1,2,3,4,5,6,7,8 197 n 11507 000190100530-0527 EFD d197 1,2,3,4,5,6,7,8 198 n 11507 000190100530-0528 EFD d198 1,2,3,4,5,6,7,8 199 n 11507 000190100530-0529 EFD d199 1,2,3,4,5,6,7,8 200 n 11507 000190100530-052A EFD d200 1,2,3,4,5,6,7,8 201 n 11507 000190100530-052B EFD d201 1,2,3,4,5,6,7,8 202 n 11507 000190100530-052C EFD d202 1,2,3,4,5,6,7,8 203 n 11507 000190100530-052D EFD d203 1,2,3,4,5,6,7,8 204 y 11507 000190100530-052E EFD d204 1,2,3,4,5,6,7,8 Note: This is a partial listing due to the length of the output. EXAMPLE #1 provides a description of command outputs. EXAMPLE #3 ---------- To view information for disk d7 for a system with a VNX for block, type: $ nas_disk -info d7
id          = 7
name        = d7
acl         = 0
in_use      = True
pool        = TP1
size (MB)   = 273709
type        = Mixed
protection  = RAID5(4+1)
stor_id     = FCNTR074200038
stor_dev    = 0012
volume_name = d7
storage_profiles = TP1
thin        = True
tiering_policy = Auto-tier
compressed  = False
mirrored    = False
servers     = server_2,server_3,server_4,server_5
 server = server_2 addr=c0t1l2
 server = server_2 addr=c32t1l2
 server = server_2 addr=c16t1l2
 server = server_2 addr=c48t1l2
 server = server_3 addr=c0t1l2
 server = server_3 addr=c32t1l2
 server = server_3 addr=c16t1l2
 server = server_3 addr=c48t1l2
 server = server_4 addr=c0t1l2
 server = server_4 addr=c32t1l2
 server = server_4 addr=c16t1l2
 server = server_4 addr=c48t1l2
 server = server_5 addr=c0t1l2
 server = server_5 addr=c32t1l2
 server = server_5 addr=c16t1l2
 server = server_5 addr=c48t1l2

Where:
Value            Definition
-----            ----------
id               ID of the disk (assigned automatically).
name             Name of the disk.
acl              Access control level value of the disk.
in_use           Whether the disk is used by any type of volume or file
                 system.
pool             Name of the storage pool in use.
size (MB)        Total size of the disk.
type             Type of disk, contingent on the system attached; VNX for
                 block disk types are CLSTD, CLATA, CMSTD, CLEFD, CLSAS,
                 CMEFD, CMATA, MIXED (indicates the tiers used in the pool
                 contain multiple disk types), Performance, Capacity,
                 Extreme_performance, Mirrored_mixed, Mirrored_performance,
                 Mirrored_capacity, and Mirrored_extreme_performance.
protection       The type of disk protection that has been assigned.
stor_id          ID of the system associated with the disk.
stor_dev         ID of the device associated with the disk.
volume_name      Name of the volume residing on the disk.
storage_profiles The storage profiles to which the disk belongs.
thin             Indicates whether the block system uses thin provisioning.
                 Values are: True, False.
tiering_policy   Indicates the tiering policy in effect. If the initial
                 tier and the tiering policy are the same, the values are:
                 Auto-Tier, Highest Available Tier, Lowest Available Tier.
                 If the initial tier and the tiering policy are not the
                 same, the values are: Auto-Tier/No Data Movement, Highest
                 Available Tier/No Data Movement, Lowest Available Tier/No
                 Data Movement.
compressed       For VNX for block, indicates whether data is compressed.
                 Values are: True, False, Mixed (indicates some of the
                 LUNs, but not all, are compressed).
mirrored         Indicates whether the disk is mirrored.
servers          Lists the servers that have access to this disk.
addr             Path to system (SCSI address).

EXAMPLE #4
----------
To view information for disk d205 for the system with a Symmetrix
system, type:

$ nas_disk -info d205

id          = 205
name        = d205
acl         = 0
in_use      = True
pool        = SG0
size (MB)   = 28560
type        = Mixed
protection  = RAID1
symm_id     = 000190100530
symm_dev    = 0539
volume_name = d205
storage_profiles = SG0_000192601245
thin        = True
tiering_enabled = True
compression = True
mirrored    = False
servers     = server_2,server_3,server_4,server_5,server_6,server_7,server_8,server_9
 server = server_2 addr=c0t14l0 FA=03A FAport=0
 server = server_2 addr=c16t14l0 FA=04A FAport=0
 server = server_3 addr=c0t14l0 FA=03A FAport=0
 server = server_3 addr=c16t14l0 FA=04A FAport=0
 server = server_4 addr=c0t14l0 FA=03A FAport=0
 server = server_4 addr=c16t14l0 FA=04A FAport=0
 server = server_5 addr=c0t14l0 FA=03A FAport=0
 server = server_5 addr=c16t14l0 FA=04A FAport=0
 server = server_6 addr=c0t14l0 FA=03A FAport=0
 server = server_6 addr=c16t14l0 FA=04A FAport=0
 server = server_7 addr=c0t14l0 FA=03A FAport=0
 server = server_7 addr=c16t14l0 FA=04A FAport=0
 server = server_8 addr=c0t14l0 FA=03A FAport=0
 server = server_8 addr=c16t14l0 FA=04A FAport=0
 server = server_9 addr=c0t14l0 FA=03A FAport=0
 server = server_9 addr=c16t14l0 FA=04A FAport=0

Where:
Value            Definition
-----            ----------
id               ID of the disk (assigned automatically).
name             Name of the disk.
acl              Access control level value of the disk.
in_use           Whether the disk is used by any type of volume or file
                 system.
pool             Name of the storage pool in use.
size (MB)        Total size of disk.
type             Type of disk, contingent on the system attached;
                 Symmetrix disk types are STD, BCV, R1BCV, R2BCV, R1STD,
                 R2STD, ATA, R1ATA, R2ATA, BCVA, R1BCA, R2BCA, EFD, FTS,
                 R1FTS, R2FTS, BCVF, R1BCF, R2BCF, BCVMIXED, R1MIXED,
                 R2MIXED, R1BCVMIXED, and R2BCVMIXED. If multiple disk
                 volumes are used, the type is Mixed.
protection       The type of disk protection that has been assigned.
symm_id          ID of the Symmetrix system associated with the disk.
symm_dev         ID of the Symmetrix device associated with the disk.
volume_name      Name of the volume residing on the disk.
storage_profiles The storage profiles to which the disk belongs.
thin             Indicates whether the system uses thin provisioning.
                 Values are: True, False, Mixed.
tiering_enabled  Indicates whether a tiering policy is being used.
compressed       For VNX with a Symmetrix backend, indicates whether data
                 is compressed. Values are: True, False, Mixed (indicates
                 some of the LUNs, but not all, are compressed).
mirrored         Indicates whether the disk is mirrored.
servers          Lists the servers that have access to this disk.
addr             Path to system (SCSI address).

EXAMPLE #5
----------
To view information for disk d17 (an FTS device created using an eDisk
configured in external provisioning mode) for the system with a
Symmetrix system, type:

$ nas_disk -info id=17

id          = 17
name        = d17
acl         = 0
in_use      = True
pool        = user_pool
size (MB)   = 17261
type        = DSL
protection  = TDEV
symm_id     = 000197100127
symm_dev    = 0002A
volume_name = d17
storage_profiles = symm_dsl
thin        = True
compressed  = False
mirrored    = False
servers     = server_2,server_3
 server = server_2 addr=c0t1l9
 server = server_2 addr=c16t1l9
 server = server_2 addr=c32t1l9
 server = server_2 addr=c48t1l9
 server = server_3 addr=c0t1l9
 server = server_3 addr=c16t1l9
 server = server_3 addr=c32t1l9
 server = server_3 addr=c48t1l9

EXAMPLE #4 provides a description of command outputs.

EXAMPLE #6
----------
To rename a disk in the system with a VNX for block, type:

$ nas_disk -rename d7 disk7

id          = 7
name        = disk7
acl         = 0
in_use      = True
size (MB)   = 273709
type        = CLSTD
protection  = RAID5(4+1)
stor_id     = FCNTR074200038
stor_dev    = 0012
volume_name = disk7
storage_profiles = clar_r5_performance
virtually_provisioned = False
mirrored    = False
servers     = server_2,server_3,server_4,server_5
 server = server_2 addr=c0t1l2
 server = server_2 addr=c32t1l2
 server = server_2 addr=c16t1l2
 server = server_2 addr=c48t1l2
 server = server_3 addr=c0t1l2
 server = server_3 addr=c32t1l2
 server = server_3 addr=c16t1l2
 server = server_3 addr=c48t1l2
 server = server_4 addr=c0t1l2
 server = server_4 addr=c32t1l2
 server = server_4 addr=c16t1l2
 server = server_4 addr=c48t1l2
 server = server_5 addr=c0t1l2
 server = server_5 addr=c32t1l2
 server = server_5 addr=c16t1l2
 server = server_5 addr=c48t1l2

EXAMPLE #4 provides a description of command outputs.

EXAMPLE #7
----------
To delete a disk entry from the disk table for the system with a VNX
for block, type:

$ nas_disk -delete d24

id          = 24
name        = d24
acl         = 0
in_use      = False
size (MB)   = 456202
type        = CLATA
protection  = RAID5(6+1)
stor_id     = FCNTR074200038
stor_dev    = 0023
storage_profiles = clarata_archive
virtually_provisioned = False
mirrored    = False
servers     = server_2,server_3,server_4,server_5

EXAMPLE #4 provides a description of command outputs.

-------------------------------------------------------------------
Last Modified: Jan 11, 2013 3:17 pm
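In a script, the column layout of nas_disk -list lends itself to awk. The sketch below tallies rows by the disk type column using sample rows copied from the listing above; on a live Control Station you would pipe the real command (for example, nas_disk -list | awk ...) instead of the embedded sample.

```shell
# Tally nas_disk -list rows by the fifth column (disk type).
# The sample rows are copied from the listing above.
sample='17 n 28560 000190100530-010C STD d17 1,2,3,4,5,6,7,8
157 n 28560 000190100530-0198 BCV rootd157 1,2,3,4,5,6,7,8
187 n 11507 000190100530-051D EFD d187 1,2,3,4,5,6,7,8
204 y 11507 000190100530-052E EFD d204 1,2,3,4,5,6,7,8'

printf '%s\n' "$sample" |
  awk '{count[$5]++} END {for (t in count) print t, count[t]}' |
  sort
```

The same one-liner works for the in-use column ($2) or any other field, since the listing is strictly whitespace-delimited.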
nas_diskmark

Queries the system, and manages and lists the SCSI device
configuration.

SYNOPSIS
--------
nas_diskmark
-mark {-all|
[-monitor {y|n}]
    Displays the progress of the query and discovery operations.

[-Force {y|n}]
    Overrides the health check failures and changes the storage
    configuration.

    CAUTION: Use the -Force option only under the direction of an EMC
    Customer Service Engineer, as high availability can be lost when
    changing the storage configuration.

-list {-all|
tid/lun= 1/13 type= disk sz= 274811 val= -5 info= DGC RAID 5 03245F001D005FNI
tid/lun= 1/14 type= disk sz= 274811 val= 14 info= DGC RAID 5 032460001E0060NI
tid/lun= 1/15 type= disk sz= 274811 val= -5 info= DGC RAID 5 032461001F0061NI
server_2 : chain 1 : no drives on chain
server_2 : chain 2 : no drives on chain
server_2 : chain 3 : no drives on chain
server_2 : chain 4 : no drives on chain
server_2 : chain 5 : no drives on chain
server_2 : chain 6 : no drives on chain
server_2 : chain 7 : no drives on chain
server_2 : chain 8 : no drives on chain
server_2 : chain 9 : no drives on chain
server_2 : chain 10 : no drives on chain
server_2 : chain 11 : no drives on chain
server_2 : chain 12 : no drives on chain
server_2 : chain 13 : no drives on chain
server_2 : chain 14 : no drives on chain
server_2 : chain 15 : no drives on chain

Note: This is a partial listing due to the length of the output.

------------------------------------------------------
Last Modified: Feb 21, 2013 11:00 am
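Because each idle chain appears in the -list output on its own line, a quick grep gives the count of chains with no attached drives. This is an illustrative sketch over sample lines copied from the output above; on a live system you would pipe the nas_diskmark command itself.

```shell
# Count Data Mover chains reporting no attached drives in
# nas_diskmark -list style output (sample lines from above).
sample='tid/lun= 1/14 type= disk sz= 274811 val= 14 info= DGC RAID 5 032460001E0060NI
server_2 : chain 1 : no drives on chain
server_2 : chain 2 : no drives on chain
server_2 : chain 3 : no drives on chain'

printf '%s\n' "$sample" | grep -c 'no drives on chain'
```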
nas_emailuser Manages email notifications for serious system events. SYNOPSIS -------- nas_emailuser -info | -test | -modify [-enabled {yes|no}] [-to
-first\email@yourcompany.com,second\email@yourcompany.com [-cc
OK

EXAMPLE #4
----------
To disable email notification, type:

$ nas_emailuser -modify -enabled no

OK

-----------------------------------------
Last Modified: May 14, 2012 1:00 pm
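Since -modify -to takes a comma-separated recipient list, it can be worth sanity-checking the list before invoking the command. The pattern below is a loose illustration only, not the exact validation the CLI performs, and the addresses are placeholders.

```shell
# Rough sanity check of a comma-separated recipient list before
# passing it to nas_emailuser -modify -to. Loose pattern; placeholder
# addresses.
recipients='first_email@yourcompany.com,second_email@yourcompany.com'

valid=yes
for addr in $(printf '%s\n' "$recipients" | tr ',' ' '); do
  printf '%s\n' "$addr" | grep -Eq '^[^@ ]+@[^@ ]+\.[^@ ]+$' || valid=no
done
echo "$valid"
```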
nas_environment

[The nas_environment command is not supported by Embedded NAS.]

Reports the inlet air temperatures and input power to the user.

SYNOPSIS
--------
nas_environment
-info { | -system [-present|-average]
| -dme [enclosure_id] [-intemp [f|c]|-power] [-present]|[-average]
| -array [-present|-average]
| -shelf {
there is less than one hour's worth of data.

-array
    Displays the present or average input power information for the
    array.

-present
    Displays the current value.

-average
    Displays the average value. It requires an hour to calculate the
    correct value. N/A is displayed if there is less than one hour's
    worth of data.

-shelf
    Lets you specify a value for a selected enclosure. It displays the
    present and average inlet air temperature and input power
    information for a specified disk array enclosure. If a specific
    enclosure_id is not specified, information for all disk array
    enclosures is displayed.
Displays the average value. It requires an hour to calculate the
correct value. N/A is displayed if there is less than one hour's worth
of data.

-all
    Displays the following:
    * System input power
    * Data Mover enclosure inlet air temperatures and input power
    * Array input power
    * Disk array enclosure inlet air temperatures and input power
    * Storage processor enclosure inlet air temperatures and input power
    * Standby power supply input power

Expected Output For Embedded NAS
--------------------------------
[nasadmin@CS-0 ]$ nas_environment -info -system
Component Name          = VMAX IN-EE-NAS-SN 00019710012200013
Power Status            = Error 13690667103: Unsupported
Present (watts)         = N/A
Rolling Average (watts) = N/A

[nasadmin@CS-0 ]$ nas_environment -info -all
Component Name          = VMAX IN-EE-NAS-SN 00019710012200013
Power Status            = Error 13690667103: Unsupported
Present (watts)         = N/A
Rolling Average (watts) = N/A
Component Name          = Symmetrix VMAX200K 000197100122
Power Status            = Error 13690667103: Unsupported
Present (watts)         = N/A
Rolling Average (watts) = N/A

[nasadmin@CS-0 ]$ nas_environment -info -array -present
Component Name          = Symmetrix VMAX200K 000197100122
Power Status            = Error 13690667103: Unsupported
Present (watts)         = N/A

[nasadmin@CS-0 ]$ nas_environment -info -spe -present
Error 14764736517: No SPE found.
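A script consuming this output should treat N/A (as reported on Embedded NAS above) as missing data rather than a number. A minimal sketch over sample lines copied from the expected output; on a live system you would pipe the nas_environment command itself.

```shell
# Extract the "Present (watts)" reading from nas_environment style
# output and treat N/A as missing data (sample lines from above).
report='Component Name = Symmetrix VMAX200K 000197100122
Power Status = Error 13690667103: Unsupported
Present (watts) = N/A'

watts=$(printf '%s\n' "$report" | sed -n 's/^Present (watts) = //p')
if [ "$watts" = "N/A" ] || [ -z "$watts" ]; then
  echo "no power data"
else
  echo "present watts: $watts"
fi
```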
nas_event Provides a user interface to system-wide events. SYNOPSIS -------- nas_event -Load {-info|
severity can be specified by either the text name or ID. The output is
displayed with parameter names in the form $(paraname, typeIndicator,
fmtStr).

-action {-info|{trap|logfile|mail|callhome|exec|udprpc|tcprpc|terminate}}
    With the -info option, lists all the possible actions associated
    with events. If one of the actions trap, logfile, mail, callhome,
    exec, udprpc, tcprpc, or terminate is specified, lists the possible
    events that trigger the specified action. These events are
    categorized by component and facility:

[-component {
BaseID, Severity, and Brief_Description.

SEE ALSO
--------
Configuring Events and Notifications on VNX for File.

EXAMPLE #1
----------
After using a text editor to create an event configuration file, to
load the new configuration file into the NAS database, type:

$ nas_event -Load /nas/site/new_eventlog.cfg

EventLog : will load /nas/site/new_eventlog.cfg...done

EXAMPLE #2
----------
To verify that the configuration file was loaded, type:

$ nas_event -Load -info

Loaded config. files:
1: /nas/sys/nas_eventlog.cfg
2: /nas/http/webui/etc/web_client_eventlog.cfg
3: /nas/site/new_eventlog.cfg

EXAMPLE #3
----------
To list actions, type:

$ nas_event -list -action -info

action
terminate
trap
exec
mail
callhome
logfile

EXAMPLE #4
----------
To list the events that trigger the mail action, type:

$ nas_event -list -action mail

CS_PLATFORM(6)
|--> EventLog(130)
BaseID Severity     Brief_Description
50     EMERGENCY(0) ${text,8,%s}
51     ALERT(1)     ${text,8,%s}
52     CRITICAL(2)  ${text,8,%s}

EXAMPLE #5
----------
To list the components, type:

$ nas_event -list -component -info

Id Component
1  DART
2  CS_CORE
5  XML_API
6  CS_PLATFORM

EXAMPLE #6
----------
To list the facilities under the component DART, type:

$ nas_event -list -component DART -info

DART(1)
|->Id Facility
24    ADMIN
26    CAM
27    CFS
36    DRIVERS
40    FSTOOLS
43    IP
45    KERNEL
46    LIP
51    NDMP
52    NFS
54    SECURITY
56    SMB
58    STORAGE
64    UFS
68    LOCK
70    SVFS
72    XLT
73    NETLIB
75    MGFS
77    VRPL
78    LDAP
81    VC
83    RCPD
84    VMCAST
86    CHAMII
93    USRMAP
101   ACLUPD
102   FCP
108   REP
111   DPSVC
115   SECMAP
117   WINS
118   DNS
122   DBMS
144   PERFSTATS
146   CEPP
148   DEDUPE

EXAMPLE #7
----------
To list the events generated by DART in the facility with the ID 146,
type:

$ nas_event -list -component DART -facility 146

DART(1)
|--> CEPP(146)
BaseID Severity  Brief_Description
1      NOTICE(5) CEPP server ${ipaddr,8,%s} of pool ${pool,8,%s} is
                 ${status,8,%s}. Vendor ${vendor,8,%s}, ntStatus
                 0x${ntstatus,2,%x}.
2      ERROR(3)  Error on CEPP server ${ipaddr,8,%s} of pool
                 ${pool,8,%s}: ${status,8,%s}. Vendor ${vendor,8,%s},
                 ntStatus 0x${ntstatus,2,%x}.
3      NOTICE(5) The CEPP facility is started.
4      NOTICE(5) The CEPP facility is stopped.

EXAMPLE #8
----------
To list events with severity 4 generated by component CS_CORE and
facility DBMS, and to display the MessageID in the output, type:

$ nas_event -list -severity 4 -component CS_CORE -facility DBMS -id
CS_CORE(2)
|--> DBMS(122)
MessageID   BaseID Brief_Description
86444212226 2      Db: Compact${compact_option,8,%s}: ${db_name,8,%s}:
                   Failed: ${db_status,8,%s}.
86444212227 3      Db Env: ${db_env,8,%s}: Log Remove: Failed:
                   ${db_status,8,%s}.

EXAMPLE #9
----------
To list events filtered by the keyword freeblocks, type:

$ nas_event -list -keyword freeblocks

DART(1)
|--> DBMS(122)
BaseID Severity    Brief_Description
2      CRITICAL(2) Only ${freeblocks,3,%llu} free blocks in the root
                   file system (fsid ${fsid,2,%u}) of the VDM
                   ${vdm,8,%s}.
3      ALERT(1)    The root file system (fsid ${fsid,2,%u}) of the VDM
                   ${vdm,8,%s} is full. There are only
                   ${freeblocks,3,%llu} free blocks.

EXAMPLE #10
-----------
To list events with the keyword data generated in DART with the
severity level 6, type:

$ nas_event -list -keyword data -component DART -severity 6

DART(1)
|--> USRMAP(93)
BaseID Severity Brief_Description
1      INFO(6)  The Usermapper database has been created.
4      INFO(6)  The Usermapper database has been destroyed.
8      INFO(6)  The migration of the Usermapper database to the VNX
                version 5.6 format has started.
9      INFO(6)  The Usermapper database has been successfully migrated.

DART(1)
|--> SECMAP(115)
BaseID Severity Brief_Description
1      INFO(6)  The migration of the secmap database to the VNX version
                5.6 format has started.
2      INFO(6)  The secmap database has been successfully migrated.

EXAMPLE #11
-----------
To unload the event configuration file, type:

$ nas_event -Unload /nas/site/new_eventlog.cfg

EventLog : will unload /nas/site/new_eventlog.cfg... done

EXAMPLE #12
-----------
To receive email notifications that are sent to multiple recipients,
add the following line to your /nas/sys/eventlog.cfg file:

disposition severity=0-3, mail "nasadmin@nasdocs.emc.com, helpdesk@nasdocs.emc.com"

EXAMPLE #13
-----------
To list the events that trigger a particular trap action, type:

$ nas_event -l -a trap | more

CS_PLATFORM(6)
|--> BoxMonitor(131)
BaseID Severity    Brief_Description
1      CRITICAL(2) EPP failed to initialize.
3      CRITICAL(2) Failed to create ${threadname,8,%s} thread.
4      CRITICAL(2) SIB Read failure: ${string,8,%s}
..
CS_PLATFORM(6)
|--> SYR(143)
BaseID Severity Brief_Description
5      INFO(6)  The SYR file ${src_file_path,8,%s} with
                ${dest_extension,8,%s} extension is attached.

-------------------------------------------------------------
Last modified: May 14, 2012 1:35 pm
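When scripting against these listings, the Severity column can be filtered with awk. The sketch below pulls the BaseIDs of CRITICAL events from sample lines copied from the output above; on a live system you would pipe nas_event -l -a trap itself.

```shell
# Print the BaseIDs of CRITICAL events in nas_event -list style
# output (sample lines from above).
listing='BaseID Severity Brief_Description
1 CRITICAL(2) EPP failed to initialize.
3 CRITICAL(2) Failed to create thread.
5 INFO(6) The SYR file is attached.'

printf '%s\n' "$listing" | awk '$2 ~ /^CRITICAL/ {print $1}'
```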
nas_fs Manages local file systems for the VNX. SYNOPSIS -------- nas_fs -list [-all] | -delete
Note: The ID is an integer and is assigned automatically, but not
always sequentially, depending on ID availability. The name of a file
system might be truncated if it is more than 19 characters. To display
the full file system name, use the -info option with a file system ID.

The file system types are:
1=uxfs (default)
5=rawfs (unformatted file system)
6=mirrorfs (mirrored file system)
7=ckpt (checkpoint)
8=mgfs (migration file system)
100=group file system
102=nmfs (nested mount file system)

Note: The file system types uxfs, mgfs, nmfs, and rawfs are created by
using nas_fs. Other file system types are created either automatically
or with their specific commands.

-delete
-acl
the default is not to slice the volumes, which is overridden with
slice=y. For symm_efd, the default is slice=y, because TimeFinder/FS is
not supported with Flash (EFD) disk types.

When clar_r1, clar_r5_performance, clar_r5_economy, clar_r6,
clarata_r3, clarata_r6, clarata_r10, clarata_archive, cm_r1,
cm_r5_performance, cm_r5_economy, cm_r6, cmata_r3, cmata_archive,
cmata_r6, cmata_r10, clarsas_archive, clarsas_r6, clarsas_r10,
clarefd_r5, clarefd_r10, cmsas_archive, cmsas_r6, cmsas_r10, and
cmefd_r5 pools are specified, the default for standard AVM pools is to
slice the volumes (slice=y), which is overridden by using slice=n. The
default for mapped pools is not to slice the volumes (slice=n). Use
nas_pool to change the default slice option.

-modify
[-min_retention {
Creates a file system on the specified volume and assigns an optional
name to the file system. If a name is not specified, one is assigned
automatically. A file system name cannot:
* Begin with a dash (-)
* Be comprised entirely of integers
* Be a single integer
* Contain the word root or contain a colon (:)

The -type option assigns the file system type to be uxfs (default),
mgfs, or rawfs.

[samesize=
be locked and protected from deletion. Type an integer and specify Y
for years, M for months, D for days, or infinite. The default value is
infinite, which means that the files can never be deleted.

log_type={common|split}
    Specifies the type of log file associated with the file system.
    Log files can be either shared (common) or uniquely assigned to
    individual file systems (split). For the SRDF Async or STAR
    feature, the split option is strongly recommended to avoid fsck
    before mounting a BCV file system on SiteB or SiteC.

[fast_clone_level={1|2}]
    fast_clone_level=2 enables the ability to create a fast clone of a
    fast clone (also called a second-level fast clone) on the file
    system. fast_clone_level=1 enables the ability to create a fast
    clone. File-level retention and fast clone creation cannot be
    enabled together on a file system. Enabling split log implies
    fast_clone_level=2, if file-level retention is not enabled on the
    file system. Replication sessions cannot be created between two
    file systems with different fast_clone_level capabilities.

    Note: fast_clone_level=1 indicates that a fast clone can be
    created on the file system, and it is the default option if
    nothing is specified.

[-option
[-name
which means that the files can never be deleted. [storage=
relevant when using TimeFinder/FS. When symm_std, symm_std_rdf_src,
symm_ata, symm_ata_rdf_src, symm_ata_rdf_tgt, symm_std_rdf_tgt,
symm_fts, symm_fts_rdf_tgt, and symm_fts_rdf_src pools are specified,
the default is not to slice the volumes, which is overridden with
slice=y. For symm_efd, the default is slice=y, because TimeFinder/FS
is not supported with Flash disk types.

When clar_r1, clar_r5_performance, clar_r5_economy, clar_r6,
clarata_r3, clarata_r6, clarata_r10, clarata_archive, cm_r1,
cm_r5_performance, cm_r5_economy, cm_r6, cmata_r3, cmata_archive,
cmata_r6, cmata_r10, clarsas_archive, clarsas_r6, clarsas_r10,
clarefd_r5, clarefd_r10, cmsas_archive, cmsas_r6, cmsas_r10, and
cmefd_r5 pools are specified, the default for standard AVM pools is to
slice the volumes (slice=y), which is overridden by using slice=n. The
default for mapped pools is not to slice the volumes (slice=n). Use
nas_pool to change the default slice option.

[-name
Mirrored_capacity, and Mirrored_extreme_performance.

EXAMPLE #1
----------
To create a file system named ufs1 on metavolume mtv1, type:

$ nas_fs -name ufs1 -create mtv1

id        = 37
name      = ufs1
acl       = 0
in_use    = False
type      = uxfs
worm      = enterprise with no protected files
worm_clock = Clock not initialized
worm Max Retention Date = NA
worm Default Retention Period = infinite
worm Minimum Retention Period = 1 Day
worm Maximum Retention Period = infinite
FLR Auto_lock = off
FLR Policy Interval = 3600 seconds
FLR Auto_delete = off
FLR Epoch Year = 2003
volume    = mtv1
pool      =
rw_servers =
ro_servers =
rw_vdms   =
ro_vdms   =
auto_ext  = no,thin=no
deduplication = off
stor_devs = APM00042000818-0012,APM00042000818-0014
disks     = d7,d9

Where:
Value        Definition
-----        ----------
id           Automatically assigned ID of a file system.
name         Name assigned to a file system.
acl          Access control value assigned to the file system.
in_use       If a file system is registered into the mount table of a
             Data Mover.
type         Type of file system. See -list for a description of the
             types.
volume       Volume on which a file system resides.
worm         Write Once Read Many (WORM) state of the file system. It
             states whether file-level retention is disabled or set to
             either compliance or enterprise.
pool         Storage pool for the file system.
rw_servers   Servers with read/write access to a file system.
ro_servers   Servers with read-only access to a file system.
rw_vdms      VDM servers with read/write access to a file system.
ro_vdms      VDM servers with read-only access to a file system.
worm_clock   Software clock maintained by the file system. The clock
             functions only when the file system is mounted read/write.
worm Max     Time when the protected files expire. The file system can
Retention    be deleted only after this date. The special values
Date         returned are:
             * 3 - The file system is set to file-level retention
               enterprise with protected files.
             * 2 - The file system is scanning for the max_retention
               period.
             * 1 - The default value (no protected files created).
             * 0 - Infinite retention period (if the server is up and
               running).
worm Default Specifies a default retention period that files on an
Retention    FLR-enabled file system will be locked and protected from
Period       deletion. If you do not set either a minimum retention
             period or a maximum retention period, this default value
             is used when file-level retention is enabled.
worm Minimum Specifies the minimum retention period that files on an
Retention    FLR-enabled file system will be locked and protected from
Period       deletion.
worm Maximum Specifies the maximum retention period that files on an
Retention    FLR-enabled file system will be locked and protected from
Period       deletion.
FLR Auto_Lock Specifies whether automatic file locking for all files
             in an FLR-enabled file system is on or off.
FLR Policy   Specifies an interval for how long to wait after files
Interval     are modified before the files are automatically locked
             and protected from deletion.
FLR Auto_delete Specifies whether locked files are automatically
             deleted once the retention period has expired.
FLR Epoch Year Specifies the base year used for calculating the
             retention date of a file beyond 2038. When a file is
             locked with its atime set to a value greater than the FLR
             Epoch Year value, the file's retention date is set to the
             file's atime value. When a file is locked with its atime
             set to a value less than the FLR Epoch Year value, the
             file's retention date is set to 2038 + (YEAR(atime) -
             1970).
volume       Volume on which a file system resides.
pool         Storage pool for the file system.
rw_servers   Servers with read/write access to a file system.
ro_servers   Servers with read-only access to a file system.
rw_vdms      VDM servers with read/write access to a file system.
ro_vdms      VDM servers with read-only access to a file system.
auto_ext     Indicates whether auto-extension and thin provisioning
             are enabled.
deduplication Deduplication state of the file system. The file data is
             transferred to the storage, which performs the
             deduplication and compression on the data. The states
             are:
             * On - Deduplication on the file system is enabled.
             * Suspended - Deduplication on the file system is
               suspended. Deduplication does not perform any new space
               reduction, but the existing files that were reduced in
               space remain the same.
             * Off - Deduplication on the file system is disabled.
               Deduplication does not perform any new space reduction
               and the data is now reduplicated.
stor_devs    Storage system devices associated with a file system.
disks        Disks on which the metavolume resides.

Note: The deduplication state is unavailable when the file system is
unmounted.

EXAMPLE #2
----------
To display information about a file system using the file system ID
14, using the clar_mapped_pool VNX mapped pool, type:

$ nas_fs -info id=14

id        = 14
name      = ufs2_flre
acl       = 0
in_use    = True
type      = uxfs
worm      = enterprise with no protected files
worm_clock = Fri Jul 29 07:56:42 EDT 2011
worm Max Retention Date= No protected files created
worm Default Retention Period= 10 Years
worm Minimum Retention Period= 30 Days
worm Maximum Retention Period= 10 Years
FLR Auto_lock = off
FLR Policy Interval= 3600 seconds
FLR Auto_delete = off
FLR Epoch Year = 2003
volume    = v117
pool      = clar_mapped_pool
member_of = root_avm_fs_group_50
rw_servers = server_2
ro_servers =
rw_vdms   =
ro_vdms   =
auto_ext  = no,thin=no
deduplication = Off
thin_storage = True
tiering_policy = Auto-tier
compressed = False
mirrored  = False
stor_devs = BB005056830430-0019,BB005056830430-0016,BB005056830430-0015,BB005056830430-0010
disks     = d16,d13,d12,d7
 disk=d16 stor_dev=BB005056830430-0019 addr=c0t1l9 server=server_2
 disk=d16 stor_dev=BB005056830430-0019 addr=c16t1l9 server=server_2
 disk=d13 stor_dev=BB005056830430-0016 addr=c0t1l6 server=server_2
 disk=d13 stor_dev=BB005056830430-0016 addr=c16t1l6 server=server_2
 disk=d12 stor_dev=BB005056830430-0015 addr=c0t1l5 server=server_2
 disk=d12 stor_dev=BB005056830430-0015 addr=c16t1l5 server=server_2
 disk=d7 stor_dev=BB005056830430-0010 addr=c0t1l0 server=server_2
 disk=d7 stor_dev=BB005056830430-0010 addr=c16t1l0 server=server_2

Where:
Value          Definition
-----          ----------
thin_storage   Indicates whether the VNX for Block storage system uses
               thin provisioning. Values are: True, False, Mixed.
tiering_policy Indicates the tiering policy in effect. If the initial
               tier and the tiering policy are the same, the values
               are: Auto-Tier, Highest Available Tier, Lowest
               Available Tier. If the initial tier and the tiering
               policy are not the same, the values are: Auto-Tier/No
               Data Movement, Highest Available Tier/No Data Movement,
               Lowest Available Tier/No Data Movement.
compressed     Indicates whether data is compressed. Values are: True,
               False, Mixed (indicates some of the LUNs, but not all,
               are compressed).
mirrored       Indicates whether the disk is mirrored.
EXAMPLE #3
----------
To display a list of file systems, type:

$ nas_fs -list

id   inuse type acl volume name                server
1    n     1    0   20     root_fs_1
2    y     1    0   50     root_fs_common      1
3    n     5    0   83     root_fs_ufslog
5    n     5    0   103    root_fs_d3
6    n     5    0   104    root_fs_d4
7    n     5    0   105    root_fs_d5
8    n     5    0   106    root_fs_d6
9    y     1    0   22     root_fs_2           1
10   n     5    0   108    root_panic_reserve
11   y     1    0   112    ufs1                1
13   y     1    0   115    ufs1_flr            1
14   y     1    0   117    ufs2_flre           1

EXAMPLE #4
----------
To list all the file systems including internal checkpoints, type:

$ nas_fs -list -all

id   inuse type acl volume name                server
1    n     1    0   24     root_fs_1
2    y     1    0   26     root_fs_2           1
3    y     1    0   28     root_fs_3           2
4    n     1    0   30     root_fs_4
5    n     1    0   32     root_fs_5
6    n     1    0   34     root_fs_6
7    n     1    0   36     root_fs_7
8    n     1    0   38     root_fs_8
9    n     1    0   40     root_fs_9
10   n     1    0   42     root_fs_10
11   n     1    0   44     root_fs_11
12   n     1    0   46     root_fs_12
13   n     1    0   48     root_fs_13
14   n     1    0   50     root_fs_14
15   n     1    0   52     root_fs_15
16   y     1    0   54     root_fs_common      2,1
17   n     5    0   87     root_fs_ufslog
18   n     5    0   90     root_panic_reserve
212  y     1    0   315    v2src1              1
213  y     101  0   0      root_avm_fs_group_3
214  n     1    0   318    v2dst1
230  y     1    0   346    v2srclun1           1
231  y     1    0   349    v2dstlun1           2
342  y     1    0   560    root_fs_vdm_srcvdm1 1
343  y     1    0   563    root_fs_vdm_srcvdm2 1
986  n     11   0   0      vpfs986
987  y     7    0   1722   gstest              1
988  y     1    0   1725   src1                1
989  y     5    0   1728   dst1                1
1343 n     11   0   0      vpfs1343
1344 y     7    0   2351   root_rep_ckpt_342_2 1
1345 y     7    0   2351   root_rep_ckpt_342_2 1
1346 y     1    0   2354   root_fs_vdm_srcvdm1 1
1347 n     11   0   0      vpfs1347
1348 y     7    0   2358   root_rep_ckpt_1346_ 1
1349 y     7    0   2358   root_rep_ckpt_1346_ 1
1350 y     1    0   2367   fs1                 v9
1354 n     1    0   2374   fs1_replica1
1358 n     11   0   0      vpfs1358
1359 y     7    0   2383   root_rep_ckpt_1350_ v9
1360 y     7    0   2383   root_rep_ckpt_1350_ v9
1361 n     1    0   2385   fs1_replica2
1362 n     11   0   0      vpfs1362
1363 n     7    0   2388   root_rep_ckpt_1361_
1364 n     7    0   2388   root_rep_ckpt_1361_
1365 y     1    0   2392   fs1365              1
1366 y     7    0   2383   root_rep_ckpt_1350_ v9
1367 y     7    0   2383   root_rep_ckpt_1350_ v9
1368 n     11   0   0      vpfs1368
1369 n     7    0   2395   root_rep_ckpt_1354_
1370 n     7    0   2395   root_rep_ckpt_1354_
1371 y     1    0   2399   root_fs_vdm_v1      1
1372 y     1    0   2401   f1                  v40
1376 y     1    0   2406   root_fs_vdm_v1_repl 2
1380 n     11   0   0      vpfs1380
1381 y     7    0   2414   root_rep_ckpt_1372_ v40
1382 y     7    0   2414   root_rep_ckpt_1372_ v40
1383 y     1    0   2416   f1_replica1         v41
1384 n     11   0   0      vpfs1384
1385 y     7    0   2419   root_rep_ckpt_1383_ v41
1386 y     7    0   2419   root_rep_ckpt_1383_ v41
1387 y     1    0   2423   cworm               1
1388 n     1    0   2425   cworm1
1389 y     1    0   2427   fs2                 2
1390 y     1    0   2429   fs3                 2
1391 n     11   0   0      vpfs1391
1392 y     7    0   2432   root_rep_ckpt_1389_ 2
1393 y     7    0   2432   root_rep_ckpt_1389_ 2
1394 n     11   0   0      vpfs1394
1395 y     7    0   2435   root_rep_ckpt_1390_ 2
1396 y     7    0   2435   root_rep_ckpt_1390_ 2
1397 y     7    0   2432   fs2_ckpt1           2
1398 y     1    0   2439   fs4                 2
1399 y     1    0   2441   fs5                 2
1400 n     11   0   0      vpfs1400
1401 y     7    0   2444   root_rep_ckpt_1398_ 2
1402 y     7    0   2444   root_rep_ckpt_1398_ 2
1403 n     11   0   0      vpfs1403
1404 y     7    0   2447   root_rep_ckpt_1399_ 2
1405 y     7    0   2447   root_rep_ckpt_1399_ 2
1406 y     7    0   2444   fs4_ckpt1           2

Note: NDMP and Replicator internal checkpoints can be identified by
specific prefixes in the filename. Using VNX SnapSure provides more
information about internal checkpoint naming formats.

EXAMPLE #5
----------
To create a uxfs file system named ufs20 on storage system
BB005056830430, with a size of 1 GB, using the clar_r5_performance
pool and allowing the file system to share disk volumes with other
file systems, type:

$ nas_fs -name ufs20 -type uxfs -create size=1G pool=clar_r5_performance storage=BB005056830430 -option slice=y

id        = 15
name      = ufs20
acl       = 0
in_use    = False
type      = uxfs
worm      = off
volume    = v119
pool      = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers=
ro_servers=
rw_vdms   =
ro_vdms   =
auto_ext  = no,thin=no
deduplication = unavailable
stor_devs = BB005056830430-0018,BB005056830430-0017,BB005056830430-0014,BB005056830430-0011
disks     = d15,d14,d11,d8

Where:
Value     Definition
-----     ----------
member_of File system group to which the file system belongs.

EXAMPLE #1 provides a description of command output.

EXAMPLE #6
----------
To create a rawfs file system named ufs3 with the same size as the
file system ufs1, using the clar_r5_performance pool and allowing the
file system to share disk volumes with other file systems, type:

$ nas_fs -name ufs3 -type rawfs -create samesize=ufs1 pool=clar_r5_performance storage=APM00042000818 -option slice=y

id        = 39
name      = ufs3
acl       = 0
in_use    = False
type      = rawfs
worm      = off
volume    = v173
pool      = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers=
ro_servers=
rw_vdms   =
ro_vdms   =
auto_ext  = no,thin=no
deduplication = unavailable
stor_devs = APM00042000818-001F,APM00042000818-001D,APM00042000818-0019,APM00042000818-0016
disks = d20,d18,d14,d11

EXAMPLE #1 and EXAMPLE #3 provide a description of command outputs.

EXAMPLE #7
----------
To create a uxfs file system named ufs4, with a size of 100 GB, using the clar_r5_performance pool, with file-level retention set to enterprise, 4096 bytes per inode, and server_3 for file system building, type:

$ nas_fs -name ufs4 -create size=100G pool=clar_r5_performance worm=enterprise -option nbpi=4096,mover=server_3

id = 16
name = ufs4
acl = 0
in_use = False
type = uxfs
worm = enterprise with no protected files
worm_clock= Clock not initialized
worm Max Retention Date= NA
worm Default Retention Period= infinite
worm Minimum Retention Period= 1 Day
worm Maximum Retention Period= infinite
FLR Auto_lock= off
FLR Policy Interval= 3600 seconds
FLR Auto_delete= off
FLR Epoch Year= 2003
volume = v121
pool = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
auto_ext = no,thin=no
deduplication = unavailable
stor_devs = BB005056830430-0019,BB005056830430-0016,BB005056830430-0015,BB005056830430-0010
disks = d16,d13,d12,d7

To ensure retention of protected files, file-level retention can instead be set to compliance by typing:

$ nas_fs -name ufs4 -create size=100G pool=clar_r5_performance worm=compliance -option nbpi=4096,mover=server_3

id = 17
name = ufs4
acl = 0
in_use = False
type = uxfs
worm = compliance with no protected files
worm_clock= Clock not initialized
worm Max Retention Date= NA
worm Default Retention Period= infinite
worm Minimum Retention Period= 1 Day
worm Maximum Retention Period= infinite
FLR Auto_lock= off
FLR Policy Interval= 3600 seconds
FLR Auto_delete= off
FLR Epoch Year= 2003
volume = v123
pool = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers=
ro_servers=
rw_vdms = ro_vdms = auto_ext = no,thin=no deduplication = unavailable stor_devs = BB005056830430-0018,BB005056830430-0017,BB005056830430-0014,BB005056830430-0011 disks = d15,d14,d11,d8 EXAMPLE #1 provides a description of command outputs. EXAMPLE #8 ---------- To create a file system named ufs30, with a size of 1 GB, by using the clar_r5_performance pool, with file-level retention set to enterprise, a minimum retention period of 30 days, and a maximum retention period of 10 years, type: $ nas_fs -name ufs30 -create size=1G pool=clar_r5_performance worm=enterprise -min_retention 30D -max_retention 10Y id = 18 name = ufs30 acl = 0 in_use = False type = uxfs worm = enterprise with no protected files worm_clock= Clock not initialized worm Max Retention Date= NA worm Default Retention Period= 10 Years worm Minimum Retention Period= 30 Days worm Maximum Retention Period= 10 Years FLR Auto_lock= off FLR Policy Interval= 3600 seconds FLR Auto_delete= off FLR Epoch Year= 2003 volume = v125 pool = clar_r5_performance member_of = root_avm_fs_group_3 rw_servers= ro_servers= rw_vdms = ro_vdms = auto_ext = no,thin=no deduplication = unavailable stor_devs = BB005056830430-0019,BB005056830430-0016,BB005056830430-0015,BB005056830430-0010 disks = d16,d13,d12,d7 EXAMPLE #1 provides a description of command outputs. EXAMPLE #9 ---------- To display information about file system ufs4, type: $ nas_fs -info ufs4 id = 16 name = ufs4 acl = 0 in_use = False type = uxfs worm = enterprise with no protected files worm_clock= Clock not initialized worm Max Retention Date= NA worm Default Retention Period= infinite worm Minimum Retention Period= 1 Day worm Maximum Retention Period= infinite FLR Auto_lock= off FLR Policy Interval= 3600 seconds FLR Auto_delete= off FLR Epoch Year= 2003
volume = v121
pool = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
auto_ext = no,thin=no
deduplication = unavailable
stor_devs = BB005056830430-0019,BB005056830430-0016,BB005056830430-0015,BB005056830430-0010
disks = d16,d13,d12,d7

EXAMPLE #1 provides a description of command outputs.

EXAMPLE #10
-----------
To create a uxfs file system named ufs40, with a size of 10 GB, by using the clar_r5_performance pool, and requesting that ID 8000 be assigned to it, type:

$ nas_fs -name ufs40 -type uxfs -create size=10G pool=clar_r5_performance -option slice=y,id=8000

id = 8000
name = ufs40
acl = 0
in_use = False
type = uxfs
worm = off
volume = v127
pool = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
auto_ext = no,thin=no
deduplication = unavailable
stor_devs = BB005056830430-0018,BB005056830430-0017,BB005056830430-0014,BB005056830430-0011
disks = d15,d14,d11,d8

EXAMPLE #11
-----------
To create a uxfs file system named ufs41, with a size of 10 GB, by using the clar_r5_performance pool, and again requesting ID 8000, type:

$ nas_fs -name ufs41 -type uxfs -create size=10G pool=clar_r5_performance -option slice=y,id=8000

id = 8001
name = ufs41
acl = 0
in_use = False
type = uxfs
worm = off
volume = v129
pool = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
auto_ext = no,thin=no
deduplication = unavailable
stor_devs = BB005056830430-0019,BB005056830430-0016,BB005056830430-0015,BB005056830430-0010
disks = d16,d13,d12,d7
Warning 17716815881: unavailable id : 8000.
Note: The warning output is displayed if the desired ID is not available. Because id=8000 was used in EXAMPLE #10, the system set the ID to 8001 instead.

EXAMPLE #12
-----------
To view the size of ufs1, type:

$ nas_fs -size ufs1

total = 945 avail = 945 used = 1 ( 0% ) (sizes in MB) ( blockcount = 2097152 )
volume: total = 1024 avail = 944 used = 80 ( 8% ) (sizes in MB) ( blockcount = 2097152 )

When a file system is mounted, size information for both the file system and its volume, as well as the number of blocks used, is displayed.

Where:
Value       Definition
-----       ----------
total       Total size of the file system.
blockcount  Total number of blocks used.

EXAMPLE #13
-----------
To rename a file system from ufs1 to ufs5, type:

$ nas_fs -rename ufs1 ufs5

id = 11
name = ufs5
acl = 0
in_use = True
type = uxfs
worm = off
volume = v112
pool = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers= server_2
ro_servers=
rw_vdms =
ro_vdms =
auto_ext = no,thin=no
deduplication = Off
stor_devs = BB005056830430-0019,BB005056830430-0016,BB005056830430-0015,BB005056830430-0010
disks = d16,d13,d12,d7
disk=d16 stor_dev=BB005056830430-0019 addr=c0t1l9 server=server_2
disk=d16 stor_dev=BB005056830430-0019 addr=c16t1l9 server=server_2
disk=d13 stor_dev=BB005056830430-0016 addr=c0t1l6 server=server_2
disk=d13 stor_dev=BB005056830430-0016 addr=c16t1l6 server=server_2
disk=d12 stor_dev=BB005056830430-0015 addr=c0t1l5 server=server_2
disk=d12 stor_dev=BB005056830430-0015 addr=c16t1l5 server=server_2
disk=d7 stor_dev=BB005056830430-0010 addr=c0t1l0 server=server_2
disk=d7 stor_dev=BB005056830430-0010 addr=c16t1l0 server=server_2

EXAMPLE #1 and EXAMPLE #3 provide a description of command outputs.

EXAMPLE #14
-----------
To extend the file system, ufs1, with the volume, emtv2b, type:

$ nas_fs -xtend ufs1 emtv2b

id = 38
name = ufs1
acl = 0
in_use = True
type = uxfs
worm = off
volume = v171
pool = clar_r5_performance
member_of = root_avm_fs_group_3 rw_servers= server_2 ro_servers= rw_vdms = ro_vdms = auto_ext = no,thin=no deduplication = off stor_devs = APM00042000818-001F,APM00042000818-001D,APM00042000818-0019,APM00042 000818-0016,APM00042000818-001C disks = d20,d18,d14,d11,d17 disk=d20 stor_dev=APM00042000818-001F addr=c0t1l15 server=server_2 disk=d20 stor_dev=APM00042000818-001F addr=c32t1l15 server=server_2 disk=d18 stor_dev=APM00042000818-001D addr=c0t1l13 server=server_2 disk=d18 stor_dev=APM00042000818-001D addr=c32t1l13 server=server_2 disk=d14 stor_dev=APM00042000818-0019 addr=c0t1l9 server=server_2 disk=d14 stor_dev=APM00042000818-0019 addr=c32t1l9 server=server_2 disk=d11 stor_dev=APM00042000818-0016 addr=c0t1l6 server=server_2 disk=d11 stor_dev=APM00042000818-0016 addr=c32t1l6 server=server_2 disk=d17 stor_dev=APM00042000818-001C addr=c0t1l12 server=server_2 disk=d17 stor_dev=APM00042000818-001C addr=c32t1l12 server=server_2 EXAMPLE #1 provides a description of command outputs. EXAMPLE # 15 ------------ To extend the file system named ufs5, with the specified size of 1 GB, by using clar_r5_performance pool, type: $ nas_fs -xtend ufs5 size=1G pool=clar_r5_performance id = 11 name = ufs5 acl = 0 in_use = True type = uxfs worm = off volume = v112 pool = clar_r5_performance member_of = root_avm_fs_group_3 rw_servers= server_2 ro_servers= rw_vdms = ro_vdms = auto_ext = no,thin=no deduplication = Off stor_devs = BB005056830430-0019,BB005056830430-0016,BB005056830430-0015,BB005056830430-0010 disks = d16,d13,d12,d7 disk=d16 stor_dev=BB005056830430-0019 addr=c0t1l9 server=server_2 disk=d16 stor_dev=BB005056830430-0019 addr=c16t1l9 server=server_2 disk=d13 stor_dev=BB005056830430-0016 addr=c0t1l6 server=server_2 disk=d13 stor_dev=BB005056830430-0016 addr=c16t1l6 server=server_2 disk=d12 stor_dev=BB005056830430-0015 addr=c0t1l5 server=server_2 disk=d12 stor_dev=BB005056830430-0015 addr=c16t1l5 server=server_2 disk=d7 stor_dev=BB005056830430-0010 addr=c0t1l0 server=server_2 
disk=d7 stor_dev=BB005056830430-0010 addr=c16t1l0 server=server_2

EXAMPLE #1 provides a description of command outputs.

EXAMPLE #16
-----------
To set the access control level to 1432 for the file system ufs5, type:

$ nas_fs -acl 1432 ufs5

id = 11
name = ufs5
acl = 1432, owner=nasadmin, ID=201
in_use = True
type = uxfs
worm = off
volume = v112
pool = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers= server_2
ro_servers=
rw_vdms =
ro_vdms =
auto_ext = no,thin=no
deduplication = Off
stor_devs = BB005056830430-0019,BB005056830430-0016,BB005056830430-0015,BB005056830430-0010
disks = d16,d13,d12,d7
disk=d16 stor_dev=BB005056830430-0019 addr=c0t1l9 server=server_2
disk=d16 stor_dev=BB005056830430-0019 addr=c16t1l9 server=server_2
disk=d13 stor_dev=BB005056830430-0016 addr=c0t1l6 server=server_2
disk=d13 stor_dev=BB005056830430-0016 addr=c16t1l6 server=server_2
disk=d12 stor_dev=BB005056830430-0015 addr=c0t1l5 server=server_2
disk=d12 stor_dev=BB005056830430-0015 addr=c16t1l5 server=server_2
disk=d7 stor_dev=BB005056830430-0010 addr=c0t1l0 server=server_2
disk=d7 stor_dev=BB005056830430-0010 addr=c16t1l0 server=server_2

Note: The value 1432 specifies nasadmin as the owner and gives users with an access level of at least observer read access only, users with an access level of at least operator read/write access, and users with an access level of at least admin read/write/delete access.

EXAMPLE #1 provides a description of command outputs.
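The acl value shown above can be extracted from `nas_fs` output mechanically, for example when auditing access control levels across many file systems. A minimal sketch; the `fs_acl` helper and the embedded sample are illustrative only, and in practice you would pipe real `nas_fs -info` output into the function:

```shell
# Hypothetical helper: print the numeric access control level from
# nas_fs-style output. Real usage would be: nas_fs -info ufs5 | fs_acl
fs_acl() {
    # split on spaces, '=' and ',' so "acl = 1432, owner=..." yields 1432
    awk -F'[ =,]+' '$1 == "acl" { print $2; exit }'
}

# Sample text taken from the example output above.
sample='id   = 11
name = ufs5
acl  = 1432, owner=nasadmin, ID=201
in_use = True'

printf '%s\n' "$sample" | fs_acl
```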
EXAMPLE #17 ----------- To set the maximum retention period for file system ufs2_flre to 11 years, type: $ nas_fs -modify ufs2_flre -worm -max_retention 11Y id = 14 name = ufs2_flre acl = 0 in_use = True type = uxfs worm = enterprise with no protected files worm_clock= Fri Jul 29 11:14:27 EDT 2011 worm Max Retention Date= No protected files created worm Default Retention Period= 10 Years worm Minimum Retention Period= 30 Days worm Maximum Retention Period= 11 Years FLR Auto_lock= off FLR Policy Interval= 3600 seconds FLR Auto_delete= off FLR Epoch Year= 2003 volume = v117 pool = clar_r5_performance member_of = root_avm_fs_group_3 rw_servers= server_2 ro_servers= rw_vdms = ro_vdms = auto_ext = no,thin=no deduplication = Off stor_devs = BB005056830430-0019,BB005056830430-0016,BB005056830430-0015,BB005056830430-0010 disks = d16,d13,d12,d7 disk=d16 stor_dev=BB005056830430-0019 addr=c0t1l9 server=server_2 disk=d16 stor_dev=BB005056830430-0019 addr=c16t1l9 server=server_2 disk=d13 stor_dev=BB005056830430-0016 addr=c0t1l6 server=server_2 disk=d13 stor_dev=BB005056830430-0016 addr=c16t1l6 server=server_2 disk=d12 stor_dev=BB005056830430-0015 addr=c0t1l5 server=server_2 disk=d12 stor_dev=BB005056830430-0015 addr=c16t1l5 server=server_2 disk=d7 stor_dev=BB005056830430-0010 addr=c0t1l0 server=server_2
disk=d7 stor_dev=BB005056830430-0010 addr=c16t1l0 server=server_2

EXAMPLE #1 provides a description of command outputs.
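Retention arguments such as 30D or 11Y can be sanity-checked before calling `nas_fs -modify`, for example to confirm that a minimum retention period does not exceed the maximum. A rough sketch; the `retention_days` helper is hypothetical (not part of the CLI) and approximates a month as 30 days and a year as 365 days:

```shell
# Hypothetical helper: convert an FLR retention argument (<n>D, <n>M,
# or <n>Y) to an approximate number of days for comparison purposes.
retention_days() {
    case "$1" in
        *D) echo $(( ${1%D} ))       ;;  # days
        *M) echo $(( ${1%M} * 30 ))  ;;  # months, approximated
        *Y) echo $(( ${1%Y} * 365 )) ;;  # years, approximated
        *)  echo "bad retention format: $1" >&2; return 1 ;;
    esac
}

# Check the values used in EXAMPLE #17 before running the command.
min=$(retention_days 30D)
max=$(retention_days 11Y)
[ "$min" -le "$max" ] && echo "retention range OK"
```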
EXAMPLE #18 ------------ To reset the FLR epoch year for file system ufs2_flre to 2000, type: $ nas_fs -modify ufs2_flre -worm -reset_epoch 2000 id = 14 name = ufs2_flre acl = 0 in_use = True type = uxfs worm = enterprise with no protected files worm_clock= Fri Jul 29 11:18:36 EDT 2011 worm Max Retention Date= No protected files created worm Default Retention Period= 10 Years worm Minimum Retention Period= 30 Days worm Maximum Retention Period= 11 Years FLR Auto_lock= off FLR Policy Interval= 3600 seconds FLR Auto_delete= off FLR Epoch Year= 2000 volume = v117 pool = clar_r5_performance
member_of = root_avm_fs_group_3 rw_servers= server_2 ro_servers= rw_vdms = ro_vdms = auto_ext = no,thin=no deduplication = Off stor_devs = BB005056830430-0019,BB005056830430-0016,BB005056830430-0015,BB005056830430-0010 disks = d16,d13,d12,d7 disk=d16 stor_dev=BB005056830430-0019 addr=c0t1l9 server=server_2 disk=d16 stor_dev=BB005056830430-0019 addr=c16t1l9 server=server_2 disk=d13 stor_dev=BB005056830430-0016 addr=c0t1l6 server=server_2 disk=d13 stor_dev=BB005056830430-0016 addr=c16t1l6 server=server_2 disk=d12 stor_dev=BB005056830430-0015 addr=c0t1l5 server=server_2 disk=d12 stor_dev=BB005056830430-0015 addr=c16t1l5 server=server_2 disk=d7 stor_dev=BB005056830430-0010 addr=c0t1l0 server=server_2 disk=d7 stor_dev=BB005056830430-0010 addr=c16t1l0 server=server_2 EXAMPLE #19 ----------- To enable FLR automatic file locking with a policy interval of 30 minutes for file system ufs2_flre, type: $ nas_fs -modify ufs2_flre -worm -auto_lock enable -policy_interval 30M id = 14 name = ufs2_flre acl = 0 in_use = True type = uxfs worm = enterprise with no protected files worm_clock= Fri Jul 29 12:14:44 EDT 2011 worm Max Retention Date= No protected files created worm Default Retention Period= 10 Years worm Minimum Retention Period= 30 Days worm Maximum Retention Period= 11 Years FLR Auto_lock= on FLR Policy Interval= 1800 seconds FLR Auto_delete= off FLR Epoch Year= 2000 volume = v117 pool = clar_r5_performance member_of = root_avm_fs_group_3 rw_servers= server_2 ro_servers= rw_vdms = ro_vdms = auto_ext = no,thin=no deduplication = Off stor_devs = BB005056830430-0019,BB005056830430-0016,BB005056830430-0015,BB005056830430-0010 disks = d16,d13,d12,d7 disk=d16 stor_dev=BB005056830430-0019 addr=c0t1l9 server=server_2 disk=d16 stor_dev=BB005056830430-0019 addr=c16t1l9 server=server_2 disk=d13 stor_dev=BB005056830430-0016 addr=c0t1l6 server=server_2 disk=d13 stor_dev=BB005056830430-0016 addr=c16t1l6 server=server_2 disk=d12 stor_dev=BB005056830430-0015 addr=c0t1l5 server=server_2 
disk=d12 stor_dev=BB005056830430-0015 addr=c16t1l5 server=server_2 disk=d7 stor_dev=BB005056830430-0010 addr=c0t1l0 server=server_2 disk=d7 stor_dev=BB005056830430-0010 addr=c16t1l0 server=server_2 EXAMPLE #20 ----------- To enable FLR automatic file deletion for file system ufs2_flre, type: $ nas_fs -modify ufs2_flre -worm -auto_delete enable id = 40 name = ufs4 acl = 0
in_use = True
type = uxfs
worm = enterprise with no protected files
worm_clock= Wed Jul 6 11:11:13 UTC 2011
worm Max Retention Date= No protected files created
worm Default Retention Period= 1 Year
worm Minimum Retention Period= 1 Day
worm Maximum Retention Period= 1 Year
FLR Auto_lock= on
FLR Policy Interval= 1800 seconds
FLR Auto_delete= on
FLR Epoch Year= 2000
volume = v175
pool = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
auto_ext = no,thin=no
deduplication = Off
stor_devs = APM00042000818-001F,APM00042000818-001D,APM00042000818-0019,APM00042000818-0016
disks = d20,d18,d14,d11

EXAMPLE #21
-----------
To start converting the file system ufs2 from the NT access policy to conform to the MIXED access policy, type:

$ nas_fs -translate ufs2 -access_policy start -to MIXED -from NT

id = 38
name = ufs2
acl = 1432, owner=nasadmin, ID=201
in_use = True
type = uxfs
worm = off
volume = v171
pool = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers= server_2
ro_servers=
rw_vdms =
ro_vdms =
auto_ext = no,thin=no
deduplication = off
stor_devs = APM00042000818-001F,APM00042000818-001D,APM00042000818-0019,APM00042000818-0016,APM00042000818-001C
disks = d20,d18,d14,d11,d17
disk=d20 stor_dev=APM00042000818-001F addr=c0t1l15 server=server_2
disk=d20 stor_dev=APM00042000818-001F addr=c32t1l15 server=server_2
disk=d18 stor_dev=APM00042000818-001D addr=c0t1l13 server=server_2
disk=d18 stor_dev=APM00042000818-001D addr=c32t1l13 server=server_2
disk=d14 stor_dev=APM00042000818-0019 addr=c0t1l9 server=server_2
disk=d14 stor_dev=APM00042000818-0019 addr=c32t1l9 server=server_2
disk=d11 stor_dev=APM00042000818-0016 addr=c0t1l6 server=server_2
disk=d11 stor_dev=APM00042000818-0016 addr=c32t1l6 server=server_2
disk=d17 stor_dev=APM00042000818-001C addr=c0t1l12 server=server_2
disk=d17 stor_dev=APM00042000818-001C addr=c32t1l12 server=server_2

EXAMPLE #1 provides a description of command outputs.
EXAMPLE #22
-----------
To display the status of access policy conversion for ufs2, type:

$ nas_fs -translate ufs2 -access_policy status

status=In progress
percent_inode_scanned=90

EXAMPLE #23
-----------
To create a nested mount file system, nmfs1, type:

$ nas_fs -name nmfs1 -type nmfs -create

id = 8002
name = nmfs1
acl = 0
in_use = False
type = nmfs
worm = off
volume = 0
pool =
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
auto_ext = no,thin=no
deduplication = unavailable
stor_devs =
disks =

EXAMPLE #1 provides a description of command outputs.

EXAMPLE #24
-----------
To delete ufs41, type:

$ nas_fs -delete ufs41

name = ufs41
acl = 0
in_use = False
type = uxfs
worm = off
volume = v129
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
auto_ext = no,thin=no
deduplication = unavailable
stor_devs = BB005056830430-0019,BB005056830430-0016,BB005056830430-0015,BB005056830430-0010
disks = d16,d13,d12,d7

EXAMPLE #1 provides a description of command outputs.

EXAMPLE #25
-----------
To create a file system named ufs3, with a size of 1 GB, by using the clar_r5_performance pool, a maximum size of 10 GB, and with auto-extend and thin provisioning enabled, type:

$ nas_fs -name ufs3 -create size=1G pool=clar_r5_performance -auto_extend yes -max_size 10G -thin yes

id = 8003
name = ufs3
acl = 0
in_use = False
type = uxfs
worm = off
volume = v133
pool = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers=
ro_servers=
rw_vdms = ro_vdms = auto_ext = hwm=90%,max_size=10240M,thin=yes deduplication = unavailable stor_devs = BB005056830430-0018,BB005056830430-0017,BB005056830430-0014,BB005056830430-0011 disks = d15,d14,d11,d8 EXAMPLE #1 provides a description of command outputs. EXAMPLE # 26 ------------ To disable thin provisioning on ufs3, type: $ nas_fs -modify ufs3 -thin no id = 8003 name = ufs3 acl = 0 in_use = False type = uxfs worm = off volume = v133 pool = clar_r5_performance member_of = root_avm_fs_group_3 rw_servers= ro_servers= rw_vdms = ro_vdms = auto_ext = hwm=90%,max_size=10240M,thin=no deduplication = unavailable stor_devs = BB005056830430-0018,BB005056830430-0017,BB005056830430-0014,BB005056830430-0011 disks = d15,d14,d11,d8 EXAMPLE #1 provides a description of command outputs. EXAMPLE # 27 ------------ To query the current directory type and translation status for MPD, type: $ nas_fs -info ufs5 -option mpd id = 11 name = ufs5 acl = 1432, owner=nasadmin, ID=201 in_use = True type = uxfs worm = off volume = v112 pool = clar_r5_performance member_of = root_avm_fs_group_3 rw_servers= server_2 ro_servers= rw_vdms = ro_vdms = auto_ext = no,thin=no deduplication = Off stor_devs = BB005056830430-0019,BB005056830430-0016,BB005056830430-0015,BB005056830430-0010 disks = d16,d13,d12,d7 disk=d16 stor_dev=BB005056830430-0019 addr=c0t1l9 server=server_2 disk=d16 stor_dev=BB005056830430-0019 addr=c16t1l9 server=server_2 disk=d13 stor_dev=BB005056830430-0016 addr=c0t1l6 server=server_2 disk=d13 stor_dev=BB005056830430-0016 addr=c16t1l6 server=server_2 disk=d12 stor_dev=BB005056830430-0015 addr=c0t1l5 server=server_2 disk=d12 stor_dev=BB005056830430-0015 addr=c16t1l5 server=server_2 disk=d7 stor_dev=BB005056830430-0010 addr=c0t1l0 server=server_2 disk=d7 stor_dev=BB005056830430-0010 addr=c16t1l0 server=server_2 Multi-Protocol Directory Information
Default_directory_type = DIR3
Needs_translation = False
Translation_state = Never
Has_translation_error = False

where:
Value                   Definition
-----                   ----------
Default_directory_type  The default directory type for the file system. Available types are: DIR3 and COMPAT.
Needs_translation       If true, then the file system may contain more than one directory type. If false, then all directories are of the file system default directory type.
Translation_state       The current state of the translation thread. Available states are: never, not requested, pending, queued, running, paused, completed, and failed.
Has_translation_error   Indicates whether the most recent translation encountered any errors.

Default_directory_type  Needs_translation state  File system
DIR3                    False                    Is MPD. No action required.
DIR3                    True                     Requires translation or file system maintenance. Contact EMC Customer Service.
COMPAT                  False                    Is COMPAT and requires translation. Contact EMC Customer Service.
COMPAT                  True                     Requires translation. Contact EMC Customer Service.

The combination Default_directory_type=DIR3 and Needs_translation=False ensures that all of this file system's directories are in MPD format, and that none use the obsolete single-protocol format. Any other combination of states, for example, Needs_translation=True, indicates that the file system may contain non-MPD directories, which may not be compatible with a future release.

EXAMPLE #28
------------
To display information about the file system ufs3, including its fast_clone_level (a valid fast_clone_level is 1 or 2), type:

$ nas_fs -info ufs3

id = 478
name = ufs2_flre
acl = 0
in_use = False
type = uxfs
worm = off
volume = v1168
pool = clarsas_archive
member_of = root_avm_fs_group_32
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
auto_ext = no,thin=no
fast_clone_level= unavailable
deduplication = unavailable
stor_devs = APM00112101832-0019,APM00112101832-0028,APM00112101832-0027,APM00112101832-0022
disks = d25,d19,d32,d16

EXAMPLE #29
------------
To display information about the file system ufs4, which uses a Symmetrix backend mapped pool, type:

$ nas_fs -info ufs4

id = 32
name = ufs4
acl = 0
in_use = True
type = uxfs
worm = off
volume = v644
pool = symm_mapped_pool
member_of = root_avm_fs_group_21
rw_servers= server_2
ro_servers=
rw_vdms =
ro_vdms =
auto_ext = hwm=50%,max_size=1024M,thin=yes
fast_clone_level = 1
deduplication = Off
compressed= Mixed
frontend_io_quota = maxiopersec 500,maxmbpersec 500
stor_devs = 000196900016-0553
disks = d524
disk=d524 stor_dev=000196900016-0553 addr=c4t3l5-0-0 server=server_2
disk=d524 stor_dev=000196900016-0553 addr=c20t3l5-0-0 server=server_2
disk=d524 stor_dev=000196900016-0553 addr=c36t3l5-0-0 server=server_2
disk=d524 stor_dev=000196900016-0553 addr=c52t3l5-0-0 server=server_2
disk=d524 stor_dev=000196900016-0553 addr=c68t3l5-0-0 server=server_2
disk=d524 stor_dev=000196900016-0553 addr=c84t3l5-0-0 server=server_2
disk=d524 stor_dev=000196900016-0553 addr=c100t3l5-0-0 server=server_2
disk=d524 stor_dev=000196900016-0553 addr=c116t3l5-0-0 server=server_2

where:
Value              Definition
-----              ----------
compressed         For VNX with Symmetrix backend, indicates whether data is compressed. Values are: True, False, Mixed (indicates some of the LUNs, but not all, are compressed).
frontend_io_quota  For VNX with Symmetrix backend, indicates whether Frontend IO Quota is configured on this mapped pool. Can also have the value False (indicates Frontend IO Quota is not configured on the mapped SG in the Symmetrix backend).

----------------------------------------------------------------------------------
Last Modified: Jan 11, 2013 4:12 pm
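The frontend_io_quota line in EXAMPLE #29 can be parsed into its individual limits, for example when collecting quota settings across file systems. A minimal sketch; the `iops_limit` helper and the embedded sample are illustrative only, and real usage would pipe `nas_fs -info` output into the function:

```shell
# Hypothetical helper: extract the maxiopersec value from the
# frontend_io_quota line of nas_fs -info output on a VNX with a
# Symmetrix backend. Real usage: nas_fs -info ufs4 | iops_limit
iops_limit() {
    # split on spaces, '=' and ',' and print the token after maxiopersec
    awk -F'[ =,]+' '$1 == "frontend_io_quota" {
        for (i = 2; i < NF; i++)
            if ($i == "maxiopersec") print $(i + 1)
    }'
}

# Sample text taken from the example output above.
sample='pool              = symm_mapped_pool
frontend_io_quota = maxiopersec 500,maxmbpersec 500
stor_devs         = 000196900016-0553'

printf '%s\n' "$sample" | iops_limit
```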
nas_fsck

Manages the fsck and aclchk utilities on specified file systems.

SYNOPSIS
--------
nas_fsck -list | -info {-all|
[-Force]
Forces fsck or aclchk to be run on an enabled file system.

SEE ALSO
--------
Managing Volumes and File Systems for VNX Manually and nas_fs.

EXAMPLE #1
----------
To start a file system check on ufs1 and monitor the progress, type:

$ nas_fsck -start ufs1 -monitor

id = 27
name = ufs1
volume = mtv1
fsck_server = server_2
inode_check_percent = 10..20..30..40..60..70..80..100
directory_check_percent = 0..0..100
used_ACL_check_percent = 100
free_ACL_check_status = Done
cylinder_group_check_status = In Progress..Done

Where:
Value                        Definition
-----                        ----------
id                           Automatically assigned ID of a file system.
name                         Name assigned to the file system.
volume                       Volume on which the file system resides.
fsck_server                  Name of the Data Mover where the utility is being run.
inode_check_percent          Percentage of inodes in the file system checked and fixed.
directory_check_percent      Percentage of directories in the file system checked and fixed.
used_ACL_check_percent       Percentage of used ACLs that have been checked and fixed.
free_ACL_check_status        Status of the ACL check.
cylinder_group_check_status  Status of the cylinder group check.

EXAMPLE #2
----------
To start an ACL check on ufs1, type:

$ nas_fsck -start ufs1 -aclchkonly

ACLCHK: in progress for file system ufs1

EXAMPLE #3
----------
To start a file system check on ufs2 using Data Mover server_5, type:

$ nas_fsck -start ufs2 -mover server_5

name = ufs2
id = 23
volume = v134
fsck_server = server_5
inode_check_percent = 40
directory_check_percent = 0
used_ACL_check_percent = 0
free_ACL_check_status = Not Started
cylinder_group_check_status = Not Started

EXAMPLE #1 provides a description of command outputs.

EXAMPLE #4
----------
To list all current file system checks, type:

$ nas_fsck -list

id   type state  volume name server
23   1    FSCK   134    ufs2 4
27   1    ACLCHK 144    ufs1 1

Where:
Value   Definition
-----   ----------
id      Automatically assigned ID of a file system.
type    Type of file system.
state   Utility being run.
volume  Volume on which the file system resides.
name    Name assigned to the file system.
server  Server on which fsck is being run.

EXAMPLE #5
----------
To display information about the file system check for ufs2 that is currently running, type:

$ nas_fsck -info ufs2

name = ufs2
id = 23
volume = v134
fsck_server = server_5
inode_check_percent = 100
directory_check_percent = 100
used_ACL_check_percent = 100
free_ACL_check_status = Done
cylinder_group_check_status = In Progress

EXAMPLE #1 provides a description of command outputs.

EXAMPLE #6
----------
To display information about all file system checks that are currently running, type:

$ nas_fsck -info -all

name = ufs2
id = 23
volume = v134
fsck_server = server_5
inode_check_percent = 30
directory_check_percent = 0
used_ACL_check_percent = 0
free_ACL_check_status = Not Started
cylinder_group_check_status = Not Started

name = ufs1
id = 27
volume = mtv1
fsck_server = server_2
inode_check_percent = 100
directory_check_percent = 0
used_ACL_check_percent = 0
free_ACL_check_status = Not Started
cylinder_group_check_status = Not Started

EXAMPLE #1 provides a description of command outputs.

------------------------------------------------------------------
Last modified: May 11, 2011 9:30 am.
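The phase fields reported by nas_fsck -info can be checked mechanically, for example to wait until a check finishes before taking further action. A minimal sketch; `fsck_pending` is a hypothetical helper, and the embedded sample mirrors the EXAMPLE #6 output above:

```shell
# Hypothetical helper: succeed if any nas_fsck phase is still
# outstanding, judged from -info style output. Real usage might be:
#   while nas_fsck -info ufs2 | fsck_pending; do sleep 30; done
fsck_pending() {
    grep -qE 'Not Started|In Progress'
}

# Sample text taken from the example output above.
sample='inode_check_percent         = 100
directory_check_percent     = 100
used_ACL_check_percent      = 100
free_ACL_check_status       = Done
cylinder_group_check_status = In Progress'

if printf '%s\n' "$sample" | fsck_pending; then
    echo "check still running"
else
    echo "check complete"
fi
```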
nas_halt Performs a controlled halt of all Control Stations and Data Movers in the VNX. SYNOPSIS -------- nas_halt now DESCRIPTION ----------- nas_halt performs an orderly shutdown of the Control Stations and Data Movers in the VNX. nas_halt must be executed from the /nas/sbin directory. OPTIONS ------- now Performs an immediate halt for the VNX. SEE ALSO -------- VNX System Operations and server_cpu. EXAMPLE #1 ---------- To perform an immediate halt of the VNX, type: # /nas/sbin/nas_halt now usage: nas_halt now Perform a controlled halt of the Control Stations and Data Movers # /nas/sbin/nas_halt now ******************************** WARNING! ******************************* You are about to HALT this system including all of its Control Stations and Data Movers. DATA will be UNAVAILABLE when the system is halted. Note that this command does *not* halt the storage array. ARE YOU SURE YOU WANT TO CONTINUE? [yes or no] : yes Sending the halt signal to the Master Control Daemon...: Done May 3 11:12:54 cs100 EMCServer: nas_mcd: Check and halt other CS...: Done May 3 11:13:26 cs100 JSERVER: *** Java Server is exiting *** May 3 11:13:31 cs100 ucd-snmp[11218]: Received TERM or STOP signal... shutting down... May 3 11:13:31 cs100 snmpd: snmpd shutdown succeeded May 3 11:13:32 cs100 setup_enclosure: Executing -dhcpd stop option May 3 11:13:32 cs100 snmptrapd[11179]: Stopping snmptrapd May 3 11:13:32 cs100 EV_AGENT[13721]: Signal TERM received May 3 11:13:32 cs100 EV_AGENT[13721]: Agent is going down May 3 11:13:40 cs100 DHCPDMON: Starting DHCPD on CS 0 May 3 11:13:41 cs100 setup_enclosure: Executing -dhcpd start option May 3 11:13:41 cs100 dhcpd: Internet Software Consortium DHCP Server V3.0pl1 May 3 11:13:41 cs100 dhcpd: Copyright 1995-2001 Internet Software Consortium. May 3 11:13:41 cs100 dhcpd: All rights reserved. May 3 11:13:41 cs100 dhcpd: For info, please visit http://www.isc.org/products/DHCP May 3 11:13:41 cs100 dhcpd: Wrote 0 deleted host decls to leases file. 
May 3 11:13:41 cs100 dhcpd: Wrote 0 new dynamic host decls to leases file. May 3 11:13:41 cs100 dhcpd: Wrote 0 leases to leases file. May 3 11:13:41 cs100 dhcpd: Listening on LPF/eth2/00:00:f0:9d:04:13/128.221.253.0/24 May 3 11:13:41 cs100 dhcpd: Sending on LPF/eth2/00:00:f0:9d:04:13/128.221.253.0/24 May 3 11:13:41 cs100 dhcpd: Listening on LPF/eth0/00:00:f0:9d:01:e5/128.221.252.0/24 May 3 11:13:41 cs100 dhcpd: Sending on LPF/eth0/00:00:f0:9d:01:e5/128.221.252.0/24
May 3 11:13:41 cs100 dhcpd: Sending on Socket/fallback/fallback-net May 3 11:13:59 cs100 mcd_helper: : Failed to umount /nas (0) May 3 11:13:59 cs100 EMCServer: nas_mcd: Failed to gracefully shutdown MCD and halt servers. Forcing halt and reboot... May 3 11:13:59 cs100 EMCServer: nas_mcd: Halting all servers... May 3 11:15:00 cs100 get_datamover_status: Data Mover server_5: COMMAND doesnt match. ---------------------------------------------------------- Last modified: May 10, 2011 5:25 pm.
nas_inventory

Provides detailed information about hardware components in the system.

SYNOPSIS
--------
nas_inventory { -list [-location] | {-info
Where:
Value      Definition
-----      ----------
Component  Description of the component.
Type       The type of component. Possible types are: battery, blower, VNX, Control Station, Data Mover, and disk.
Status     The current status of the component. Status is component type specific. There are several possible status values, each of which is associated with a particular component type.
System ID  The identifier for the VNX or the storage ID of the system containing the component.

EXAMPLE #2
----------
To display a list of components and component locations, type:

$ nas_inventory -list -location

Component                           Type        Status   System ID
Battery A                           Battery     OK       CLARiiON CX4-240 FCNTR083000055
  system:NS40G:FCNTR083000055001A|clariionSystem:CX4-240:FCNTR083000055|sps::A
Celerra NS40G FCNTR083000055001A    Celerra     Warning  Celerra NS40G FCNTR083000055001A
  system:NS40G:FCNTR083000055001A
CLARiiON CX4-240 FCNTR083000055     CLARiiON    OK       CLARiiON CX4-240 FCNTR083000055
  system:NS40G:FCNTR083000055001A|clariionSystem:CX4-240:FCNTR083000055
DME 0 Data Mover 2                  Data Mover  OK       Celerra NS40G FCNTR083000055001A
  system:NS40G:FCNTR083000055001A|enclosure:xpe:0|mover:NS40:2
DME 0 Data Mover 2 Ethernet Module  Module      OK       Celerra NS40G FCNTR083000055001A
  system:NS40G:FCNTR083000055001A|enclosure:xpe:0|mover:NS40:2|module:ethernet:
DME 0 Data Mover 2 SFP BE0          SFP         OK       Celerra NS40G FCNTR083000055001A
  system:NS40G:FCNTR083000055001A|enclosure:xpe:0|mover:NS40:2|sfp::BE0
DME 0 Data Mover 2 SFP BE1          SFP         OK       Celerra NS40G FCNTR083000055001A
  system:NS40G:FCNTR083000055001A|enclosure:xpe:0|mover:NS40:2|sfp::BE1
DME 0 Data Mover 2 SFP FE0          SFP         OK       Celerra NS40G FCNTR083000055001A
  system:NS40G:FCNTR083000055001A|enclosure:xpe:0|mover:NS40:2|sfp::FE0

EXAMPLE #3
----------
To list information for a specific component, type:

$ nas_inventory -info "system:NS40G:FCNTR083000055001A|clariionSystem:CX4-240:FCNTR083000055|iomodule::B0"

Location = system:NS40G:FCNTR083000055001A|clariionSystem:CX4-240:FCNTR083000055|iomodule::B0
Component
Name = IO Module B0 Type = IO Module Status = OK Variant = 4 PORT FIBRE IO MODULE Storage System = CLARiiON CX4-240 FCNTR083000055 Serial Number = CF2YW082800426 Part Number = 103-054-100C History = EMC_PART_NUMBER:103-054-100C EMC_ARTWORK_REVISION:C01 EMC_ASSEMBLY_REVISION:C03 EMC_SERIAL_NUMBER:CF2YW082800426 VENDER_PART_NUMBER:N/A VENDER_ARTWORK_NUMBER:N/A VENDER_ASSEMBLY_NUMBER:N/A VENDER_SERIAL_NUMBER:N/A VENDOR_NAME:N/A LOCATION_OF_MANUFACTURE:N/A YEAR_OF_MANUFACTURE:N/A MONTH_OF_MANUFACTURE:N/A
                 DAY_OF_MONTH_OF_MANUFACTURE:N/A
                 ASSEMBLY_NAME:4 PORT FIBRE IO MODULE

Note: The location string must be enclosed in double quotes.

Where:

Value           Definition
-----           ----------
Location        The unique identifier of the component and where the component is located in the component hierarchy.
Component       The description of the component.
Type            The type of component. Possible types are: battery, blower, VNX for file, VNX for block, Control Station, Data Mover, and disk.
Status          The current condition of the component. Status is component type specific. There are several possible status values, each of which is associated with a particular component type.
Variant         The specific type of hardware.
Storage System  The model and serial number of the system.
Serial Number   The serial number of the hardware component.
Part Number     The part number of the hardware component.
History         If available, the history information of the component. Possible values are: part number, serial number, vendor, date of manufacture, and CPU information.
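The component listings above lend themselves to standard shell filtering. A minimal sketch, run against a few rows captured from the `nas_inventory -tree` output in EXAMPLE #4 (where the Status column is last) rather than a live system, that surfaces components whose status is not OK:

```shell
# Sample rows captured from the nas_inventory -tree output in EXAMPLE #4;
# on a live system you would pipe the command's output instead.
sample='Battery A Battery OK
Celerra NS40G FCNTR083000055001A Celerra Warning
Power Supply A0 Power Supply OK'

# Print only rows whose last (Status) field is not OK.
printf '%s\n' "$sample" | awk '$NF != "OK"'
```

This prints only the Celerra component reporting a Warning status; the same filter works for Empty or any other non-OK status value.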
EXAMPLE #4
----------
To display components in a tree structure, type:

$ nas_inventory -tree
Component                         Type          Status
Celerra NS40G FCNTR083000055001A  Celerra       Warning
CLARiiON CX4-240 FCNTR083000055   CLARiiON      OK
Battery A                         Battery       OK
IO Module A0                      IO Module     OK
IO Module A1                      IO Module     OK
IO Module A2                      IO Module     Empty
IO Module A3                      IO Module     Empty
IO Module A4                      IO Module     Empty
IO Module B0                      IO Module     OK
IO Module B1                      IO Module     OK
IO Module B2                      IO Module     Empty
IO Module B3                      IO Module     Empty
IO Module B4                      IO Module     Empty
Power Supply A0                   Power Supply  OK
Power Supply A1                   Power Supply  OK
Power Supply B0                   Power Supply  OK
Power Supply B1                   Power Supply  OK

EXAMPLE #5
----------
To list information for a specific component, type:

$ nas_inventory -info "system:EA-NAS-SN:00019670026100013|enclosure:SYMM:Eng 3
Dir A|mover:EA-NAS-SN:3|iomodule::3"
Location       = system:EA-NAS-SN:00019670026100013|enclosure:SYMM:Eng 3 Dir
                 A|mover:EA-NAS-SN:3|iomodule::3
Component Name = SYMM Eng 3 Dir A Data Mover 3 IO Module 3
Type           = IO Module
Status         = OK
Variant        = 4 PORT CU GIGE
History        = FIRMWARE_VERSION:3.28
                 ASSEMBLY_NAME:4 PORT CU GIGE
Note: The location string must be enclosed in double quotes.

Where:

Value           Definition
-----           ----------
Location        The unique identifier of the component and where the component is located in the component hierarchy.
Component       The description of the component.
Type            The type of component. Possible types are: battery, blower, VNX for file, VNX for block, Control Station, Data Mover, and disk.
Status          The current condition of the component. Status is component type specific. There are several possible status values, each of which is associated with a particular component type.
Variant         The specific type of hardware.
Storage System  The model and serial number of the system.
Serial Number   The serial number of the hardware component.
Part Number     The part number of the hardware component.
History         If available, the history information of the component. Possible values are: part number, serial number, vendor, date of manufacture, firmware version, and CPU information. FIRMWARE_VERSION displays the firmware version of the iomodule component.

---------------------------------------------------------------
Last modified: May 11, 2011 10:00 am.
nas_license

Enables software packages.

SYNOPSIS
--------
nas_license -list | -create
EXAMPLE #1
----------
To install a license for the snapsure software package, type:

$ nas_license -create snapsure
done

EXAMPLE #2
----------
To display all software packages with currently installed licenses, type:

$ nas_license -list
key                 status    value
site_key            online    42 de 6f d1
advancedmanager     online
nfs                 online
cifs                online
snapsure            online
replicator          online
filelevelretention  online

EXAMPLE #3
----------
To delete a license for a specified software package, type:

$ nas_license -delete snapsure
done

EXAMPLE #4
----------
To initialize the database and re-create the license file, type:

$ nas_license -init
done

------------------------------------------------------------
Last modified: Jan 15, 2013 4:25 pm
nas_logviewer

Displays the content of nas_eventlog generated log files.

SYNOPSIS
--------
nas_logviewer
Note: This is a partial listing due to the length of the outputs.

EXAMPLE #2
----------
To display the contents of the log files in terse format, type:

$ nas_logviewer -t /nas/log/sys_log
May 12 18:01:57 2007:96108871980:nasdb_backup: NAS_DB checkpoint in progress
May 12 18:02:59 2007:96108871985:nasdb_backup: NAS_DB Checkpoint done
May 12 18:03:00 2007:83223969994:NAS database error detected
May 12 18:03:12 2007:96108871986:nasdb_backup: NAS DB Backup done
May 12 19:01:52 2007:96108871980:nasdb_backup: NAS_DB checkpoint in progress
May 12 19:02:50 2007:96108871985:nasdb_backup: NAS_DB Checkpoint done
May 12 19:02:51 2007:83223969994:NAS database error detected
May 12 19:03:02 2007:96108871986:nasdb_backup: NAS DB Backup done
May 12 20:01:57 2007:96108871980:nasdb_backup: NAS_DB checkpoint in progress
May 12 20:02:58 2007:96108871985:nasdb_backup: NAS_DB Checkpoint done
May 12 20:02:59 2007:83223969994:NAS database error detected
May 12 20:03:10 2007:96108871986:nasdb_backup: NAS DB Backup done
May 12 21:01:52 2007:96108871980:nasdb_backup: NAS_DB checkpoint in progress
May 12 21:02:51 2007:96108871985:nasdb_backup: NAS_DB Checkpoint done

EXAMPLE #3
----------
To display the contents of the log files in verbose format, type:

$ nas_logviewer -v /nas/log/sys_log|more
logged time        = May 12 18:01:57 2007
creation time      = May 12 18:01:57 2007
slot id            =
id                 = 96108871980
severity           = INFO
component          = CS_PLATFORM
facility           = NASDB
baseid             = 300
type               = EVENT
brief description  = nasdb_backup: NAS_DB checkpoint in progress
full description   = The Celerra configuration database is being checkpointed.
recommended action = No action required.
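The terse lines shown in EXAMPLE #2 are colon-delimited, but the leading timestamp ("May 12 18:01:57 2007") itself contains two colons, so the event ID is the fourth colon-separated field. A minimal sketch, run here against a few captured sample lines rather than a live log, that counts occurrences per event ID:

```shell
# Sample terse-format lines captured from the EXAMPLE #2 output; on a live
# system you would pipe `nas_logviewer -t /nas/log/sys_log` instead.
log='May 12 18:01:57 2007:96108871980:nasdb_backup: NAS_DB checkpoint in progress
May 12 18:03:00 2007:83223969994:NAS database error detected
May 12 19:02:51 2007:83223969994:NAS database error detected'

# Field 4 is the event ID once the two timestamp colons are accounted for.
printf '%s\n' "$log" | awk -F: '{count[$4]++} END {for (id in count) print id, count[id]}'
```

For these sample lines the summary shows event 83223969994 twice and event 96108871980 once (the ordering of the summary lines is not guaranteed).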
logged time        = May 12 18:02:59 2007
creation time      = May 12 18:02:59 2007
slot id            =
id                 = 96108871985
severity           = INFO
component          = CS_PLATFORM
facility           = NASDB
baseid             = 305
type               = EVENT
brief description  = nasdb_backup: NAS_DB Checkpoint done
full description   = The NAS DB backup has completed a checkpoint of the
                     current database in preparation for performing a backup
                     of NAS system data.
recommended action = No action required.

EXAMPLE #4
----------
To monitor the growth of the current log, type:

$ nas_logviewer -f /nas/log/sys_log|more
May 12 18:01:57 2007:CS_PLATFORM:NASDB:INFO:300:::::nasdb_backup: NAS_DB checkpoint in progress
May 12 18:02:59 2007:CS_PLATFORM:NASDB:INFO:305:::::nasdb_backup: NAS_DB Checkpoint done
May 12 18:03:00 2007:CS_PLATFORM:NASDB:ERROR:202:::::NAS database error detected
May 12 18:03:12 2007:CS_PLATFORM:NASDB:INFO:306:::::nasdb_backup: NAS DB Backup done
May 12 19:01:52 2007:CS_PLATFORM:NASDB:INFO:300:::::nasdb_backup: NAS_DB checkpoint in progress
May 12 19:02:50 2007:CS_PLATFORM:NASDB:INFO:305:::::nasdb_backup: NAS_DB Checkpoint done
May 12 19:02:51 2007:CS_PLATFORM:NASDB:ERROR:202:::::NAS database error detected
May 12 19:03:02 2007:CS_PLATFORM:NASDB:INFO:306:::::nasdb_backup: NAS DB Backup done
May 12 20:01:57 2007:CS_PLATFORM:NASDB:INFO:300:::::nasdb_backup: NAS_DB checkpoint in progress
May 12 20:02:58 2007:CS_PLATFORM:NASDB:INFO:305:::::nasdb_backup: NAS_DB Checkpoint done
May 12 20:02:59 2007:CS_PLATFORM:NASDB:ERROR:202:::::NAS database error detected
May 12 20:03:10 2007:CS_PLATFORM:NASDB:INFO:306:::::nasdb_backup: NAS DB Backup done
May 12 21:01:52 2007:CS_PLATFORM:NASDB:INFO:300:::::nasdb_backup: NAS_DB checkpoint in progress
May 12 21:02:51 2007:CS_PLATFORM:NASDB:INFO:305:::::nasdb_backup: NAS_DB Checkpoint done
May 12 21:02:52 2007:CS_PLATFORM:NASDB:ERROR:202:::::NAS database error detected
May 12 21:03:03 2007:CS_PLATFORM:NASDB:INFO:306:::::nasdb_backup: NAS DB Backup done

-------------------------------------------------------------------
Last modified: May 10, 2011 1:00 pm.
nas_message

Displays message description.

SYNOPSIS
--------
nas_message -info
nas_migrate

Plans migrations at the Virtual Data Mover (VDM) level, and manages migrations at both the VDM and file system (FS) level.

SYNOPSIS
--------
nas_migrate -list [{-all|-mover
OPTIONS
-------
-list [{-all|-mover
specified. Indicates the mapping of source and destination network devices. This guides the migration in choosing the network devices that are used when creating destination network interfaces when the -take_over_ips option is specified. Without -take_over_ips, the destination interfaces must be created manually on the destination mover by the user, with names identical to the source interfaces. Whether or not the source network interfaces are taken over, the interfaces attached to the source VDM are brought down after the migration is completed.

Note: To take over IPs, the interfaces must be in the same subnet and have the same VLAN settings at the source and the destination. Also, the interfaces must be IPv4. IPv6 interfaces cannot be taken over.

Note: To exclude a file system, it must be unexported or unshared and unmounted before creating a VDM level migration plan.

[-checkpoint_excluded]
Excludes all the existing read-only user checkpoints from the migration.

[-background]
Runs the task in the background.

Note: When -background is specified, a task ID will be returned, and the user can check nas_task -i
or an error message if it failed.

-delete {
[-sav{id=
-start {
Name                                = fsMigEx1
Type                                = FILESYSTEM
State                               = READY_TO_COMPLETE
Network Status                      = OK
Source Celerra/VNX Network Server   = spring
Peer Dart Interconnect              = spring_summer
Dart Interconnect                   = summer_spring
File Systems                        = fs3->fs3
Source Mover                        = server_2
Destination Mover                   = server_4
Read-Only User Checkpoints Excluded = Yes
Replications                        = 337_BB005056903C71_0000_2951_BB0050569059F6_0000 : Filesystem

EXAMPLE #3
----------
To list summary information of all VDM migration plans, type:

$ nas_migrate -plan -list -id
ID           Name     Source Celerra/VNX  Source VDM  Destination VDM  Destination Pool
20000034500  PlanEx1  spring              vdmEx1      N/A              dstpool3
20000035780  PlanEx2  spring              vdmEx2      vdmEx2           N/A

Note: Either Destination VDM or Pool is N/A because the user can specify either a pool to create the destination VDM root file system, or an existing destination VDM.

EXAMPLE #4
----------
To display detailed information about migration plan PlanEx1, type:

$ nas_migrate -plan -info PlanEx1
ID                                  = 20000034500
Name                                = planEx1
Source Celerra/VNX Network Server   = spring
Peer Dart Interconnect              = spring_winter
Dart Interconnect                   = winter_spring
Source VDM                          = vdmEx1
Destination VDM                     = N/A
Destination Pool (for VDM)          = dstpool3
File Systems                        = srcFs = fs1
                                      |-- dstFs(Recommended ID) = 1001,NOT PRESERVED
                                      |-- dstPool = dstpool1
                                      |-- srcSavPool = srcpool1 -- dstSavPool = dstpool1
                                    = srcFs = fs2
                                      |-- dstFs(Recommended ID) = 1002,PRESERVED
                                      |-- dstPool = dstpool2
                                      |-- srcSavPool = srcpool2 -- dstSavPool = Dstpool2
Read-Only User Checkpoints Excluded = No
Takeover IP Addresses               = Yes
Interfaces - Devices                = name=eth1:dstDevice=cge1
                                      name=eth2:dstDevice=cge20

Where:

Value          Definition
-----          ----------
NOT PRESERVED  The source file system ID cannot be preserved, so the NFS clients have to remount this file system after VDM migration completes.
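As noted under -take_over_ips, IP takeover requires the source and destination interfaces to be IPv4 and in the same subnet. That subnet condition can be pre-checked with ordinary shell arithmetic; a minimal sketch (the helper names and addresses are illustrative only, not part of nas_migrate):

```shell
# Compute the IPv4 network address for an IP/netmask pair.
to_net() {
    oldifs=$IFS; IFS=.
    set -- $1 $2        # split both dotted quads: $1-$4 = IP, $5-$8 = mask
    IFS=$oldifs
    echo "$(($1 & $5)).$(($2 & $6)).$(($3 & $7)).$(($4 & $8))"
}

# True when both IPs fall in the same subnet under the shared netmask.
same_subnet() {
    [ "$(to_net "$1" "$3")" = "$(to_net "$2" "$3")" ]
}

# Example: a source interface and its planned destination counterpart.
if same_subnet 10.6.50.11 10.6.50.42 255.255.255.0; then
    echo "IP takeover possible"
else
    echo "create destination interfaces manually"
fi
```

If the check fails (or the interfaces are IPv6), plan the migration without -take_over_ips and create the destination interfaces manually, as described in the OPTIONS section.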
EXAMPLE #5
----------
To create a VDM migration plan with the default setting when IP-takeover applies, type:

$ nas_migrate -plan -create planEx1 -source vdmEx1 -destination -pool dstpool3 -interconnect winter_spring -take_over_ips
Info 26843676673: In Progress: Operation is still running. Check task id 24416 on the Background Tasks screen for results.
Validate plan name ... succeeded
Create plan ...
Validate destination system licenses ... succeeded
Validate interconnect ... succeeded
Validate source system licenses ... succeeded
Validate system versions ... succeeded
Validate I18N and CIFS service ... succeeded
Validate source VDM ... succeeded
Make migration plan for VDM ... succeeded
Validate source file system(s) ... succeeded
Make migration plan for file system(s) ... succeeded
Make migration plan for interface(s) ... succeeded
Create plan ... succeeded
Save plan ... succeeded
OK

EXAMPLE #6
----------
To create a VDM migration plan with storage pool mapping and IP-takeover, type:

$ nas_migrate -plan -create planEx1 -source vdmEx1 -destination -pool dstpool3 -interconnect winter_spring -storage_pools srcpool1:dstpool1,srcpool2:dstpool2 -take_over_ips
Output omitted for brevity.

Where:

Value          Definition
-----          ----------
storage_pools  Specifies the storage pool mapping. When not specified, the default matching rule is to auto-select a storage pool on the destination for each file system by (in priority order) storage pool profile, disk type, then size.

EXAMPLE #7
----------
To create a VDM migration plan with network device mapping, type:

$ nas_migrate -plan -create planEx1 -source vdmEx1 -destination -pool dstpool3 -interconnect winter_spring -take_over_ips -network_devices cge_src1:cge_dst1,cge_src2:cge_dst2
Output omitted for brevity.

Where:

Value            Definition
-----            ----------
network_devices  Specifies the network device mapping to create destination interfaces with the exact same IP addresses as the source interfaces. When not specified, the default matching rule is to use the network devices with names identical to those of the source network devices. This option is a sub-option of -take_over_ips.

EXAMPLE #8
----------
To create a VDM migration plan with storage pool mapping, IP-takeover, network device mapping, and file systems excluded, type:

$ nas_migrate -plan -create planEx1 -source vdmEx1 -destination -pool dstpool3 -interconnect winter_spring -storage_pools srcpool1:dstpool1,srcpool2:dstpool2 -take_over_ips -network_devices cge1:cge1,cge2:cge20
Output omitted for brevity.

EXAMPLE #6 and EXAMPLE #7 provide descriptions of storage pool and network device mapping.

EXAMPLE #9
----------
To modify a VDM migration plan, type:

$ nas_migrate -plan -modify plan001 -name plan001_New -filesystems -id srcFs=100:dstFs=100,srcFs=300:dstPool=3 -interfaces name=eth10:dstDevice=cge10
Output omitted for brevity.

EXAMPLE #10
-----------
To delete a VDM migration plan, type:

$ nas_migrate -plan -delete plan001
Output omitted for brevity.

EXAMPLE #11
-----------
To create a VDM level migration, type:

$ nas_migrate -create vdmMigEx1 -vdm -plan planEx1
Output omitted for brevity.

EXAMPLE #12
-----------
To create a file system level migration, type:

$ nas_migrate -create fsMigEx1 -fs -source fs3 -destination -pool dstpool3 -interconnect summer_spring
Info 26843676673: In Progress: Operation is still running. Check task id 63654 on the Background Tasks screen for results.
Validate migration name
Create destination file systems ...
Create destination file systems: <#created>/<#total>(updated per 2 minutes)
Create destination file systems ... succeeded
Create checkpoints ...
Create checkpoints: <#created>/<#total>(updated per 2 minutes)
Create checkpoints ... succeeded
Create replications ...
Create replications: <#created>/<#total>(updated per 2 minutes)
Create replications ... succeeded
Update Migration State [INITIAL_COPYING] ... succeeded
Initial Copy ...
Initial Copy: Total=50000(M): Copied=10000(M): Transfer Rate=2000(KB/s)(updated per 10 minutes)
Initial Copy: Total=50000(M): Copied=20000(M): Transfer Rate=3000(KB/s)(updated per 10 minutes)
Initial Copy ... succeeded
Modify RPO of replications ... succeeded
Update migration state to [READY_TO_COMPLETE] ... succeeded
OK

EXAMPLE #13
-----------
To complete a migration with the background flag, type:

$ nas_migrate -complete fsMigEx1 -checkpoint_mismatch_ignored -background
Info 26843676432: In Progress: Operation is still running. Check task id 134227 on the Background Tasks screen for results.

EXAMPLE #14
-----------
To delete a migration with the background flag, type:

$ nas_migrate -delete fsMigEx1 -background
Info 26843676556: In Progress: Operation is still running. Check task id 142811 on the Background Tasks screen for results.

EXAMPLE #15
-----------
To stop a migration with the background flag, type:

$ nas_migrate -stop fsMigEx1 -background
Info 26843676556: In Progress: Operation is still running. Check task id 144511 on the Background Tasks screen for results.

EXAMPLE #16
-----------
To stop a migration, type:

$ nas_migrate -stop id=20002224601
Info 26843676673: In Progress: Operation is still running. Check task id 17919 on the Background Tasks screen for results.
Check migration state ... succeeded
Change migration state to STOPPING ... succeeded
Check local replication state ... succeeded
Check remote replication state ... succeeded
Stop replication in parallel ...
Stop replication task state: Total=10 Succeeded=0 Failed=0
Stop replication task state: Total=10 Succeeded=5 Failed=0
Stop replication task state: Total=10 Succeeded=6 Failed=0
Stop replication task state: Total=10 Succeeded=10 Failed=0
Stop replication in parallel succeeded
Change migration state to STOPPED ... succeeded

EXAMPLE #17
-----------
To start a migration, type:

$ nas_migrate -start id=20002224601 -background
Info 26843676673: In Progress: Operation is still running. Check task id 144527 on the Background Tasks screen for results.

--------------------------------------------------------------------
Last modified: Feb 22 2013, 4:34 pm
nas_mview

Performs MirrorView/Synchronous (MirrorView/S) operations on a system attached to an older version of VNX for block.

SYNOPSIS
--------
nas_mview -info | -init
standby Data Movers acquire the IP and MAC addresses, file systems, and export tables of their source counterparts. If the original source site is unavailable, the destination LUNs are promoted to the primary role, making them visible to the destination VNX for file. The original source LUNs cannot be converted to backup images; they stay visible to the source VNX for file, and the original destination site is activated with new source (primary) LUNs only. If the source cannot be shut down in a disaster scenario, any writes occurring after the forced activation will be lost during a restore.

-restore
Issued from the destination system using the remote administration account, restores a source system after a MirrorView/S failover, and fails back the device group to the source system. The restore process begins by checking the state of the device group. If the device group state is Local Only (where each mirror has only the source LUN), the device group will be fully synchronized and rebuilt before the failback can occur. If the device group condition is fractured, an incremental synchronization is performed before the failback occurs. Source devices are then synchronized with the data on the original destination devices, I/O access is shut down, the original destination Data Movers are rebooted as remote standbys, and the mirrored devices are failed back. When the source side is restored, the source Data Movers and their services are restarted. If the restore fails, the source Control Station is not reachable on the data network. To complete the restore, access the source, log in as root, and type /nasmcd/sbin/nas_mview -restore.

SEE ALSO
--------
Using MirrorView/Synchronous with VNX for Disaster Recovery, nas_cel, and nas_checkup.
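The synchronization that -restore performs is chosen from the device group state and condition reported by nas_mview -info, as described above. A minimal sketch of that decision logic (illustrative shell only, not EMC code; the function name is ours):

```shell
# Map the device group state/condition to the synchronization step that
# -restore performs before failback, per the description above.
restore_sync_mode() {
    # $1 = device group state, $2 = device group condition
    if [ "$1" = "Local Only" ]; then
        # Each mirror has only the source LUN: full sync and rebuild first.
        echo "full synchronization and rebuild"
    elif [ "$2" = "Fractured" ]; then
        # Fractured condition: incremental synchronization suffices.
        echo "incremental synchronization"
    else
        # Other states/conditions: assumption in this sketch, no extra sync.
        echo "no extra synchronization before failback"
    fi
}

restore_sync_mode "Local Only" "Active"
restore_sync_mode "Consistent" "Fractured"
```

EXAMPLE #2 below shows where the state and condition values appear in the nas_mview -info output.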
STORAGE SYSTEM OUTPUT
---------------------
The number associated with the storage device reflects the attached storage system; for MirrorView/S, VNX for block displays a prefix of APM before a set of integers, for example, APM00033900124-0019.

The VNX for block supports the following system-defined AVM storage pools for MirrorView/S only: cm_r1, cm_r5_performance, cm_r5_economy, cmata_archive, cmata_r3, cm_r6, and cmata_r6.

EXAMPLE #1
----------
To initialize a destination VNX for file in an active/passive configuration to communicate with source site source_cs, from the destination Control Station, type:

# /nas/sbin/nas_mview -init source_cs
Celerra with MirrorView/Synchronous Disaster Recovery
Initializing source_cs --> target_cs
Contacting source_cs for remote storage info
Local storage system: APM00053001549
Remote storage system: APM00053001552
Enter the Global CLARiiON account information
Username: emc
Password: ***
Retype your response to validate
Password: ***
Discovering storage on source_cs (may take several minutes)
Setting security information for APM00053001549
Discovering storage APM00053001552 (may take several minutes)
Discovering storage (may take several minutes)
Contacting source_cs for remote storage info
Gathering server information...
Contacting source_cs for server capabilities...
Analyzing server information...

Source servers available to be configured for remote DR
-------------------------------------------------------
1. server_2:source_cs
2. server_3:source_cs [ local standby ]
v. Verify standby server configuration
q. Quit initialization process
c. Continue initialization
Select a source_cs server: 1

Destination servers available to act as remote standby
------------------------------------------------------
1. server_2:target_cs [ unconfigured standby ]
2. server_3:target_cs [ unconfigured standby ]
b. Back
Select a target_cs server: 1

Source servers available to be configured for remote DR
-------------------------------------------------------
1. server_2:source_cs [ remote standby is server_2:target_cs ]
2. server_3:source_cs [ local standby ]
v. Verify standby server configuration
q. Quit initialization process
c. Continue initialization
Select a source_cs server: 2

Destination servers available to act as remote standby
------------------------------------------------------
1. server_2:target_cs [ is remote standby for server_2:source_cs ]
2. server_3:target_cs [ unconfigured standby ]
b. Back
Select a target_cs server: 2

Source servers available to be configured for remote DR
-------------------------------------------------------
1. server_2:source_cs [ remote standby is server_2:target_cs ]
2. server_3:source_cs [ remote standby is server_3:target_cs ]
v. Verify standby server configuration
q. Quit initialization process
c. Continue initialization
Select a source_cs server: c

Standby configuration validated OK

Enter user information for managing remote site source_cs
Username: dradmin
Password: *******
Retype your response to validate
Password: *******

Active/Active configuration
Initializing (source_cs-->target_cs)
Do you wish to continue? [yes or no] yes
Updating MirrorView configuration cache
Setting up server_3 on source_cs
Setting up server_2 on source_cs
Creating user account dradmin
Setting acl for server_3 on target_cs
Setting acl for server_2 on target_cs
Updating the Celerra domain information
Creating device group mviewgroup on source_cs
done

EXAMPLE #2
----------
To get information about a source MirrorView configuration (for example, on new_york configured as active/passive), type:

# /nas/sbin/nas_mview -info

***** Device Group Configuration *****
name              = mviewgroup
description       =
uid               = 50:6:1:60:B0:60:26:BC:0:0:0:0:0:0:0:0
state             = Consistent
role              = Primary
condition         = Active
recovery policy   = Automatic
number of mirrors = 16
mode              = SYNC
owner             = 0
mirrored disks    = root_disk,root_ldisk,d5,d8,d10,d11,d24,d25,d26,d27,d29,d30,d31,d32,d33,d39,
local clarid      = APM00053001552
remote clarid     = APM00053001549
mirror direction  = local -> remote

***** Servers configured with RDFstandby *****
id         = 1
name       = server_2
acl        = 1000, owner=nasadmin, ID=201
type       = nas
slot       = 2
member_of  =
standby    = server_3, policy=auto
RDFstandby = slot=2
status     :
  defined  = enabled
  actual   = online, active

id         = 2
name       = server_3
acl        = 1000, owner=nasadmin, ID=201
type       = standby
slot       = 3
member_of  =
standbyfor = server_2
RDFstandby = slot=3
status     :
  defined  = enabled
  actual   = online, ready

***** Servers configured as standby *****
No servers configured as standby

Where:

Value              Definition
-----              ----------
Device group configuration:
name               Name of the consistency (device) group.
description        Brief description of the device group.
uid                UID assigned, based on the system.
state              State of the device group (for example, Consistent, Synchronized, Out-of-Sync, Synchronizing, Scrambled, Empty, Incomplete, or Local Only).
role               Whether the current system is the Primary (source) or Secondary (destination) for this group.
condition          Whether the group is functioning (Active), Inactive, Admin Fractured (suspended), Waiting on Sync, System Fractured (which indicates link down), or Unknown.
recovery policy    Type of recovery policy (Automatic is the default and recommended value for the group during storage system configuration; if Manual is set, you must use -resume after a link down failure).
number of mirrors  Number of mirrors in the group.
mode               MirrorView mode (always SYNC in this release).
owner              ACL ID assigned (0 indicates no control). nas_acl provides information.
mirrored disks     Comma-separated list of disks that are mirrored.
local clarid       APM number of the local VNX for block storage array.
remote clarid      APM number of the remote VNX for block storage array.
mirror direction   On the primary system, local to remote; on the destination system, local from remote.

Servers configured with RDFstandby / Servers configured as standby:
id                 Server ID.
name               Server name.
acl                ACL value and owner.
type               Server type (for example, nas or standby).
slot               Slot number for this Data Mover.
member_of          If applicable, shows membership information.
standby            If this Data Mover is configured with local standbys, the server that is the local standby and any policy information.
RDFstandby         If this Data Mover is configured with a remote RDF standby, the slot number of the destination Data Mover that serves as the RDF standby.
standbyfor         If this Data Mover is also configured as a local standby, the server numbers for which it is a local standby.
status             Indicates whether the Data Mover is defined and online/ready.

EXAMPLE #3
----------
To activate a failover, log in to the destination Control Station using the dradmin account, su to root, and type:

# /nas/sbin/nas_mview -activate
Sync with CLARiiON backend ...... done
Validating mirror group configuration ...... done
Is source site source_cs ready for complete shut down (power OFF)? [yes or no] yes
Contacting source site source_cs, please wait...
done
Shutting down remote site source_cs ...................................... done
Sync with CLARiiON backend ...... done
STARTING an MV FAILOVER operation.
Device group: mviewgroup ............ done
The MV FAILOVER operation SUCCEEDED.
Failing over Devices ... done
Adding NBS access for server_2 ........ done
Adding NBS access for server_3 ........ done
Activating the target environment ... done
server_2 : going offline
rdf : going active
replace in progress ...done
failover activity complete
server_3 : going offline
rdf : going active
replace in progress ...done
failover activity complete
commit in progress (not interruptible)...done
commit in progress (not interruptible)...done
commit in progress (not interruptible)...done
commit in progress (not interruptible)...done
done

EXAMPLE #4
----------
To restore, log in to the destination Control Station using the dradmin account, as root user, and type:

# /nas/sbin/nas_mview -restore
Sync with CLARiiON backend ...... done
Validating mirror group configuration ...... done
Contacting source site source_cs, please wait... done
Running restore requires shutting down source site source_cs.
Do you wish to continue? [yes or no] yes
Shutting down remote site source_cs ....... done
Is source site source_cs ready for storage restoration ? [yes or no] yes
Sync with CLARiiON backend ...... done
STARTING an MV RESUME operation.
Device group: mviewgroup ............ done
The MV RESUME operation SUCCEEDED.
Percent synchronized: 100
Updating device group ... done
Is source site ready for network restoration ? [yes or no] yes
Restoring servers ...... done
Waiting for servers to reboot ...... done
Removing NBS access for server_2 .. done
Removing NBS access for server_3 .. done
Waiting for device group ready to failback .... done
Sync with CLARiiON backend ...... done
STARTING an MV FAILBACK operation.
Device group: mviewgroup ............ done
The MV FAILBACK operation SUCCEEDED.
Restoring remote site source_cs ...... failed
Error 5008: -1:Cannot restore source_cs. Please run restore on site source_cs.

Then on the source Control Station, as the root user, type:

# /nasmcd/sbin/nas_mview -restore
Stopping NAS services. Please wait...
Powering on servers ( please wait ) ...... done
Sync with CLARiiON backend ...... done
STARTING an MV SUSPEND operation.
Device group: mviewgroup ............ done
The MV SUSPEND operation SUCCEEDED.
server_2 : going standby
rdf : going active
replace in progress ...done
failover activity complete
server_3 : going standby
rdf : going active
replace in progress ...done
failover activity complete
commit in progress (not interruptible)...done
commit in progress (not interruptible)...done
Sync with CLARiiON backend ...... done
STARTING an MV RESUME operation.
Device group: mviewgroup ............ done
The MV RESUME operation SUCCEEDED.
Restarting NAS services ...... done
commit in progress (not interruptible)...done
commit in progress (not interruptible)...done
done

-----------------------------------------------------------------
Last modified: May 11, 2011 11:25 am.
nas_pool

Manages the user-defined and system-defined storage pools for the system.

SYNOPSIS
--------
nas_pool -list | -info {
storage system.

-create
Creates a user-defined storage pool.

[-name
Specifies the stripe size for user pool creation by size. The -stripe_size option works only when both the -size and -template options are specified. It overrides the stripe size attribute of the specified system pool template.

-modify {
em, the is_greedy attribute is ignored unless there is not enough free space on the existing volumes that the file system is using to meet the requested extension size. -delete {
VNX for block supports the following traditional system-defined storage pools: clar_r1, clar_r5_performance, clar_r5_economy, clar_r6, clarata_r3, clarata_r6, clarata_r10, clarata_archive, cm_r1, cm_r5_performance, cm_r5_economy, cm_r6, cmata_r3, cmata_archive, cmata_r6, cmata_r10, clarsas_archive, clarsas_r6, clarsas_r10, clarefd_r5, clarefd_r10, cmsas_archive, cmsas_r6, cmsas_r10, and cmefd_r5.

A mapped pool was formerly called a thin or virtual pool.

Disk types when using VNX for block are CLSTD, CLEFD, CLATA, MIXED (indicates that tiers used in the pool contain multiple disk types), Performance, Capacity, and Extreme_performance; for VNX for block involving mirrored disks they are CMEFD, CMSTD, CMATA, Mirrored_mixed, Mirrored_performance, Mirrored_capacity, and Mirrored_extreme_performance.

VNX with a Symmetrix storage system supports the following system-defined storage pools: symm_std, symm_std_rdf_src, symm_ata, symm_ata_rdf_src, symm_ata_rdf_tgt, symm_std_rdf_tgt, symm_efd, symm_fts, symm_fts_rdf_tgt, and symm_fts_rdf_src.

For user-defined storage pools, the difference in output is in the disk type. Disk types when using a Symmetrix are STD, R1STD, R2STD, BCV, R1BCV, R2BCV, ATA, R1ATA, R2ATA, BCVA, R1BCA, R2BCA, EFD, FTS, R1FTS, R2FTS, R1BCF, R2BCF, BCVF, BCVMIXED, R1MIXED, R2MIXED, R1BCVMIXED, and R2BCVMIXED.
EXAMPLE #1
----------
To create a storage pool named marketing, with a description, disk members d12 and d13, and the default slice flag set to y, type:

$ nas_pool -create -name marketing -description Storage Pool -volumes d12,d13 -default_slice_flag y
id                 = 20
name               = marketing
description        = Storage Pool
acl                = 0
in_use             = False
clients            =
members            = d12,d13
storage_system(s)  = FNM00105000212
default_slice_flag = True
is_user_defined    = True
thin               = False
disk_type          = CLSTD
server_visibility  = server_2,server_3,server_4,server_5
is_greedy          = False
template_pool      = N/A
num_stripe_members = N/A
stripe_size        = N/A

Where:

Value               Definition
-----               ----------
id                  ID of the storage pool.
name                Name of the storage pool.
description         Comment assigned to the storage pool.
acl                 Access control level value assigned to the storage pool.
in_use              Whether the storage pool is being used by a file system.
clients             File systems using the storage pool.
members             Volumes used by the storage pool.
storage_systems(s) Storage systems used by the storage pool. default_slice_flag Allows slices from the storage pool. is_user_defined User-defined as opposed to system-defined. thin Indicates whether thin provisioning is enabled or disabled. disk_type Type of disk contingent on the storage system attached. CLSTD, CLATA, CMSTD, CLEFD, CMEFD, CMATA, MIXED (indicates tiers used in the pool contain multiple disk types), Performance, Capacity, Extreme_performance, Mirrored_m ixed, Mirrored_performance, Mirrored_capacity, and Mirrored_extreme_ performance are for VNX for block, and STD, BCV, R1BCV, R2BCV, R1STD, R2ST D, ATA, R1ATA, R2ATA, BCVA, R1BCA, R2BCA, EFD, BCVMIXED, R1MIXED, R2MIXED, R1BCVMIXED, and R2BCVMIXED are for Symmetrix. server_visibility Storage pool is visible to the physical Data Movers specified. is_greedy Indicates whether the system-defined storage pool will use new member volumes as needed. template_pool System pool template used to create the user pool. Only applicable to user pools created by size or if the last member volume is a stripe or both. num_stripe_members Number of stripe members used to create the user pool. Applica ble to system pools and user pools created by size or if the last mem ber volume is a stripe or both. stripe_size Stripe size used to create the user pool. Applicable to system pools and user pools created by size or if the last member vol ume is a stripe or both. 
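The nas_pool examples print one "name = value" pair per line. When scripting around this output, a small parser is often enough; the following is a hypothetical helper (the CLI itself offers no documented machine-readable mode), splitting on the first "=" only so values that contain "=" survive:

```python
# Sketch: parse the "name = value" output that nas_pool (and similar
# nas_* commands) print into a dictionary. Hypothetical scripting aid.

def parse_nas_output(text: str) -> dict:
    """Split each 'key = value' line on the first '=' only."""
    result = {}
    for line in text.splitlines():
        if "=" not in line:
            continue  # skip headings and blank lines
        key, _, value = line.partition("=")
        result[key.strip()] = value.strip()
    return result

# Abbreviated output from EXAMPLE #1 above:
sample = """\
id                 = 20
name               = marketing
in_use             = False
members            = d12,d13
"""
info = parse_nas_output(sample)
```

For example, `info["members"].split(",")` then yields the individual member volumes.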
EXAMPLE #2
----------
To change the description for the marketing storage pool to include a descriptive comment, type:

$ nas_pool -modify marketing -description Marketing Storage Pool

id                 = 20
name               = marketing
description        = Marketing Storage Pool
acl                = 0
in_use             = False
clients            =
members            = d12,d13
storage_system(s)  = FNM00105000212
default_slice_flag = True
is_user_defined    = True
thin               = False
disk_type          = CLSTD
server_visibility  = server_2,server_3,server_4,server_5
is_greedy          = False
template_pool      = N/A
num_stripe_members = N/A
stripe_size        = N/A

EXAMPLE #1 provides a description of command output.

EXAMPLE #3
----------
To view the size information for the FP1 mapped pool, type:

$ nas_pool -size FP1
id           = 40
name         = FP1
used_mb      = 0
avail_mb     = 0
total_mb     = 0
potential_mb = 2047

Where:

Value         Definition
-----         ----------
used_mb       Space in use by the storage pool specified.
avail_mb      Unused space still available in the storage pool.
total_mb      Total space in the storage pool (total of used and unused).
potential_mb  Available space that can be added to the storage pool.

Note: Each of the options used with the nas_pool -size command acts as a filter on the output. For example, if you specify a Data Mover, the output reflects only the space to which the specified Data Mover has visibility. Physical used_mb, Physical avail_mb, and Physical total_mb are applicable for system-defined virtual AVM pools only.

EXAMPLE #4
----------
To view the size information for the TP1 mapped pool, which contains only virtual LUNs, type:

$ nas_pool -size TP1

id           = 40
name         = TP1
used_mb      = 0
avail_mb     = 0
total_mb     = 0
potential_mb = 2047
Physical storage usage in tp1 on FCNTR074200038:
used_mb      = 0
avail_mb     = 20470

Where:

Value              Definition
-----              ----------
Physical used_mb   Used physical size of a storage system mapped pool in MB (some may be used by non-VNX hosts).
Physical avail_mb  Available physical size of a storage system mapped pool in MB.

Note: Physical used_mb and Physical avail_mb are applicable only for system-defined AVM pools that contain virtual LUNs.

EXAMPLE #5
----------
For a VNX system, to change the -is_greedy and -is_dynamic options for the system-defined clar_r5_performance storage pool, type:

$ nas_pool -modify clar_r5_performance -is_dynamic n -is_greedy y

id                 = 3
name               = clar_r5_performance
description        = CLARiiON RAID5 4plus1
acl                = 421
in_use             = False
clients            =
members            = v120
storage_system(s)  =
default_slice_flag = True
is_user_defined    = False
thin               = False
disk_type          = CLSTD
server_visibility  = server_2,server_3,server_4,server_5
volume_profile     = clar_r5_performance_vp
is_dynamic         = False
is_greedy          = True
num_stripe_members = 4
stripe_size        = 32768

EXAMPLE #1 provides a description of command output.

EXAMPLE #6
----------
For VNX for file with a Symmetrix system, to change the -is_greedy and -is_dynamic options for the system-defined symm_std storage pool, type:

$ nas_pool -modify symm_std -is_dynamic y -is_greedy y

id                 = 1
name               = symm_std
description        = Symmetrix STD
acl                = 1421, owner=nasadmin, ID=201
in_use             = True
clients            = ufs3
members            = v169,v171
default_slice_flag = False
is_user_defined    = False
thin               = False
disk_type          = STD
compressed         = True
server_visibility  = server_2,server_3,server_4,server_5
volume_profile     = symm_std_vp
is_dynamic         = True
is_greedy          = True
num_stripe_members = 8
stripe_size        = 32768

Where:

Value               Definition
-----               ----------
id                  ID of the storage pool.
name                Name of the storage pool.
description         Comment assigned to the storage pool.
acl                 Access control level value assigned to the storage pool.
in_use              Whether the storage pool is being used by a file system.
clients             File systems using the storage pool.
members             Disks used by the storage pool.
default_slice_flag  Allows slices from the storage pool.
is_user_defined     User-defined as opposed to system-defined.
thin                Indicates whether thin provisioning is enabled or disabled.
disk_type           Contingent on the storage system attached.
compressed          For VNX with a Symmetrix backend, indicates whether data is compressed. Values are: True, False, Mixed (indicates some of the LUNs, but not all, are compressed).
server_visibility   Storage pool is visible to the physical Data Movers specified.
volume_profile      Volume profile used.
is_dynamic          Whether the system-defined storage pool can add or remove volumes.
is_greedy           Indicates whether the system-defined storage pool will use new member volumes as needed.
template_pool       System pool template used to create the user pool. Only applicable to user pools created by size or if the last member volume is a stripe, or both.
num_stripe_members  Number of stripe members used to create the user pool. Applicable to system pools, and to user pools created by size or if the last member volume is a stripe, or both.
stripe_size         Stripe size used to create the user pool. Applicable to system pools, and to user pools created by size or if the last member volume is a stripe, or both.

EXAMPLE #7
----------
To change the -is_greedy option for the user-defined user_pool storage pool, type:

$ nas_pool -modify user_pool -is_greedy y

id                 = 58
name               = user_pool
description        =
acl                = 0
in_use             = False
clients            =
members            = d21,d22,d23,d24
storage_system(s)  = FNM00105000212
default_slice_flag = True
is_user_defined    = True
thin               = False
disk_type          = CLSTD
server_visibility  = server_2
is_greedy          = True
template_pool      = N/A
num_stripe_members = N/A
stripe_size        = N/A

EXAMPLE #1 provides a description of command output.

EXAMPLE #8
----------
To add the volumes d7 and d8 to the marketing storage pool, type:

$ nas_pool -xtend marketing -volumes d7,d8

id                 = 20
name               = marketing
description        = Marketing Storage Pool
acl                = 0
in_use             = False
clients            =
members            = d12,d13,d7,d8
default_slice_flag = True
is_user_defined    = True
thin               = True
disk_type          = CLSTD
server_visibility  = server_2,server_3,server_4,server_5
template_pool      = N/A
num_stripe_members = N/A
stripe_size        = N/A

EXAMPLE #1 provides a description of command output.

EXAMPLE #9
----------
For a VNX system, to extend the system-defined storage pool
by a specified size with a specified system, type:

$ nas_pool -xtend clar_r5_performance -size 128M -storage APM00042000818

id                 = 3
name               = clar_r5_performance
description        = CLARiiON RAID5 4plus1
acl                = 1421, owner=nasadmin, ID=201
in_use             = False
clients            =
members            = v120
default_slice_flag = True
is_user_defined    = False
thin               = False
disk_type          = CLSTD
server_visibility  = server_2,server_3,server_4,server_5
volume_profile     = clar_r5_performance_vp
is_dynamic         = False
is_greedy          = True
num_stripe_members = 4
stripe_size        = 32768

EXAMPLE #1 provides a description of command output.

EXAMPLE #10
-----------
For a VNX system, to remove d7 and d8 from the marketing storage pool, type:

$ nas_pool -shrink marketing -volumes d7,d8

id                 = 20
name               = marketing
description        = Marketing Storage Pool
acl                = 0
in_use             = False
clients            =
members            = d12,d13
default_slice_flag = True
is_user_defined    = True
thin               = True
disk_type          = CLSTD
server_visibility  = server_2,server_3,server_4,server_5
template_pool      = N/A
num_stripe_members = N/A
stripe_size        = N/A

EXAMPLE #1 provides a description of command output.

EXAMPLE #11
-----------
To list the storage pools, type:

$ nas_pool -list
id  inuse acl  name                 storage_system
2   n     421  clar_r1              N/A
3   n     421  clar_r5_performance  FCNTR074200038
4   n     421  clar_r5_economy      N/A
10  n     421  clarata_archive      FCNTR074200038
11  n     421  clarata_r3           N/A
20  n     0    marketing            FCNTR074200038
40  y     0    TP1                  FCNTR074200038
41  y     0    FP1                  FCNTR074200038

Where:

Value           Definition
-----           ----------
id              ID of the storage pool.
inuse           Whether the storage pool is being used by a file system.
acl             Access control level value assigned to the storage pool.
name            Name of the storage pool.
storage_system  Name of the storage system where the storage pool resides.

EXAMPLE #12
-----------
To display information about the user-defined storage pool called marketing, type:

$ nas_pool -info marketing

id                 = 20
name               = marketing
description        = Marketing Storage Pool
acl                = 0
in_use             = False
clients            =
members            = d12,d13
storage_system(s)  =
default_slice_flag = True
is_user_defined    = True
thin               = True
disk_type          = CLSTD
server_visibility  = server_2,server_3,server_4,server_5
is_greedy          = False
template_pool      = N/A
num_stripe_members = N/A
stripe_size        = N/A

EXAMPLE #1 provides a description of command output.

EXAMPLE #13
-----------
To display information about the system-defined clar_r5_performance storage pool, type:

$ nas_pool -info clar_r5_performance

id                 = 3
name               = clar_r5_performance
description        = CLARiiON RAID5 4plus1
acl                = 1421, owner=nasadmin, ID=201
in_use             = False
clients            =
members            = v120
default_slice_flag = True
is_user_defined    = False
thin               = False
disk_type          = CLSTD
server_visibility  = server_2,server_3,server_4,server_5
volume_profile     = clar_r5_performance_vp
is_dynamic         = False
is_greedy          = True
num_stripe_members = 4
stripe_size        = 32768

EXAMPLE #1 provides a description of command output.

EXAMPLE #14
-----------
To display information about the system-defined engineer virtual pool, type:

$ nas_pool -info engineer
id                 = 40
name               = engineer
description        = Mapped Pool engineer on APM00084401666
acl                = 0
in_use             = True
clients            = DA_BE_VIRT_FS,vp_test,vp_test1,vp_test12,cvpfs1,cvpfs3
members            = v363
default_slice_flag = True
is_user_defined    = False
thin               = True
disk_type          = CLSTD
server_visibility  = server_2,server_3
volume_profile     = engineer_APM00084401666_vp
is_dynamic         = True
is_greedy          = True
num_stripe_members = N/A
stripe_size        = N/A

EXAMPLE #1 provides a description of command output.

EXAMPLE #15
-----------
To display information about the mapped storage pool called FP1 from a VNX for block, type:

$ nas_pool -info FP1

id                 = 40
name               = FP1
description        = Mapped Pool on FCNTR074200038
acl                = 0
in_use             = False
clients            =
members            =
default_slice_flag = True
is_user_defined    = False
thin               = True
tiering_policy     = Auto-tier
compressed         = False
mirrored           = False
disk_type          = Mixed
volume_profile     = FP1
is_dynamic         = True
is_greedy          = True

Where:

Value           Definition
-----           ----------
tiering_policy  Indicates the tiering policy in effect. If the initial tier and the tiering policy are the same, the values are: Auto-Tier, Highest Available Tier, Lowest Available Tier. If the initial tier and the tiering policy are not the same, the values are: Auto-Tier/No Data Movement, Highest Available Tier/No Data Movement, Lowest Available Tier/No Data Movement.
compressed      For VNX for block, indicates whether data is compressed. Values are: True, False, Mixed (indicates some of the LUNs, but not all, are compressed).
mirrored        Indicates whether the disk is mirrored.

EXAMPLE #16
-----------
To display information about the mapped storage pool called SG0 from a Symmetrix storage system, type:

$ nas_pool -info SG0

id                 = 40
name               = SG0
description        = Symmetrix Mapped Pool on 000192601245
acl                = 0
in_use             = False
clients            =
members            =
default_slice_flag = True
is_user_defined    = False
thin               = True
tiering_policy     = symm_policy_1
compressed         = True
frontend_io_quota  = maxiopersec 500,maxmbpersec 500
disk_type          = Mixed
volume_profile     = True
is_dynamic         = True
is_greedy          = N/A

Where:

Value               Definition
-----               ----------
id                  ID of the storage pool.
name                Name of the storage pool.
description         Comment assigned to the storage pool.
acl                 Access control level value assigned to the storage pool.
in_use              Whether the storage pool is being used by a file system.
clients             File systems using the storage pool.
members             Volumes used by the storage pool.
default_slice_flag  Allows slices from the storage pool.
is_user_defined     User-defined as opposed to system-defined.
thin                Indicates whether thin provisioning is enabled or disabled.
tiering_policy      Indicates the tiering policy in effect. If the initial tier and the tiering policy are the same, the values are: Auto-Tier, Highest Available Tier, Lowest Available Tier. If the initial tier and the tiering policy are not the same, the values are: Auto-Tier/No Data Movement, Highest Available Tier/No Data Movement, Lowest Available Tier/No Data Movement.
compressed          For VNX with a Symmetrix backend, indicates whether data is compressed. Values are: True, False, Mixed (indicates some of the LUNs, but not all, are compressed).
frontend_io_quota   For VNX with a Symmetrix backend, indicates whether a frontend I/O quota is configured on this mapped pool. Can also be False (indicates a frontend I/O quota is not configured on the mapped storage group in the Symmetrix backend).
disk_type           Type of disk contingent on the system attached. CLSTD, CLATA, CMSTD, CLEFD, CMEFD, CMATA, MIXED (indicates tiers used in the pool contain multiple disk types), Performance, Capacity, Extreme_performance, Mirrored_mixed, Mirrored_performance, Mirrored_capacity, and Mirrored_extreme_performance are for VNX for block; STD, BCV, R1BCV, R2BCV, R1STD, R2STD, ATA, R1ATA, R2ATA, BCVA, R1BCA, R2BCA, EFD, BCVMIXED, R1MIXED, R2MIXED, R1BCVMIXED, and R2BCVMIXED are for Symmetrix.
volume_profile      Volume profile used.
is_dynamic          Whether the system-defined storage pool can add or remove volumes.
is_greedy           Indicates whether the system-defined storage pool will use new member volumes as needed.

EXAMPLE #17
-----------
To delete the storage pool marketing and each of the storage pool member volumes recursively, type:

$ nas_pool -delete marketing -deep

id                 = 20
name               = marketing
description        = Marketing Storage Pool
acl                = 0
in_use             = False
clients            =
members            =
storage_system(s)  =
default_slice_flag = True
is_user_defined    = True
is_greedy          = True
thin               = True
template_pool      = N/A
num_stripe_members = N/A
stripe_size        = N/A

EXAMPLE #1 provides a description of command output.

-----------------------------------------------------------------
Last modified: January 11, 2013, 4:31 pm.
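Unlike the "name = value" output of -info, the `nas_pool -list` output (EXAMPLE #11) is columnar. A hypothetical scripting aid for that format, assuming the whitespace-separated layout shown in the example (pool names in this guide contain no spaces):

```python
# Sketch: turn `nas_pool -list` tabular output into a list of records.
# Hypothetical helper; the column layout is taken from EXAMPLE #11.

def parse_pool_list(text: str) -> list[dict]:
    """Zip each data row against the header row (id inuse acl name storage_system)."""
    lines = text.strip().splitlines()
    header = lines[0].split()
    return [dict(zip(header, line.split())) for line in lines[1:]]

# Abbreviated output from EXAMPLE #11 above:
sample = """\
id inuse acl name storage_system
2 n 421 clar_r1 N/A
20 n 0 marketing FCNTR074200038
"""
pools = parse_pool_list(sample)
```

For instance, `[p["name"] for p in pools if p["inuse"] == "y"]` would list only the in-use pools.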
nas_quotas
Manages quotas for mounted file systems.

SYNOPSIS
--------
nas_quotas -edit [-user|-group] {-mover
For a group, the ID can be a group ID (GID); however, if NIS or the local password file is available, a group name can also be used. Upon execution, a vi session (unless the EDITOR environment variable specifies otherwise) is opened to edit the quota configuration file. Changes to the file are applied when the vi session is saved and exited. [-proto
Specifies that the event for block hard limits has been sent. -edit -tree -fs
Turns on (enables) tree quotas so that quota tracking and hard-limit enforcement (if enabled) can occur. When enabling tree quotas, the directory must not exist; it is created in this tree-quota-enabling process. Note: The quota path length (which VNX for file calculates as including the file system mountpoint) must be less than 1024 bytes. If Unicode is enabled on the selected Data Mover, -path accepts any characters defined by the Unicode 3.0 standard. Otherwise, it accepts only ASCII characters. [-comment
SEE ALSO
--------
Using Quotas on VNX.

EXAMPLE #1
----------
To enable quotas for users and groups of a file system, type:

$ nas_quotas -on -both -fs ufs1
done

EXAMPLE #2
----------
To open a vi session to edit file system quotas on ufs1 for the specified user, 1000, type:

$ nas_quotas -edit -user -fs ufs1 1000
Userid : 1000
fs ufs1 blocks (soft = 2000, hard = 3000) inodes (soft = 0, hard = 0)
"/tmp/EdP.agGQuIz" 2L, 84C written
done

EXAMPLE #3
----------
To change the block limit and inode limit for a user without opening a vi session, type:

$ nas_quotas -edit -user -fs ufs1 -block 7000:6000 -inode 700:600 2000
done

EXAMPLE #4
----------
To view a report of user quotas for ufs1, type:

$ nas_quotas -report -user -fs ufs1
Report for user quotas on filesystem ufs1 mounted on /ufs1
+-----------+--------------------------------+------------------------------+
|User       |        Bytes Used (1K)         |            Files             |
+-----------+-------+-------+-------+--------+-------+------+------+--------+
|           | Used  | Soft  | Hard  |Timeleft| Used  | Soft | Hard |Timeleft|
+-----------+-------+-------+-------+--------+-------+------+------+--------+
|#1000      |   1328|   2000|   3000|        |     54|     0|     0|        |
|#2000      |   6992|   6000|   7000| 7.0days|     66|   600|   700|        |
|#5000      | 141592|      0|      0|        |    516|     0|     0|        |
+-----------+-------+-------+-------+--------+-------+------+------+--------+
done

EXAMPLE #5
----------
To select user 300 as the prototype user for ufs1, and assign other users the same limits, type:

$ nas_quotas -group -edit -fs ufs1 -proto 300 301 302 303
done

EXAMPLE #6
----------
To display the group quotas information for ufs1, type:

$ nas_quotas -report -group -fs ufs1
Report for group quotas on filesystem ufs1 mounted on /ufs1
+-----------+--------------------------------+------------------------------+
| Group     |        Bytes Used (1K)         |            Files             |
+-----------+-------+-------+-------+--------+-------+------+------+--------+
|           | Used  | Soft  | Hard  |Timeleft| Used  | Soft | Hard |Timeleft|
+-----------+-------+-------+-------+--------+-------+------+------+--------+
|#1         |    296|      0|      0|        |     12|     0|     0|        |
|#300       |   6992|   6000|   7000| 7.0days|     67|   600|   700|        |
|#301       |      0|   6000|   7000|        |      0|   600|   700|        |
|#302       |      0|   6000|   7000|        |      0|   600|   700|        |
|#303       |      0|   6000|   7000|        |      0|   600|   700|        |
|#32772     |  22296|      0|      0|        |    228|     0|     0|        |
+-----------+-------+-------+-------+--------+-------+------+------+--------+
done

EXAMPLE #7
----------
To edit the default quota configuration for server_2, type:

$ nas_quotas -edit -config -mover server_2
File System Quota Parameters: fs "ufs1"
Block Grace: (1.0 weeks)  Inode Grace: (1.0 weeks)
* Default Quota Limits:
User:  block (soft = 5000, hard = 8000) inodes (soft = 100, hard = 200)
Group: block (soft = 6000, hard = 9000) inodes (soft = 200, hard = 400)
Deny disk space to users exceeding quotas: (yes)
* Generate Events when:
Quota check starts: (no)
Quota check ends: (no)
soft quota crossed: (no)
hard quota crossed: (no)
fs "ufs2"
Block Grace: (1.0 weeks)  Inode Grace: (1.0 weeks)
* Default Quota Limits:
User:  block (soft = 0, hard = 0) inodes (soft = 0, hard = 0)
Group: block (soft = 0, hard = 0) inodes (soft = 0, hard = 0)
Deny disk space to users exceeding quotas: (yes)
* Generate Events when:
Quota check starts: (no)
Quota check ends: (no)
soft quota crossed: (no)
hard quota crossed: (no)
"/tmp/EdP.ahCPdAB" 25L, 948C written
done

EXAMPLE #8
----------
To open a vi session and edit the quota configuration for a file system, type:

$ nas_quotas -edit -config -fs ufs1
File System Quota Parameters: fs "ufs1"
Block Grace: (1.0 weeks)  Inode Grace: (1.0 weeks)
* Default Quota Limits:
User:  block (soft = 5000, hard = 8000) inodes (soft = 100, hard = 200)
Group: block (soft = 6000, hard = 9000) inodes (soft = 200, hard = 400)
Deny disk space to users exceeding quotas: (yes)
* Generate Events when:
Quota check starts: (no)
Quota check ends: (no)
soft quota crossed: (yes)
hard quota crossed: (yes)
"/tmp/EdP.a4slhyg" 13L, 499C written
done

EXAMPLE #9
----------
To view the quota configuration for the file system ufs1, type:

$ nas_quotas -report -config -fs ufs1
+--------------------------------------------------------+
| Quota parameters for filesystem ufs1 mounted on /ufs1:
+--------------------------------------------------------+
| Quota Policy: blocks
| User Quota: ON
| Group Quota: ON
| Block grace period: (1.0 weeks)
| Inode grace period: (1.0 weeks)
| Default USER quota limits:
|  Block Soft: ( 5000), Block Hard: ( 8000)
|  Inode Soft: (  100), Inode Hard: (  200)
| Default GROUP quota limits:
|  Block Soft: ( 6000), Block Hard: ( 9000)
|  Inode Soft: (  200), Inode Hard: (  400)
| Deny Disk Space to users exceeding quotas: YES
| Log an event when ...
|  Block hard limit reached/exceeded: YES
|  Block soft limit (warning level) crossed: YES
|  Quota check starts: NO
|  Quota Check ends: NO
+--------------------------------------------------------+
done

EXAMPLE #10
-----------
To enable tree quotas for ufs1, type:

$ nas_quotas -on -tree -fs ufs1 -path /tree1 -comment Tree #1
done

EXAMPLE #11
-----------
To create a tree quota with multibyte character support, type:

$ nas_quotas -on -tree -fs fs_22 -path /
$ nas_quotas -edit -tree -fs ufs1 -block 8000:6000 -inode 900:800 1
done

EXAMPLE #16
-----------
To edit tree quotas for ufs1 and apply the quota configuration of the prototype tree, type:

$ nas_quotas -edit -tree -fs ufs1 -proto 1 2
done

EXAMPLE #17
-----------
To display any currently active trees on a file system, type:

$ nas_quotas -report -tree -fs ufs1
Report for tree quotas on filesystem ufs1 mounted on /ufs1
+-----------+--------------------------------+------------------------------+
| Tree      |        Bytes Used (1K)         |            Files             |
+-----------+-------+-------+-------+--------+-------+------+------+--------+
|           | Used  | Soft  | Hard  |Timeleft| Used  | Soft | Hard |Timeleft|
+-----------+-------+-------+-------+--------+-------+------+------+--------+
|#1         |    384|   6000|   8000|        |      3|   800|   900|        |
|#2         |   7856|   6000|   8000| 7.0days|     60|   800|   900|        |
+-----------+-------+-------+-------+--------+-------+------+------+--------+
done

EXAMPLE #18
-----------
To disable tree quotas, type:

$ nas_quotas -tree -off -fs ufs1 -path /tree1
done

EXAMPLE #19
-----------
To enable quotas for users and groups on tree quota /tree3 of the file system ufs1, type:

$ nas_quotas -on -both -fs ufs1 -path /tree3
done

EXAMPLE #20
-----------
To open a vi session to edit file system quotas on quota tree /tree3 on ufs1 for the specified user, 1000, type:

$ nas_quotas -edit -user -fs ufs1 -path /tree3 1000
Userid : 1000
fs ufs1 tree "/tree3" blocks (soft = 4000, hard = 6000) inodes (soft = 30, hard = 50)
"/tmp/EdP.aMdtIQR" 2L, 100C written done EXAMPLE #21 ----------- To change the block limit and inode limit on quota tree, /tree3, on ufs1 for the specified user, 1000, without opening up a vi session, type: $ nas_quotas -edit -user -fs ufs1 -path /tree3 -block 6000:4000 -inode 300:200 1000 done EXAMPLE #22 ----------- To view a report of user quotas on tree quota, /tree3, for ufs1, type: $ nas_quotas -report -user -fs ufs1 -path /tree3 Report for user quotas on quota tree /tree3 on filesystem ufs1 mounted on /ufs1 +-----------+--------------------------------+--------------------------- ---+ |User | Bytes Used (1K) | Files | +-----------+-------+-------+-------+--------+-------+------+------+----- ---+ | | Used | Soft | Hard |Timeleft| Used | Soft | Hard |Timel eft| +-----------+-------+-------+-------+--------+-------+------+------+----- ---+ |#1000 | 2992| 4000| 6000| | 34| 200| 300| | |#32768 | 9824| 0| 0| | 28| 0| 0| | +-----------+-------+-------+-------+--------+-------+------+------+----- ---+ done EXAMPLE #23 ----------- To open a vi session and edit the quota configuration for tree quota, /tree3, on a file system, ufs1, type: $ nas_quotas -edit -config -fs ufs1 -path /tree3 Tree Quota Parameters: fs "ufs1" tree "/tree3" Block Grace: (1.0 weeks) Inode Grace: (1.0 weeks) * Default Quota Limits: User: block (soft = 8000, hard = 9000) inodes (soft = 200, hard= 300) Group: block (soft = 8000, hard = 9000) inodes (soft = 300, hard= 400) Deny disk space to users exceeding quotas: (yes) * Generate Events when: Quota check starts: (no) Quota check ends: (no) soft quota crossed: (yes) hard quota crossed: (yes) "/tmp/EdP.aDTOKeU" 14L, 508C written
done

EXAMPLE #24
-----------
To view the quota configuration for tree quota /tree3 on the file system ufs1, type:

$ nas_quotas -report -config -fs ufs1 -path /tree3
+-----------------------------------------------------------------+
| Quota parameters for tree quota /tree3 on filesystem ufs1 mounted
| on /ufs1:
+-----------------------------------------------------------------+
| Quota Policy: blocks
| User Quota: ON
| Group Quota: ON
| Block grace period: (1.0 weeks)
| Inode grace period: (1.0 weeks)
| Default USER quota limits:
|  Block Soft: ( 8000), Block Hard: ( 9000)
|  Inode Soft: (  200), Inode Hard: (  300)
| Default GROUP quota limits:
|  Block Soft: ( 8000), Block Hard: ( 9000)
|  Inode Soft: (  300), Inode Hard: (  400)
| Deny Disk Space to users exceeding quotas: YES
| Log an event when ...
|  Block hard limit reached/exceeded: YES
|  Block soft limit (warning level) crossed: YES
|  Quota check starts: NO
|  Quota Check ends: NO
+-----------------------------------------------------------------+
done

EXAMPLE #25
-----------
To disable user quotas and group quotas on tree quota /tree3, type:

$ nas_quotas -off -both -fs ufs1 -path /tree3
done

EXAMPLE #26
-----------
To disable group quotas for ufs1, type:

$ nas_quotas -off -group -fs ufs1
done

EXAMPLE #27
-----------
To clear all tree quotas for ufs1, type:

$ nas_quotas -clear -tree -fs ufs1
done

EXAMPLE #28
-----------
To clear quotas for users and groups of a Data Mover, type:

$ nas_quotas -clear -both -mover server_2
done

EXAMPLE #29
-----------
To start a tree quota check in quota tree /mktg-a/dir1 in file system ufs1
with the file system online, type:

$ nas_quotas -check -start -mode online -tree -fs ufs1 /mktg-a/dir1
done

EXAMPLE #30
-----------
To stop a tree quota check in file system ufs1, type:

$ nas_quotas -check -stop -fs ufs1
done

EXAMPLE #31
-----------
To view the status of a tree quota check in quota tree /mktg-a/dir1 in file system ufs1, type:

$ nas_quotas -check -status -tree -fs ufs1 -path /mktg-a/dir1
Tree quota check on filesystem ufs1 and path /mktg-a/dir1 is running and is 60% complete.
done

EXAMPLE #32
-----------
To list quota database limits for all file systems on a Data Mover, type:

$ nas_quotas -quotadb -info -mover server_2
Info 13421850365 : The quota limit on ufs0 is at 4TB. The upgrade to 256 TB is estimated to take 5 seconds. A total number of 1500 data blocks in the quota database will be converted at a speed of 300 blocks per second.
Info 13421850365 : The quota limit on ufs1 is at 4TB. The upgrade to 256 TB is estimated to take 5 seconds. A total number of 1500 data blocks in the quota database will be converted at a speed of 300 blocks per second.
Info 13421850365 : The quota limit on ufs2 is at 4TB. The upgrade to 256 TB is estimated to take 5 seconds. A total number of 1500 data blocks in the quota database will be converted at a speed of 300 blocks per second.
Info 13421850365 : The quota limit on ufs3 is at 4TB. The upgrade to 256 TB is estimated to take 5 seconds. A total number of 1500 data blocks in the quota database will be converted at a speed of 300 blocks per second.
Info 13421850366 : The quota limit on ufs4 is at 256 TB

EXAMPLE #33
-----------
To list quota database limits for file system ufs4, type:

$ nas_quotas -quotadb -info -fs ufs4
Info 13421850366 : The quota limit on ufs4 is at 256 TB

EXAMPLE #34
-----------
To upgrade all file systems on a Data Mover, in interactive mode, type:
$ nas_quotas -quotadb -upgrade -mover server_2
Info 13421850365 : The quota limit on ufs0 is at 4TB. The upgrade to 256 TB is estimated to take 5 seconds. A total number of 1500 data blocks in the quota database will be converted at a speed of 300 blocks per second.
Info 13421850365 : The quota limit on ufs1 is at 4TB. The upgrade to 256 TB is estimated to take 5 seconds. A total number of 1500 data blocks in the quota database will be converted at a speed of 300 blocks per second.
Info 13421850365 : The quota limit on ufs2 is at 4TB. The upgrade to 256 TB is estimated to take 5 seconds. A total number of 1500 data blocks in the quota database will be converted at a speed of 300 blocks per second.
Info 13421850365 : The quota limit on ufs3 is at 4TB. The upgrade to 256 TB is estimated to take 5 seconds. A total number of 1500 data blocks in the quota database will be converted at a speed of 300 blocks per second.
Info 13421850366 : The quota limit on ufs4 is at 256 TB
Warning 17716861297: The file systems specified in the list above will not be accessible during the quota database upgrade, and a file system's CIFS share and NFS export will also not be accessible during the upgrade. The file systems shown above are listed in the order that the quota database conversion is performed, one by one sequentially. The estimated time (shown above) needed to upgrade the quota database may change based on the file systems' quota configuration and I/O performance when the conversion is running.
Do you really want to upgrade the file system quota database now [Y/N]: Y
Info 13421850367 : quota db upgraded on ufs0
Info 13421850367 : quota db upgraded on ufs1
Info 13421850367 : quota db upgraded on ufs2
Error 13421850368 : Timeout occurred when upgrading quota db on ufs3. The quota db upgrade may still be in progress. Use the "-info" option to check status.
Info 13421850369 : quota db already upgraded on ufs4

EXAMPLE #35
-----------
To list quota database limits for file system ufs3 after an upgrade has timed out, type:

$ nas_quotas -quotadb -info -fs ufs3
Info 13421850370 : The quota limit on ufs3 is at 4TB. Upgrade is 48% complete.

EXAMPLE #36
-----------
To upgrade all file systems on a Data Mover, in non-interactive mode, type:

$ nas_quotas -quotadb -upgrade -Force -mover server_2
Info 13421850365 : The quota limit on ufs0 is at 4TB. The upgrade to 256 TB is estimated to take 5 seconds. A total number of 1500 data blocks in the quota database will be converted at a speed of 300 blocks per second.
Info 13421850365 : The quota limit on ufs1 is at 4TB. The upgrade to 256 TB is estimated to take 5 seconds.
A total number of 1500 data blocks in the quota database will be converted at a speed of 300 blocks per second.
Info 13421850365 : The quota limit on ufs2 is at 4TB. The upgrade to 256 TB is estimated to take 5 seconds. A total number of 1500 data blocks in the quota database will be converted at a speed of 300 blocks per second.
Info 13421850365 : The quota limit on ufs3 is at 4TB. The upgrade to 256 TB is estimated to take 5 seconds. A total number of 1500 data blocks in the quota database will be converted at a speed of 300 blocks per second.
Info 13421850366 : The quota limit on ufs4 is at 256 TB
Warning 17716861297: The file systems specified in the list above will not be accessible during the quota database upgrade, and a file system's CIFS share and NFS export will also not be accessible during the upgrade. The file systems shown above are listed in the order that the quota database conversion is performed, one by one sequentially. The estimated time (shown above) needed to upgrade the quota database may change based on the file systems' quota configuration and I/O performance when the conversion is running.
Info 13421850367 : quota db upgraded on ufs0
Info 13421850367 : quota db upgraded on ufs1
Info 13421850367 : quota db upgraded on ufs2
Error 13421850368 : Timeout occurred when upgrading quota db on ufs3. The quota db upgrade may still be in progress. Use the "-info" option to check status.
Info 13421850369 : quota db already upgraded on ufs4

EXAMPLE #37
-----------
To upgrade file system ufs3, in interactive mode, type:

$ nas_quotas -quotadb -upgrade -fs ufs3
Info 13421850365 : The quota limit on ufs3 is at 4TB. The upgrade to 256 TB is estimated to take 5 seconds. A total number of 1500 data blocks in the quota database will be converted at a speed of 300 blocks per second.
Warning 17716861297: The file systems specified in the list above will not be accessible during the quota database upgrade, and a file systems CIFS share and NFS export also will not be accessible during the upgrade. The file systems shown above are listed in the order that the quota database conversion is performed, one by one sequentially. The estimated time ( shown above ) needed to upgrade the quota database may change based on the file systems quota configuration and I/O performance when the conversion is running. Do you really want to upgrade the file system quota database now[Y/N]: Y Info 13421850367 : quota db upgraded on ufs3 done EXAMPLE #38 ----------- To upgrade file system ufs3, in non-interactive mode, type: $ nas_quotas -quotadb -upgrade -Force -fs ufs3 Info 13421850365 : The quota limit on ufs3 is at 4TB. The upgrade to 256 TB is estimated to take 5 seconds. A total number of 1500 data blocks in the quota database will be converted at a speed of 300 blocks per second. Warning 17716861297: The file systems specified in the list above will not be accessible during the quota database upgrade, and a file systems CIFS share and NFS export also will not be accessible during the upgrade. The file systems shown above are listed in the order that the quota database conversion
is performed, one by one sequentially. The estimated time ( shown above ) needed to upgrade the quota database may change based on the file systems quota configuration and I/O performance when the conversion is running. Info 13421850367 : quota db upgraded on ufs3 done ----------------------------------------------------------------- Last Modified: May 12, 2011 3:15 pm
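The time estimates in the Info 13421850365 messages above follow directly from the reported numbers: total data blocks in the quota database divided by the conversion speed (1500 blocks at 300 blocks per second gives the 5-second estimate). A minimal sketch of that arithmetic and of the upgrade-then-poll workflow shown in the examples; the helper names are hypothetical, and since nas_quotas runs only on a Control Station, the workflow helper just echoes the documented commands:

```shell
#!/bin/sh
# Hypothetical helper: mirrors the arithmetic behind message 13421850365
# (data blocks in the quota database / conversion speed in blocks per second).
estimate_upgrade_seconds() {
  blocks=$1   # e.g. 1500 in the examples above
  rate=$2     # e.g. 300 blocks per second
  echo $(( blocks / rate ))
}

# Hypothetical helper: prints the documented command pair - a forced upgrade,
# then the -info query used to check progress if the upgrade times out.
# It only echoes the commands; nas_quotas itself exists on a Control Station.
upgrade_and_check() {
  fs=$1
  echo "nas_quotas -quotadb -upgrade -Force -fs $fs"
  echo "nas_quotas -quotadb -info -fs $fs"
}

estimate_upgrade_seconds 1500 300   # prints 5, matching the examples
upgrade_and_check ufs3
```

As the warning in the examples notes, the file system and its CIFS shares and NFS exports are inaccessible while the conversion runs, so the estimate is also a rough measure of client-facing downtime.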
nas_rdf

Facilitates communication between two VNX systems. Its primary use is to manage VNX for file systems and define the relationships needed for disaster recovery in an SRDF environment.

SYNOPSIS
--------
nas_rdf
-init
| -activate [-reverse][-skip_rdf_operations [-skip_SiteA_shutdown]][-nocheck]
| -restore [-skip_rdf_operations [-skip_SiteA_shutdown]][-nocheck]
| -check {-all|
command. The SiteA shutdown (Data Mover shutdown and Control Station reboot) is always skipped when this option is specified. However, a Control Station reboot request is sent to SiteA at the end of the activate operation when the backend RDF status is not "Split", to clean up old processes. (The "Split" status means SiteA is read/write and the production site is up and running.) For failover from SiteB to SiteC, or from SiteC to SiteB, the Control Station reboot is sent to SiteB or SiteC. SiteB/SiteC must be read/write before starting this operation. The -activate -skip_rdf_operations -skip_SiteA_shutdown combination performs the same operation.

-activate -skip_SiteA_shutdown
Skips the SiteA shutdown (Data Mover shutdown and Control Station reboot) operation. However, the SiteA shutdown request is sent to SiteA at the end of the activate operation. This option is mainly used to minimize failover time.

-restore -skip_rdf_operations
Skips RDF backend operations such as symrdf failback. This option also performs only the SiteB/SiteC restore operations and skips the SiteA restore operation. The SiteA restore operation must be done separately at SiteA after the SiteB/SiteC restore completes. SiteB/SiteC must be read/write before starting this operation.

-restore -skip_rdf_operations -skip_SiteA_shutdown
Skips RDF backend operations such as symrdf failback, and also skips the SiteA shutdown operation. This combination is mainly used to fail over from SiteB to SiteC or from SiteC to SiteB.

-restore
Restores a source VNX after a failover. The -restore option is initially executed on the destination VNX. The data on each destination volume is copied to the corresponding volume on the source VNX. On the destination VNX, services on each SRDF standby Data Mover are stopped. (NFS clients connected to these Data Movers see a "server unavailable" message; CIFS client connections time out.) Each volume on the source VNX is set as read/write, and each mirrored volume on the destination VNX is set as read-only.
Finally, nas_rdf -restore can be remotely executed on the source VNX to restore the original configuration. Each primary Data Mover reacquires its IP and MAC addresses, file systems, and export tables. When the -restore option is executed, an automatic, internal SRDF health check is performed before restoring source and destination VNX systems. The -nocheck option allows you to skip this health check. -check { -all|
Using SRDF/S with VNX for Disaster Recovery, Using SRDF/S with VNX, and nas_cel. EXAMPLE #1 ---------- To start the initialization process on a destination VNX in an active/passive SRDF/S configuration, as a nasadmin su to root user, type: # /nas/sbin/nas_rdf -init Discover local storage devices ... Discovering storage on eng564168 (may take several minutes) done Start R2 dos client ... done Start R2 nas client ... done Contact CS_A ... is alive Create a new login account to manage the RDF site CELERRA Caution: For an active-active configuration, avoid using the same UID that was used for the rdfadmin account on the other side. New login username and UID (example: rdfadmin:500): rdfadmin:600 done New UNIX password: BAD PASSWORD: it is based on a dictionary word Retype new UNIX password: Changing password for user rdfadmin. passwd: all authentication tokens updated successfully. done operation in progress (not interruptible)... id = 1 name = CS_A owner = 600 device = /dev/ndj1 channel = rdev=/dev/ndg, off_MB=391; wdev=/dev/nda, off_MB=391 net_path = 10.245.64.169 celerra_id = 0001949004310028 passphrase = nasadmin Discover remote storage devices ...done The following servers have been detected on the system (CS_B): id type acl slot groupID state name 1 4 2000 2 0 server_2 2 1 0 3 0 server_3 Please enter the id(s) of the server(s) you wish to reserve (separated by spaces) or "none" for no servers. Select server(s) to use as standby: 1 operation in progress (not interruptible)... id = 1 name = CS_A owner = 600 device = /dev/ndj1 channel = rdev=/dev/ndg, off_MB=391; wdev=/dev/nda, off_MB=391 net_path = 10.245.64.169 celerra_id = 0001949004310028 passphrase = nasadmin EXAMPLE #2 ---------- To initiate an SRDF failover from the source VNX to the destination, as a rdfadmin su to root, type: # /nas/sbin/nas_rdf -activate Is remote site CELERRA completely shut down (power OFF)? Do you wish to continue? 
[yes or no]: yes Successfully pinged (Remotely) Symmetrix ID: 000187430809 Successfully pinged (Remotely) Symmetrix ID: 000190100559 Successfully pinged (Remotely) Symmetrix ID: 000190100582 Write Disable device(s) on SA at source (R1)..............Done.
Suspend RDF link(s).......................................Done. Read/Write Enable device(s) on RA at target (R2)..........Done. Waiting for nbs clients to die ... done Waiting for nbs clients to start ... done fsck 1.35 (28-Feb-2004) /dev/ndj1: recovering journal /dev/ndj1: clean, 13780/231360 files, 233674/461860 blocks Waiting for nbs clients to die ... done Waiting for nbs clients to start ... done id type acl slot groupID state name 1 1 1000 2 0 server_2 2 4 1000 3 0 server_3 3 1 1000 4 0 server_4 4 4 1000 5 0 server_5 server_2 : server_2 : going offline rdf : going active replace in progress ...done failover activity complete commit in progress (not interruptible)...done done server_3 : server_3 : going offline rdf : going active replace in progress ...done failover activity complete commit in progress (not interruptible)...done done server_4 : Error 4003: server_4 : standby is not configured server_5 : Error 4003: server_5 : standby is not configured Suspend RDF link(s).......................................Done. Merge device track tables between source and target.......Started. Device: 045A in (0557,005)............................... Merged. Merge device track tables between source and target.......Done. Resume RDF link(s)........................................Started. Resume RDF link(s)........................................Done. EXAMPLE #3 ---------- To initiate an SRDF failover from the source VNX to the destination, without the SRDF health check, as rdfadmin su to root user, type: # /nas/sbin/nas_rdf -activate -nocheck Skipping SRDF health check .... Is remote site CELERRA completely shut down (power OFF)? Do you wish to continue? [yes or no]: yes Successfully pinged (Remotely) Symmetrix ID: 000187430809 Successfully pinged (Remotely) Symmetrix ID: 000190100559 Successfully pinged (Remotely) Symmetrix ID: 000190100582 Write Disable device(s) on SA at source (R1)..............Done. 
Suspend RDF link(s).......................................Done. Read/Write Enable device(s) on RA at target (R2)..........Done. Waiting for nbs clients to die ... done Waiting for nbs clients to start ... done fsck 1.35 (28-Feb-2004) /dev/ndj1: recovering journal /dev/ndj1: clean, 13780/231360 files, 233674/461860 blocks Waiting for nbs clients to die ... done Waiting for nbs clients to start ... done id type acl slot groupID state name 1 1 1000 2 0 server_2 2 4 1000 3 0 server_3
3 1 1000 4 0 server_4 4 4 1000 5 0 server_5 server_2 : server_2 : going offline rdf : going active replace in progress ...done failover activity complete commit in progress (not interruptible)...done done server_3 : server_3 : going offline rdf : going active replace in progress ...done failover activity complete commit in progress (not interruptible)...done done server_4 : Error 4003: server_4 : standby is not configured server_5 : Error 4003: server_5 : standby is not configured Suspend RDF link(s).......................................Done. Merge device track tables between source and target.......Started. Device: 045A in (0557,005)............................... Merged. Merge device track tables between source and target.......Done. Resume RDF link(s)........................................Started. Resume RDF link(s)........................................Done. EXAMPLE #4 ---------- To initiate a Dynamic SRDF failover from the source VNX to the destination, as rdfadmin su to root user, type: #/nas/sbin/nas_rdf -activate -reverse Is remote site CELERRA completely shut down (power OFF)? Do you wish to continue? [yes or no]: yes Successfully pinged (Remotely) Symmetrix ID: 000280600118 Write Disable device(s) on SA at source (R1)...............Done. Suspend RDF link (s).......................................Done. Read/Write Enable device(s) on RA at target (r2)...........Done. 
fsck 1.35 (28-Feb-2004) /dev/sdj1: recovering journal Clearing orphaned inode 37188 (uid=0, gid=0, mode=0100644, size=0) /dev/sdj1: clean, 12860/219968 files, 194793/439797 blocks id type acl slot groupID state name 1 1 1000 2 0 server_2 2 4 1000 3 0 server_3 3 4 2000 4 0 server_4 4 4 2000 5 0 server_5 server_2 : server_2 : going offline rdf : going active replace in progress ...done failover activity complete commit in progress (not interruptible)...done done server_3 : server_3 : going offline rdf : going active replace in progress ...done failover activity complete commit in progress (not interruptible)...done done An RDF Swap Personality operation execution is in progress for device group 1R2_500_1. Please wait... Swap RDF Personality......................................Started. Swap RDF Personality......................................Done. The RDF Swap Personality operation successfully executed for
device group 1R2_500_1. An RDF Incremental Establish operation execution is in progress for device group 1R2_500_1. Please wait... Suspend RDF link(s).......................................Done. Resume RDF link(s)........................................Started. Merge device track tables between source and target.......Started. Devices: 0009-000B ...................................... Merged. Devices: 0032-0034 ...................................... Merged. Devices: 0035-0037 ...................................... Merged. Devices: 0038-003A ...................................... Merged. Devices: 003B-003D ...................................... Merged. Devices: 003E-0040 ...................................... Merged. Devices: 0041-0043 ...................................... Merged. Devices: 0044-0046 ...................................... Merged. Devices: 0047-0049 ...................................... Merged. Merge device track tables between source and target.......Done. Resume RDF link(s)........................................Done. The RDF Incremental Establish operation successfully initiated for device group 1R2_500_1. EXAMPLE #5 ---------- To restore a source VNX after failover, as rdfadmin su to root user, type: # /nas/sbin/nas_rdf -restore Is remote site CELERRA ready for Storage restoration? Do you wish to continue? [yes or no]: yes Contact Joker_R1_CS0 ... is alive Restore will now reboot the source site control station. Do you wish to continue? [yes or no]: yes Device Group (DG) Name : 1R2_500_5 DGs Type : RDF2 DGs Symmetrix ID : 000190100557 Target (R2) View Source (R1) View MODES -------------------------------- ------------------------ ----- ------------ ST LI ST Standard A N A Logical T R1 Inv R2 Inv K T R1 Inv R2 Inv RDF Pair Device Dev E Tracks Tracks S Dev E Tracks Tracks MDA STATE -------------------------------- -- ------------------------ ----- ------------ DEV001 045A RW 10 0 RW 045A WD 0 0 S.. 
R1 Updated DEV002 045B RW 2054 0 NR 045B WD 0 0 S.. Failed Over DEV003 045C RW 0 0 NR 045C WD 0 0 S.. Failed Over DEV004 045D RW 0 0 NR 045D WD 0 0 S.. Failed Over DEV005 045E RW 1284 0 NR 045E WD 0 0 S.. Failed Over DEV006 045F RW 0 0 NR 045F WD 0 0 S.. Failed Over DEV007 0467 RW 0 0 NR 0467 WD 0 0 S.. Failed Over DEV008 0468 RW 2 0 NR 0468 WD 0 0 S.. Failed Over DEV009 0469 RW 0 0 NR 0469 WD 0 0 S.. Failed Over DEV010 046A RW 0 0 NR 046A WD 0 0 S.. Failed Over DEV011 046B RW 2 0 NR 046B WD 0 0 S.. Failed Over DEV012 046C RW 0 0 NR 046C WD 0 0 S.. Failed Over DEV013 046D RW 0 0 NR 046D WD 0 0 S.. Failed Over DEV014 046E RW 0 0 NR 046E WD 0 0 S.. Failed Over DEV015 046F RW 2 0 NR 046F WD 0 0 S.. Failed Over DEV016 0470 RW 0 0 NR 0470 WD 0 0 S.. Failed Over DEV017 0471 RW 2 0 NR 0471 WD 0 0 S.. Failed Over DEV018 0472 RW 0 0 NR 0472 WD 0 0 S.. Failed Over DEV019 0473 RW 0 0 NR 0473 WD 0 0 S.. Failed Over DEV020 0474 RW 0 0 NR 0474 WD 0 0 S.. Failed Over DEV021 0475 RW 0 0 NR 0475 WD 0 0 S.. Failed Over DEV022 0476 RW 0 0 NR 0476 WD 0 0 S.. Failed Over
DEV023 0477 RW 2 0 NR 0477 WD 0 0 S.. Failed Over DEV024 0478 RW 2 0 NR 0478 WD 0 0 S.. Failed Over DEV025 0479 RW 0 0 NR 0479 WD 0 0 S.. Failed Over DEV026 047A RW 0 0 NR 047A WD 0 0 S.. Failed Over DEV027 047B RW 0 0 NR 047B WD 0 0 S.. Failed Over DEV028 047C RW 0 0 NR 047C WD 0 0 S.. Failed Over DEV029 047D RW 0 0 NR 047D WD 0 0 S.. Failed Over DEV030 047E RW 0 0 NR 047E WD 0 0 S.. Failed Over DEV031 047F RW 0 0 NR 047F WD 0 0 S.. Failed Over DEV032 0480 RW 0 0 NR 0480 WD 0 0 S.. Failed Over DEV033 0481 RW 0 0 NR 0481 WD 0 0 S.. Failed Over DEV034 0482 RW 0 0 NR 0482 WD 0 0 S.. Failed Over DEV035 0483 RW 0 0 NR 0483 WD 0 0 S.. Failed Over DEV036 0484 RW 0 0 NR 0484 WD 0 0 S.. Failed Over DEV037 0485 RW 0 0 NR 0485 WD 0 0 S.. Failed Over DEV038 0486 RW 0 0 NR 0486 WD 0 0 S.. Failed Over DEV039 0487 RW 0 0 NR 0487 WD 0 0 S.. Failed Over DEV040 0488 RW 0 0 NR 0488 WD 0 0 S.. Failed Over DEV041 0489 RW 0 0 NR 0489 WD 0 0 S.. Failed Over DEV042 048A RW 0 0 NR 048A WD 0 0 S.. Failed Over DEV043 048B RW 0 0 NR 048B WD 0 0 S.. Failed Over DEV044 048C RW 0 0 NR 048C WD 0 0 S.. Failed Over DEV045 048D RW 0 0 NR 048D WD 0 0 S.. Failed Over DEV046 048E RW 0 0 NR 048E WD 0 0 S.. Failed Over DEV047 048F RW 2 0 NR 048F WD 0 0 S.. Failed Over DEV048 0490 RW 0 0 NR 0490 WD 0 0 S.. Failed Over DEV049 0491 RW 0 0 NR 0491 WD 0 0 S.. Failed Over DEV050 0492 RW 0 0 NR 0492 WD 0 0 S.. Failed Over DEV051 0493 RW 0 0 NR 0493 WD 0 0 S.. Failed Over DEV052 0494 RW 0 0 NR 0494 WD 0 0 S.. Failed Over DEV053 0495 RW 0 0 NR 0495 WD 0 0 S.. Failed Over DEV054 0496 RW 0 0 NR 0496 WD 0 0 S.. Failed Over DEV055 0497 RW 2 0 NR 0497 WD 0 0 S.. Failed Over DEV056 0498 RW 2 0 NR 0498 WD 0 0 S.. Failed Over DEV057 0499 RW 0 0 NR 0499 WD 0 0 S.. Failed Over DEV058 049A RW 0 0 NR 049A WD 0 0 S.. Failed Over DEV059 049B RW 0 0 NR 049B WD 0 0 S.. Failed Over DEV060 049C RW 0 0 NR 049C WD 0 0 S.. Failed Over DEV061 049D RW 0 0 NR 049D WD 0 0 S.. Failed Over DEV062 049E RW 0 0 NR 049E WD 0 0 S.. 
Failed Over DEV063 049F RW 0 0 NR 049F WD 0 0 S.. Failed Over DEV064 04A0 RW 0 0 NR 04A0 WD 0 0 S.. Failed Over DEV065 04A1 RW 0 0 NR 04A1 WD 0 0 S.. Failed Over DEV066 04A2 RW 0 0 NR 04A2 WD 0 0 S.. Failed Over DEV067 04A3 RW 0 0 NR 04A3 WD 0 0 S.. Failed Over DEV068 04A4 RW 0 0 NR 04A4 WD 0 0 S.. Failed Over DEV069 04A5 RW 0 0 NR 04A5 WD 0 0 S.. Failed Over DEV070 04A6 RW 0 0 NR 04A6 WD 0 0 S.. Failed Over Total -------- -------- -------- -------- Track(s) 3366 0 0 0 MB(s) 105.2 0.0 0.0 0.0 Legend for MODES: M(ode of Operation): A = Async, S = Sync, E = Semi-sync, C = Adaptive Copy D(omino) : X = Enabled, . = Disabled A(daptive Copy) : D = Disk Mode, W = WP Mode, . = ACp off Suspend RDF link(s).......................................Done. Merge device track tables between source and target.......Started. Devices: 045A-045F, 0467-0477 in (0557,005).............. Merged. Devices: 0478-0489 in (0557,005)......................... Merged. Devices: 048A-049B in (0557,005)......................... Merged. Devices: 049C-04A6 in (0557,005)......................... Merged. Merge device track tables between source and target.......Done. Resume RDF link(s)........................................Started. Resume RDF link(s)........................................Done. Is remote site CELERRA ready for Network restoration? Do you wish to continue? [yes or no]: yes server_2 : done server_3 : done server_4 : Error 4003: server_4 : standby is not configured server_5 :
Error 4003: server_5 : standby is not configured fsck 1.35 (28-Feb-2004) /dev/ndj1: clean, 13836/231360 files, 233729/461860 blocks Waiting for nbs clients to die ... done Waiting for nbs clients to start ... done Waiting for nbs clients to die ... done Waiting for nbs clients to start ... done Waiting for 1R2_500_5 access ...done Write Disable device(s) on RA at target (R2)..............Done. Suspend RDF link(s).......................................Done. Merge device track tables between source and target.......Started. Devices: 045A-045F, 0467-0477 in (0557,005).............. Merged. Devices: 0478-0489 in (0557,005)......................... Merged. Devices: 048A-049B in (0557,005)......................... Merged. Devices: 049C-04A6 in (0557,005)......................... Merged. Merge device track tables between source and target.......Done. Resume RDF link(s)........................................Started. Resume RDF link(s)........................................Done. Read/Write Enable device(s) on SA at source (R1)..........Done. Waiting for 1R2_500_5 sync ...done Starting restore on remote site CELERRA ... Waiting for nbs clients to start ... done Waiting for nbs clients to start ... done Suspend RDF link(s).......................................Done. server_2 : server_2 : going standby rdf : going active replace in progress ...done failover activity complete commit in progress (not interruptible)...done done server_3 : server_3 : going standby rdf : going active replace in progress ...done failover activity complete commit in progress (not interruptible)...done done server_4 : Error 4003: server_4 : standby is not configured server_5 : Error 4003: server_5 : standby is not configured Resume RDF link(s)........................................Started. Resume RDF link(s)........................................Done. If the RDF device groups were setup to operate in ASYNCHRONOUS ( SRDF/A ) mode, now would be a good time to set it back to that mode. 
Would you like to set device group 1R2_500_5 to ASYNC Mode ? [yes or no]: no done EXAMPLE #6 ----------- To restore a source VNX after failover, without the SRDF health check, as rdfadmin su to root user, type: # /nas/sbin/nas_rdf -restore -nocheck Skipping SRDF health check .... Is remote site CELERRA ready for Storage restoration? Do you wish to continue? [yes or no]: yes Contact Joker_R1_CS0 ... is alive Restore will now reboot the source site control station. Do you wish to continue? [yes or no]: yes Device Group (DG) Name : 1R2_500_5 DGs Type : RDF2 DGs Symmetrix ID : 000190100557 Target (R2) View Source (R1) View MODES -------------------------------- ------------------------ ----- ------------ ST LI ST Standard A N A Logical T R1 Inv R2 Inv K T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MDA STATE -------------------------------- -- --------------- ---- ---------- DEV001 045A RW 10 0 RW 045A WD 0 0 S.. R1 Updated DEV002 045B RW 2054 0 NR 045B WD 0 0 S.. Failed Over DEV003 045C RW 0 0 NR 045C WD 0 0 S.. Failed Over DEV004 045D RW 0 0 NR 045D WD 0 0 S.. Failed Over DEV005 045E RW 1284 0 NR 045E WD 0 0 S.. Failed Over DEV006 045F RW 0 0 NR 045F WD 0 0 S.. Failed Over DEV007 0467 RW 0 0 NR 0467 WD 0 0 S.. Failed Over DEV008 0468 RW 2 0 NR 0468 WD 0 0 S.. Failed Over DEV009 0469 RW 0 0 NR 0469 WD 0 0 S.. Failed Over DEV010 046A RW 0 0 NR 046A WD 0 0 S.. Failed Over DEV011 046B RW 2 0 NR 046B WD 0 0 S.. Failed Over DEV012 046C RW 0 0 NR 046C WD 0 0 S.. Failed Over DEV013 046D RW 0 0 NR 046D WD 0 0 S.. Failed Over DEV014 046E RW 0 0 NR 046E WD 0 0 S.. Failed Over DEV015 046F RW 2 0 NR 046F WD 0 0 S.. Failed Over DEV016 0470 RW 0 0 NR 0470 WD 0 0 S.. Failed Over DEV017 0471 RW 2 0 NR 0471 WD 0 0 S.. Failed Over DEV018 0472 RW 0 0 NR 0472 WD 0 0 S.. Failed Over DEV019 0473 RW 0 0 NR 0473 WD 0 0 S.. Failed Over DEV020 0474 RW 0 0 NR 0474 WD 0 0 S.. Failed Over DEV021 0475 RW 0 0 NR 0475 WD 0 0 S.. Failed Over DEV022 0476 RW 0 0 NR 0476 WD 0 0 S.. Failed Over DEV023 0477 RW 2 0 NR 0477 WD 0 0 S.. Failed Over DEV024 0478 RW 2 0 NR 0478 WD 0 0 S.. Failed Over DEV025 0479 RW 0 0 NR 0479 WD 0 0 S.. Failed Over DEV026 047A RW 0 0 NR 047A WD 0 0 S.. Failed Over DEV027 047B RW 0 0 NR 047B WD 0 0 S.. Failed Over DEV028 047C RW 0 0 NR 047C WD 0 0 S.. Failed Over DEV029 047D RW 0 0 NR 047D WD 0 0 S.. Failed Over DEV030 047E RW 0 0 NR 047E WD 0 0 S.. Failed Over DEV031 047F RW 0 0 NR 047F WD 0 0 S.. Failed Over DEV032 0480 RW 0 0 NR 0480 WD 0 0 S.. Failed Over DEV033 0481 RW 0 0 NR 0481 WD 0 0 S.. Failed Over DEV034 0482 RW 0 0 NR 0482 WD 0 0 S.. Failed Over DEV035 0483 RW 0 0 NR 0483 WD 0 0 S.. Failed Over DEV036 0484 RW 0 0 NR 0484 WD 0 0 S.. Failed Over DEV037 0485 RW 0 0 NR 0485 WD 0 0 S.. 
Failed Over DEV038 0486 RW 0 0 NR 0486 WD 0 0 S.. Failed Over DEV039 0487 RW 0 0 NR 0487 WD 0 0 S.. Failed Over DEV040 0488 RW 0 0 NR 0488 WD 0 0 S.. Failed Over DEV041 0489 RW 0 0 NR 0489 WD 0 0 S.. Failed Over DEV042 048A RW 0 0 NR 048A WD 0 0 S.. Failed Over DEV043 048B RW 0 0 NR 048B WD 0 0 S.. Failed Over DEV044 048C RW 0 0 NR 048C WD 0 0 S.. Failed Over DEV045 048D RW 0 0 NR 048D WD 0 0 S.. Failed Over DEV046 048E RW 0 0 NR 048E WD 0 0 S.. Failed Over DEV047 048F RW 2 0 NR 048F WD 0 0 S.. Failed Over DEV048 0490 RW 0 0 NR 0490 WD 0 0 S.. Failed Over DEV049 0491 RW 0 0 NR 0491 WD 0 0 S.. Failed Over DEV050 0492 RW 0 0 NR 0492 WD 0 0 S.. Failed Over DEV051 0493 RW 0 0 NR 0493 WD 0 0 S.. Failed Over DEV052 0494 RW 0 0 NR 0494 WD 0 0 S.. Failed Over DEV053 0495 RW 0 0 NR 0495 WD 0 0 S.. Failed Over DEV054 0496 RW 0 0 NR 0496 WD 0 0 S.. Failed Over DEV055 0497 RW 2 0 NR 0497 WD 0 0 S.. Failed Over DEV056 0498 RW 2 0 NR 0498 WD 0 0 S.. Failed Over DEV057 0499 RW 0 0 NR 0499 WD 0 0 S.. Failed Over DEV058 049A RW 0 0 NR 049A WD 0 0 S.. Failed Over DEV059 049B RW 0 0 NR 049B WD 0 0 S.. Failed Over DEV060 049C RW 0 0 NR 049C WD 0 0 S.. Failed Over DEV061 049D RW 0 0 NR 049D WD 0 0 S.. Failed Over DEV062 049E RW 0 0 NR 049E WD 0 0 S.. Failed Over DEV063 049F RW 0 0 NR 049F WD 0 0 S.. Failed Over DEV064 04A0 RW 0 0 NR 04A0 WD 0 0 S.. Failed Over DEV065 04A1 RW 0 0 NR 04A1 WD 0 0 S.. Failed Over DEV066 04A2 RW 0 0 NR 04A2 WD 0 0 S.. Failed Over DEV067 04A3 RW 0 0 NR 04A3 WD 0 0 S.. Failed Over DEV068 04A4 RW 0 0 NR 04A4 WD 0 0 S.. Failed Over DEV069 04A5 RW 0 0 NR 04A5 WD 0 0 S.. Failed Over
DEV070 04A6 RW 0 0 NR 04A6 WD 0 0 S.. Failed Over Total -------- -------- -------- -------- Track(s) 3366 0 0 0 MB(s) 105.2 0.0 0.0 0.0 Legend for MODES: M(ode of Operation): A = Async, S = Sync, E = Semi-sync, C = Adaptive Copy D(omino) : X = Enabled, . = Disabled A(daptive Copy) : D = Disk Mode, W = WP Mode, . = ACp off Suspend RDF link(s).......................................Done. Merge device track tables between source and target.......Started. Devices: 045A-045F, 0467-0477 in (0557,005).............. Merged. Devices: 0478-0489 in (0557,005)......................... Merged. Devices: 048A-049B in (0557,005)......................... Merged. Devices: 049C-04A6 in (0557,005)......................... Merged. Merge device track tables between source and target.......Done. Resume RDF link(s)........................................Started. Resume RDF link(s)........................................Done. Is remote site CELERRA ready for Network restoration? Do you wish to continue? [yes or no]: yes server_2 : done server_3 : done server_4 : Error 4003: server_4 : standby is not configured server_5 : Error 4003: server_5 : standby is not configured fsck 1.35 (28-Feb-2004) /dev/ndj1: clean, 13836/231360 files, 233729/461860 blocks Waiting for nbs clients to die ... done Waiting for nbs clients to start ... done Waiting for nbs clients to die ... done Waiting for nbs clients to start ... done Waiting for 1R2_500_5 access ...done Write Disable device(s) on RA at target (R2)..............Done. Suspend RDF link(s).......................................Done. Merge device track tables between source and target.......Started. Devices: 045A-045F, 0467-0477 in (0557,005).............. Merged. Devices: 0478-0489 in (0557,005)......................... Merged. Devices: 048A-049B in (0557,005)......................... Merged. Devices: 049C-04A6 in (0557,005)......................... Merged. Merge device track tables between source and target.......Done. 
Resume RDF link(s)........................................Started. Resume RDF link(s)........................................Done. Read/Write Enable device(s) on SA at source (R1)..........Done. Waiting for 1R2_500_5 sync ...done Starting restore on remote site CELERRA ... Waiting for nbs clients to start ... done Waiting for nbs clients to start ... done Suspend RDF link(s).......................................Done. server_2 : server_2 : going standby rdf : going active replace in progress ...done failover activity complete commit in progress (not interruptible)...done done server_3 : server_3 : going standby rdf : going active replace in progress ...done failover activity complete commit in progress (not interruptible)...done done server_4 : Error 4003: server_4 : standby is not configured server_5 : Error 4003: server_5 : standby is not configured Resume RDF link(s)........................................Started. Resume RDF link(s)........................................Done. If the RDF device groups were setup to operate in ASYNCHRONOUS ( SRDF/A ) mode, now would be a good time to set it back to that mode.
Would you like to set device group 1R2_500_5 to ASYNC Mode ? [yes or no]: no done EXAMPLE #7 ---------- To restore a source VNX after failover, when using Dynamic SRDF, rdfadmin su to root user, type: # /nas/sbin/nas_rdf -restore Is remote site CELERRA ready for Storage restoration? Do you wish to continue? [yes or no]: yes Contact eng17335 ... is alive Restore will now reboot the source site control station. Do you wish to continue? [yes or no]: yes Device Group (DG) Name : 1R2_500_1 DGs Type : RDF1 DGs Symmetrix ID : 000280600187 (Microcode Version: 5568) Remote Symmetrix ID : 000280600118 (Microcode Version: 5568) RDF (RA) Group Number : 1 (00) Source (R1) View Target (R2) View MODES ------------------------------ ------------------- ---- ------------ ST LI ST Standard A N A Logical T R1 Inv R2 Inv K T R1 Inv R2 Inv RDF Pair Device Dev E Tracks Tracks S Dev E Tracks Tracks MDA STATE ------------------------------- -- ------------------- --- ------------ DEV001 0056 RW 0 0 RW 0030 WD 0 0 S.. Synchronized DEV002 0057 RW 0 0 RW 0031 WD 0 0 S.. Synchronized DEV003 0032 RW 0 0 RW 000C WD 0 0 S.. Synchronized ............... BCV008 0069 RW 0 0 RW 005F WD 0 0 S.. Synchronized BCV009 006A RW 0 0 RW 0060 WD 0 0 S.. Synchronized BCV010 006B RW 0 0 RW 0061 WD 0 0 S.. Synchronized Total ------ ------ ------ ------ Track(s) 0 0 0 0 MB(s) 0.0 0.0 0.0 0.0 Legend for MODES: M(ode of Operation): A = Async, S = Sync, E = Semi-sync, C = Adaptive Copy D(omino) : X = Enabled, . = Disabled A(daptive Copy) : D = Disk Mode, W = WP Mode, . = ACp off Is remote site CELERRA ready for Network restoration? Do you wish to continue? [yes or no]: yes server_2 : done server_3 : done server_4 : Error 4003: server_4 : standby is not configured server_5 : Error 4003: server_5 : standby is not configured fsck 1.35 (28-Feb-2004) /dev/sdj1: clean, 12956/219968 files, 188765/439797 blocks An RDF Failover operation execution is in progress for device group 1R2_500_1. Please wait... 
Write Disable device(s) on SA at source (R1)..............Done. Suspend RDF link(s).......................................Done. Swap RDF Personality......................................Started. Swap RDF Personality......................................Done. Suspend RDF link(s).......................................Done. Read/Write Enable device(s) on SA at source (R1)..........Done. Resume RDF link(s)........................................Started. Resume RDF link(s)........................................Done.
Read/Write Enable device(s) on SA at target (R2)..........Done. The RDF Failover operation successfully executed for device group 1R2_500_1. Waiting for 1R2_500_1 sync ...done Starting restore on remote site CELERRA ... Suspend RDF link(s).......................................Done. server_2 : server_2 : going standby rdf : going active replace in progress ...done failover activity complete commit in progress (not interruptible)...done done server_3 : server_3 : going standby rdf : going active replace in progress ...done failover activity complete commit in progress (not interruptible)...done done server_4 : Error 4003: server_4: standby is not configured server_5 : Error 4003: server_5: standby is not configured done EXAMPLE #8 ---------- To run all available checks on a source VNX, as a nasadmin su to root user, type: # /nas/sbin/nas_rdf -check -all --------------------- SRDF Health Checks --------------------- SRDF: Checking system is restored........................ Pass SRDF: Checking device is normal.......................... Pass SRDF: Checking R1 SRDF session is Synch or Consistent.... Pass SRDF: Checking R1 Data Mover configuration is valid...... Pass SRDF: Checking R1 devices are available.................. Pass SRDF: Checking R1 device group has all devices........... Pass SRDF: Checking R2 SRDF session is Synch or Consistent.... Pass SRDF: Checking R2 Data Mover configuration is valid...... Pass SRDF: Checking R2 devices are available.................. Pass SRDF: Checking R2 device group has all devices........... Pass EXAMPLE #9 ---------- To run one or more specific available checks on a source VNX, as a nasadmin su to root user, type: # /nas/sbin/nas_rdf -check r1_dev_group,r2_dev_group --------------------- SRDF Health Checks --------------------- SRDF: Checking R1 device group has all devices........... Pass SRDF: Checking R2 device group has all devices........... 
Pass EXAMPLE #10 ----------- To initiate an SRDF failover from the source VNX to the destination, without the SRDF health check for the following use cases, as rdfadmin su to root user, type: # /nas/sbin/nas_rdf -activate -skip_rdf_operations -nocheck * SRDF STAR concurrent or cascaded
* SRDF concurrent or cascaded * SRDF R2 enable (Split) SiteA to SiteB/SiteC failover case Skipping SRDF health check .... Skipping Site A shutdown process for the skip_rdf_operations option .... Successfully pinged (Remotely) Symmetrix ID: 000194900462 Successfully pinged (Remotely) Symmetrix ID: 000194900546 Skipping symrdf failover process .... Waiting for nbs clients to die ... done Waiting for nbs clients to start ... done fsck 1.39 (29-May-2006) /dev/ndj1: recovering journal /dev/ndj1: clean, 15012/252928 files, 271838/516080 blocks Waiting for nbs clients to die ... done Waiting for nbs clients to start ... done id type acl slot groupID state name 1 1 0 2 0 server_2 2 1 0 3 0 server_3 server_2 : server_2 : going offline rdf : going active replace in progress ...done failover activity complete commit in progress (not interruptible)...done done Skipping symrdf update process .... A reboot Control Station request was sent to Site A to clean up old processes .... SiteB to SiteC failover case [root@CS_C rdfadmin]# /nas/sbin/nas_rdf -activate -skip_rdf_operations -nocheck Skipping Site A shutdown process .... For Site B to Site C failover or Site C to Site B failover, nas_rdf -restore -skip_rdf_operations -skip_SiteA_shutdown and reboot -f -n operations must be done on the source side Control Station (with read-write backend) to clean up old processes before continuing this activate operation, unless the source side is not reachable or destroyed. Do you wish to continue? [yes or no]: yes Successfully pinged (Remotely) Symmetrix ID: 000194900431 Successfully pinged (Remotely) Symmetrix ID: 000194900546 Successfully pinged (Remotely) Symmetrix ID: 000194900673 Skipping symrdf failover process .... Waiting for nbs clients to die ... done Waiting for nbs clients to start ... done fsck 1.39 (29-May-2006) /dev/ndj1: clean, 14717/252928 files, 279439/516080 blocks Waiting for nbs clients to die ... done Waiting for nbs clients to start ... 
done server_2 : server_2 : going standby rdf : going active replace in progress ...done failover activity complete commit in progress (not interruptible)...done done Skipping symrdf update process .... A reboot Control Station request was sent to 10.245.64.168 to clean up old processes ....
EXAMPLE #11 ----------- To initiate an SRDF failover from the source VNX to the destination, without the SRDF health check, for the case where the SiteA Data Movers are already shut down and the Control Station is already rebooted, type: # /nas/sbin/nas_rdf -activate -skip_SiteA_shutdown -nocheck Skipping SRDF health check .... Skipping Site A shutdown process .... This skip_SiteA_shutdown option is only for the case the Site A Data Movers have been already shutdown and the Site A Control Station has been already rebooted to clean up old processes. Do you wish to continue? [yes or no]: yes Successfully pinged (Remotely) Symmetrix ID: 000194900431 Successfully pinged (Remotely) Symmetrix ID: 000194900462 Successfully pinged (Remotely) Symmetrix ID: 000194900673 Write Disable device(s) on SA at source (R1)..............Done. Suspend RDF link(s).......................................Done. Read/Write Enable device(s) on RA at target (R2)..........Done. Waiting for nbs clients to die ... done Waiting for nbs clients to start ... done fsck 1.39 (29-May-2006) /dev/ndj1: recovering journal /dev/ndj1: clean, 14237/252928 files, 297432/516080 blocks Waiting for nbs clients to die ... done Waiting for nbs clients to start ... done id type acl slot groupID state name 1 4 2000 2 0 server_2 2 1 1000 3 0 server_3 server_3 : server_3 : going offline rdf : going active replace in progress ...done failover activity complete commit in progress (not interruptible)...done done Suspend RDF link(s).......................................Done. Merge device track tables between source and target.......Started. Devices: 0078-0078 in (0546,011)..........................Merged. Merge device track tables between source and target.......Done. Resume RDF link(s)........................................Started. Resume RDF link(s)........................................Done. A shutdown request was sent to Site A to clean up old processes .... 
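The Pass/Fail lines printed by the health check lend themselves to scripted screening before an activate. The sketch below is illustrative only: the captured text (including the Fail line) is hypothetical stand-in data, not real command output; in practice you would capture it with something like `checks="$(/nas/sbin/nas_rdf -check -all)"`.

```shell
# Hypothetical captured output from: /nas/sbin/nas_rdf -check -all
checks='SRDF: Checking system is restored........................ Pass
SRDF: Checking device is normal.......................... Pass
SRDF: Checking R1 devices are available.................. Fail'

# Count Pass/Fail lines; a nonzero fail count should block the failover.
passed=$(printf '%s\n' "$checks" | grep -c ' Pass$')
failed=$(printf '%s\n' "$checks" | grep -c ' Fail$')
echo "passed=$passed failed=$failed"
```

A wrapper script could exit nonzero when `failed` is not 0, so the activate step only runs against a clean health check.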
EXAMPLE #12 ----------- To restore a source VNX after failover for the following use cases, as a nasadmin su to root user, type: # /nas/sbin/nas_rdf -restore -skip_rdf_operations SRDF STAR concurrent or cascaded SRDF concurrent or cascaded SRDF R2 enable (Split) Restore on SiteB/SiteC Skipping session check .... Is remote site CELERRA ready for Storage restoration? Do you wish to continue? [yes or no]: yes Contact eng564169 ... is alive Restore will now reboot the source site control station. This process may take
several minutes. Do you wish to continue? [yes or no]: yes Halting SiteA Data Movers and rebooting SiteA Control Station .... Checking SiteA Data Mover halt status .... Skipping symrdf update operation .... Is remote site CELERRA ready for Network restoration? Do you wish to continue? [yes or no]: yes server_2 : done server_3 : Error 4003: server_3 : standby is not configured fsck 1.39 (29-May-2006) /dev/ndj1: clean, 14716/252928 files, 279441/516080 blocks Waiting for nbs clients to die ... done Waiting for nbs clients to start ... done Waiting for nbs clients to die ... done Waiting for nbs clients to start ... done Skipping symrdf failback operation & Site A restore .... Restore on SiteA To restore on siteA as a nasadmin su to root user, type: [root@CS_A nasadmin]# /nasmcd/sbin/nas_rdf -restore -skip_rdf_operations Waiting for NAS services to finish starting......................... Done Ensure that SiteA is currently write-enabled to continue this restore operation. Do you wish to continue? [yes or no]: yes Waiting for nbs clients to start ... done Waiting for nbs clients to start ... done server_2 : server_2 : going standby rdf : going active replace in progress ...done failover activity complete commit in progress (not interruptible)...done done server_3 : Error 4003: server_3 : standby is not configured Skipping symrdf set async operation .... Run nas_diskmark -mark -all on all Control Stations in the SRDF configuration to make sure the SRDF configuration and nasdb are restored completely. Starting Services ...done EXAMPLE #13 ------------ To disable SiteB for failover from SiteB to SiteC, as a rdfadmin su to root user, type: # /nas/sbin/nas_rdf -restore -skip_rdf_operations -skip_SiteA_shutdown Skipping session check .... Skipping Site A shutdown process .... Skipping symrdf update operation .... Is remote site CELERRA ready for Network restoration? Do you wish to continue? 
[yes or no]: yes server_2 : done server_3 : Error 4003: server_3 : standby is not configured fsck 1.39 (29-May-2006) /dev/ndj1: clean, 14717/252928 files, 279439/516080 blocks Waiting for nbs clients to die ... done Waiting for nbs clients to start ... done Waiting for nbs clients to die ... done Waiting for nbs clients to start ... done
Skipping symrdf failback operation & Site A restore .... ----------------------------------------------------------- Last Modified: May 28, 2012 3:45 p.m.
nas_replicate Manages loopback, local, and remote VNX Replicator sessions. SYNOPSIS -------- nas_replicate -list [-id] | -info {-all|id=
In response to a potential disaster scenario, use nas_replicate to perform a failover of a specified replication session with possible data loss. The -switchover option switches over a replication relationship and performs synchronization of the source and destination without data loss. Use nas_replicate also to reverse the direction of a replication session, or to refresh the destination side with updates from the source, either on demand or based on a max time out of sync value. OPTIONS ------- -list [-id] Displays all configured (or stopped) replication sessions on each Data Mover in the VNX for file cabinet. Each session is represented by either a name or a session ID that is generated automatically whenever a session is configured and is globally unique. Use this option to obtain the session ID needed for another command. Since session IDs are lengthy, copy the ID from this output and paste it into the subsequent command. -info {-all|id=
multiple back-end systems attached. Use nas_storage -list to obtain attached storage system serial numbers. -vdm
interface was defined. This option does not apply to a loopback interconnect, which always uses 127.0.0.1. If no destination interface is specified, the system selects one that can communicate with the source interface. [{-max_time_out_of_sync
The -storageSystem option identifies the storage system on which the destination VDM will reside. This is necessary when there are multiple back-end systems attached. Use nas_storage -list to obtain attached storage system serial numbers. -interconnect {
time for a VDM replication is 5 minutes. [-overwrite_destination] For an existing destination object, discards any changes made to the destination object and restores it from the established common base, thereby starting the replication session from a differential copy. If this option is not specified, and the destination object contains different content than the established common base, an error is returned. [-background] Executes the command in asynchronous mode. Use the nas_task command to check the status of the command. START OPTIONS ------------- -start {
[-overwrite_destination] For an existing destination object, discards any changes made to the destination object and restores the destination object from the established, internal common base checkpoint, thereby starting the replication session from a differential copy. If this option is not specified and the destination object has different content than the established common base, an error is returned. [-reverse] Reverses the direction of the replication session when invoked from the new source side (the original destination). A reverse operation continues to use the established replication name or replication session ID. Use this option to restart replication after a failover or switchover. [-full_copy] For an existing destination object that contains content changes, performs a full copy of the source object to the destination object. If replication cannot be started from a differential copy using the -overwrite_destination option, omitting this option causes the command to return an error. [-background] Executes the command in asynchronous mode. Use the nas_task command to check the status of the command. MODIFY OPTIONS -------------- -modify {
update occurs. If you do not specify a max_time_out_of_sync value, use the -manual_refresh option to indicate that the destination will be updated on demand using the nas_replicate -refresh command. If no option is selected, the refresh default time is 10 minutes for file system replication and 5 minutes for VDM replication sessions. STOP OPTIONS ------------ -stop {
command from the Control Station on the destination VNX only. This command cancels any data transfer that is in process and marks the destination object as read-write so that it can serve as the new source object. When the original source Data Mover becomes reachable, the source object is changed to read-only. CAUTION: The execution of the failover operation is asynchronous and results in data loss if all the data was not transferred to the destination site prior to issuing the failover. If there are multiple sessions using the same source object, only one replication session can be failed over. After the selected session is failed over, the other sessions become inactive until the session is restarted or failed back. [-background] Executes the command in asynchronous mode. Use the nas_task command to check progress. SWITCHOVER OPTIONS ------------------ -switchover {
Instructs the replication -refresh option to use a specific checkpoint on the source side and a specific checkpoint on the destination side. Specifying source and destination checkpoints for the -refresh option is optional. However, if you specify a source checkpoint, you must also specify a destination checkpoint. Replication transfers the contents of the user-specified source checkpoint to the destination file system. This transfer can be either a full copy or a differential copy depending on the existing replication semantics. After the transfer, the replication internally refreshes the user-specified destination checkpoint and marks the two checkpoints as common bases. After the replication refresh operation completes successfully, both the source and destination checkpoints have the same view of their file systems. The replication continues to use these checkpoints as common bases until the next transfer is completed. After a user checkpoint is marked with a common base property, the property is retained until the checkpoint is refreshed or deleted. A checkpoint that is already paired as a common base with another checkpoint propagates its common base property when it is specified as the source in a replication refresh operation. This propagation makes it possible for file systems without a direct replication relationship to have common base checkpoints. [-background] Executes the command in asynchronous mode. Use the nas_task command to check progress. STORAGE SYSTEM OUTPUT --------------------- The number associated with the storage device is dependent on the attached storage system. VNX for block displays a prefix of APM before a set of integers, for example, APM00033900124-0019. Symmetrix storage systems appear as 002804000190-003C. The outputs displayed in the examples use a VNX for block. 
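The two identifier styles above can be told apart and split mechanically in a script. A minimal sketch using POSIX parameter expansion; the device IDs are the ones quoted in the preceding paragraph:

```shell
# Split a storage-device identifier into its system serial and device parts.
clar_dev="APM00033900124-0019"   # VNX for block style (APM prefix)
symm_dev="002804000190-003C"     # Symmetrix style (digits only)

serial=${clar_dev%-*}    # text before the last hyphen: the system serial
devnum=${clar_dev##*-}   # text after the last hyphen: the device number
echo "$serial $devnum"

# The APM prefix distinguishes the two back-end styles.
case $symm_dev in
    APM*) echo "VNX for block device" ;;
    *)    echo "Symmetrix device" ;;
esac
```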
EXAMPLE #1 ---------- To list all the VNX Replicator sessions, type: $ nas_replicate -list Name Type Local Mover Interconnect Celerra Status ufs1_rep1 filesystem server_3 -->NYs3_LAs2 cs110 OK vdm1_rep1 vdm server_3 -->NYs3_LAs2 cs110 OK Where: Value Definition ----- ---------- Name Either the name of the session or the globally unique session ID for the session, if there are duplicate names on the system. Type The type of replication session (ongoing file system (fs), copy, or VDM). Local Mover The source Data Mover for the session. Interconnect The name of the source-side interconnect used for the session. Celerra The name of the VNX system. Status The status of the session (OK, Active, Idle, Stopped, Error, Waiting, Info, Critical). EXAMPLE #2 ---------- To create a file system replication session ufs1_rep1 on the source file
system ufs1 and destination pool clar_r5_performance on the interconnect NYs3_LAs2 using the specified source and destination IP addresses to be updated automatically every 5 minutes, type: $ nas_replicate -create ufs1_rep1 -source -fs ufs1 -destination -pool clar_r5_performance -interconnect NYs3_LAs2 -source_interface ip=10.6.3.190 -destination_interface ip=10.6.3.173 -max_time_out_of_sync 5 OK EXAMPLE #3 ---------- To display information for a replication session ufs1_rep1, type: $ nas_replicate -info ufs1_rep1 ID = 184_APM00064600086_0000_173_APM00072901601_0000 Name = ufs1_rep1 Source Status = OK Network Status = OK Destination Status = OK Last Sync Time = Thu Dec 13 14:47:16 EST 2007 Type = filesystem Celerra Network Server = cs110 Dart Interconnect = NYs3_LAs2 Peer Dart Interconnect = 20004 Replication Role = source Source Filesystem = ufs1 Source Data Mover = server_3 Source Interface = 10.6.3.190 Source Control Port = 0 Source Current Data Port = 0 Destination Filesystem = ufs1_replica3 Destination Data Mover = server_2 Destination Interface = 10.6.3.173 Destination Control Port = 5081 Destination Data Port = 8888 Max Out of Sync Time (minutes) = 5 Next Transfer Size (Kb) = 0 Latest Snap on Source = Latest Snap on Destination = Current Transfer Size (KB) = 0 Current Transfer Remain (KB) = 0 Estimated Completion Time = Current Transfer is Full Copy = No Current Transfer Rate (KB/s) = 76 Current Read Rate (KB/s) = 11538 Current Write Rate (KB/s) = 580 Previous Transfer Rate (KB/s) = 0 Previous Read Rate (KB/s) = 0 Previous Write Rate (KB/s) = 0 Average Transfer Rate (KB/s) = 6277 Average Read Rate (KB/s) = 0 Average Write Rate (KB/s) = 0 EXAMPLE #4 ---------- To create a VDM replication session vdm1_rep1 on source VDM vdm1 and destination pool clar_r5_performance on the interconnect NYs3_LAs2 with the given source and destination IP addresses to be updated automatically every 5 minutes, type: $ nas_replicate -create vdm1_rep1 -source -vdm vdm1 
-destination -pool clar_r5_performance -interconnect NYs3_LAs2 -source_interface ip=10.6.3.190 -destination_interface ip=10.6.3.173
-max_time_out_of_sync 5 OK EXAMPLE #5 ---------- To list existing replication sessions, type: $ nas_replicate -list Name Type Local Mover Interconnect Celerra Status ufs1_rep1 filesystem server_3 -->NYs3_LAs2 cs110 OK vdm1_rep1 vdm server_3 -->NYs3_LAs2 cs110 OK EXAMPLE #6 ---------- To manually synchronize source and destination for the replication session ufs1_rep1, type: $ nas_replicate -refresh ufs1_rep1 OK EXAMPLE #7 ---------- To manually synchronize source and destination for the replication session ufs1_rep1 by using user checkpoints on the source and the destination, type: $ nas_replicate -refresh ufs1_rep1 -source id=101 -destination id=102 OK EXAMPLE #8 ---------- To stop replication on both source and destination for the replication session ufs1_rep1, type: $ nas_replicate -stop ufs1_rep1 -mode both OK EXAMPLE #9 ----------- To start the stopped replication session ufs1_rep1 on interconnect NYs3_LAs2, specifying manual refresh and overwriting the destination file system with a full copy, type: $ nas_replicate -start ufs1_rep1 -interconnect NYs3_LAs2 -manual_refresh -overwrite_destination -full_copy OK EXAMPLE #10 ----------- To display information for the VDM replication session vdm1_rep1, type: $ nas_replicate -info vdm1_rep1 ID = 278_APM00064600086_0000_180_APM00072901601_0000 Name = vdm1_rep1 Source Status = OK Network Status = OK Destination Status = OK Last Sync Time = Fri Dec 14 16:49:54 EST 2007 Type = vdm
Celerra Network Server = cs110 Dart Interconnect = NYs3_LAs2 Peer Dart Interconnect = 20004 Replication Role = source Source VDM = vdm1 Source Data Mover = server_3 Source Interface = 10.6.3.190 Source Control Port = 0 Source Current Data Port = 0 Destination VDM = vdm1 Destination Data Mover = server_2 Destination Interface = 10.6.3.173 Destination Control Port = 5081 Destination Data Port = 8888 Max Out of Sync Time (minutes) = 5 Next Transfer Size (Kb) = 0 Latest Snap on Source = Latest Snap on Destination = Current Transfer Size (KB) = 0 Current Transfer Remain (KB) = 0 Estimated Completion Time = Current Transfer is Full Copy = No Current Transfer Rate (KB/s) = 313 Current Read Rate (KB/s) = 19297 Current Write Rate (KB/s) = 469 Previous Transfer Rate (KB/s) = 0 Previous Read Rate (KB/s) = 0 Previous Write Rate (KB/s) = 0 Average Transfer Rate (KB/s) = 155 Average Read Rate (KB/s) = 0 Average Write Rate (KB/s) = 0 EXAMPLE #11 ----------- To change the session name vdm1_rep1 to vdm1_rep2, and to change the max time out of sync value to 90, type: $ nas_replicate -modify vdm1_rep1 -name vdm1_rep2 -max_time_out_of_sync 90 OK EXAMPLE #12 ----------- To fail over the replication session ufs1_rep1, type on the destination: $ nas_replicate -failover ufs1_rep1 OK EXAMPLE #13 ----------- To start failed-over replication in the reverse direction, type: $ nas_replicate -start ufs1_rep1 -interconnect LAs2_NYs3 -reverse -overwrite_destination OK EXAMPLE #14 ----------- To reverse the direction of the replication session ufs1_rep1, type: $ nas_replicate -reverse ufs1_rep1 OK EXAMPLE #15
----------- To switch over the replication session ufs1_rep1 using the background option, type: $ nas_replicate -switchover ufs1_rep1 -background Info 26843676673: In Progress: Operation is still running. Check task id 4058 on the Task Status screen for results. *** Comment: Use the nas_task -info command to find out the status of the background task. EXAMPLE #16 ----------- To delete the replication session fs1_rep1 on both source and destination, type: $ nas_replicate -delete fs1_rep1 -mode both OK -------------------------------------------------------------------- Last modified: Feb 21 2013, 2:34 pm
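Because nas_replicate -list emits a fixed-column table with Status as the last field, a quick status sweep can be scripted. The sketch below works on a captured listing; the table text (including the Error row) is hypothetical stand-in data, and in practice you would feed it real output, e.g. `nas_replicate -list | awk ...`.

```shell
# Hypothetical captured output from: nas_replicate -list
listing='Name       Type        Local Mover  Interconnect   Celerra  Status
ufs1_rep1  filesystem  server_3     -->NYs3_LAs2   cs110    OK
vdm1_rep1  vdm         server_3     -->NYs3_LAs2   cs110    Error'

# Print name and status of every session whose last column is not OK
# (skip the header line).
bad=$(printf '%s\n' "$listing" | awk 'NR > 1 && $NF != "OK" { print $1, $NF }')
echo "$bad"
```

Session names containing spaces would break this field-based parse; the command restricts names, so this is generally safe.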
nas_server Manages the Data Mover (server) table. SYNOPSIS -------- nas_server -list [-all|-vdm] | -delete
-info {-all|
recommended to avoid fsck before mounting a BCV file system on SiteB or SiteC. -vdm
Configuring Virtual Data Mover on VNX, Using International Character Sets for File, nas_fs, nas_volume, and server_cifs. SYSTEM OUTPUT ------------- VNX systems support the following system-defined storage pools: clar_r1, clar_r5_performance, clar_r5_economy, clar_r6, clarata_r3, clarata_r6, clarata_r10, clarata_archive, cm_r1, cm_r5_performance, cm_r5_economy, cm_r6, cmata_r3, cmata_archive, cmata_r6, cmata_r10, clarsas_archive, clarsas_r6, clarsas_r10, clarefd_r5, clarefd_r10, cmsas_archive, cmsas_r6, cmsas_r10, and cmefd_r5. Disk types when using VNX for block are CLSTD, CLEFD, and CLATA, and for VNX for block involving mirrored disks are CMEFD, CMSTD, and CMATA. VNX with a Symmetrix system supports the following system-defined storage pools: symm_std, symm_std_rdf_src, symm_std_rdf_tgt, symm_ata, symm_ata_rdf_src, symm_ata_rdf_tgt, and symm_efd. For user-defined storage pools, the difference in output is in the disk type. Disk types when using a Symmetrix are STD, R1STD, R2STD, BCV, R1BCV, R2BCV, ATA, R1ATA, R2ATA, BCVA, R1BCA, R2BCA, and EFD. EXAMPLE #1 ---------- To list the physical Data Mover table, type: $ nas_server -list id type acl slot groupID state name 1 1 1000 2 0 server_2 2 1 1000 3 0 server_3 3 1 1000 4 0 server_4 4 4 1000 5 0 server_5 Where: Value Definition ----- ---------- id ID of the Data Mover. type Type assigned to the Data Mover. acl Access control level value assigned to the Data Mover or VDM. slot Physical slot in the cabinet where the Data Mover resides. groupID ID of the Data Mover group. state Whether the Data Mover is enabled=0, disabled=1, or failed over=2. name Name given to the Data Mover. EXAMPLE #2 ---------- To list the physical Data Mover and VDM table, type: $ nas_server -list -all id type acl slot groupID state name 1 1 1000 2 0 server_2 2 1 1000 3 0 server_3 3 1 1000 4 0 server_4 4 4 1000 5 0 server_5 id acl server mountedfs rootfs name
3 0 1 31 vdm_1 EXAMPLE #1 provides a description of outputs for the physical Data Movers. The following table provides a description of the command output for the VDM table. Where: Value Definition ----- ---------- id ID of the Data Mover. acl Access control level value assigned to the Data Mover or VDM. server Server on which the VDM is loaded. mountedfs Filesystems that are mounted on this VDM. rootfs ID number of the root file system. name Name given to the Data Mover or VDM. EXAMPLE #3 ---------- To list the VDM server table, type: $ nas_server -list -vdm id acl server mountedfs rootfs name 3 0 1 31 vdm_1 EXAMPLE #4 ---------- To list information for a Data Mover, type: $ nas_server -info server_2 id = 1 name = server_2 acl = 1000, owner=nasadmin, ID=201 type = nas slot = 2 member_of = standby = server_5, policy=auto status : defined = enabled actual = online, ready Where: Value Definition ----- ---------- id ID of the Data Mover name Name given to the Data Mover acl Access control level value assigned to the Data Mover or VDM. type Type assigned to the Data Mover slot Physical slot in the cabinet where the Data Mover resides. member_of Group of which the Data Mover is a member. standby If the Data Mover has a local standby associated with it. status Whether the Data Mover is enabled or disabled, and whether it is active. EXAMPLE #5 ---------- To display detailed information for all servers, type: $ nas_server -info -all id = 1
name = server_2 acl = 1000, owner=nasadmin, ID=201 type = nas slot = 2 member_of = standby = server_5, policy=auto status : defined = enabled actual = online, active id = 2 name = server_3 acl = 1000, owner=nasadmin, ID=201 type = nas slot = 3 member_of = standby = server_5, policy=auto status : defined = enabled actual = online, ready id = 3 name = server_4 acl = 1000, owner=nasadmin, ID=201 type = nas slot = 4 member_of = standby = server_5, policy=auto status : defined = enabled actual = online, ready id = 4 name = server_5 acl = 1000, owner=nasadmin, ID=201 type = standby slot = 5 member_of = standbyfor= server_4,server_2,server_3 status : defined = enabled actual = online, ready EXAMPLE #4 provides a description of command outputs. EXAMPLE #6 ---------- To display information for all VDMs, type: $ nas_server -info -vdm -all id = 3 name = vdm_1 acl = 0 type = vdm server = server_2 rootfs = root_fs_vdm_1 I18N mode = UNICODE mountedfs = member_of = status : defined = enabled actual = mounted Interfaces to services mapping: Where: Value Definition ----- ---------- id ID of the Data Mover. name Name of the Data Mover. acl Access control level value assigned to the VDM.
type For a VDM server, the type is always VDM. server Server on which the VDM is loaded. rootfs Root filesystem of the VDM. I18N mode I18N mode of the VDM, either ASCII or UNICODE. mountedfs Filesystems that are mounted on this VDM. member_of If it is a member of a cluster, this field shows the cluster name. status Whether the VDM is enabled or disabled, and whether it is loaded ready, loaded active, mounted, temporarily unloaded, or permanently unloaded. Interfaces to services mapping List of interfaces that are used for the services configured on this VDM. Currently, only CIFS service is provided, so this field lists all the interfaces used in the CIFS servers configured on this VDM. EXAMPLE #7 ---------- To create a mounted VDM named vdm_1 on server_2 using the storage pool clar_r5_performance with fstype uxfs, type: $ nas_server -name vdm_1 -type vdm -create server_2 -setstate mounted pool=clar_r5_performance -option fstype=uxfs id = 3 name = vdm_1 acl = 0 type = vdm server = server_2 rootfs = root_fs_vdm_1 I18N mode = UNICODE mountedfs = member_of = status : defined = enabled actual = mounted Interfaces to services mapping: EXAMPLE #6 provides a description of command outputs. EXAMPLE #8 ---------- To set the state of vdm_1 to mounted, type: $ nas_server -vdm vdm_1 -setstate mounted id = 3 name = vdm_1 acl = 0 type = vdm server = server_2 rootfs = root_fs_vdm_1 I18N mode = UNICODE mountedfs = member_of = status : defined = enabled actual = mounted Interfaces to services mapping: EXAMPLE #6 provides a description of command outputs. EXAMPLE #9 ----------
To move the image of vdm_1 onto server_4, type: $ nas_server -vdm vdm_1 -move server_4 id = 3 name = vdm_1 acl = 0 type = vdm server = server_4 rootfs = root_fs_vdm_1 I18N mode = UNICODE mountedfs = member_of = status : defined = enabled actual = loaded, ready Interfaces to services mapping: EXAMPLE #6 provides a description of command outputs. EXAMPLE #10 ----------- To rename a Data Mover entry from server_2 to dm2, type: $ nas_server -rename server_2 dm2 id = 1 name = dm2 acl = 1000, owner=nasadmin, ID=201 type = nas slot = 2 member_of = standby = server_5, policy=auto status : defined = enabled actual = online, active EXAMPLE #4 provides a description of command outputs. EXAMPLE #11 ----------- To set the access control level for server_2, type: $ nas_server -acl 1432 server_2 id = 1 name = server_2 acl = 1432, owner=nasadmin, ID=201 type = nas slot = 2 member_of = standby = server_5, policy=auto status : defined = enabled actual = online, ready Note: The value 1432 specifies nasadmin as the owner, gives users with an access level of at least observer read-only access, users with an access level of at least operator read/write access, and users with an access level of at least admin read/write/delete access. EXAMPLE #4 provides a description of command outputs. EXAMPLE #12 ----------- To delete vdm_1, type:
$ nas_server -delete vdm_1 id = 3 name = vdm_1 acl = 0 type = vdm server = rootfs = root_fs_vdm_1 I18N mode = UNICODE mountedfs = member_of = status : defined = enabled actual = permanently unloaded Interfaces to services mapping: EXAMPLE #6 provides a description of command outputs. EXAMPLE #13 ----------- To delete a physical Data Mover using the root command, type: $ /nas/sbin/rootnas_server -delete server_3 id = 2 name = server_3 acl = 0 type = nas slot = 3 member_of = standby = server_5, policy=auto status : defined = disabled actual = boot_level=0 EXAMPLE #6 provides a description of command outputs. EXAMPLE #14 ----------- To create a VDM named vdm1 on server_3, type: $ nas_server -name vdm1 -type vdm -create server_3 id = 43 name = vdm1 acl = 0 type = vdm server = server_3 rootfs = root_fs_vdm_vdm1 I18N mode = UNICODE mountedfs = member_of = status : defined = enabled actual = loaded, ready Interfaces to services mapping: EXAMPLE #15 ----------- To assign network interfaces to vdm1, assuming vdm1if1 and vdm1if2 exist and are not attached to another VDM, type: $ nas_server -vdm vdm1 -attach vdm1if1, vdm1if2 id = 43 name = vdm1 acl = 0 type = vdm server = server_2 rootfs = root_fs_vdm_vdm1 I18N mode = UNICODE
mountedfs = member_of = status : defined = enabled actual = loaded, ready Interfaces to services mapping: interface=vdm1if1 :vdm interface=vdm1if2 :vdm EXAMPLE #16 ----------- To query the vdm1 state, type: $ nas_server -info -vdm vdm1 id = 43 name = vdm1 acl = 0 type = vdm server = server_2 rootfs = root_fs_vdm_vdm1 I18N mode = UNICODE mountedfs = member_of = status : defined = enabled actual = loaded, ready Interfaces to services mapping: interface=vdm1if2 :cifs vdm interface=vdm1if1 :vdm EXAMPLE #17 ----------- To create a VDM named vdm2 on the server_3 using split ufs log type, type: $ nas_server -name vdm2 -type vdm -create server_3 -setstate loaded pool=symm_std_rdf_src -o log_type=split id = 2 name = vdm2 acl = 0 type = vdm server = server_3 rootfs = root_fs_vdm_vdm2 I18N mode = ASCII mountedfs = member_of = status : defined = enabled actual = loaded, ready Interfaces to services mapping: To confirm a VDM ufs log type, type: /nas/sbin/rootnas_fs -i root_fs_vdm_vdm2 id = 49 name = root_fs_vdm_vdm2 acl = 0 in_use = True type = uxfs worm = off volume = v1260 pool = symm_std_rdf_src member_of = root_avm_fs_group_8 rw_servers= server_3 ro_servers= rw_vdms = ro_vdms = auto_ext = no,thin=no log_type = split fast_clone_level = 2 deduplication = Off
stor_devs = 000194900462-10C6,000194900462-10CE,000194900462-10D6,000194900462-10DE,000194900462-10E6,000194900462-10EE,000194900462-10F6,000194900462-10FE
disks = d1102,d1103,d1104,d1105,d1106,d1107,d1108,d1109
disk=d1102 stor_dev=000194900462-10C6 addr=c4t3l4-72-0 server=server_3
disk=d1102 stor_dev=000194900462-10C6 addr=c20t3l4-71-0 server=server_3
disk=d1102 stor_dev=000194900462-10C6 addr=c36t3l4-71-0 server=server_3
disk=d1102 stor_dev=000194900462-10C6 addr=c52t3l4-72-0 server=server_3
disk=d1103 stor_dev=000194900462-10CE addr=c4t3l5-72-0 server=server_3
disk=d1103 stor_dev=000194900462-10CE addr=c20t3l5-71-0 server=server_3
disk=d1103 stor_dev=000194900462-10CE addr=c36t3l5-71-0 server=server_3
disk=d1103 stor_dev=000194900462-10CE addr=c52t3l5-72-0 server=server_3
disk=d1104 stor_dev=000194900462-10D6 addr=c4t3l6-72-0 server=server_3
disk=d1104 stor_dev=000194900462-10D6 addr=c20t3l6-71-0 server=server_3
disk=d1104 stor_dev=000194900462-10D6 addr=c36t3l6-71-0 server=server_3
disk=d1104 stor_dev=000194900462-10D6 addr=c52t3l6-72-0 server=server_3
disk=d1105 stor_dev=000194900462-10DE addr=c4t3l7-72-0 server=server_3
disk=d1105 stor_dev=000194900462-10DE addr=c20t3l7-71-0 server=server_3
disk=d1105 stor_dev=000194900462-10DE addr=c36t3l7-71-0 server=server_3
disk=d1105 stor_dev=000194900462-10DE addr=c52t3l7-72-0 server=server_3
disk=d1106 stor_dev=000194900462-10E6 addr=c4t3l8-72-0 server=server_3
disk=d1106 stor_dev=000194900462-10E6 addr=c20t3l8-71-0 server=server_3
disk=d1106 stor_dev=000194900462-10E6 addr=c36t3l8-71-0 server=server_3
disk=d1106 stor_dev=000194900462-10E6 addr=c52t3l8-72-0 server=server_3
disk=d1107 stor_dev=000194900462-10EE addr=c4t3l9-72-0 server=server_3
disk=d1107 stor_dev=000194900462-10EE addr=c20t3l9-71-0 server=server_3
disk=d1107 stor_dev=000194900462-10EE addr=c36t3l9-71-0 server=server_3
disk=d1107 stor_dev=000194900462-10EE addr=c52t3l9-72-0 server=server_3
disk=d1108 stor_dev=000194900462-10F6 addr=c4t3l10-72-0 server=server_3
disk=d1108 stor_dev=000194900462-10F6 addr=c20t3l10-71-0 server=server_3
disk=d1108 stor_dev=000194900462-10F6 addr=c36t3l10-71-0 server=server_3
disk=d1108 stor_dev=000194900462-10F6 addr=c52t3l10-72-0 server=server_3
disk=d1109 stor_dev=000194900462-10FE addr=c4t3l11-72-0 server=server_3
disk=d1109 stor_dev=000194900462-10FE addr=c20t3l11-71-0 server=server_3
disk=d1109 stor_dev=000194900462-10FE addr=c36t3l11-71-0 server=server_3
disk=d1109 stor_dev=000194900462-10FE addr=c52t3l11-72-0 server=server_3

-----------------------------------------------------------------------
Last modified: December 3, 2014 12:20 p.m.
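Output from `nas_server -info -vdm` (and from the rootnas_fs query) is a flat list of `key = value` pairs, which makes single fields easy to extract in wrapper scripts. A minimal sketch, assuming only the `key = value` layout shown above; the `vdm_field` helper name is illustrative, not part of the product:

```shell
#!/bin/sh
# vdm_field: print the value of one "key = value" field from
# `nas_server -info -vdm` style output read on stdin.
# Usage: nas_server -info -vdm vdm1 | vdm_field rootfs
vdm_field() {
    awk -v key="$1" '$1 == key && $2 == "=" {
        sub(/^[^=]*= */, "")   # drop everything up to and including "key = "
        print
    }'
}
```

For the output shown in EXAMPLE #16, `nas_server -info -vdm vdm1 | vdm_field rootfs` would print root_fs_vdm_vdm1.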
nas_stats

Manages Statistics Groups.

SYNOPSIS
--------
nas_stats -groups { -list | -info [-all|
Statgroup names can be used with the -info request. A statgroup name is limited to 255 characters. Space, slash, backslash, quote, double quote, and comma are illegal characters in a statgroup name.

[-description]
The -description option is optional and defaults to the statgroup name. If the -description option is used, its argument must be enclosed in quotation marks.

-modify
Allows you to modify a statgroup's member_stats list by specifying the new member statistics of the group, overriding the previous contents.

-add
Allows you to add statpath and existing statgroup names to a statgroup by specifying additional items to be appended to the statgroup's member_stats list.

-remove
Allows you to remove member statpath and statgroup names from a statgroup by specifying the items to remove from the statgroup's member_stats list.

-delete
Allows you to delete a statgroup. However, this option does not delete any statgroups that are members of the statgroup.

-recover
Attempts to recover the latest uncorrupted copy of the Statistics Groups database from the NAS database backups. nas_stats searches through the available backups and restores the latest copy. If no NAS database backup contains a healthy version of the Statistics Groups database, a new Statistics Groups database is installed; in that case, all user-defined information is lost. NAS database backups run hourly and VNX maintains the last 12 backups.

[-Force]
Use the -Force option with the -recover option to skip the warning prompt.

-verify
Checks the health status of the Statistics Groups database.

SEE ALSO
--------
server_stats

EXAMPLE #1
----------
To list the system-defined and user-defined Statistics Groups, type:

$ nas_stats -groups -list
Type    Name
System  basic-std
System  basicCifs-std
...     ...
User    basic
User    nfsNet
...     ...

EXAMPLE #2
----------
To provide detailed information on all (or specified) Statistics Groups, type:

$ nas_stats -groups -info
name = basic-std
description = The basic system-defined group.
type = System-defined
member_stats = kernel.cpu.utilization.cpuUtil,net.basic.inBytes,net.basic.outBytes,store.readBytes,store.writeBytes
member_elements =
member_of =

name = basic3
description = CPU and Memory
type = User-defined
member_stats = kernel.cpu.utilization.cpuUtil,kernel.memory.freeBytes
member_elements =
member_of =

name = caches-std
description = The caches system-defined group.
type = System-defined
member_stats = fs.dnlc.hitRatio,fs.ofCache.hitRatio,kernel.memory.bufferCache.hitRatio
member_elements =
member_of =

name = cifs-std
description = The cifs system-defined group.
type = System-defined
member_stats = cifs.global.basic.totalCalls,cifs.global.basic.reads,cifs.global.basic.readBytes,cifs.global.basic.readAvgSize,cifs.global.basic.writes,cifs.global.basic.writeBytes,cifs.global.basic.writeAvgSize,cifs.global.usage.currentConnections,cifs.global.usage.currentOpenFiles
member_elements =
member_of = newSG

name = cifsOps-std
description = The cifs table system-defined group.
type = System-defined
member_stats = cifs.smb1.op,cifs.smb2.op
member_elements =
member_of =

name = diskVolumes-std
description = The disk volume table system-defined group.
type = System-defined
member_stats = store.diskVolume
member_elements =

name = metaVolumes-std
description = The meta volume table system-defined group.
type = System-defined
member_stats = store.logicalVolume.metaVolume
member_elements =
member_of =

name = netDevices-std
description = The net table system-defined group.
type = System-defined
member_stats = net.device
member_elements =
member_of =

name = newSG
description = newSG
type = User-defined
member_stats = cifs-std,nfs.v3.op,nfs.v4.op
member_elements =
member_of =

name = nfs-std
description = The nfs system-defined group.
type = System-defined
member_stats = nfs.totalCalls,nfs.basic.reads,nfs.basic.readBytes,nfs.basic.readAvgSize,nfs.basic.writes,nfs.basic.writeBytes,nfs.basic.writeAvgSize,nfs.currentThreads
member_elements =
member_of =

name = nfsOps-std
description = The nfs table system-defined group.
type = System-defined
member_stats = nfs.v2.op,nfs.v3.op,nfs.v4.op
member_elements =
member_of =

name = statgroup1
description = My first group
type = User-defined
member_stats = net.basic.inBytes,net.basic.outBytes,store.readBytes,store.writeBytes
member_elements =
member_of = statgroup2

name = statgroup2
description = My first group
type = User-defined
member_stats = net.basic.inBytes,net.basic.outBytes,store.readBytes,store.writeBytes,kernel.cpu.utilization.cpuUtil,statgroup1
member_elements =
member_of =

EXAMPLE #3
----------
To provide detailed information on a specified Statistics Group, type:

$ nas_stats -groups -info statsA
name = statsA
description = My group # 2
type = user-defined
member_stats = statpath1, statpath2, statpath3, statsC
member_elements =
member_of = statsB

EXAMPLE #4
----------
To create a statistics group called basic3, type:

$ nas_stats -groups -create basic3 -description "CPU and Memory" kernel.cpu.utilization.cpuUtil,kernel.memory.freeBytes
basic3 created successfully.

EXAMPLE #5
----------
To create a statistics group called statgroup2, type:

$ nas_stats -groups -create statgroup2 statgroup1,nfs,net
statgroup2 created successfully.

EXAMPLE #6
----------
To attempt to create a statgroup using a name that already exists, type:

$ nas_stats -groups -create statgroup1 -description "My first group" kernel.cpu.utilization.cpuUtil,net.basic.inBytes,net.basic.outBytes,store.readBytes,store.writeBytes
ERROR (13421969439): statgroup1 already exists.

EXAMPLE #7
----------
To modify a statgroup by specifying the new contents of the group, overriding the previous contents, type:

$ nas_stats -groups -modify statgroup2 cifs,nfs-std
statgroup2 modified successfully.

EXAMPLE #8
----------
To modify the description of a statgroup, type:

$ nas_stats -groups -modify basic1 -description "My basic group"
basic1 modified successfully.

EXAMPLE #9
----------
To rename a user-defined statgroup, type:

$ nas_stats -groups -modify statgroup2 -rename basic2
statgroup2 modified successfully.

EXAMPLE #10
-----------
To add to the member_stats list of a statgroup, type:

$ nas_stats -groups -add statgroup2 kernel.cpu.utilization.cpuUtil,statgroup1
Adding the following statistics:
... kernel.cpu.utilization.cpuUtil
... statgroup1
Statistics added to statgroup2 successfully.

EXAMPLE #11
-----------
To remove from the member_stats list of a statgroup, type:

$ nas_stats -groups -remove statgroup1 kernel.cpu.utilization.cpuUtil
Removing the following statistics:
... kernel.cpu.utilization.cpuUtil
Statistics removed from statgroup1 successfully.

EXAMPLE #12
-----------
To delete a statgroup, type:

$ nas_stats -groups -delete statgroup1
statgroup1 deleted successfully.

EXAMPLE #13
-----------
To delete statgroupA, which is referenced by other groups, type:

$ nas_stats -groups -delete statgroupA
statgroupA is used in group(s): mystats1, mystats2.
Clear statgroupA from other groups? [Y/N] Y
statgroupA deleted successfully.

EXAMPLE #14
-----------
To delete statgroupA, which is referenced by other groups, using the -Force option to skip the warning prompt, type:

$ nas_stats -groups -delete statgroupA -F
statgroupA is used in group(s): mystats1, mystats2.
statgroupA deleted successfully.

EXAMPLE #15
-----------
To recover the latest healthy (uncorrupted) copy of the statgroup database from the NAS database backups, type:

$ nas_stats -groups -database -recover
Latest healthy database modified last on Tue Apr 7 17:29:06 EDT 2009.
Any updates performed after the latest backup will be lost. Continue? [Y/N] Y
The nas_stats command recover operation is completed successfully.

EXAMPLE #16
-----------
To recover the latest healthy (uncorrupted) copy of the statgroup database from the NAS database backups using the -Force option to skip the warning prompt, type:

$ nas_stats -groups -database -recover -Force
Latest healthy database modified last on Tue Apr 7 17:29:06 EDT 2009.
The nas_stats command recover operation is completed successfully.

EXAMPLE #17
-----------
To check the health status of the Statistics Groups database, type:

$ nas_stats -groups -database -verify
Database is healthy.

----------------------------------------------------------------
Last modified: May 10, 2011 4:30 pm.
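The naming rules under -create (a 255-character limit; space, slash, backslash, quote, double quote, and comma are illegal) can be checked by a wrapper before calling nas_stats. A minimal sketch, with an illustrative helper name that is not part of the product:

```shell
#!/bin/sh
# valid_statgroup_name: return 0 if the argument satisfies the
# statgroup naming rules described above, nonzero otherwise.
valid_statgroup_name() {
    name=$1
    # reject empty names and names longer than 255 characters
    [ -n "$name" ] || return 1
    [ "${#name}" -le 255 ] || return 1
    # reject space, slash, backslash, quote, double quote, and comma
    if printf '%s' "$name" | grep -q '[ /\\"'"'"',]'; then
        return 1
    fi
    return 0
}
```

A wrapper could call this before `nas_stats -groups -create` and fail early with its own message instead of relying on the server-side error.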
nas_storage

Controls storage system access and performs some management tasks.

SYNOPSIS
--------
nas_storage -list | -info {-all|
for block storage used by NAS.

The -group option deletes the specified disk group. This deletes and unbinds the LUNs in the RAID groups used by VNX for file. If the RAID group contains other LUNs that are not allocated to the VNX, the RAID group is not unbound. If removing the VNX LUNs leaves the RAID group empty, the RAID group is destroyed.

-sync {-all|
example, Symmetrix storage systems appear as 002804000190-003C.

EXAMPLE #1
----------
For the VNX storage system, to list all attached storage systems, type:

$ nas_storage -list
id   acl  name             serial_number
1    0    APM00042000818   APM00042000818

For the VNX with a Symmetrix storage system, to list all attached storage systems, type:

$ nas_storage -list
id   acl  name             serial_number
1    0    000187940260     000187940260

Where:

Value          Definition
-----          ----------
id             ID number of the attached storage system.
acl            Access control level value assigned to the attached storage system.
name           Name assigned to the attached storage system.
serial_number  Serial number of the attached storage system.

EXAMPLE #2
----------
For the VNX storage system, to display information for the attached storage system, type:

$ nas_storage -info APM00042000818
id = 1
arrayname = APM00042000818
name = APM00042000818
type = Clariion
model_type = RACKMOUNT
model_num = 700
db_sync_time = 1131986667 == Mon Nov 14 11:44:27 EST 2005
API_version = V6.0-629
num_disks = 60
num_devs = 34
num_pdevs = 8
num_storage_grps = 1
num_raid_grps = 16
cache_page_size = 8
wr_cache_mirror = True
low_watermark = 60
high_watermark = 80
unassigned_cache = 0
is_local = True
failed_over = False
captive_storage = False
Active Software
-AccessLogix = -
FLARE-Operating-Environment = 02.16.700.5.004
-NavisphereManager = -
Storage Processors
SP Identifier = A
signature = 1057303 microcode_version = 2.16.700.5.004 serial_num = LKE00040201171 prom_rev = 3.30.00 agent_rev = 6.16.0 (4.80) phys_memory = 3967 sys_buffer = 773 read_cache = 122 write_cache = 3072 free_memory = 0 raid3_mem_size = 0 failed_over = False hidden = False network_name = spa ip_address = 172.24.102.5 subnet_mask = 255.255.255.0 gateway_address = 172.24.102.254 num_disk_volumes = 20 - root_disk root_ldisk d3 d4 d5 d6 d7 d8 d9 d10 d11 d12 d13 d14 d15 d16 d17 d18 d19 d20 Port Information Port 1 uid = 50:6:1:60:B0:60:1:CC:50:6:1:61:30:60:1:CC link_status = UP port_status = ONLINE switch_present = True switch_uid = 10:0:8:0:88:A0:36:F3:20:42:8:0:88:A0:36:F3 sp_source_id = 6373907 <...removed...> Port 2 uid = 50:6:1:60:B0:60:1:CC:50:6:1:62:30:60:1:CC link_status = UP port_status = ONLINE switch_present = True switch_uid = 10:0:8:0:88:A0:36:F3:20:41:8:0:88:A0:36:F3 sp_source_id = 6373651 SP Identifier = B signature = 1118484 microcode_version = 2.16.700.5.004 serial_num = LKE00041700812 prom_rev = 3.30.00 agent_rev = 6.16.0 (4.80) phys_memory = 3967 sys_buffer = 773 read_cache = 122 write_cache = 3072 free_memory = 0 raid3_mem_size = 0 failed_over = False hidden = False network_name = spb ip_address = 172.24.102.6 subnet_mask = 255.255.255.0 gateway_address = 172.24.102.254 num_disk_volumes = 0 Port Information Port 1 uid = 50:6:1:60:B0:60:1:CC:50:6:1:69:30:60:1:CC link_status = UP port_status = ONLINE switch_present = True switch_uid = 10:0:8:0:88:A0:36:F3:20:3E:8:0:88:A0:36:F3
sp_source_id = 6372883 <...removed...> Port 2 uid = 50:6:1:60:B0:60:1:CC:50:6:1:6A:30:60:1:CC link_status = UP port_status = ONLINE switch_present = True switch_uid = 10:0:8:0:88:A0:36:F3:20:3D:8:0:88:A0:36:F3 sp_source_id = 6372627 Storage Groups id = A4:74:8D:50:6E:A1:D9:11:96:E1:8:0:1B:43:5E:4F name = ns704g-cs100 num_hbas = 18 num_devices = 24 shareable = True hidden = False Hosts uid = 50:6:1:60:90:60:3:49:50:6:1:60:10:60:3:49 storage_processor = B port = 1 server = server_4 uid = 50:6:1:60:90:60:3:49:50:6:1:60:10:60:3:49 storage_processor = A port = 0 server = server_4 uid = 50:6:1:60:80:60:4:F0:50:6:1:61:0:60:4:F0 storage_processor = B port = 0 server = server_2 <...removed...> uid = 50:6:1:60:80:60:4:F0:50:6:1:68:0:60:4:F0 storage_processor = B port = 1 server = server_3 uid = 20:0:0:0:C9:2B:98:77:10:0:0:0:C9:2B:98:77 storage_processor = B port = 0 uid = 20:0:0:0:C9:2B:98:77:10:0:0:0:C9:2B:98:77 storage_processor = A port = 0 ALU HLU ------------ 0000 -> 0000 0001 -> 0001 0002 -> 0002 0003 -> 0003 0004 -> 0004 0005 -> 0005 0018 -> 0018 0019 -> 0019 0020 -> 0020 0021 -> 0021 0022 -> 0022 0023 -> 0023 0024 -> 0024 0025 -> 0025 0026 -> 0026 0027 -> 0027
0028 -> 0028 0029 -> 0029 0030 -> 0030 0031 -> 0031 0032 -> 0032 0033 -> 0033 0034 -> 0034 0035 -> 0035 Disk Groups id = 0000 storage profiles = 2 - clar_r5_performance,cm_r5_performance raid_type = RAID5 logical_capacity = 1068997528 num_spindles = 5 - 0_0_0 0_0_1 0_0_2 0_0_3 0_0_4 num_luns = 6 - 0000 0001 0002 0003 0004 0005 num_disk_volumes = 6 - root_disk root_ldisk d3 d4 d5 d6 spindle_type = FC bus = 0 raw_capacity = 1336246910 used_capacity = 62914560 free_capacity = 1006082968 hidden = False <...removed...> id = 2_0_14 product = ST314670 CLAR146 revision = 6A06 serial = 3KS02RHM capacity = 280346624 used_capacity = 224222822 disk_group = 0014 hidden = False type = FC bus = 2 enclosure = 0 slot = 14 vendor = SEAGATE remapped_blocks = -1 state = ENABLED For the VNX with a Symmetrix storage system, to display information for the attached storage system, type: $ nas_storage -info 000187940260 id = 1 serial_number = 000187940260 name = 000187940260 type = Symmetrix ident = Symm6 model = 800-M2 microcode_version = 5670 microcode_version_num = 16260000 microcode_date = 03012004 microcode_patch_level = 69 microcode_patch_date = 03012004 symmetrix_pwron_time = 1130260200 == Tue Oct 25 13:10:00 EDT 2005 db_sync_time = 1133215405 == Mon Nov 28 17:03:25 EST 2005 db_sync_bcv_time = 1133215405 == Mon Nov 28 17:03:25 EST 2005 db_sync_rdf_time = 1133215405 == Mon Nov 28 17:03:25 EST 2005 last_ipl_time = 1128707062 == Fri Oct 7 13:44:22 EDT 2005 last_fast_ipl_time = 1130260200 == Tue Oct 25 13:10:00 EDT 2005 API_version = V6.0-629 cache_size = 32768 cache_slot_count = 860268 max_wr_pend_slots = 180000 max_da_wr_pend_slots = 90000 max_dev_wr_pend_slots = 6513
permacache_slot_count = 0 num_disks = 60 num_symdevs = 378 num_pdevs = 10 sddf_configuration = ENABLED config_checksum = 0x01ca544 num_powerpath_devs = 0 config_crc = 0x07e0ba1e6 is_local = True Physical Devices /nas/dev/c0t0l15s2 /nas/dev/c0t0l15s3 /nas/dev/c0t0l15s4 /nas/dev/c0t0l15s6 /nas/dev/c0t0l15s7 /nas/dev/c0t0l15s8 /nas/dev/c16t0l15s2 /nas/dev/c16t0l15s3 /nas/dev/c16t0l15s4 /nas/dev/c16t0l15s8 Director Table type num slot ident stat scsi vols ports p0_stat p1_stat p2_stat p3_stat ---- --- ---- ----- ---- ---- ---- ----- ------- ------- ------- ------- DA 1 1 DF-1A On NA 21 2 On On NA NA DA 2 2 DF-2A On NA 8 2 On On NA NA DA 15 15 DF-15A On NA 21 2 On On NA NA DA 16 16 DF-16A On NA 8 2 On On NA NA DA 17 1 DF-1B On NA 8 2 On On NA NA DA 18 2 DF-2B On NA 21 2 On On NA NA DA 31 15 DF-15B On NA 152 2 On On NA NA DA 32 16 DF-16B On NA 165 2 On On NA NA FA 33 1 FA-1C On NA 0 2 On On NA NA FA 34 2 FA-2C On NA 0 2 On On NA NA FA 47 15 FA-15C On NA 0 2 On On NA NA FA 48 16 FA-16C On NA 0 2 On On NA NA FA 49 1 FA-1D On NA 0 2 On On NA NA Note: This is a partial listing due to the length of the outputs. EXAMPLE #3 ---------- To rename a storage system, type: $ nas_storage -rename APM00042000818 cx700_1 id = 1 serial_number = APM00042000818 name = cx700_1 acl = 0 EXAMPLE #4 ---------- To set the access control level for the storage system cx700_1, type: $ nas_storage -acl 1000 cx700_1 id = 1 serial_number = APM00042000818 name = cx700_1 acl = 1000, owner=nasadmin, ID=201 Note: The value 1000 specifies nasadmin as the owner and gives read, write, and delete access only to nasadmin. EXAMPLE #5
----------
To change the existing password on the VNX for block, type:

$ nas_storage -modify APM00070204288 -security -username nasadmin -password nasadmin -newpassword abc
Changing password on APM00070204288

EXAMPLE #6
----------
To avoid specifying passwords in clear text on the command line, type:

$ nas_storage -modify APM00070204288 -security -newpassword
Enter the Global CLARiiON account information
Username: nasadmin
Password: ***
Retype your response to validate
Password: ***
New Password
Password: ********
Retype your response to validate
Password: ********
Changing password on APM00070204288
Done

EXAMPLE #7
----------
To fail back a VNX for block, type:

$ nas_storage -failback cx700_1
id = 1
serial_number = APM00042000818
name = cx700_1
acl = 1000, owner=nasadmin, ID=201

EXAMPLE #8
----------
To display information for a VNX for block and turn synchronization off, type:

$ nas_storage -info cx700_1 -option sync=no
id = 1
arrayname = APM00042000818
name = cx700_1
type = Clariion
model_type = RACKMOUNT
model_num = 700
db_sync_time = 1131986667 == Mon Nov 14 11:44:27 EST 2005
API_version = V6.0-629
num_disks = 60
num_devs = 34
num_pdevs = 8
num_storage_grps = 1
num_raid_grps = 16
cache_page_size = 8
wr_cache_mirror = True
low_watermark = 60
high_watermark = 80
unassigned_cache = 0
is_local = True
failed_over = False
captive_storage = False
Active Software
-AccessLogix = -
FLARE-Operating-Environment = 02.16.700.5.004
-NavisphereManager = -
Storage Processors
SP Identifier = A signature = 1057303 microcode_version = 2.16.700.5.004 serial_num = LKE00040201171 prom_rev = 3.30.00 agent_rev = 6.16.0 (4.80) phys_memory = 3967 sys_buffer = 773 read_cache = 122 write_cache = 3072 free_memory = 0 raid3_mem_size = 0 failed_over = False hidden = False network_name = spa ip_address = 172.24.102.5 subnet_mask = 255.255.255.0 gateway_address = 172.24.102.254 num_disk_volumes = 20 - root_disk root_ldisk d3 d4 d5 d6 d7 d8 d9 d10 d11 d12 d13 d14 d15 d16 d17 d18 d19 d20 Port Information Port 1 uid = 50:6:1:60:B0:60:1:CC:50:6:1:61:30:60:1:CC link_status = UP port_status = ONLINE switch_present = True switch_uid = 10:0:8:0:88:A0:36:F3:20:42:8:0:88:A0:36:F3 sp_source_id = 6373907 <...removed...> Port 2 uid = 50:6:1:60:B0:60:1:CC:50:6:1:62:30:60:1:CC link_status = UP port_status = ONLINE switch_present = True switch_uid = 10:0:8:0:88:A0:36:F3:20:41:8:0:88:A0:36:F3 sp_source_id = 6373651 SP Identifier = B signature = 1118484 microcode_version = 2.16.700.5.004 serial_num = LKE00041700812 prom_rev = 3.30.00 agent_rev = 6.16.0 (4.80) phys_memory = 3967 sys_buffer = 773 read_cache = 122 write_cache = 3072 free_memory = 0 raid3_mem_size = 0 failed_over = False hidden = False network_name = spb ip_address = 172.24.102.6 subnet_mask = 255.255.255.0 gateway_address = 172.24.102.254 num_disk_volumes = 0 Port Information Port 1 uid = 50:6:1:60:B0:60:1:CC:50:6:1:69:30:60:1:CC link_status = UP port_status = ONLINE switch_present = True
switch_uid = 10:0:8:0:88:A0:36:F3:20:3E:8:0:88:A0:36:F3 sp_source_id = 6372883 <...removed...> Port 2 uid = 50:6:1:60:B0:60:1:CC:50:6:1:6A:30:60:1:CC link_status = UP port_status = ONLINE switch_present = True switch_uid = 10:0:8:0:88:A0:36:F3:20:3D:8:0:88:A0:36:F3 sp_source_id = 6372627 Storage Groups id = A4:74:8D:50:6E:A1:D9:11:96:E1:8:0:1B:43:5E:4F name = ns704g-cs100 num_hbas = 18 num_devices = 24 shareable = True hidden = False Hosts uid = 50:6:1:60:90:60:3:49:50:6:1:60:10:60:3:49 storage_processor = B port = 1 server = server_4 uid = 50:6:1:60:90:60:3:49:50:6:1:60:10:60:3:49 storage_processor = A port = 0 server = server_4 uid = 50:6:1:60:80:60:4:F0:50:6:1:61:0:60:4:F0 storage_processor = B port = 0 server = server_2 <...removed...> uid = 50:6:1:60:80:60:4:F0:50:6:1:68:0:60:4:F0 storage_processor = B port = 1 server = server_3 uid = 20:0:0:0:C9:2B:98:77:10:0:0:0:C9:2B:98:77 storage_processor = B port = 0 uid = 20:0:0:0:C9:2B:98:77:10:0:0:0:C9:2B:98:77 storage_processor = A port = 0 ALU HLU ------------ 0000 -> 0000 0001 -> 0001 0002 -> 0002 0003 -> 0003 0004 -> 0004 0005 -> 0005 0018 -> 0018 0019 -> 0019 0020 -> 0020 0021 -> 0021 0022 -> 0022 0023 -> 0023 0024 -> 0024 0025 -> 0025
0026 -> 0026 0027 -> 0027 0028 -> 0028 0029 -> 0029 0030 -> 0030 0031 -> 0031 0032 -> 0032 0033 -> 0033 0034 -> 0034 0035 -> 0035 Disk Groups id = 0000 storage profiles = 2 - clar_r5_performance,cm_r5_performance raid_type = RAID5 logical_capacity = 1068997528 num_spindles = 5 - 0_0_0 0_0_1 0_0_2 0_0_3 0_0_4 num_luns = 6 - 0000 0001 0002 0003 0004 0005 num_disk_volumes = 6 - root_disk root_ldisk d3 d4 d5 d6 spindle_type = FC bus = 0 raw_capacity = 1336246910 used_capacity = 62914560 free_capacity = 1006082968 hidden = False <...removed...> id = 0205 storage profiles = 0 raid_type = SPARE logical_capacity = 622868992 num_spindles = 1 - 0_1_0 num_luns = 1 - 0205 num_disk_volumes = 0 spindle_type = ATA bus = 0 raw_capacity = 622868992 used_capacity = 622868992 free_capacity = 0 hidden = False Spindles id = 0_0_0 product = ST314670 CLAR146 revision = 6A06 serial = 3KS088SQ capacity = 280346624 used_capacity = 12582912 disk_group = 0000 hidden = False type = FC bus = 0 enclosure = 0 slot = 0 vendor = SEAGATE remapped_blocks = -1 state = ENABLED <...removed...> id = 2_0_14 product = ST314670 CLAR146 revision = 6A06 serial = 3KS02RHM capacity = 280346624 used_capacity = 224222822 disk_group = 0014 hidden = False
type = FC
bus = 2
enclosure = 0
slot = 14
vendor = SEAGATE
remapped_blocks = -1
state = ENABLED

Note: This is a partial display due to the length of the outputs.

EXAMPLE #9
----------
To delete a storage system with no attached disks, type:

$ nas_storage -delete APM00035101740
id = 0
serial_number = APM00035101740
name = APM00035101740
acl = 0

EXAMPLE #10
-----------
To turn synchronization on for all systems, type:

$ nas_storage -sync -all
done

EXAMPLE #11
-----------
To perform a health check on the storage system, type:

$ nas_storage -check -all
Discovering storage (may take several minutes)
done

EXAMPLE #12
-----------
To set the access control level for the storage system APM00042000818, type:

$ nas_storage -acl 1432 APM00042000818
id = 1
serial_number = APM00042000818
name = APM00042000818
acl = 1432, owner=nasadmin, ID=201

Note: The value 1432 specifies nasadmin as the owner and gives users with an access level of at least observer read-only access, users with an access level of at least operator read/write access, and users with an access level of at least admin read/write/delete access.

EXAMPLE #13
-----------
To modify the IP address of the VNX for block, type:

$ nas_storage -modify APM00072303347 -network -spa 10.6.4.225
Changing IP address for APM00072303347
Discovering storage (may take several minutes)
done
EXAMPLE #14
-----------
To reset the hostname, type:

$ nas_storage -resetssv
done

------------------------------------------------------
Last modified: July 26, 2011 12:35 pm.
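The tabular nas_storage -list output shown in EXAMPLE #1 (id, acl, name, serial_number columns) is easy to consume from scripts. A minimal sketch that prints the serial numbers, skipping the header row; the helper name is illustrative, not part of the product:

```shell
#!/bin/sh
# storage_serials: print the serial_number column (4th field) from
# `nas_storage -list` output read on stdin, skipping the header row.
# Usage: nas_storage -list | storage_serials
storage_serials() {
    awk 'NR > 1 && NF >= 4 { print $4 }'
}
```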
nas_syncrep

Manages Virtual Data Mover (VDM) synchronous replication sessions.

The -list, -info, and -create switches of this command can be executed on both the active and standby systems. Execute the -delete switch on the active system. Execute the -reverse, -failover, and -Clean switches on the standby system.

SYNOPSIS
--------
nas_syncrep -list | -info { -all |
Specifies the name of an existing source sync-replicable VDM to replicate. -remote_system
10030 my_syncrep2 my_vdm2 <--my_system1 in_sync

EXAMPLE #2
----------
To display information about a synchronous replication session by ID, type:

$ nas_syncrep -i id=4096
id = 4096
name = LY2E6_session1
vdm_name = LY2E6_vdm1
syncrep_role = active
local_system = LY2E6_CS0
local_pool = src_sg_1
local_mover = server_2
remote_system = L9P36_CS0
remote_pool = dst_sg_1
remote_mover = server_2
device_group = 61_260_60_125
session_status = in_sync

EXAMPLE #3
----------
To create a synchronous replication session, type:

$ nas_syncrep -create LY2E6_session1 -vdm LY2E6_vdm1 -remote_system L9P36_CS0 -remote_pool l9p36_marketing_sg -remote_mover server_2 -network_devices cge0:cge0
Now validating params... done
Now creating LUN mapping... done
Now creating remote network interface(s)... done
Now marking remote pool as standby pool... done
Now updating local disk type... done
Now updating remote disk type... done
Now generating session entry... done
done

EXAMPLE #4
----------
To delete a synchronous replication session, type:

$ nas_syncrep -delete my_syncrep1
WARNING: Please do not perform any operation on my_syncrep1 on standby system until delete is done.
Deleting... done
done

EXAMPLE #5
----------
To reverse a synchronous replication session, type:

$ nas_syncrep -reverse id=4315
WARNING: There will be a period of Data Unavailability during the reverse operation, and, after the reverse operation, the VDM/FS(s)/checkpoint(s) protected by the sync replication session will be reversed to the local site. Are you sure you want to proceed? [yes or no] yes
Now doing precondition check... done: 19 s
Now doing health check... done: 11 s
Now cleaning local... done: 1 s
Service outage start......
Now turning down remote network interface(s)... done: 8 s
Now switching the session (may take several minutes)... done: 7 s
Now importing sync replica of NAS database... done: 16 s
Now creating VDM... done: 5 s
Now importing VDM settings... done: 0 s
Now mounting exported FS(s)/checkpoint(s)... done: 13 s
Now loading VDM... done: 3 s
Now turning up local network interface(s)... done: 0 s
Service outage end: 52 s
Now mounting unexported FS(s)/checkpoint(s)... done: 0 s
Now importing schedule(s)... done: 0 s
Now unloading remote VDM/FS(s)/checkpoint(s)... done: 16 s
Now cleaning remote... done: 17 s
Elapsed time: 116 s
done

EXAMPLE #6
----------
To fail over a synchronous replication session, type:

$ nas_syncrep -failover id=4560
WARNING: You have just issued the nas_syncrep -failover command. Verify whether the peer system or any of its file storage resources are accessible. If they are, then you should issue the nas_syncrep -reverse command instead. Running the nas_syncrep -failover command while the peer system is still accessible could result in Data Unavailability or Data Loss. Are you sure you want to proceed? [yes or no] yes
Now doing precondition check... done: 30 s
Now doing health check... done: 7 s
Now cleaning local... done: 1 s
Now switching the session (may take several minutes)... done: 4 s
Now importing sync replica of NAS database... done: 15 s
Now creating VDM... done: 5 s
Now importing VDM settings... done: 0 s
Now mounting exported FS(s)/checkpoint(s)... done: 3 s
Now loading VDM... done: 4 s
Now turning up local network interface(s)... done: 0 s
Service outage end: 69 s
Now mounting unexported FS(s)/checkpoint(s)... done: 0 s
Now importing schedule(s)... done: 0 s
Elapsed time: 69 s
done

EXAMPLE #7
----------
To clean a synchronous replication session, type:

[nasadmin@L9P36_CS0 ]$ nas_syncrep -Clean LY2E6_session1
WARNING: You have just issued the nas_syncrep -Clean command. This may result in a reboot of the original source Data Mover that the VDM was failed over from. Verify whether or not you have working VDM(s)/FS(s)/checkpoint(s) on this Data Mover and plan for this reboot accordingly. Running the nas_syncrep -Clean command while you have working VDM(s)/FS(s)/checkpoint(s) on this Data Mover will result in Data Unavailability during the reboot. Are you sure you want to proceed? [yes or no] yes
Now cleaning session LY2E6_session1 (may take several minutes)... done
Now starting session LY2E6_session1... done
done

EXAMPLE #8
----------
To refresh a synchronous replication session, type:

$ nas_syncrep -Refresh_pairs LY2E6_session1
WARNING: You have just issued the nas_syncrep -Refresh_pairs command. Please do not perform any operation(s) on the remote (R2) side during the same. Also note that the operation cannot be reverted. Are you sure you want to proceed? [yes or no] yes
Now refreshing session LY2E6_session1... done

EXAMPLE #9
----------
To perform a health check of the VDM synchronous replication session, type:

$ nas_syncrep -health_check
Health check starting ...
Initializing ...
Check No. | Check Name | Message ID | Status | Brief Description
Check ( 1/12 ) | Check SRDF Group State | 34906964006 | PASS | SRDF Group online.
Check ( 2/12 ) | Check SRDF session(s) status | 34906964010 | PASS | SRDF session in sync.
Check ( 3/12 ) | Check VDM Sync session(s) disktype(s) | 34906964014 | PASS | VDM Sync session disktype ok.
Check ( 4/12 ) | File system has 128K free space | 34906964018 | PASS | File system has at least 128K free space.
Check ( 5/12 ) | Network Configuration Check | 34906963984 | PASS | Interfaces correct.
Check ( 6/12 ) | Check SRDF link status | 34906963989 | PASS | Remote SRDF is pingable.
Check ( 7/12 ) | Data Mover status | 34906963992 | PASS | Data movers are in the correct state.
Check ( 8/12 ) | eNAS, SE, Enginuity version check | 34906963993 | PASS | eNAS, SE and microcode version check passed.
Check ( 9/12 ) | Check for filesystem ID consistency | 34906963997 | PASS | No conflict in file system IDs.
Check ( 10/12 ) | Pool to SRDF session mapping | 34906964014 | PASS | VDM Sync session disktypes in correct state.
Check ( 11/12 ) | Check for director ports online/offline | 34906963999 | PASS | Directors and ports online.
Check ( 12/12 ) | Check for Equivalent Data Services | 34906964026 | PASS | Data services match.
Health check complete. Check /nas/log/nas_syncrep.log for more details. Use nas_message -i
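Because each row of the -health_check report uses a fixed pipe-delimited layout with the status in the fourth column, a wrapper script can detect a non-PASS result mechanically. A minimal sketch, assuming only the row layout shown above (the helper name is illustrative):

```shell
#!/bin/sh
# syncrep_checks_ok: read `nas_syncrep -health_check` output on stdin;
# return 0 if every "Check (...)" row reports PASS, nonzero otherwise.
syncrep_checks_ok() {
    awk -F'|' '
        /^Check \(/ {
            gsub(/ /, "", $4)          # strip padding around the status column
            if ($4 != "PASS") bad++
        }
        END { exit bad ? 1 : 0 }
    '
}
```

A pre-failover script could run `nas_syncrep -health_check | syncrep_checks_ok || exit 1` to abort when any check fails.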
nas_task

Manages in-progress or completed tasks.

SYNOPSIS
--------
nas_task -list [-remote_system {
This command can be executed from the source or destination side. Use this command when the source and destination VNX systems cannot communicate. You should run this command on both sides. [-remote_system {
ID    Task State  Originator     Start Time                    Description                  Schedule  Remote System
4241  Running     nasadmin@cli+  Mon Dec 17 14:21:35 EST 2007  Create Replication ufs1_r+            cs100
4228  Succeeded   nasadmin@cli+  Mon Dec 17 14:04:02 EST 2007  Delete task 4214.            NONE:     cs100
4177  Failed      nasadmin@cli+  Mon Dec 17 13:59:26 EST 2007  Create Replication ufs1_r+            cs100
4150  Succeeded   nasadmin@cli+  Mon Dec 17 13:55:39 EST 2007  Delete task 4136.            NONE:     cs100
4127  Succeeded   nasadmin@cli+  Mon Dec 17 11:38:32 EST 2007  Delete task 4113.            NONE:     cs100
4103  Succeeded   nasadmin@cli+  Mon Dec 17 11:21:00 EST 2007  Delete task 4098.            NONE:     cs100
4058  Succeeded   nasadmin@cli+  Fri Dec 14 16:43:23 EST 2007  Switchover Replication       NONE.     cs100
2277  Succeeded   nasadmin@cli+  Fri Dec 14 16:42:08 EST 2007  Reverse Replication          NONE.     cs110
2270  Succeeded   nasadmin@cli+  Fri Dec 14 16:40:29 EST 2007  Start Replication            NONE.     cs110
2265  Failed      nasadmin@cli+  Fri Dec 14 16:40:11 EST 2007  Start Replication            NONE.     cs110

EXAMPLE #1 provides a description of the outputs.

EXAMPLE #3
----------
To abort task 4267 running locally on server_3, type:

$ nas_task -abort 4267 -mover server_3
OK

EXAMPLE #4
----------
To delete the existing task 4267, type:

$ nas_task -delete 4267
OK

----------------------------------------------------------------
Last modified: May 10, 2011 5:00 pm
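Since a task's state appears as the second column of the -list output shown above, a script can extract it while waiting for a long-running task to finish. A minimal sketch of the extraction step (the helper name is illustrative, not part of the product):

```shell
#!/bin/sh
# task_state: print the State column for a given task ID from
# `nas_task -list` output read on stdin.
# Usage: nas_task -list | task_state 4241
task_state() {
    awk -v id="$1" '$1 == id { print $2 }'
}
```

A polling loop could call this repeatedly and stop once the printed state is no longer Running.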
nas_version

Displays the software version running on the Control Station.

SYNOPSIS
--------
nas_version [-h|-l]

DESCRIPTION
-----------
nas_version displays the Control Station version in long form or short form. When used during a software upgrade, it informs the user about the upgrade in progress.

OPTIONS
-------
No arguments
    Displays the software version running on the Control Station.
-h
    Displays command usage.
-l
    Displays detailed software version information for the Control Station.

EXAMPLE #1
----------
To display the software version running on the Control Station, type:
$ nas_version
5.6.25-0

EXAMPLE #2
----------
To display the system output during a software upgrade, type:
$ nas_version
5.6.19-0
Warning!!Upgrade is in progress from 5.6.19-0 to 5.6.20-0
Warning!!Please log off IMMEDIATELY if you are not upgrading the system

EXAMPLE #3
----------
To display the usage for nas_version, type:
$ nas_version -h
usage: /nas/bin/nas_version [-h|-l]
  -h  help
  -l  long_format

EXAMPLE #4
----------
To display detailed software version information for the Control Station, type:
$ nas_version -l
Name        : emcnas                  Relocations: /nas
Version     : 5.6.19                  Vendor: EMC
Release     : 0                       Build Date: Tue 19 Dec 2006 08:53:31 PM EST
Size        : 454239545               License: EMC Copyright
Signature   : (none)
Packager    : EMC Corporation
URL         : http://www.emc.com
Summary     : EMC nfs base install
Description : EMC nfs base install

EXAMPLE #5
----------
To display detailed software version information for the Control Station during a software upgrade, type:
$ nas_version -l
Name        : emcnas                  Relocations: /nas
Version     : 5.6.19                  Vendor: EMC
Release     : 0                       Build Date: Wed 14 Mar 2007 12:36:55 PM EDT
Size        : 500815102               License: EMC Copyright
Signature   : (none)
Packager    : EMC Corporation
URL         : http://www.emc.com
Summary     : EMC nfs base install
Description : EMC nfs base install
Warning!!Upgrade is in progress from 5.6.19-0 to 5.6.20-0
Warning!!Please log off IMMEDIATELY if you are not upgrading the system

------------------------------------------------------------
Last modified: May 10, 2011 5:15 pm.
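Scripts that gate maintenance actions on the Control Station can key off the upgrade warning shown above. A minimal sketch, assuming only the warning text from the examples (the helper name is ours, not part of the product):

```shell
# Hypothetical helper: report whether nas_version output contains the
# in-progress-upgrade warning (exact text taken from the examples above).
upgrade_in_progress() {
  printf '%s\n' "$1" | grep -q 'Upgrade is in progress'
}

# Typical use on a live system:  upgrade_in_progress "$(nas_version)"
# Demonstration against a captured sample:
sample='5.6.19-0
Warning!!Upgrade is in progress from 5.6.19-0 to 5.6.20-0'
if upgrade_in_progress "$sample"; then
  echo 'upgrade running'
fi
```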
nas_volume

Manages the volume table.

SYNOPSIS
--------
nas_volume -list | -delete
Creates a volume configuration from the specified volumes. Unless otherwise specified, volumes are automatically created as metavolumes. [-name
9    y   1   0   root_slice_1    1   10
10   y   3   0   root_volume_1   2   1
11   y   1   0   root_slice_2    1   12
12   y   3   0   root_volume_2   2   2
13   y   1   0   root_slice_3    1   14
...
Note: This is a partial listing due to the length of the outputs.

Where:
Value   Definition
-----   ----------
id      ID of the volume.
inuse   Whether the volume is used.
type    Type assigned to the volume. Available types are: 1=slice, 2=stripe, 3=meta, 4=disk, and 100=pool.
acl     Access control level assigned to the volume.
name    Name assigned to the volume.
cltype  The client type of the volume. Available values are:
        0 - If the clid field is not empty then the client is a slice.
        1 - The client is another volume (meta, stripe, volume_pool).
        2 - The client is a file system.
clid    ID of the client.

EXAMPLE #2
----------
To create a metavolume named mtv1 on disk volume d7, type:
$ nas_volume -name mtv1 -create d7
id         = 146
name       = mtv1
acl        = 0
in_use     = False
type       = meta
volume_set = d7
disks      = d7

Where:
Value       Definition
-----       ----------
id          ID of the volume.
name        Name assigned to the volume.
acl         Access control level value assigned to the volume.
in_use      Whether the volume is used.
type        Type assigned to the volume. Types are meta, stripe, slice, disk, and pool.
volume_set  Name assigned to the volume.
disks       Disks used to build a file system.

EXAMPLE #3
----------
To display configuration information for mtv1, type:
$ nas_volume -info mtv1
id         = 146
name       = mtv1
acl        = 0
in_use     = False
type       = meta
volume_set = d7
disks      = d7

EXAMPLE #4
----------
To rename mtv1 to mtv2, type:
$ nas_volume -rename mtv1 mtv2
id         = 146
name       = mtv2
acl        = 0
in_use     = False
type       = meta
volume_set = d7
disks      = d7

EXAMPLE #5
----------
To create a stripe volume named stv1, with a stripe size of 32768 bytes, on disk volumes d10, d12, d13, and d15, type:
$ nas_volume -name stv1 -create -Stripe 32768 d10,d12,d13,d15
id          = 147
name        = stv1
acl         = 0
in_use      = False
type        = stripe
stripe_size = 32768
volume_set  = d10,d12,d13,d15
disks       = d10,d12,d13,d15

Where:
Value        Definition
-----        ----------
stripe_size  Specified size of the stripe volume.

EXAMPLE #6
----------
To clone mtv1, type:
$ nas_volume -Clone mtv1
id         = 146
name       = mtv1
acl        = 0
in_use     = False
type       = meta
volume_set = d7
disks      = d7
id         = 148
name       = v148
acl        = 0
in_use     = False
type       = meta
volume_set = d8
disks      = d8

EXAMPLE #7
----------
To clone the volume mtv1 and set the disk type to BCV, type:
$ /nas/sbin/rootnas_volume -Clone mtv1 -option disktype=BCV
id         = 322
name       = mtv1
acl        = 0
in_use     = False
type       = meta
volume_set = d87
disks      = d87
id         = 323
name       = v323
acl        = 0
in_use     = False
type       = meta
volume_set = rootd99
disks      = rootd99

EXAMPLE #8
----------
To extend mtv1 with mtv2, type:
$ nas_volume -xtend mtv1 mtv2
id         = 146
name       = mtv1
acl        = 0
in_use     = False
type       = meta
volume_set = d7,mtv2
disks      = d7,d8

EXAMPLE #9
----------
To display the size of mtv1, type:
$ nas_volume -size mtv1
total = 547418 avail = 547418 used = 0 ( 0% ) (sizes in MB)

Where:
Value  Definition
-----  ----------
total  Total size of the volume.
avail  Amount of unused space on the volume.
used   Amount of space used on the volume.

EXAMPLE #10
-----------
To set the access control level for the metavolume mtv1, type:
$ nas_volume -acl 1432 mtv1
id         = 125
name       = mtv1
acl        = 1432, owner=nasadmin, ID=201
in_use     = False
type       = meta
volume_set = d7,mtv2
disks      = d7,d8

Note: The value 1432 specifies nasadmin as the owner and gives users with an access level of at least observer read-only access, users with an access level of at least operator read/write access, and users with an access level of at least admin read/write/delete access.

EXAMPLE #11
-----------
To delete mtv1, type:
$ nas_volume -delete mtv1
id         = 146
name       = mtv1
acl        = 1432, owner=nasadmin, ID=201
in_use     = False
type       = meta
volume_set = d7,mtv2
disks      = d7,d8

------------------------------------------------------------------------
Last modified: April 29 2011, 3:15 pm.
FS CLI Commands
This chapter lists the eNAS command set provided for managing, configuring, and monitoring file systems. The commands are prefixed with fs and appear alphabetically. The command line syntax (Synopsis), a description of the options, and examples of usage are provided for each command.
fs_ckpt
fs_dedupe
fs_dhsm
fs_group
fs_rdf
fs_timefinder
fs_ckpt

Manages checkpoints using the EMC SnapSure functionality.

SYNOPSIS
--------
fs_ckpt {
disabled. maxsavsize=
%disabled. The default for
id  ckpt_name  inuse  fullmark  total_savvol_used  base  ckpt_usage_on_savvol

EXAMPLE #2
----------
To display all the checkpoints including internal checkpoints for the file system fs4, type:
$ fs_ckpt fs4 -list -all
id    ckpt_name                   creation_time            inuse  fullmark  total_savvol_used  ckpt_usage_on_savvol
1401  root_rep_ckpt_1398_21625_1  05/26/2008-16:11:10-EDT  y      90%       51%                0%
1402  root_rep_ckpt_1398_21625_2  05/26/2008-16:11:22-EDT  y      90%       51%                0%
1406  fs4_ckpt1                   05/26/2008-16:22:19-EDT  y      90%       51%                0%
id  wckpt_name  inuse  fullmark  total_savvol_used  base  ckpt_usage_on_savvol

EXAMPLE #3
----------
To create a checkpoint of ufs1 on the volume ssmtv1, type:
$ fs_ckpt ufs1 -Create ssmtv1
operation in progress (not interruptible)...
id        = 22
name      = ufs1
acl       = 0
in_use    = True
type      = uxfs
worm      = off
volume    = mtv1
pool      =
rw_servers= server_2
ro_servers=
rw_vdms   =
ro_vdms   =
ckpts     = ufs1_ckpt1
stor_devs = APM00043807043-0010,APM00043807043-0014
disks     = d7,d9
 disk=d7 stor_dev=APM00043807043-0010 addr=c0t1l0 server=server_2
 disk=d7 stor_dev=APM00043807043-0010 addr=c16t1l0 server=server_2
 disk=d9 stor_dev=APM00043807043-0014 addr=c0t1l4 server=server_2
 disk=d9 stor_dev=APM00043807043-0014 addr=c16t1l4 server=server_2
id        = 24
name      = ufs1_ckpt1
acl       = 0
in_use    = True
type      = ckpt
worm      = off
volume    = vp132
pool      =
member_of =
rw_servers=
ro_servers= server_2
rw_vdms   =
ro_vdms   =
checkpt_of= ufs1 Wed Oct 13 18:01:04 EDT 2004
used      = 0%
full(mark)= 90%
stor_devs = APM00043807043-0011,APM00043807043-0017
disks     = d12,d15
 disk=d12 stor_dev=APM00043807043-0011 addr=c16t1l1 server=server_2
 disk=d12 stor_dev=APM00043807043-0011 addr=c0t1l1 server=server_2
 disk=d15 stor_dev=APM00043807043-0017 addr=c16t1l7 server=server_2
 disk=d15 stor_dev=APM00043807043-0017 addr=c0t1l7 server=server_2
Where:
Value       Definition
-----       ----------
id          Automatically assigned ID of a file system or the checkpoint.
name        Name assigned to the file system or the checkpoint.
acl         Access control value for a file system. See nas_acl.
in_use      If a file system is registered into the mount table of a Data Mover.
type        Type of file system. See -list for a description of the types.
worm        Whether the File-Level Retention feature is enabled.
volume      Volume on which a file system resides.
pool        Storage pool for the file system.
member_of   Group to which the file system belongs.
rw_servers  Servers with read-write access to a file system.
ro_servers  Servers with read-only access to a file system.
rw_vdms     VDM servers with read-write access to a file system.
ro_vdms     VDM servers with read-only access to a file system.
ckpts       Associated checkpoints for the file system.
checkpt_of  Name of the PFS related to the existing checkpoints.
used        Percentage of SavVol space used by the checkpoints of the PFS.
full(mark)  SavVol usage point which, when reached, sends a warning message to the system log, and auto-extends the SavVol as system space permits.
stor_devs   Storage system devices associated with a file system.
disks       Disks on which the metavolume resides.
EXAMPLE #4 ---------- To create a checkpoint of ufs1 named ufs1_ckpt2 with a size of 2 GB using the clar_r5_performance pool, with the specified storage system, with the %full set to 95, type: $ fs_ckpt ufs1 -name ufs1_ckpt2 -Create size=2G pool=clar_r5_performance storage=APM00043807043 -option %full=95 operation in progress (not interruptible)...id = 27 name = ufs1 acl = 0 in_use = True type = uxfs worm = off volume = mtv1 pool = rw_servers= server_2 ro_servers= rw_vdms = ro_vdms = ckpts = ufs1_ckpt1,ufs1_ckpt2 stor_devs = APM00043807043-0010,APM00043807043-0014 disks = d7,d9 disk=d7 stor_dev=APM00043807043-0010 addr=c0t1l0 server=server_2 disk=d7 stor_dev=APM00043807043-0010 addr=c16t1l0 server=server_2 disk=d9 stor_dev=APM00043807043-0014 addr=c0t1l4 server=server_2 disk=d9 stor_dev=APM00043807043-0014 addr=c16t1l4 server=server_2 id = 30 name = ufs1_ckpt2 acl = 0 in_use = True type = ckpt worm = off volume = vp145 pool = member_of = rw_servers= ro_servers= server_2 rw_vdms = ro_vdms = checkpt_of= ufs1 Wed Nov 10 14:00:20 EST 2004 used = 0% full(mark)= 95%
stor_devs = APM00043807043-0011,APM00043807043-0017 disks = d12,d15 disk=d12 stor_dev=APM00043807043-0011 addr=c16t1l1 server=server_2 disk=d12 stor_dev=APM00043807043-0011 addr=c0t1l1 server=server_2 disk=d15 stor_dev=APM00043807043-0017 addr=c16t1l7 server=server_2 disk=d15 stor_dev=APM00043807043-0017 addr=c0t1l7 server=server_2 EXAMPLE #3 provides a description of command output. EXAMPLE #5 ---------- To create a checkpoint of ufs2 named ufs2_ckpt1 with a size of 2 GB by using the clar_mapped_pool VNX mapped pool, with the specified system, with the %full set to 95, type: $ fs_ckpt ufs2 -name ufs2_ckpt1 -Create size=2G pool=clar_mapped_pool storage=APM00043807043 -option %full=95 operation in progress (not interruptible)...id = 435 name = ufs2 acl = 0 in_use = True type = uxfs worm = off volume = v731 pool = clar_mapped_pool member_of = root_avm_fs_group_50 rw_servers= server_2 ro_servers= rw_vdms = ro_vdms = auto_ext = no,thin=no fast_clone_level = 1 deduplication = Off thin_storage = False tiering_policy = N/A/Optimize Pool compressed= False mirrored = False ckpts = ufs2_ckpt1 stor_devs = FNM00103400314-0036,FNM00103400314-0037,FNM00103400314-0038,FNM00103400314-0039 disks = d60,d61,d62,d63 disk=d60 stor_dev=FNM00103400314-0036 addr=c0t1l0 server=server_2 disk=d60 stor_dev=FNM00103400314-0036 addr=c16t1l0 server=server_2 disk=d61 stor_dev=FNM00103400314-0037 addr=c0t1l1 server=server_2 disk=d61 stor_dev=FNM00103400314-0037 addr=c16t1l1 server=server_2 disk=d62 stor_dev=FNM00103400314-0038 addr=c0t1l2 server=server_2 disk=d62 stor_dev=FNM00103400314-0038 addr=c16t1l2 server=server_2 disk=d63 stor_dev=FNM00103400314-0039 addr=c0t1l3 server=server_2 disk=d63 stor_dev=FNM00103400314-0039 addr=c16t1l3 server=server_2 id = 438 name = ufs2_ckpt1 acl = 0 in_use = True type = ckpt worm = off volume = vp735 pool = clar_mapped_pool member_of = rw_servers= ro_servers= server_2 rw_vdms = ro_vdms = checkpt_of= ufs2 Fri Jan 4 01:43:20 EST 2013 deduplication = Off 
thin_storage = False
tiering_policy = N/A/Optimize Pool
compressed = False
mirrored  = False
used      = 13%
full(mark)= 95%
stor_devs = FNM00103400314-0036,FNM00103400314-0037,FNM00103400314-0038,FNM00103400314-0039
disks     = d60,d61,d62,d63
 disk=d60 stor_dev=FNM00103400314-0036 addr=c0t1l0 server=server_2
 disk=d60 stor_dev=FNM00103400314-0036 addr=c16t1l0 server=server_2
 disk=d61 stor_dev=FNM00103400314-0037 addr=c0t1l1 server=server_2
 disk=d61 stor_dev=FNM00103400314-0037 addr=c16t1l1 server=server_2
 disk=d62 stor_dev=FNM00103400314-0038 addr=c0t1l2 server=server_2
 disk=d62 stor_dev=FNM00103400314-0038 addr=c16t1l2 server=server_2
 disk=d63 stor_dev=FNM00103400314-0039 addr=c0t1l3 server=server_2
 disk=d63 stor_dev=FNM00103400314-0039 addr=c16t1l3 server=server_2

Where:
Value           Definition
thin_storage    Indicates whether the VNX for block system uses thin provisioning. Values are: True, False, Mixed.
tiering_policy  Indicates the tiering policy in effect. If the initial tier and the tiering policy are the same, the values are: Auto-Tier, Highest Available Tier, Lowest Available Tier. If the initial tier and the tiering policy are not the same, the values are: Auto-Tier/No Data Movement, Highest Available Tier/No Data Movement, Lowest Available Tier/No Data Movement.
compressed      Indicates whether data is compressed. Values are True, False, Mixed (indicates some of the LUNs, but not all, are compressed).
mirrored        Indicates whether the disk is mirrored.
EXAMPLE #6 ---------- To create a writeable checkpoint of baseline checkpoint ufs1_ckpt1, type: $ fs_ckpt ufs1_ckpt1 -Create -readonly n operation in progress (not interruptible)...id = 45 name = ufs1_ckpt1 acl = 0 in_use = False type = ckpt worm = off volume = vp145 pool = clar_r5_performance member_of = rw_servers= ro_servers= rw_vdms = ro_vdms = checkpt_of= ufs1 Tue Nov 6 14:56:43 EST 2007 ckpts = ufs1_ckpt1_writeable1 used = 38% full(mark)= 90% stor_devs = APM00042000814-0029,APM00042000814-0024,APM00042000814-0021,APM000420 00814-001C disks = d34,d17,d30,d13 id = 46 name = ufs1_ckpt1_writeable1 acl = 0 in_use = True type = wckpt worm = off volume = vp145
pool = clar_r5_performance member_of = rw_servers= server_2 ro_servers= rw_vdms = ro_vdms = checkpt_of= ufs1 baseline_ckpt = ufs1_ckpt1 Tue Nov 6 14:56:43 EST 2007 used = 38% full(mark)= 90% stor_devs = APM00042000814-0029,APM00042000814-0024,APM00042000814-0021,APM000420 00814-001C disks = d34,d17,d30,d13 disk=d34 stor_dev=APM00042000814-0029 addr=c16t2l9 server=server_2 disk=d34 stor_dev=APM00042000814-0029 addr=c32t2l9 server=server_2 disk=d34 stor_dev=APM00042000814-0029 addr=c0t2l9 server=server_2 disk=d34 stor_dev=APM00042000814-0029 addr=c48t2l9 server=server_2 disk=d17 stor_dev=APM00042000814-0024 addr=c0t2l4 server=server_2 disk=d17 stor_dev=APM00042000814-0024 addr=c48t2l4 server=server_2 disk=d17 stor_dev=APM00042000814-0024 addr=c16t2l4 server=server_2 disk=d17 stor_dev=APM00042000814-0024 addr=c32t2l4 server=server_2 disk=d30 stor_dev=APM00042000814-0021 addr=c16t2l1 server=server_2 disk=d30 stor_dev=APM00042000814-0021 addr=c32t2l1 server=server_2 disk=d30 stor_dev=APM00042000814-0021 addr=c0t2l1 server=server_2 disk=d30 stor_dev=APM00042000814-0021 addr=c48t2l1 server=server_2 disk=d13 stor_dev=APM00042000814-001C addr=c0t1l12 server=server_2 disk=d13 stor_dev=APM00042000814-001C addr=c48t1l12 server=server_2 disk=d13 stor_dev=APM00042000814-001C addr=c16t1l12 server=server_2 disk=d13 stor_dev=APM00042000814-001C addr=c32t1l12 server=server_2 Where: Value Definition baseline_ckpt Name of the read-only checkpoint from which the writeable checkpoint is created. EXAMPLE #3 provides a description of command output. EXAMPLE #7 --------- To list checkpoints for ufs1, type: $ fs_ckpt ufs1 -list id ckpt_name creation_time inuse full(mark) used 29 ufs1_ckpt1 11/04/2004-14:54:06-EST n 95% 0% 30 ufs1_ckpt2 11/10/2004-14:00:20-EST y 95% 0% Where: Value Definition id Automatically assigned ID of a file system or checkpoint. ckpt_name Name assigned to the checkpoint. creation_time Date and time the checkpoint was created. 
inuse If a checkpoint is registered into the mount table of a Data Mover. full(mark) SavVol-usage point which, when reached, sends a warning message to the system log, and auto-extends the SavVol as system space permits. used Percentage of SavVol space used by checkpoints of the PFS. EXAMPLE #8 ---------- To refresh ufs1_ckpt2 using the %full at 85, type: $ fs_ckpt ufs1_ckpt2 -refresh -option %full=85 operation in progress (not interruptible)...id = 30 name = ufs1_ckpt2 acl = 0
in_use = True type = ckpt worm = off volume = vp145 pool = member_of = rw_servers= ro_servers= server_2 rw_vdms = ro_vdms = checkpt_of= ufs1 Wed Nov 10 14:02:59 EST 2004 used = 0% full(mark)= 85% stor_devs = APM00043807043-0011,APM00043807043-0017 disks = d12,d15 disk=d12 stor_dev=APM00043807043-0011 addr=c16t1l1 server=server_2 disk=d12 stor_dev=APM00043807043-0011 addr=c0t1l1 server=server_2 disk=d15 stor_dev=APM00043807043-0017 addr=c16t1l7 server=server_2 disk=d15 stor_dev=APM00043807043-0017 addr=c0t1l7 server=server_2 EXAMPLE #3 provides a description of command output. EXAMPLE #9 ---------- Using root command, to restore ufs1_ckpt2 and capture the latest point-in-time image of the PFS on ufs1_ckpt3, type: $ /nas/sbin/rootfs_ckpt ufs1_ckpt2 -name ufs1_ckpt3 -Restore operation in progress (not interruptible)...id = 30 name = ufs1_ckpt2 acl = 0 in_use = True type = ckpt worm = off volume = vp145 pool = member_of = rw_servers= ro_servers= server_2 rw_vdms = ro_vdms = checkpt_of= ufs1 Wed Nov 10 14:02:59 EST 2004 used = 0% full(mark)= 90% stor_devs = APM00043807043-0011,APM00043807043-0017 disks = d12,d15 disk=d12 stor_dev=APM00043807043-0011 addr=c16t1l1 server=server_2 disk=d12 stor_dev=APM00043807043-0011 addr=c0t1l1 server=server_2 disk=d15 stor_dev=APM00043807043-0017 addr=c16t1l7 server=server_2 disk=d15 stor_dev=APM00043807043-0017 addr=c0t1l7 server=server_2 EXAMPLE #3 provides a description of command output. EXAMPLE #10 ---------- To modify the %full value of the SavVol associated with the file system ufs1 and set it to 95, type: $ fs_ckpt ufs1 -modify %full=95 operation in progress (not interruptible)...id = 33 name = ufs1 acl = 0 in_use = True type = uxfs worm = off volume = vp145 pool = rw_servers= server_2 ro_servers=
rw_vdms   =
ro_vdms   =
auto_ext  = no,virtual_provision=no
ckpts     = wipckpt
stor_devs = APM00062400708-0014,APM00062400708-0016
disks     = d26,d27
 disk=d26 stor_dev=APM00062400708-0014 addr=c0t1l4 server=server_2
 disk=d26 stor_dev=APM00062400708-0014 addr=c16t1l4 server=server_2
 disk=d27 stor_dev=APM00062400708-0016 addr=c0t1l6 server=server_2
 disk=d27 stor_dev=APM00062400708-0016 addr=c16t1l6 server=server_2

EXAMPLE #11
-----------
To modify the maxsavsize value of the SavVol associated with the file system ufs1 and set it to 65 GB, type:
$ fs_ckpt ufs1 -modify maxsavsize=65G
operation in progress (not interruptible)...
id        = 33
name      = ufs1
acl       = 0
in_use    = True
type      = uxfs
worm      = off
volume    = vp145
pool      =
rw_servers= server_2
ro_servers=
rw_vdms   =
ro_vdms   =
auto_ext  = no,virtual_provision=no
ckpts     = wipckpt
stor_devs = APM00062400708-0014,APM00062400708-0016
disks     = d26,d27
 disk=d26 stor_dev=APM00062400708-0014 addr=c0t1l4 server=server_2
 disk=d26 stor_dev=APM00062400708-0014 addr=c16t1l4 server=server_2
 disk=d27 stor_dev=APM00062400708-0016 addr=c0t1l6 server=server_2
 disk=d27 stor_dev=APM00062400708-0016 addr=c16t1l6 server=server_2

DIAGNOSTICS
-----------
fs_ckpt returns one of the following return codes:
0 - Command completed successfully
1 - Usage error
2 - Invalid object error
3 - Unable to acquire lock
4 - Permission error
5 - Communication error
6 - Transaction error
7 - Dart error
8 - Backend error

--------------------------------------
Last Modified: Jan 11, 2013 3:47 pm
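The documented return codes lend themselves to scripted handling. A sketch of a wrapper-side lookup (the function name is ours; the code-to-text mapping is taken verbatim from the DIAGNOSTICS list):

```shell
# Hypothetical helper: map fs_ckpt return codes to the documented descriptions,
# so scripts can log a readable reason or decide to retry.
describe_rc() {
  case "$1" in
    0) echo "Command completed successfully" ;;
    1) echo "Usage error" ;;
    2) echo "Invalid object error" ;;
    3) echo "Unable to acquire lock" ;;
    4) echo "Permission error" ;;
    5) echo "Communication error" ;;
    6) echo "Transaction error" ;;
    7) echo "Dart error" ;;
    8) echo "Backend error" ;;
    *) echo "Unknown return code $1" ;;
  esac
}

# Typical use on a live system:
#   fs_ckpt ufs1_ckpt2 -refresh; describe_rc $?
describe_rc 3
```

A script might, for example, retry only on code 3 (lock contention) and treat all other nonzero codes as fatal.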
fs_dedupe

Manages filesystem deduplication state.

SYNOPSIS
--------
fs_dedupe { -list | -info {-all|
id=
environments. The default value is zero (false). [-pathname_exclude_list
When using VNX Replicator, VNX systems that use version 7.0 and earlier cannot read the deep compression format and will return an I/O error if a read operation is attempted. Select the deep compression format only if downstream replication sessions are using compatible software or are scheduled to be upgraded soon. -clear {
Represents the percentage of the configured save volume (SavVol) auto extension threshold that can be used during deduplication. After the specified amount of SavVol is used, deduplication stops on this filesystem. By default, this value is 90 percent and the SavVol auto extension is also 90 percent; this option will apply when the SavVol is 81 percent full (90 * 90). Setting this value to zero disables it. The values range from 0 to 100. [-backup_data_threshold] Indicates the full percentage that a deduplicated file has to be below in order to trigger space-reduced backups for NDMP. For example, when set to 90, any deduplicated file whose physical size (compressed file plus changed blocks) is greater than 90 percent of the logical size of the file will have the entire file data backed up without attempting to back it up in a space-reduced format. Setting this value to zero disables it. The values range from 0 to 200 and the default value is 90 percent. [-cifs_compression_enabled] This option controls whether CIFS compression is allowed. The default is yes, enable CIFS compression. When set to yes and the deduplication state of the filesystem is either on or suspended, then CIFS compression is allowed. If the deduplication state is either off or in the process of being turned off, then CIFS compression is not allowed, regardless of whether this option is set to yes. [-compression_method] This is a filesystem setting only (no global setting). Identifies the compression algorithm: fast (default) or deep. | -default {-info {
[-file_ext_exclude_list
value will be candidates for deduplication. Setting this value to zero disables it. This value should not be set lower than 24 KB. The values range from 0 to 1000 and the default value is 24 KB.

[-maximum_size]
Defines the file size in MB of the largest file to be processed for deduplication. Files larger than this size in MB will not be deduplicated. Setting this value to zero disables it. The values range from 0 to 8388608 and the default value is 8388608 MB.

[-access_time]
Defines the minimum required file age in days based on read access time. Files that have been read within the specified number of days will not be deduplicated. This setting does not apply to files with an FLR locked state. Setting this value to zero disables it. The values range from 0 to 365 and the default value is 15 days.

[-modification_time]
Defines the minimum required file age in days based on modification time. Files updated within the specified number of days will not be deduplicated. Setting this value to zero disables it. The values range from 0 to 365 and the default value is 15 days.

[-case_sensitive]
Defines whether case-sensitive (for NFS environments) or case-insensitive (for CIFS environments) string comparisons will be used during scans. By default, case-insensitive comparisons are done to be consistent with CIFS environments. The default value is zero (false).

[-file_ext_exclude_list]
Specifies a colon-delimited list of filename extensions to be excluded from deduplication. Each extension must include the leading dot. The default value is (empty).

[-duplicate_detection_method]
0 (off)  - Duplicate data detection is disabled. With this setting, every deduplicated file is considered unique and the only space savings are accomplished with compression.
1 (sha1) - The SHA-1 hash is used to detect duplicate data. It is faster than a byte comparison. This is the default method.
2 (byte) - Uses a byte-by-byte comparison to detect duplicate data. This adds considerable overhead, especially for large files.

[-savvol_threshold]
Represents the percentage of the configured save volume (SavVol) auto-extension threshold that can be used during deduplication. After the specified amount of SavVol is used, deduplication stops on this filesystem. By default, this value is 90 percent and the SavVol auto extension is also 90 percent; this option will apply when the SavVol is 81 percent full (90 * 90). Setting this value to zero disables it. The values range from 0 to 100.

[-cpu_usage_low_watermark]
Specifies the average percent of CPU usage during the deduplication process at which full throttle mode is re-entered. The values range from 0 to 100 and the default value is 25 percent. This is a global setting only.

[-cpu_usage_high_watermark]
Specifies the average percent of CPU usage during the deduplication process which should trigger a slow throttle mode. The system starts in full throttle mode. The values range from 0 to 100 and the default value is 75 percent. This is a global setting only.

[-backup_data_threshold
Specifies the full percentage that a deduplicated file has to be below in order to trigger space-reduced backups for NDMP. For example, when set to 90, any deduplicated file whose physical size (compressed file plus changed blocks) is greater than 90 percent of the logical size of the file will have the entire file data backed up without attempting to back it up in a space-reduced format. Setting this value to zero disables it. The values range from 0 to 200 and the default value is 90 percent.

[-cifs_compression_enabled]
This option controls whether CIFS compression is allowed. The default is yes (enable CIFS compression). When set to yes and the deduplication state of the filesystem is either on or suspended, CIFS compression is allowed. If the deduplication state is either off or in the process of being turned off, CIFS compression is not allowed, regardless of whether this option is set to yes.

SEE ALSO
--------
nas_fs

EXAMPLE #1
----------
To list the filesystems and their deduplication states, type:
$ fs_dedupe -list
id   name              state      status  time_of_last_scan             original_data_size  usage  space_saved
141  ranap1-replica    Suspended          Wed Nov 12 09:04:45 EST 2008  5 MB                0%     0 MB (0%)
104  ds850gb_replica1  On         Idle    Fri Nov 21 10:31:15 EST 2008  875459 MB           84%    341590 MB (39%)
495  cworm             On         Idle    Thu Nov 20 09:14:09 EST 2008  3 MB                0%     0 MB (0%)
33   chrisfs1          On         Idle    Sat Nov 22 10:04:33 EST 2008  1100 MB             18%    424 MB (38%)

Where:
Value  Definition
id     Filesystem identifier.
name   Name of the filesystem.
state  Deduplication state of the filesystem. The file data is transferred to the storage, which performs the deduplication and compression on the data. The states are:
       On-- Deduplication on the filesystem is enabled.
       Suspended-- Deduplication on the filesystem is suspended. Deduplication does not perform any new space reduction but the existing files that were reduced in space remain the same.
       Off-- Deduplication on the filesystem is disabled.
       Deduplication does not perform any new space reduction and the data is now reduplicated.
status Current state of the deduplication-enabled file system. The progress statuses are:
       Idle-- Deduplication process is currently idle.
       Scanning-- Filesystem is being scanned for deduplication. It displays the percentage of scanned files in the filesystem.
       Reduplicating-- Filesystem files are being reduplicated from the deduplicated files. It displays the percentage of reduplicated files.
time_of_last_scan  Time when the filesystem was last scanned.
original_data_size Original size of the filesystem before deduplication usage Current space usage of the filesystem space_saved Filesystem space saved after deduplication EXAMPLE #2 ---------- To list the filesystems and provide detailed reports on the state of the deduplication processing, type: $ fs_dedupe -info -all Id = 53 Name = svr2fs1 Deduplication = Off File system parameters: Case Sensitive = no Duplicate Detection Method = sha1 Access Time = 15 Modification Time = 15 Minimum Size = 24 KB Maximum Size = 8388608 MB File Extension Exclude List = Minimum Scan Interval = 7 Savevol Threshold = 90 Backup Data Threshold = 90 Cifs Compression Enabled = yes Pathname Exclude List = Compression Method = fast Id = 2040 Name = server_2_fsltest2 Deduplication = Suspended As of the last file system scan (Mon Aug 17 11:33:38 EDT 2009): Files scanned = 4 Files deduped = 3 (75% of total files) File system capacity = 2016 MB Original data size = 6 MB (0% of current file system capacity) Space saved = 0 MB (0% of original data size) File system parameters: Case Sensitive = no Duplicate Detection Method = sha1 Access Time = 15 Modification Time = 15 Minimum Size = 24 KB Maximum Size = 8388608 MB File Extension Exclude List = Minimum Scan Interval = 7 Savevol Threshold = 90 Backup Data Threshold = 90 Cifs Compression Enabled = yes Pathname Exclude List = Compression Method = fast Id = 506 Name = demofs Deduplication = Off File system parameters: Case Sensitive = no Duplicate Detection Method = sha1 Access Time = 15 Modification Time = 15 Minimum Size = 24 KB Maximum Size = 8388608 MB File Extension Exclude List = Minimum Scan Interval = 7 Savevol Threshold = 90 Backup Data Threshold = 90 Cifs Compression Enabled = yes Pathname Exclude List = Id = 2113
Name = testrdefs Deduplication = Suspended As of the last file system scan (Thu Aug 13 14:22:31 EDT 2009): Files scanned = 1 Files deduped = 0 (0% of total files) File system capacity = 1008 MB Original data size = 0 MB (0% of current file system capacity) Space saved = 0 MB (0% of original data size) File system parameters: Case Sensitive = no Duplicate Detection Method = sha1 Access Time = 15 Modification Time = 15 Minimum Size = 24 KB Maximum Size = 8388608 MB File Extension Exclude List = Minimum Scan Interval = 7 Savevol Threshold = 90 Backup Data Threshold = 90 Cifs Compression Enabled = yes Pathname Exclude List = Compression Method = fast Id = 2093 Name = kfs_ckpt1 Deduplication = Off File system parameters: Case Sensitive = no Duplicate Detection Method = sha1 Access Time = 15 Modification Time = 15 Minimum Size = 24 KB Maximum Size = 8388608 MB File Extension Exclude List = Minimum Scan Interval = 7 Savevol Threshold = 90 Backup Data Threshold = 90 Cifs Compression Enabled = yes Pathname Exclude List = Compression Method = fast Id = 2095 Name = ranap-test3 Deduplication = On Status = Idle As of the last file system scan (Tue Aug 11 17:37:58 EDT 2009): Files scanned = 30 Files deduped = 2 (7% of total files) File system capacity = 5041 MB Original data size = 1109 MB (22% of current file system capacity) Space saved = 0 MB (0% of original data size) File system parameters: Case Sensitive = no Duplicate Detection Method = sha1 Access Time = 15 Modification Time = 15 Minimum Size = 24 KB Maximum Size = 8388608 MB File Extension Exclude List = Minimum Scan Interval = 7 Savevol Threshold = 90 Backup Data Threshold = 90 Cifs Compression Enabled = yes Pathname Exclude List = Compression Method = deep Where: Value Definition Deduplication Current deduplication state of the filesystem. Status Progress status of the files being scanned. Name Name of the filesystem.
Id                           Filesystem identifier.
Files scanned                Number of files scanned.
Files deduped                Number of files in the filesystem that have been deduplicated.
Original data size           Proportion of space in use with respect to the file system capacity.
File system capacity         Current space usage of the filesystem.
Space saved                  Proportion of space saved with respect to the original data size.
Case Sensitive               Method of string comparison: case sensitive or case insensitive.
Duplicate Detection Method   Method of duplicate detection: 0, sha-1, or byte-by-byte.
Access Time                  Minimum required file age in days based on read access time.
Modification Time            Minimum required file age in days based on modification time.
Minimum Size                 Minimum file size to be processed for deduplication.
Maximum Size                 Maximum file size to be processed for deduplication.
File Extension Exclude List  Lists filename extensions to be excluded from deduplication.
Minimum Scan Interval        Minimum number of days between completing one scan of a filesystem and scanning the same filesystem again.
SavVol Threshold             Percentage of SavVol space that can be used during deduplication.
Backup Data Threshold        Percentage below which a deduplicated file has to be in order to trigger space-reduced NDMP backups.
Cifs Compression Enabled     Controls whether CIFS compression is enabled.
Pathname Exclude List        Lists relative path names to be excluded from deduplication.
Compression Method           Compression algorithm used: fast or deep.

Note: If reduplication fails, then the state transitions to the suspended state and a CCMD message is sent to the server's event log. If reduplication succeeds, then the state remains off.
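The SavVol Threshold is applied against the SavVol auto-extension threshold, not the raw SavVol size, so the effective fill level at which deduplication stops is the product of the two percentages. With both at their 90 percent defaults:

```shell
# Effective SavVol fill level at which deduplication stops, per the
# SavVol Threshold description above: threshold percent of the
# auto-extension threshold percent.
savvol_threshold=90
auto_extend_threshold=90
effective=$(( savvol_threshold * auto_extend_threshold / 100 ))
echo "deduplication stops at ${effective}% full"   # 90% of 90% = 81%
```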
EXAMPLE #3
----------

To list the file systems for a given file system name, type:

$ fs_dedupe -info server3_fs3
Id = 98
Name = server3_fs3
Deduplication = On
Status = Idle
As of the last filesystem scan on Tue Sep 23 13:28:01 EDT 2008:
 Files deduped = 30 (100%)
 Filesystem capacity = 413590 MB
 Original data size = 117 MB (0% of current filesystem capacity)
 Space saved = 106 MB (90% of original data size)
Filesystem parameters:
 Case Sensitive = yes
 Duplicate Detection Method = sha1
 Access Time = 30
 Modification Time = 30
 Minimum Size = 20
 Maximum Size = 200
 File Extension Exclude List = .jpg:.db:.pst
 Minimum Scan Interval = 1
 SavVol Threshold = 90
 Backup Data Threshold = 90
 Pathname Exclude List = root;etc
 Compression Method = fast

EXAMPLE #2 provides a description of command output.
EXAMPLE #4
----------

To list the deduplication properties of a given Data Mover, type:

$ fs_dedupe -default -info server_2
Server parameters:
 Case Sensitive = yes
 Duplicate Detection Method = sha1
 Access Time = 30
 Modification Time = 30
 Minimum Size = 20
 Maximum Size = 200
 File Extension Exclude List = .jpg:.db:.pst
 Minimum Scan Interval = 1
 SavVol Threshold = 90
 Backup Data Threshold = 90
 CPU % Usage Low Water Mark = 25
 CPU % Usage High Water Mark = 90
 Cifs Compression Enabled = yes

Where:

Value                        Definition
Deduplication                Current deduplication state of the file system.
Status                       Progress status of the files being scanned.
Name                         Name of the file system.
Id                           File system identifier.
Files scanned                Number of files scanned.
Files deduped                Number of files in the file system that have been deduplicated.
Original data size           Proportion of space in use with respect to the file system capacity.
File system capacity         Current space usage of the file system.
Space saved                  Proportion of space saved with respect to the original data size.
Case Sensitive               Method of string comparison: case sensitive or case insensitive.
Duplicate Detection Method   Method of duplicate detection: off, sha1, or byte-by-byte.
Access Time                  Minimum required file age, in days, based on read access time.
Modification Time            Minimum required file age, in days, based on modification time.
Minimum Size                 Minimum file size to be processed for deduplication.
Maximum Size                 Maximum file size to be processed for deduplication.
File Extension Exclude List  Filename extensions to be excluded from deduplication.
Minimum Scan Interval        Minimum number of days between completing one scan of a file system and scanning the same file system again.
SavVol Threshold             Percentage of SavVol space that can be used during deduplication.
Backup Data Threshold        Percentage below which a deduplicated file has to be in order to trigger space-reduced NDMP backups.
CPU % Usage Low Water Mark   Average percentage of CPU usage that triggers full throttle mode.
CPU % Usage High Water Mark  Average percentage of CPU usage that triggers slow throttle mode.

EXAMPLE #5
----------

To modify the file system, type:

$ fs_dedupe -modify testrdefs -state on
Done

EXAMPLE #6
----------

To modify the file system settings to user-specified values, type:

$ fs_dedupe -modify testrdefs -maximum_size 100 -file_extension_exclude_list .jpg:.db:.pst
Done

EXAMPLE #7
----------

To modify specific Data Mover settings, type:

$ fs_dedupe -default -set server_2 -maximum_size 100 -minimum_size 20 -duplicate_detection_method sha1
Done

EXAMPLE #8
----------

To reset the file system settings to the default settings (which are the Data Mover settings), type:

$ fs_dedupe -clear testrdefs -maximum_size -minimum_size -duplicate_detection_method
Done

EXAMPLE #9
----------

To reset specific Data Mover settings to the default settings, type:

$ fs_dedupe -default -clear server_2 -maximum_size -minimum_size -duplicate_detection_method
Done

EXAMPLE #10
-----------

To reset all options for a specific Data Mover to the default settings, type:

$ fs_dedupe -default -clear server_2
Done

EXAMPLE #11
-----------

To reset all options on all Data Movers to the default settings, type:

$ fs_dedupe -default -clear -all
Done

---------------------------------------------------------------------------
Last modified: April 13, 2012 1:00 p.m.
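The -file_extension_exclude_list argument in the examples above is a colon-separated list of extensions such as ".jpg:.db:.pst". A minimal pre-flight check of that format before invoking fs_dedupe; this check is an illustration, not part of fs_dedupe itself:

```shell
# Verify each colon-separated entry of an extension exclude list
# starts with a dot (format taken from the examples above).
exclude_list=".jpg:.db:.pst"

valid=yes
old_ifs=$IFS
IFS=':'
for ext in $exclude_list; do
    case $ext in
        .?*) ;;          # ok: a dot followed by at least one character
        *)   valid=no ;; # malformed entry
    esac
done
IFS=$old_ifs
echo "$valid"
```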
fs_dhsm

Manages the VNX FileMover file system connections.

SYNOPSIS
--------
fs_dhsm -list | -info [
is automatically changed to enabled. When the -state disabled option is specified, no other parameter can be modified in the same command.

[-state enabled]
Enables VNX FileMover operations on the specified file system. The file system must be enabled to accept other options.

[-state disabled]
Disables VNX FileMover operations on the specified file system. New FileMover attributes cannot be specified as part of a disable command, nor can they be specified for a file system that is in the disabled state. The attributes persist: if the file system is enabled again after a disable command, the attributes in effect prior to the disable command take effect.

[-popup_timeout
{nfsv3|nfsv2} -secondary
system. enabled (default) allows both the creation of stub files and data migration through reads and writes. If the state is disabled, neither stub files nor data migration is possible; data currently on the VNX can still be read and written in the disabled state. If the state is recallonly, the policy engine is not allowed to create stub files, but the user can still trigger data migration from the secondary file system to the VNX with a read or write request.

[-read_policy_override {full|passthrough|partial|none}]
Specifies the migration method option used by the VNX, at the connection level or file system level, to override the migration method specified in the stub file. none (default) specifies no override, full recalls the whole file to the VNX on a read request before the data is returned, passthrough retrieves data without recalling the data to the VNX, and partial recalls only the blocks required to satisfy the client read request.

Note: A full migration may take minutes or hours if the file is very large.

[-nfs_server
] Specifies the name or IP address of the secondary NFS server.

Note: Although an IP address can be specified for the -admin [
Specifies the fully qualified domain name of the secondary CIFS server. [-local_server
-httpPort
-httpPort
method specified in the stub file. none (default) specifies no override, full recalls the whole file to the VNX on read request before the data is returned, passthrough retrieves data without recalling the data to the VNX, and partial recalls only the blocks required to satisfy the client read request. Note: The full migration may take several minutes or hours if the file is very large. [-httpsPort
Specifies the migration method option used by the VNX, in the connection level or file system level, to override the migration method specified in the stub file. none (default) specifies no override, full recalls the whole file to the VNX on read request before the data is returned, passthrough retrieves data without recalling the data to the VNX, and partial recalls only the blocks required to satisfy the client read request. Note: The full migration may take minutes or hours if the file is very large. [-http_server
Using VNX FileMover, server_cifs, server_http, and server_nfs.

EXAMPLE #1
----------

To enable VNX FileMover on a file system, type:

$ fs_dhsm -modify ufs1 -state enabled
ufs1:
state = enabled
offline attr = on
popup timeout = 0
backup = passthrough
read policy override = none
log file = on
max log size = 10MB
Done

Where:

Value                 Definition
state                 Whether VNX FileMover is enabled or disabled on the file system.
offline attr          Whether CIFS clients should be notified that a file is migrated.
popup timeout         Timeout value, in seconds, before a Windows popup notification is sent to the CIFS client.
backup                Nature of CIFS network backups.
read policy override  Migration method option used to override the read method specified in the stub file.
log file              Whether FileMover logging is enabled or disabled.
max log size          Maximum size of the log file.

EXAMPLE #2
----------

To create a CIFS connection for ufs1 to the secondary file system \\winserver2.nasdocs.emc.com\dhsm1 with a specified administrative account nasdocs.emc.com\Administrator and local server dm102-cge0, type:

$ fs_dhsm -connection ufs1 -create -type cifs -admin nasdocs.emc.com\Administrator -secondary \\winserver2.nasdocs.emc.com\dhsm1 -local_server dm102-cge0
Enter Password:********
ufs1:
state = enabled
offline attr = on
popup timeout = 0
backup = passthrough
read policy override = none
log file = on
max log size = 10MB
cid = 0
type = CIFS
secondary = \\winserver2.nasdocs.emc.com\dhsm1\
state = enabled
read policy override = none
write policy = full
local_server = DM102-CGE0.NASDOCS.EMC.COM
admin = nasdocs.emc.com\Administrator
wins =
Done
Where:

Value                 Definition
state                 Whether VNX FileMover is enabled or disabled on the file system.
offline attr          Whether CIFS clients should be notified that a file is migrated.
popup timeout         Timeout value, in seconds, before a popup notification is sent to the CIFS client.
backup                Nature of CIFS network backups.
read policy override  Migration method option used to override the read method specified in the stub file.
log file              Whether FileMover logging is enabled or disabled.
max log size          Maximum size of the log file.
cid                   Connection ID.
type                  Type of file system. See -list for a description of the types.
secondary             Hostname or IP address of the remote file system.
state                 Whether VNX FileMover is enabled or disabled on the file system.
read policy override  Migration method option used to override the read method specified in the stub file.
write policy          Write policy option used to recall data from secondary storage.
local_server          Name of the local CIFS server used to authenticate the CIFS connection.

EXAMPLE #3
----------

To create a CIFS connection for ufs1 to the secondary file system \\winserver2.nasdocs.emc.com\dhsm2 with a specified administrative account nasdocs.emc.com\Administrator, local server dm102-cge0, a WINS server, and with the migration method set to full, type:

$ fs_dhsm -connection ufs1 -create -type cifs -admin nasdocs.emc.com\Administrator -secondary \\winserver2.nasdocs.emc.com\dhsm1 -local_server dm102-cge0 -wins 172.24.102.25 -read_policy_override full
Enter Password:********
ufs1:
state = enabled
offline attr = on
popup timeout = 0
backup = passthrough
read policy override = none
log file = on
max log size = 10MB
cid = 0
type = CIFS
secondary = \\winserver2.nasdocs.emc.com\dhsm1\
state = enabled
read policy override = full
write policy = full
local_server = DM102-CGE0.NASDOCS.EMC.COM
admin = nasdocs.emc.com\Administrator
wins = 172.24.102.25
Done

EXAMPLE #2 provides a description of command output.

EXAMPLE #4
----------

To display connection information for ufs1, type:

$ fs_dhsm -connection ufs1 -info 1
ufs1:
state = enabled
offline attr = on
popup timeout = 0
backup = passthrough
read policy override = none
log file = on
max log size = 10MB

EXAMPLE #2 provides a description of command output.

EXAMPLE #5
----------

To modify the read_policy_override setting for connection 0 for ufs1, type:

$ fs_dhsm -connection ufs1 -modify 0 -read_policy_override passthrough
ufs1:
state = enabled
offline attr = on
popup timeout = 0
backup = passthrough
read policy override = none
log file = on
max log size = 10MB
cid = 0
type = CIFS
secondary = \\winserver2.nasdocs.emc.com\dhsm1\
state = enabled
read policy override = pass
write policy = full
local_server = DM102-CGE0.NASDOCS.EMC.COM
admin = nasdocs.emc.com\Administrator
wins = 172.24.102.25
Done

EXAMPLE #2 provides a description of command output.

EXAMPLE #6
----------

To modify the VNX FileMover connection for ufs1, type:

$ fs_dhsm -connection ufs1 -modify 0 -nfs_server 172.24.102.115 -proto TCP
ufs1:
state = enabled
offline attr = on
popup timeout = 10
backup = offline
read policy override = full
log file = on
max log size = 25MB
cid = 0
type = NFSV3
secondary = 172.24.102.115:/export/dhsm1
state = enabled
read policy override = full
write policy = full
options = useRootCred=true proto=TCP
cid = 1
type = CIFS
secondary = \\winserver2.nasdocs.emc.com\dhsm1\
state = enabled
read policy override = none
write policy = full
local_server = DM102-CGE0.NASDOCS.EMC.COM
admin = nasdocs.emc.com\Administrator
wins = 172.24.102.25
cid = 2
type = HTTP
secondary = http://172.24.102.115/export/dhsm1
state = enabled
read policy override = none
write policy = full
user =
options = cgi=n
Done

EXAMPLE #2 provides a description of command output.

EXAMPLE #7
----------

To create the NFSv3 connection for ufs1 to the secondary file system 172.24.102.115:/export/dhsm1 with the migration method set to full, the -useRootCred set to true, and the protocol set to UDP, type:

$ fs_dhsm -connection ufs1 -create -type nfsv3 -secondary 172.24.102.115:/export/dhsm1 -read_policy_override full -useRootCred true -proto UDP
ufs1:
state = enabled
offline attr = on
popup timeout = 0
backup = passthrough
read policy override = none
log file = on
max log size = 10MB
cid = 0
type = CIFS
secondary = \\winserver2.nasdocs.emc.com\dhsm1\
state = enabled
read policy override = pass
write policy = full
local_server = DM102-CGE0.NASDOCS.EMC.COM
admin = nasdocs.emc.com\Administrator
wins = 172.24.102.25
cid = 1
type = NFSV3
secondary = 172.24.102.115:/export/dhsm1
state = enabled
read policy override = full
write policy = full
options = useRootCred=true proto=UDP
Done

EXAMPLE #2 provides a description of command output.

EXAMPLE #8
----------

To modify the VNX FileMover connection for ufs1, type:

$ fs_dhsm -connection ufs1 -modify 1 -proto TCP
ufs1:
state = enabled
offline attr = on
popup timeout = 0
backup = passthrough
read policy override = none
log file = on
max log size = 10MB
cid = 0
type = CIFS
secondary = \\winserver2.nasdocs.emc.com\dhsm1\
state = enabled
read policy override = pass
write policy = full
local_server = DM102-CGE0.NASDOCS.EMC.COM
admin = nasdocs.emc.com\Administrator
wins = 172.24.102.25
cid = 1
type = NFSV3
secondary = 172.24.102.115:/export/dhsm1
state = enabled
read policy override = full
write policy = full
options = useRootCred=true proto=TCP
Done

EXAMPLE #2 provides a description of command output.

EXAMPLE #9
----------

To display VNX FileMover connection information for ufs1, type:

$ fs_dhsm -info ufs1
ufs1:
state = enabled
offline attr = on
popup timeout = 0
backup = passthrough
read policy override = none
log file = on
max log size = 10MB

EXAMPLE #1 provides a description of command output.

EXAMPLE #10
-----------

To list VNX FileMover connections, type:

$ fs_dhsm -connection ufs1 -list
id     name  cid
29     ufs1  0
29     ufs1  1
29     ufs1  2

EXAMPLE #11
-----------

To modify the VNX FileMover connection for ufs1, type:

$ fs_dhsm -modify ufs1 -popup_timeout 10 -backup offline -log on -max_log_size 25 -offline_attr on -read_policy_override full
ufs1:
state = enabled
offline attr = on
popup timeout = 10
backup = offline
read policy override = full
log file = on
max log size = 25MB
cid = 0
type = CIFS
secondary = \\winserver2.nasdocs.emc.com\dhsm1\
state = enabled
read policy override = pass
write policy = full
local_server = DM102-CGE0.NASDOCS.EMC.COM
admin = nasdocs.emc.com\Administrator
wins = 172.24.102.25
cid = 1
type = NFSV3
secondary = 172.24.102.115:/export/dhsm1
state = enabled
read policy override = full
write policy = full
options = useRootCred=true proto=TCP
Done

EXAMPLE #2 provides a description of command output.

EXAMPLE #12
-----------

To modify the state of the VNX FileMover connection 0 for ufs1, type:

$ fs_dhsm -connection ufs1 -modify 0 -state disabled
ufs1:
state = enabled
offline attr = on
popup timeout = 10
backup = offline
read policy override = full
log file = on
max log size = 25MB
cid = 0
type = CIFS
secondary = \\winserver2.nasdocs.emc.com\dhsm1\
state = disabled
read policy override = pass
write policy = full
local_server = DM102-CGE0.NASDOCS.EMC.COM
admin = nasdocs.emc.com\Administrator
wins = 172.24.102.25
cid = 1
type = NFSV3
secondary = 172.24.102.115:/export/dhsm1
state = enabled
read policy override = full
write policy = full
options = useRootCred=true proto=TCP
Done

EXAMPLE #2 provides a description of command output.

EXAMPLE #13
-----------

To modify the state of the VNX FileMover connection 1 for ufs1, type:

$ fs_dhsm -connection ufs1 -modify 1 -state recallonly
ufs1:
state = enabled
offline attr = on
popup timeout = 10
backup = offline
read policy override = full
log file = on
max log size = 25MB
cid = 0
type = CIFS
secondary = \\winserver2.nasdocs.emc.com\dhsm1\
state = enabled
read policy override = pass
write policy = full
local_server = DM102-CGE0.NASDOCS.EMC.COM
admin = nasdocs.emc.com\Administrator
wins = 172.24.102.25
cid = 1
type = NFSV3
secondary = 172.24.102.115:/export/dhsm1
state = recallonly
read policy override = full
write policy = full
options = useRootCred=true proto=TCP
Done

EXAMPLE #2 provides a description of command output.

EXAMPLE #14
-----------

To delete the VNX FileMover connections 0 and 1 for ufs1, and specify the recall policy for any migrated files during the delete, type:

$ fs_dhsm -connection ufs1 -delete 0,1 -recall_policy no
ufs1:
state = enabled
offline attr = on
popup timeout = 10
backup = offline
read policy override = full
log file = on
max log size = 25MB
Done

EXAMPLE #2 provides a description of command output.

EXAMPLE #15
-----------

To change the state of the VNX FileMover connection for ufs1 to disabled, type:

$ fs_dhsm -modify ufs1 -state disabled
ufs1:
state = disabled
offline attr = on
popup timeout = 10
backup = offline
read policy override = full
log file = on
max log size = 25MB
Done

EXAMPLE #1 provides a description of command output.

EXAMPLE #16
-----------

To create an HTTP connection for ufs1 to the secondary file system /export/dhsm1 on the web server http://172.24.102.115, which has direct access to the storage, type:

$ fs_dhsm -connection ufs1 -create -type http -secondary http://172.24.102.115/export/dhsm1 -cgi n
ufs1:
state = enabled
offline attr = on
popup timeout = 10
backup = offline
read policy override = full
log file = on
max log size = 25MB
cid = 2
type = HTTP
secondary = http://172.24.102.115/export/dhsm1
state = enabled
read policy override = none
write policy = full
user =
options = cgi=n
Done

EXAMPLE #2 provides a description of command output.

EXAMPLE #17
-----------

To create an HTTP connection for ufs1 to the secondary file system using CGI connections to access migrated file data using a CGI application, type:

$ fs_dhsm -connection ufs1 -create -type http -secondary http://www.nasdocs.emc.com/cgi-bin/access.sh
ufs1:
state = enabled
offline attr = on
popup timeout = 0
backup = passthrough
read policy override = none
log file = on
max log size = 10MB
cid = 0
type = HTTP
secondary = http://www.nasdocs.emc.com/cgi-bin/access.sh
state = enabled
read policy override = none
write policy = full
user =
options =
Done

EXAMPLE #2 provides a description of command output.

EXAMPLE #18
-----------

To create an HTTPS connection for server2_fs1 on the web server https://int16543 with read_policy_override set to full, type:

$ fs_dhsm -connection server2_fs1 -create -type https -secondary https://int16543 -read_policy_override full -cgi n
server2_fs1:
state = enabled
offline attr = on
popup timeout = 0
backup = passthrough
read policy override = passthrough
log file = on
max log size = 10MB
cid = 0
type = HTTPS
secondary = https://int16543
state = enabled
read policy override = full
write policy = full
user =
options =
Done

EXAMPLE #2 provides a description of command output.
EXAMPLE #19
-----------

To create an HTTPS connection for ufs1 to the secondary file system using CGI connections to access migrated file data using a CGI application, type:

$ fs_dhsm -connection ufs1 -create -type https -secondary https://www.nasdocs.emc.com/cgi-bin/access.sh
ufs1:
state = enabled
offline attr = on
popup timeout = 0
backup = passthrough
read policy override = none
log file = on
max log size = 10MB
cid = 0
type = HTTPS
secondary = https://www.nasdocs.emc.com/cgi-bin/access.sh
state = enabled
read policy override = none
write policy = full
user =
options =
Done

EXAMPLE #2 provides a description of command output.

EXAMPLE #20
-----------

To create an HTTPS connection on httpsPort 443 for server2_ufs1 on the web server https://int16543 with read_policy_override set to passthrough, type:

$ fs_dhsm -connection server2_fs1 -create -type https -secondary https://int16543 -read_policy_override passthrough -httpsPort 443 -cgi n
server2_fs1:
state = enabled
offline attr = on
popup timeout = 0
backup = passthrough
read policy override = passthrough
log file = on
max log size = 10MB
cid = 1
type = HTTPS
secondary = https://int16543
state = enabled
read policy override = pass
write policy = full
user =
options =
Done

EXAMPLE #2 provides a description of command output.

EXAMPLE #21
-----------

To create an HTTPS connection on localPort 80 for server2_ufs1 on the web server https://int16543 with read_policy_override set to passthrough, type:

$ fs_dhsm -connection server2_fs1 -create -type https -secondary https://int16543 -read_policy_override passthrough -localPort 80 -cgi n
server2_fs1:
state = enabled
offline attr = on
popup timeout = 0
backup = passthrough
read policy override = passthrough
log file = on
max log size = 10MB
cid = 0
type = HTTPS
secondary = https://int16543
state = enabled
read policy override = pass
write policy = full
user =
options =
Done

EXAMPLE #2 provides a description of command output.

EXAMPLE #22
-----------

To create an HTTPS connection on httpsPort 443 for server2_ufs1 on the web server https://int16543 with a specified user dhsm_user, type:

$ fs_dhsm -connection server2_fs1 -create -type https -secondary https://int16543 -read_policy_override full -httpsPort 443 -user dhsm_user -password dhsm_user -cgi n
server2_fs1:
state = enabled
offline attr = on
popup timeout = 0
backup = passthrough
read policy override = passthrough
log file = on
max log size = 10MB
cid = 1
type = HTTPS
secondary = https://int16543
state = enabled
read policy override = full
write policy = full
user = dhsm_user
options =
Done

EXAMPLE #2 provides a description of command output.

EXAMPLE #23
-----------

To modify the read_policy_override setting for connection 1 from server2_fs1, type:

$ fs_dhsm -connection server2_fs1 -modify 1 -read_policy_override passthrough
server2_fs1:
state = enabled
offline attr = on
popup timeout = 0
backup = passthrough
read policy override = passthrough
log file = on
max log size = 10MB
cid = 1
type = HTTPS
secondary = https://int16543
state = enabled
read policy override = pass
write policy = full
user = dhsm_user
options =
Done
EXAMPLE #2 provides a description of command output.

EXAMPLE #24
-----------

To delete the VNX FileMover connection 0 for ufs1, type:

$ fs_dhsm -connection ufs1 -delete 0
ufs1:
state = enabled
offline attr = on
popup timeout = 0
backup = passthrough
read policy override = none
log file = on
max log size = 10MB
Done

EXAMPLE #1 provides a description of command output.

--------------------------------------
Last Modified: March 29, 2011 5:00 PM
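The table printed by fs_dhsm -connection -list is easy to post-process, for example to iterate over every connection ID of a file system. A minimal sketch over a captured copy of the EXAMPLE #10 listing (in practice the text would come from running fs_dhsm itself):

```shell
# Extract the connection IDs (cid column) from a captured
# 'fs_dhsm -connection ufs1 -list' report.
# In practice: listing=$(fs_dhsm -connection ufs1 -list)
listing='id     name  cid
29     ufs1  0
29     ufs1  1
29     ufs1  2'

# Skip the header row, keep the third column.
cids=$(printf '%s\n' "$listing" | awk 'NR > 1 {print $3}')
echo $cids    # unquoted: the newline-separated ids collapse onto one line
```

Each extracted cid could then be fed back into commands such as fs_dhsm -connection ufs1 -info <cid>.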
fs_group

Creates a file system group from the specified file systems or a single file system.

SYNOPSIS
--------
fs_group -list | -delete
id = 22
name = ufsg1
acl = 0
in_use = False
type = group
fs_set = ufs1
pool =
stor_devs = 000187940268-0006,000187940268-0007,000187940268-0008,000187940268-0009
disks = d3,d4,d5,d6

Where:

Value      Indicates
id         ID of the group, assigned automatically.
name       Name assigned to the group.
acl        Access control value for the group.
in_use     Whether a file system is used by a group.
type       Type of file system.
fs_set     File systems that are part of the group.
pool       Storage pool given to the file system group.
stor_devs  Storage system devices associated with the group.
disks      Disks on which the metavolume resides.

EXAMPLE #2
----------

To list all file system groups, type:

$ fs_group -list
id    name   acl  in_use type member_of fs_set
20    ufsg1  0    n      100            18

Where:

Value      Indicates
member_of  Groups to which the file system group belongs.

EXAMPLE #3
----------

To display information for the file system group, ufsg1, type:

$ fs_group -info ufsg1
id = 22
name = ufsg1
acl = 0
in_use = False
type = group
fs_set = ufs1
pool =
stor_devs = 000187940268-0006,000187940268-0007,000187940268-0008,000187940268-0009
disks = d3,d4,d5,d6

EXAMPLE #1 provides a description of command output.

EXAMPLE #4
----------

To add file system, ufs2, to the file system group, ufsg1, type:

$ fs_group -xtend ufsg1 ufs2
id = 22
name = ufsg1
acl = 0
in_use = False
type = group
fs_set = ufs1,ufs2
pool =
stor_devs = 000187940268-0006,000187940268-0007,000187940268-0008,000187940268-0009,000187940268-000A,000187940268-000B,000187940268-000C,000187940268-000D
disks = d3,d4,d5,d6,d7,d8,d9,d10

EXAMPLE #1 provides a description of command output.

EXAMPLE #5
----------

To remove file system, ufs2, from the file system group, ufsg1, type:

$ fs_group -shrink ufsg1 ufs2
id = 22
name = ufsg1
acl = 0
in_use = False
type = group
fs_set = ufs1
pool =
stor_devs = 000187940268-0006,000187940268-0007,000187940268-0008,000187940268-0009
disks = d3,d4,d5,d6

EXAMPLE #1 provides a description of command output.

EXAMPLE #6
----------

To delete file system group, ufsg1, type:

$ fs_group -delete ufsg1
id = 22
name = ufsg1
acl = 0
in_use = False
type = group
fs_set =
stor_devs =
disks =

EXAMPLE #1 provides a description of command output.

--------------------------------------
Last Modified: March 29, 2010 6:00 pm
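The fs_set field in the output above is a comma-separated list of member file systems, so membership is easy to enumerate in the shell. A minimal sketch using the sample value from EXAMPLE #4 (in practice the value would be parsed out of fs_group -info output):

```shell
# The fs_set field of a fs_group report is a comma-separated member list;
# sample value taken from EXAMPLE #4 above.
fs_set="ufs1,ufs2"

# One member per line, then count the non-empty lines.
members=$(printf '%s\n' "$fs_set" | tr ',' '\n')
count=$(printf '%s\n' "$members" | grep -c .)
echo "$count"
```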
fs_rdf

Manages the Remote Data Facility (RDF) functionality for a file system residing on RDF drives.

SYNOPSIS
--------
fs_rdf {
remote_sym_devname =
ra_group_number = 2
dev_rdf_type = R1
remote_symid = 002804000218
remote_sym_devname =
ra_group_number = 2
dev_rdf_type = R1
dev_ra_status = READY
dev_link_status = READY
rdf_mode = SYNCHRONOUS
rdf_pair_state = SYNCINPROG
rdf_domino = DISABLED
adaptive_copy = DISABLED
adaptive_copy_skew = 65535
num_r1_invalid_tracks = 0
num_r2_invalid_tracks = 736440
dev_rdf_state = READY
remote_dev_rdf_state = WRITE_DISABLED
rdf_status = 0
link_domino = DISABLED
prevent_auto_link_recovery = DISABLED
link_config =
suspend_state = NA
consistency_state = DISABLED
adaptive_copy_wp_state = NA
prevent_ra_online_upon_pwron = ENABLED

Where:

Value               Definition
id                  ID of a file system, assigned automatically.
name                Name assigned to a file system.
acl                 Access control value for a file system.
in_use              Whether a file system is registered into the mount table.
type                Type of file system. See nas_fs for a description of the types.
volume              Volume on which a file system resides.
pool                Storage pool for the file system.
rw_servers          Servers with read-write access to a file system.
ro_servers          Servers with read-only access to a file system.
rw_vdms             VDM servers with read-write access to a file system.
ro_vdms             VDM servers with read-only access to a file system.
backup_of           The remote RDF file system.
stor_devs           The storage system devices associated with a file system.
disks               The disks on which the metavolume resides.
remote_symid        The serial number of the storage system containing the target volume.
remote_sym_devname  The storage system device name of the remote device in an RDF pair.
ra_group_number     The RA group number (1-n).
dev_rdf_type                  The type of RDF device. Possible values: R1, R2.
dev_ra_status                 RA status. Possible values: READY, NOT_READY, WRITE_DISABLED, STATUS_NA, STATUS_MIXED.
dev_link_status               Link status. Possible values: READY, NOT_READY, WRITE_DISABLED, NA, MIXED.
rdf_mode                      The RDF mode. Possible values: SYNCHRONOUS, SEMI_SYNCHRONOUS, ADAPTIVE_COPY, MIXED.
rdf_pair_state                Composite state of the RDF pair. Possible values: INVALID, SYNCINPROG, SYNCHRONIZED, SPLIT, SUSPENDED, FAILED_OVER, PARTITIONED, R1_UPDATED, R1_UPDINPROG, MIXED.
rdf_domino                    The RDF device domino. Possible values: ENABLED, DISABLED, MIXED.
adaptive_copy                 Possible values: DISABLED, WP_MODE, DISK_MODE, MIXED.
adaptive_copy_skew            Number of invalid tracks when in adaptive copy mode.
num_r1_invalid_tracks         Number of invalid tracks on the source (R1) device.
num_r2_invalid_tracks         Number of invalid tracks on the target (R2) device.
dev_rdf_state                 The composite RDF state of the device. Possible values: READY, NOT_READY, WRITE_DISABLED, NA, MIXED.
remote_dev_rdf_state          The composite RDF state of the remote RDF device. Possible values: READY, NOT_READY, WRITE_DISABLED, NA, MIXED.
rdf_status                    The RDF status of the device. Possible values: READY, NOT_READY, WRITE_DISABLED, NA, MIXED.
link_domino                   RDF link domino. Possible values: ENABLED, DISABLED.
prevent_auto_link_recovery    When enabled, prevents the automatic resumption of data copy across the RDF links as soon as the links have recovered. Possible values: ENABLED, DISABLED.
link_config                   Possible values: CONFIG_ESCON, CONFIG_T3.
suspend_state                 Status of R1 devices in a consistency group. Possible states: NA, OFFLINE, OFFLINE_PEND, ONLINE_MIXED.
consistency_state             State of an R1 device related to consistency groups. Possible states: ENABLED, DISABLED.
adaptive_copy_wp_state        State of the adaptive copy mode. Possible states: NA, OFFLINE, OFFLINE_PEND, ONLINE_MIXED.
prevent_ra_online_upon_pwron  State of the RA director coming online after power on. Possible states: ENABLED, DISABLED.

EXAMPLE #2
----------

To display RDF-related information for ufs1_snap1 from the R2 Control Station, type:

$ fs_rdf ufs1_snap1 -info
id = 20
name = ufs1_snap1
acl = 0
in_use = False
type = uxfs
volume = v168
pool =
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
backup_of = ufs1 Fri Apr 23 16:29:23 EDT 2004
stor_devs = 002804000190-0052,002804000190-0053,002804000190-0054,002804000190-0055
disks = rootd33,rootd34,rootd35,rootd36
RDF Information:
remote_symid = 002804000218
remote_sym_devname =
ra_group_number = 2
dev_rdf_type = R1
dev_ra_status = READY
dev_link_status = READY
rdf_mode = SYNCHRONOUS
rdf_pair_state = SYNCINPROG
rdf_domino = DISABLED
adaptive_copy = DISABLED
adaptive_copy_skew = 65535
num_r1_invalid_tracks = 0
num_r2_invalid_tracks = 696030
dev_rdf_state = READY
remote_dev_rdf_state = WRITE_DISABLED
rdf_status = 0
link_domino = DISABLED
prevent_auto_link_recovery = DISABLED
link_config =
suspend_state = NA
consistency_state = DISABLED
adaptive_copy_wp_state = NA
prevent_ra_online_upon_pwron = ENABLED

EXAMPLE #1 provides a description of command output.

EXAMPLE #3
----------

To turn the mirroring off for ufs1_snap1 on the R1 Control Station, type:

$ fs_rdf ufs1_snap1 -Mirror off
remainder(MB) = 20548..17200..13110..8992..4870..746 0
id = 20
name = ufs1_snap1
remainder(MB) = 20548..17200..13110..8992..4870..746 0
id = 20
name = ufs1_snap1
acl = 0
in_use = False
type = uxfs
volume = v168
pool =
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
backup_of = ufs1 Fri Apr 23 16:29:23 EDT 2004
stor_devs = 002804000190-0052,002804000190-0053,002804000190-0054,002804000190-0055
disks = rootd33,rootd34,rootd35,rootd36
RDF Information:
remote_symid = 002804000218
remote_sym_devname =
ra_group_number = 2
dev_rdf_type = R1
dev_ra_status = READY
dev_link_status = NOT_READY
rdf_mode = SYNCHRONOUS
rdf_pair_state = SUSPENDED
rdf_domino = DISABLED
adaptive_copy = DISABLED
adaptive_copy_skew = 65535
num_r1_invalid_tracks = 0
num_r2_invalid_tracks = 0
dev_rdf_state = READY
remote_dev_rdf_state = WRITE_DISABLED
rdf_status = 0
link_domino = DISABLED
prevent_auto_link_recovery = DISABLED
link_config =
suspend_state = OFFLINE
consistency_state = DISABLED
adaptive_copy_wp_state = NA
prevent_ra_online_upon_pwron = ENABLED

EXAMPLE #1 provides a description of command output.

EXAMPLE #4
----------

To perform a mirror refresh for ufs1_snap1 on the R1 Control Station, type:

$ fs_rdf ufs1_snap1 -Mirror refresh
remainder(MB) = 1 0
id = 20
name = ufs1_snap1
acl = 0
in_use = False
type = uxfs
volume = v168
pool =
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
backup_of = ufs1 Fri Apr 23 16:29:23 EDT 2004
stor_devs = 002804000190-0052,002804000190-0053,002804000190-0054,002804000190-0055
disks = rootd33,rootd34,rootd35,rootd36
RDF Information:
remote_symid = 002804000218
remote_sym_devname =
ra_group_number = 2
dev_rdf_type = R1
dev_ra_status = READY
dev_link_status = NOT_READY
rdf_mode = SYNCHRONOUS
rdf_pair_state = SUSPENDED
rdf_domino                   = DISABLED
adaptive_copy                = DISABLED
adaptive_copy_skew           = 65535
num_r1_invalid_tracks        = 0
num_r2_invalid_tracks        = 0
dev_rdf_state                = READY
remote_dev_rdf_state         = WRITE_DISABLED
rdf_status                   = 0
link_domino                  = DISABLED
prevent_auto_link_recovery   = DISABLED
link_config                  =
suspend_state                = OFFLINE
consistency_state            = DISABLED
adaptive_copy_wp_state       = NA
prevent_ra_online_upon_pwron = ENABLED

EXAMPLE #1 provides a description of command output.

EXAMPLE #5
----------
To restore the file system ufs1_snap1 from the R1 Control Station, type:

$ /nas/sbin/rootfs_rdf ufs1_snap1 -Restore
remainder(MB) = 1 0
id        = 20
name      = ufs1_snap1
acl       = 0
in_use    = False
type      = uxfs
volume    = v168
pool      =
rw_servers=
ro_servers=
rw_vdms   =
ro_vdms   =
backup_of = ufs1 Fri Apr 23 16:29:23 EDT 2004
stor_devs = 002804000190-0052,002804000190-0053,002804000190-0054,002804000190-0055
disks     = rootd33,rootd34,rootd35,rootd36
RDF Information:
remote_symid                 = 002804000218
remote_sym_devname           =
ra_group_number              = 2
dev_rdf_type                 = R1
dev_ra_status                = READY
dev_link_status              = READY
rdf_mode                     = SYNCHRONOUS
rdf_pair_state               = SYNCHRONIZED
rdf_domino                   = DISABLED
adaptive_copy                = DISABLED
adaptive_copy_skew           = 65535
num_r1_invalid_tracks        = 0
num_r2_invalid_tracks        = 0
dev_rdf_state                = READY
remote_dev_rdf_state         = WRITE_DISABLED
rdf_status                   = 0
link_domino                  = DISABLED
prevent_auto_link_recovery   = DISABLED
link_config                  =
suspend_state                = NA
consistency_state            = DISABLED
adaptive_copy_wp_state       = NA
prevent_ra_online_upon_pwron = ENABLED

EXAMPLE #1 provides a description of command output.

--------------------------------------
Last Modified: March 29, 2010 06:15 pm
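The fs_rdf examples above emit one `field = value` pair per line, which is convenient to scrape when scripting replication checks from the Control Station. Below is a minimal, hypothetical sketch in Python; the sample text is abbreviated from EXAMPLE #5, and `parse_fields` is an illustration, not part of the eNAS toolset:

```python
# Abbreviated sample of fs_rdf "field = value" output (from EXAMPLE #5).
sample = """\
id             = 20
name           = ufs1_snap1
rdf_mode       = SYNCHRONOUS
rdf_pair_state = SYNCHRONIZED
num_r1_invalid_tracks = 0
num_r2_invalid_tracks = 0
"""

def parse_fields(text):
    """Split each 'key = value' line on the first '=' into a dict."""
    fields = {}
    for line in text.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            fields[key.strip()] = value.strip()
    return fields

info = parse_fields(sample)
print(info["rdf_pair_state"])   # SYNCHRONIZED
```

A script could, for example, poll until `rdf_pair_state` leaves SYNCINPROG before proceeding with a failover test.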
fs_timefinder

Manages the TimeFinder/FS functionality for the specified file system or file system group.

SYNOPSIS
--------
fs_timefinder {
[-option
EXAMPLE #1
----------
To create a TimeFinder/FS copy of the PFS, type:

$ fs_timefinder ufs1 -Snapshot
operation in progress (not interruptible)...
remainder(MB) = 43688..37205..31142..24933..18649..12608..7115..4991..4129..3281..2457..1653..815..0
operation in progress (not interruptible)...
id        = 18
name      = ufs1
acl       = 0
in_use    = True
type      = uxfs
worm      = off
volume    = mtv1
pool      =
rw_servers= server_2
ro_servers=
rw_vdms   =
ro_vdms   =
backups   = ufs1_snap1
auto_ext  = no,thin=no
fast_clone_level = 1
deduplication    = Off
stor_devs = 000187940268-0006,000187940268-0007,000187940268-0008,000187940268-0009
disks     = d3,d4,d5,d6
 disk=d3 stor_dev=000187940268-0006 addr=c0t1l0-48-0 server=server_2
 disk=d3 stor_dev=000187940268-0006 addr=c16t1l0-33-0 server=server_2
 disk=d4 stor_dev=000187940268-0007 addr=c0t1l1-48-0 server=server_2
 disk=d4 stor_dev=000187940268-0007 addr=c16t1l1-33-0 server=server_2
 disk=d5 stor_dev=000187940268-0008 addr=c0t1l2-48-0 server=server_2
 disk=d5 stor_dev=000187940268-0008 addr=c16t1l2-33-0 server=server_2
 disk=d6 stor_dev=000187940268-0009 addr=c0t1l3-48-0 server=server_2
 disk=d6 stor_dev=000187940268-0009 addr=c16t1l3-33-0 server=server_2
id        = 19
name      = ufs1_snap1
acl       = 0
in_use    = False
type      = uxfs
worm      = off
volume    = v456
pool      =
rw_servers=
ro_servers=
rw_vdms   =
ro_vdms   =
backup_of = ufs1 Thu Oct 28 14:13:30 EDT 2011
auto_ext  = no,thin=no
fast_clone_level = unavailable
deduplication    = unavailable
stor_devs = 000187940268-0180,000187940268-0181,000187940268-0182,000187940268-0183
disks     = rootd378,rootd379,rootd380,rootd381

Where:
Value      Definition
name       Name assigned to the file system.
acl        Access control value for a file system. nas_acl provides information.
in_use     Whether a file system is registered into the mount table of a Data Mover.
type       Type of file system. The -list option provides a description of the types.
worm       Whether file-level retention is enabled.
volume     Volume on which the file system resides.
pool       Storage pool for the file system.
rw_servers Servers with read-write access to a file system.
ro_servers Servers with read-only access to a file system.
rw_vdms    VDM servers with read-write access to a file system.
ro_vdms    VDM servers with read-only access to a file system.
backups    Name of associated backups.
backup_of  File system that the file system copy is made from.
auto_ext   Indicates whether auto-extension and thin provisioning are enabled.
fast_clone_level  fast_clone_level=1 enables the ability to create a fast clone. File-level retention and fast clone creation cannot be enabled together on a file system. fast_clone_level=2 enables the ability to create a fast clone of a fast clone (also called a second-level fast clone) on the file system.
deduplication  Deduplication state of the file system. The file data is transferred to the storage system, which performs the deduplication and compression on the data. The states are:
           On - Deduplication on the file system is enabled.
           Suspended - Deduplication on the file system is suspended. Deduplication does not perform any new space reduction, but the existing files that were reduced in space remain the same.
           Off - Deduplication on the file system is disabled. Deduplication does not perform any new space reduction and the data is now reduplicated.
stor_devs  Storage system devices associated with a file system, as reported by the Symmetrix storage system.
disks      Disks on which the metavolume resides.
EXAMPLE #2
----------
To create a TimeFinder/FS copy of the PFS, ufs1, and leave a file system copy in mirrored mode, type:

$ fs_timefinder ufs1 -Snapshot -option mirror=on
operation in progress (not interruptible)...
id        = 18
name      = ufs1
acl       = 0
in_use    = True
type      = uxfs
worm      = off
volume    = mtv1
pool      =
rw_servers= server_2
ro_servers=
rw_vdms   =
ro_vdms   =
backups   = ufs1_snap1
auto_ext  = no,thin=no
fast_clone_level = 1
deduplication    = Off
stor_devs = 000187940268-0006,000187940268-0007,000187940268-0008,000187940268-0009
disks     = d3,d4,d5,d6
 disk=d3 stor_dev=000187940268-0006 addr=c0t1l0-48-0 server=server_2
 disk=d3 stor_dev=000187940268-0006 addr=c16t1l0-33-0 server=server_2
 disk=d4 stor_dev=000187940268-0007 addr=c0t1l1-48-0 server=server_2
 disk=d4 stor_dev=000187940268-0007 addr=c16t1l1-33-0 server=server_2
 disk=d5 stor_dev=000187940268-0008 addr=c0t1l2-48-0 server=server_2
 disk=d5 stor_dev=000187940268-0008 addr=c16t1l2-33-0 server=server_2
 disk=d6 stor_dev=000187940268-0009 addr=c0t1l3-48-0 server=server_2
 disk=d6 stor_dev=000187940268-0009 addr=c16t1l3-33-0 server=server_2
id        = 19
name      = ufs1_snap1
acl       = 0
in_use    = False
type      = mirrorfs
worm      = off
volume    = v456
pool      =
rw_servers=
ro_servers=
rw_vdms   =
ro_vdms   =
backup_of = ufs1 Thu Oct 28 14:19:03 EDT 2011
auto_ext  = no,thin=no
fast_clone_level = unavailable
deduplication    = unavailable
remainder = 0 MB (0%)
stor_devs = 000187940268-0180,000187940268-0181,000187940268-0182,000187940268-0183
disks     = rootd378,rootd379,rootd380,rootd381

EXAMPLE #1 provides a description of command output.

EXAMPLE #3
----------
To turn mirroring off for a file system copy, ufs1_snap1, type:

$ fs_timefinder ufs1_snap1 -Mirror off
operation in progress (not interruptible)...
remainder(MB) = 0
operation in progress (not interruptible)...
id        = 18
name      = ufs1
acl       = 0
in_use    = True
type      = uxfs
worm      = off
volume    = mtv1
pool      =
rw_servers= server_2
ro_servers=
rw_vdms   =
ro_vdms   =
backups   = ufs1_snap1
auto_ext  = no,thin=no
fast_clone_level = 1
deduplication    = Off
stor_devs = 000187940268-0006,000187940268-0007,000187940268-0008,000187940268-0009
disks     = d3,d4,d5,d6
 disk=d3 stor_dev=000187940268-0006 addr=c0t1l0-48-0 server=server_2
 disk=d3 stor_dev=000187940268-0006 addr=c16t1l0-33-0 server=server_2
 disk=d4 stor_dev=000187940268-0007 addr=c0t1l1-48-0 server=server_2
 disk=d4 stor_dev=000187940268-0007 addr=c16t1l1-33-0 server=server_2
 disk=d5 stor_dev=000187940268-0008 addr=c0t1l2-48-0 server=server_2
 disk=d5 stor_dev=000187940268-0008 addr=c16t1l2-33-0 server=server_2
 disk=d6 stor_dev=000187940268-0009 addr=c0t1l3-48-0 server=server_2
 disk=d6 stor_dev=000187940268-0009 addr=c16t1l3-33-0 server=server_2
id        = 19
name      = ufs1_snap1
acl       = 0
in_use    = False
type      = uxfs
worm      = off
volume    = v456
pool      =
rw_servers=
ro_servers=
rw_vdms   =
ro_vdms   =
backup_of = ufs1 Thu Oct 28 14:21:50 EDT 2011
auto_ext  = no,thin=no
fast_clone_level = unavailable
deduplication    = unavailable
stor_devs = 000187940268-0180,000187940268-0181,000187940268-0182,000187940268-0183
disks     = rootd378,rootd379,rootd380,rootd381

EXAMPLE #1 provides a description of command output.

EXAMPLE #4
----------
To turn mirroring on for a file system copy, ufs1_snap1, type:

$ fs_timefinder ufs1_snap1 -Mirror on
operation in progress (not interruptible)...
id        = 18
name      = ufs1
acl       = 0
in_use    = True
type      = uxfs
worm      = off
volume    = mtv1
pool      =
rw_servers= server_2
ro_servers=
rw_vdms   =
ro_vdms   =
backups   = ufs1_snap1
auto_ext  = no,thin=no
fast_clone_level = 1
deduplication    = Off
stor_devs = 000187940268-0006,000187940268-0007,000187940268-0008,000187940268-0009
disks     = d3,d4,d5,d6
 disk=d3 stor_dev=000187940268-0006 addr=c0t1l0-48-0 server=server_2
 disk=d3 stor_dev=000187940268-0006 addr=c16t1l0-33-0 server=server_2
 disk=d4 stor_dev=000187940268-0007 addr=c0t1l1-48-0 server=server_2
 disk=d4 stor_dev=000187940268-0007 addr=c16t1l1-33-0 server=server_2
 disk=d5 stor_dev=000187940268-0008 addr=c0t1l2-48-0 server=server_2
 disk=d5 stor_dev=000187940268-0008 addr=c16t1l2-33-0 server=server_2
 disk=d6 stor_dev=000187940268-0009 addr=c0t1l3-48-0 server=server_2
 disk=d6 stor_dev=000187940268-0009 addr=c16t1l3-33-0 server=server_2
id        = 19
name      = ufs1_snap1
acl       = 0
in_use    = False
type      = mirrorfs
worm      = off
volume    = v456
pool      =
rw_servers=
ro_servers=
rw_vdms   =
ro_vdms   =
backup_of = ufs1 Thu Oct 28 14:21:50 EDT 2011
auto_ext  = no,thin=no
fast_clone_level = unavailable
deduplication    = unavailable
remainder = 0 MB (0%)
stor_devs = 000187940268-0180,000187940268-0181,000187940268-0182,000187940268-0183
disks     = rootd378,rootd379,rootd380,rootd381

EXAMPLE #1 provides a description of command output.

EXAMPLE #5
----------
To perform a mirror refresh on ufs1_snap1, type:

$ fs_timefinder ufs1_snap1 -Mirror refresh
operation in progress (not interruptible)...
remainder(MB) = 4991..4129..3281..2457..1653..815..0
operation in progress (not interruptible)...
id        = 18
name      = ufs1
acl       = 0
in_use    = True
type      = uxfs
worm      = off
volume    = mtv1
pool      =
rw_servers= server_2
ro_servers=
rw_vdms   =
ro_vdms   =
backups   = ufs1_snap1
auto_ext  = no,thin=no
fast_clone_level = 1
deduplication    = Off
stor_devs = 000187940268-0006,000187940268-0007,000187940268-0008,000187940268-0009
disks     = d3,d4,d5,d6
 disk=d3 stor_dev=000187940268-0006 addr=c0t1l0-48-0 server=server_2
 disk=d3 stor_dev=000187940268-0006 addr=c16t1l0-33-0 server=server_2
 disk=d4 stor_dev=000187940268-0007 addr=c0t1l1-48-0 server=server_2
 disk=d4 stor_dev=000187940268-0007 addr=c16t1l1-33-0 server=server_2
 disk=d5 stor_dev=000187940268-0008 addr=c0t1l2-48-0 server=server_2
 disk=d5 stor_dev=000187940268-0008 addr=c16t1l2-33-0 server=server_2
 disk=d6 stor_dev=000187940268-0009 addr=c0t1l3-48-0 server=server_2
 disk=d6 stor_dev=000187940268-0009 addr=c16t1l3-33-0 server=server_2
id        = 19
name      = ufs1_snap1
acl       = 0
in_use    = False
type      = uxfs
worm      = off
volume    = v456
pool      =
rw_servers=
ro_servers=
rw_vdms   =
ro_vdms   =
backup_of = ufs1 Thu Oct 28 14:25:21 EDT 2011
auto_ext  = no,thin=no
fast_clone_level = unavailable
deduplication    = unavailable
stor_devs = 000187940268-0180,000187940268-0181,000187940268-0182,000187940268-0183
disks     = rootd378,rootd379,rootd380,rootd381

EXAMPLE #1 provides a description of command output.

EXAMPLE #6
----------
To restore the file system copy, ufs1_snap1, to its original location, type:

$ /nas/sbin/rootfs_timefinder ufs1_snap1 -Restore -Force
operation in progress (not interruptible)...
remainder(MB) = 0
operation in progress (not interruptible)...
id        = 19
name      = ufs1_snap1
acl       = 0
in_use    = False
type      = uxfs
worm      = off
volume    = v456
pool      =
rw_servers=
ro_servers=
rw_vdms   =
ro_vdms   =
backup_of = ufs1 Thu Oct 28 14:25:21 EDT 2011
auto_ext  = no,thin=no
fast_clone_level = unavailable
deduplication    = unavailable
stor_devs = 000187940268-0180,000187940268-0181,000187940268-0182,000187940268-0183
disks     = rootd378,rootd379,rootd380,rootd381
id        = 18
name      = ufs1
acl       = 0
in_use    = True
type      = uxfs
worm      = off
volume    = mtv1
pool      =
rw_servers= server_2
ro_servers=
rw_vdms   =
ro_vdms   =
backups   = ufs1_snap1
auto_ext  = no,thin=no
fast_clone_level = 1
deduplication    = Off
stor_devs = 000187940268-0006,000187940268-0007,000187940268-0008,000187940268-0009
disks     = d3,d4,d5,d6
 disk=d3 stor_dev=000187940268-0006 addr=c0t1l0-48-0 server=server_2
 disk=d3 stor_dev=000187940268-0006 addr=c16t1l0-33-0 server=server_2
 disk=d4 stor_dev=000187940268-0007 addr=c0t1l1-48-0 server=server_2
 disk=d4 stor_dev=000187940268-0007 addr=c16t1l1-33-0 server=server_2
 disk=d5 stor_dev=000187940268-0008 addr=c0t1l2-48-0 server=server_2
 disk=d5 stor_dev=000187940268-0008 addr=c16t1l2-33-0 server=server_2
 disk=d6 stor_dev=000187940268-0009 addr=c0t1l3-48-0 server=server_2
 disk=d6 stor_dev=000187940268-0009 addr=c16t1l3-33-0 server=server_2

EXAMPLE #7
----------
To create a snapshot for a mapped pool, type:

$ fs_timefinder ufs1 -name ufs1_snap1 -Snapshot -option pool=bcv_sg
operation in progress (not interruptible)...
remainder(MB) = ..14184..0
operation in progress (not interruptible)...
id        = 87
name      = ufs1
acl       = 0
in_use    = False
type      = uxfs
worm      = off
volume    = mtv1
pool      =
rw_servers=
ro_servers=
rw_vdms   =
ro_vdms   =
backups   = ufs1_snap1
fast_clone_level = 1
deduplication    = Off
auto_ext  = no,thin=no
stor_devs = 000194900546-0037
disks     = d11
id        = 88
name      = ufs1_snap1
acl       = 0
in_use    = False
type      = uxfs
worm      = off
volume    = v456
pool      = bcv_sg
member_of = root_avm_fs_group_49
rw_servers=
ro_servers=
rw_vdms   =
ro_vdms   =
backup_of = ufs1 Fri Oct 1 12:03:10 EDT 2011
auto_ext  = no,thin=no
fast_clone_level = unavailable
deduplication    = unavailable
thin_storage     = False
tiering_policy   = thickfp2
mirrored         = False
stor_devs = 000194900546-003C
disks     = rootd16

Where:
Value          Definition
auto_ext       Indicates whether auto-extension and thin provisioning are enabled.
deduplication  Deduplication state of the file system. The file data is transferred to the storage system, which performs the deduplication and compression on the data. The states are:
               On - Deduplication on the file system is enabled.
               Suspended - Deduplication on the file system is suspended. Deduplication does not perform any new space reduction, but the existing files that were reduced in space remain the same.
               Off - Deduplication on the file system is disabled. Deduplication does not perform any new space reduction and the data is now reduplicated.
thin_storage   Indicates whether the block storage system uses thin provisioning. Values are: True, False, Mixed.
tiering_policy Indicates the tiering policy in effect. If the initial tier and the tiering policy are the same, the values are: Auto-Tier, Highest Available Tier, Lowest Available Tier. If the initial tier and the tiering policy are not the same, the values are: Auto-Tier/No Data Movement, Highest Available Tier/No Data Movement, Lowest Available Tier/No Data Movement.
mirrored       Indicates whether the disk is mirrored.

--------------------------------------
Last Modified: June 5, 2012 12:30 p.m.
Server CLI Commands
This chapter lists the eNAS Command Set provided for managing,
configuring, and monitoring Data Movers. The commands are prefixed with
server_ and appear alphabetically. The command line syntax (Synopsis), a
description of the options, and an example of usage are provided for each
command.
server_archive server_arp server_cdms
server_cepp server_certificate server_checkup
server_cifs server_cifssupport server_cpu
server_date server_dbms server_devconfig
server_df server_dns server_export
server_file server_fileresolve server_ftp
server_http server_ifconfig server_ip
server_kerberos server_ldap server_log
server_mount server_mountpoint server_mpfs
server_mt server_name server_netstat
server_nfs server_nis server_nsdomains
server_param server_pax server_ping
server_ping6 server_rip server_route
server_security server_setup server_snmpd
server_ssh server_standby server_stats
server_sysconfig server_sysstat server_tftp
server_umount server_uptime server_user
server_usermapper server_version server_viruschk
server_vtlu
server_archive

Reads and writes file archives, and copies directory hierarchies.

SYNOPSIS
--------
server_archive
except that there may be hard links between the original and the copied files. The -l option provides more information.

CAUTION
-------
The destination directory must exist and must not be one of the file operands or a member of a file hierarchy rooted at one of the file operands. The result of a copy under these conditions is unpredictable.

While processing a damaged archive during a read or list operation, server_archive attempts to recover from media defects and searches through the archive to locate and process the largest number of archive members possible (the -E option provides more details on error handling).

OPERANDS
--------
The directory operand specifies a destination directory pathname. If the directory operand does not exist, is not writable by the user, or is not a directory name, server_archive exits with a non-zero exit status.

The pattern operand is used to select one or more pathnames of archive members. Archive members are selected using the pattern-matching notation described by fnmatch(3). When the pattern operand is not supplied, all members of the archive are selected. When a pattern matches a directory, the entire file hierarchy rooted at that directory is selected. When a pattern operand does not select at least one archive member, server_archive writes these pattern operands in a diagnostic message to standard error and then exits with a non-zero exit status.

The file operand specifies the pathname of a file to be copied or archived. When a file operand does not select at least one archive member, server_archive writes these file operand pathnames in a diagnostic message to standard error and then exits with a non-zero exit status.

The archive_file operand is the name of a file where the data is stored (write) or read (read/list). The archive_name is the name of the streamer on which the data will be stored (write) or read (read/list).

Note: To obtain the device name, you can use server_devconfig -scsi.
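The pattern operand follows fnmatch(3) notation, which Python's standard fnmatch module also implements, so member selection can be sketched as below. The member pathnames are hypothetical, and `select` is only an illustration of the matching rule, not part of server_archive:

```python
from fnmatch import fnmatchcase

# Hypothetical archive member pathnames.
members = ["etc/hosts", "home/user1/notes.txt", "home/user2/data.db"]

def select(pattern):
    """Return the members matched by an fnmatch(3)-style pattern."""
    return [m for m in members if fnmatchcase(m, pattern)]

print(select("home/*"))   # both home/... members ('*' also crosses '/')
print(select("*.db"))     # ['home/user2/data.db']
# An empty result corresponds to the case where server_archive writes the
# pattern to standard error and exits with a non-zero status.
print(select("var/*"))    # []
```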
OPTIONS
-------
The following options are supported:

-r
Reads an archive file from archive and extracts the specified files. If any intermediate directories are needed to extract an archive member, these directories are created as if mkdir(2) was called with the bit-wise inclusive OR of S_IRWXU, S_IRWXG, and S_IRWXO as the mode argument. When the selected archive format supports the specification of linked files and these files cannot be linked while the archive is being extracted, server_archive writes a diagnostic message to standard error and exits with a non-zero exit status at the completion of the operation.

-w
Writes files to the archive in the specified archive format.

-0 (zero)
With this option, a full referenced backup is performed with the time and date of launching put in a reference file. This reference file is an ASCII file located in /.etc/BackupDates. The backup is
referenced by the pathname of the files to back up and the time and date when the backup was created. This file is updated only if the backup is successful. Backup files can be copied using the server_file command. -
be opened for reading and writing.

-k
Does not allow overwriting existing files.

-l
Links files. In the copy mode (-r -w), hard links are made between the source and destination file hierarchies whenever possible.

-I
Modifies the file or archive member names specified by the pattern or
bcpio
The old binary cpio format. The default blocksize for this format is 5120 bytes.
Note: This format is not very portable and should not be used when other formats are available. Inode and device information about a file (used for detecting file hard links by this format) which may be truncated by this format is detected by server_archive and is repaired.

sv4cpio
The System V release 4 cpio format. The default blocksize for this format is 5120 bytes. Inode and device information about a file (used for detecting file hard links by this format) which may be truncated by this format is detected by server_archive and is repaired.

sv4crc
The System V release 4 cpio format with file crc checksums. The default blocksize for this format is 5120 bytes. Inode and device information about a file (used for detecting file hard links by this format) which may be truncated by this format is detected by server_archive and is repaired.

tar
The old BSD tar format as found in BSD4.3. The default blocksize for this format is 10240 bytes. Pathnames stored by this format must be 100 characters or less in length. Only regular files, hard links, soft links, and directories will be archived (other file system types are not supported).

ustar
The extended tar interchange format specified in the IEEE P1003.2 standard. The default blocksize for this format is 10240 bytes.
Note: Pathnames stored by this format must be 250 characters or less in length (150 for basename and 100 for
Note: This option is the same as the -u option, except that the file inode change time is checked instead of the file modification time. The file inode change time can be used to select files whose inode information (such as uid, gid, and so on) is newer than a copy of the file in the destination directory.

-E limit
Has the following two goals:
. In case of a medium error, to limit the number of consecutive read faults while trying to read a flawed archive to limit. With a positive limit, server_archive attempts to recover from an archive read error and continues processing starting with the next file stored in the archive. A limit of 0 (zero) causes server_archive to stop operation after the first read error is detected on an archive volume. A limit of NONE causes server_archive to attempt to recover from read errors forever.
. In case of no medium error, to limit the number of consecutive valid header searches when an invalid format detection occurs. With a positive value, server_archive attempts to recover from an invalid format detection and continues processing starting with the next file stored in the archive. A limit of 0 (zero) causes server_archive to stop operation after the first invalid header is detected on an archive volume. A limit of NONE causes server_archive to attempt to recover from invalid format errors forever.
The default limit is 10 retries.

CAUTION: Using this option with NONE requires extreme caution, as server_archive may get stuck in an infinite loop on a badly flawed archive.

-J
Backs up, restores, or displays CIFS extended attributes.
p: displays the full pathname for alternate names (for listing and archive only)
u: specifies the UNIX name for pattern search
w: specifies the M256 name for pattern search
d: specifies the M83 name for pattern search

-L
Follows all symbolic links to perform a logical file system traversal.
-N
Used with the -e archive_name option, prevents the tape from rewinding at the end of command execution.

-P
Does not follow symbolic links.
Note: Performs a physical file system traversal. This is the default mode.

-T [from_date][,to_date][/[c][m]]
Allows files to be selected based on a file modification or inode change time falling within a specified time range of from_date to to_date (the dates are inclusive). If only a from_date is supplied, all files with a modification or inode change time equal to or less than the from_date are selected. If only a to_date is supplied, all files with a modification or inode change time equal to or greater than the to_date are selected. When the from_date is equal to the to_date, only files with a modification or inode change time of exactly that time are selected.

When server_archive is in the write or copy mode, the optional trailing field [c][m] can be used to determine which file time (inode change, file modification, or both) is used in the comparison. If neither is specified, the default is to use file modification time only. The m specifies the comparison of file modification time (the time when the file was last written). The c specifies the comparison of inode change time (the time when the file inode was last changed; for example, a change of owner, group, mode, and so on). When c and m are both
specified, then the modification and inode change times are both compared. The inode change time comparison is useful in selecting files whose attributes were recently changed, or selecting files which were recently created and had their modification time reset to an older time (as happens when a file is extracted from an archive and the modification time is preserved). Time comparisons using both file times are useful when server_archive is used to create a time-based incremental archive (only files that were changed during a specified time range will be archived).

A time range is made up of six different fields and each field must contain two digits. The format is:

[yy[mm[dd[hh]]]]mm[ss]

Where yy is the last two digits of the year, the first mm is the month (from 01 to 12), dd is the day of the month (from 01 to 31), hh is the hour of the day (from 00 to 23), the second mm is the minute (from 00 to 59), and ss is seconds (from 00 to 59). The minute field mm is required, while the other fields are optional and must be added in the following order: hh, dd, mm, yy. The ss field may be added independently of the other fields. Time ranges are relative to the current time, so -T 1234/cm selects all files with a modification or inode change time of 12:34 P.M. today or later. Multiple -T time ranges can be supplied, and checking stops with the first match.

-X
When traversing the file hierarchy specified by a pathname, does not allow descending into directories that have a different device ID. See the st_dev field as described in stat(2) for more information about device IDs.

-Y
Ignores files that have a less recent file inode change time than a pre-existing file, or archive member with the same name.
Note: This option is the same as the -D option, except that the inode change time is checked using the pathname created after all the filename modifications have completed.
-Z
Ignores files that are older (having a less recent file modification time) than a pre-existing file, or archive member with the same name.
Note: This option is the same as the -u option, except that the modification time is checked using the pathname created after all the filename modifications have completed.

The options that operate on the names of files or archive members (-c, -i, -n, -s, -u, -v, -D, -T, -Y, and -Z) interact as follows.

When extracting files during a read operation, archive members are selected based only on the user-specified pattern operands as modified by the -c, -n, -u, -D, and -T options. Then any -s and -i options will modify, in that order, the names of those selected files. Then the -Y and -Z options will be applied based on the final pathname. Finally, the -v option will write the names resulting from these modifications.

When archiving files during a write operation, or copying files during a copy operation, archive members are selected based only on the user-specified pathnames as modified by the -n, -u, -D, and -T options (the -D option applies only during a copy operation). Then any -s and -i options will modify, in that order, the names of these selected files. Then during a copy operation, the -Y and the -Z options will be applied based on the final pathname. Finally, the -v option will write the names resulting from these modifications.

When one or both of the -u or -D options are specified along with the -n option, a file is not considered selected unless it is newer than the file to which it is compared.

SEE ALSO
--------
Using the server_archive Utility on VNX.

EXAMPLE #1
----------
To archive the contents of the root directory to the device rst0, type:

$ server_archive
preserve the user ID, group ID, or file mode when the -p option is specified, a diagnostic message is written to standard error, and a non-zero exit status is returned. However, processing continues. In the case where server_archive cannot create a link to a file, this command will not create a second copy of the file.

If the extraction of a file from an archive is prematurely terminated by a signal or error, server_archive may have only partially extracted a file the user wanted. Additionally, the file modes of extracted files and directories may have incorrect file bits, and the modification and access times may be wrong.

If the creation of an archive is prematurely terminated by a signal or error, server_archive may have only partially created the archive, which may violate the specific archive format specification.

If, while doing a copy, server_archive detects that a file is about to overwrite itself, the file is not copied, a diagnostic message is written to standard error, and when server_archive completes, it exits with a non-zero exit status.

--------------------------------------
Last Modified: May 12, 2011 1:15 pm
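The -T option's time fields (two digits each, minute required, with hh, dd, mm, yy added to the left as described under OPTIONS) can be illustrated with a small parser. This is a sketch of the field layout only, assuming the trailing ss field is absent; `parse_t` is illustrative and not part of server_archive:

```python
def parse_t(spec):
    """Split a -T date spec into its two-digit fields, rightmost first.

    Seconds are omitted in this sketch; the minute field is required, and
    hh, dd, mm, yy are peeled off to the left when present.
    """
    names = ["minute", "hour", "day", "month", "year"]
    fields = {}
    for name in names:
        if not spec:
            break
        spec, fields[name] = spec[:-2], int(spec[-2:])
    return fields

# The -T 1234/cm example above: hour 12, minute 34 (12:34 P.M.).
print(parse_t("1234"))        # {'minute': 34, 'hour': 12}
# A full ten-digit spec: 14:30 on the 28th of October, year ..04.
print(parse_t("0410281430"))
```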
server_arp

Manages the Address Resolution Protocol (ARP) table for the Data Movers.

SYNOPSIS
--------
server_arp {
-------------------------------------- Last Modified: March 31, 2010 11:15 am
server_cdms

Provides File Migration Service for VNX functionality for the specified Data Movers.

SYNOPSIS
--------
server_cdms {
[mntPort=
been run. [-include
type    = CIFS
source  = \\winserver1.nasdocs.emc.com\srcdir\
netbios = DM112-CGE0.NASDOCS.EMC.COM
admin   = nasdocs.emc.com\administrator

When migration is started:

$ server_cdms server_2
server_2 :
CDMS enabled with 32 threads.
ufs1:
 path = /nfsdir
  cid    = 0
  type   = NFSV3
  source = 172.24.102.144:/srcdir
  options= proto=TCP
 path = /dstdir
  cid    = 1
  type   = CIFS
  source = \\winserver1.nasdocs.emc.com\srcdir\
  netbios= DM112-CGE0.NASDOCS.EMC.COM
  admin  = nasdocs.emc.com\administrator
 threads:
  path = /dstdir state = ON_GOING log = / cid = NONE

Where:
Value   Definition
ufs1    Migration file system.
path    Directory in the local file system.
cid     Connection ID (0 through 1023).
type    Protocol type to be used to communicate with the remote server.
source  Source file server name or IP address of the remote server and the export path for migration.
options Connection protocol type.
netbios NetBIOS name of the remote CIFS server.
admin   Administrator for the file system.
threads Currently existing migration threads.
state   Current status of migration threads.
log     Location of the log file that provides detailed information.

EXAMPLE #4
----------
To direct server_2 to migrate all files from the source file server to the VNX, type:

$ server_cdms server_2 -start ufs1 -path /dstdir -log /
server_2 : done

EXAMPLE #5
----------
To display information about migration with the specified status, type:

$ server_cdms server_2 -info ufs1 -state ON_GOING
server_2 :
ufs1:
 path = /nfsdir
  cid    = 0
  type   = NFSV3
  source = 172.24.102.144:/srcdir
  options= proto=TCP
 path = /dstdir
  cid    = 1
  type   = CIFS
  source = \\winserver1.nasdocs.emc.com\srcdir\
  netbios= DM112-CGE0.NASDOCS.EMC.COM
  admin  = nasdocs.emc.com\administrator
 threads:
  path = /dstdir state = ON_GOING log = / cid = NONE

EXAMPLE #6
----------
To stop data migration on server_2 for ufs1, type:

$ server_cdms server_2 -halt ufs1 -path /dstdir
server_2 : done

EXAMPLE #7
----------
To check that all data has completed the migration, type:

$ server_cdms server_2 -verify ufs1 -path /dstdir
server_2 : done

EXAMPLE #8
----------
To disconnect the path on server_2 for data migration, type:

$ server_cdms server_2 -disconnect ufs1 -path /nfsdir
server_2 : done

EXAMPLE #9
----------
To disconnect all paths for data migration, type:

$ server_cdms server_2 -disconnect ufs1 -all
server_2 : done

EXAMPLE #10
-----------
To perform a verify check on ufs1, and then convert it to a uxfs, type:

$ server_cdms server_2 -Convert ufs1
server_2 : done

--------------------------------------
Last Modified: March 31, 2010 05:00 pm
server_cepp

Manages the Common Event Publishing Agent (CEPA) service on the specified Data Mover.

SYNOPSIS
--------
server_cepp {
ft size = 1048576
ft location = /.etc/cepp
msrpc user = OMEGA13$
msrpc client name = OMEGA13.CEE.LAB.COM

pool_name  server_required  access_checks_ignored  req_timeout  retry_timeout
pool_1     no               0                      5000         25000

Where:
Value                  Definition
CIFS share name        Name of the shared directory and CIFS server used to access files on the Data Movers.
cifs_server            CIFS server used to access files.
heartbeat_interval     Time taken to scan each CEPA server.
ft level               Fault tolerance level assigned. This option is required: 0 (continue and tolerate lost events; default setting), 1 (continue and use a persistence file as a circular event buffer for lost events), 2 (continue and use a persistence file as a circular event buffer for lost events until the buffer is filled, and then stop CIFS), or 3 (upon heartbeat loss of connectivity, stop CIFS).
ft location            Directory where the persistence buffer file resides, relative to the root of a file system. If a location is not specified, the default location is the root of the file system.
ft size                Maximum size in MB of the persistence buffer file. The default is 1 MB and the range is 1 MB to 100 MB.
msrpc user             Name of the user account that the CEPA service runs under on the CEE machine. For example, ceeuser.
msrpc client name      Domain name assigned if the msrpc user is a member of a domain. For example, domain.ceeuser.
pool_name              Name assigned to the pool that will use the specified CEPA options.
server_required        Availability requirement for the CEPA server. If a CEPA server is not available and this option is yes, an access-denied error is returned to the requestor. If a CEPA server is not available and this option is no, no error is returned and access is allowed.
access_checks_ignored  Number of CIFS requests processed while a CEPA server is unavailable and the server_required option is set to "no." This counter is reset when the CEPA server becomes available.
req_timeout            Timeout, in milliseconds, for sending an access request to the CEPA server.
retry_timeout          Timeout, in milliseconds, for retrying the access request sent to the CEPA server.

EXAMPLE #4
----------
To display information about the CEPA pool, type:

$ server_cepp server_2 -pool -info
server_2 :
pool_name = pool1
server_required = yes
access_checks_ignored = 0
req_timeout = 5000 ms
retry_timeout = 25000 ms
pre_events = OpenFileNoAccess, OpenFileRead
post_events = CreateFile,DeleteFile
post_err_events = CreateFile,DeleteFile
CEPP Servers:
IP = 10.171.10.115, state = ONLINE, vendor = Unknown
...

Where:
Value            Definition
pre_events       Sends notification before the selected event occurs. An empty list indicates that no pre-event messages are generated.
post_events      Sends notification after the selected event occurs. An empty list indicates that no post-event messages are generated.
post_err_events  Sends notification if the selected event generates an error. An empty list indicates that no post-error-event messages are generated.
CEPP Servers     IP addresses of the CEPA servers; state of the CEPA servers; vendor software installed on the CEPA servers.

EXAMPLE #5
----------
To display statistics for the CEPA pool, type:

$ server_cepp server_2 -pool -stats
server_2 :
pool_name = pool1
Event Name     Requests  Min(us)  Max(us)  Average(us)
OpenFileWrite  2         659      758      709
CloseModified  2         604      635      620
Total Requests = 4
Min(us) = 604
Max(us) = 758
Average(us) = 664

--------------------------------------------
Last Modified: April 05 2010, 11:15 am
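The aggregate row in EXAMPLE #5 follows directly from the per-event figures: with two requests per event, each event's Min/Max pair is its complete sample set. This small Python check is illustrative arithmetic only (not an eNAS tool) and reproduces the totals from the transcript above.

```python
# Per-event latency figures from the server_cepp -pool -stats sample above.
# With Requests = 2 for each event, the Min/Max pair is the full sample set.
samples_us = {
    "OpenFileWrite": [659, 758],   # mean 708.5, displayed as 709
    "CloseModified": [604, 635],   # mean 619.5, displayed as 620
}
all_samples = [v for pair in samples_us.values() for v in pair]
total_requests = len(all_samples)               # Total Requests = 4
overall_min = min(all_samples)                  # Min(us) = 604
overall_max = max(all_samples)                  # Max(us) = 758
overall_avg = sum(all_samples) / total_requests # Average(us) = 664.0
```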
server_certificate

Manages the VNX for file system's Public Key Infrastructure (PKI) for the specified Data Movers.

SYNOPSIS
--------
server_certificate {
-persona -info {-all|
done

EXAMPLE #2
----------
To list all the CA certificates currently available on the VNX, type:

$ server_certificate ALL -ca_certificate -list
server_2 :
id=1
subject=O=Celerra Certificate Authority;CN=sorento
issuer=O=Celerra Certificate Authority;CN=sorento
expire=20120318032639Z
id=2
subject=C=US;O=VeriSign, Inc.;OU=Class 3 Public Primary Certification Author
issuer=C=US;O=VeriSign, Inc.;OU=Class 3 Public Primary Certification Author
expire=20280801235959Z
server_3 :
id=1
subject=O=Celerra Certificate Authority;CN=zeus-cs
issuer=O=Celerra Certificate Authority;CN=zeus-cs
expire=20120606181215Z

EXAMPLE #3
----------
To list the properties of the CA certificate identified by certificate ID 2, type:

$ server_certificate server_2 -ca_certificate -info 2
server_2 :
id=2
subject = C=US;O=VeriSign, Inc.;OU=Class 3 Public Primary Certification Authority
issuer = C=US;O=VeriSign, Inc.;OU=Class 3 Public Primary Certification Authority
start = 19960129000000Z
expire = 20280801235959Z
signature alg. = md2WithRSAEncryption
public key alg. = rsaEncryption
public key size = 1024 bits
serial number = 70ba e41d 10d9 2934 b638 ca7b 03cc babf
version = 1

EXAMPLE #4
----------
To generate a key set and certificate request to be sent to an external CA for the persona identified by the persona name default, type:

$ server_certificate server_2 -persona -generate default -key_size 2048 -common_name division.xyz.com
server_2 : Starting key generation. This could take a long time ... done

EXAMPLE #5
----------
To list all the key sets and associated certificates currently available on the VNX, type:

$ server_certificate ALL -persona -list
server_2 :
id=1
name=default
next state=Request Pending
request subject=CN=name;CN=1.2.3.4
server_3 :
id=1
name=default
next state=Not Available
CURRENT CERTIFICATE:
id=1
subject=CN=test;CN=1.2.3.4
expire=20070706183824Z
issuer=O=Celerra Certificate Authority;CN=eng173100

EXAMPLE #6
----------
To list the properties of the key set and certificate identified by persona ID 1, type:

$ server_certificate server_2 -persona -info id=1
server_2 :
id=1
name=default
next state=Request Pending
request subject=CN=name;CN=1.2.3.4
Request:
-----BEGIN CERTIFICATE REQUEST-----
MIIEZjCCAk4CAQAwITENMAsGA1UEAxMEbmFtZTEQMA4GA1UEAxMHMS4yLjMuNDCC
AiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBANKW3Q/F6eMqIxrCO5IeXLET
bWkm5RzrbI5lHxLNuhobR5S9G2o+k47X0QZFkGzq/2F7kR06vVIH7CPH9X2gGAzV
7GmZaFO0wPcktPJYzjQE8guNhcL1qZpPl4IZrbnSGEAWcAAE0nvNwLp9aN0WSC+N
TDJZY4A9yTURiUc+Bs8plhQh16wLLL0zjUKIvKjAqiTE0F3RApVJEE/9y6N+Idsb
Vwf/rvzP6/z0wZW5Hl84HKXInJaHTBDK59G+e/Y2JgvUY1UNBZ5SODunOakHabex
k6COFYjDu7Vd+yHpvcyTalHJ2RcIavpQuM02o+VVpxgUyX7M1+VXJXTJm0yb4j4g
tZITOSVZ2FqEpOkoIpzqoAL7A9B69WpFbbpIX8danhReafDh4oj4yWocvSwMKYv1
33nLak3+wpMQNrwJ2L9FIHP2fXClnvThBgupm7uqqHP3TfNBbBPTYY3qkNPZ78wx
/njUrZKbfWd81Cc+ngUi33hbMuBR3FFsQNASYZUzgl5+JexALH5jhBahd2aRXBag
itQLhvxYK0dEqIEwDfdDedx7i+yro2gbNxhLLdtkuBtKrmOnuT5g2WWXNKzNa/H7
KWv8JSwCv1mW1N/w7V9aEbDizBBfer+ZdMPkGLbyb/EVXZnHABeWH3iKC6/ecnRd
4Kn7KO9F9qXVHlzzTeYVAgMBAAGgADANBgkqhkiG9w0BAQUFAAOCAgEAzSS4ffYf
2WN0vmZ0LgsSBcVHPVEVg+rP/aU9iNM9KDJ4P4OK41UDU8tOGy09Kc8EvklBUm59
fyjt2T/3RqSgvvkCEHJsVW3ZMnSsyjDo6Ruc0HmuY4q+tuLl+dilSQnZGUxt8asw
dhEpdEzXA6o9cfmVZMSt5QicfAmmBNr4BaO96+VAlg59fu/chU1pvKWWMGXz4I2s
7z+UdMBYO4pEfyG1i34Qof/z4K0SVNICn3CEkW5TIsSt8qA/E2JXXlLhbMYWKYuY
9ur/gspHuWzkIXZFx4SmTK9/RsE1Vy7fBztIoN8myFN0nma84D9pyqls/yhvXZ/D
iDF6Tgk4RbNzuanRBSYiJFu4Tip/nJlK8uv3ZyFJ+3DK0c8ozlBLuQdadxHcJglt
m/T4FsHa3JS+D8CdA3uDPfIvvVNcwP+4RBK+Dk6EyQe8uKrVL7ShbacQCUXn0AAd
Ol+DQYFQ7Mczcm84L98srhov3JnIEKcjaPseB7S9KtHvHvvs4q1lQ5U2RjQppykZ
qpSFnCbYDGjOcqOrsqNehV9F4h9fTszEdUY1UuLgvtRj+FTT2Ik7nMK641wfVtSO
LCial6kuYsZg16SFxncnH5gKHtQMWxd9nv+UyJ5VwX3aN12N0ZQbaIDcQp75Em2E
aKjd28cZ6FEavimn69sz0B8PHQV+6dPwywM=
-----END CERTIFICATE REQUEST-----

EXAMPLE #7
----------
To generate a key set and certificate request that is automatically received by the Control Station for the persona identified by the persona name default, type:

$ server_certificate server_2 -persona -generate default -key_size 2048 -cs_sign_duration 12 -common_name division.xyz.com
server_2 : Starting key generation. This could take a long time ... done

EXAMPLE #8
----------
To generate a key set and certificate request to be sent to an external
CA specifying subject information, type:

$ server_certificate server_2 -persona -generate default -key_size 2048 -common_name division.xyz.com -ou QA -organization XYZ -location Bethesda -state Maryland -country US -filename /tmp/server_2.1.request.pem
server_2 : Starting key generation. This could take a long time ... done

EXAMPLE #9
----------
To import a signed certificate and paste the certificate text, type:

$ server_certificate server_2 -persona -import default
server_2 :
Please paste certificate data.
Enter a carriage return and on the new line type 'end of file' or 'eof' followed by another carriage return.

-----------------------------------------------------
Last Modified: March 31, 2010 12:45 pm
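The start/expire fields that server_certificate prints (for example, expire=20280801235959Z in EXAMPLE #3) are UTC timestamps in YYYYMMDDHHMMSSZ form. A script that flags certificates nearing expiry can parse them with the standard library; this is a hedged sketch, and the helper name is an assumption, not an eNAS utility.

```python
# Sketch (not an eNAS tool): convert the UTC timestamps printed by
# server_certificate (YYYYMMDDHHMMSSZ) into datetime objects.
from datetime import datetime

def parse_cert_time(stamp):
    """Parse a timestamp such as '20280801235959Z' as printed by server_certificate."""
    return datetime.strptime(stamp, "%Y%m%d%H%M%SZ")

# Values taken from EXAMPLE #3 above.
start = parse_cert_time("19960129000000Z")
expire = parse_cert_time("20280801235959Z")
lifetime = expire - start  # a timedelta; compare against a warning threshold
```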
server_checkup

Checks the configuration parameters and state of a Data Mover and its dependencies.

SYNOPSIS
--------
server_checkup {
EXAMPLE #2
----------
To execute the check of the CIFS component, type:

$ server_checkup server_2 -test CIFS
server_2 :
------------------------------------Checks--------------------------------------
Component CIFS :
ACL       : Checking the number of ACL per file system.....................*Pass
Connection: Checking the load of TCP connections of CIFS................... Pass
Credential: Checking the validity of credentials........................... Pass
DC        : Checking the connectivity and configuration of the DCs.........*Fail
DFS       : Checking the DFS configuration files and DFS registry.......... Pass
DNS       : Checking the DNS configuration and connectivity to DNS servers. Pass
EventLog  : Checking the configuration of Windows Event Logs............... Pass
FS_Type   : Checking if all file systems are all DIR3 type................. Pass
GPO       : Checking the GPO configuration................................. Pass
HomeDir   : Checking the configuration of home directory share............. Pass
I18N      : Checking the I18N mode and the Unicode/UTF8 translation tables. Pass
Kerberos  : Checking machine password update for Kerberos.................. Fail
LocalGrp  : Checking the local groups database configuration............... Fail
NIS       : Checking the connectivity to the NIS servers, if defined....... Pass
NTP       : Checking the connectivity to the NTP servers, if defined....... Pass
Ntxmap    : Checking the ntxmap configuration file......................... Pass
Security  : Checking the CIFS security settings............................ Pass
Server    : Checking the CIFS files servers configuration.................. Pass
Share     : Checking the network shares database........................... Pass
SmbList   : Checking the range availability of SMB ID......................*Pass
Threads   : Checking for CIFS blocked threads.............................. Pass
UM_Client : Checking for the connectivity to usermapper servers, if any.... Pass
UM_Server : Checking the consistency of usermapper database, if primary....*Pass
UnsupOS   : Checking for unsupported client network OS..................... Pass
UnsupProto: Checking for unsupported client network protocols.............. Pass
VC        : Checking the configuration to Virus Checker servers............ Pass
WINS      : Checking for the connectivity to WINS servers, if defined...... Pass

NB: a result with a * means that some tests were not executed. Use -full to run them.
--------------------------------------------------------------------------------
---------------------------CIFS : Kerberos Warnings-----------------------------
Warning 17451974742: server_2 :
No update of the machine password of server DM102-CGE1; update on hold.
--> Check the log events to find out the reason of this issue.
Warning 17451974742: server_2 :
No update of the machine password of server DM102-CGE0; update on hold.
--> Check the log events to find out the reason of this issue.
---------------------------CIFS : LocalGrp Warnings-----------------------------
Warning 17451974726: server_2 :
The local group Guests of server DM102-CGE1 contains an unmapped member:
S-1-5-15-60415a8a-335a7a0d-6b635f23-202. The access to some network resources may be refused.
--> According to the configured resolver of your system (NIS, etc config files, usermapper, LDAP...), add the missing members.
--------------------------------------------------------------------------------
-------------------------------CIFS : DC Errors---------------------------------
Error 13160939577: server_2 :
pingdc failed due to NT error ACCESS_DENIED at step SAMR lookups
--> Check server configuration and/or DC policies according to reported error.
Error 13160939577: server_2 :
pingdc failed due to NT error ACCESS_DENIED at step SAMR lookups
--> Check server configuration and/or DC policies according to reported error.
--------------------------------------------------------------------------------

EXAMPLE #3
----------
To execute only the check of the DNS dependency of the CIFS component, type:

$ server_checkup server_2 -test CIFS -subtest DNS
server_2 :
------------------------------------Checks--------------------------------------
Component CIFS :
DNS : Checking the DNS configuration and connectivity to DNS servers. Pass
--------------------------------------------------------------------------------

EXAMPLE #4
----------
To list the available dependencies of the CIFS component, type:

$ server_checkup server_2 -info CIFS
server_2 : done

COMPONENT   : CIFS
DEPENDENCY  : ACL
DESCRIPTION : Number of ACL per file system.
TESTS       : In full mode, check if the number of ACL per file system doesn't exceed 90% of the maximum limit.

COMPONENT   : CIFS
DEPENDENCY  : Connection
DESCRIPTION : TCP connection number.
TESTS       : Check if the number of CIFS TCP connections doesn't exceed 80% of the maximum number.

COMPONENT   : CIFS
DEPENDENCY  : Credential
DESCRIPTION : Users and groups not mapped.
TESTS       : Check if all credentials in memory are mapped to a valid SID.

COMPONENT   : CIFS
DEPENDENCY  : DC
DESCRIPTION : Connectivity to the domain controllers.
TESTS       : Check the connectivity to the favorite DC (DCPing),
              In full mode, check the connectivity to all DCs of the domain,
              Check if DNS site information is defined for each computer name,
              Check if the site of each computer name has an available DC,
              Check if the trusted domains of each computer name can be reached,
              Check the ds.useDCLdapPing parameter is enabled,
              Check the ds.useADSite parameter is enabled.

COMPONENT   : CIFS
DEPENDENCY  : DFS
DESCRIPTION : DFS service configuration on computer names.
TESTS       : Check the DFS service is enabled in the registry if DFS metadata exists,
              Check the DFS metadata of each share with the DFS flag is correct,
              Check if share names in DFS metadata are valid and have the DFS flag,
              Check if each DFS link is valid and loaded,
              Check in the registry if the WideLink key is enabled and corresponds to a valid share name.
COMPONENT   : CIFS
DEPENDENCY  : DNS
DESCRIPTION : DNS domain configuration.
TESTS       : Check if each DNS domain has at least 2 defined servers,
              Check the connectivity to each DNS server of each DNS domain,
              Check if each DNS server of each DNS domain really supports the DNS service,
              Check the ds.useDSFile parameter (automatic discovery of DC),
              Check the ds.useDSFile parameter is enabled if the directoryservice file exists.

COMPONENT   : CIFS
DEPENDENCY  : EventLog
DESCRIPTION : Event Logs parameters on servers.
TESTS       : Check if the pathnames of the event log files are valid (application, system and security),
              Check if the maximum file size of each event log file doesn't exceed 1GB,
              Check if the retention time of each event log file doesn't exceed 1 month.

COMPONENT   : CIFS
DEPENDENCY  : FS_Type
DESCRIPTION : DIR3 mode of file systems.
TESTS       : Check if each file system is configured in the DIR3 mode.

COMPONENT   : CIFS
DEPENDENCY  : GPO
DESCRIPTION : GPO configuration on Win2K servers.
TESTS       : Check if the size of the GPO cache file doesn't exceed 10% of the total size of the root file system,
              Check the last modification date of the GPO cache file is up-to-date,
              Check the cifs.gpo and cifs.gpoCache parameters have not been changed.

COMPONENT   : CIFS
DEPENDENCY  : HomeDir
DESCRIPTION : Home directory shares configuration.
TESTS       : Check if the home directory shares configuration file exists and the feature is enabled,
              Check if the home directory shares configuration file is optimized (40 lines maximum),
              Check the syntax of the home directory shares configuration file.

COMPONENT   : CIFS
DEPENDENCY  : I18N
DESCRIPTION : Internationalization and translation tables.
TESTS       : Check if a computer name exists and the I18N mode is enabled,
              Check the .etc_common file system is correctly mounted,
              Check the syntax of the definition file of the Unicode characters,
              Check the uppercase/lowercase conversion table of Unicode characters is valid.

COMPONENT   : CIFS
DEPENDENCY  : Kerberos
DESCRIPTION : Kerberos configuration.
TESTS       : Check the machine password update is enabled and up-to-date.
COMPONENT   : CIFS
DEPENDENCY  : LocalGrp
DESCRIPTION : Local groups and local users.
TESTS       : Check the local group database doesn't contain more than 80% of the maximum number of servers,
              Check if the servers in the local group database are all valid servers,
              Check the state of the local group database (initialized and writable),
              Check if the members of built-in local groups are all resolved in the domain,
              Check the number of built-in local groups and built-in local users,
              Check if the number of defined local users doesn't exceed 90% of the maximum number.

COMPONENT   : CIFS
DEPENDENCY  : NIS
DESCRIPTION : Network Information System (NIS) configuration.
TESTS       :
              If NIS is configured, check at least 2 NIS servers are defined (redundancy check),
              Check if each NIS server can be contacted on the network,
              Check if each NIS server really supports the NIS service.

COMPONENT   : CIFS
DEPENDENCY  : NTP
DESCRIPTION : Network Time Protocol (NTP) configuration.
TESTS       : If NTP is configured, check at least 2 NTP servers are defined (redundancy check),
              Check if each NTP server can be contacted on the network,
              If computer names exist, check if NTP is configured and is running.

COMPONENT   : CIFS
DEPENDENCY  : Ntxmap
DESCRIPTION : Checking the ntxmap.conf file.
TESTS       : Check the data consistency of the ntxmap configuration file.

COMPONENT   : CIFS
DEPENDENCY  : Security
DESCRIPTION : Security settings.
TESTS       : If the I18N mode is enabled, check the share/unix security setting is not in use,
              Discourage use of the share/unix security setting,
              Check the cifs.checkAcl parameter is enabled if the security setting is set to NT.

COMPONENT   : CIFS
DEPENDENCY  : Server
DESCRIPTION : File servers.
TESTS       : Check if each CIFS server is configured with a valid IP interface,
              Check if each computer name has joined its domain,
              Check if each computer name is correctly registered in its DNS servers,
              Check if the DNS servers have the valid IP addresses of each computer name,
              Check if a DNS domain exists if at least one computer name exists.

COMPONENT   : CIFS
DEPENDENCY  : Share
DESCRIPTION : Network shares.
TESTS       : Check the available size and i-nodes on the root file system are at least 10% of the total size,
              Check the size of the share database doesn't exceed 30% of the total size of the root file system,
              Check if the pathname of each share is valid and is available,
              Check if each server in the share database really exists,
              If the I18N mode is enabled, check all the share names are UTF-8 compatible,
              Check the list of ACLs of each share contains some ACEs,
              Check the length of each share name doesn't exceed 80 Unicode characters.
COMPONENT   : CIFS
DEPENDENCY  : SmbList
DESCRIPTION : 64k UID, TID and FID limits.
TESTS       : In full mode, check the 3 SMB ID lists (UID, FID and TID) don't exceed 90% of the maximum ID number.

COMPONENT   : CIFS
DEPENDENCY  : Threads
DESCRIPTION : Blocked threads and overload.
TESTS       : Check for CIFS threads blocked for more than 5 and 30 seconds,
              Check the maximum number of CIFS threads in use in the last 5 minutes doesn't exceed 90% of the total number,
              Check the number of threads reserved for Virus Checker doesn't exceed 20% of the total number of CIFS threads.
COMPONENT   : CIFS
DEPENDENCY  : UM_Client
DESCRIPTION : Connectivity to the usermapper servers.
TESTS       : If usermapper servers are defined, check each server can be contacted,
              If usermapper servers are defined, check NIS is not simultaneously activated.

COMPONENT   : CIFS
DEPENDENCY  : UM_Server
DESCRIPTION : Primary usermapper server.
TESTS       : If a primary usermapper is defined locally, check its database size doesn't exceed 30% of the total size,
              If the configuration file is in use, check the filling rate of the ranges doesn't exceed 90%,
              If the configuration file is in use, check that no 2 ranges overlap,
              Check if secmap is enabled,
              In full mode, check the SID/UID and SID/GID mappings and their reverses are correct and coherent.

COMPONENT   : CIFS
DEPENDENCY  : UnsupOS
DESCRIPTION : Client OS not supported.
TESTS       : Check for unsupported client network OS.

COMPONENT   : CIFS
DEPENDENCY  : UnsupProto
DESCRIPTION : Unsupported protocol commands detected.
TESTS       : Check for unsupported client network protocol commands.

COMPONENT   : CIFS
DEPENDENCY  : VC
DESCRIPTION : Virus checker configuration.
TESTS       : If VC is enabled, check the syntax of the VC configuration file,
              Check if the VC enable file and the VC configuration are compatible,
              Check the number of VC servers; make sure at least 2 servers are defined, for redundancy,
              Check if there are offline VC servers,
              Check if the VC high watermark has not been reached,
              Check the connection of VC servers to the Data Mover.

COMPONENT   : CIFS
DEPENDENCY  : WINS
DESCRIPTION : WINS servers.
TESTS       : If NetBIOS names are defined, check if at least one WINS server is defined,
              Check the number of WINS servers; check if two servers are defined for redundancy,
              Check if each WINS server can be contacted on the network,
              Check these servers are really WINS servers,
              Check if the NetBIOS names are correctly registered on the servers.
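For monitoring, the Pass/Fail column of a server_checkup transcript (as in EXAMPLE #2) can be scraped into a dictionary. The following is a hypothetical post-processing sketch, not a server_checkup option; the regex assumes the `Dep : Checking ... Pass/Fail` layout shown above, where a `*` before the result marks a partially executed test.

```python
# Hypothetical scraper (not part of server_checkup): collect each
# dependency's Pass/Fail verdict, noting partial runs flagged with '*'.
import re

LINE = re.compile(r"^(\w+)\s*:\s*Checking .*?(\*?)(Pass|Fail)\s*$")

def summarize(transcript):
    """Map dependency name -> (verdict, partially_executed)."""
    results = {}
    for line in transcript.splitlines():
        m = LINE.match(line.strip())
        if m:
            dep, partial, verdict = m.groups()
            results[dep] = (verdict, bool(partial))
    return results

sample = """\
ACL : Checking the number of ACL per file system.....................*Pass
DC : Checking the connectivity and configuration of the DCs.........*Fail
DNS : Checking the DNS configuration and connectivity to DNS servers. Pass
"""
res = summarize(sample)
```

A result such as `("Pass", True)` would suggest rerunning with -full so the skipped tests also execute.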
EXAMPLE #5
----------
To execute additional tests, type:

$ server_checkup server_2 -full
server_2 :
------------------------------------Checks--------------------------------------
Component REPV2 :
F_RDE_CHECK: Checking the F-RDE compatibility of Repv2 sessions........... Fail
Component HTTPS :
HTTP      : Checking the configuration of HTTP applications................ Pass
SSL       : Checking the configuration of SSL applications................. Pass
Component CIFS :
ACL       : Checking the number of ACL per file system..................... Pass
Connection: Checking the load of TCP connections of CIFS................... Pass
Credential: Checking the validity of credentials........................... Pass
DC        : Checking the connectivity and configuration of the DCs......... Fail
DFS       : Checking the DFS configuration files and DFS registry.......... Pass
DNS       : Checking the DNS configuration and connectivity to DNS servers. Pass
EventLog  : Checking the configuration of Windows Event Logs............... Pass
FS_Type   : Checking if all file systems are all DIR3 type................. Pass
GPO       : Checking the GPO configuration................................. Pass
HomeDir   : Checking the configuration of home directory share............. Pass
I18N      : Checking the I18N mode and the Unicode/UTF8 translation tables. Pass
Kerberos  : Checking machine password update for Kerberos.................. Fail
LocalGrp  : Checking the local groups database configuration............... Fail
NIS       : Checking the connectivity to the NIS servers, if defined....... Pass
NTP       : Checking the connectivity to the NTP servers, if defined....... Pass
Ntxmap    : Checking the ntxmap configuration file......................... Pass
Security  : Checking the CIFS security settings............................ Pass
Server    : Checking the CIFS files servers configuration.................. Pass
Share     : Checking the network shares database........................... Pass
SmbList   : Checking the range availability of SMB ID...................... Pass
Threads   : Checking for CIFS blocked threads.............................. Pass
UM_Client : Checking for the connectivity to usermapper servers, if any.... Pass
UM_Server : Checking the consistency of usermapper database, if primary.... Pass
UnsupOS   : Checking for unsupported client network OS..................... Pass
UnsupProto: Checking for unsupported client network protocols.............. Pass
VC        : Checking the configuration to Virus Checker servers............ Pass
WINS      : Checking for the connectivity to WINS servers, if defined...... Pass
Component FTPDS :
FS_Type   : Checking if all file systems are in the DIR3 format............ Pass
FTPD      : Checking the configuration of FTPD............................. Fail
NIS       : Checking the connectivity to the NIS servers................... Pass
NS        : Checking the naming services configuration..................... Fail
NTP       : Checking the connectivity to the NTP servers................... Fail
SSL       : Checking the configuration of SSL applications................. Fail
--------------------------------------------------------------------------------
--------------------------HTTPS : SSL Warnings----------------------------
Warning 17456169084: server_2 :
The SSL feature DHSM can not get certificate from the persona default.
Because this feature needs a certificate and a private key, it can not start.
--> Run the server_certificate command to generate a new key set and certificate for this persona.
Or run the appropriate command (like server_http for instance) to set a correct persona for this SSL feature.
Warning 17456169084: server_2 :
The SSL feature DIC can not get certificate from the persona default.
Because this feature needs a certificate and a private key, it can not start.
--> Run the server_certificate command to generate a new key set and certificate for this persona.
Or run the appropriate command (like server_http for instance) to set a correct persona for this SSL feature.
Warning 17456169084: server_2 :
The SSL feature DIC_S can not get certificate from the persona default.
Because this feature needs a certificate and a private key, it can not start.
--> Run the server_certificate command to generate a new key set and certificate for this persona. Or run the appropriate command (like server_http for instance) to set a correct persona for this SSL feature. Warning 17456169084: server_2 : The SSL feature DIC_L can not get certificate from the persona default. Because this feature needs a certificate and a private key, it can not start, --> Run the server_certificate command to generate a new key set and certificate for this persona. Or run the appropriate command (like server_http for instance) to set a correct persona for this SSL feature. Warning 17456169084: server_2 : The SSL feature DBMS_FILE_TRANSFER can not get certificate from the persona default. Because this feature needs a certificate and a private key, it can not start, --> Run the server_certificate command to generate a new key set and certificate for this persona. Or run the appropriate command (like server_http for instance) to set a correct persona for this SSL feature. -----------------------CIFS : Credential Warnings------------------------- Warning 17456168968: server_2 : The CIFS service is currently stopped. Many CIFS sanity check tests cannot be done as all CIFS servers are currently disabled on this Data Mover. --> Start the CIFS server by executing the server_setup command, and try again. ---------------------------CIFS : DC Warnings----------------------------- Warning 17456168968: server_2 : The CIFS service is currently stopped. Many CIFS sanity check tests cannot be done as all CIFS servers are currently disabled on this Data Mover. --> Start the CIFS server by executing the server_setup command, and try again. --------------------------CIFS : DFS Warnings----------------------------- Warning 17456168968: server_2 : The CIFS service is currently stopped. Many CIFS sanity check tests cannot be done as all CIFS servers are currently disabled on this Data Mover. 
--> Start the CIFS server by executing the server_setup command, and try again. ------------------------CIFS : EventLog Warnings-------------------------- Warning 17456168968: server_2 : The CIFS service is currently stopped. Many CIFS sanity check tests cannot be done as all CIFS servers are currently disabled on this Data Mover. --> Start the CIFS server by executing the server_setup command, and try again. ------------------------CIFS : HomeDir Warnings--------------------------- Warning 17456168968: server_2 : The CIFS service is currently stopped. Many CIFS sanity check tests cannot be done as all CIFS servers are currently disabled on this Data Mover. --> Start the CIFS server by executing the server_setup command, and try again. --------------------------CIFS : I18N Warnings---------------------------- Warning 17456168968: server_2 : The CIFS service is currently stopped. Many CIFS sanity check tests cannot be done as all CIFS servers are currently disabled on this Data Mover. --> Start the CIFS server by executing the server_setup command, and try again. ------------------------CIFS : Kerberos Warnings-------------------------- Warning 17456168968: server_2 : The CIFS service is currently stopped. Many CIFS sanity check tests cannot be done as all CIFS servers are currently disabled on this Data Mover.
--> Start the CIFS server by executing the server_setup command, and try again.
------------------------CIFS : LocalGrp Warnings--------------------------
Warning 17456168968: server_2 : The CIFS service is currently stopped. Many CIFS
sanity check tests cannot be done as all CIFS servers are currently disabled on
this Data Mover.
--> Start the CIFS server by executing the server_setup command, and try again.
--------------------------CIFS : NTP Warnings-----------------------------
Warning 17456169044: server_2 : The Network Time Protocol subsystem (NTP) has
been stopped or is not connected to its server. It may cause potential errors
during Kerberos authentication (timeskew).
--> If the NTP service is not running, start it using the server_date command.
If it is not connected, check the IP address of the NTP server and make sure
the NTP service is up and running on the server. If needed, add another NTP
server in the configuration of the Data Mover. Use the server_date command to
manage the NTP service and the parameters on the Data Mover.
-------------------------CIFS : Secmap Warnings---------------------------
Warning 17456168968: server_2 : The CIFS service is currently stopped. Many CIFS
sanity check tests cannot be done as all CIFS servers are currently disabled on
this Data Mover.
--> Start the CIFS server by executing the server_setup command, and try again.
-------------------------CIFS : Server Warnings---------------------------
Warning 17456168968: server_2 : The CIFS service is currently stopped. Many CIFS
sanity check tests cannot be done as all CIFS servers are currently disabled on
this Data Mover.
--> Start the CIFS server by executing the server_setup command, and try again.
-------------------------CIFS : Share Warnings----------------------------
Warning 17456168968: server_2 : The CIFS service is currently stopped. Many CIFS
sanity check tests cannot be done as all CIFS servers are currently disabled on
this Data Mover.
--> Start the CIFS server by executing the server_setup command, and try again.
------------------------CIFS : SmbList Warnings---------------------------
Warning 17456168968: server_2 : The CIFS service is currently stopped. Many CIFS
sanity check tests cannot be done as all CIFS servers are currently disabled on
this Data Mover.
--> Start the CIFS server by executing the server_setup command, and try again.
--------------------------CIFS : WINS Warnings----------------------------
Warning 17456168968: server_2 : The CIFS service is currently stopped. Many CIFS
sanity check tests cannot be done as all CIFS servers are currently disabled on
this Data Mover.
--> Start the CIFS server by executing the server_setup command, and try again.
--------------------------FTPDS : NTP Warnings----------------------------
Warning 17456169044: server_2 : The Network Time Protocol subsystem (NTP) has
been stopped or is not connected to its server. It may cause potential errors
during Kerberos authentication (timeskew).
--> If the NTP service is not running, start it using the server_date command.
If it is not connected, check the IP address of the NTP server and make sure
the NTP service is up and running on the server. If needed, add another NTP
server in the configuration of the Data Mover. Use the server_date command to
manage the NTP service and the parameters on the Data Mover.
--------------------------FTPDS : SSL Warnings----------------------------
Warning 17456169084: server_2 : The SSL feature DHSM can not get certificate
from the persona default. Because this feature needs a certificate and a
private key, it can not start,
--> Run the server_certificate command to generate a new key set and
certificate for this persona. Or run the appropriate command (like server_http
for instance) to set a correct persona for this SSL feature.
Warning 17456169084: server_2 : The SSL feature DIC can not get certificate
from the persona default. Because this feature needs a certificate and a
private key, it can not start,
--> Run the server_certificate command to generate a new key set and
certificate for this persona. Or run the appropriate command (like server_http
for instance) to set a correct persona for this SSL feature.
Warning 17456169084: server_2 : The SSL feature DIC_S can not get certificate
from the persona default. Because this feature needs a certificate and a
private key, it can not start,
--> Run the server_certificate command to generate a new key set and
certificate for this persona. Or run the appropriate command (like server_http
for instance) to set a correct persona for this SSL feature.
Warning 17456169084: server_2 : The SSL feature DIC_L can not get certificate
from the persona default. Because this feature needs a certificate and a
private key, it can not start,
--> Run the server_certificate command to generate a new key set and
certificate for this persona. Or run the appropriate command (like server_http
for instance) to set a correct persona for this SSL feature.
Warning 17456169084: server_2 : The SSL feature DBMS_FILE_TRANSFER can not get
certificate from the persona default. Because this feature needs a certificate
and a private key, it can not start,
--> Run the server_certificate command to generate a new key set and
certificate for this persona. Or run the appropriate command (like server_http
for instance) to set a correct persona for this SSL feature.
--------------------------------------------------------------------------------
-----------------------REPV2 : F_RDE_CHECK Errors-------------------------
Error 13160415855: server_2 : For the Replication session: rep1,
Data Mover version on the source fs: 5.6.47
Data Mover version on the destination fs: 5.5.5
Minimum required Data Mover version on the destination fs: 5.6.46
The Data Mover version on the destination file system is incompatible with the
Data Mover version on the source file system. After data transfer, the data in
the destination file system may appear to be corrupt, even though the data is
in fact intact.
Upgrade the Data Mover where the destination file system resides to at least
5.6.46.
Error 13160415855: server_2 : For the Replication session: rsd1,
F-RDE version on the source fs: 5.6.46
F-RDE version on the destination fs: 5.5.5
Minimum required F-RDE version on the destination fs: 5.6.46
The F-RDE versions are incompatible. After data transfer, the data in the dst
FS may appear to be corrupt.
--> Upgrade the DataMover where the dst fs resides to atleast the version on
the source.
Error 13160415855: server_2 : For the Replication session: rsd2,
F-RDE version on the source fs: 5.6.46
F-RDE version on the destination fs: 5.5.5
Minimum required F-RDE version on the destination fs: 5.6.46
The F-RDE versions are incompatible. After data transfer, the data in the dst
FS may appear to be corrupt.
--> Upgrade the DataMover where the dst fs resides to atleast the version on
the source.
Error 13160415855: server_2 : For the Replication session: rsd3,
F-RDE version on the source fs: 5.6.46
F-RDE version on the destination fs: 5.5.5
Minimum required F-RDE version on the destination fs: 5.6.46
The F-RDE versions are incompatible. After data transfer, the data in the dst
FS may appear to be corrupt.
--> Upgrade the DataMover where the dst fs resides to atleast the version on
the source.
---------------------------HTTPS : SSL Errors-----------------------------
Error 13156876314: server_2 : The persona default contains nor certificate
neither private keys sets. So, this persona can not be used by a SSL feature
on the Data Mover.
--> Run the server_certificate command to generate a new key set and
certificate for this persona.
---------------------------CIFS : DNS Errors------------------------------
Error 13161070637: server_2 : The DNS service is currently stopped and does not
contact any DNS server. The CIFS clients may not be able to access the Data
Mover on the network.
--> Start the DNS service on the Data Mover, using the server_dns command.
-----------------------------CIFS : NS Errors-----------------------------
Error 13156352011: server_2 : None of the naming services defined for the
entity host in nsswitch.conf is configured.
--> Make sure each entity (e.g. host, passwd..) in the nsswitch.conf file
contains naming services, (e.g. local files, NIS or usermapper), and make sure
these services are configured. Use the corresponding commands like server_nis,
server_dns or server_ldap to make sure they are configured.
Error 13156352011: server_2 : None of the naming services defined for the
entity group in nsswitch.conf is configured.
--> Make sure each entity (e.g. host, passwd..) in the nsswitch.conf file
contains naming services, (e.g. local files, NIS or usermapper), and make sure
these services are configured. Use the corresponding commands like server_nis,
server_dns or server_ldap to make sure they are configured.
Error 13156352011: server_2 : None of the naming services defined for the
entity netgroup in nsswitch.conf is configured.
--> Make sure each entity (e.g. host, passwd..) in the nsswitch.conf file
contains naming services, (e.g. local files, NIS or usermapper), and make sure
these services are configured. Use the corresponding commands like server_nis,
server_dns or server_ldap to make sure they are configured.
--------------------------FTPDS : FTPD Errors-----------------------------
Error 13156876314: server_2 : The persona default contains nor certificate
neither private keys sets. So, this persona can not be used by a SSL feature
on the Data Mover.
--> Run the server_certificate command to generate a new key set and
certificate for this persona.
---------------------------FTPDS : NS Errors------------------------------
Error 13156352011: server_2 : None of the naming services defined for the
entity host in nsswitch.conf is configured.
--> Make sure each entity (e.g. host, passwd..) in the nsswitch.conf file
contains naming services, (e.g. local files, NIS or usermapper), and make sure
these services are configured. Use the corresponding commands like server_nis,
server_dns or server_ldap to make sure they are configured.
Error 13156352011: server_2 : None of the naming services defined for the
entity group in nsswitch.conf is configured.
--> Make sure each entity (e.g. host, passwd..) in the nsswitch.conf file
contains naming services, (e.g. local files, NIS or usermapper), and make sure
these services are configured. Use the corresponding commands like server_nis,
server_dns or server_ldap to make sure they are configured.
Error 13156352011: server_2 : None of the naming services defined for the
entity netgroup in nsswitch.conf is configured.
--> Make sure each entity (e.g. host, passwd..) in the nsswitch.conf file
contains naming services, (e.g. local files, NIS or usermapper), and make sure
these services are configured. Use the corresponding commands like server_nis,
server_dns or server_ldap to make sure they are configured.
---------------------------FTPDS : SSL Errors-----------------------------
Error 13156876314: server_2 : The persona default contains nor certificate
neither private keys sets. So, this persona can not be used by a SSL feature
on the Data Mover.
--> Run the server_certificate command to generate a new key set and
certificate for this persona.
--------------------------------------------------------------------------------
Total : 14 errors, 25 warnings
--------------------------------------------------------------------------------

EXAMPLE #6
----------
To display only the number of errors and warnings for a Data Mover and
dependency, type:

$ server_checkup server_2 -quiet

server_2 :
------------------------------------Checks--------------------------------------
Component REPV2 :
 F_RDE_CHEC: Checking the F-RDE compatibilty of Repv2 sessions.......... Fail
Component HTTPS :
 HTTP      : Checking the configuration of HTTP applications............ Pass
 SSL       : Checking the configuration of SSL applications............. Pass
Component CIFS :
 ACL       : Checking the number of ACLs per file system................*Pass
 Connection: Checking the load of CIFS TCP connections.................. Pass
 Credential: Checking the validity of credentials....................... Fail
 DC        : Checking the connectivity and configuration of Domain Controlle Fail
 DFS       : Checking the DFS configuration files and DFS registry...... Fail
 DNS       : Checking the DNS configuration and connectivity to DNS servers. Fail
 EventLog  : Checking the configuration of Windows Event Logs........... Fail
 FS_Type   : Checking if all file systems are in the DIR3 format........ Pass
 GPO       : Checking the GPO configuration............................. Pass
 HomeDir   : Checking the configuration of home directory shares........ Fail
 I18N      : Checking the I18N mode and the Unicode/UTF8 translation tables. Fail
 Kerberos  : Checking password updates for Kerberos..................... Fail
 LDAP      : Checking the LDAP configuration............................ Pass
 LocalGrp  : Checking the database configuration of local groups........ Fail
 NIS       : Checking the connectivity to the NIS servers............... Pass
 NS        : Checking the naming services configuration................. Fail
 NTP       : Checking the connectivity to the NTP servers............... Fail
 Ntxmap    : Checking the ntxmap configuration file..................... Pass
 Secmap    : Checking the SECMAP database............................... Fail
 Security  : Checking the CIFS security settings........................ Pass
 Server    : Checking the CIFS file servers configuration............... Fail
 Share     : Checking the network shares database....................... Fail
 SmbList   : Checking the range availability of SMB IDs.................*Pass
 Threads   : Checking for CIFS blocked threads.......................... Pass
 UM_Client : Checking the connectivity to usermapper servers............ Pass
 UM_Server : Checking the usermapper server database....................*Pass
 UnsupOS   : Checking for unsupported client network operating systems.. Pass
 UnsupProto: Checking for unsupported client network protocols.......... Pass
 VC        : Checking the configuration of Virus Checker servers........ Pass
 WINS      : Checking the connectivity to WINS servers.................. Fail
Component FTPDS :
 FS_Type   : Checking if all file systems are in the DIR3 format........ Pass
 FTPD      : Checking the configuration of FTPD......................... Fail
 NIS       : Checking the connectivity to the NIS servers............... Pass
 NS        : Checking the naming services configuration................. Fail
 NTP       : Checking the connectivity to the NTP servers............... Fail
 SSL       : Checking the configuration of SSL applications............. Pass

NB: a result with a * means that some tests were not executed. Use -full to
run them.
----------------------------------------------------------------------------
Total : 12 errors, 14 warnings
------------------------------------Checks----------------------------------
----------------------------------------------------------------------------

Last Modified: April 05, 2010 12:30 pm
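The "Total : N errors, M warnings" summary lines at the end of each report are convenient hooks for monitoring scripts. A minimal sketch (a hypothetical helper, not part of the eNAS toolset) that scrapes those counts from captured server_checkup output:

```python
import re

def checkup_totals(output: str):
    """Extract (errors, warnings) pairs from server_checkup 'Total' lines."""
    # Each summary line looks like: "Total : 12 errors, 14 warnings"
    return [(int(e), int(w))
            for e, w in re.findall(r"Total\s*:\s*(\d+) errors?, (\d+) warnings?",
                                   output)]

sample = """\
----------------------------------------------------------------------------
Total : 12 errors, 14 warnings
----------------------------------------------------------------------------
"""
print(checkup_totals(sample))  # [(12, 14)]
```

A wrapper script could, for example, page an operator whenever any reported error count is non-zero.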
server_cifs

Manages the CIFS configuration for the specified Data Movers or Virtual Data Movers (VDMs).

SYNOPSIS
--------
server_cifs {
server_cifs manages the CIFS configuration for the specified Data Movers or Virtual Data Movers (VDMs).
deleted at any time. It is recommended that IP interfaces always be specified. VDMs do not have default CIFS servers.

[,local_users]
Enables local user support, which allows the creation of a limited number of local user accounts on the CIFS server. When this command executes, type and confirm a password that is assigned to the local Administrator account on the CIFS server. In addition to the Administrator account, a Guest account is also created. The Guest account is disabled by default.

The Administrator account password must be changed before the Administrator can log in to the CIFS server. After initial creation of the stand-alone server, the local_users option resets the local Administrator account password. The password can only be reset if it has not been changed through a Windows client. If the password has already been changed through Windows, the reset is refused.

[-comment
[,netbios=
[,hidden={y|n}]
By default, the
and command values must be used. For example, for a disjoint namespace, you must always specify the fully qualified domain name (FQDN) with the computer name when joining a CIFS server to a domain, that is, dm112-cge0.emc.com, not just dm112-cge0.

Caution: Time services must be synchronized using server_date.

[,ou=
set.

[,dialect=
Updates the attributes and their CIFS names for COMPAT file systems. For every file system, CIFS maintains certain attributes for which there are no NFS equivalents. Updating CIFS attributes updates file attributes and CIFS names by searching the subdirectories of the defined share or path, generating a listing of Microsoft client filenames (M8.3 and M256), and converting them to a format that CIFS supports. It is not necessary to use this command for DIR3 file systems.

Options include:

[mindirsize=
considered. Any size specified smaller than 64 KB is ignored. SMB hash files are generated only if they are missing or obsolete. Hash file generation is asynchronous, so the command returns immediately. Use -info or check the system event log to determine whether the request has completed.

-smbhash -hashdel
The optional list of event categories is:
- service: Generate service events
- task: Generate task events
- access: Generate SMB Hash access events

-smbhash -service {enable | disable}
Enables or disables the SMB hash generation service (the service is started by default). If the CIFS service is started, this command takes effect immediately. If CIFS is not running, this command is executed at the next "cifs start".

-smbhash -cleanup
EXAMPLE #1
----------
To display the number and names of open files on server_2, type:

$ server_cifs server_2 -o audit,full

AUDIT Ctx=0xdffcc404, ref=2, Client(fm-main07B60004) Port=36654/139
NS40_1[BRCSLAB] on if=cge0_new
CurrentDC 0xceeab604=W2K3PHYAD
Proto=NT1, Arch=UNKNOWN, RemBufsz=0xfefb, LocBufsz=0xffff, popupMsg=1
0 FNN in FNNlist NbUsr=1 NbCnx=0
Uid=0x3f NTcred(0xcf156a04 RC=1 NTLM Capa=0x401) BRCSLAB\gustavo CHECKER
AUDIT Ctx=0xde05cc04, ref=2, XP Client(BRCSBARREGL1C) Port=1329/445
NS40_1[BRCSLAB] on if=cge0_new
CurrentDC 0xceeab604=W2K3PHYAD
Proto=NT1, Arch=Win2K, RemBufsz=0xffff, LocBufsz=0xffff, popupMsg=1
0 FNN in FNNlist NbUsr=1 NbCnx=2
Uid=0x3f NTcred(0xceeabc04 RC=3 NTLMSSP Capa=0x11001) BRCSLAB\gustavo CHECKER
Cnxp(0xceeaae04), Name=IPC$, cUid=0x3f Tid=0x3f, Ref=1, Aborted=0
readOnly=0, umask=22, opened files/dirs=0
Cnxp(0xde4e3204), Name=gustavo, cUid=0x3f Tid=0x41, Ref=1, Aborted=0
readOnly=0, umask=22, opened files/dirs=2
Fid=64, FNN=0x1b0648f0(FREE,0x0,0), FOF=0x0 DIR=\
Notify commands received:
Event=0x17, wt=0, curSize=0x0, maxSize=0x20, buffer=0x0
Tid=0x41, Pid=0xb84, Mid=0xec0, Uid=0x3f, size=0x20
Fid=73, FNN=0x1b019ed0(FREE,0x0,0), FOF=0xdf2ae504 (CHECK)
FILE=\New Wordpad Document.doc

EXAMPLE #2
----------
To configure CIFS service on server_2 with a NetBIOS name of dm110-cge0, in the
NT4 domain NASDOCS, with a NetBIOS alias of dm110-cge0a1, hiding the NetBIOS
name in the Network Neighborhood, with the interface for CIFS service as cge0,
the WINS server as 172.24.102.25, and with the comment string EMC Celerra, type:

$ server_cifs server_2 -add netbios=dm110-cge0,domain=NASDOCS,alias=dm110-cge0a1,hidden=y,interface=cge0,wins=172.24.102.25 -comment "EMC Celerra"

server_2 : done

EXAMPLE #3
----------
To enable the home directory on server_2, type:

$ server_cifs server_2 -option homedir

server_2 : done

EXAMPLE #4
----------
To add the WINS servers, 172.24.103.25 and 172.24.102.25, type:

$ server_cifs server_2 -add
wins=172.24.103.25,wins=172.24.102.25

server_2 : done

EXAMPLE #5
----------
To rename the NetBIOS name from dm110-cge0 to dm112-cge0, type:

$ server_cifs server_2 -rename -netbios dm110-cge0 dm112-cge0

server_2 : done
EXAMPLE #6
----------
To display the CIFS configuration for NT4 with Internal Usermapper, type:

$ server_cifs server_2

server_2 :
256 Cifs threads started
Security mode = NT
Max protocol = NT1
I18N mode = UNICODE
Home Directory Shares ENABLED, map=/.etc/homedir
Usermapper auto broadcast enabled
Usermapper[0] = [127.0.0.1] state:active port:14640 (auto discovered)
Default WINS servers = 172.24.103.25:172.24.102.25
Enabled interfaces: (All interfaces are enabled)
Disabled interfaces: (No interface disabled)

DOMAIN NASDOCS RC=3
SID=S-1-5-15-99589f8d-9aa3a5f-338728a8-ffffffff
>DC=WINSERVER1(172.24.102.66) ref=2 time=0 ms

CIFS Server DM112-CGE0[NASDOCS] RC=2 (Hidden)
Alias(es): DM110-CGE0A1
Comment=EMC Celerra
if=cge0 l=172.24.102.242 b=172.24.102.255 mac=0:60:16:4:35:4f
 wins=172.24.102.25
Password change interval: 0 minutes

Where:
Value                     Definition
Cifs threads started      Number of CIFS threads used when the CIFS service
                          was started.
Security mode             User authorization mechanism used by the Data Mover.
Max protocol              Maximum dialect supported by the security mode.
I18N mode                 I18N mode (unicode or ASCII).
Home Directory Shares     Whether Home Directory shares are enabled.
map                       Home directory used by the Data Mover.
Usermapper auto broadcast Usermapper is using its broadcast mechanism to
enabled                   discover its servers. This only displays when the
                          mechanism is active. It is disabled when you manually
                          set the Usermapper server addresses.
Usermapper                IP address of the servers running the Usermapper
                          service.
state                     Current state of Usermapper.
Default WINS servers      Addresses of the default WINS servers.
Enabled interfaces        Data Mover's enabled interfaces.
Disabled interfaces       Data Mover's disabled interfaces.
Unused Interfaces         Interfaces not currently used by the Data Mover.
RC                        Reference count indicating the number of internal
                          objects (such as client contexts) using the CIFS
                          server.
SID                       Security ID of the domain.
DC                        Domain controllers used by the Data Mover.
                          Depending on the number of DCs in the domain, this
                          list may be large.
ref                       Number of internal objects using the Domain
                          Controller.
time                      Domain Controller response time.
Aliases                   Alternate NetBIOS names assigned to the CIFS server
                          configuration.
if                        Interfaces used by the CIFS server.
Password change interval  The amount of time between password changes.
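Most fields in this listing are plain "key = value" lines, so the output is easy to post-process. A minimal sketch (a hypothetical helper, not part of the eNAS toolset) that collects those fields from captured server_cifs output into a dictionary:

```python
def parse_cifs_config(output: str) -> dict:
    """Collect the 'key = value' fields of a server_cifs listing."""
    fields = {}
    for line in output.splitlines():
        if " = " in line:
            key, _, value = line.partition(" = ")
            fields[key.strip()] = value.strip()
    return fields

sample = """\
256 Cifs threads started
Security mode = NT
Max protocol = NT1
I18N mode = UNICODE
Default WINS servers = 172.24.103.25:172.24.102.25
"""
cfg = parse_cifs_config(sample)
print(cfg["Security mode"], cfg["Max protocol"])  # NT NT1
```

A configuration-audit script could compare such dictionaries across Data Movers, for example to flag servers whose WINS server list has drifted.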
EXAMPLE #7
----------
To display the CIFS configuration for NT4, type:

$ server_cifs server_2

server_2 :
256 Cifs threads started
Security mode = NT
Max protocol = NT1
I18N mode = UNICODE
Home Directory Shares ENABLED, map=/.etc/homedir
Usermapper auto broadcast suspended
Usermapper[0] = [172.24.102.20] state:available
Default WINS servers = 172.24.103.25:172.24.102.25
Enabled interfaces: (All interfaces are enabled)
Disabled interfaces: (No interface disabled)

DOMAIN NASDOCS RC=3
SID=S-1-5-15-99589f8d-9aa3a5f-338728a8-ffffffff
>DC=WINSERVER1(172.24.102.66) ref=2 time=0 ms

CIFS Server DM112-CGE0[NASDOCS] RC=2 (Hidden)
Alias(es): DM110-CGE0A1
Comment=EMC Celerra
if=cge0 l=172.24.102.242 b=172.24.102.255 mac=0:60:16:4:35:4f
 wins=172.24.102.25
Password change interval: 0 minutes

EXAMPLE #8
----------
To add a Windows server using the compname dm112-cge0, in the Active Directory
domain nasdocs.emc.com, with a NetBIOS alias of dm112-cge0a1, hiding the
NetBIOS name in the Network Neighborhood, with the interface for CIFS service
as cge0, the WINS servers as 172.24.102.25 and 172.24.103.25, in the DNS domain
nasdocs.emc.com, and with the comment string EMC Celerra, type:

$ server_cifs server_2 -add compname=dm112-cge0,domain=nasdocs.emc.com,alias=dm112-cge0a1,hidden=y,interface=cge0,wins=172.24.102.25:172.24.103.25,dns=nasdocs.emc.com -comment "EMC Celerra"

server_2 : done

EXAMPLE #9
----------
To join dm112-cge0 into the Active Directory domain nasdocs.emc.com, using the
Administrator account, and to add this server to the Engineering\Computers
organizational unit, type:

$ server_cifs server_2 -Join compname=dm112-cge0,domain=nasdocs.emc.com,admin=administrator,ou="ou=Computers:ou=Engineering"

server_2 : Enter Password:********
done

EXAMPLE #10
-----------
To add the NFS service to the CIFS server in order to make it possible for NFS
users to access the Windows KDC, type:

$ server_cifs server_2 -Join
compname=dm112-cge0,domain=nasdocs.emc.com,admin=administrator -option addservice=nfs

server_2 : Enter Password:********
done

EXAMPLE #11
-----------
To enable the cge1 interface, type:

$ server_cifs server_2 -Enable cge1

server_2 : done

EXAMPLE #12
-----------
To display CIFS information for a Data Mover in a Windows domain with internal
usermapper, type:

$ server_cifs server_2

server_2 :
256 Cifs threads started
Security mode = NT
Max protocol = NT1
I18N mode = UNICODE
Home Directory Shares ENABLED, map=/.etc/homedir
Usermapper auto broadcast enabled
Usermapper[0] = [127.0.0.1] state:active (auto discovered)
Default WINS servers = 172.24.103.25:172.24.102.25
Enabled interfaces: (All interfaces are enabled)
Disabled interfaces: (No interface disabled)
Unused Interface(s):
 if=cge1 l=172.24.102.243 b=172.24.102.255 mac=0:60:16:4:35:4e

DOMAIN NASDOCS FQDN=nasdocs.emc.com SITE=Default-First-Site-Name RC=3
SID=S-1-5-15-99589f8d-9aa3a5f-338728a8-ffffffff
>DC=WINSERVER1(172.24.102.66) ref=3 time=1 ms (Closest Site)

CIFS Server DM112-CGE0[NASDOCS] RC=2 (Hidden)
Alias(es): DM112-CGEA1
Full computer name=dm112-cge0.nasdocs.emc.com realm=NASDOCS.EMC.COM
Comment=EMC Celerra
if=cge0 l=172.24.102.242 b=172.24.102.255 mac=0:60:16:4:35:4f
 wins=172.24.102.25:172.24.103.25
FQDN=dm112-cge0.nasdocs.emc.com (Updated to DNS)
Password change interval: 30 minutes
Last password change: Thu Oct 27 15:59:17 2005
Password versions: 2

EXAMPLE #13
-----------
To display CIFS information for a Data Mover in a Windows domain, type:

$ server_cifs server_2

server_2 :
256 Cifs threads started
Security mode = NT
Max protocol = NT1
I18N mode = UNICODE
Home Directory Shares ENABLED, map=/.etc/homedir
Usermapper auto broadcast suspended
Usermapper[0] = [172.24.102.20] state:available
Default WINS servers = 172.24.103.25:172.24.102.25
Enabled interfaces: (All interfaces are enabled)
Disabled interfaces: (No interface disabled)
Unused Interface(s):
 if=cge1 l=172.24.102.243 b=172.24.102.255 mac=0:60:16:4:35:4e

DOMAIN NASDOCS FQDN=nasdocs.emc.com SITE=Default-First-Site-Name RC=3
SID=S-1-5-15-99589f8d-9aa3a5f-338728a8-ffffffff
>DC=WINSERVER1(172.24.102.66) ref=3 time=1 ms (Closest Site)

CIFS Server DM112-CGE0[NASDOCS] RC=2 (Hidden)
Alias(es): DM112-CGEA1
Full computer name=dm112-cge0.nasdocs.emc.com realm=NASDOCS.EMC.COM
Comment=EMC Celerra
if=cge0 l=172.24.102.242 b=172.24.102.255 mac=0:60:16:4:35:4f
 wins=172.24.102.25:172.24.103.25
FQDN=dm112-cge0.nasdocs.emc.com (Updated to DNS)
Password change interval: 30 minutes
Last password change: Thu Oct 27 16:29:21 2005
Password versions: 3, 2

EXAMPLE #14
-----------
To display CIFS information for a Data Mover when CIFS service is not started,
type:

$ server_cifs server_2

server_2 :
Cifs NOT started
Security mode = NT
Max protocol = NT1
I18N mode = UNICODE
Home Directory Shares ENABLED, map=/.etc/homedir
Usermapper auto broadcast suspended
Usermapper[0] = [172.24.102.20] state:available
Default WINS servers = 172.24.103.25:172.24.102.25
Enabled interfaces: (All interfaces are enabled)
Disabled interfaces: (No interface disabled)
Unused Interface(s):
 if=cge1 l=172.24.102.243 b=172.24.102.255 mac=0:60:16:4:35:4e

CIFS Server DM112-CGE0[NASDOCS] RC=2 (Hidden)
Alias(es): DM112-CGEA1
Full computer name=dm112-cge0.nasdocs.emc.com realm=NASDOCS.EMC.COM
Comment=EMC Celerra
if=cge0 l=172.24.102.242 b=172.24.102.255 mac=0:60:16:4:35:4f
 wins=172.24.102.25:172.24.103.25
FQDN=dm112-cge0.nasdocs.emc.com (Updated to DNS)
Password change interval: 30 minutes
Last password change: Thu Oct 27 16:29:21 2005
Password versions: 3, 2

EXAMPLE #15
-----------
To add a Windows server named dm112-cge0, in the Active Directory domain
nasdocs.emc.com, with the interface for CIFS service as cge0, and enable local
users support, type:

$ server_cifs server_2 -add compname=dm112-cge0,domain=nasdocs.emc.com,interface=cge0,local_users
server_2 :
Enter Password:********
Enter Password Again:********
done

EXAMPLE #16
-----------
To set a security mode to NT for a Data Mover, type:

$ server_cifs server_2 -add security=NT

server_2 : done

EXAMPLE #17
-----------
To disable a CIFS interface, type:

$ server_cifs server_2 -Disable cge1

server_2 : done

EXAMPLE #18
-----------
To display CIFS audit information for a Data Mover, type:

$ server_cifs server_2 -option audit

server_2 :
|||| AUDIT Ctx=0xad3d4820, ref=1, W2K3 Client(WINSERVER1) Port=1638/139
||| DM112-CGE0[NASDOCS] on if=cge0
||| CurrentDC 0xad407620=WINSERVER1
||| Proto=NT1, Arch=Win2K, RemBufsz=0xffff, LocBufsz=0xffff
||| 0 FNN in FNNlist NbUsr=1 NbCnx=1
||| Uid=0x3f NTcred(0xad406a20 RC=2 KERBEROS Capa=0x2) NASDOCS\administrator
|| Cnxp(0xad3d5420), Name=IPC$, cUid=0x3f Tid=0x3f, Ref=1, Aborted=0
| readOnly=0, umask=22, opened files/dirs=1
|||| AUDIT Ctx=0xad43c020, ref=1, W2K3 Client(172.24.102.67) Port=1099/445
||| DM112-CGE0[NASDOCS] on if=cge0
||| CurrentDC 0xad407620=WINSERVER1
||| Proto=NT1, Arch=Win2K, RemBufsz=0xffff, LocBufsz=0xffff
||| 0 FNN in FNNlist NbUsr=1 NbCnx=1
||| Uid=0x3f NTcred(0xad362c20 RC=2 KERBEROS Capa=0x2) NASDOCS\user1
|| Cnxp(0xaec21020), Name=IPC$, cUid=0x3f Tid=0x3f, Ref=1, Aborted=0
| readOnly=0, umask=22, opened files/dirs=2

Where:
Value        Definition
Ctx          Address in memory of the Stream Context.
ref          Reference counter of components using this context at this time.
Port         The client port and the Data Mover port used in the current TCP
             connection.
CurrentDC    Address of the Domain Controller that is currently used.
Proto        Dialect level that is currently used.
Arch         Type of the client OS.
RemBufsz     Maximum buffer size negotiated by the client.
LocBufsz     Maximum buffer size negotiated by the Data Mover.
FNN/FNNlist  Number of blocked files that have not yet been checked by Virus
             Checker.
NbUsr        Number of sessions connected to the stream context (TCP
             connection).
NbCnx        Number of connections to shares for this TCP connection.
Uid/NTcred   User ID (this number is not related to the UNIX UID used to
             create a file), the credential address, and the type of
             authentication.
Cnxp/Name    Share connection address and the name of the share the user is
             connecting to.
cUid         ID of the user who opened the connection first.
Tid          Tree ID (a number that represents the share connection in any
             protocol request).
Aborted      Status of the connection.
readOnly     Whether the share connection is read-only.
umask        A user file-creation mask.
opened files/dirs  Number of files or directories opened on this share
connection.

EXAMPLE #19
-----------
To unjoin the computer dm112-cge0 from the nasdocs.emc.com domain, type:

$ server_cifs server_2 -Unjoin compname=dm112-cge0,domain=nasdocs.emc.com,admin=administrator

server_2 : Enter Password:********
done

EXAMPLE #20
-----------
To delete the WINS servers 172.24.102.25 and 172.24.103.25, type:

$ server_cifs server_2 -delete wins=172.24.102.25,wins=172.24.103.25

server_2 : done

EXAMPLE #21
-----------
To delete the NetBIOS name dm112-cge0, type:

$ server_cifs server_2 -delete netbios=dm112-cge0

server_2 : done

EXAMPLE #22
-----------
To delete the compname dm112-cge0, type:

$ server_cifs server_2 -delete compname=dm112-cge0

server_2 : done

EXAMPLE #23
-----------
To delete the usermapper 172.24.102.20, type:

$ server_cifs server_2 -delete usrmapper=172.24.102.20

server_2 : done

EXAMPLE #24
-----------
To add and join a Windows server in disjoint DNS and Windows domains, type:

$ server_cifs server_2 -add compname=dm112-cge0,domain=nasdocs.emc.com,interface=cge0,dns=eng.emc.com -comment "EMC Celerra"

$ server_cifs server_2 -Join compname=dm112-cge0.eng.emc.com,domain=nasdocs.emc.com,admin=Administrator

EXAMPLE #25
-----------
To add a Windows server using a delegated account from a trusted domain, type:

$ server_cifs server_2 -Join compname=dm112-cge0,domain=nasdocs.emc.com,admin=delegateduser@it.emc.com

server_2 : Enter Password:********
done

EXAMPLE #26
-----------
To add a Windows server in the Active Directory domain using a pre-created
computer account, type:
$ server_cifs server_2 -Join compname=dm112-cge0,domain=nasdocs.emc.com,admin=administrator -option reuse

server_2 : Enter Password:********
done

EXAMPLE #27
-----------
To update the directory /ufs1/users with a new minimum directory size of 8192,
type:

$ server_cifs server_2 -update /ufs1/users mindirsize=8192

server_2 : done

EXAMPLE #28
-----------
To migrate all SIDs in the ACL database for file system, ufs1, from the
server_2 : done

EXAMPLE #33
-----------
To display a summary of SMB statistics, type:

$ server_cifs server_2 -stats -summary

server_2 :
State info:
Open connection Open files
              2          2
SMB total requests:
totalAllSmb totalSmb totalTrans2Smb totalTransNTSmb
      10038     6593           3437               8

EXAMPLE #34
-----------
To display all non-zero CIFS statistics, type:

$ server_cifs server_2 -stats

server_2 :
SMB statistics:
proc         ncalls %totcalls maxTime ms/call
Close          1305      7.96   46.21    2.16
Rename            2      0.01    0.81    0.50
Trans           314      1.91    0.77    0.08
Echo             21      0.13    0.01    0.00
ReadX           231      1.41    0.03    0.00
WriteX         3697     22.54   39.96    0.98
Trans2Prim     9375     57.16   34.27    0.46
TreeDisco        10      0.06    0.06    0.00
NegProt          29      0.18    0.42    0.24
SessSetupX       47      0.29   60.55    5.81
UserLogoffX       9      0.05    0.01    0.00
TreeConnectX     13      0.08    0.39    0.23
TransNT           8      0.05    0.01    0.00
CreateNTX      1338      8.16   47.11    0.81
CancelNT          1      0.01    0.03    0.00
Trans2 SMBs:
proc         ncalls %totcalls maxTime ms/call
FindFirst        22      0.23    0.22    0.09
QFsInfo        3154     33.65    0.08    0.05
QPathInfo      1113     11.87    6.73    0.15
QFileInfo      2077     22.16    0.04    0.02
SetFileInfo    3007     32.08   34.26    1.28
NT SMBs:
proc         ncalls %totcalls maxTime ms/call
NotifyChange      8    100.00    0.01    0.00
Performance info:
Read      Re/s Write    Wr/s   All Ops/sec
 231 231000.00  3697 1021.27 25783 1575.40
State info:
Open connection Open files
              2          2
Shadow info:
Reads Writes Splits Extinsert Truncates
    0      0      0         0         0
SMB total requests:
totalAllSmb totalSmb totalTrans2Smb totalTransNTSmb (unsupported)
      25783    16400           9375               8           2
Where:
Value      Definition
proc       Name of CIFS requests received.
ncalls     Number of requests received.
%totcalls  Percentage of this type of request compared to all requests.
maxTime    Maximum amount of time used.
ms/call    Average time in milliseconds taken to service calls.
failures   Number of times the call has failed.
Read       Total number of read operations.
Re/s       Number of read operations per second.
Write      Total number of write operations.
Wr/s       Number of write operations per second.

EXAMPLE #35
-----------
To reset to zero the values for all SMB statistics, type:

$ server_cifs server_2 -stats -zero

server_2 : done

EXAMPLE #36
-----------
To configure CIFS service in a language that uses multibyte characters, type:

$ server_cifs server_2 -add compname=
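The statistics tables follow a fixed "proc ncalls %totcalls maxTime ms/call" row shape, so captured -stats output can be turned into structured data for trending. A minimal sketch (a hypothetical helper, not part of the eNAS toolset):

```python
def parse_smb_stats(table: str) -> dict:
    """Parse rows of a 'proc ncalls %totcalls maxTime ms/call' table."""
    rows = {}
    for line in table.splitlines():
        parts = line.split()
        # A data row is a request name followed by four numeric columns;
        # the header row ("proc ncalls ...") fails the isdigit() check.
        if len(parts) == 5 and parts[1].isdigit():
            name, ncalls, tot, mx, ms = parts
            rows[name] = {"ncalls": int(ncalls), "%totcalls": float(tot),
                          "maxTime": float(mx), "ms/call": float(ms)}
    return rows

sample = """\
proc ncalls %totcalls maxTime ms/call
Close 1305 7.96 46.21 2.16
WriteX 3697 22.54 39.96 0.98
"""
stats = parse_smb_stats(sample)
print(stats["Close"]["ncalls"])  # 1305
```

Sampling this before and after a workload (with -stats -zero in between) gives per-request-type deltas without hand-reading the tables.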
server_cifssupport

Provides support services for CIFS users.

SYNOPSIS
--------
server_cifssupport {
The -acl option displays the access control list (ACL) of files, directories, or shares in plain text form.

The -cred option generates a credential containing all groups to which a user belongs, including local groups, without the user being connected to a CIFS server. This allows you to verify whether users' SIDs are being correctly mapped to UNIX UIDs and GIDs, and to troubleshoot user access control issues.

The -pingdc option checks the network connectivity between a CIFS server and a domain controller, and then verifies that the CIFS server can access and use the following domain controller services:

* IPC$ share logon
* Secure Channel when verifying domain users during NT LAN Manager (NTLM) authentication
* Local Security Authority (LSA) pipe information when mapping Windows SIDs to UNIX UIDs and GIDs
* SAMR (Remote Security Account Manager) pipe when merging a user's UNIX and Windows groups together to create a credential
* Trusted domain information
* Privilege names for internationalization: pingdc

The -secmap option manages the secure mapping (secmap) cache. Secmap contains all mappings between SIDs and UIDs/GIDs used by a Data Mover or Virtual Data Mover (VDM). The Data Mover permanently caches all mappings it receives from any mapping mechanism (local files, NIS, iPlanet, Active Directory, and Usermapper) in the secmap database, making the response to subsequent mapping requests faster and less susceptible to network problems. Reverse mapping provides better quota support.

ACCESS RIGHT OPTIONS
--------------------
-accessright {-name
[-build [-admin
The -standalone option specifies the stand-alone CIFS server to use when rebuilding a user credential.

Note: If no CIFS server is specified, the system uses the default CIFS server, which uses all interfaces not assigned to other CIFS servers on the Data Mover.

PINGDC OPTIONS
--------------
-pingdc {-netbios
-secmap -update {-name
Path : /ufs1/test/test.txt Allowed mask : 0x301ff Action : List Folder / Read data Action : Create Files / Write data Action : Create Folders / Append Data Action : Read Extended Attributes Action : Write Extended Attributes Action : Traverse Folder / Execute File Action : Delete Subfolders and Files Action : Read Attributes Action : Write Attributes Action : Delete Action : Read Permissions EXAMPLE #3 ---------- To display user access rights to a file for user1 with access-checking policy UNIX, type: $ server_cifssupport server_2 -accessright -name user1 -domain NASDOCS -path /ufs1/test/test.txt -policy unix server_2 : done ACCOUNT GENERAL INFORMATIONS Name : user1 Domain : NASDOCS Path : /ufs1/test/test.txt Allowed mask : 0x20089 Action : List Folder / Read data Action : Read Extended Attributes Action : Read Attributes Action : Read Permissions EXAMPLE #4 ---------- To rebuild a credential for user1 to a file using an administrative account, type: $ server_cifssupport server_2 -accessright -name user1 -domain NASDOCS -path /ufs1/test/test.txt -build -admin administrator server_2 : Enter Password:******* done ACCOUNT GENERAL INFORMATIONS Name : user1 Domain : NASDOCS Path : /ufs1/test/test.txt Allowed mask : 0x200a9 Action : List Folder / Read data Action : Read Extended Attributes Action : Traverse Folder / Execute File Action : Read Attributes Action : Read Permissions EXAMPLE #5 ---------- To display the verbose ACL information of a file, type: $ server_cifssupport server_2 -acl -path /ufs1/test/test.txt -verbose server_2 : done ACL DUMP REPORT Path : /ufs1/test/test.txt UID : 32770 GID : 32797
Rights : rw-r--r-- acl ID : 0x4 acl size : 174 owner SID : S-1-5-20-220 group SID : S-1-5-15-b8e641e2-33f0942d-8f03a08f-201 DACL Owner : USER 32770 S-1-5-15-b8e641e2-33f0942d-8f03a08f-1f4 Access : ALLOWED 0x0 0x1f01ff RWXPDO Rights : List Folder / Read data Create Files / Write data Create Folders / Append Data Read Extended Attributes Write Extended Attributes Traverse Folder / Execute File Delete Subfolders and Files Read Attributes Write Attributes Delete Read Permissions Change Permissions Take Ownership Synchronize Owner : USER 32771 S-1-5-15-b8e641e2-33f0942d-8f03a08f-a59 Access : ALLOWED 0x0 0x1200a9 R-X--- Rights : List Folder / Read data Read Extended Attributes Traverse Folder / Execute File Read Attributes Read Permissions Synchronize EXAMPLE #6 ---------- To display the access control level of a share, type: $ server_cifssupport server_2 -acl -share ufs1 server_2 : done ACL DUMP REPORT Share : ufs1 UID : 0 GID : 1 Rights : rwxr-xr-x EXAMPLE #7 ---------- To generate a credential for user1, type: $ server_cifssupport server_2 -cred -name user1 -domain NASDOCS server_2 : done ACCOUNT GENERAL INFORMATIONS Name : user1 Domain : NASDOCS Primary SID : S-1-5-15-b8e641e2-33f0942d-8f03a08f-a59 UID : 32771 GID : 32768 Authentification : KERBEROS Credential capability : 0x2 Privileges : 0x8 System privileges : 0x2 Default Options : 0x2 NT administrator : False Backup administrator : False
Backup : False NT credential capability : 0x2 ACCOUNT GROUPS INFORMATIONS Type UNIX ID Name Domain SID NT 32797 S-1-5-15-b8e641e2-33f0942d-8f03a08f-201 NT 32798 S-1-5-15-b8e641e2-33f0942d-8f03a08f-e45 NT 4294967294 S-1-1-0 NT 4294967294 S-1-5-2 NT 4294967294 S-1-5-b NT 2151678497 S-1-5-20-221 UNIX 32797 UNIX 32798 UNIX 4294967294 UNIX 2151678497 EXAMPLE #8 ---------- To rebuild a user credential including the users universal groups for a user using SID, type: $ server_cifssupport server_2 -cred -sid S-1-5-15-b8e641e2-33f0942d-8f03a08f-1f4 -build -ldap -compname dm102-cge0 server_2 : done ACCOUNT GENERAL INFORMATIONS Name : Domain : NASDOCS Server : dm102-cge0 Primary SID : S-1-5-15-b8e641e2-33f0942d-8f03a08f-1f4 UID : 32770 GID : 32768 Authentification : NTLM Credential capability : 0x0 Privileges : 0x7f System privileges : 0x1 Default Options : 0xe NT administrator : True Backup administrator : True Backup : False NT credential capability : 0x0 ACCOUNT GROUPS INFORMATIONS Type UNIX ID Name Domain SID NT 32794 Group Policy Cre NASDOCS S-1-5-15-b8e641e2-33f0942d-8f03a08f-208 NT 32795 Schema Admins NASDOCS S-1-5-15-b8e641e2-33f0942d-8f03a08f-206 NT 32796 Enterprise Admin NASDOCS S-1-5-15-b8e641e2-33f0942d-8f03a08f-207 NT 32797 Domain Users NASDOCS S-1-5-15-b8e641e2-33f0942d-8f03a08f-201 NT 32793 Domain Admins NASDOCS S-1-5-15-b8e641e2-33f0942d-8f03a08f-200 NT 4294967294 Everyone S-1-1-0 NT 4294967294 NETWORK NT AUTHORITY S-1-5-2 NT 4294967294 ANONYMOUS LOGON NT AUTHORITY S-1-5-7 NT 2151678496 Administrators BUILTIN S-1-5-20-220 NT 2151678497 Users BUILTIN S-1-5-20-221 NT 1 UNIX GID=0x1 &ap S-1-5-12-2-1 UNIX 32794 UNIX 32795 UNIX 32796 UNIX 32797 UNIX 32793
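A side note on the group listings above: the UNIX ID 4294967294 shown for well-known SIDs such as Everyone (S-1-1-0) and the NT AUTHORITY entries is the 32-bit value 0xFFFFFFFE (that is, -2), evidently a placeholder rather than a real UID/GID mapping. The hex form is easy to confirm with plain shell, outside the eNAS CLI:

```shell
# 4294967294 is the 32-bit value 0xFFFFFFFE (-2 in two's complement),
# the value the listings above show for well-known SIDs that have no
# real UID/GID mapping. Plain shell check; not an eNAS command.
hex=$(printf '0x%X' 4294967294)
echo "$hex"   # prints: 0xFFFFFFFE
```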
EXAMPLE #9 ---------- To check the network connectivity for the CIFS server with netbios dm102-cge0, type: $ server_cifssupport server_2 -pingdc -netbios dm102-cge0 server_2 : done PINGDC GENERAL INFORMATIONS DC SERVER: Netbios name : NASDOCSDC CIFS SERVER : Compname : dm102-cge0 Domain : nasdocs.emc.com EXAMPLE #10 ----------- To check the network connectivity between the domain controller and the CIFS server with compname dm102-cge0, type: $ server_cifssupport server_2 -pingdc -compname dm102-cge0 -dc NASDOCSDC -verbose server_2 : done PINGDC GENERAL INFORMATIONS DC SERVER: Netbios name : NASDOCSDC CIFS SERVER : Compname : dm102-cge0 Domain : nasdocs.emc.com EXAMPLE #11 ----------- To display the secmap mapping entries, type: $ server_cifssupport server_2 -secmap -list server_2 : done SECMAP USER MAPPING TABLE UID Origin Date Name SID 32772 usermapper Tue Sep 18 19:08:40 2007 NASDOCS\user2 S-1-5-15-b8e641e2-33f0942d-8f03a08f-452 32771 usermapper Tue Sep 18 17:56:53 2007 NASDOCS\user1 S-1-5-15-b8e641e2-33f0942d-8f03a08f-a59 32770 usermapper Sun Sep 16 07:50:39 2007 NASDOCS\Administrator S-1-5-15-b8e641e2-33f0942d-8f03a08f-1f4 SECMAP GROUP MAPPING TABLE GID Origin Date Name SID 32793 usermapper Wed Sep 12 14:16:18 2007 NASDOCS\Domain Admins S-1-5-15-b8e641e2-33f0942d-8f03a08f-200 32797 usermapper Sun Sep 16 07:50:40 2007 NASDOCS\Domain Users S-1-5-15-b8e641e2-33f0942d-8f03a08f-201 32799 usermapper Mon Sep 17 19:13:16 2007 NASDOCS\Domain Guests S-1-5-15-b8e641e2-33f0942d-8f03a08f-202 32800 usermapper Mon Sep 17 19:13:22 2007 NASDOCS\Domain Computers S-1-5-15-b8e641e2-33f0942d-8f03a08f-203 32795 usermapper Sun Sep 16 07:50:40 2007 NASDOCS\Schema Admins S-1-5-15-b8e641e2-33f0942d-8f03a08f-206
32796 usermapper Sun Sep 16 07:50:40 2007 NASDOCS\Enterprise Admins S-1-5-15-b8e641e2-33f0942d-8f03a08f-207 32794 usermapper Sun Sep 16 07:50:40 2007 NASDOCS\Group Policy Creator Owners S-1-5-15-b8e641e2-33f0942d-8f03a08f-208 32798 usermapper Mon Sep 17 19:13:15 2007 NASDOCS\CERTSVC_DCOM_ACCESS S-1-5-15-b8e641e2-33f0942d-8f03a08f-e45 32801 usermapper Tue Sep 18 19:08:41 2007 NASDOCS\NASDOCS Group S-1-5-15-b8e641e2-33f0942d-8f03a08f-45b EXAMPLE #12 ----------- To display the secmap mapping entry for a user user1 in a domain NASDOCS, type: $ server_cifssupport server_2 -secmap -list -name user1 -domain NASDOCS server_2 : done SECMAP USER MAPPING TABLE UID Origin Date Name SID 32771 usermapper Tue Sep 18 17:56:53 2007 NASDOCS\user1 S-1-5-15-b8e641e2-33f0942d-8f03a08f-a59 EXAMPLE #13 ----------- To display the secmap mapping entry for a user with UID 32771, type: $ server_cifssupport server_2 -secmap -list -uid 32771 server_2 : done SECMAP USER MAPPING TABLE UID Origin Date Name SID 32771 usermapper Tue Sep 18 17:56:53 2007 NASDOCS\user1 S-1-5-15-b8e641e2-33f0942d-8f03a08f-a59 EXAMPLE #14 ----------- To create the secmap mapping entry for user3 in a domain NASDOCS, type $ server_cifssupport server_2 -secmap -create -name user3 -domain NASDOCS server_2 : done SECMAP USER MAPPING TABLE UID Origin Date Name SID 32773 usermapper Tue Sep 18 19:21:59 2007 NASDOCS\user3 S-1-5-15-b8e641e2-33f0942d-8f03a08f-a3d EXAMPLE #15 ----------- To check the secmap mapping for user1 in a domain NASDOCS, type: $ server_cifssupport server_2 -secmap -verify -name user1 -domain NASDOCS server_2 : done EXAMPLE #16 ----------- To update the secmap mapping entry for a user using SID, type: $ server_cifssupport server_2 -secmap -update -sid S-1-5-15-b8e641e2-33f0942d-8f03a08f-a3d server_2 : done
EXAMPLE #17
-----------
To delete the secmap mapping entry for user3, type:

$ server_cifssupport server_2 -secmap -delete -name user3 -domain NASDOCS
server_2 : done

EXAMPLE #18
-----------
To display current secmap status, type:

$ server_cifssupport server_2 -secmap -report
server_2 : done

SECMAP GENERAL INFORMATIONS
Name        : server_2
State       : Enabled
Fs          : /
Used nodes  : 12
Used blocks : 8192

SECMAP MAPPED DOMAIN
Name     SID
NASDOCS  S-1-5-15-b8e641e2-33f0942d-8f03a08f-ffffffff

EXAMPLE #19
-----------
To export the secmap mapping entries to the display, type:

$ server_cifssupport server_2 -secmap -export
server_2 : done

SECMAP MAPPING RECORDS
S-1-5-15-b8e641e2-33f0942d-8f03a08f-200:2:96:8019:8019:NASDOCS\Domain Admins
S-1-5-15-b8e641e2-33f0942d-8f03a08f-201:2:96:801d:801d:NASDOCS\Domain Users
S-1-5-15-b8e641e2-33f0942d-8f03a08f-202:2:96:801f:801f:NASDOCS\Domain Guests
S-1-5-15-b8e641e2-33f0942d-8f03a08f-203:2:96:8020:8020:NASDOCS\Domain Computers
S-1-5-15-b8e641e2-33f0942d-8f03a08f-206:2:96:801b:801b:NASDOCS\Schema Admins
S-1-5-15-b8e641e2-33f0942d-8f03a08f-207:2:96:801c:801c:NASDOCS\Enterprise Admins
S-1-5-15-b8e641e2-33f0942d-8f03a08f-208:2:96:801a:801a:NASDOCS\Group Policy Creator Owners
S-1-5-15-b8e641e2-33f0942d-8f03a08f-e45:2:96:801e:801e:NASDOCS\CERTSVC_DCOM_ACCESS
S-1-5-15-b8e641e2-33f0942d-8f03a08f-452:1:96:8004:8000:NASDOCS\user2
S-1-5-15-b8e641e2-33f0942d-8f03a08f-a59:1:96:8003:8000:NASDOCS\user1
S-1-5-15-b8e641e2-33f0942d-8f03a08f-45b:2:96:8021:8021:NASDOCS\NASDOCS Group
S-1-5-15-b8e641e2-33f0942d-8f03a08f-1f4:1:96:8002:8000:NASDOCS\Administrator

EXAMPLE #20
-----------
To export the secmap mapping entries to a file, type:

$ server_cifssupport server_2 -secmap -export -file exportfile.txt
server_2 : done

EXAMPLE #21
-----------
To import the secmap mapping entries from a file, type:

$ server_cifssupport server_2 -secmap -import -file exportfile.txt
server_2 : Secmap import in progress : #
done ------------------------------------------------ Last Modified: April 06, 2011 3:00 pm
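The colon-separated records printed by -secmap -export (Example #19) can be post-processed with standard shell tools. The sketch below assumes the field layout inferred from the samples above (SID : type : flags : id : primary GID : name, with hexadecimal IDs and type 1 = user, 2 = group); the flags field (96 in the samples) is not documented here.

```shell
# Parse one record from "server_cifssupport ... -secmap -export".
# Assumed layout (inferred from Examples #7, #12, and #19):
#   SID:type:flags:id:primary-gid:DOMAIN\name
# type 1 = user, type 2 = group; id and primary-gid are hexadecimal.
record='S-1-5-15-b8e641e2-33f0942d-8f03a08f-a59:1:96:8003:8000:NASDOCS\user1'

IFS=: read -r sid type flags hexid hexgid name <<<"$record"

uid=$(printf '%d' "0x$hexid")    # 0x8003 -> 32771, matching Example #12
gid=$(printf '%d' "0x$hexgid")   # 0x8000 -> 32768
echo "$name: type=$type uid=$uid gid=$gid"
```

Applied line by line to a file saved with -export -file (Example #20), the same read recovers the UID/GID pairs without touching the Data Mover.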
server_cpu

Performs an orderly, timed, or immediate halt or reboot of a Data Mover.

SYNOPSIS
--------
server_cpu {
Value Definition
----- ----------
0     Reset
1     DOS booted
2     SIB failed
3     Loaded
4     Configured
5     Contacted
7     Panicked
9     Reboot pending

EXAMPLE #2
----------
To immediately halt server_2, type:

$ server_cpu server_2 -halt now
server_2 : done

EXAMPLE #3
----------
To immediately reboot server_2, type:

$ server_cpu server_2 -reboot now
server_2 : done

EXAMPLE #4
----------
To monitor a reboot of server_2 that is set to take place in one minute, type:

$ server_cpu server_2 -reboot -monitor +1
server_2 : reboot in progress ........3.3.3.3.3.done

--------------------------------------
Last Modified: April 06, 2011 6:00 pm
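For post-processing logs that contain the numeric boot states listed above, a small lookup helper can translate codes to names. This is a hypothetical convenience script, not part of the eNAS CLI:

```shell
# Translate a Data Mover boot-state code (per the table above) to its
# name. Hypothetical helper for log post-processing; not a CLI option.
boot_state() {
    case "$1" in
        0) echo "Reset" ;;
        1) echo "DOS booted" ;;
        2) echo "SIB failed" ;;
        3) echo "Loaded" ;;
        4) echo "Configured" ;;
        5) echo "Contacted" ;;
        7) echo "Panicked" ;;
        9) echo "Reboot pending" ;;
        *) echo "Unknown ($1)" ;;
    esac
}

boot_state 5   # prints: Contacted
```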
server_date

Displays or sets the date and time for a Data Mover, and synchronizes time between a Data Mover and an external time source.

SYNOPSIS
--------
server_date {
Note: If -sync_delay is not typed, by default, the clock is set at Data Mover startup. The clock is synchronized after the first poll. -interval
$ server_date server_2
server_2 : Thu Jan 6 16:55:09 EST 2005

EXAMPLE #2
----------
To customize the display of the date and time on a Data Mover, type:

$ server_date server_2 "+%Y-%m-%d %H:%M:%S"
server_2 : 2005-01-06 16:55:58

EXAMPLE #3
----------
To start time synchronization between a Data Mover and an external source, type:

$ server_date server_2 timesvc start ntp -interval 06:00 172.24.102.20
server_2 : done

EXAMPLE #4
----------
To set the time service without slewing the clock, type:

$ server_date server_2 timesvc set ntp
server_2 : done

EXAMPLE #5
----------
To display statistical information, type:

$ server_date server_2 timesvc stats ntp
server_2 :
Time synchronization statistics since start:
hits= 2, misses= 0, first poll hit= 2, miss= 0
Last offset: 0 secs, 0 usecs
Current State: Running, connected, interval=360
Time sync hosts: 0 1 172.24.102.20

Where:
Value           Definition
-----           ----------
hits            Requests to the time server that received a reply.
misses          No reply was received from any of the time servers.
first poll hit  First poll hit, which sets the first official time for the Data Mover.
miss            First poll miss.
Last offset     Time difference between the time server and the Data Mover.
Current State   State of the time service.
Time sync hosts IP address of the time server.

EXAMPLE #6
----------
To update time synchronization between a Data Mover and an external source, type:

$ server_date server_2 timesvc update ntp
server_2 : done

EXAMPLE #7
----------
To get the time zone on the specified Data Mover, type:

$ server_date server_2 timezone
server_2 : Local timezone: GMT

EXAMPLE #8
----------
To set the time zone to Central Time for a Data Mover when you do not have to adjust for daylight saving time, type:

$ server_date server_2 timezone CST6
server_2 : done

EXAMPLE #9
----------
To set the time zone to Central Time and adjust for daylight saving time for a Data Mover, type:

$ server_date server_2 timezone CST6CDT5,M4.1.0,M10.5.0
server_2 : done

EXAMPLE #10
-----------
To set the time zone to Central Time and adjust for daylight saving time for a Data Mover using the Linux method, type:

$ server_date server_2 timezone -name America/Chicago
server_2 : done

EXAMPLE #11
-----------
To display the time service configuration for a Data Mover, type:

$ server_date server_2 timesvc
server_2 :
Timeservice State
 time: Thu Jan 6 17:04:28 EST 2005
 type: ntp
 sync delay: off
 interval: 360
 hosts: 172.24.102.20,

Where:
Value      Definition
-----      ----------
time       Date and time known to the Data Mover.
type       Time service protocol configured on the Data Mover.
sync delay Whether sync delay is on or off.
interval   Time interval between polls.
hosts      Specifies the IP address of the time server.

EXAMPLE #12
-----------
To stop time services for a Data Mover, type:

$ server_date server_2 timesvc stop ntp
server_2 : done

EXAMPLE #13
-----------
To delete the time service configuration for a Data Mover, type:

$ server_date server_2 timesvc delete ntp
server_2 : done

EXAMPLE #14
-----------
To set the time zone on a Data Mover to America/Los_Angeles, type:

$ server_date server_2 timezone -n America/Los_Angeles
server_2 : done

-------------------------------------------------
Last modified: Feb 20, 2013 4:36 pm.
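The TZ strings in Examples #8 and #9 follow the POSIX std/offset/dst rule format, where M4.1.0 means the first Sunday (day 0) of April and M10.5.0 means the last Sunday of October. Assuming GNU date on a Linux host, the effect of such a string can be previewed before applying it to a Data Mover:

```shell
# Preview a POSIX TZ string with GNU date (run on a Linux host, not
# through the eNAS CLI). CST6CDT5,M4.1.0,M10.5.0 is the string from
# Example #9: CST is UTC-6, CDT is UTC-5, and daylight time runs from
# the first Sunday of April to the last Sunday of October.
tz='CST6CDT5,M4.1.0,M10.5.0'
TZ=$tz date -d '2011-07-15 12:00:00' '+%Z %z'   # prints: CDT -0500
TZ=$tz date -d '2011-01-15 12:00:00' '+%Z %z'   # prints: CST -0600
```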
server_dbms

Enables backup and restore of databases, and displays database environment statistics.

SYNOPSIS
--------
server_dbms {
-db -restore [
Comment : Store the uid & gid ranges allocations for domains. Size : 8192 Modification time : Tue Jun 12 09:14:31 2007 Creation time : Tue Jun 12 09:14:31 2007 TABLE NAME : idxname Version : 1 Comment : Store the reverse mapping uid/gid to sid. Size : 8192 Modification time : Tue Jun 12 09:14:31 2007 Creation time : Tue Jun 12 09:14:31 2007 TABLE NAME : usrmapusrc Version : 1 Comment : Store the mapping SID -> (uid, name). Size : 8192 Modification time : Tue Jun 12 09:14:31 2007 Creation time : Tue Jun 12 09:14:31 2007 TABLE NAME : usrgrpmapnamesid Version : 1 Comment : Store the mapping user.domain -> SID. Size : 8192 Modification time : Tue Jun 12 09:14:31 2007 Creation time : Tue Jun 12 09:14:31 2007 TABLE NAME : usrmapgrpc Version : 1 Comment : Store the mapping SID -> (gid, name). Size : 8192 Modification time : Tue Jun 12 09:14:31 2007 Creation time : Tue Jun 12 09:14:31 2007 TABLE NAME : groupmapnamesid Version : 1 Comment : Store the mapping group.domain -> SID. Size : 8192 Modification time : Tue Jun 12 09:14:31 2007 Creation time : Tue Jun 12 09:14:31 2007 EXAMPLE #2 ---------- To display Secmap statistics, type: $ server_dbms server_3 -db -stats Secmap server_3 : done STATISTICS FOR DATABASE : Secmap TABLE : Mapping server_dbms magic 340322 Magic number. version 9 Table version number. metaflags 0 Metadata flags. nkeys 14 Number of unique keys. ndata 14 Number of data items. pagesize 4096 Page size. minkey 2 Minkey value. re_len 0 Fixed-length record length. re_pad 32 Fixed-length record pad. levels 1 Tree levels. int_pg 0 Internal pages. leaf_pg 1 Leaf pages. dup_pg 0 Duplicate pages. over_pg 0 Overflow pages. empty_pg 0 Empty pages. free 0 Pages on the free list. int_pgfree 0 Bytes free in internal pages. leaf_pgfree 2982 Bytes free in leaf pages. dup_pgfree 0 Bytes free in duplicate pages. over_pgfree 0 Bytes free in overflow pages. EXAMPLE #3 ---------- To display statistics of the VDM database environment, type:
$ server_dbms server_3 -service -stats STATISTICS FOR MODULE : LOG server_dbms magic 264584 Log file magic number. version 12 Log file version number. mode 0 Log file mode. lg_bsize 32768 Log buffer size. lg_size 5242880 Log file size. record 96 Records entered into the log. w_bytes 16001 Bytes to log. w_mbytes 0 Megabytes to log. wc_bytes 0 Bytes to log since checkpoint. wc_mbytes 0 Megabytes to log since checkpoint. wcount 31 Total writes to the log. wcount_fill 0 Overflow writes to the log. rcount 137 Total I/O reads from the log. scount 31 Total syncs to the log. region_wait 0 Region lock granted after wait. region_nowait 0 Region lock granted without wait. cur_file 3 Current log file number. cur_offset 16001 Current log file offset. disk_file 3 Known on disk log file number. disk_offset 16001 Known on disk log file offset. regsize 98304 Region size. maxcommitperflush 1 Max number of commits in a flush. mincommitperflush 1 Min number of commits in a flush. STATISTICS FOR MODULE : LOCK server_dbms last_id 91 Last allocated locker ID. cur_maxid 2147483647 Current maximum unused ID. maxlocks 1000 Maximum number of locks in table. maxlockers 1000 Maximum num of lockers in table. maxobjects 1000 Maximum num of objects in table. nmodes 9 Number of lock modes. nlocks 20 Current number of locks. maxnlocks 21 Maximum number of locks so far. nlockers 49 Current number of lockers. maxnlockers 49 Maximum number of lockers so far. nobjects 20 Current number of objects. maxnobjects 21 Maximum number of objects so far. nrequests 65711 Number of lock gets. nreleases 65691 Number of lock puts. nupgrade 0 Number of lock upgrades. ndowngrade 20 Number of lock downgrades. lock_wait 0 Lock conflicts w/ subsequent wait. lock_nowait 0 Lock conflicts w/o subsequent wait. ndeadlocks 0 Number of lock deadlocks. locktimeout 0 Lock timeout. nlocktimeouts 0 Number of lock timeouts. txntimeout 0 Transaction timeout. ntxntimeouts 0 Number of transaction timeouts. 
region_wait 0 Region lock granted after wait. region_nowait 0 Region lock granted without wait. regsize 352256 Region size. STATISTICS FOR MODULE : TXN server_dbms last_ckp 3/15945 lsn of the last checkpoint. time_ckp Fri Aug 3 09:38:36 2007 time of last checkpoint. last_txnid 0x8000001a last transaction id given out. maxtxns 20 maximum txns possible. naborts 0 number of aborted transactions. nbegins 26 number of begun transactions. ncommits 26 number of committed transactions. nactive 0 number of active transactions. nsnapshot 0 number of snapshot transactions. nrestores 0 number of restored transactions
after recovery. maxnactive 2 maximum active transactions. maxnsnapshot 0 maximum snapshot transactions. region_wait 0 Region lock granted after wait. region_nowait 0 Region lock granted without wait. regsize 16384 Region size. STATISTICS FOR MODULE : MPOOL server_dbms gbytes 0 Total cache size: GB. bytes 10487684 Total cache size: B. ncache 1 Number of caches. regsize 10493952 Region size. mmapsize 0 Maximum file size for mmap. maxopenfd 0 Maximum number of open fds. maxwrite 0 Maximum buffers to write. maxwrite_sleep 0 Sleep after writing max buffers. map 0 Pages from mapped files. cache_hit 65672 Pages found in the cache. cache_miss 36 Pages not found in the cache. page_create 0 Pages created in the cache. page_in 36 Pages read in. page_out 2 Pages written out. ro_evict 0 Clean pages forced from the cache. rw_evict 0 Dirty pages forced from the cache. page_trickle 0 Pages written by memp_trickle. pages 36 Total number of pages. page_clean 36 Clean pages. page_dirty 0 Dirty pages. hash_buckets 1031 Number of hash buckets. hash_searches 65744 Total hash chain searches. hash_longest 1 Longest hash chain searched. hash_examined 65672 Total hash entries searched. hash_nowait 0 Hash lock granted with nowait. hash_wait 0 Hash lock granted after wait. hash_max_nowait 0 Max hash lock granted with nowait hash_max_wait 0 Max hash lock granted after wait. region_nowait 0 Region lock granted with nowait. region_wait 0 Region lock granted after wait. mvcc_frozen 0 Buffers frozen. mvcc_thawed 0 Buffers thawed. mvcc_freed 0 Frozen buffers freed. alloc 123 Number of page allocations. alloc_buckets 0 Buckets checked during allocation. alloc_max_buckets 0 Max checked during allocation. alloc_pages 0 Pages checked during allocation. alloc_max_pages 0 Max checked during allocation. io_wait 0 Thread waited on buffer I/O. STATISTICS FOR MODULE : MUTEX server_dbms mutex_align 4 Mutex alignment. mutex_tas_spins 1 Mutex test-and-set spins. mutex_cnt 3254 Mutex count. 
mutex_free 1078 Available mutexes. mutex_inuse 2176 Mutexes in use. mutex_inuse_max 2176 Maximum mutexes ever in use. region_wait 0 Region lock granted after wait. region_nowait 0 Region lock granted without wait. regsize 278528 Region size. -------------------------------------- Last Modified: April 07, 2011 12:45 PM
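One figure worth deriving from the MPOOL statistics in Example #3 is the buffer-cache hit rate, cache_hit / (cache_hit + cache_miss). Recomputed here from the sample values above (a post-processing sketch, not a server_dbms option):

```shell
# Buffer-cache hit rate from the MPOOL statistics in Example #3:
# hit_rate = 100 * cache_hit / (cache_hit + cache_miss).
hit=65672   # cache_hit: pages found in the cache
miss=36     # cache_miss: pages not found in the cache
rate=$(awk -v h="$hit" -v m="$miss" 'BEGIN { printf "%.2f", 100 * h / (h + m) }')
echo "MPOOL cache hit rate: ${rate}%"   # prints: MPOOL cache hit rate: 99.95%
```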
server_devconfig

Queries, saves, and displays the configuration of SCSI over Fibre Channel devices connected to the specified Data Movers.

SYNOPSIS
--------
server_devconfig {
-list -scsi [
Scsi Device Table name addr type info gk01 c0t0l disk 5 020700000000APM00043807043 ggk01 c0t1l0 disk 5 020710001000APM00043807043 gk161 c16t1l1 disk 5 020711001100APM00043807043 For the VNX with a Symmetrix storage system, to list all the devices in the SCSI table, type: $ server_devconfig server_2 -list -scsi -all server_2 : Scsi Disk Table Director Port name addr num type num sts stor_id stor_dev root_disk c0t0l0 16C FA 0 On 000187940268 0000 root_disk c16t0l0 01C FA 0 On 000187940268 0000 root_ldisk c0t0l1 16C FA 0 On 000187940268 0001 root_ldisk c16t0l1 01C FA 0 On 000187940268 0001 d3 c0t1l0 16C FA 0 On 000187940268 0006 d3 c16t1l0 01C FA 0 On 000187940268 0006 d4 c0t1l1 16C FA 0 On 000187940268 0007 d4 c16t1l1 01C FA 0 On 000187940268 0007 d5 c0t1l2 16C FA 0 On 000187940268 0008 d5 c16t1l2 01C FA 0 On 000187940268 0008 d6 c0t1l3 16C FA 0 On 000187940268 0009 d6 c16t1l3 01C FA 0 On 000187940268 0009 d7 c0t1l4 16C FA 0 On 000187940268 000A d7 c16t1l4 01C FA 0 On 000187940268 000A <... removed ...> d377 c1t8l6 16C FA 0 On 000187940268 017C d377 c17t8l6 01C FA 0 On 000187940268 017C rootd378 c1t8l7 16C FA 0 On 000187940268 0180 rootd378 c17t8l7 01C FA 0 On 000187940268 0180 rootd379 c1t8l8 16C FA 0 On 000187940268 0181 rootd379 c17t8l8 01C FA 0 On 000187940268 0181 rootd380 c1t8l9 16C FA 0 On 000187940268 0182 rootd380 c17t8l9 01C FA 0 On 000187940268 0182 rootd381 c1t8l10 16C FA 0 On 000187940268 0183 rootd381 c17t8l10 01C FA 0 On 000187940268 0183 Scsi Device Table name addr type info gk01 c0t0l15 disk 56706817D480 000187940268 gk161 c16t0l15 disk 56706817D330 000187940268 Note: This is a partial display due to the length of the output. Where: Value Definition name A unique name for each device in the chain addr SCSI chain, target, and LUN information Director num Director number. This output is applicable for Symmetrix storage systems only. type Device type, as specified in the SCSI specification for peripherals. 
This output is applicable for Symmetrix storage systems only. Port num Port number. This output is applicable for Symmetrix storage systems only. sts Indicates the port status. Possible values are: On, Off, WD (write disabled), and NA. This output is applicable for Symmetrix storage systems only. stor_id Storage system ID
stor_dev Storage system device ID

EXAMPLE #2
----------
For the VNX, to list all SCSI-attached non-disk devices, type:

$ server_devconfig server_2 -list -scsi -nondisks
server_2 :
Scsi Device Table
 name    addr      type  info
 gk01    c0t0l0    disk  5 020700000000APM00043807043
 ggk01   c0t1l0    disk  5 020710001000APM00043807043
 gk161   c16t1l1   disk  5 020711001100APM00043807043

For the VNX with a Symmetrix storage system, to list all SCSI-attached non-disk devices, type:

$ server_devconfig server_2 -list -scsi -nondisks
server_2 :
Scsi Device Table
 name    addr      type  info
 gk01    c0t0l15   disk  56706817D480 000187940268
 gk161   c16t0l15  disk  56706817D330 000187940268

For info=56706817D480, the following breakdown applies:
5670  Symmcode
68    Last 2 digits in the Symm S/N
17D   Symm Device ID#
48    Symm SA #
0     SA Port # (0=a, 1=b)

EXAMPLE #3
----------
To rename a device, type:

$ server_devconfig server_2 -rename gk161 gk201
server_2 : done

EXAMPLE #4
----------
For the VNX, to discover SCSI disk devices, without saving them to the database table, type:

$ server_devconfig server_2 -probe -scsi -disks
server_2 :
SCSI disk devices :
chain= 0, scsi-0 stor_id= APM00043807043 celerra_id= APM000438070430000
 tid/lun= 0/0 type= disk sz= 11263 val= 1 info= DGC RAID 5 02070000000000NI
 tid/lun= 0/1 type= disk sz= 11263 val= 2 info= DGC RAID 5 02070100010001NI
 tid/lun= 0/2 type= disk sz= 2047 val= 3 info= DGC RAID 5 02070200020002NI
 tid/lun= 0/3 type= disk sz= 2047 val= 4 info= DGC RAID 5 02070300030003NI
 tid/lun= 0/4 type= disk sz= 2047 val= 5 info= DGC RAID 5 02070400040004NI
 tid/lun= 0/5 type= disk sz= 2047 val= 6 info= DGC RAID 5 02070500050005NI
 tid/lun= 1/0 type= disk sz= 245625 val= 7 info= DGC RAID 5 02071000100010NI
 tid/lun= 1/1 type= disk sz= 0 val= -5 info= DGC RAID 5 02071100110011NI
 tid/lun= 1/2 type= disk sz= 273709 val= 9 info= DGC RAID 5 02071200120012NI
 tid/lun= 1/3 type= disk sz= 0 val= -5 info= DGC RAID 5 02071300130013NI
 tid/lun= 1/4 type= disk sz= 273709 val= 10 info= DGC RAID 5 02071400140014NI
 tid/lun= 1/5 type= disk sz= 0 val= -5 info= DGC RAID 5 02071500150015NI
 tid/lun= 1/6 type= disk sz= 273709 val= 11 info= DGC RAID 5 02071600160016NI
 tid/lun= 1/7 type= disk sz= 0 val= -5 info= DGC RAID 5 02071700170017NI
 tid/lun= 1/8 type= disk sz= 273709 val= 12 info= DGC RAID 5 02071800180018NI
 tid/lun= 1/9 type= disk sz= 0 val= -5 info= DGC RAID 5 02071900190019NI
chain= 1, scsi-1 : no devices on chain chain= 2, scsi-2 : no devices on chain chain= 3, scsi-3 : no devices on chain chain= 4, scsi-4 : no devices on chain chain= 5, scsi-5 : no devices on chain chain= 6, scsi-6 : no devices on chain chain= 7, scsi-7 : no devices on chain chain= 8, scsi-8 : no devices on chain chain= 9, scsi-9 : no devices on chain chain= 10, scsi-10 : no devices on chain chain= 11, scsi-11 : no devices on chain chain= 12, scsi-12 : no devices on chain chain= 13, scsi-13 : no devices on chain chain= 14, scsi-14 : no devices on chain chain= 15, scsi-15 : no devices on chain For the VNX with a Symmetrix storage system, to discover SCSI disk devices, without saving them to the database table, type: $ server_devconfig server_2 -probe -scsi -disks server_2 : SCSI disk devices : chain= 0, scsi-0 : no devices on chain chain= 1, scsi-1 : no devices on chain chain= 2, scsi-2 stor_id= 000190102173 celerra_id= 0001901021730041 tid/lun= 0/0 type= disk sz= 11507 val= 1 info= 577273041291SI00041 tid/lun= 0/1 type= disk sz= 11507 val= 2 info= 577273042291SI00042 tid/lun= 1/0 type= disk sz= 11501 val= 3 info= 57727304F291SI0004F tid/lun= 1/1 type= disk sz= 11501 val= 4 info= 577273050291SI00050 tid/lun= 1/2 type= disk sz= 11501 val= 5 info= 577273051291SI00051 tid/lun= 1/3 type= disk sz= 11501 val= 6 info= 577273052291SI00052 tid/lun= 1/4 type= disk sz= 11501 val= 7 info= 577273053291SI00053 tid/lun= 1/5 type= disk sz= 11501 val= 8 info= 577273054291SI00054 tid/lun= 1/6 type= disk sz= 11501 val= 9 info= 577273055291SI00055 tid/lun= 1/7 type= disk sz= 11501 val= 10 info= 577273056291SI00056 tid/lun= 1/8 type= disk sz= 11501 val= 11 info= 577273057291SI00057 tid/lun= 1/9 type= disk sz= 11501 val= 12 info= 577273058291SI00058 tid/lun= 1/10 type= disk sz= 11501 val= 13 info= 577273059291SI00059 tid/lun= 1/11 type= disk sz= 11501 val= 14 info= 57727305A291SI0005A tid/lun= 1/12 type= disk sz= 11501 val= 15 info= 57727305B291SI0005B tid/lun= 1/13 type= disk sz= 11501 
val= 16 info= 57727305C291SI0005C tid/lun= 1/14 type= disk sz= 11501 val= 17 info= 57727305D291SI0005D tid/lun= 1/15 type= disk sz= 11501 val= 18 info= 57727305E291SI0005E tid/lun= 2/0 type= disk sz= 11501 val= 19 info= 57727305F291SI0005F tid/lun= 2/1 type= disk sz= 11501 val= 20 info= 577273060291SI00060 tid/lun= 2/2 type= disk sz= 11501 val= 21 info= 577273061291SI00061 <... removed ...> tid/lun= 7/6 type= disk sz= 11501 val= 105 info= 577273517291SI00517 tid/lun= 7/7 type= disk sz= 11501 val= 106 info= 577273518291SI00518 tid/lun= 7/8 type= disk sz= 11501 val= 107 info= 577273519291SI00519 tid/lun= 7/9 type= disk sz= 11501 val= 108 info= 57727351A291SI0051A tid/lun= 7/10 type= disk sz= 11501 val= 109 info= 57727351B291SI0051B tid/lun= 7/11 type= disk sz= 11501 val= 110 info= 57727351C291SI0051C tid/lun= 7/12 type= disk sz= 11501 val= 111 info= 57727351D291SI0051D tid/lun= 7/13 type= disk sz= 11501 val= 112 info= 57727351E291SI0051E tid/lun= 7/14 type= disk sz= 11501 val= 113 info= 57727351F291SI0051F tid/lun= 7/15 type= disk sz= 11501 val= 114 info= 577273520291SI00520 chain= 3, scsi-3 : no devices on chain chain= 4, scsi-4 : no devices on chain chain= 5, scsi-5 : no devices on chain chain= 6, scsi-6 : no devices on chain <... removed ...> chain= 18, scsi-18 stor_id= 000190102173 celerra_id= 0001901021730041 tid/lun= 0/0 type= disk sz= 11507 val= 1 info= 577273041201SI00041 tid/lun= 0/1 type= disk sz= 11507 val= 2 info= 577273042201SI00042 tid/lun= 1/0 type= disk sz= 11501 val= 3 info= 57727304F201SI0004F tid/lun= 1/1 type= disk sz= 11501 val= 4 info= 577273050201SI00050
 tid/lun= 1/2 type= disk sz= 11501 val= 5 info= 577273051201SI00051
 tid/lun= 1/3 type= disk sz= 11501 val= 6 info= 577273052201SI00052
 tid/lun= 1/4 type= disk sz= 11501 val= 7 info= 577273053201SI00053

Note: This is a partial listing due to the length of the output.

EXAMPLE #5
----------
To discover and save all SCSI devices, type:

$ server_devconfig server_2 -create -scsi -all
Discovering storage (may take several minutes)
server_2 : done

EXAMPLE #6
----------
To discover and save all non-disk devices, type:

$ server_devconfig server_2 -create -scsi -nondisks
Discovering storage (may take several minutes)
server_2 : done

EXAMPLE #7
----------
To save all SCSI devices with the discovery operation disabled, and display information regarding the progress, type:

$ server_devconfig ALL -create -scsi -all -discovery n -monitor y
server_2 : server_2: chain 0 .......... chain 16 ..... done
server_3 : server_3: chain 0 .......... chain 16 ..... done
server_4 : server_4: chain 0 .......... chain 16 ..... done
server_5 : server_5: chain 0 .......... chain 16 ..... done

--------------------------------------
Last Modified: April 07, 2011 03:25 pm
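The fixed-width info-field breakdown shown in Example #2 (for example, 56706817D480) can be scripted with plain shell substring expansion. A convenience sketch, not part of the eNAS CLI:

```shell
# Split a Symmetrix "info" value from the Scsi Device Table using the
# fixed-width layout shown in Example #2. Convenience sketch only.
info='56706817D480'
symmcode=${info:0:4}   # 5670  Symmcode
serial2=${info:4:2}    # 68    last 2 digits of the Symm serial number
device=${info:6:3}     # 17D   Symm device ID
sa=${info:9:2}         # 48    Symm SA number
port=${info:11:1}      # 0     SA port (0=a, 1=b)
echo "symm=$symmcode sn=..$serial2 dev=$device sa=$sa port=$port"
```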
server_df

Reports free and used disk space and inodes for mounted file systems on the specified Data Movers.

SYNOPSIS
--------
server_df {
avail       Amount of space in kilobytes available for the file system.
capacity    Percentage of capacity that is used.
Mounted on  Mount point of the file system.

EXAMPLE #2
----------
To display the amount of disk space and the number of used and available inodes on a Data Mover, type:

$ server_df server_2 -inode
server_2 :
Filesystem       inodes      used     avail capacity Mounted on
ufs1          131210494       140 131210354    0%    /ufs1
ufs4           25190398        10  25190388    0%    /nmfs1/ufs4
ufs2           25190398        11  25190387    0%    /nmfs1/ufs2
nmfs1          50380796        21  50380775    0%    /nmfs1
root_fs_common    21822        26     21796    0%    /.etc_common
root_fs_2        130942        66    130876    0%    /

EXAMPLE #3
----------
To display the amount of disk space and the number of used and available inodes on a file system, type:

$ server_df server_2 -inode ufs1
server_2 :
Filesystem       inodes      used     avail capacity Mounted on
ufs1          131210494       140 131210354    0%    /ufs

--------------------------------------
Last Modified: April 07, 2011 03:35 pm
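The capacity column is the used fraction of the total (used + avail). It can be recomputed from saved server_df output, for example with awk; a post-processing sketch, not a CLI option:

```shell
# Recompute the capacity column of server_df -inode output as
# 100 * used / (used + avail), using the sample row from Example #2.
# Post-processing over saved output; not a server_df option.
pct=$(awk '$1 == "ufs1" { printf "%d", ($3 * 100) / ($3 + $4) }' <<'EOF'
Filesystem       inodes      used     avail capacity Mounted on
ufs1          131210494       140 131210354       0% /ufs1
EOF
)
echo "ufs1 inode capacity: ${pct}%"   # prints: ufs1 inode capacity: 0%
```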
server_dns Manages the Domain Name System (DNS) lookup server configuration for the specified Data Movers. SYNOPSIS -------- server_dns {
DNS is running. prod.emc.com proto:udp server(s):172.10.20.10 EXAMPLE #3 ---------- To change the protocol to TCP from UDP, type: $ server_dns server_2 -protocol tcp prod.emc.com 172.10.20.10 server_2 : done EXAMPLE #4 ---------- To halt access to the DNS lookup servers, type: $ server_dns server_2 -option stop server_2 : done EXAMPLE #5 ---------- To flush the cache on a Data Mover, type: $ server_dns server_2 -option flush server_2 : done EXAMPLE #6 ---------- To dump the DNS cache, type: $ server_dns server_2 -option dump server_2 : DNS cache size for one record type: 64 DNS cache includes 6 item(s): dm102-cge0.nasdocs.emc.com Type:A TTL=184 s dataCount:1 172.24.102.202 (local subnet) --- winserver1.nasdocs.emc.com Type:A TTL=3258 s dataCount:1 172.24.103.60 --- _ldap._tcp.Default-First-Site-Name._sites.dc._msdcs.nasdocs.emc.com Type:SRV TTL=258 s dataCount:1 priority:0 weight:100 port:389 server:winserver1.nasdocs.emc.com --- _kerberos._tcp.Default-First-Site-Name._sites.dc._msdcs.nasdocs.emc.com Type:SRV TTL=258 s dataCount:1 priority:0 weight:100 port:88 server:winserver1.nasdocs.emc.com --- Expired item(s): 2 EXAMPLE #7 ---------- To delete the DNS lookup servers, type: $ server_dns server_2 -delete prod.emc.com server_2 : done --------------------------------------------------------- Last modified: May 12, 2011 9:30 am.
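The SRV entries in the cache dump above each carry a priority, weight, port, and target host; a client contacts the lowest-numbered priority first and, within a priority, prefers higher weights. A simplified sketch of that ordering (RFC 2782 actually load-balances randomly in proportion to weight; the extra hostnames here are hypothetical):

```shell
# Records are "priority weight port target", as in the SRV cache entries above.
# Lowest priority sorts first; within a priority, higher weight sorts first.
printf '%s\n' \
  '10 60 389 backupdc.example.com' \
  '0 100 389 winserver1.nasdocs.emc.com' \
  '0 40 389 altdc.example.com' |
  sort -k1,1n -k2,2nr | head -n 1
```

This selects the priority-0, weight-100 record for winserver1.nasdocs.emc.com, consistent with it being the preferred LDAP/Kerberos target in the dump above.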
server_export Exports file systems and manages access on the specified Data Movers for NFS and CIFS clients. SYNOPSIS -------- server_export {
-list
rw=
and then on the NIS servers netgroup database. If the client name does not exist in any case, then access is denied. Note: Netgroups are supported. The hosts and netgroup files can be created on the Control Station using your preferred method (for example, with an editor, or by copying from another node), then copied to the Data Mover. nosuid=
either a
* Encrypted: The server requires encrypted messages to access the share. * Access Based Enumeration (ABE): Only files and directories to which the user has read access are visible. * HASH: Indicates that the share supports hash generation for BranchCache retrieval. * Offline Caching Attributes (OC): The user MUST allow only manual caching for the files opened from this share by default. - OCAuto: The user MAY cache every file that it opens from this share. - OCVDO: The user MAY cache every file that it opens from this share. Also, the user MAY satisfy the file requests from its local cache. - OCNone: Indicates that no files or programs from the shared folder are available offline. SEE ALSO -------- Configuring NFS on VNX, Managing Volumes and File Systems for VNX Manually, server_checkup, and server_mount. EXAMPLE #1 ---------- To export a specific NFS entry, type: $ server_export server_2 -Protocol nfs /ufs1 server_2 : done EXAMPLE #2 ---------- To export an NFS entry and overwrite existing settings, type: $ server_export server_2 -Protocol nfs -ignore -option access=172.24.102.0/255.255.255.0,root=172.24.102.240 -comment "NFS Export for ufs1" /ufs1 server_2 : done EXAMPLE #3 ---------- To export NFS entry dir1, a subdirectory of the exported entry /ufs1 in a multilevel file system hierarchy, type: $ server_export server_2 -Protocol nfs /ufs1/dir1 server_2 : done EXAMPLE #4 ---------- To assign a name to an NFS export, type: $ server_export server_2 -Protocol nfs -name nasdocsfs /ufs1 server_2 : done EXAMPLE #5 ---------- To export an NFS entry using Kerberos authentication, type: $ server_export server_2 -Protocol nfs -option sec=krb5:ro,root=172.24.102.240,access=172.24.102.0/255.255.255.0 /ufs2 server_2 : done EXAMPLE #6 ---------- To export an NFS entry for NFSv4 only, type: $ server_export server_2 -Protocol nfs -option nfsv4only /ufs1
server_2 : done EXAMPLE #7 ---------- To list all NFS entries, type: $ server_export server_2 -Protocol nfs -list -all server_2 : export "/ufs2" sec=krb5 ro root=172.24.102.240 access=172.24.102.0/255.255.255.0 export "/ufs1" name="/nasdocsfs" access=172.24.102.0/255.255.255.0 root=172.24.102.240 nfsv4only comment="NFS Export for ufs1" export "/" anon=0 access=128.221.252.100:128.221.253.100:128.221.252.101:128.221.253.101 EXAMPLE #8 ---------- To list NFS entries for the specified path, type: $ server_export server_2 -list /ufs1 server_2 : export "/ufs1" name="/nasdocsfs" access=172.24.102.0/255.255.255.0 root=172.24.102.240 nfsv4only comment="NFS Export for ufs1" EXAMPLE #9 ---------- To temporarily unexport an NFS entry, type: $ server_export server_2 -Protocol nfs -unexport /ufs2 server_2 : done EXAMPLE #10 ----------- To export all NFS entries, type: $ server_export server_2 -Protocol nfs -all server_2 : done EXAMPLE #11 ----------- To export a specific NFS entry in a language that uses multibyte characters, type: $ server_export server_2 -Protocol nfs /
$ server_export server_2 -name ufs1 /ufs1 server_2 : done EXAMPLE #15 ----------- To create a CIFS share and overwrite existing settings, type: $ server_export server_2 -name ufs1 -ignore -option ro,umask=027,maxusr=200,netbios=dm112-cge0 -comment "CIFS share" /ufs1 server_2 : done EXAMPLE #16 ----------- To create a CIFS share in a language that uses multibyte characters, type: $ server_export server_2 -P cifs -name
sec Security mode for the file system. ro File system is to be exported as read-only. root IP address with root access. access Access is permitted for those IP addresses. share Entry to be shared. ro File system is to be shared as read-only. umask User creation mask. maxusr Maximum number of simultaneous users. netbios NetBIOS name for the share. comment Comment specified for the share. EXAMPLE #21 ----------- To permanently unexport all CIFS and NFS entries, type: $ server_export server_2 -unexport -perm -all server_2 : done EXAMPLE #22 ----------- To delete a CIFS share, type: $ server_export server_2 -unexport -name ufs1 -option netbios=dm112-cge0 server_2 : done EXAMPLE #23 ----------- To delete all CIFS shares, type: $ server_export server_2 -Protocol cifs -unexport -all server_2 : done EXAMPLE #24 ----------- To export a file system for NFS that specifies an IPv4 and IPv6 address, type: $ server_export server_2 -Protocol nfs -option access=172.24.108.10:[1080:0:0:0:8:800:200C:417A] /fs1 server_2 : done EXAMPLE #25 ----------- To export a file system for NFS that specifies two IPv6 addresses, type: $ server_export server_2 -Protocol nfs -option rw=[1080:0:0:0:8:80:200C:417A]:[1080:0:0:0:8:800:200C:417B] /fs1 server_2 : done EXAMPLE #26 ----------- To verify that the file system was exported, type: $ server_export server_2 -list /fs1 server_2 : export "/fs1" rw=[1080:0:0:0:8:80:200C:417A]:[1080:0:0:0:8:800:200C:417B] EXAMPLE #27 ----------- To export the fs42 file system of the VDM vdm1, type: $ server_export vdm1 -P nfs /fs42 done
EXAMPLE #28 ----------- To create a share foo on the server PALIC with HASH and ABE enabled, type: $ server_export server_3 -name foo -option netbios=PALIC,type=ABE:HASH /fs3/foo server_3 : done EXAMPLE #29 ----------- To change the attributes of this share to ABE only, type: $ server_export server_3 -name foo -option netbios=PALIC,type=ABE /fs3/foo server_3 : done EXAMPLE #30 ----------- To remove all the attributes, type: $ server_export server_3 -name foo -ignore -option netbios=PALIC,type=None /fs3/foo server_3 : done EXAMPLE #31 ----------- To view the attributes, type: $ server_export server_3 share "foo" "/fs3/foo" type=ABE:HASH umask=022 maxusr=4294967295 netbios=PALIC server_3 : done EXAMPLE #32 ----------- To create a share foo on the server PALIC with CA and ABE enabled, type: $ server_export server_3 -name foo -option netbios=PALIC,type=CA:ABE /fs3/foo server_3 : done EXAMPLE #33 ----------- To change attributes of the share foo to CA only, type: $ server_export server_3 -name foo -option netbios=PALIC,type=CA /fs3/foo server_3 : done EXAMPLE #34 ----------- To view the attributes, type: $ server_export server_3 share "foo" "/fs3/foo" type=CA umask=022 maxusr=4294967295 netbios=PALIC server_3 : done EXAMPLE #35 ----------- To create a share share10 accessible only through encrypted SMB messages, type:
$ server_export vdm1 -P cifs -name share10 -o type=Encrypted /fs42/protected_dir1 server_3 : done EXAMPLE #36 ----------- To export the NFS pathname "/users/gary" on Data Mover server_2 restricting setuid and setgid bit access for clients host10 and host11, type: $ server_export server_2 -Protocol nfs -option nosuid=host10:host11 /users/gary server_2 : done EXAMPLE #37 ----------- To export the NFS pathname "/production1" on all Data Movers restricting setuid and setgid bit access for client host123, type: $ server_export ALL -option nosuid=host123 /production1 server_2 : done EXAMPLE #38 ----------- To export the NFS pathname "/fs1" on Data Mover server_2 restricting setuid and setgid bit access for all clients except for 10.241.216.239, which is allowed root privileges in addition to setuid and setgid bit access, type: $ server_export server_2 -Protocol nfs -option root=10.241.216.239,nosuid=-10.241.216.239 /fs1 server_2 : done ------------------------------------------------------- Last Modified: November 20, 2012 11:55 a.m.
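Several of the exports above use access=network/netmask lists. The membership test they imply is the usual one: a client address belongs to the list when the client address ANDed with the mask equals the network. A minimal local sketch of that test, reusing the addresses from Example #2 (this is an illustration of the arithmetic, not the Data Mover's implementation):

```shell
#!/bin/sh
# Sketch of the subnet test implied by access=172.24.102.0/255.255.255.0:
# a client is allowed when (client & mask) == network.
to_int() {                      # dotted quad -> 32-bit integer
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}
client=172.24.102.240
net=172.24.102.0
mask=255.255.255.0
if [ $(( $(to_int $client) & $(to_int $mask) )) -eq "$(to_int $net)" ]; then
  echo allowed
else
  echo denied
fi
```

For 172.24.102.240 against 172.24.102.0/255.255.255.0 this prints "allowed".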
server_file Copies files between the Control Station and the specified Data Movers. SYNOPSIS -------- server_file {
server_fileresolve Starts, deletes, stops, checks, and displays the fileresolve service for the specified Data Mover. The fileresolve service facilitates inode-to-filename translation. This translation is required when an administrator monitors the fs.qtreeFile and fs.filesystem statistics. SYNOPSIS -------- server_fileresolve
server_fileresolve server_X -add
Paths are dropped Warning: Restart service to remove the cached entries of dropped paths. EXAMPLE #5 ---------- To look up multiple inodes within the same filesystem, type: $ server_fileresolve server_2 -lookup -filesystem ufs_0 -inode 61697,61670,61660 server_2 : Filesystem/QTree Inode Path ufs_0 61660 /server_2/ufs_0/dir00000/testdir/yYY_0000039425.tmp ufs_0 61670 /server_2/ufs_0/dir00000/testdir/kNt_0000028175.tmp ufs_0 61697 /server_2/ufs_0/dir00000/testdir/gwR_0000058176.tmp EXAMPLE #6 ---------- To look up multiple inodes within a Quota Tree, type: $ server_fileresolve server_2 -lookup -qtree dir00000 -inode 61697 server_2 : Filesystem/QTree Inode Path dir00000 61697 /server_2/ufs_0/dir00000/testdir/gwR_0000058176.tmp -------------------------------------- Date updated: June 04, 2012 12:15 p.m.
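server_fileresolve performs inode-to-pathname translation on the Data Mover. The same kind of lookup can be demonstrated on any local filesystem with stat(1) and find(1); this sketch assumes GNU stat for the -c format and is only an analogy, not the service's mechanism:

```shell
# Generic inode-to-pathname lookup on a local filesystem:
# read the inode number with stat, then resolve it back with find -inum.
dir=$(mktemp -d)
touch "$dir/example.tmp"
ino=$(stat -c '%i' "$dir/example.tmp")   # inode number of the file
find "$dir" -inum "$ino"                 # prints the path owning that inode
rm -r "$dir"
```

Like the service, find must scan the tree to map an inode back to a name, which is why the lookups above are scoped to a filesystem or quota tree.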
server_ftp Manages the FTP server configuration for the specified Data Movers. SYNOPSIS -------- server_ftp {
status. -modify Modifies the ftp daemon configuration. The ftp daemon has to be stopped to carry out the changes. The modifications are taken into account when the service is restarted. -controlport
Specifies the path of the file to be displayed on the welcome screen. For example, this file can display a login banner before the user is prompted for authentication data. By default, no welcome message is displayed. -motd
Note: These options are set on the server but are dependent on ftp client capabilities. Some client capabilities may be incompatible with server settings. Using FTP on VNX provides information on validating compatibility. -sslpersona {anonymous|default|
session timeout 900 seconds max session timeout 7200 seconds Security Options ============= sslpersona default sslprotocol default sslcipher default FTP over TLS explicit Options ---------------------------------------- sslcontrol SSL require for authentication ssldata allow SSL FTP over SSL implicit Options ----------------------------------------- sslcontrolport 990 ssldataport 989 EXAMPLE #2 ---------- To display the statistics of the ftp daemon, type: $ server_ftp server_2 -service -stats Login Type Successful Failed ========== ========== ======= Anonymous 10 0 Unix 3 2 CIFS 7 1 Throughput (MBytes/sec) Data transfers Count min average max ============== ===== ==== ======================= ===== Write Bin 10 10.00 19.00 20.00 Read Bin 0 ---- ---- ---- Write ASCII 2 1.00 1.50 2.00 Read ASCII 0 ---- ---- ---- SSL Write Bin 5 5.00 17.00 18.00 SSL Read Bin 15 7.00 25.00 35.00 SSL Write ASCII 0 ---- ---- ---- SSL Read ASCII 0 ---- ---- ---- Where: Value Definition Throughput (MBytes/sec) Throughput is calculated using the size of the file (MBytes) divided by the duration of the transfer (in seconds). average Average is the average of the throughputs (sum of the throughputs divided by the number of transfers). Data transfers Defines the type of transfer. Count Number of operations for a transfer type. min Minimum throughput observed for the transfer type. max Maximum throughput observed for the transfer type. EXAMPLE #3 ---------- To display the statistics of the ftp daemon with details, type: $ server_ftp server_2 -service -stats -all Commands Count
======== ===== USER 23 PASS 23 QUIT 23 PORT 45 EPRT 10 .... .... FEAT 23 SITE Commands Count ============= ===== UMASK 0 IDLE 10 CHMOD 0 HELP 0 BANDWIDTH 0 KEEPALIVE 10 PASV 56 OPTS Commands Count ============= ===== UTF8 10 Login Type Successful Failed ========== ========== ======= Anonymous 10 0 Unix 3 2 CIFS 7 1 Connections Count =========== ===== Non secure ---------- Control 10 Data 44 Explicit SSL ------------ Control Auth 3 Control 8 Data 20 Implicit SSL ------------ Control 0 Data 0 Throughput (MBytes/sec) Data transfers Count min average max ============== ===== ======== ==================== ======= Write Bin 10 10.00 19.00 20.00 Read Bin 0 ---- ---- ---- Write ASCII 2 1.00 1.50 2.00 Read ASCII 0 ---- ---- ---- SSL Write Bin 5 5.00 17.00 18.00 SSL Read Bin 15 7.00 25.00 35.00 SSL Write ASCII 0 ---- ---- ---- SSL Read ASCII 0 ---- ---- ---- Where: Value Definition Commands FTP protocol command name. Count Number of commands received by Data mover. SITE Commands Class of command in FTP protocol. OPTS Commands Class of command in FTP protocol.
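Per the definitions above, each transfer's throughput is the file size in MBytes divided by the transfer duration in seconds, and the average is the sum of those throughputs divided by the transfer count. A small sketch of that bookkeeping, with made-up sizes and durations (the figures are illustrative, not taken from the output above):

```shell
# Sketch of the throughput summary: per-transfer throughput = size_MB / dur_s;
# min, average, and max summarize the per-transfer values.
awk 'BEGIN {
  n = split("100 95 200", size)        # hypothetical transfer sizes (MBytes)
  split("5 5 10", dur)                 # hypothetical durations (seconds)
  for (i = 1; i <= n; i++) {
    t = size[i] / dur[i]
    sum += t
    if (i == 1 || t < min) min = t
    if (i == 1 || t > max) max = t
  }
  printf "min=%.2f average=%.2f max=%.2f\n", min, sum / n, max
}'
```

For the sample figures (20, 19, and 20 MBytes/sec) this prints min=19.00 average=19.67 max=20.00.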
EXAMPLE #4 ---------- To retrieve the status of the ftp daemon, type: $ server_ftp server_3 -service -status server_3 : done State : running EXAMPLE #5 ---------- To start the ftp daemon, type: $ server_ftp server_2 -service -start server_2 : done EXAMPLE #6 ---------- To stop the ftp daemon, type: $ server_ftp server_2 -service -stop server_2 : done EXAMPLE #7 ----------- To set the local tcp port for the control connections, type: $ server_ftp server_2 -modify -controlport 256 server_2 : done FTPD CONFIGURATION ================== State : stopped Control Port : 256 Data Port : 20 Default dir : / Home dir : disable Keepalive : 1 High watermark : 65536 Low watermark : 32768 Timeout : 900 Max timeout : 7200 Read size : 8192 Write size : 49152 Umask : 27 Max connection : 65535 SSL CONFIGURATION ================= Control channel mode : disable Data channel mode : disable Persona : default Protocol : default Cipher : default Control port : 990 Data port : 989 EXAMPLE #8 ---------- To set the local tcp port for active data connections, type: $ server_ftp server_2 -modify -dataport 257 server_2 : done
FTPD CONFIGURATION ================== State : stopped Control Port : 256 Data Port : 257 Default dir : / Home dir : disable Keepalive : 1 High watermark : 65536 Low watermark : 32768 Timeout : 900 Max timeout : 7200 Read size : 8192 Write size : 49152 Umask : 27 Max connection : 65535 SSL CONFIGURATION ================= Control channel mode : disable Data channel mode : disable Persona : default Protocol : default Cipher : default Control port : 990 Data port : 989 EXAMPLE #9 ---------- To change the default directory of a user when his home directory is not accessible, type: $ server_ftp server_2 -modify -defaultdir /big server_2 : done FTPD CONFIGURATION ================== State : stopped Control Port : 256 Data Port : 257 Default dir : /big Home dir : disable Keepalive : 1 High watermark : 65536 Low watermark : 32768 Timeout : 900 Max timeout : 7200 Read size : 8192 Write size : 49152 Umask : 27 Max connection : 65535 SSL CONFIGURATION ================= Control channel mode : disable Data channel mode : disable Persona : default Protocol : default Cipher : default Control port : 990 Data port : 989 EXAMPLE #10 ----------- To allow users access to their home directory tree, type: $ server_ftp server_2 -modify -homedir enable server_2 : done
FTPD CONFIGURATION ================== State : stopped Control Port : 256 Data Port : 257 Default dir : /big Home dir : enable Keepalive : 1 High watermark : 65536 Low watermark : 32768 Timeout : 900 Max timeout : 7200 Read size : 8192 Write size : 49152 Umask : 27 Max connection : 65535 SSL CONFIGURATION ================= Control channel mode : disable Data channel mode : disable Persona : default Protocol : default Cipher : default Control port : 990 Data port : 989 EXAMPLE #11 ----------- To restrict user access to their home directory tree, type: $ server_ftp server_2 -modify -homedir disable server_2 : done FTPD CONFIGURATION ================== State : stopped Control Port : 256 Data Port : 257 Default dir : /big Home dir : disable Keepalive : 1 High watermark : 65536 Low watermark : 32768 Timeout : 900 Max timeout : 7200 Read size : 8192 Write size : 49152 Umask : 27 Max connection : 65535 SSL CONFIGURATION ================= Control channel mode : disable Data channel mode : disable Persona : default Protocol : default Cipher : default Control port : 990 Data port : 989 EXAMPLE #12 ----------- To set the default umask for creating a file or a directory by means of the ftp daemon, type: $ server_ftp server_2 -modify -umask 077 server_2 : done
FTPD CONFIGURATION ================== State : stopped Control Port : 256 Data Port : 257 Default dir : /big Home dir : disable Keepalive : 1 High watermark : 65536 Low watermark : 32768 Timeout : 900 Max timeout : 7200 Read size : 8192 Write size : 49152 Umask : 77 Max connection : 65535 SSL CONFIGURATION ================= Control channel mode : disable Data channel mode : disable Persona : default Protocol : default Cipher : default Control port : 990 Data port : 989 EXAMPLE #13 ----------- To set the TCP keepalive for the ftp daemon, type: $ server_ftp server_2 -modify -keepalive 120 server_2 : done FTPD CONFIGURATION ================== State : stopped Control Port : 256 Data Port : 257 Default dir : /big Home dir : disable Keepalive : 120 High watermark : 65536 Low watermark : 32768 Timeout : 900 Max timeout : 7200 Read size : 8192 Write size : 49152 Umask : 77 Max connection : 65535 SSL CONFIGURATION ================= Control channel mode : disable Data channel mode : disable Persona : default Protocol : default Cipher : default Control port : 990 Data port : 989 EXAMPLE #14 ----------- To set the TCP highwatermark for the ftp daemon, type: $ server_ftp server_2 -modify -highwatermark 90112 server_2 : done
FTPD CONFIGURATION ================== State : stopped Control Port : 256 Data Port : 257 Default dir : /big Home dir : disable Keepalive : 120 High watermark : 90112 Low watermark : 32768 Timeout : 900 Max timeout : 7200 Read size : 8192 Write size : 49152 Umask : 77 Max connection : 65535 SSL CONFIGURATION ================= Control channel mode : disable Data channel mode : disable Persona : default Protocol : default Cipher : default Control port : 990 Data port : 989 EXAMPLE #15 ----------- To set the TCP lowwatermark for the ftp daemon, type: $ server_ftp server_2 -modify -lowwatermark 32768 server_2 : done FTPD CONFIGURATION ================== State : stopped Control Port : 256 Data Port : 257 Default dir : /big Home dir : disable Keepalive : 120 High watermark : 90112 Low watermark : 32768 Timeout : 900 Max timeout : 7200 Read size : 8192 Write size : 49152 Umask : 77 Max connection : 65535 SSL CONFIGURATION ================= Control channel mode : disable Data channel mode : disable Persona : default Protocol : default Cipher : default Control port : 990 Data port : 989 EXAMPLE #16 ----------- To restrict FTP server access to specific users, type: $ server_ftp server_2 -modify -deniedusers /.etc/mydeniedlist server_2 : done
FTPD CONFIGURATION ================== State : stopped Control Port : 256 Data Port : 257 Default dir : /big Home dir : disable Keepalive : 120 High watermark : 90112 Low watermark : 32768 Denied users conf file : /.etc/mydeniedlist Timeout : 900 Max timeout : 7200 Read size : 8192 Write size : 49152 Umask : 77 Max connection : 65535 SSL CONFIGURATION ================= Control channel mode : disable Data channel mode : disable Persona : default Protocol : default Cipher : default Control port : 990 Data port : 989 EXAMPLE #17 ----------- To set the path of the file displayed before the user logs in, type: $ server_ftp server_2 -modify -welcome /.etc/mywelcomefile server_2 : done FTPD CONFIGURATION ================== State : stopped Control Port : 256 Data Port : 257 Default dir : /big Home dir : disable Keepalive : 120 High watermark : 90112 Low watermark : 32768 Welcome file : /.etc/mywelcomefile Timeout : 900 Max timeout : 7200 Read size : 8192 Write size : 49152 Umask : 77 Max connection : 65535 SSL CONFIGURATION ================= Control channel mode : disable Data channel mode : disable Persona : default Protocol : default Cipher : default Control port : 990 Data port : 989 ----------------------------------------------------------------- Last Modified Date: April 12, 2011. Time: 11:20 am
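The Umask values shown in the configuration dumps above (27 for 027, 77 for 077) behave exactly like a shell umask: bits set in the mask are cleared from the permissions of files and directories created over FTP. A local demonstration of the masking itself (in a scratch directory, not involving the FTP server):

```shell
# umask 027 clears group-write and all other-bits; umask 077 clears all
# group and other bits. New files start from 666, new directories from 777.
dir=$(mktemp -d)
(
  cd "$dir"
  umask 027
  touch with_027; mkdir dir_027        # 666 & ~027 = 640, 777 & ~027 = 750
  umask 077
  touch with_077                       # 666 & ~077 = 600
  stat -c '%a %n' with_027 dir_027 with_077
)
rm -r "$dir"
```

This prints 640, 750, and 600 for the three entries, which is the effect the -umask option applies server-side.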
server_http Manages the HTTP configuration for independent services, such as VNX FileMover, for the specified Data Movers. SYNOPSIS -------- server_http {
If valid is entered, all users in the passwd file are allowed to use digest authentication. A comma-separated list of users can also be given. If no users are given, digest authentication is turned off. [-hosts
specified, displays the current HTTP configuration. -remove
protocol The level of SSL protocol used for the service. cipher The cipher suite the service negotiates to establish a secure connection with the client. EXAMPLE #2 ---------- To display statistical information about the HTTP protocol connection for the FileMover service, type: $ server_http server_2 -service dhsm -stats server_2 : done Statistics report for HTTPD facility DHSM : Thread activity Maximum in use count : 0 Connection IP filtering rejection count : 0 Request Authentication failure count : 0 SSL Handshake failure count : 0 EXAMPLE #3 ---------- To configure an HTTP protocol connection for FileMover using SSL, type: $ server_http server_2 -modify dhsm -ssl required server_2 : done EXAMPLE #4 ---------- To modify the threads option of the HTTP protocol connection for FileMover, type: $ server_http server_2 -modify dhsm -threads 40 server_2 : done DHSM FACILITY CONFIGURATION Service name : EMC File Mover service Comment : Service facility for getting DHSM attributes Active : False Port : 5080 Threads : 40 Max requests : 300 Timeout : 60 seconds ACCESS CONTROL Allowed IPs : any Authentication : digest, Realm : DHSM_Authorization Allowed user : nobody SSL CONFIGURATION Mode : OFF Persona : default Protocol : default Cipher : default EXAMPLE #5 ---------- To allow specific users to manage the HTTP protocol connection for FileMover, type: $ server_http server_2 -modify dhsm -users valid -hosts 10.240.12.146 server_2 : done EXAMPLE #6 ---------- To add specific users who can manage the existing HTTP protocol connection for FileMover, type:
$ server_http server_2 -append dhsm -users user1,user2,user3 server_2 : done EXAMPLE #7 ---------- To add a specific user who can manage the existing HTTP protocol connection for FileMover, type: $ server_http server_2 -append dhsm -users user4 -hosts 172.24.102.20,172.24.102.21 server_2 : done EXAMPLE #8 ---------- To remove the specified users and hosts so they can no longer manage the HTTP connection for FileMover, type: $ server_http server_2 -remove dhsm -users user1,user2 -hosts 10.240.12.146 server_2 : done --------------------------------------- Last Modified: April 12, 2011 12:45 pm
server_ifconfig Manages the network interface configuration for the specified Data Movers. SYNOPSIS -------- server_ifconfig {
When creating the first IPv6 interface with a global unicast address on a broadcast domain, the system automatically creates an associated IPv6 link-local interface. Similarly, when deleting the last remaining IPv6 interface on a broadcast domain, the system automatically deletes the associated IPv6 link-local interface. The down option can be specified for both IPv4 and IPv6. If specified, the network interface will be set to the down state; otherwise, the network interface is up by default. For CIFS users, when an interface is created, deleted, or marked up or down, use the server_setup command to stop and then restart the CIFS service in order to update the CIFS interface list.
FRONT-END OUTPUT ---------------- The network device name depends on the front end of the system (for example, NS series Data Movers, 514 Data Movers, 510 Data Movers, and so on) and the network device type. On NS series and 514 Data Movers, network device names display a prefix of cge, for example, cge0. 510 or earlier Data Movers display a prefix of ana or ace, for example, ana0 or ace0. Internal network devices on a Data Mover are displayed as el30, el31. EXAMPLE #1 ---------- To display parameters of all interfaces on a Data Mover, type: $ server_ifconfig server_2 -all server_2 : loop protocol=IP device=loop inet=127.0.0.1 netmask=255.0.0.0 broadcast=127.255.255.255 UP, loopback, mtu=32768, vlan=0, macaddr=0:0:0:0:0:0 netname=localhost cge0 protocol=IP device=cge0 inet=172.24.102.238 netmask=255.255.255.0 broadcast=172.24.102.255 UP, ethernet, mtu=1500, vlan=0, macaddr=0:60:16:4:29:87 el31 protocol=IP device=cge6 inet=128.221.253.2 netmask=255.255.255.0 broadcast=128.221.253.255 UP, ethernet, mtu=1500, vlan=0, macaddr=0:60:16:4:11:a6 netname=localhost el30 protocol=IP device=fxp0 inet=128.221.252.2 netmask=255.255.255.0 broadcast=128.221.252.255 UP, ethernet, mtu=1500, vlan=0, macaddr=8:0:1b:43:7e:b8 netname=localhost EXAMPLE #2 ---------- To create an IP interface for Gigabit Ethernet, type: $ server_ifconfig server_2 -create -Device cge1 -name cge1 -protocol IP 172.24.102.239 255.255.255.0 172.24.102.255 server_2 : done EXAMPLE #3 ---------- To create an interface for network device cge0 with an IPv6 address with a nondefault prefix length on server_2, type: $ server_ifconfig server_2 -create -Device cge0 -name cge0_int1 -protocol IP6 3ffe:0000:3c4d:0015:0435:0200:0300:ED20/48 server_2 : done EXAMPLE #4 ---------- To create an interface for network device cge0 with an IPv6 address on server_2, type: $ server_ifconfig server_2 -create -Device cge0 -name cge0_int1 -protocol IP6 3ffe:0000:3c4d:0015:0435:0200:0300:ED20 server_2 : done EXAMPLE #5 ---------- To verify
that the settings for the cge0_int1 interface for server_2 are correct, type: $ server_ifconfig server_2 cge0_int1
server_2 : cge0_int1 protocol=IP6 device=cge0 inet=3ffe:0:3c4d:15:435:200:300:ed20 prefix=48 UP, Ethernet, mtu=1500, vlan=0, macaddr=0:60:16:c:5:5 Note: The bold item in the output highlights the nondefault 48-bit prefix. EXAMPLE #6 ---------- To verify that the interface settings for server_2 are correct, type: $ server_ifconfig server_2 -all server_2 : el30 protocol=IP device=mge0 inet=128.221.252.2 netmask=255.255.255.0 broadcast=128.221.252.255 UP, Ethernet, mtu=1500, vlan=0, macaddr=0:60:16:d:30:b1 netname=localhost el31 protocol=IP device=mge1 inet=128.221.253.2 netmask=255.255.255.0 broadcast=128.221.253.255 UP, Ethernet, mtu=1500, vlan=0, macaddr=0:60:16:d:30:b2 netname=localhost loop6 protocol=IP6 device=loop inet=::1 prefix=128 UP, Loopback, mtu=32768, vlan=0, macaddr=0:0:0:0:0:0 netname=localhost loop protocol=IP device=loop inet=127.0.0.1 netmask=255.0.0.0 broadcast=127.255.255.255 UP, Loopback, mtu=32768, vlan=0, macaddr=0:0:0:0:0:0 netname=localhost cge0_int1 protocol=IP6 device=cge0 inet=3ffe:0:3c4d:15:435:200:300:ed20 prefix=64 UP, Ethernet, mtu=1500, vlan=0, macaddr=0:60:16:c:2:5 cge0_0000_ll protocol=IP6 device=cge0 inet=fe80::260:16ff:fe0c:205 prefix=64 UP, Ethernet, mtu=1500, vlan=0, macaddr=0:60:16:c:2:5 Note: The first bold item in the output highlights the default 64-bit prefix. The second and third bold items highlight the link-local name and address that are automatically generated when you configure a global address for cge0. The automatically created link-local interface name is made by concatenating the device name, the four-digit VLAN ID (0000 through 4094), and the suffix _ll. Note that the interface you configured with the IPv6 address 3ffe:0:3c4d:15:435:200:300:ed20 and the automatically created interface with the link-local address fe80::260:16ff:fe0c:205 share the same MAC address. The link-local address is derived from the MAC address.
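The link-local derivation mentioned in the note is the standard modified EUI-64 scheme: flip the universal/local bit of the first MAC octet and splice ff:fe between the two halves of the MAC address. A minimal sketch (leading-zero handling simplified) that reproduces fe80::260:16ff:fe0c:205 from the MAC 0:60:16:c:2:5 shown above:

```shell
# Modified EUI-64 derivation of an IPv6 link-local address from a MAC:
# flip bit 1 of the first octet, insert ff:fe in the middle, prepend fe80::.
mac="00:60:16:0c:02:05"                        # the cge0 MAC, zero-padded
IFS=: read -r o1 o2 o3 o4 o5 o6 <<EOF
$mac
EOF
o1=$(printf '%02x' $(( 0x$o1 ^ 0x02 )))        # flip the universal/local bit
printf 'fe80::%x:%xff:fe%02x:%x\n' \
  "0x$o1$o2" "0x$o3" "0x$o4" "0x$o5$o6"
```

This prints fe80::260:16ff:fe0c:205, matching the autogenerated cge0_0000_ll address in the listing.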
EXAMPLE #7 ---------- To verify that the interface settings for server_2 are correct, type: $ server_ifconfig server_2 -all server_2 : cge0_int2 protocol=IP device=cge0 inet=172.24.108.10 netmask=255.255.255.0 broadcast=172.24.108.255 UP, Ethernet, mtu=1500, vlan=0, macaddr=0:60:16:c:2:5 cge0_int1 protocol=IP6 device=cge0 inet=3ffe:0:3c4d:15:435:200:300:ed20 prefix=64 UP, Ethernet, mtu=1500, vlan=0, macaddr=0:60:16:c:2:5 cge0_0000_ll protocol=IP6 device=cge0 inet=fe80::260:16ff:fe0c:205 prefix=64 UP, Ethernet, mtu=1500, vlan=0, macaddr=0:60:16:c:2:5 el30 protocol=IP device=mge0 inet=128.221.252.2 netmask=255.255.255.0 broadcast=128.221.252.255 UP, Ethernet, mtu=1500, vlan=0, macaddr=0:60:16:d:30:b1 netname=localhost el31 protocol=IP device=mge1 inet=128.221.253.2 netmask=255.255.255.0 broadcast=128.221.253.255 UP, Ethernet, mtu=1500, vlan=0, macaddr=0:60:16:d:30:b2 netname=localhost loop6 protocol=IP6 device=loop inet=::1 prefix=128 UP, Loopback, mtu=32768, vlan=0, macaddr=0:0:0:0:0:0 netname=localhost
loop protocol=IP device=loop inet=127.0.0.1 netmask=255.0.0.0 broadcast=127.255.255.255 UP, Loopback, mtu=32768, vlan=0, macaddr=0:0:0:0:0:0 netname=localhost Note: The bold items in the output highlight the IPv4 interface, cge0_int2, and the IPv6 interface, cge0_int1. EXAMPLE #8 ---------- To disable an interface, type: $ server_ifconfig server_2 cge0_int2 down server_2 : done EXAMPLE #9 ---------- To enable an interface, type: $ server_ifconfig server_2 cge0_int2 up server_2 : done EXAMPLE #10 ----------- To reset the MTU for Gigabit Ethernet, type: $ server_ifconfig server_2 cge0_int2 mtu=9000 server_2 : done EXAMPLE #11 ----------- To set the ID for the Virtual LAN, type: $ server_ifconfig server_2 cge0_int1 vlan=40 server_2 : done EXAMPLE #12 ----------- To verify that the VLAN ID in the interface settings for server_2 are correct, type: $ server_ifconfig server_2 -all server_2 : cge0_int1 protocol=IP6 device=cge0 inet=3ffe:0:3c4d:15:435:200:300:ed20 prefix=64 UP, Ethernet, mtu=1500, vlan=40, macaddr=0:60:16:c:2:5 cge0_0040_ll protocol=IP6 device=cge0 inet=fe80::260:16ff:fe0c:205 prefix=64 UP, Ethernet, mtu=1500, vlan=40, macaddr=0:60:16:c:2:5 cge0_int2 protocol=IP device=cge0 inet=172.24.108.10 netmask=255.255.255.0 broadcast=172.24.108.255 UP, Ethernet, mtu=1500, vlan=20, macaddr=0:60:16:c:2:5 el30 protocol=IP device=mge0 inet=128.221.252.2 netmask=255.255.255.0 broadcast=128.221.252.255 UP, Ethernet, mtu=1500, vlan=0, macaddr=0:60:16:d:30:b1 netname=localhost el31 protocol=IP device=mge1 inet=128.221.253.2 netmask=255.255.255.0 broadcast=128.221.253.255 UP, Ethernet, mtu=1500, vlan=0, macaddr=0:60:16:d:30:b2 netname=localhost loop6 protocol=IP6 device=loop inet=::1 prefix=128 UP, Loopback, mtu=32768, vlan=0, macaddr=0:0:0:0:0:0 netname=localhost loop protocol=IP device=loop
inet=127.0.0.1 netmask=255.0.0.0 broadcast=127.255.255.255 UP, Loopback, mtu=32768, vlan=0, macaddr=0:0:0:0:0:0 netname=localhost Note: The bold items in the output highlight the VLAN tag. Note that the link-local address uses the VLAN tag as part of its name. EXAMPLE #13 ----------- To delete an IP interface, type: $ server_ifconfig server_2 -delete cge1_int2 server_2 : done Note: The autogenerated link local interfaces cannot be deleted. ---------------------------------------------------------------- Last modified: May 12, 2011 1:40 pm.
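IPv4 interfaces in the listings above carry a dotted netmask (netmask=255.255.255.0) while IPv6 interfaces carry a prefix length (prefix=64). Converting a dotted mask to its prefix length is just counting set bits; a small sketch, assuming the mask is already valid and contiguous:

```shell
# Count the set bits of a dotted-quad netmask to get the prefix length.
mask=255.255.255.0
prefix=0
for oct in $(printf '%s' "$mask" | tr '.' ' '); do
  while [ "$oct" -gt 0 ]; do
    prefix=$(( prefix + (oct & 1) ))   # add the low bit of this octet
    oct=$(( oct >> 1 ))
  done
done
echo "/$prefix"
```

For 255.255.255.0 this prints /24, the prefix-length equivalent of the mask used throughout the IPv4 examples.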
server_ip
Manages the IPv6 neighbor cache and route table for VNX.

SYNOPSIS
--------
server_ip {ALL|
server_2:
Address                   Link layer address  Interface     Type    State
fe80::204:23ff:fead:4fd4  0:4:23:ad:4f:d4     cge1_0000_ll  host    STALE
fe80::216:9cff:fe15:c00   0:16:9c:15:c:0      cge1_0000_ll  router  STALE
fe80::216:9cff:fe15:c00   0:16:9c:15:c:0      cge4_0000_ll  router  STALE
fe80::216:9cff:fe15:c00   0:16:9c:15:c:0      cge3_2998_ll  router  STALE
fe80::216:9cff:fe15:c00   0:16:9c:15:c:0      cge2_2442_ll  router  STALE
3ffe::1                   0:16:9c:15:c:10     cge3_0000_ll  router  REACHABLE

Where:

Value               Definition
Address             The neighbor IPv6 address.
Link layer address  The link layer address of the neighbor.
Interface           The name of the interface connecting to the neighbor.
Type                The type of neighbor, either host or router.
State               The state of the neighbor, such as REACHABLE, INCOMPLETE, STALE, DELAY, or PROBE.

EXAMPLE #2
----------
To view a list of neighbor cache entries for a specific IP address on the Data Mover server_2, type:

$ server_ip server_2 -neighbor -list fe80::216:9cff:fe15:c00

server_2:
Address                   Link layer address  Interface     Type    State
fe80::216:9cff:fe15:c00   0:16:9c:15:c:0      cge1_0000_ll  router  STALE
fe80::216:9cff:fe15:c00   0:16:9c:15:c:0      cge4_0000_ll  router  STALE
fe80::216:9cff:fe15:c00   0:16:9c:15:c:0      cge3_2998_ll  router  STALE
fe80::216:9cff:fe15:c00   0:16:9c:15:c:0      cge2_2442_ll  router  STALE

EXAMPLE #3
----------
To view a list of neighbor cache entries for a specific IP address and interface, on the Data Mover server_2, type:

$ server_ip server_2 -neighbor -list fe80::216:9cff:fe15:c00 -interface cge1_0000_ll

server_2:
Address                   Link layer address  Interface     Type    State
fe80::216:9cff:fe15:c00   0:16:9c:15:c:0      cge1_0000_ll  router  STALE

EXAMPLE #4
----------
To add an entry to the neighbor cache for a global unicast IPv6 address, on the Data Mover server_2, type:

$ server_ip server_2 -neighbor -create 2002:8c8:0:2310::2 -lladdress 0:16:9c:15:c:15

OK

EXAMPLE #5
----------
To add an entry to the neighbor cache for a link-local IPv6 address, on the Data Mover server_2, type:

$ server_ip server_2 -neighbor -create fe80::2 -lladdress 0:16:9c:15:c:12 -interface cge1v6

OK

EXAMPLE #6
----------
To delete an entry from the neighbor cache for a global unicast IPv6 address, on the Data Mover server_2, type:

$ server_ip server_2 -neighbor -delete 2002:8c8:0:2310:0:2:ac18:f401

OK

EXAMPLE #7
----------
To delete an entry from the neighbor cache for a link-local IPv6 address, on all the Data Movers, type:

$ server_ip ALL -neighbor -delete fe80::1 -interface cge1v6

OK

EXAMPLE #8
----------
To delete all entries from the neighbor cache on the Data Mover server_2, type:

$ server_ip server_2 -neighbor -delete -all

OK

EXAMPLE #9
----------
To view a list of route table entries on the Data Mover server_2, type:

$ server_ip server_2 -route -list

server_2:
Destination           Gateway                   Interface     Expires (secs)
2002:8c8:0:2310::/64                            cge1v6        0
2002:8c8:0:2311::/64                            cge1v6        0
2002:8c8:0:2312::/64                            cge1v6        0
2002:8c8:0:2313::/64                            cge1v6        0
default               fe80::260:16ff:fe05:1bdd  cge1_0000_ll  1785
default               fe80::260:16ff:fe05:1bdc  cge1_0000_ll  1785
default               2002:8c8:0:2314::1        cge4v6        0
selected default      fe80::260:16ff:fe05:1bdd  cge1_0000_ll  1785

Where:

Value        Definition
Destination  The prefix of the destination or the default route entry. There can be multiple default routes, but only one is active and shown as selected default. The default sorting of the destination column displays the default routes at the bottom of the list and the selected default at the end of the list.
Gateway      The default gateway for default route entries. This value is blank for prefix destination entries.
Interface    The name of the interface used for the route.
Expires      The time (in seconds) for which the route entry remains valid. Zero denotes that the route is permanent and does not expire.

EXAMPLE #10
-----------
To add a route table entry on the Data Mover server_2 to the destination network with the specified prefix, type:

$ server_ip server_2 -route -create -destination 2002:8c8:0:2314::/64 -interface cge4v6

OK

EXAMPLE #11
-----------
To add a default route table entry on the Data Mover server_2 through the specified gateway, type:

$ server_ip server_2 -route -create -default -gateway 2002:8c8:0:2314::1

OK

EXAMPLE #12
-----------
To add a default route table entry on the Data Mover server_2 through the specified gateway using the link-local interface, type:

$ server_ip server_2 -route -create -default -gateway fe80::1 -interface cge1v6

OK

EXAMPLE #13
-----------
To delete an entry from the route table with an IPv6 prefix route destination for all the Data Movers, type:

$ server_ip ALL -route -delete -destination 2002:8c8:0:2314::/64

OK

EXAMPLE #14
-----------
To delete an entry from the route table for a global unicast IPv6 address, on the Data Mover server_2, type:

$ server_ip server_2 -route -delete -default -gateway 2002:8c8:0:2314::1

OK

EXAMPLE #15
-----------
To delete an entry from the route table for a link-local IPv6 address, on the Data Mover server_2, type:

$ server_ip server_2 -route -delete -default -gateway fe80::1 -interface cge1v6

OK

EXAMPLE #16
-----------
To delete all entries from the IPv6 route table on the Data Mover server_2, type:

$ server_ip server_2 -route -delete -all

OK

----------------------------------------------------------------------------
Last modified: April 12, 2011 1:30 pm
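Route selection over a table like the one in EXAMPLE #9 follows the usual longest-prefix-match rule: a destination that falls inside a configured prefix uses that prefix route, and anything else uses the selected default. The sketch below (our own illustration, not eNAS code) models that decision with Python's standard ipaddress module:

```python
import ipaddress

def select_route(destination: str, prefixes: list) -> str:
    """Pick the most specific matching prefix route; fall back to the default."""
    addr = ipaddress.ip_address(destination)
    matches = [p for p in prefixes if addr in ipaddress.ip_network(p)]
    if not matches:
        return "default"
    # Longest prefix wins when several routes cover the destination.
    return max(matches, key=lambda p: ipaddress.ip_network(p).prefixlen)

# Prefix routes from the EXAMPLE #9 output:
table = ["2002:8c8:0:2310::/64", "2002:8c8:0:2311::/64",
         "2002:8c8:0:2312::/64", "2002:8c8:0:2313::/64"]
print(select_route("2002:8c8:0:2311::5", table))   # 2002:8c8:0:2311::/64
print(select_route("3ffe::1", table))              # default
```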
server_kerberos
Manages the Kerberos configuration within the specified Data Movers.

SYNOPSIS
--------
server_kerberos {
-keytab
Displays the principal names for the keys stored in the keytab file.

-ccache
Displays the entries in the Data Mover's Kerberos credential cache.

Note: The -ccache option can also be used to provide EMC Customer Support with information for troubleshooting user access problems.

[-flush]
Flushes the Kerberos credential cache, removing all entries. Credential cache entries are automatically flushed when they expire or when the Data Mover reboots. Once the cache is flushed, Kerberos obtains new credentials as needed. Repopulation of the cache may happen immediately, take several hours, or be deferred indefinitely if no Kerberos activity occurs.

-list
Displays a listing of all configured realms on a specified Data Mover or on all Data Movers.

-kadmin [
---------- To list the keytabs, type: $ server_kerberos server_2 -keytab server_2 : Dumping keytab file keytab file major version = 0, minor version 0 -- Entry number 1 -- principal: DM102-CGE0$@NASDOCS.EMC.COM realm: NASDOCS.EMC.COM encryption type: rc4-hmac-md5 principal type 1, key version: 332 key length: 16, key: b1c199a6ac11cd529df172e270326d5e key flags:(0x0), Dynamic Key, Not Cached key cache hits: 0 -- Entry number 2 -- principal: DM102-CGE0$@NASDOCS.EMC.COM realm: NASDOCS.EMC.COM encryption type: des-cbc-md5 principal type 1, key version: 332 key length: 8, key: ced9a23183619267 key flags:(0x0), Dynamic Key, Not Cached key cache hits: 0 -- Entry number 3 -- principal: DM102-CGE0$@NASDOCS.EMC.COM realm: NASDOCS.EMC.COM encryption type: des-cbc-crc principal type 1, key version: 332 key length: 8, key: ced9a23183619267 key flags:(0x0), Dynamic Key, Not Cached key cache hits: 0 -- Entry number 4 -- principal: host/dm102-cge0@NASDOCS.EMC.COM realm: NASDOCS.EMC.COM encryption type: rc4-hmac-md5 principal type 1, key version: 332 key length: 16, key: b1c199a6ac11cd529df172e270326d5e key flags:(0x0), Dynamic Key, Not Cached key cache hits: 0 <... removed ...> -- Entry number 30 -- principal: cifs/dm102-cge0.nasdocs.emc.com@NASDOCS.EMC.COM realm: NASDOCS.EMC.COM encryption type: des-cbc-crc principal type 1, key version: 333 key length: 8, key: d95e1940b910ec61 key flags:(0x0), Dynamic Key, Not Cached key cache hits: 0 End of keytab entries. 30 entries found. This is a partial listing due to the length of the output. Where: Value Definition principal type Type of the principal as defined in the GSS-API. Reference to RFC 2743. key version Every time a key is regenerated its version changes. EXAMPLE #3 ---------- To list all of the realms on a Data Mover, type:
$ server_kerberos server_2 -list

server_2 :
Kerberos common attributes section:
 Supported TGS encryption types: rc4-hmac-md5 des-cbc-md5 des-cbc-crc
 Supported TKT encryption types: rc4-hmac-md5 des-cbc-md5 des-cbc-crc
 Use DNS locator: yes
End of Kerberos common attributes.

Kerberos realm configuration:
 realm name: NASDOCS.EMC.COM
 kdc: winserver1.nasdocs.emc.com
 admin server: winserver1.nasdocs.emc.com
 kpasswd server: winserver1.nasdocs.emc.com
 default domain: nasdocs.emc.com
End of Kerberos realm configuration.

Kerberos domain_realm section:
 DNS domain = Kerberos realm
 .nasdocs.emc.com = NASDOCS.EMC.COM
End of Krb5.conf domain_realm section.

EXAMPLE #4
----------
To specify a kadmin server, type:

# server_kerberos server_2 -add realm=eng.nasdocs.emc.com,kdc=winserver1.nasdocs.emc.com,kadmin=172.24.102.67

server_2 : done

Note: You must be logged in as root to use the -kadmin option, which is why this example shows the # prompt instead of $.

EXAMPLE #5
----------
To delete a realm on a Data Mover, type:

$ server_kerberos server_2 -delete realm=eng.nasdocs.emc.com

server_2 : done

EXAMPLE #6
----------
To display the credential cache on a Data Mover, type:

$ server_kerberos server_2 -ccache

server_2 :
Dumping credential cache
Names:
 Client: DM102-CGE0$@NASDOCS.EMC.COM
 Service: WINSERVER1.NASDOCS.EMC.COM
 Target: HOST/WINSERVER1.NASDOCS.EMC.COM@NASDOCS.EMC.COM
Times:
 Auth: 09/12/2005 07:15:04 GMT
 Start: 09/12/2005 07:15:04 GMT
 End: 09/12/2005 17:15:04 GMT
Flags: PRE_AUTH,OK_AS_DELEGATE
Encryption Types:
 Key: rc4-hmac-md5
 Ticket: rc4-hmac-md5
Names:
 Client: DM102-CGE0$@NASDOCS.EMC.COM
 Service: winserver1.nasdocs.emc.com
 Target: ldap/winserver1.nasdocs.emc.com@NASDOCS.EMC.COM
Times:
 Auth: 09/12/2005 07:15:04 GMT
 Start: 09/12/2005 07:15:04 GMT
 End: 09/12/2005 17:15:04 GMT
Flags: PRE_AUTH,OK_AS_DELEGATE
Encryption Types:
 Key: rc4-hmac-md5
 Ticket: rc4-hmac-md5
Names:
 Client: DM102-CGE0$@NASDOCS.EMC.COM
 Service: NASDOCS.EMC.COM
 Target: krbtgt/NASDOCS.EMC.COM@NASDOCS.EMC.COM
Times:
 Auth: 09/12/2005 07:15:04 GMT
 Start: 09/12/2005 07:15:04 GMT
 End: 09/12/2005 17:15:04 GMT
Flags: INITIAL,PRE_AUTH
Encryption Types:
 Key: rc4-hmac-md5
 Ticket: rc4-hmac-md5
End of credential cache entries.

Where:

Value    Definition
client   Client name and its realm.
service  Domain controller and its realm.
target   Target name and its realm.
auth     Time of the initial authentication for the named principal.
start    Time after which the ticket is valid.
end      Time after which the ticket is no longer honored (its expiration time).
flags    Options used or requested when the ticket was issued.
key      Key encryption type.
ticket   Ticket encryption type.

EXAMPLE #7
----------
To flush the credential cache on a Data Mover, type:

$ server_kerberos server_2 -ccache flush

server_2 :
Purging credential cache.
Credential cache flushed.

--------------------------------------
Last Modified: April 13, 2011 11:35 am
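The keytab and credential cache listings above all use the standard Kerberos principal form, either a plain account name (DM102-CGE0$@NASDOCS.EMC.COM) or a service/instance pair (cifs/dm102-cge0.nasdocs.emc.com@NASDOCS.EMC.COM). A small sketch (our own helper, not part of the CLI) that splits a principal into its parts, which can be handy when scanning long keytab dumps:

```python
def parse_principal(principal: str):
    """Split 'service/instance@REALM'; instance is None for plain account principals."""
    name, _, realm = principal.rpartition("@")
    if "/" in name:
        service, instance = name.split("/", 1)
    else:
        service, instance = name, None
    return service, instance, realm

print(parse_principal("cifs/dm102-cge0.nasdocs.emc.com@NASDOCS.EMC.COM"))
# ('cifs', 'dm102-cge0.nasdocs.emc.com', 'NASDOCS.EMC.COM')
print(parse_principal("DM102-CGE0$@NASDOCS.EMC.COM"))
# ('DM102-CGE0$', None, 'NASDOCS.EMC.COM')
```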
server_ldap
Manages the LDAP-based directory client configuration and LDAP over SSL for the specified Data Movers.

SYNOPSIS
--------
server_ldap {
For OpenLDAP or Active Directory with SFU or IdMU, an ldap.conf file is required only for customized setups.

Note: When the DN of the directory base contains dots and the client is configured using the domain name, the default containers may not be set up correctly. For example, if the base DN is dc=my.company,dc=com and it is specified as the domain name my.company.com, VNX incorrectly defines the default containers as dc=my,dc=company,dc=com.

[-servers {
a password. For SSL-based client authentication to succeed, the Data Mover certificate Subject must match the distinguished name of an existing user (account) at the directory server.

Note: To configure an LDAP-based directory service for authentication, -binddn is not required if the -sslpersona option is specified. In this case, SSL-based client authentication is used.

The Kerberos account name must be the CIFS server computer name known by the KDC. The account name must terminate with a $ symbol. By default, the Data Mover assumes that the realm is the same as the LDAP domain provided in the -domain or -basedn options, but a different realm name can be specified if necessary.

[-sslenabled {y|n}]
Enables (y) or disables (n) SSL. SSL is disabled by default.

[-sslpersona {none|
-info
Displays the service status as well as the static and dynamic configuration.

[-verbose]
Adds troubleshooting information to the output.

-service {-start|-stop|-status}
The -start option enables the LDAP-based directory client service. The LDAP-based directory client service is also restarted when the VNX is rebooted. The -stop option disables the LDAP-based directory client service, and the -status option displays the status of the LDAP-based directory service.

-lookup {user=
To configure the use of an LDAP-based directory by a Data Mover and specify the use of the client profile using its distinguished name, type:

$ server_ldap server_4 -set -domain nasdocs.emc.com -servers 172.24.102.62 -profile cn=celerra_profile,dc=nasdocs,dc=emc,dc=com -nisdomain nasdocs -sslenabled y

server_4 : done

EXAMPLE #6
----------
To specify the NIS domain of which the Data Mover is a member, type:

$ server_ldap server_2 -set -domain nasdocs.emc.com -servers 172.24.102.62 -nisdomain nasdocs

server_2 : done

EXAMPLE #7
----------
To configure the use of simple authentication by specifying a bind Distinguished Name (DN) and password, type:

$ server_ldap server_2 -set -p -domain nasdocs.emc.com -servers 172.24.102.10 -binddn "cn=admin,cn=users,dc=nasdocs,dc=emc"

server_2 : Enter Password:********
done

EXAMPLE #8
----------
To configure the use of an LDAP-based directory by a Data Mover using SSL, type:

$ server_ldap server_4 -set -basedn dc=nasdocs,dc=emc,dc=com -servers 172.24.102.62 -sslenabled y

server_4 : done

EXAMPLE #9
----------
To configure the use of an LDAP-based directory by a Data Mover using SSL and a user key and certificate, type:

$ server_ldap server_4 -set -basedn dc=nasdocs,dc=emc,dc=com -servers 172.24.102.62 -sslenabled y -sslpersona default

server_4 : done

EXAMPLE #10
-----------
To configure the use of an LDAP-based directory by a Data Mover using SSL and specified ciphers, type:

$ server_ldap server_4 -set -basedn dc=nasdocs,dc=emc,dc=com -servers 172.24.102.62 -sslenabled y -sslcipher "RC4-MD5,RC4-SHA"

server_4 : done

EXAMPLE #11
-----------
To display information about the LDAP-based directory configuration on a Data Mover, type:

$ server_ldap server_4 -info

server_4 :
LDAP domain: nasdocs.emc.com
 base DN: dc=nasdocs,dc=emc,dc=com
State: Configured - Connected
NIS domain: nasdocs.emc.com
No client profile nor config. file provided (using default setup)
Connected to LDAP server address: 172.24.102.62 - port 636
SSL enabled/disabled by Command line, cipher suites configured by Command line

EXAMPLE #12
-----------
To configure the use of Kerberos authentication by specifying a Kerberos account, type:

$ server_ldap server_2 -set -basedn dc=nasdocs,dc=emc,dc=com -servers 172.24.102.62 -kerberos -kaccount cifs_compname$

server_2 : done

EXAMPLE #13
-----------
To display detailed information about the LDAP-based directory configuration on a Data Mover, type:

$ server_ldap server_2 -info -verbose

server_2 :
LDAP domain: devldapdom1.lcsc
 State: Configured - Connected
 Schema: OpenLDAP
 Base dn: dc=devldapdom1,dc=lcsc
 Bind dn:
EXAMPLE #16
-----------
To stop the LDAP-based directory service, type:

$ server_ldap server_4 -service -stop

server_4 : done

EXAMPLE #17
-----------
To delete the LDAP configuration for the specified Data Mover and stop the service, type:

$ server_ldap server_4 -clear

server_4 : done

EXAMPLE #18
-----------
To check whether any LDAP domain is configured, type:

$ server_ldap server_3 -service -status

server_3 : LDAP domain is not configured yet.

EXAMPLE #19
-----------
To configure a domain for OpenLDAP with the standard schema, type:

$ server_ldap server_3 -set -domain devldapdom1.lcsc -servers 192.168.67.114, 192.168.67.148

server_3 : done

Note: Since this is the first domain, you can use either the -set or the -add option.

EXAMPLE #20
-----------
To configure a domain for Fedora Directory Server (same as OpenLDAP), type:

$ server_ldap server_3 -add -p -basedn dc=389-ds,dc=lcsc -servers 192.168.67.10.64.223.182 -binddn "\"cn=Directory Manager\""

server_3 : Enter Password:********
done

Note: Since a domain is already set up, you must use the -add option.

EXAMPLE #21
-----------
To configure a domain for iPlanet using a specific configuration profile, type:

$ server_ldap server_3 -add -domain dvt.emc -servers 192.168.67.140 -profile profilecad3

server_3 : done

EXAMPLE #22
-----------
To configure a domain for IDMU using a specific configuration file, type:

$ server_ldap server_3 -add -p -basedn dc=eng,dc=lcsc -servers 192.168.67.82 -binddn cn=administrator,cn=Users,dc=eng,dc=lcsc -file ldap.conf

server_3 : Enter Password:******
done

EXAMPLE #23
-----------
To check whether the domains are OK, type:

$ server_ldap server_3 -service -status

server_3 :
LDAP domain "dev.lcsc" is active - Configured with RFC-2307 defaults
LDAP domain "ds.lcsc" is inactive - Configured with RFC-2307 defaults
LDAP domain "dvt.emc" is active - Configured with profile "profilecad3"
LDAP domain "eng.lcsc" is active - Configured with file "ldap.conf"

EXAMPLE #24
-----------
To get the details about the domain ds.lcsc, type:

$ server_ldap server_3 -info -verbose -domain ds.lcsc

server_3 :
LDAP domain: ds.lcsc
 State: Uninitialized - Disconnected
 Schema: Unknown yet (must succeed to connect)
 Base dn: dc=ds,dc=lcsc
 Bind dn: cn=Directory Manager
 Configuration: RFC-2307 defaults
Global warnings & errors {
 Only one LDAP server is configured for LDAP domain ds.lcsc.
}
LDAP server: 192.168.67.182 - Port: 389 - Spare
 SSL: Not enabled
 Last error: 91 / Connect error
Server warnings & errors {
 LDAP server 192.168.67.182: LDAP protocol error: LDAP is unable to connect to the specified port.
 LDAP server 192.168.67.182: LDAP protocol error: Connect error.
}

EXAMPLE #25
-----------
To delete the domain ds.lcsc, type:

$ server_ldap server_3 -clear -domain ds.lcsc

server_3 : done

$ server_ldap server_3 -service -status

server_3 :
LDAP domain "dev.lcsc" is active - Configured with RFC-2307 defaults
LDAP domain "dvt.emc" is active - Configured with profile "profilecad3"
LDAP domain "eng.lcsc" is active - Configured with file "ldap.conf"

EXAMPLE #26
-----------
To look up a user in a given domain, type:

$ server_ldap server_3 -lookup -user cad -domain eng.lcsc

server_3 : user: cad, uid: 33021, gid: 32769, homeDir: /emc/cad

EXAMPLE #27
-----------
To get info on all domains, type:

$ server_ldap server_3 -info -all

server_3 :
LDAP domain: dev.lcsc
 State: Configured - Connected
 Schema: OpenLDAP
 Base dn: dc=devldapdom1,dc=lcsc
 Bind dn:
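The domain-name-to-base-DN pitfall described earlier for server_ldap comes from the conventional mapping of DNS labels to dc= components: each dot-separated label becomes one dc= entry. The sketch below (our own illustration) shows that naive mapping, and why a base DN whose component itself contains a dot (dc=my.company,dc=com) cannot be expressed through the -domain option and must be given with -basedn instead:

```python
def domain_to_basedn(domain: str) -> str:
    """Naive DNS-to-DN mapping: one dc= component per dot-separated label."""
    return ",".join("dc=" + label for label in domain.split("."))

print(domain_to_basedn("nasdocs.emc.com"))   # dc=nasdocs,dc=emc,dc=com

# The case the server_ldap note warns about: if the real base DN is
# dc=my.company,dc=com, configuring by domain name produces the wrong DN:
print(domain_to_basedn("my.company.com"))    # dc=my,dc=company,dc=com
```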
server_log
Displays the log generated by the specified Data Mover.

SYNOPSIS
--------
server_log
SchedulerSrc=118_APM00062400708_0000_253_APM00062400708_00 00, curState=active, input=refreshDone 1200229390: DPSVC: 6: DpVersion::getTotalBlocksVolume enter 1200229390: DPSVC: 6: DpVersion::getTotalBlocksVolume found newV 118.ckpt003, bl ocks 17534 1200229390: DPSVC: 6: DpVersion::getTotalBlocksVolume 0 blocks for vnumber 1038 totalB 0 1200229390: DPSVC: 6: DpVersion::getTotalBlocksVolume found oldV 118.ckpt004 1200229390: DPSVC: 6: DpVersion::getTotalBlocksVolume exit 1200229390: DPSVC: 6: DpVersion::getTotalBytes 0 blocks 0 bytes 1200229390: DPSVC: 6: SchedulerSrc=118_APM00062400708_0000_253_APM00062400708_00 00, newState=active 1200229390: SVFS: 6: D113118_736: After Merge err:4 full:0 mD:0 1200229390: SVFS: 6: D113118_736: prev !full release ch:82944 newPrev:99328 1200229390: SVFS: 6: D113118_737: Chunk:0 hdrAdd:50176 ==> prevChunk:82944 befor e changePrevChunk 1200229390: SVFS: 6: D113118_737: Ch:0 hdr:50176 : prevCh:99328 after changePrev 1200229510: DPSVC: 6: refreshSnap: cur=1200229510, dl=1200229520, kbytes=0, setu p=0, rate=1000 1200229510: DPSVC: 6: SchedulerSrc=199_APM00062400708_0000_258_APM00062400708_00 00, curState=active, input=refresh 1200229510: DPSVC: 6: DpRequest::execute() BEGIN reqType:DpRequest_VersionInt_Sc hSrcRefresh reqCaller:DpRequest_Caller_Scheduler reqMode:0 1200229510: DPSVC: 6: DpRequest::execute() END reqType:DpRequest_VersionInt_SchS rcRefresh reqCaller:DpRequest_Caller_Scheduler status:0 reqMode:0 1200229510: DPSVC: 6: SchedulerSrc=199_APM00062400708_0000_258_APM00062400708_00 00, newState=active --More-- Note: This is a partial listing due to the length of the output. 
EXAMPLE #2 ---------- To display the current log, type: $ server_log server_2 NAS LOG for slot 2: -------------------- 0 keys=0 h=0 nc=0 0 keys=0 h=0 nc=0 2008-01-13 08:03:10: VRPL: 6: 122: Allocating chunk:3 Add:50176 Chunks:24 2008-01-13 08:03:10: SVFS: 6: Merge Start FsVol:118 event:0x0 2008-01-13 08:03:10: SVFS: 6: D113118_736: hdr:82944 currInd:6, Destpmdv:D114118 _503 2008-01-13 08:03:10: CFS: 6: Resuming fs 24 2008-01-13 08:03:10: SVFS: 6: 118:D113118_736:Merge hdr=82944 prev=99328 id=113 chunk=0 stableEntry=7 2008-01-13 08:03:10: UFS: 6: Volume name:Sh122113 2008-01-13 08:03:10: UFS: 6: starting gid map file processing. 2008-01-13 08:03:10: UFS: 6: gid map file processing is completed. 2008-01-13 08:03:10: DPSVC: 6: DpRequest::done() BEGIN reqType:DpRequest_Version Int_SchSrcRefresh reqCaller:DpRequest_Caller_Scheduler status:0 2008-01-13 08:03:10: DPSVC: 6: SchedulerSrc=118_APM00062400708_0000_253_APM00062 400708_0000, curState=active, input=refreshDone 2008-01-13 08:03:10: DPSVC: 6: DpVersion::getTotalBlocksVolume enter
2008-01-13 08:03:10: DPSVC: 6: DpVersion::getTotalBlocksVolume found newV 118.ck pt003, blocks 17534 2008-01-13 08:03:10: DPSVC: 6: DpVersion::getTotalBlocksVolume 0 blocks for vnum ber 1038 totalB 0 2008-01-13 08:03:10: DPSVC: 6: DpVersion::getTotalBlocksVolume found oldV 118.ck pt004 2008-01-13 08:03:10: DPSVC: 6: DpVersion::getTotalBlocksVolume exit 2008-01-13 08:03:10: DPSVC: 6: DpVersion::getTotalBytes 0 blocks 0 bytes 2008-01-13 08:03:10: DPSVC: 6: SchedulerSrc=118_APM00062400708_0000_253_APM00062 400708_0000, newState=active 2008-01-13 08:03:10: SVFS: 6: D113118_736: After Merge err:4 full:0 mD:0 2008-01-13 08:03:10: SVFS: 6: D113118_736: prev !full release ch:82944 newPrev:9 9328 2008-01-13 08:03:10: SVFS: 6: D113118_737: Chunk:0 hdrAdd:50176 ==> prevChunk:82 944 before changePrevChunk 2008-01-13 08:03:10: SVFS: 6: D113118_737: Ch:0 hdr:50176 : prevCh:99328 after c hangePrev 2008-01-13 08:05:10: DPSVC: 6: refreshSnap: cur=1200229510, dl=1200229520, kbyte s=0, setup=0, rate=1000 2008-01-13 08:05:10: DPSVC: 6: SchedulerSrc=199_APM00062400708_0000_258_APM00062 400708_0000, curState=active, input=refresh 2008-01-13 08:05:10: DPSVC: 6: DpRequest::execute() BEGIN reqType:DpRequest_Vers ionInt_SchSrcRefresh reqCaller:DpRequest_Caller_Scheduler reqMode:0 Note: This is a partial listing due to the length of the output. EXAMPLE #3 ---------- To display the log file without the time stamp, type: $ server_log server_2 -n NAS LOG for slot 2: -------------------- 0 keys=0 h=0 nc=0 VRPL: 6: 122: Allocating chunk:3 Add:50176 Chunks:24 SVFS: 6: Merge Start FsVol:118 event:0x0 SVFS: 6: D113118_736: hdr:82944 currInd:6, Destpmdv:D114118_503 CFS: 6: Resuming fs 24 SVFS: 6: 118:D113118_736:Merge hdr=82944 prev=99328 id=113 chunk=0 stableEntry=7 UFS: 6: Volume name:Sh122113 UFS: 6: starting gid map file processing. UFS: 6: gid map file processing is completed. 
DPSVC: 6: DpRequest::done() BEGIN reqType:DpRequest_VersionInt_SchSrcRefresh req Caller:DpRequest_Caller_Scheduler status:0 DPSVC: 6: SchedulerSrc=118_APM00062400708_0000_253_APM00062400708_0000, curState =active, input=refreshDone DPSVC: 6: DpVersion::getTotalBlocksVolume enter DPSVC: 6: DpVersion::getTotalBlocksVolume found newV 118.ckpt003, blocks 17534 DPSVC: 6: DpVersion::getTotalBlocksVolume 0 blocks for vnumber 1038 totalB 0 DPSVC: 6: DpVersion::getTotalBlocksVolume found oldV 118.ckpt004 DPSVC: 6: DpVersion::getTotalBlocksVolume exit DPSVC: 6: DpVersion::getTotalBytes 0 blocks 0 bytes DPSVC: 6: SchedulerSrc=118_APM00062400708_0000_253_APM00062400708_0000, newState =active SVFS: 6: D113118_736: After Merge err:4 full:0 mD:0 SVFS: 6: D113118_736: prev !full release ch:82944 newPrev:99328
SVFS: 6: D113118_737: Chunk:0 hdrAdd:50176 ==> prevChunk:82944 before changePrev Chunk SVFS: 6: D113118_737: Ch:0 hdr:50176 : prevCh:99328 after changePrev DPSVC: 6: refreshSnap: cur=1200229510, dl=1200229520, kbytes=0, setup=0, rate=10 00 DPSVC: 6: SchedulerSrc=199_APM00062400708_0000_258_APM00062400708_0000, curState =active, input=refresh DPSVC: 6: DpRequest::execute() BEGIN reqType:DpRequest_VersionInt_SchSrcRefresh reqCaller:DpRequest_Caller_Scheduler reqMode:0 DPSVC: 6: DpRequest::execute() END reqType:DpRequest_VersionInt_SchSrcRefresh re qCaller:DpRequest_Caller_Scheduler status:0 reqMode:0 DPSVC: 6: SchedulerSrc=199_APM00062400708_0000_258_APM00062400708_0000, newState =active VBB: 6: VBB session list empty CFS: 6: fs 0x78 type = dhfs being unmounted. Waiting for quiesce ... CFS: 6: fs 0x78 type = dhfs unmounted --More-- Note: This is a partial listing due to the length of the output. EXAMPLE #4 ---------- To display all of the current logs available, type: $ server_log server_2 -a NAS LOG for slot 2: -------------------- 1200152690: SVFS: 6: D113118_606: prev !full release ch:82944 newPrev:99328 1200152690: SVFS: 6: D113118_607: Chunk:0 hdrAdd:50176 ==> prevChunk:82944 befor e changePrevChunk 1200152690: SVFS: 6: D113118_607: Ch:0 hdr:50176 : prevCh:99328 after changePrev 1200152950: DPSVC: 6: refreshSnap: cur=1200152950, dl=1200152960, kbytes=0, setu p=0, rate=666 1200152950: DPSVC: 6: SchedulerSrc=199_APM00062400708_0000_258_APM00062400708_00 00, curState=active, input=refresh 1200152950: DPSVC: 6: DpRequest::execute() BEGIN reqType:DpRequest_VersionInt_Sc hSrcRefresh reqCaller:DpRequest_Caller_Scheduler reqMode:0 1200152950: DPSVC: 6: DpRequest::execute() END reqType:DpRequest_VersionInt_SchS rcRefresh reqCaller:DpRequest_Caller_Scheduler status:0 reqMode:0 1200152950: DPSVC: 6: SchedulerSrc=199_APM00062400708_0000_258_APM00062400708_00 00, newState=active 1200152950: VBB: 6: VBB session list empty 1200152950: CFS: 6: fs 0x78 type = dhfs being 
unmounted. Waiting for quiesce ... 1200152950: CFS: 6: fs 0x78 type = dhfs unmounted 1200152950: SVFS: 6: pause() requested on fsid:78 1200152950: SVFS: 6: pause done on fsid:78 1200152950: SVFS: 6: Cascaded Delete... 1200152950: SVFS: 6: D120199_1131: createBlockMap PBM root=0 keys=0 h=0 nc=0 1200152950: VRPL: 6: 217: Allocating chunk:4 Add:66560 Chunks:15 1200152950: SVFS: 6: Merge Start FsVol:199 event:0x0 1200152950: SVFS: 6: D120199_1130: hdr:99328 currInd:6, Destpmdv:D119199_1124 1200152950: CFS: 6: Resuming fs 78 1200152950: SVFS: 6: 199:D120199_1130:Merge hdr=99328 prev=82944 id=120 chunk=0 stableEntry=7 1200152950: UFS: 6: Volume name:Sh217120 1200152950: UFS: 6: starting gid map file processing. 1200152950: SVFS: 6: D120199_1130: After Merge err:4 full:0 mD:0
1200152950: SVFS: 6: D120199_1130: prev !full release ch:99328 newPrev:82944 1200152950: SVFS: 6: D120199_1131: Chunk:0 hdrAdd:66560 ==> prevChunk:99328 befo re changePrevChunk 1200152950: SVFS: 6: D120199_1131: Ch:0 hdr:66560 : prevCh:82944 after changePre v 1200152950: UFS: 6: gid map file processing is completed. 1200152950: DPSVC: 6: DpRequest::done() BEGIN reqType:DpRequest_VersionInt_SchSr cRefresh reqCaller:DpRequest_Caller_Scheduler status:0 1200152950: DPSVC: 6: SchedulerSrc=199_APM00062400708_0000_258_APM00062400708_00 00, curState=active, input=refreshDone --More-- Note: This is a partial listing due to the length of the output. EXAMPLE #5 ---------- To display the current log in terse form, type: $ server_log server_2 -t NAS LOG for slot 2: -------------------- 0 keys=0 h=0 nc=0 1200229390: 26043285504: 122: Allocating chunk:3 Add:50176 Chunks:24 1200229390: 26042826752: Merge Start FsVol:118 event:0x0 1200229390: 26042826752: D113118_736: hdr:82944 currInd:6, Destpmdv:D114118_503 1200229390: 26040008704: Resuming fs 24 1200229390: 26042826752: 118:D113118_736:Merge hdr=82944 prev=99328 id=113 chunk =0 stableEntry=7 1200229390: 26042433536: Volume name:Sh122113 1200229390: 26042433536: starting gid map file processing. 1200229390: 26042433536: gid map file processing is completed. 
1200229390: 26045513728: DpRequest::done() BEGIN reqType:DpRequest_VersionInt_Sc hSrcRefresh reqCaller:DpRequest_Caller_Scheduler status:0 1200229390: 26045513728: SchedulerSrc=118_APM00062400708_0000_253_APM00062400708 _0000, curState=active, input=refreshDone 1200229390: 26045513728: DpVersion::getTotalBlocksVolume enter 1200229390: 26045513728: DpVersion::getTotalBlocksVolume found newV 118.ckpt003, blocks 17534 1200229390: 26045513728: DpVersion::getTotalBlocksVolume 0 blocks for vnumber 10 38 totalB 0 1200229390: 26045513728: DpVersion::getTotalBlocksVolume found oldV 118.ckpt004 1200229390: 26045513728: DpVersion::getTotalBlocksVolume exit 1200229390: 26045513728: DpVersion::getTotalBytes 0 blocks 0 bytes 1200229390: 26045513728: SchedulerSrc=118_APM00062400708_0000_253_APM00062400708 _0000, newState=active 1200229390: 26042826752: D113118_736: After Merge err:4 full:0 mD:0 1200229390: 26042826752: D113118_736: prev !full release ch:82944 newPrev:99328 1200229390: 26042826752: D113118_737: Chunk:0 hdrAdd:50176 ==> prevChunk:82944 b efore changePrevChunk 1200229390: 26042826752: D113118_737: Ch:0 hdr:50176 : prevCh:99328 after change Prev 1200229510: 26045513728: refreshSnap: cur=1200229510, dl=1200229520, kbytes=0, s etup=0, rate=1000 1200229510: 26045513728: SchedulerSrc=199_APM00062400708_0000_258_APM00062400708 _0000, curState=active, input=refresh
1200229510: 26045513728: DpRequest::execute() BEGIN reqType:DpRequest_VersionInt _SchSrcRefresh reqCaller:DpRequest_Caller_Scheduler reqMode:0 1200229510: 26045513728: DpRequest::execute() END reqType:DpRequest_VersionInt_S --More-- Note: This is a partial listing due to the length of the output. EXAMPLE #6 ---------- To display the current log in verbose form, type: $ server_log server_2 -v DART Work Partition Layout found @ LBA 0x43000 (134MB boundary) slot 2) About to dump log @ LBA 0xc7800 NAS LOG for slot 2: -------------------- About to print log from LBA c8825 to c97ff 0 keys=0 h=0 nc=0 logged time = 2008-01-13 08:03:10 id = 26043285504 severity = INFO component = DART facility = VRPL baseid = 0 type = STATUS argument name = arg0 argument value = 122: Allocating chunk:3 Add:50176 Chunks:24 argument type = string (8) brief description = 122: Allocating chunk:3 Add:50176 Chunks:24 full description = No additional information is available. recommended action = No recommended action is available. Use the text from the error messages brief description to search the Knowledgebase on Powerlink. After logging in to Powerlink, go to Support > Knowledgebase Search > Support Solutions Search. logged time = 2008-01-13 08:03:10 id = 26042826752 severity = INFO component = DART facility = SVFS baseid = 0 type = STATUS argument name = arg0 argument value = Merge Start FsVol:118 event:0x0 argument type = string (8) brief description = Merge Start FsVol:118 event:0x0 full description = No additional information is available. recommended action = No recommended action is available. Use the text from the error messages brief description to search the Knowledgebase on Powerlink. After logging in to Powerlink, go to Support > Knowledgebase Search > Support Solutions Search. --More-- Note: This is a partial listing due to the length of the output. -------------------------------------- Last Modified: June 2, 2011 2:00 pm
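The default server_log output (EXAMPLE #1) prefixes each entry with a raw epoch timestamp, while EXAMPLE #2 shows the same entries rendered as wall-clock time (1200229390 corresponds to the 2008-01-13 entries; the sample appears to print local time, here UTC-5). A small sketch (our own helper, not part of the CLI) for converting the raw form when reading saved logs:

```python
from datetime import datetime, timezone

def log_timestamp(epoch_seconds: int) -> str:
    """Render a raw server_log epoch timestamp as a UTC wall-clock string."""
    dt = datetime.fromtimestamp(epoch_seconds, tz=timezone.utc)
    return dt.strftime("%Y-%m-%d %H:%M:%S")

# 1200229390 from the EXAMPLE #1 output; EXAMPLE #2 shows the same entry
# as 2008-01-13 08:03:10 in the Data Mover's local time zone.
print(log_timestamp(1200229390))   # 2008-01-13 13:03:10 (UTC)
```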
server_mount
------------
Mounts file systems and manages mount options for the specified Data Movers.

SYNOPSIS
--------
server_mount {
[-option options]
Specifies the following comma-separated options:

[ro|rw]
Specifies the mount as read-write (the default), or read-only, which is the default for checkpoints and TimeFinder/FS.

Note: MPFS clients do not acknowledge file systems that are mounted read-only and allow their clients to write to the file system.

[accesspolicy={NT|UNIX|SECURE|NATIVE|MIXED|MIXED_COMPAT}]
Indicates the access control policy as defined in the table below:

Note: When accessed from a Windows client, ACLs are only checked if the CIFS user authentication method is set to the recommended default, NT. This is set using the -add security option in the server_cifs command.

Access Policy     CIFS clients                      NFS clients
-------------     ------------                      -----------
NATIVE (default)  ACL is checked.                   UNIX rights are checked.

UNIX              ACL and UNIX rights are checked.  UNIX rights are checked.

NT                ACL is checked.                   ACL and UNIX rights are checked.

SECURE            ACL and UNIX rights are checked.  ACL and UNIX rights are checked.

MIXED             ACL is checked. If there is not   ACL is checked. If there is not
                  an ACL, one is created based on   an ACL, one is created based on
                  the UNIX mode bits. Access is     the UNIX mode bits. Access is
                  also determined by the ACL.       also determined by the ACL.
                  NFSv4 clients can manage the      NFSv4 clients can manage the
                  ACL. An ACL modification          ACL. A modification to the
                  rebuilds the UNIX mode bits       UNIX mode bits rebuilds the
                  but the UNIX rights are not       ACL permissions but the UNIX
                  checked.                          rights are not checked.

MIXED_COMPAT      If the permissions of a file or   If the permissions of a file or
                  directory were last set or        directory were last set or
                  changed by a CIFS client, the     changed by an NFS client, the
                  ACL is checked and the UNIX       UNIX rights are checked and the
                  rights are rebuilt but are not    ACL is rebuilt but is not
                  checked. If the permissions of    checked. If the permissions of
                  a file or directory were last     a file or directory were last
                  set or changed by an NFS          set or changed by a CIFS
                  client, the UNIX rights are       client, the ACL is checked and
                  checked and the ACL is rebuilt    the UNIX rights are rebuilt but
                  but is not checked. NFSv4         are not checked. NFSv4 clients
                  clients can manage the ACL.       can manage the ACL.

Note: The MIXED policy translates the UNIX ownership mode bits into three ACEs: Owner, Group, and Everyone, which can result in different permissions for the Group ACE and the Everyone ACE. The MIXED_COMPAT policy does not translate a UNIX Group into a Group ACE. The Everyone ACE is generated from the UNIX Group.

[cvfsname=
of CIFS write protocol option. This can impact write performance. [triggerlevel=
[ceppnfs]
Enables CEPA events for NFS on a file system.

Note: If ceppnfs is used without the ceppcifs option, the CEPA events for CIFS are disabled. To enable CEPA events for both NFS and CIFS on a file system, ensure that you add both options to the command.

nfsv4delegation={NONE|READ|RW}
Indicates which actions on a file are delegated to the NFSv4 client. NONE indicates that no file delegation is granted. READ indicates that only read delegation is granted. RW (default) indicates that write delegation is granted.

SEE ALSO
--------
Managing Volumes and File Systems with VNX Automatic Volume Management, Managing Volumes and File Systems for VNX Manually, Configuring NFS on VNX, Configuring and Managing CIFS on VNX, Using VNX SnapSure, nas_fs, server_checkup, server_export, server_mountpoint, server_nfs, server_setup, server_umount, and server_viruschk.

EXAMPLE #1
----------
To display all mounted file systems on server_2, type:
$ server_mount server_2
server_2 :
root_fs_2 on / uxfs,perm,rw
root_fs_common on /.etc_common uxfs,perm,ro
ufs2 on /ufs2 uxfs,perm,rw

EXAMPLE #2
----------
To mount all file systems temporarily umounted from the mount table of server_2, type:
$ server_mount server_2 -all
server_2 : done

EXAMPLE #3
----------
To mount ufs1 on mount point /ufs1 and enable CEPP for both CIFS and NFS, type:
$ server_mount server_2 -o ceppcifs,ceppnfs ufs1 /ufs1
server_2 : done

EXAMPLE #4
----------
To mount ufs1 on mount point /ufs1 with nonotify, nolock, and cifssyncwrite turned on, type:
$ server_mount server_2 -option nonotify,nolock,cifssyncwrite ufs1 /ufs1
server_2 : done

EXAMPLE #5
----------
To mount ufs1 on mount point /ufs1 with the access policy set to NATIVE and nooplock turned on, type:
$ server_mount server_2 -option accesspolicy=NATIVE,nooplock ufs1 /ufs1
server_2 : done

EXAMPLE #6
----------
To mount ufs1 on mount point /ufs1 with noscan and noprefetch set to on, type:
$ server_mount server_2 -option noscan,noprefetch ufs1 /ufs1
server_2 : done

EXAMPLE #7
----------
To mount ufs1 on mount point /ufs1 with notifyonaccess and notifyonwrite set to on, type:
$ server_mount server_2 -option notifyonaccess,notifyonwrite ufs1 /ufs1
server_2 : done

EXAMPLE #8
----------
To mount a copy of a file system, ufs1_snap1, on mount point /ufs1_snap1 with read-write access, type:
$ server_mount server_2 -Force -option rw ufs1_snap1 /ufs1_snap1
server_2 : done

EXAMPLE #9
----------
To mount ufs1 on mount point /ufs1 with uncached writes turned on, type:
$ server_mount server_2 -option uncached ufs1 /ufs1
server_2 : done

EXAMPLE #10
-----------
To mount ufs1 on mount point /ufs1 with the trigger level of notification change set to 256, type:
$ server_mount server_2 -option triggerlevel=256 ufs1 /ufs1
server_2 : done

EXAMPLE #11
-----------
To mount ufs1 on mount point /ufs1 and change the default name of the checkpoint in the ".ckpt" directory, type:
$ server_mount server_2 -option cvfsname=test ufs1 /ufs1
server_2 : done

EXAMPLE #12
-----------
To mount ufs1 on mount point /ufs1 with the access policy set to MIXED, type:
$ server_mount server_2 -option accesspolicy=MIXED ufs1 /ufs1
server_2 : done

EXAMPLE #13
-----------
To mount ufs1 on mount point /ufs1 with the access policy set to MIXED_COMPAT, type:
$ server_mount server_2 -option accesspolicy=MIXED_COMPAT ufs1 /ufs1
server_2 : done

EXAMPLE #14
-----------
To mount ufs1 as a part of the nested file system nmfs1, type:
$ server_mount server_2 ufs1 /nmfs1/ufs1
server_2 : done

EXAMPLE #15
-----------
To mount ufs1, specifying that no file delegation is granted to the NFSv4 client, type:
$ server_mount server_2 -option nfsv4delegation=NONE ufs1 /ufs1
server_2 : done

EXAMPLE #16
-----------
To check the diskmark value for the file system ufs1632_snap1, type:
$ server_mount server_2 -check ufs1632_snap1 /ufs1632_snap1
server_2 :
Error 13423542320: server_2 : The marks on disks rootd17 with file system ufs1632_snap1 are not the same on NAS_DB and the Data Mover.

EXAMPLE #17
-----------
To check if the diskmark for the file system ufs1632_snap1 exists, type:
$ server_mount server_2 -check ufs1632_snap1 /ufs1632_snap1
server_2 :
Error 13423542324: server_2 : The marks on disks rootd17 with file system ufs1632_snap1 cannot be found on the Data Mover.

EXAMPLE #18
-----------
To mount the file system fs105 on the VDM vdm1 to the mount point /fs105, type:
$ server_mount vdm1 -o smbca fs105 /fs105
vdm1 : done

--------------------------------------
Last Modified: November 20, 2012 12:15 pm
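The listing format shown in EXAMPLE #1 ("fs on /path options") lends itself to scripting. The sketch below is illustrative only: the is_mounted helper name is an assumption, and it parses a captured server_mount listing from standard input rather than contacting the array.

```shell
#!/bin/sh
# Sketch: test whether a file system already appears in a captured
# "server_mount <mover>" listing (lines of the form "<fs> on <path> <opts>").
# The helper name is hypothetical; the line format follows EXAMPLE #1.
is_mounted() {
    # $1 = file system name; stdin = server_mount listing
    awk -v fs="$1" '$1 == fs && $2 == "on" { found = 1 } END { exit !found }'
}
```

A wrapper could then make a mount idempotent, for example: `server_mount server_2 | is_mounted ufs1 || server_mount server_2 ufs1 /ufs1`.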
server_mountpoint
-----------------
Manages mount points for the specified Data Movers.

SYNOPSIS
--------
server_mountpoint {
$ server_mountpoint ALL -exist /ufs1
server_2 : /ufs1 : exists
server_3 : /ufs1 : does not exist

EXAMPLE #4
----------
To delete the mount point /ufs1 on server_2, type:
$ server_mountpoint server_2 -delete /ufs1
server_2 : done

--------------------------------------
Last Modified: April 14, 2011 12:50 pm
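The per-mover output of the -exist option (EXAMPLE #3) is regular enough to filter mechanically. The sketch below is a hedged illustration: the helper name is hypothetical, and it operates on a captured "server_mountpoint ALL -exist <path>" listing rather than running the command.

```shell
#!/bin/sh
# Sketch: list the Data Movers on which a mount point exists, given
# captured "server_mountpoint ALL -exist <path>" output, whose lines
# look like "server_2 : /ufs1 : exists" (see EXAMPLE #3).
movers_with_mountpoint() {
    awk -F' : ' '$3 == "exists" { print $1 }'
}
```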
server_mpfs
-----------
Sets up and configures the MPFS protocol.

SYNOPSIS
--------
server_mpfs {
SEE ALSO --------- Using VNX Multi-Path File System, server_setup, and server_mt. EXAMPLE #1 ---------- To set a value for a specified MPFS variable, type: $ server_mpfs server_2 -set threads=32 server_2 :done EXAMPLE #2 ---------- To display the MPFS stats for server_2, type: $ server_mpfs server_2 -Stats server_2 : Server ID=server_2 FMP Threads=32 Max Threads Used=2 FMP Open Files=0 FMP Port=4656 HeartBeat Time Interval=30 EXAMPLE #3 ---------- To reset all variables back to their factory default value, type: $ server_mpfs server_2 -Default server_2 :done EXAMPLE #4 ---------- To check the mount status of a Data Mover, type: $ server_mpfs server_2 -mountstatus server_2 : fs mpfs compatible? reason -- ---------------- ------ no not a ufs file system testing_renaming no volume structure not FMP compatible no not a ufs file system server2_fs1_ckpt no volume structure not FMP compatible mpfs_fs2_lockdb_ckpt_5 no volume structure not FMP compatible mpfs_fs2_lockdb_ckpt_4 no volume structure not FMP compatible mpfs_fs2_lockdb_ckpt_3 no volume structure not FMP compatible mpfs_fs2_lockdb_ckpt_2 no volume structure not FMP compatible mpfs_fs2_lockdb_ckpt_1 no volume structure not FMP compatible mpfs_fs2_lockdb_ckpt_10
no volume structure not FMP compatible mpfs_fs2_lockdb_ckpt_9 no volume structure not FMP compatible mpfs_fs2_lockdb_ckpt_8 no volume structure not FMP compatible mpfs_fs2_lockdb_ckpt_7 no volume structure not FMP compatible no not a ufs file system mpfs_fs2_lockdb_ckpt_6 no volume structure not FMP compatible root_fs_common yes mpfs_fs2 yes mpfs_fs1 mounted server2_fs1 yes root_fs_2 yes EXAMPLE #5 ---------- To add 16 threads for server_2, type: $ server_mpfs server_2 -add 16 server_2 : done EXAMPLE #6 ---------- To delete 16 threads from server_2, type: $ server_mpfs server_2 -delete 16 server_2 : done -------------------------------------- Last Modified: April 14, 2011 01:00 pm
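The -Stats output in EXAMPLE #2 reports the current FMP thread count as "FMP Threads=32", which a script can read before deciding whether to -add or -delete threads. The sketch below is illustrative: the helper name is an assumption, and it parses captured -Stats output from standard input.

```shell
#!/bin/sh
# Sketch: extract the configured FMP thread count from captured
# "server_mpfs <mover> -Stats" output (see EXAMPLE #2), where one
# line reads "FMP Threads=<n>".
fmp_threads() {
    awk -F= '/FMP Threads=/ { print $2 }'
}
```

A caller could compare the result against a target and add the difference with `server_mpfs server_2 -add <n>`, as in EXAMPLE #5.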
server_mt
---------
Manages the magnetic tape drive for the specified Data Mover.

SYNOPSIS
--------
server_mt
To send the rewind command to tape1 (magnetic tape drive) on a Data Mover, type:
$ server_mt server_2 -f tape1 rewind
server_2 : done

------------------------------------------------------
Last modified: May 12, 2011 9:33 am
server_name
-----------
Manages the name for the specified Data Movers.

You must delete all user-defined interconnects configured for a Data Mover before you can rename it using this command. After you rename the Data Mover, you must re-create the source and peer interconnects with the new Data Mover name and then restart any associated replication sessions.

SYNOPSIS
--------
server_name {
server_netstat
--------------
Displays the network statistics for the specified Data Mover.

SYNOPSIS
--------
server_netstat {
tcp *.12345 *.* LISTEN tcp *.5080 *.* LISTEN tcp *.2272 *.* LISTEN tcp *.2271 *.* LISTEN tcp *.2270 *.* LISTEN tcp *.ftp *.* LISTEN tcp *.10000 *.* LISTEN tcp *.4658 *.* LISTEN tcp *.2269 *.* LISTEN tcp *.2268 *.* LISTEN tcp *.nfs *.* LISTEN tcp *.1234 *.* LISTEN tcp *.5033 *.* LISTEN tcp *.8888 *.* LISTEN tcp *.sunrpc *.* LISTEN Proto Local Address ******************* udp *.sunrpc udp *.netbios-ns udp *.netbios-dgm udp *.snmp udp *.router udp *.1024 udp *.1036 udp *.1037 udp *.1038 udp *.1046 udp *.1054 udp *.1065 udp *.1234 udp *.nfs udp *.2268 udp *.4646 udp *.4647 udp *.4658 udp *.9999 udp *.12345 udp *.31491 udp *.38914 EXAMPLE #3 ---------- To display a summary of the state of all physical interfaces, type: $ server_netstat server_2 -i Name Mtu Ibytes Ierror Obytes Oerror PhysAddr **************************************************************************** fxp0 1500 758568220 0 534867239 0 8:0:1b:43:49:9a cge0 9000 18014329 0 7195540 0 8:0:1b:42:46:3 cge1 9000 306495706 0 9984 0 8:0:1b:42:46:4 cge2 9000 0 0 0 0 8:0:1b:42:46:2 cge3 9000 0 0 0 0 8:0:1b:42:46:7 cge4 9000 0 0 0 0 8:0:1b:42:46:5 cge5 9000 0 0 0 0 8:0:1b:42:46:6 EXAMPLE #4 ---------- To display routing table statistics, type: $ server_netstat server_2 -r Destination Gateway Mask Type Proto Interface ****************************************************************************** 0.0.0.0 172.24.102.254 255.255.255.0 DIRECT RIP cge0
128.221.253.0 128.221.253.2 255.255.255.0 DIRECT RIP fxp0 172.24.102.0 172.24.102.237 255.255.255.0 DIRECT RIP cge0 128.221.252.0 128.221.252.2 255.255.255.0 DIRECT RIP fxp0 EXAMPLE #5 ---------- To display the statistics of each protocol, type: $ server_netstat server_2 -s ip: *** 2315636 total packets received 0 bad header checksums 0 with unknown protocol 4 fragments received 0 fragments dropped (dup or out of space) 0 fragments dropped after timeout 4 packets reassembled 2 packets forwarded 13046 packets not forwardable 13046 no routes 2302596 packets delivered 2267772 total packets sent 3 packets fragmented 0 packets not fragmentable 6 fragments created icmp: ***** 162 calls to icmp_error Output histogram: echo reply: 1079145 destination unreachable: 90 echo: 1996 Input histogram: echo reply: 1993 destination unreachable: 162 routing redirect: 0 echo: 1079145 time exceeded: 0 address mask request: 0 1081300 messages received 1081231 messages sent tcp: **** 437648 packets sent 2 data packets retransmitted 0 resets 434138 packets received 212 connection requests 19 connections lingered udp: **** 0 incomplete headers 27048 bad ports 760361 input packets delivered 744999 packets sent EXAMPLE #6 ---------- To display TCP protocol statistics, type: $ server_netstat server_2 -s -p tcp tcp: ****
437690 packets sent 2 data packets retransmitted 0 resets 434195 packets received 212 connection requests 19 connections lingered -------------------------------------- Last Modified: April 14, 2011 6:15 pm
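The interface summary from -i (EXAMPLE #3) carries per-interface error counters that are easy to screen in bulk. The sketch below is illustrative only: the helper name is an assumption, and it reads a captured "server_netstat <mover> -i" listing from standard input.

```shell
#!/bin/sh
# Sketch: print interfaces reporting any input or output errors from
# captured "server_netstat <mover> -i" output. Columns follow EXAMPLE #3:
# Name Mtu Ibytes Ierror Obytes Oerror PhysAddr. The first two lines
# (column header and separator row) are skipped.
ifaces_with_errors() {
    awk 'NR > 2 && ($4 + $6) > 0 { print $1 }'
}
```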
server_nfs
----------
Manages the NFS service, including secure NFS and NFSv4, for the specified Data Movers.

SYNOPSIS
--------
server_nfs {
-user -release {principal=
Displays RPC statistics or displays NFS statistics. [-zero][-rpc] Resets to zero all RPC statistics. [-zero][-nfs] Resets to zero all NFS statistics. SEE ALSO -------- Configuring NFS on VNX and server_kerberos. EXAMPLE #1 ---------- To display the status of the secure NFS service, type: $ server_nfs server_2 -secnfs server_2 : RPCSEC_GSS server stats Credential count: 2 principal: nfs@dm112-cge0.nasdocs.emc.com principal: nfs@dm112-cge0 Total number of user contexts: 1 Current context handle: 3 EXAMPLE #2 ---------- To enable secure NFS service on server_2, type: $ server_nfs server_2 -secnfs -service -start server_2 : done EXAMPLE #3 ---------- To disable secure NFS service on server_2, type: $ server_nfs server_2 -secnfs -service -stop server_2 : done EXAMPLE #4 ---------- To display all secure NFS service instances, type: $ server_nfs server_2 -secnfs -user -list server_2 : RPCSEC_GSS server stats Credential count: 2 principal: nfs@dm112-cge0.nasdocs.emc.com principal: nfs@dm112-cge0 Total number of user contexts: 1 Current context handle: 3 PARTIAL user contexts: Total PARTIAL user contexts: 0 USED user contexts: principal=nfsuser1@NASDOCS.EMC.COM, service=nfs@dm112-cge0.nasdocs.emc.com, handle=3, validity=35914s Total USED user contexts: 1 EXPIRED user contexts:
Total EXPIRED user contexts: 0 EXAMPLE #5 ---------- To display the attributes of an authenticated server as specified by the handle, type: $ server_nfs server_2 -secnfs -user -info handle=3 server_2 : principal: nfsuser1@NASDOCS.EMC.COM service: nfs@dm112-cge0.nasdocs.emc.com handle: 3 validity: 35844s GSS flags: mutl conf intg redy tran credential: uid=1010, inuid=1010, gid=1000 EXAMPLE #6 ---------- To release the authentication context of the user specified by the handle, type: $ server_nfs server_2 -secnfs -user -release handle=3 server_2 : done EXAMPLE #7 ---------- To create a secure NFS service instance, type: $ server_nfs server_2 -secnfs -principal -create nfs1@dm112-cge0.nasdocs.emc.com server_2 : done EXAMPLE #8 ---------- To delete a secure NFS service instance, type: $ server_nfs server_2 -secnfs -principal -delete nfs1@dm112-cge0.nasdocs.emc.com server_2 : done EXAMPLE #9 ---------- To set the mapping provider for the file, type: $ server_nfs server_2 -secnfs -mapper -set -source file server_2 : done EXAMPLE #10 ----------- To set the location of the password database, type: $ server_nfs server_2 -secnfs -mapper -set -passwddb file server_2 : done EXAMPLE #11 ----------- To display the secure NFS mapping service configurations for the local file, type: $ server_nfs server_2 -secnfs -mapper -info server_2 : Current NFS user mapping configuration is: gsscred db = File /.etc/gsscred_db gsscred db version = Dart_v1 passwd db = File
EXAMPLE #12 ----------- To create a new mapping record, type: $ server_nfs server_2 -secnfs -mapper -mapping -create name=nfsuser1 server_2 : done EXAMPLE #13 ----------- To display a list of the mapping records, type: $ server_nfs server_2 -secnfs -mapper -mapping -list server_2 : 0401000B06092A864886F7120102020000001A7365636E66737573657231407374617465732E656D6 32E636F6D 1000 nfsuser1, kerberos_v5 EXAMPLE #14 ----------- To delete a mapping record, type: $ server_nfs server_2 -secnfs -mapper -mapping -delete name=nfsuser1 server_2 : done EXAMPLE #15 ----------- To enable the NFSv4 service on server_2, type: $ server_nfs server_2 -v4 -service -start server_2 : done EXAMPLE #16 ----------- To start the NFSv4 service, type: $ server_nfs {
To display the status of the NFSv4 service and the pNFS service, type: $ server_nfs server_2 -v4 server_2 : -------------- nfsv4 server status --------------- * Service Started * * pNFS service Started * * (yet operating) * -------- NFSv4 Clients -------- Confirmed Clients : 1 UnConfirmed Clients : 0 Number of users : 0 Number of lock owners : 0 Longest List : 0 Shortest List : 0 Greatest depth to date : 0 Average List length : 0.00 Domain Name : Not Defined -------------------------------- --------- NFSv4 State -------- Opens : 4 Locks : 0 Delegations: 4 Layouts : 0 Free : 524280 -------------------------------- -------------------------------------------------- Where: Value Definition Confirmed Clients Active client (ready to work). UnConfirmed Clients Client in the process to establishing context. Number of users To be removed in non-debug images. Longest List To be removed in non-debug images. Shortest List To be removed in non-debug images. Greatest depth to date To be removed in non-debug images. Average List length To be removed in non-debug images. Opens Number of open files. Locks Number of locks being held. Delegations Number of granted delegations. Free To be removed in non-debug images. EXAMPLE #20 ----------- To display all NFSv4 clients, type: $ server_nfs server_2 -v4 -client -list server_2 : ------------ nfsv4 server client list ------------ hostname/ip : Index NFSCLIENT1.nasdocs.emc.com : 0xa5400000 -------------------------------------------------- EXAMPLE #21 ----------- To display the attributes of the NFSv4 client as specified by the index, type: $ server_nfs server_2 -v4 -client -info index=0xa5400000 server_2 : NFSCLIENT1.nasdocs.emc.com : 0xa5400000 user: nfsuser1 : inode# 81 EXAMPLE #22 -----------
To release the client ID of the client specified by the index, type: $ server_nfs server_2 -v4 -client -release index=0xa5400000 server_2 : done EXAMPLE #23 ----------- To disable the NFSv4 service on server_2, type: $ server_nfs server_2 -v4 -service -stop server_2 : done EXAMPLE #24 ----------- To display all NFS statistics, type: $ server_nfs {
v4compound 33645 48.8 0.1 0 v4reserved 0 0.0 0.0 0 v4access 217 0.3 0.0 0 v4close 44 0.1 0.0 0 v4commit 0 0.0 0.0 0 v4create 0 0.0 0.0 0 v4delegPrg 0 0.0 0.0 0 v4delegRet 30 0.0 0.0 0 v4getAttr 858 1.2 0.1 0 v4getFh 220 0.3 0.0 0 v4link 0 0.0 0.0 0 v4lock 0 0.0 0.0 0 v4lockT 0 0.0 0.0 0 v4lockU 0 0.0 0.0 0 v4lookup 171 0.2 0.0 37 v4lookupp 0 0.0 0.0 0 v4nVerify 0 0.0 0.0 0 v4open 48 0.1 8.2 37 v4openAttr 0 0.0 0.0 0 v4open_Conf 5 0.0 0.0 0 v4open_DG 0 0.0 0.0 0 v4putFh 1305 1.9 0.0 0 v4putpubFh 0 0.0 0.0 0 v4putrootFh3 0.0 0.0 0 v4read 1 0.0 0.0 0 v4readDir 21 0.0 0.6 0 v4readLink 0 0.0 0.0 0 v4remove 30 0.0 2.9 0 v4rename 2 0.0 0.0 0 v4renew 32335 46.9 0.0 2 v4restoreFh 0 0.0 0.0 0 v4saveFh 2 0.0 0.0 0 v4secInfo 0 0.0 0.0 0 v4setAttr 39 0.1 0.7 0 v4setClntid 2 0.0 0.0 0 v4clntid_Conf 2 0.0 0.0 0 v4verify 0 0.0 0.0 0 v4write 24 0.0 5.7 0 v4rel_Lockown 0 0.0 0.0 0 v4backChanCtl 0 0.0 0.0 0 v4bindConn 0 0.0 0.0 0 v4exchangeId 0 0.0 0.0 0 v4createSess 0 0.0 0.0 0 v4destroySess 0 0.0 0.0 0 v4freeStateid 0 0.0 0.0 0 v4getDirDeleg 0 0.0 0.0 0 v4getDevInfo 0 0.0 0.0 0 v4getDevList 0 0.0 0.0 0 v4layoutCmmt 0 0.0 0.0 0 v4layoutGet 0 0.0 0.0 0 v4layoutRet 0 0.0 0.0 0 v4secinfoNoName 0 0.0 0.0 0 v4sequence 0 0.0 0.0 0 v4setSsv 0 0.0 0.0 0 v4testStateid 0 0.0 0.0 0 v4wantDeleg 0 0.0 0.0 0 v4destroyClid 0 0.0 0.0 0 v4reclaimCmpl 0 0.0 0.0 0 v4illegal 0 0.0 0.0 0 Server lookupcache: nHit nFind nNegadd nChecked 39459 46408 21 39459 Server rpc: ncalls nBadRpcData nDuplicates nResends nBadAuths 822126 Where: Value Definition ncalls Number of calls per NFS operation.
%totcalls Percentage of calls per operation out of total NFS calls received. ms/calls Average time taken for the NFS operations. failures Number of NFS failures per NFS operation. nHit Directory name lookup cache hits. nFind Directory name lookup cache operations. nNegadd Number of negative entries added to the Directory name lookup cache. nChecked Directory name lookup cache entries searched. nBadRpcData Calls with bad RPC header. nDuplicate Calls with duplicate XID. nResends Number of RPC replies resent. nBadAuths Number of replies failing RPC authentication. EXAMPLE #25 ----------- To display RPC statistics, type: $ server_nfs server_2 -stats -rpc server_2 : Server rpc: ncalls nBadRpcData nDuplicates nResends nBadAuths 822155 0 0 0 0 EXAMPLE #26 ----------- To reset statistics counters, type: $ server_nfs {
server_nis
----------
Manages the Network Information Service (NIS) configuration for the specified Data Movers.

SYNOPSIS
--------
server_nis {
and when performing a basic query testing. SEE ALSO -------- Configuring VNX Naming Services and server_dns. EXAMPLE #1 ---------- To provide connectivity to the NIS lookup server for the specified domain, type: $ server_nis server_2 nasdocs 172.24.102.30 server_2 : done EXAMPLE #2 ---------- To query NIS lookup servers using both a hostname and IP address, type: $ server_nis server_2 -query test40,172.24.102.36,test44 server_2 : test40 = 172.24.102.30 test46 = 172.24.102.36 test44 = 172.24.102.34 EXAMPLE #3 ---------- To display the NIS configuration, type: $ server_nis server_2 server_2 : yp domain=nasdocs server=172.24.102.30 EXAMPLE #4 ---------- To display the status of the NIS lookup servers, type: $ server_nis server_2 -status server_2 : NIS default domain: nasdocs NIS server 172.24.102.30 If NIS was not started, the output of this command will appear as: $ server_nis server_2 -status server_2 : NIS not started EXAMPLE #5 ---------- To delete all of the NIS lookup servers for a Data Mover, type: $ server_nis server_2 -delete server_2 : done EXAMPLE #6 ---------- To configure the first domain, type: $ server_nis server_2 emclab 192.168.67.11 server_2 : done $ server_nis server_2 server_2 : yp domain=emclab server=192.168.67.11
EXAMPLE #7
----------
To configure the second domain, type:
$ server_nis server_2 -add eng 192.168.67.13
server_2 : done

Note: This operation requires the use of the -add option; otherwise, the first domain is overwritten.

EXAMPLE #8
----------
To query for the current configuration, type:
$ server_nis server_2
server_2 :
yp domain=emclab server=192.168.67.11
yp domain=emceng server=192.168.67.13

EXAMPLE #9
----------
To query the status of all domains, type:
$ server_nis server_2 -status -all
server_2 :
NIS emclab context (5):
Servers:
192.168.67.11 Online (current server)
RPC failure
NIS eng context (4):
Servers:
192.168.67.13 RPC failure (current server)

EXAMPLE #10
-----------
To delete a domain, type:
$ server_nis server_2 -delete emclab
server_2 : done

Note: Once multiple domains are configured, the delete operation requires specifying the domain name.

EXAMPLE #11
-----------
To perform a single query on a particular domain, type:
$ server_nis server_2 -query emclab bbvm
server_2 :
bbvm = 192.168.67.237

---------------------------------------------------------
Last modified: October 24, 2011 10:40 a.m.
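The -query output shown in EXAMPLE #2 ("name = address" lines) can be post-processed when a script needs just one address. The sketch below is illustrative: the helper name is an assumption, and it filters a captured "server_nis <mover> -query <names>" listing from standard input.

```shell
#!/bin/sh
# Sketch: pull one host's address out of captured
# "server_nis <mover> -query <names>" output, whose result lines
# look like "test40 = 172.24.102.30" (see EXAMPLE #2).
nis_lookup() {
    # $1 = hostname to look up; stdin = captured query output
    awk -v h="$1" '$1 == h && $2 == "=" { print $3 }'
}
```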
server_nsdomains
----------------
Manages the domain configuration per Data Mover or VDM.

SYNOPSIS
--------
server_nsdomains
$ server_nsdomains vdm1 -enable
vdm1 : done

Note: By default, no domain is configured. With such a configuration, any NIS, LDAP, or DNS query will fail. Only local host resolution works, assuming a hosts file exists in the physical Data Mover or VDM root file system.

$ server_nsdomains vdm1
vdm1 :
NSDOMAINS CONFIGURATION = Enabled
NIS domain :
server_param
------------
Manages parameter information for the specified Data Movers.

SYNOPSIS
--------
server_param {
dns          Domain Name Service
ds           Domain Controller service for CIFS
fcTach       Agilent Fibre Channel Controller
file         Overall file system parameters
filesystem   File system
ftpd         File Transfer Protocol Daemon
http         Hypertext Transfer Protocol
ip           Internet Protocol
iscsi        Internet SCSI Protocol
kernel       THREADs deadlock detection
ldap         Lightweight Directory Access Protocol
lockd        Network Lock Manager
lockmgr      CFS Lock Manager
mount        NFS Mount Protocol
nbs          Network Block Service Protocol
nfs          Network File System
nfsv4        NFS version 4 protocol
quota        File system quota management
security     Security/Credential parameters
shadow       Cross Protocol naming support
ssl          SSL security network protocol
statd        Host status daemon
statmon      Statistics Framework
streamio     Streaming tape I/O support
tcp          Transmission Control Protocol
tftp         Trivial File Transfer Protocol Daemon
trunk        Network trunking support
ufs          Dart native file system
usrmap       User name mapping support
vbb          Volume Based Backup
vdevice      Virtual IP Device Parameters
viruschk     Virus checking service

Where:
Value Definition
facility Facility for the parameter.
description Description of the facility.

EXAMPLE #2
----------
To view the PAX parameters that can be modified, type:
$ server_param server_2 -facility PAX -list
server_2 :
param_name             facility  default  current  configured
checkUtf8Filenames     PAX       1        1
dump                   PAX       0        0
nPrefetch              PAX       8        8
nThread                PAX       64       64
writeToArch            PAX       1        1
paxReadBuff            PAX       64       64
writeToTape            PAX       1        1
filter.numDirFilter    PAX       5        5
paxWriteBuff           PAX       64       64
filter.numFileFilter   PAX       5        5
filter.dialect         PAX
nFTSThreads            PAX       8        8
paxStatBuff            PAX       128      128
readWriteBlockSizeInKB PAX       64       64
nRestore               PAX       8        8
filter.caseSensitive   PAX       1        1
scanOnRestore          PAX       1        1
noFileStreams          PAX       0        0
allowVLCRestoreToUFS   PAX       0        0

Where:
Value Definition
param_name Name of the parameters with the specified facility that can be
modified.
facility Facility for the parameters.
default Default value for the parameter.
current Current value used by the Data Mover.
configured Value set by the user. If some user action is pending (such as a Data Mover reboot), it might not have taken effect. If the values for current and configured differ, refer to the user_action field of the -info output.

EXAMPLE #3
----------
To view information on the nThread parameter, type:
$ server_param server_2 -facility PAX -info nThread
server_2 :
name = nThread
facility_name = PAX
default_value = 64
current_value = 64
configured_value =
user_action = none
change_effective = immediate
range = (1,128)
description = Number of worker threads per backup session

Where:
Value Definition
facility_name Facility for the parameter.
default_value Default value set for the parameter.
current_value Value set on the Data Mover.
configured_value Value set by the user. If some user action is pending (such as a Data Mover reboot), it might not have taken effect.
user_action Action necessary for the parameter to take effect.
change_effective States when the change will be effective.
range Range of possible parameter values.
description Description of what the parameter does.

EXAMPLE #4
----------
To modify the configured nThread parameter, type:
$ server_param server_2 -facility PAX -modify nThread -value 32
server_2 : done

EXAMPLE #5
----------
To modify the configured cipher parameter, type:
$ server_param server_2 -facility ssl -modify cipher -value foobar
server_2 : done
Warning 17716815750: server_2 : You must reboot server_2 for cipher changes to take effect.

To verify the configured cipher parameter, restart the Data Mover and type:
$ server_param server_2 -facility ssl -info cipher
server_2 :
name = cipher
facility_name = ssl
default_value = ALL:!ADH:!SSLv2:@STRENGTH
current_value = ALL:!ADH:!SSLv2:@STRENGTH
configured_value = foobar
user_action = reboot DataMover change_effective = reboot DataMover range = * description = Keyword specifying the default supported SSL cipher suites (e.g: ALL:!LOW:@STRENGTH) Note: If the current_value and configured_value parameters differ and if the user_action and change_effective parameters display the text reboot Data Mover, restart the Data Mover. After restarting the Data Mover, if the current_value and configured_value parameters continue to differ, it indicates that the Data Mover encountered an error after it was restarted. Check the server_log output to view the error reported. To view the server_log command output file, type: $ server_log server_2 | grep param ... 2009-08-25 12:20:59: ADMIN: 3: Command failed: param ssl cipher=foobar ... EXAMPLE #6 ---------- To view the values of the NDMP port ranges on the Data Mover server_2, type: $ server_param server_2 -facility NDMP -info portRange server_2 : name = portRange facility_name = NDMP default_value = 1024-65535 current_value = 1024-65535 configured_value = user_action = none change_effective = immediate range = 1024-65535 description = Port range for NDMP data connection listening EXAMPLE #7 ---------- To set the values of the NDMP port ranges on the Data Mover server_2, type: $ server_param server_2 -facility NDMP -modify portRange -value 50000-50100 server_2 : done EXAMPLE #8 ---------- To display the parameters for the SSL facility, type: $ server_param server_2 -facility ssl -info -all server_2 : name = trace facility_name = ssl default_value = 0x00000000 current_value = 0x00000000 configured_value = user_action = none change_effective = immediate range = (0x00000000,0xffffffff) description = Define SSL traces displayed in the server log name = timeout facility_name = ssl default_value = 5 current_value = 5 configured_value = user_action = reboot DataMover change_effective = reboot DataMover range = (1,120) description = Timeout (in seconds) used to receive SSL packets
from network during SSL handshake name = protocol facility_name = ssl default_value = 0 current_value = 0 configured_value = user_action = reboot DataMover change_effective = reboot DataMover range = (0,2) description = Set the default ssl protocol. Possible values are: 0=all ssl/tls protocol are allowed, 1=only sslv3 is allowed, 2=only tlsv1 is allowed name = threads facility_name = ssl default_value = 10 current_value = 10 configured_value = user_action = reboot DataMover change_effective = reboot DataMover range = (4,30) description = Number of SSL threads name = cipher facility_name = ssl default_value = ALL:!ADH:!SSLv2:@STRENGTH current_value = ALL:!ADH:!SSLv2:@STRENGTH configured_value = user_action = none change_effective = reboot DataMover range = * description = Keyword specifying the default supported SSL cipher suites (e.g: ALL:!LOW:@STRENGTH) EXAMPLE #9 ---------- To display the default SSL parameters on server_2, type: $ server_param server_2 -facility ssl -list server_2 : param_name facility default current configured trace ssl 0x00000000 0x00000000 timeout ssl 5 5 protocol ssl 0 0 threads ssl 10 10 cipher ssl ALL:!ADH:!SSLv2:@STRENGTH EXAMPLE #10 ----------- To modify the SSL dedicated threads to 20, type: $ server_param server_2 -facility ssl -modify threads -value 20 server_2 : done Warning 17716815750: server_2 : You must reboot server_2 for threads changes to take effect. EXAMPLE #11 ----------- To modify the default cipher suite to all (except low-security algorithms and MD5), type: $ server_param server_2 -facility ssl -modify cipher -value ALL:!LOW:!MD5:@STRENGTH server_2 : done Warning 17716815750: server_2 : You must reboot server_2 for cipher changes to take effect.
EXAMPLE #12 ----------- To display the default ftpd parameters, type: # server_param server_2 -facility ftpd -list server_2 : param_name facility default current configured shortpathdir ftpd 0 0 defaultdir ftpd / / wildcharsInDir ftpd 0 0 bounceAttackChk ftpd 1 1 EXAMPLE #13 ----------- To display the parameters for the ftpd facility, type: $ server_param server_2 -facility ftpd -info -all server_2 : name = shortpathdir facility_name = ftpd default_value = 0 current_value = 0 configured_value = user_action = none change_effective = immediate range = (0,1) description = Enable return file name instead of full pathname in DIR command name = defaultdir facility_name = ftpd default_value = / current_value = / configured_value = user_action = none change_effective = immediate range = * description = Sets the default working directory for FTP name = wildcharsInDir facility_name = ftpd default_value = 0 current_value = 0 configured_value = user_action = none change_effective = immediate range = (0,1) description = Enable wild characters for directory names name = bounceAttackChk facility_name = ftpd default_value = 1 current_value = 1 configured_value = user_action = none change_effective = immediate range = (0,1) description = Enable bounce attack check EXAMPLE #14 ----------- To display the detailed description of the shortpathdir parameter for the ftpd facility, type: $ server_param server_2 -facility ftpd -info shortpathdir -verbose
server_2 :
name = shortpathdir
facility_name = ftpd
default_value = 0
current_value = 0
configured_value =
user_action = none
change_effective = immediate
range = (0,1)
description = Enable return file name instead of full pathname in DIR command
detailed_description = Enable (1) or disable (0) returning the file name instead of the full pathname in the dir or ls commands. If wildcard characters are used, this parameter is ineffective.

--------------------------------------------------------------------------
Last Modified: December 14, 2011 12:40 p.m.
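As the -list examples above show, a parameter whose configured value differs from its current value is waiting on a user action such as a Data Mover reboot. The sketch below is a hedged illustration: the helper name is an assumption, and it scans a captured "server_param <mover> -facility <f> -list" listing rather than querying the array.

```shell
#!/bin/sh
# Sketch: from captured "server_param <mover> -facility <f> -list" output,
# print parameters whose configured value is set and differs from the
# current value (i.e., a change is still pending). Assumes the
# five-column layout shown in EXAMPLE #2 (param_name facility default
# current configured); the first two lines (server name and column
# header) are skipped, and rows with an empty configured column have
# only four fields.
pending_params() {
    awk 'NR > 2 && NF == 5 && $4 != $5 { print $1 }'
}
```

A pending parameter found this way points at the user_action field of `-info` for the exact step required.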
server_pax Displays and resets backup and restore statistics and file system information for a backup session already in progress. SYNOPSIS -------- server_pax {
32KB+1 -- 64KB size file processed: 0 64KB+1 -- 1MB size file processed: 0 1MB+1 -- 32MB size file processed: 724 32MB+1 -- 1GB size file processed: 0 1G more size file processed: 0 fs /16m_ok_1_0 size is: 120855445504 Bytes Estimated time remain is 1524 sec nasa01 is not doing backup/restore nasa02 is not doing backup/restore nasa03 is not doing backup/restore ---- NASW STATS ---- nasw00 RESTORE (in progress) Session Total Time: 00:02:50 (h:min:sec) Session Idle Time: 00:00:56 (h:min:sec) KB Tranferred: 11858820 Block Size: 61440 (60 KB) Average Transfer Rate: 68 MB/Sec 239 GB/Hour Average Burst Transfer: 101 MB/Sec 357 GB/Hour __Point-in-Time__ (over the last 10 seconds): Rate=69 MB/Sec Burst=96 MB/Sec Idle=283 msec/sec Get Pool: 17 buffers Put Pool: 29 buffers Compression Page not available ReadC=0.00 WriteC=0.00 Read=0 KB Written=0 KB nasw01 BACKUP (terminated) nasw02 BACKUP (terminated) nasw03 BACKUP (terminated) Value Definition NASS STATS Thread responsible for traversing the file system and providing metadata for each directory and/or file. Total file processed Total number of files and/or directories for which metadata was processed. Total NASS wait NASA count The number of times NASS waited for NASA. Total NASS wait NASA time Amount of time NASS waited for NASA. Total time since last reset Time since the last reset; a reset occurs automatically when a backup completes. fts_build time Time spent building the file system or directory tree. getstatpool If the value is consistently zero, then NASA may be slowing down the backup. putstatpool If the value is consistently zero, then NASS may be slowing down the backup. NASA STATS Thread responsible for writing file header information, reading file data, and writing to the buffer. Backup root directory Directory being backed up. Total bytes processed Bytes backed up since the last reset or start of the current backup. Total file processed Number of files backed up since the start or reset of the current backup. 
Throughput How fast NASA processed data. Average file size Average file size for the current backup. Total nasa wait nass count Number of times NASA waited for NASS. Total nasa wait nass time Amount of time NASA waited for NASS. Total time since last reset Amount of time since the backup statistics were reset; a reset occurs automatically when a backup completes. Tape device name Target device for the backup data. File size statistics Statistics on the size of files backed up since the start or reset of the current backup. NASW STATS Thread responsible for getting data from the buffer pool, writing it to tape or sending it to a remote Data Mover. Session total time Total time of the current session. Session idle time Idle time for the current session. KB transferred Total KB transferred. Average transfer rate Per second and per hour transfer rate for the current session's data.
Average burst transfer Burst transfer rate in MB/s and GB/hour. Write block counters (List/Direct) Scatter/gather write count. Point-in-Time (over the last 10 seconds) Information on data processed during a 10-second interval. Rate Transfer rate in MB/s. Burst Burst transfer rate in MB/s. Idle Amount of time NASW was idle in msec. Get pool Number of buffers in get pool; if value is consistently 0, then NASA and NASS may be slowing down the backup. Put pool Number of buffers in put pool; if value is consistently 0, then the tape may be slowing down the backup. Compression rate retrieved Compression rate. ReadC Read compression rate at the tape device. WriteC Write compression rate at the tape device. Read Amount of data read in KB. Written Amount of data written in KB. EXAMPLE #3 ---------- To view the verbose statistics for an active NDMP restore session on server_2, type: $ server_pax server_2 -stats -verbose server_2 : ************** SUMMARY PAX STATS **************** ---- NASS STATS ---- nass00 is not doing backup nass01 is not doing backup nass02 is not doing backup nass03 is not doing backup ---- NASA STATS ---- ** nasa thid 0 (non-DAR RESTORE) ** The first five entries of restore name list are: original name: /filt, destination name /ufsvbbr/r_filter_pax Total bytes processed: 172326912 Total file processed: 42 throughput: 7 MB/sec average file size: 4006KB Total nasa wait nass count: 0 Total nasa wait nass time: 0 msec Total time since last reset: 21 sec Tape device name: c0t0l1 dir or 0 size file processed: 17 1 -- 8KB size file processed: 6 8KB+1 -- 16KB size file processed: 18 16KB+1 -- 32KB size file processed: 0 32KB+1 -- 64KB size file processed: 0 64KB+1 -- 1MB size file processed: 1 1MB+1 -- 32MB size file processed: 0 32MB+1 -- 1GB size file processed: 0 1G more size file processed: 0 nasa01 is not doing backup/restore nasa02 is not doing backup/restore nasa03 is not doing backup/restore ---- NASW STATS ---- nasw00 RESTORE (in progress) Session Total 
Time: 00:00:21 (h:min:sec) Session Idle Time: 00:00:00 (h:min:sec) KB Tranferred: 168384 Block Size: 32768 (32 KB) Average Transfer Rate: 7 MB/Sec 27 GB/Hour Average Burst Transfer: 7 MB/Sec 27 GB/Hour __Point-in-Time__ (over the last 10 seconds): Rate=6 MB/Sec Burst=7 MB/Sec Idle=0 msec/sec Get Pool: 61 buffers Put Pool: 0 buffers
nasw01 No session found nasw02 No session found nasw03 No session found -------------------------------------- Last Modified: April 19, 2011 5:50 pm
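The NASW "Average Transfer Rate" figures shown above appear to be simple arithmetic over the session: KB transferred divided by session total time, expressed in whole MB/s and GB/hour. A sketch of that derivation (assuming truncation to whole units, which matches both sample outputs):

```python
def avg_rates(kb_transferred, total_seconds):
    """Reproduce the NASW 'Average Transfer Rate' line: KB transferred
    over session total time, truncated to whole MB/s and GB/hour."""
    kb_per_sec = kb_transferred / total_seconds
    mb_per_sec = int(kb_per_sec / 1024)
    gb_per_hour = int(kb_per_sec * 3600 / (1024 * 1024))
    return mb_per_sec, gb_per_hour

# Example #1 session: 11858820 KB over 00:02:50 (170 s)
print(avg_rates(11858820, 170))  # → (68, 239), matching "68 MB/Sec 239 GB/Hour"
# Example #3 session: 168384 KB over 00:00:21 (21 s)
print(avg_rates(168384, 21))     # → (7, 27), matching "7 MB/Sec 27 GB/Hour"
```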
server_ping Checks the network connectivity for the specified Data Movers. SYNOPSIS --------- server_ping {
Error 6: server_2 : No such device or address no answer from 172.24.102.5 EXAMPLE #2 ---------- To display connectivity for a Data Mover to the outside world while sending continuous ECHO_REQUEST messages, type: $ server_ping server_2 -send 172.24.102.2 server_2 : 172.24.102.2 is alive, time= 0 ms 172.24.102.2 is alive, time= 0 ms 172.24.102.2 is alive, time= 0 ms 172.24.102.2 is alive, time= 0 ms 172.24.102.2 is alive, time= 0 ms 172.24.102.2 is alive, time= 0 ms 172.24.102.2 is alive, time= 0 ms 172.24.102.2 is alive, time= 0 ms 172.24.102.2 is alive, time= 0 ms 172.24.102.2 is alive, time= 3 ms 172.24.102.2 is alive, time= 0 ms 172.24.102.2 is alive, time= 0 ms 172.24.102.2 is alive, time= 0 ms EXAMPLE #3 ---------- To display connectivity from a Data Mover to the outside world using the specified interface, type: $ server_ping server_2 -interface cge0 172.24.102.2 server_2 : 172.24.102.2 is alive, time= 0 ms ----------------------------------------------------------------- Last modified: April 18, 2011 2:00 pm
server_ping6 Checks the IPv6 network connectivity for the specified Data Movers. SYNOPSIS -------- server_ping6 {
EXAMPLE #2 ---------- To ping link-local address fe80::260:16ff:fe0c:205%cge0_0000_ll, type: $ server_ping6 server_2 fe80::260:16ff:fe0c:205%cge0_0000_ll server_2 : fe80::260:16ff:fe0c:205%cge0_0000_ll is alive, time= 0 ms or $ server_ping6 server_2 fe80::260:16ff:fe0c:205%cge0_0000_ll server_2 : Error 6: server_2 : No such device or address no answer from client EXAMPLE #3 ---------- To ping multicast address ff02::1%cge0_0000_ll, type: $ server_ping6 server_2 ff02::1%cge0_0000_ll server_2 : ff02::1%cge0_0000_ll is alive, time= 0 ms or $ server_ping6 server_2 ff02::1%cge0_0000_ll server_2 : Error 6: server_2 : No such device or address no answer from client. ------------------------------------------------ Last modified: April 18, 2011 1:15 pm.
server_rip Manages the Routing Information Protocol (RIP) configuration for the specified Data Movers. SYNOPSIS -------- server_rip {
server_route Manages the routing table for the specified Data Movers. SYNOPSIS -------- server_route {
-------- Configuring and Managing Networking on VNX, server_netstat, and server_ifconfig. EXAMPLE #1 ---------- To list the routing table for server_2, type: $ server_route server_2 -list server_2 : net 128.221.253.0 128.221.253.2 255.255.255.0 el31 net 128.221.252.0 128.221.252.2 255.255.255.0 el30 net 172.24.102.0 172.24.102.238 255.255.255.0 cge0 host 127.0.0.1 127.0.0.1 255.255.255.255 loop Where: Each entry lists the route type (net or host), the destination, the gateway, the netmask, and the interface.
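Each routing table entry pairs a type (net or host), a destination, a gateway, a netmask, and an interface; as usual for IP routing, the most specific (longest-prefix) matching entry wins. A minimal sketch of that lookup over the sample table, using Python's standard ipaddress module:

```python
import ipaddress

# Sample routing table from Example #1: (destination/netmask, gateway, interface)
routes = [
    ("128.221.253.0/255.255.255.0", "128.221.253.2", "el31"),
    ("128.221.252.0/255.255.255.0", "128.221.252.2", "el30"),
    ("172.24.102.0/255.255.255.0", "172.24.102.238", "cge0"),
    ("127.0.0.1/255.255.255.255", "127.0.0.1", "loop"),
]

def lookup(dest):
    """Return (gateway, interface) for the most specific matching route."""
    dest = ipaddress.ip_address(dest)
    best = None
    for net, gw, ifc in routes:
        net = ipaddress.ip_network(net)
        if dest in net and (best is None or net.prefixlen > best[0]):
            best = (net.prefixlen, gw, ifc)
    return best[1:] if best else None

print(lookup("172.24.102.5"))  # → ('172.24.102.238', 'cge0')
```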
server_security Manages the security policy settings for the specified Data Movers. The VNX provides support for Group Policy Objects (GPOs) by retrieving and storing a copy of the GPO settings for each CIFS server joined to a Windows domain. SYNOPSIS -------- server_security {
$ server_security server_2 -info -policy gpo server_2 : Server compname: dm102-cge0 Server NetBIOS: DM102-CGE0 Domain: nasdocs.emc.com Kerberos Max Clock Skew (minutes): 5 LAN Manager Auth Level: Not defined Digitally sign client communications (always): Not defined Digitally sign client communications (if server agrees): Not defined Digitally sign server communications (always): Not defined Digitally sign server communications (if client agrees): Not defined Send unencrypted password to connect to third-party SMB servers: Not defined Disable machine account password changes: Not defined Maximum machine account password age: Not defined Audit account logon events: Not defined Audit account management: Not defined Audit directory service access: Not defined Audit logon events: Not defined Audit object access: Not defined Audit policy change: Not defined Audit privilege use: Not defined Audit process tracking: Not defined Audit system events: Not defined Back up files and directories: Not defined Restore files and directories: Not defined Bypass traverse checking: Not defined Generate security audits: Not defined Manage auditing and security log: Not defined Access this computer from the network: Not defined Deny access to this computer from the network: Not defined Take ownership of files or other objects: Not defined EMC Virus Checking: Not defined Maximum security log size: Not defined Restrict guest access to security log: Not defined Retention period for security log: Not defined Retention method for security log: Not defined Maximum system log size: Not defined Restrict guest access to system log: Not defined Retention period for system log: Not defined Retention method for system log: Not defined Maximum application log size: Not defined Restrict guest access to application log: Not defined Retention period for application log: Not defined Retention method for application log: Not defined Disable background refresh of Group Policy: Not defined Group Policy 
Refresh interval (minutes): 90 Refresh interval offset (minutes): Not defined GPO Last Update time (local): Thu Dec 1 13:49:08 EST 2005 GPO Next Update time (local): Thu Dec 1 15:19:08 EST 2005 EXAMPLE #2 ---------- To add a new CHAP security for client1, type: $ server_security server_2 -add -policy chap -name client1 server_2 : Enter Secret:**** done EXAMPLE #3 ---------- To display CHAP information for client1, type: $ server_security server_2 -info -policy chap -name client1 server_2 : chapdb name=client1 pass=******** EXAMPLE #4 ----------
To update the GPO settings for the CIFS server, type: $ server_security server_2 -update -policy gpo server=dm32-cge0 server_2 : done EXAMPLE #5 ---------- To modify a password for client1, type: $ server_security server_2 -modify -policy chap -name client1 server_2 : Enter New Secret:**** done EXAMPLE #6 ---------- To delete CHAP security for client1, type: $ server_security server_2 -delete -policy chap -name client1 server_2 : done -------------------------------------- Last Modified: April 20, 2011 1:35 pm
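In Example #1 of this command, the GPO next-update time is simply the last-update time plus the Group Policy refresh interval (90 minutes here). The arithmetic, as a quick check:

```python
from datetime import datetime, timedelta

refresh_interval = timedelta(minutes=90)         # "Refresh interval (minutes): 90"
last_update = datetime(2005, 12, 1, 13, 49, 8)   # GPO Last Update: Thu Dec 1 13:49:08 EST 2005
next_update = last_update + refresh_interval
print(next_update.strftime("%a %b %d %H:%M:%S"))  # → Thu Dec 01 15:19:08
```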
server_setup Manages the type and protocol component for the specified Data Movers. SYNOPSIS -------- server_setup {
Note: The [,comment=
To load a new image onto a Data Mover, type: $ server_setup server_2 -Load nas.exe server_2: will load : nas.exe -------------------------------------- Last Modified: April 20, 2011 3:30 pm
server_snmpd Manages the Simple Network Management Protocol (SNMP) configuration values for the specified Data Movers. SYNOPSIS -------- server_snmpd {
-modify
$ server_snmpd server_2 -modify -location "RTP, NC" -contact "Anamika Kadian" -community public server_2: OK EXAMPLE #7 ---------- To clear the community value on Data Mover server_2, type: $ server_snmpd server_2 -modify -community -clear server_2: OK EXAMPLE #8 ---------- To display the list of SNMPv3 users on all Data Movers, type: $ server_snmpd ALL -user -list server_2: user gsmith smith server_3: user clowe EXAMPLE #9 ---------- To create a new user John, on Data Mover server_2, type: $ server_snmpd server_2 -user -create john -authpw -privpw Enter the authentication password:********* Confirm the authentication password:********* Enter the privacy password:********* Confirm the privacy password:********* server_2: OK EXAMPLE #10 ----------- To delete the user John, on Data Mover server_2, type: $ server_snmpd server_2 -user -delete John server_2: OK EXAMPLE #11 ----------- To modify the passwords of the user John, on Data Mover server_2, type: $ server_snmpd server_2 -user -modify John -authpw -privpw Enter the authentication password:********* Confirm the authentication password:********* Enter the privacy password:********* Confirm the privacy password:********* server_2: OK ----------------------------------------------------------------------- Last modified: April 20, 2011 at 4:00 pm
server_ssh Manages and configures the SSH server on the specified Data Mover. SYNOPSIS -------- server_ssh serverX -info | -start | -stop | -modify { -banner
Modifies some configuration parameters of the SSH server. The arguments are: [-banner
Specifies the available Message Authentication Code or MAC algorithms to guarantee the integrity of the SSH packets on the network. The default value is undefined, which means all these algorithms are allowed. -maxauthtries
If no allowed group is defined, then all groups are allowed to connect by default. The list of allowed groups is saved in the SSH configuration file on the Data Mover. Duplicate allowed group names are prohibited. The Data Mover can save up to 256 different allowed groups. -allowusers
Defines a new user that is not allowed to connect through SSH to the Data Mover. "user" should be the name of the user; numerical user IDs are ignored. If the user is a CIFS user, the format should be user@domain or domain\user. If specified, SSH connections are disallowed for user names that match one of the patterns; that is, any user listed as a denied user systematically receives an access denied error. If no denied user is defined, then all users are allowed to connect. This is the default. The list of denied users is saved in the SSH configuration file on the DART. Duplicate denied user names are prohibited. The Data Mover can save up to 256 different denied users. -remove Removes an allowed/denied group/user from the current configuration. The arguments are: -allowhosts
-delete: Deletes all the generated host keys of the Data Mover. This command is useful if the administrator needs to generate new host keys. The options are: -type {rsa|dsa} : This argument specifies the type of key to delete. The two valid types are: rsa and dsa. If not specified, both key types are deleted. GENERAL NOTES ------------- * The allow or deny directives are processed in the following order: denyhost, allowhost, denyusers, allowusers, denygroups and finally allowgroups. * The allow or deny directives can specify multiple items separated by commas, without spaces. * It is recommended to enclose IPv6 addresses in square brackets ([ ]). * The host keys are generated automatically when the SSH server is started for the very first time if no host key exists. In that case, both RSA and DSA keys are generated with their default sizes. * VDMs are not supported by the server_ssh command. EXAMPLE #1 ---------- To display the current configuration, type: $ server_ssh server_2 -info server_2 : done SERVICE CONFIGURATION Port : 22 State : running Thread count : 4 Banner : /server2fs1/banner.txt Default home directory : / Restrict home directory : disabled Application : sftp,scp Cipher :
Banner : /fs40/banner.txt Default home directory : / Restrict home directory : disabled Application : sftp,scp Cipher :
---------- To modify the timeout to five minutes, type: $ server_ssh server_2 -modify -timeout 300 server_2 : done EXAMPLE #9 ---------- To add a new allowed user defined in NIS or LDAP, type: $ server_ssh server_2 -append -allowusers john server_2 : done EXAMPLE #10 ----------- To add a new allowed user defined in the dom10 Windows domain, type: $ server_ssh server_2 -append -allowusers dom10\\cindy server_2 : done EXAMPLE #11 ----------- To add a new allowed group of users, type: $ server_ssh server_2 -append -allowgroups admin server_2 : done EXAMPLE #12 ----------- To add a new allowed client IP, type: $ server_ssh server_2 -append -allowhosts 110.171.1.10 server_2 : done EXAMPLE #13 ----------- To add new allowed client hosts using their subnet, type: $ server_ssh server_2 -append -allowhosts 110.121.0.0/16 server_2 : done EXAMPLE #14 ----------- To add a new denied user, type: $ server_ssh server_2 -append -denyusers john server_2 : done EXAMPLE #15 ----------- To add a new denied group of users, type: $ server_ssh server_2 -append -denygroups guest server_2 : done
EXAMPLE #16 ----------- To add a new denied client IP, type: $ server_ssh server_2 -append -denyhosts 110.171.1.54 server_2 : done EXAMPLE #17 ----------- To generate a new host key for the Data Mover, type: $ server_ssh server_2 -generate server_2 : done Note: This operation may take a long time. The SSH server must be stopped, and the Data Mover must not have existing host keys. EXAMPLE #18 ----------- To delete the existing host keys of the Data Mover, type: $ server_ssh server_2 -delete server_2 : done The SSH server must be stopped. ERROR CASE #1 ------------- To change the banner file to a non-existing file, type: $ server_ssh server_2 -modify -banner foo server_2 : Error 13163823109: server_2 : Invalid SSH configuration: Invalid banner file name. ERROR CASE #2 ------------- To enable an unknown application on top of SSH, type: $ server_ssh server_2 -modify -application foo server_2 : Error 13163823110: server_2 : Invalid configuration value for the SSH server: Unknown application foo.. ERROR CASE #3 ------------- To change the number of SSHD threads to an unauthorized value, type: $ server_ssh server_2 -modify -threads 256 server_2 : Error 13163823110: server_2 : Invalid configuration value for the SSH server: Bad threads value specified, allowed range is (4-128). ERROR CASE #4 -------------- To change the port of the SSH server to an already used port, type: $ server_ssh server_2 -modify -port 445
server_2 : Error 13163823111: server_2: The SSH server cannot bind the TCP port 445. Note: 445 is used by the CIFS server. ERROR CASE #5 ------------- To regenerate the Data Mover host keys while the SSH server is active, type: $ server_ssh server_2 -generate server_2 : Error 13163823112: server_2 : The SSH server must be stopped before executing this command. ERROR CASE #6 ------------- To generate the Data Mover host key, and specify an invalid key type, type: $ server_ssh server_2 -generate -type foo server_2 : Error 13163823109: server_2 : Invalid SSH configuration: Bad KEYTYPE value attribute. ERROR CASE #7 ------------- To generate the Data Mover host key, and specify an RSA key with an invalid size, type: $ server_ssh server_2 -generate -type RSA -keysize 23 server_2 : Error 13163823110: server_2 : Invalid configuration value for the SSH server: Bad keysize value specified. ERROR CASE #8 ------------- To generate the Data Mover host key as a DSA key when a key of this type already exists, type: $ server_ssh server_2 -generate -type dsa server_2 : Error 13163823123: server_2 : The command failed as the DSA host key is already defined. ------------------------------------------------------------------- Created on: July 13 2011, 04:20 pm
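The GENERAL NOTES for server_ssh state that allow/deny directives are evaluated in the order denyhosts, allowhosts, denyusers, allowusers, denygroups, allowgroups, with an empty allow list meaning no restriction at that stage. A sketch of that decision order (a hypothetical helper for illustration, not the Data Mover's implementation; subnet and wildcard pattern matching is omitted):

```python
def ssh_access_allowed(host, user, groups, cfg):
    """Evaluate the server_ssh allow/deny directives in documented order:
    denyhosts, allowhosts, denyusers, allowusers, denygroups, allowgroups.
    An empty/absent allow list means no restriction at that stage."""
    if host in cfg.get("denyhosts", []):
        return False
    if cfg.get("allowhosts") and host not in cfg["allowhosts"]:
        return False
    if user in cfg.get("denyusers", []):
        return False
    if cfg.get("allowusers") and user not in cfg["allowusers"]:
        return False
    if any(g in cfg.get("denygroups", []) for g in groups):
        return False
    if cfg.get("allowgroups") and not any(g in cfg["allowgroups"] for g in groups):
        return False
    return True

cfg = {"denyusers": ["john"], "allowgroups": ["admin"]}
print(ssh_access_allowed("110.171.1.10", "cindy", ["admin"], cfg))  # → True
print(ssh_access_allowed("110.171.1.10", "john", ["admin"], cfg))   # → False
```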
server_standby Manages the standby and RDF relationships for the specified Data Movers. SYNOPSIS -------- server_standby {
retry Attempts to recover the primary Data Mover first, then if recovery fails, initiates activation of the standby. manual (default) Reboots the primary Data Mover. No action on the standby is initiated. SEE ALSO -------- Configuring Standbys on VNX and server_setup. EXAMPLE #1 ----------- To create a standby relationship between server_2 (primary) and server_3 (standby), type: $ server_standby server_2 -create mover=server_3 server_2 : server_3 is rebooting as standby Note: Before any other actions can take place, the reboot must be complete. EXAMPLE #2 ---------- To activate the server_3 (standby) to take over for server_2 (primary), type: $ server_standby server_2 -activate mover server_2 : server_2 : going offline server_3 : going active replace in progress ...done failover activity complete commit in progress (not interruptible)...done server_2 : renamed as server_2.faulted.server_3 server_3 : renamed as server_2 EXAMPLE #3 ---------- To restore server_3 as the standby Data Mover and server_2.faulted.server_3 as the primary, type: $ server_standby server_2 -restore mover server_2 : server_2 : going standby server_2.faulted.server_3 : going active replace in progress ...done failover activity complete commit in progress (not interruptible)...done server_2 : renamed as server_3 server_2.faulted.server_3 : renamed as server_2 EXAMPLE #4 ---------- To verify readiness of the standby Data Mover, type: $ server_standby server_2 -verify mover server_2 : ok EXAMPLE #5 ----------
To delete the standby relationship for server_2, type: $ server_standby server_2 -delete mover server_2 : done EXAMPLE #6 ---------- To create a standby relationship for three Data Movers, type: $ server_standby server_2 -create mover=server_3 server_2 : server_3 is rebooting as standby $ server_standby server_4 -create mover=server_3 server_4 : done $ server_standby server_5 -create mover=server_3 server_5 : done Note: Before any other actions can take place, the reboot must be complete. -------------------------------------- Last Modified: April 21, 2011 12:45 pm
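Examples #2 and #3 illustrate the rename convention applied during activation: the faulted primary becomes <primary>.faulted.<standby> and the standby takes over the primary's name. A tiny sketch of just that naming rule:

```python
def activate(primary, standby):
    """Return post-activation names per the server_standby examples:
    the faulted primary becomes '<primary>.faulted.<standby>' and the
    standby is renamed to the primary's name."""
    return {primary: f"{primary}.faulted.{standby}", standby: primary}

print(activate("server_2", "server_3"))
# → {'server_2': 'server_2.faulted.server_3', 'server_3': 'server_2'}
```

The -restore operation in Example #3 reverses exactly this mapping, which is why the faulted name must be preserved until the restore completes.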
server_stats Displays sets of statistics that are running on the specified Data Mover. SYNOPSIS -------- server_stats
-------------- The nfs.user stat is indexed by user id (UID). To maintain consistency, UIDs need to be resolved to User Names. The server_stats command now does this resolution by default when a user requests this stat. The NIS Service or a local password file must have user information for resolution to work. GID Resolution -------------- The nfs.group stat is indexed by group id (GID). To maintain consistency, GIDs need to be resolved to Group Names. The server_stats command now does this resolution by default when a user requests this stat. This is a support stat that requires the -vis support argument. The NIS Service or a local group file must have group information for resolution to work. In addition, server_stats manages the Statistics Monitoring service (statmonService) running on Data Movers including the ability to disable and enable statistics. NEW CORRELATED STATISTICS ------------------------- The new statistics are: cifs.branchcache The cifs.branchcache counters provide the statistics about the SMB2 BranchCache functionality, a new feature introduced with Microsoft Windows 7 and Microsoft Windows 2008 R2. They are divided into two sections: the cifs.branchcache.basic branch and the cifs.branchcache.usage branch. cifs.branchcache.basic Provides the counters related to the dialog with the BranchCache client. The statistics contain the following information: * Hit * Miss * hashCount * hashSize * hashTransferred * hashError * filtered * taskQueued * taskRunning cifs.branchcache.usage Provides the counters related to the generation of the hash files. The statistics contain the following information: * hashSizeMax * hashSizeAvg * hashSizeMin * hashTimeMax * hashTimeAvg * hashTimeMin * taskCount * taskQueueFull * maxUsedThread
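The UID and GID resolution described above for nfs.user and nfs.group amounts to a name-service lookup with a fallback to the raw numeric ID when no entry exists. A sketch using Python's standard pwd and grp modules, which consult the same local files or NIS via the host's name service:

```python
import pwd
import grp

def resolve_uid(uid):
    """Map a numeric UID to a user name via the password database
    (local file or NIS), falling back to the raw UID if unknown."""
    try:
        return pwd.getpwuid(uid).pw_name
    except KeyError:
        return str(uid)

def resolve_gid(gid):
    """Same resolution for group IDs via the group database."""
    try:
        return grp.getgrgid(gid).gr_name
    except KeyError:
        return str(gid)

print(resolve_uid(0))          # typically "root" on a Unix host
print(resolve_uid(2**31 - 5))  # an unknown UID falls back to the number
```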
cifs.user Provides cifs read and write statistics by call and bytes correlated to cifs users. It displays the same data that cifs.client does but correlated to user as opposed to IP address. This statistic provides the following information: * Total CIFS Ops/S * read Ops/S * write Ops/S * Suspicious Ops Diff * Total KiB/Sec * Read KiB/Sec * Write KiB/Sec * avgTime The default sort field is Total Ops/S. cifs.server Displays the same data as cifs.client but correlated to CIFS server name (as opposed to the CIFS client's IP address for cifs.client). This statistic provides the following information: * Total CIFS Ops/S * read Ops/S * write Ops/S * Suspicious Ops Diff * Total KiB/Sec * Read KiB/Sec * Write KiB/Sec * avgTime nfs.user Provides nfs read and write statistics by call and bytes correlated to nfs users. It displays the same data that nfs.client does but correlated to user as opposed to IP address. This statistic provides the following information: * Total NFS Ops/S * read Ops/S * write Ops/S * Suspicious Ops Diff * Total KiB/Sec * Read KiB/Sec * Write KiB/Sec * avgTime The default sort field is Total Ops/S. nfs.export Displays the same data as nfs.client, but correlated to NFS export (as opposed to the NFS client's IP address). This statistic provides the following information: * Total NFS Ops/S * read Ops/S * write Ops/S * Suspicious Ops Diff * Total KiB/Sec * Read KiB/Sec * Write KiB/Sec * avgTime nfs.group Displays the same data as nfs.client, but correlated to NFS group ID (as opposed to the NFS client's IP address).
This statistic provides the following information: * Total NFS Ops/S * read Ops/S * write Ops/S * Suspicious Ops Diff * Total KiB/Sec * Read KiB/Sec * Write KiB/Sec * avgTime nfs.vdm Names of VDMs configured on a Data Mover will be the elements of this set statistic. The physical Data Mover name will also be one of the elements in this set statistic. nfs.vdm.*.client Displays the same data as nfs.client, but only for VDMs. This statistic provides the following information: - Total NFS Ops/S - read Ops/S - write Ops/S - Suspicious Ops Diff - Total KiB/Sec - Read KiB/Sec - Write KiB/Sec - avgTime nfs.vdm.*.user Displays the same data as nfs.user, but only for VDMs. This statistic provides the following information: - Total NFS Ops/S - read Ops/S - write Ops/S - Suspicious Ops Diff - Total KiB/Sec - Read KiB/Sec - Write KiB/Sec - avgTime nfs.vdm.*.group Displays the same data as nfs.group, but only for VDMs. This statistic provides the following information: - Total NFS Ops/S - read Ops/S - write Ops/S - Suspicious Ops Diff - Total KiB/Sec - Read KiB/Sec - Write KiB/Sec - avgTime nfs.vdm.*.export Displays the same data as nfs.export, but only for VDMs. This statistic provides the following information: - Total NFS Ops/S - read Ops/S - write Ops/S - Suspicious Ops Diff - Total KiB/Sec - Read KiB/Sec - Write KiB/Sec
- avgTime fs.filesystem Displays the most active files within each specified filesystem. This statistic provides the following information: * Total KiB/Sec * readBytes * writtenBytes * avgTime * readAvgTime * writeAvgTime Note: File inodes will not be resolved until the filesystem is configured for file resolution using the server_fileresolve command. fs.qtreeFile Displays the most active files within each specified Qtree. This statistic provides the following information: * Total KiB/Sec * readBytes * writtenBytes * avgTime * readAvgTime * writeAvgTime Note: File inodes will not be resolved until the Quota Tree is configured for file resolution using the server_fileresolve command. store.volume Provides Disk Volume read and write statistics by blocks and bytes correlated to FileSystem and Disk Volume. It displays top FileSystems per disk volume. To list filesystems for a specific disk volume (for example, a volume named d133), run the server_stats command as: $ server_stats server_2 -m store.volume.d133 This statistic provides the following information: * totalBlocks * readBlocks * writtenBlocks * Total KiB/Sec * readBytes * writeBytes The default sort field is totalBlocks. OPTIONS ------- No arguments Displays a basic summary of statistics for the specified Data Mover as defined by the basic-std Statistics Group. -list Displays all defined statistics starting with the statgroup names followed by statpaths and their types. -info Displays the statgroup and statpath information. -service Specifies whether to start, stop, delete, or query the status of the statmonService. The statmonService runs on the Data Mover and listens for the server_stats requests.
[-start] Starts the statmonService on the Data Mover. If the -port argument is specified, it is used by the statmonManager service. These settings are persistent and execute as part of the Data Mover's boot-up configuration. [-stop] Shuts down the statmonService on the specified Data Mover. [-delete] Deletes the statmonService persistent configuration so it does not execute as part of the Data Mover's boot-up settings. If -delete is executed while the statmonService is running, the service stops and its configuration is deleted. [-status] Checks the status of the statmonService on the specified Data Mover. -monitor [-action] Enables, disables, or queries the state of the stats collection. -monitor {statpath_name|statgroup_name} Takes a comma-separated list of statpath and statgroup names. In cases where stats are available for multiple elements, the user can specify an element name or use ALL-ELEMENTS to refer to all elements at once. Since the server_stats command considers periods within the statpath name as delimiters, statpath names with periods as part of the element name require those periods to be double escaped. For example, statistics for a filesystem named ufs1.accounting should be requested using the following statpath name: store.logicalvolume.metavolume.ufs1\\.accounting Any duplicate statpath or statgroup names are consolidated and reported once. The options below are only applicable to Set and Correlated Set statpath names: [-sort
Correlated Set is being sorted on a numeric field, -order defaults to descending order; otherwise, it defaults to ascending order. Correlated Sets cannot be sorted on non-numeric fields, including the Correlated Set element ID. [-count
information. [-type {rate|diff|accu}] Specifies the display type of value for statistics with monotonically increasing values. The display type applies to statistics that increase monotonically, for example, network in-bound bytes. Other statistics that represent a point-in-time value, for example, current CIFS connections, are not affected by this option. The rate value displays the rate of change since the previous sample, the diff value displays the change in value since the previous sample, and the accu value displays the change in value since the initial sample. The default display type is rate. [-file
% KiB/s KiB/s KiB/s KiB/s Minimum 29 88618 729 8976 78599 Average 41 110083 860 12507 111368 Maximum 61 142057 1087 18632 167076 Where: Value Definition ----- ---------- Timestamp Time the poll was taken. CPU Util CPU utilization in percentage in this interval. Network In KiB/s Network kibibytes received over all network interfaces. Network Out KiB/s Network kibibytes sent over all network interfaces. dVol Read KiB/s Storage kibibytes received from all server-storage interfaces. dVol Write KiB/s Storage kibibytes sent to all server-storage interfaces. EXAMPLE #2 ---------- To display the basic-std group by indicating the change in value since the previous sample, type: $ server_stats server_2 -monitor basic-std -interval 5 -count 5 -type diff server_2 CPU Network Network dVol dVol Timestamp Util In KiB Out KiB Read KiB Write % diff diff diff KiB diff 02:53:29 46 267660 2136 26128 232654 02:53:31 38 200668 1543 23144 211182 02:53:33 46 226761 1749 26488 230558 02:53:35 48 246921 1876 28720 255957 02:53:37 40 212353 1673 23016 210573 server_2 CPU Network Network dVol dVol Summary Util In KiB Out KiB Read KiB Write % diff diff diff KiB diff Minimum 38 200668 1543 23016 210573 Average 44 230873 1795 25499 228185 Maximum 48 267660 2136 28720 255957 Where: Value Definition ----- ---------- Timestamp Time the poll was taken. CPU Util % CPU utilization in percentage in this interval. Network In KiB diff Network kibibytes received over all network interfaces per differential value. Network Out KiB diff Network kibibytes sent over all network interfaces per differential value. dVol Read KiB diff Storage kibibytes received from all server-storage interfaces per differential value. dVol Write KiB diff Storage kibibytes sent to all server-storage interfaces per differential value. 
EXAMPLE #3
----------
To display the basic-std group by indicating the change in value since the first sample, type:

$ server_stats server_2 -monitor basic-std -interval 5 -count 5 -type accu

server_2  CPU     Network In KiB  Network Out KiB  dVol Read KiB  dVol Write KiB
Timestamp Util %
02:53:48  42  236257   1880  25504   224832
02:53:50  54  505640   3983  55760   500538
02:53:52  29  686282   5377  74096   662494
02:53:54  46  922765   7183  101704  908813
02:53:56  41  1125518  8777  126640  1134362

server_2  CPU     Network In KiB  Network Out KiB  dVol Read KiB  dVol Write KiB
Summary   Util %
Minimum   29  236257   1880  25504   224832
Average   42  695293   5440  76741   686208
Maximum   54  1125518  8777  126640  1134362

Where:
Value            Definition
-----            ----------
Timestamp        Time the poll was taken.
CPU Util %       CPU utilization in percentage in this interval.
Network In KiB   Network kibibytes received over all network interfaces per accumulated value.
Network Out KiB  Network kibibytes sent over all network interfaces per accumulated value.
dVol Read KiB    Storage kibibytes received from all server-storage interfaces per accumulated value.
dVol Write KiB   Storage kibibytes sent to all server-storage interfaces per accumulated value.

EXAMPLE #4
----------
To display a list of statistics group names followed by statpaths and their types, type:

$ server_stats server_2 -list

Type            Stat Name
...
Correlated Set  cifs.user
Counter         cifs.user.ALL-ELEMENTS.totalCalls
Counter         cifs.user.ALL-ELEMENTS.readCalls
Counter         cifs.user.ALL-ELEMENTS.writeCalls
Fact            cifs.user.ALL-ELEMENTS.suspectCalls
Counter         cifs.user.ALL-ELEMENTS.totalBytes
Counter         cifs.user.ALL-ELEMENTS.readBytes
Counter         cifs.user.ALL-ELEMENTS.writeBytes
Fact            cifs.user.ALL-ELEMENTS.avgTime
Fact            cifs.user.ALL-ELEMENTS.server
Fact            cifs.user.ALL-ELEMENTS.client
...
Correlated Set  nfs.user
Counter         nfs.user.ALL-ELEMENTS.totalCalls
Counter         nfs.user.ALL-ELEMENTS.readCalls
Counter         nfs.user.ALL-ELEMENTS.writeCalls
Fact            nfs.user.ALL-ELEMENTS.suspectCalls
Counter         nfs.user.ALL-ELEMENTS.totalBytes
Counter         nfs.user.ALL-ELEMENTS.readBytes
Counter         nfs.user.ALL-ELEMENTS.writeBytes
Fact            nfs.user.ALL-ELEMENTS.avgTime
...
Set             store.volume
Correlated Set  store.volume.ALL-ELEMENTS.fileSystem
Counter         store.volume.ALL-ELEMENTS.fileSystem.ALL-ELEMENTS.totalBlocks
Counter         store.volume.ALL-ELEMENTS.fileSystem.ALL-ELEMENTS.readBlocks
Counter         store.volume.ALL-ELEMENTS.fileSystem.ALL-ELEMENTS.writeBlocks
Counter         store.volume.ALL-ELEMENTS.fileSystem.ALL-ELEMENTS.totalBytes
Counter         store.volume.ALL-ELEMENTS.fileSystem.ALL-ELEMENTS.readBytes
Counter         store.volume.ALL-ELEMENTS.fileSystem.ALL-ELEMENTS.writeBytes
...

EXAMPLE #5
----------
To display the statgroup information, type:

$ server_stats server_2 -info statsb
server_2 :
name            = statsB
description     = My group # 2
type            = Group - user-defined
member_stats    = nfs.basic,cifs.basic,iscsi.basic
member_elements =
member_of       = statsA

EXAMPLE #6
----------
To display information for all statistics group names followed by statpaths, type:

$ server_stats server_2 -info
server_2 :
name            = statsA
description     = My group # 1
type            = Group - user-defined
member_stats    = statsB
member_elements =
member_of       =
...
name            = cifs
description     = The CIFS-protocol service
type            = Family
member_stats    =
member_elements =
member_of       =
...

EXAMPLE #7
----------
To start the statmonService, type:

$ server_stats server_2 -service -start -port 7777
statmonService started on port: 7777.

EXAMPLE #8
----------
To stop the statmonService, type:

$ server_stats server_2 -service -stop
server_2: done.

EXAMPLE #9
----------
To delete the statmonService configuration from the Data Mover's boot-up settings, type:

$ server_stats server_2 -service -delete
server_2: done.

EXAMPLE #10
-----------
To query the status of the statmonService, type:

$ server_stats server_2 -service -status
server_2:
The statmonService has started.
Interface=INTERNAL Port=7777
Allow=128.221.252.100:128.221.252.101:128.221.253.100:128.221.253.101
The statmonService is listening for incoming network connections
Max Connections: 32, Current: 0

EXAMPLE #11
-----------
To enable stats collection, type:

$ server_stats server_2 -monitor -action enable
server_2: done.

EXAMPLE #12
-----------
To query the state of the stats collection, type:

$ server_stats server_2 -monitor -action status
server_2 :
Statistics are enabled.

EXAMPLE #13
-----------
To display five iterations of the cifs-std statistics group with a three second interval, type:

$ server_stats server_2 -monitor cifs-std -i 3 -c 5

server_2  CIFS   CIFS   CIFS   CIFS Avg  CIFS   CIFS    CIFS Avg  CIFS         CIFS
Timestamp Total  Read   Read   Read      Write  Write   Write     Share        Open
          Ops/s  Ops/s  KiB/s  Size KiB  Ops/s  KiB/s   Size KiB  Connections  Files
02:54:31  2133   0      0      -         1947   110600  57        96           587
02:54:34  1895   0      0      -         1737   99057   57        96           631
02:54:37  2327   0      0      -         2104   119556  57        96           649
02:54:40  2109   0      0      -         1864   106081  57        96           653
02:54:43  2439   0      0      -         2172   123578  57        96           639

server_2  CIFS   CIFS   CIFS   CIFS Avg  CIFS   CIFS    CIFS Avg  CIFS         CIFS
Summary   Total  Read   Read   Read      Write  Write   Write     Share        Open
          Ops/s  Ops/s  KiB/s  Size KiB  Ops/s  KiB/s   Size KiB  Connections  Files
Minimum   1895   0      0      -         1737   99057   57        96           587
Average   2180   0      0      -         1965   111775  57        96           632
Maximum   2439   0      0      -         2172   123578  57        96           653

Where:
Value                   Definition
-----                   ----------
Timestamp               Time the poll was taken.
CIFS Total Ops/s        Total operations per second.
CIFS Read Ops/s         CIFS read operations per second in the interval.
CIFS Read KiB/s         CIFS read data response in kibibytes per second.
CIFS Avg Read Size KiB  Average read data response size.
CIFS Write Ops/s        CIFS write operations per second.
CIFS Write KiB/s        CIFS write data response in kibibytes per second.
CIFS Avg Write Size KiB  Average write data size.
CIFS Share Connections   Number of CIFS protocol connections.
CIFS Open Files          Number of open CIFS files.

EXAMPLE #14
-----------
To display five iterations of the nfs-std statistics group with a one second interval, type:

$ server_stats server_2 -monitor nfs-std -i 1 -c 5

server_2  Total  NFS    NFS    NFS Avg     NFS    NFS    NFS Avg     NFS
Timestamp NFS    Read   Read   Read Size   Write  Write  Write Size  Active
          Ops/s  Ops/s  KiB/s  Bytes       Ops/s  KiB/s  Bytes       Threads
13:44:53  20650  4121   67506  16774       2214   29737  13754       648
13:44:54  11663  2318   37140  16407       1238   17307  14316       648
13:44:55  8678   1790   30761  17597       945    12511  13557       648
13:44:56  17655  3543   56382  16296       1967   27077  14096       648
13:44:57  20302  4033   63822  16205       2271   31469  14189       648

server_2  Total  NFS    NFS    NFS Avg     NFS    NFS    NFS Avg     NFS
Summary   NFS    Read   Read   Read Size   Write  Write  Write Size  Active
          Ops/s  Ops/s  KiB/s  Bytes       Ops/s  KiB/s  Bytes       Threads
Minimum   8678   1790   30761  16205       945    12511  13557       648
Average   15790  3161   51122  16656       1727   23620  13982       648
Maximum   20650  4121   67506  17597       2271   31469  14316       648

Where:
Value                     Definition
-----                     ----------
Timestamp                 Time the poll was taken.
Total NFS Ops/s           Total number of operations per second.
NFS Read Ops/s            NFS read operations per second in the interval.
NFS Read KiB/s            NFS read data response in kibibytes per second.
NFS Avg Read Size Bytes   Average read data response size.
NFS Write Ops/s           NFS write operations per second.
NFS Write KiB/s           NFS write data response in kibibytes per second.
NFS Avg Write Size Bytes  Average write data size.
NFS Active Threads        Number of NFS active threads.

-----------------------------------------------------------------------------
Note: The accuracy of statistics is linked in part to how often server_stats reports results. For example, server_stats was used to monitor NFS write bytes to a Data Mover while the NFS client, swiftest, wrote a single byte each second for five minutes.
When server_stats was run with an interval of ten minutes, all bytes written were accounted for. At smaller intervals, such as one second, bytes were lost. Detailed results are as follows:

Interval   1      2      5      15     30     120    600
S1         0.005  0.045  0.052  0.000  0.050  0.000  0.000
S2         0.002  0.000  0.043  0.050  0.000  0.000  0.000

To review, these numbers are the number of kilobytes per report lost at each reporting period. The first row (S1) is the result of a single server_stats session; the second (S2) is the average of two sessions combined into a single value. Each column is the server_stats interval value. In general, the larger the amount of time between reporting periods, the more accurate the server_stats numbers. However, even at the reporting periods where loss was prevalent, the loss rate was still very low.

EXAMPLE #15
-----------
To display five iterations of the summary statistics for caches with a three second interval, type:

$ server_stats server_2 -monitor caches-std -i 3 -c 5

server_2   DNLC     OF Cache  Buffer
Timestamp  Hit      Hit       Cache
           Ratio %  Ratio %   Hit %
02:55:26   -        100       71
02:55:29   -        100       72
02:55:32   -        100       73
02:55:35   -        100       73
02:55:38   -        100       72

server_2   DNLC     OF Cache  Buffer
Summary    Hit      Hit       Cache
           Ratio %  Ratio %   Hit %
Minimum    -        100       71
Average    -        100       72
Maximum    -        100       73

Where:
Value                 Definition
-----                 ----------
Timestamp             Time the poll was taken.
DNLC Hit Ratio %      Directory Name Lookup Cache (DNLC) hit ratio.
OF Cache Hit Ratio %  Open file cache hit ratio.
Buffer Cache Hit %    Kernel buffer cache hit ratio.

EXAMPLE #16
-----------
To display the netDevices-std statistics group with a three second interval, type:

$ server_stats server_2 -monitor netDevices-std -i 3 -c 3

server_2  device  Network  Network   Network  Network  Network   Network
Timestamp         In       In        In       Out      Out       Out
                  Pkts/s   Errors/s  KiB/s    Pkts/s   Errors/s  KiB/s
02:55:52  mge0    2        0         0        1        0         0
          mge1    17       0         23       9        0         1
          cge0    3593     0         26566    2289     0         203
          cge1    6912     0         50206    4444     0         378
          cge2    3637     0         25570    2342     0         209
02:55:55  mge0    0        0         0        0        0         0
          mge1    7        0         9        4        0         0
          cge0    3444     0         24744    2252     0         204
          cge1    7415     0         53354    4721     0         400
          cge2    3913     0         27796    2502     0         222
02:55:58  mge0    2        0         0        2        0         0
          mge1    32       0         39       19       0         2
          cge0    4029     0         29334    2594     0         230
          cge1    7461     0         54030    4791     0         406
          cge2    3902     0         27319    2505     0         223

server_2  device  Network  Network   Network  Network  Network   Network
Summary           In       In        In       Out      Out       Out
                  Pkts/s   Errors/s  KiB/s    Pkts/s   Errors/s  KiB/s
Minimum   mge0    0        0         0        0        0         0
          mge1    7        0         9        4        0         0
          cge0    3444     0         24744    2252     0         203
          cge1    6912     0         50206    4444     0         378
          cge2    3637     0         25570    2342     0         209
          cge3    0        0         0        0        0         0
Average   mge0    1        0         0        1        0         0
          mge1    19       0         24       11       0         1
          cge0    3689     0         26882    2378     0         213
          cge1    7263     0         52530    4652     0         395
          cge2    3817     0         26895    2450     0         218
          cge3    0        0         0        0        0         0
Maximum   mge0    2        0         0        2        0         0
mge1 32 0 39 19 0 2 cge0 4029 0 29334 2594 0 230 cge1 7461 0 54030 4791 0 406 cge2 3913 0 27796 2505 0 223 cge3 0 0 0 0 0 0 Where: Value Definition ----- ---------- Timestamp Time the poll was taken. Device Name of the network device. Network In Pkts/s Network packets received per second. Network In Errors/s Network input errors encountered per second. Network In KiB/s Network kibibytes received per second. Network Out Pkts/s Network packets sent per second. Network Out Errors/s Network output errors encountered per second. Network Out KiB/s Network kibibytes sent per second. EXAMPLE #17 ----------- To display the netDevices-std statistics group without the summary and with a three second interval, type: $ server_stats server_2 -monitor netDevices-std -i 3 -c 3 -terminationsummary no server_2 device Network Network Network Network Network Network Timestamp In In In Out Out Out Pkts/s Errors/s KiB/s Pkts/s Errors/s KiB/s 02:56:11 mge0 16 0 1 19 0 23 mge1 43 0 60 24 0 2 cge0 3960 0 29053 2547 0 226 cge1 6709 0 48414 4296 0 366 cge2 4829 0 33996 3125 0 281 02:56:14 mge0 0 0 0 0 0 0 mge1 3 0 3 2 0 0 cge0 3580 0 25905 2335 0 211 cge1 6663 0 48212 4273 0 364 cge2 3970 0 28113 2523 0 222 02:56:17 mge0 2 0 0 2 0 0 mge1 5 0 6 2 0 0 cge0 3561 0 25891 2296 0 206 cge1 7091 0 51721 4564 0 389 cge2 3931 0 27703 2514 0 223 cge3 0 0 0 0 0 0 EXAMPLE #18 ----------- To display the cifsOps-std statistics with a five second interval, type: $ server_stats server_2 -monitor cifsops-std -i 5 -c 3 server_2 SMB Operation Op Min Max Avg Timestamp Calls/s uSec uSec uSec/call 02:57:00 SMB1_Close 89 45 406775 10273 SMB1_WriteX 1837 30 1618776 144030 SMB1_CreateNTX 84 51 458090 379 02:57:03 SMB1_Close 122 45 406775 10057 SMB1_WriteX 1867 30 1618776 133180 SMB1_CreateNTX 126 51 458090 1826 02:57:06 SMB1_Close 105 45 406775 14663 SMB1_WriteX 2119 30 1618776 121976 SMB1_CreateNTX 103 51 458090 1801 server_2 SMB Operation Op Min Max Avg Summary Calls/s uSec uSec uSec/call Minimum SMB1_Mkdir 0 0 0 -
SMB1_Rmdir 0 0 0 - SMB1_Open 0 0 0 - SMB1_Create 0 0 0 - SMB1_Close 89 45 406775 10057 SMB1_Flush 0 0 0 - SMB1_Unlink 0 0 0 - SMB1_Rename 0 0 0 - SMB1_GetAttr 0 0 0 - SMB1_SetAttr 0 0 0 - SMB1_Read 0 0 0 - SMB1_Write 0 0 0 - SMB1_Lock 0 0 0 - SMB1_Unlock 0 0 0 - SMB1_CreateTmp 0 0 0 - SMB1_MkNew 0 0 0 - SMB1_ChkPath 0 0 0 - SMB1_Exit 0 0 0 - SMB1_Lseek 0 0 0 - SMB1_LockRead 0 0 0 - SMB1_WriteUnlock 0 0 0 - SMB1_ReadBlockRaw 0 0 0 - SMB1_WriteBlockRaw 0 0 0 - SMB1_SetAttrExp 0 0 0 - SMB1_GetAttrExp 0 0 0 - SMB1_LockingX 0 0 0 - SMB1_Trans 0 0 0 - SMB1_TransSec 0 0 0 - SMB1_Copy 0 0 0 - SMB1_Move 0 0 0 - SMB1_Echo 0 0 0 - SMB1_WriteClose 0 0 0 - SMB1_OpenX 0 0 0 - SMB1_ReadX 0 0 0 - SMB1_WriteX 1837 30 1618776 121976 SMB1_CloseTreeDisco 0 0 0 - SMB1_Trans2Prim 0 0 0 - SMB1_Trans2Secd 0 0 0 - SMB1_FindClose2 0 0 0 - SMB1_FindNotifyClose 0 0 0 - SMB1_TreeConnect 0 0 0 - SMB1_TreeDisco 0 0 0 - SMB1_NegProt 0 44 85 - SMB1_SessSetupX 0 1088 12058 - SMB1_UserLogoffX 0 0 0 - SMB1_TreeConnectX 0 82 499 - SMB1_DiskAttr 0 0 0 - SMB1_Search 0 0 0 - SMB1_FindFirst 0 0 0 - SMB1_FindUnique 0 0 0 - SMB1_FindClose 0 0 0 - SMB1_TransNT 0 0 0 - SMB1_TransNTSecd 0 0 0 - SMB1_CreateNTX 84 51 458090 379 SMB1_CancelNT 0 0 0 - SMB1_SendMessage 0 0 0 - SMB1_BeginMessage 0 0 0 - SMB1_EndMessage 0 0 0 - SMB1_MessageText 0 0 0 - SMB2_Negotiate 0 0 0 - SMB2_SessionSetup 0 0 0 - SMB2_Logoff 0 0 0 - SMB2_TreeConnect 0 0 0 - SMB2_TreeDisConnect 0 0 0 - SMB2_Create 0 0 0 - SMB2_Close 0 0 0 - SMB2_Flush 0 0 0 - SMB2_Read 0 0 0 - SMB2_Write 0 0 0 - SMB2_Lock 0 0 0 - SMB2_Ioctl 0 0 0 - SMB2_Cancel 0 0 0 -
SMB2_Echo 0 0 0 - SMB2_QueryDirectory 0 0 0 - SMB2_ChangeNotify 0 0 0 - SMB2_QueryInfo 0 0 0 - SMB2_SetInfo 0 0 0 - SMB2_OplockBreak 0 0 0 - Average SMB1_Mkdir 0 0 0 - SMB1_Rmdir 0 0 0 - SMB1_Open 0 0 0 - SMB1_Create 0 0 0 - SMB1_Close 105 45 406775 11664 SMB1_Flush 0 0 0 - SMB1_Unlink 0 0 0 - SMB1_Rename 0 0 0 - SMB1_GetAttr 0 0 0 - SMB1_SetAttr 0 0 0 - SMB1_Read 0 0 0 - SMB1_Write 0 0 0 - SMB1_Lock 0 0 0 - SMB1_Unlock 0 0 0 - SMB1_CreateTmp 0 0 0 - SMB1_MkNew 0 0 0 - SMB1_ChkPath 0 0 0 - SMB1_Exit 0 0 0 - SMB1_Lseek 0 0 0 - SMB1_LockRead 0 0 0 - SMB1_WriteUnlock 0 0 0 - SMB1_ReadBlockRaw 0 0 0 - SMB1_WriteBlockRaw 0 0 0 - SMB1_SetAttrExp 0 0 0 - SMB1_GetAttrExp 0 0 0 - SMB1_LockingX 0 0 0 - SMB1_Trans 0 0 0 - SMB1_TransSec 0 0 0 - SMB1_Copy 0 0 0 - SMB1_Move 0 0 0 - SMB1_Echo 0 0 0 - SMB1_WriteClose 0 0 0 - SMB1_OpenX 0 0 0 - SMB1_ReadX 0 0 0 - SMB1_WriteX 1941 30 1618776 133062 SMB1_CloseTreeDisco 0 0 0 - SMB1_Trans2Prim 0 0 0 - SMB1_Trans2Secd 0 0 0 - SMB1_FindClose2 0 0 0 - SMB1_FindNotifyClose 0 0 0 - SMB1_TreeConnect 0 0 0 - SMB1_TreeDisco 0 0 0 - SMB1_NegProt 0 44 85 - SMB1_SessSetupX 0 1088 12058 - SMB1_UserLogoffX 0 0 0 - SMB1_TreeConnectX 0 82 499 - SMB1_DiskAttr 0 0 0 - SMB1_Search 0 0 0 - SMB1_FindFirst 0 0 0 - SMB1_FindUnique 0 0 0 - SMB1_FindClose 0 0 0 - SMB1_TransNT 0 0 0 - SMB1_TransNTSecd 0 0 0 - SMB1_CreateNTX 104 51 458090 1335 SMB1_CancelNT 0 0 0 - SMB1_SendMessage 0 0 0 - SMB1_BeginMessage 0 0 0 - SMB1_EndMessage 0 0 0 - SMB1_MessageText 0 0 0 - SMB2_Negotiate 0 0 0 - SMB2_SessionSetup 0 0 0 - SMB2_Logoff 0 0 0 - SMB2_TreeConnect 0 0 0 - SMB2_TreeDisConnect 0 0 0 - SMB2_Create 0 0 0 -
SMB2_Close 0 0 0 - SMB2_Flush 0 0 0 - SMB2_Read 0 0 0 - SMB2_Write 0 0 0 - SMB2_Lock 0 0 0 - SMB2_Ioctl 0 0 0 - SMB2_Cancel 0 0 0 - SMB2_Echo 0 0 0 - SMB2_QueryDirectory 0 0 0 - SMB2_ChangeNotify 0 0 0 - SMB2_QueryInfo 0 0 0 - SMB2_SetInfo 0 0 0 - SMB2_OplockBreak 0 0 0 - Maximum SMB1_Mkdir 0 0 0 - SMB1_Rmdir 0 0 0 - SMB1_Open 0 0 0 - SMB1_Create 0 0 0 - SMB1_Close 122 45 406775 14663 SMB1_Flush 0 0 0 - SMB1_Unlink 0 0 0 - SMB1_Rename 0 0 0 - SMB1_GetAttr 0 0 0 - SMB1_SetAttr 0 0 0 - SMB1_Read 0 0 0 - SMB1_Write 0 0 0 - SMB1_Lock 0 0 0 - SMB1_Unlock 0 0 0 - SMB1_CreateTmp 0 0 0 - SMB1_MkNew 0 0 0 - SMB1_ChkPath 0 0 0 - SMB1_Exit 0 0 0 - SMB1_Lseek 0 0 0 - SMB1_LockRead 0 0 0 - SMB1_WriteUnlock 0 0 0 - SMB1_ReadBlockRaw 0 0 0 - SMB1_WriteBlockRaw 0 0 0 - SMB1_SetAttrExp 0 0 0 - SMB1_GetAttrExp 0 0 0 - SMB1_LockingX 0 0 0 - SMB1_Trans 0 0 0 - SMB1_TransSec 0 0 0 - SMB1_Copy 0 0 0 - SMB1_Move 0 0 0 - SMB1_Echo 0 0 0 - SMB1_WriteClose 0 0 0 - SMB1_OpenX 0 0 0 - SMB1_ReadX 0 0 0 - SMB1_WriteX 2119 30 1618776 144030 SMB1_CloseTreeDisco 0 0 0 - SMB1_Trans2Prim 0 0 0 - SMB1_Trans2Secd 0 0 0 - SMB1_FindClose2 0 0 0 - SMB1_FindNotifyClose 0 0 0 - SMB1_TreeConnect 0 0 0 - SMB1_TreeDisco 0 0 0 - SMB1_NegProt 0 44 85 - SMB1_SessSetupX 0 1088 12058 - SMB1_UserLogoffX 0 0 0 - SMB1_TreeConnectX 0 82 499 - SMB1_DiskAttr 0 0 0 - SMB1_Search 0 0 0 - SMB1_FindFirst 0 0 0 - SMB1_FindUnique 0 0 0 - SMB1_FindClose 0 0 0 - SMB1_TransNT 0 0 0 - SMB1_TransNTSecd 0 0 0 - SMB1_CreateNTX 126 51 458090 1826 SMB1_CancelNT 0 0 0 - SMB1_SendMessage 0 0 0 - SMB1_BeginMessage 0 0 0 - SMB1_EndMessage 0 0 0 -
SMB1_MessageText 0 0 0 - SMB2_Negotiate 0 0 0 - SMB2_SessionSetup 0 0 0 - SMB2_Logoff 0 0 0 - SMB2_TreeConnect 0 0 0 - SMB2_TreeDisConnect 0 0 0 - SMB2_Create 0 0 0 - SMB2_Close 0 0 0 - SMB2_Flush 0 0 0 - SMB2_Read 0 0 0 - SMB2_Write 0 0 0 - SMB2_Lock 0 0 0 - SMB2_Ioctl 0 0 0 - SMB2_Cancel 0 0 0 - SMB2_Echo 0 0 0 - SMB2_QueryDirectory 0 0 0 - SMB2_ChangeNotify 0 0 0 - SMB2_QueryInfo 0 0 0 - SMB2_SetInfo 0 0 0 - SMB2_OplockBreak 0 0 0 - Where: Value Definition ----- ---------- Timestamp Time the poll was taken. SMB Operation Name of the SMB operation. Op Calls/s Number of calls to this SMB operation per second. Min uSec Minimum time in microseconds per call. Max uSec Maximum time in microseconds per call. Avg uSec/Call Average time in microseconds consumed per call. EXAMPLE #19 ----------- To display the cifsOps-std statistics group without the summary and with a five second interval, type: $ server_stats server_2 -m cifsops-std -i 5 -c 3 -te no server_2 SMB Operation Op Min Max Avg Timestamp Calls/s uSec uSec uSec/call 02:57:24 SMB1_Close 56 45 552768 25299 SMB1_WriteX 1360 29 1618776 161125 SMB1_CreateNTX 46 51 458090 971 02:57:27 SMB1_Close 130 45 568291 16814 SMB1_WriteX 1627 29 1618776 182622 SMB1_CreateNTX 147 51 458090 276 02:57:30 SMB1_Close 50 45 568291 29992 SMB1_WriteX 1615 29 1618776 151924 SMB1_CreateNTX 37 51 458090 2850 EXAMPLE #20 ----------- To display the nfsOps-std statistics group without the summary and with a five second interval, type: $ server_stats server_2 -m nfsops-std -i 5 -c 3 -te no server_2 NFS Op NFS NFS NFS NFS Op % Timestamp Op Op Op Calls/s Errors/s uSec/call 03:18:21 v3Read 23442 0 63846 50 v3Write 23372 0 99156 50 03:18:24 v3Read 23260 0 65756 50 v3Write 23243 0 101135 50 03:18:27 v3Read 23385 0 66808 50 v3Write 23323 0 102201 50
Where: Value Definition ----- ---------- Timestamp Time the poll was taken. NFS Op Name of the NFS operation. NFS Op Calls/s Number of calls to this NFS operation per second. NFS Op Errors/s Number of times the NFS operation failed per second. NFS Op uSec/Call Average time in microseconds consumed per call. NFS Op % Percent of total NFS calls attributed to this operation. EXAMPLE #21 ----------- To display the diskVolumes-std statistics group without the summary and with a five second interval, type: $ server_stats server_2 -m diskVolumes-std -i 5 -c 3 -te no server_2 dVol Queue Read Read Avg Read Write Write Avg Write Util% Timestamp Depth Ops/s KiB/s Size Ops/s KiB/s Size Bytes/s Bytes/s 02:58:09 NBS1 0 0 3 8192 1 7 6827 0 root_ldisk 0 0 0 - 461 490 1090 47 d7 0 113 904 8192 530 19619 37881 83 d11 0 249 1995 8192 431 11640 27634 91 d8 0 68 547 8192 372 11472 31607 79 d12 33 424 3389 8192 609 20045 33705 99 d9 0 36 291 8192 592 20339 35161 67 d13 0 333 2664 8192 347 11925 35158 93 d10 0 24 189 8192 385 11896 31668 63 d14 36 573 4581 8192 454 20173 45468 100 02:58:12 root_ldisk 0 0 0 - 401 462 1182 44 NBS6 0 0 0 - 1 3 3072 0 d7 0 78 624 8192 388 13851 36523 70 d11 0 216 1728 8192 470 11147 24268 84 d8 0 51 411 8192 333 10672 32850 85 d12 0 301 2408 8192 483 14411 30531 98 d9 0 24 192 8192 422 14285 34691 50 d13 0 290 2317 8192 340 10920 32856 87 d10 0 19 152 8192 346 10944 32389 70 d14 47 407 3259 8192 342 14288 42822 100 02:58:15 NBS1 0 0 0 - 3 1 512 0 root_ldisk 0 0 0 - 409 454 1135 43 NBS5 0 0 0 - 9 83 9070 1 d7 0 122 976 8192 471 20179 43839 90 d11 1 144 1149 8192 225 6608 30118 94 d8 2 33 261 8192 229 6515 29131 48 d12 41 424 3395 8192 666 20632 31722 93 d9 0 44 355 8192 577 20848 36999 82 d13 2 185 1483 8192 201 6768 34423 93 d10 0 13 101 8192 238 6789 29252 36 d14 0 583 4667 8192 521 21131 41505 95 Where: Value Definition ----- ---------- Timestamp Time the poll was taken. dVol Name of the disk volume. Queue Depth Queue depth of the disk volume. 
Read Ops/s            Number of read operations per second.
Read KiB/s            Kibibytes read per second.
Avg Read Size Bytes   Average size in bytes of read requests.
Write Ops/s           Number of write operations per second.
Write KiB/s           Kibibytes written per second.
Avg Write Size Bytes  Average size in bytes of write requests.
Util %                Disk utilization in percentage.
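As a consistency check on the diskVolumes-std columns, the average request size in bytes is simply the KiB/s figure converted to bytes and divided by the operations per second. A quick illustrative sketch (not part of server_stats), using the d7 read figures from the first poll above:

```python
# Avg Read Size Bytes = Read KiB/s * 1024 / Read Ops/s.
# Figures taken from the d7 row of the first diskVolumes-std poll.
read_ops_per_s = 113
read_kib_per_s = 904
avg_read_size_bytes = read_kib_per_s * 1024 / read_ops_per_s
print(avg_read_size_bytes)  # 8192.0, matching the Avg Read Size column
```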
EXAMPLE #22
-----------
To display the metaVolumes-std statistics group without the summary and with a five second interval, type:

$ server_stats server_2 -m metaVolumes-std -i 5 -c 3 -te no

server_2  MetaVol    Read        Read   Avg Read  Read   Write       Write  Avg Write  Write
Timestamp            Requests/s  KiB/s  Size      Ops/s  Requests/s  KiB/s  Size       Ops/s
                                        Bytes                               Bytes
02:58:37  SNBS6      0    0     -     0    1    3      3072    1
          ufs_4      0    0     -     0    160  1285   8209    161
          ufs_5      0    0     -     0    163  1299   8175    162
          ufs_3      0    0     -     0    11   2155   200580  11
          ufs_2      347  2776  8192  347  140  23544  172208  140
          ufs_0      315  2517  8192  315  148  21427  147916  148
          ufs_1      654  5229  8192  654  313  45512  148895  313
          root_fs_3  1    11    8192  1    0    0      -       0
02:58:40  SNBS5      0    0     -     0    3    37     12743   3
          SNBS1      0    0     -     0    3    1      512     3
          ufs_4      0    0     -     0    159  1257   8089    157
          ufs_5      0    0     -     0    160  1273   8158    159
          ufs_3      0    0     -     0    2    511    224695  2
          ufs_2      396  3166  8192  396  195  27326  143200  195
          ufs_0      431  3446  8192  431  187  29574  162161  187
          ufs_1      408  3262  8192  408  159  27782  178784  159
          root_fs_3  1    5     8192  1    0    0      -       0
02:58:43  SNBS5      0    0     -     0    1    5      5461    1
          SNBS6      0    0     -     0    1    3      4608    1
          ufs_4      0    0     -     0    146  1159   8136    145
          ufs_5      0    0     -     0    148  1183   8174    148
          ufs_3      0    0     -     0    8    1965   262144  8
          ufs_2      522  4174  8192  522  219  35546  166238  219
          ufs_0      492  3933  8192  492  222  33356  153886  222
          ufs_1      467  3736  8192  467  188  31955  173819  188

Where:
Value                 Definition
-----                 ----------
MetaVol               Name of the meta volume associated with the file system.
Read Requests/s       Number of read requests per second to this volume.
Read KiB/s            Kibibytes read per second.
Avg Read Size Bytes   Average size in bytes of read requests to this volume.
Read Ops/s            Number of read operations per second.
Write Requests/s      Number of write requests per second.
Write KiB/s           Number of kibibytes written per second to this volume.
Avg Write Size Bytes  Average size in bytes of write requests.
Write Ops/s           Number of write operations per second.
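When capturing output like the above for post-processing, each data row is whitespace-delimited once wrapped lines have been rejoined. A minimal parsing sketch (the field names are illustrative, mirroring the Where: table above; it assumes rows have been normalized to one volume per line):

```python
# Parse normalized metaVolumes-std style rows into dicts.
# Field names are hypothetical labels for the columns documented above.
FIELDS = ["metaVol", "readRequests", "readKiB", "avgReadSizeBytes",
          "readOps", "writeRequests", "writeKiB", "avgWriteSizeBytes",
          "writeOps"]

def parse_row(line):
    row = dict(zip(FIELDS, line.split()))
    # Numeric columns; '-' marks a value that is not applicable.
    for key in FIELDS[1:]:
        row[key] = None if row[key] == "-" else int(row[key])
    return row

# The ufs_2 row from the first poll above:
row = parse_row("ufs_2 347 2776 8192 347 140 23544 172208 140")
print(row["metaVol"], row["writeKiB"])  # ufs_2 23544
```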
EXAMPLE #23
-----------
To display the nfsOps-std statistics group sorted by the percentage of all the NFS operations for a five second interval, type:

$ server_stats server_2 -monitor nfsOps-std -sort opPct -i 5 -c 3 -te

server_2   NFS Op   NFS Op   NFS Op    NFS Op     NFS Op %
Timestamp           Calls/s  Errors/s  uSec/call
03:18:57   v3Read   23263    0         63846      50
           v3Write  23352    0         99156      50
03:19:00   v3Read   23431    0         65756      50
           v3Write  23345    0         101135     50
03:19:03   v3Read   23176    0         66808      50
           v3Write  23326    0         102201     50
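The -sort, -order, and -lines options used in the surrounding examples behave like an ordinary sort-then-truncate over the per-interval rows. A rough sketch of the semantics in Python (illustrative only, with made-up sample rows; not how the Data Mover implements it):

```python
# Rough sketch of -sort <field> -order {asc|desc} -lines N semantics.
rows = [
    {"op": "v3Read",   "callsPerSec": 23263, "avgTime": 81632},
    {"op": "v3Write",  "callsPerSec": 23352, "avgTime": 116645},
    {"op": "v3Create", "callsPerSec": 2,     "avgTime": 25304786},
]

def sort_and_trim(rows, sort_key, order="desc", lines=None):
    # Sort the interval's rows by the requested statistic...
    ordered = sorted(rows, key=lambda r: r[sort_key], reverse=(order == "desc"))
    # ...then keep only the first N lines when -lines is given.
    return ordered[:lines] if lines is not None else ordered

top = sort_and_trim(rows, "avgTime", order="desc", lines=2)
print([r["op"] for r in top])  # ['v3Create', 'v3Write']
```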
EXAMPLE #24
-----------
To display the nfsOps-std statistics group sorted by the average time in microseconds per call for a five second interval, in ascending order, type:

$ server_stats server_2 -m nfsops-std -sort avgTime -order asc -i 5 -c 3 -te no

server_2   NFS Op    NFS Op   NFS Op    NFS Op     NFS Op %
Timestamp            Calls/s  Errors/s  uSec/call
04:05:27   v3Write   605      0         8022318    100
           v3Create  2        0         25304786   0
04:05:30   v3Create  8        0         7722823    1
           v3Write   579      0         8435543    99
04:05:33   v3Create  41       0         1468883    7
           v3Write   567      0         8690860    93

EXAMPLE #25
-----------
To display the nfsOps-std statistics group sorted by the average time in microseconds per call for a five second interval, in descending order, and limiting the output to three lines, type:

$ server_stats server_2 -m nfsops-std -sort avgTime -order desc -lines 3 -i 5 -c 3 -te no

server_2   NFS Op    NFS Op   NFS Op    NFS Op     NFS Op %
Timestamp            Calls/s  Errors/s  uSec/call
04:09:39   v3Create  1        0         31657550   0
           v3Write   610      0         6223366    100
04:09:44   v3Write   607      0         6275942    98
           v3Create  11       0         3978054    2
04:09:49   v3Write   574      0         6691264    93
           v3Create  42       0         1073819    7

EXAMPLE #26
-----------
To display the Correlated Set list, type:

$ server_stats server_3 -l
server_3 :
Type            Stat Name
...
Correlated Set  cifs.user
Counter         cifs.user.ALL-ELEMENTS.totalCalls
Counter         cifs.user.ALL-ELEMENTS.readCalls
Counter         cifs.user.ALL-ELEMENTS.writeCalls
Fact            cifs.user.ALL-ELEMENTS.suspectCalls
Counter         cifs.user.ALL-ELEMENTS.totalBytes
Counter         cifs.user.ALL-ELEMENTS.readBytes
Counter         cifs.user.ALL-ELEMENTS.writeBytes
Fact            cifs.user.ALL-ELEMENTS.avgTime
Fact            cifs.user.ALL-ELEMENTS.server
Fact            cifs.user.ALL-ELEMENTS.client
...
Correlated Set  nfs.user
Counter         nfs.user.ALL-ELEMENTS.totalCalls
Counter         nfs.user.ALL-ELEMENTS.readCalls
Counter         nfs.user.ALL-ELEMENTS.writeCalls
Fact            nfs.user.ALL-ELEMENTS.suspectCalls
Counter         nfs.user.ALL-ELEMENTS.totalBytes
Counter         nfs.user.ALL-ELEMENTS.readBytes
Counter         nfs.user.ALL-ELEMENTS.writeBytes
Fact            nfs.user.ALL-ELEMENTS.avgTime
...
Set             store.volume
Correlated Set  store.volume.ALL-ELEMENTS.fileSystem
Counter         store.volume.ALL-ELEMENTS.fileSystem.ALL-ELEMENTS.totalBlocks
Counter         store.volume.ALL-ELEMENTS.fileSystem.ALL-ELEMENTS.readBlocks
Counter         store.volume.ALL-ELEMENTS.fileSystem.ALL-ELEMENTS.writeBlocks
Counter         store.volume.ALL-ELEMENTS.fileSystem.ALL-ELEMENTS.totalBytes
Counter         store.volume.ALL-ELEMENTS.fileSystem.ALL-ELEMENTS.readBytes
Counter         store.volume.ALL-ELEMENTS.fileSystem.ALL-ELEMENTS.writeBytes
...

EXAMPLE #27
-----------
To display cifs.client information with IP resolution, type:

$ server_stats server_2 -i 2 -m cifs.client -l 10

server_2  IP address              CIFS   CIFS   CIFS   CIFS        CIFS   CIFS   CIFS   CIFS
Timestamp                         Total  Read   Write  Suspicious  Total  Read   Write  Avg
                                  Ops/s  Ops/s  Ops/s  Ops         KiB/s  KiB/s  KiB/s  uSecs/call
09:46:49  id=10.103.11.105_20107  28     0      28     0           1627   0      1627   33106
          id=10.103.11.105_20363  27     0      27     0           1533   0      1533   27774
          id=10.103.11.105_18571  26     0      26     0           1470   0      1470   29917
          id=10.103.11.105_13707  25     0      25     0           1439   0      1439   38483
          id=10.103.11.105_17803  25     0      25     0           1466   0      1466   46276
          id=10.103.11.105_13195  23     0      23     0           1340   0      1340   28742
          id=10.103.11.105_16267  23     0      22     0           1277   0      1277   37569
          id=10.103.11.105_16523  23     0      23     0           1340   0      1340   28957
          id=10.103.11.105_17291  23     0      22     0           1277   0      1277   34895
          id=10.103.11.105_19339  23     0      23     0           1313   0      1313   32875
09:46:51  p24.perf1.com_15499     27     0      27     0           1568   0      1568   27840
          p24.perf1.com_16523     26     0      26     0           1507   0      1507   34868
          p24.perf1.com_19595     26     0      26     0           1507   0      1507   27609
          p24.perf1.com_20875     25     0      25     0           1441   0      1441   27752
          p24.perf1.com_14987     25     0      25     0           1410   0      1410   34752
          p24.perf1.com_15243     24     0      24     0           1348   0      1348   28965
          p24.perf1.com_19083     23     0      23     0           1317   0      1317   39723
          p24.perf1.com_19339     23     0      22     0           1256   0      1256   29662
          p24.perf1.com_20619     23     0      23     0           1317   0      1317   33112
          p24.perf1.com_13195     23     0      21     0           1194   0      1194   37954

EXAMPLE #28
-----------
To display nfs.client information with IP resolution, type:

$ server_stats server_2 -monitor nfs.client -te no -c 2

server_2  Client            NFS    NFS    NFS    NFS         NFS    NFS    NFS    NFS
Timestamp                   Total  Read   Write  Suspicious  Total  Read   Write  Avg
                            Ops/s  Ops/s  Ops/s  Ops         KiB/s  KiB/s  KiB/s  uSec/call
09:48:09  id=10.103.11.106  83     0      83     0           42604  0      42604  36077
          id=10.103.11.104  70     0      66     0           28448  0      28448  3182
          id=10.103.11.105  52     0      52     0           26659  0      26659  39984
09:48:11  p25.perf1.com     97     0      97     0           49868  0      49868  13244
          p23.perf1.com     87     0      82     0           35815  0      35815  2549
          p24.perf1.com     61     0      57     0           29242  0      29242  14516
09:48:13  p25.perf1.com     116    0      116    0           59576  0      59576  10201
          p23.perf1.com     99     0      91     0           38273  0      38273  1306
          p24.perf1.com     51     0      51     0           26224  0      26224  11014
09:48:15  p25.perf1.com     85     0      85     0           43591  0      43591  17385
          p23.perf1.com     70     0      62     0           27396  0      27396  551
          p24.perf1.com     45     0      45     0           23214  0      23214  14504

EXAMPLE #29
-----------
To monitor cifs.user information, type:

$ server_stats server_2 -i 1 -m cifs.user

server_2  User name         CIFS   CIFS  CIFS   CIFS        CIFS   CIFS   CIFS   CIFS    CIFS    CIFS
Timestamp                   Total  Read  Write  Suspicious  Total  Read   Write  Avg     Server  Client
                            Op/s   Op/s  Op/s   Ops         KiB/s  KiB/s  KiB/s  uSecs/  Name    Name
                                                                                 call
14:38:52  TESTDOMAIN\admin  1      1     0      0           0      0      0      135     PITTA-DM2-0  P27
14:38:57  TESTDOMAIN\admin  11     11    0      0           1      1      0      2257    PITTA-DM2-0  P27
14:39:02
14:39:07  TESTDOMAIN\admin  0      0     0      0           0      0      0      22      PITTA-DM2-0  P27
14:39:22

server_2  User name         CIFS   CIFS  CIFS   CIFS        CIFS   CIFS   CIFS   CIFS    CIFS    CIFS
Summary                     Total  Read  Write  Suspicious  Total  Read   Write  Avg     Server  Client
                            Op/s   Op/s  Op/s   Ops         KiB/s  KiB/s  KiB/s  uSecs/  Name    Name
                                                                                 call
Minimum   TESTDOMAIN\admin  0      0     0      0           0      0      0      22      -       -
Average   TESTDOMAIN\admin  4      4     0      0           0      0      0      634     -       -
Maximum   TESTDOMAIN\admin  11     11    0      0           1      1      0      2257    -
- EXAMPLE #30 ----------- To monitor NFS User information, type: $ server_stats server_2 -i 5 -m nfs.user server_2 User name NFS NFS NFS NFS NFS NFS NFS NFS Timestamp Total Read Write Suspicious Total Read Write Avg Ops/s Ops/s Ops/s Ops KiB/s KiB/s KiB/s uSecs/c all 14:38:52 TESTDOMAIN\admin 3 3 0 0 0 0 0 405 14:38:57 TESTDOMAIN\admin 33 33 0 0 3 3 0 6771 14:39:02 14:39:07 TESTDOMAIN\admin 0 0 0 0 0 0 0 66 14:39:22 server_2 User name NFS NFS NFS NFS NFS NFS NFS NFS Summary Total Read Write Suspicious Total Read Write Avg Ops/s Ops/s Ops/s Ops KiB/s KiB/s KiB/s uSecs/c all Minimum TESTDOMAIN\admin 0 0 0 0 0 0 0 66 Average TESTDOMAIN\admin 12 12 0 0 1 1 0 1902 Maximum TESTDOMAIN\admin 33 33 0 0 3 3 0 6771 EXAMPLE #31 ----------- To view Correlated Statistics information for Filesystem, type: $ server_stats server_2 -c 2 -i 2 -m fs.filesystem server_2 Filesystem File Total Read Written Average Read Average W riteAverage Timestamp KiB/s KiB/s KiB/s uSecs/Call uSecs/Call u Secs/Call 02:54:49 ufs_2 id=38:7339 512 0 512 43873 0 4 3873 id=38:7221 512 0 512 79528 0 7 9528 id=38:8056 512 0 512 66702 0 6 6702 id=38:8060 512 0 512 50447 0 5 0447 id=38:6099 512 0 512 33244 0 3 3244 id=38:7338 512 0 512 86104 0 8 6104 id=38:6513 512 0 512 45073 0 4 5073 id=38:8192 512 0 512 48825 0 4 8825 id=38:6640 512 0 512 2417 0 2 417 id=38:7332 512 0 512 26889 0 2 6889 id=38:6556 512 0 512 88549 0 8 8549 id=38:7104 512 0 512 25379 0 2 5379 id=38:6136 512 0 512 17293 0 1 7293 id=38:6317 512 0 512 76986 0 7 6986 ufs_0 id=36:6483 512 0 512 11392 0 1 1392 id=36:6724 512 0 512 23286 0 2 3286 id=36:6701 512 0 512 62777 0 6
[The interval output continues for file systems ufs_0 through ufs_5, listing each active file (by inode id, or by full path once resolved) with its Total, Read, and Written KiB/s and its Average Read, Average Write, and overall Average uSecs/Call, and closes with per-filesystem Minimum, Average, and Maximum summary sections for server_2.]

Note: In order to have proper file name resolution, perform the following steps:
1. Start the service:
   $ server_fileresolve movername -service -start
2. Register the filesystem:
   $ server_fileresolve movername -add /filesystem_mount_path
If the service is running but the filesystem is not registered with it, the filename can be resolved manually:
   $ server_fileresolve movername -lookup -filesystem ufs_2 -inode 38

EXAMPLE #32
-----------
To monitor store.volume information, type:
$ server_stats server_2 -i 1 -m store.volume
9:30:06  NBS1 id=0   71762  0     71762  35881  0     35881
         root_fs_2   16     0     16     8      0     8
         d16 ufs_1   2173   2047  126    1087   1024  63
         d9  ufs_1   362    236   126    181    118   63
             ufs_4   47     0     47     24     0     24
         d10 ufs_2   425    362   63     213    181   31
         d18 ufs_1   2835   2756  79     1417   1378  39
         d11 ufs_0   441    378   63     220    189   31
         d19 ufs_2   1465   1339  126    732    669   63
         d12 ufs_2   252    142   110    126    71    55
             ufs_5   31     0     31     16     0     16
         d20 ufs_0   1559   1433  126    780    717   63
         d13 ufs_0   252    157   94     126    79    47
             ufs_3   47     0     47     24     0     24
         d21 ufs_2   1921   1827  94     961    913   47
         d14 ufs_1   772    646   126    386    323   63
         d22 ufs_0   2079   2016  63     1039   1008  31

EXAMPLE #33
-----------
To monitor NFS statistics information, type:
$ server_stats server_2 -i 1 -m nfs.client -noresolve
server_2   Client            NFS    NFS    NFS    NFS         NFS    NFS    NFS    NFS
Timestamp                    Total  Read   Write  Suspicious  Total  Read   Write  Avg
                             Ops/s  Ops/s  Ops/s  Ops         KiB/s  KiB/s  KiB/s  uSec/call
09:31:41   id=10.103.11.106  81     0      81     0           41307  0      41307  11341
           id=10.103.11.104  41     0      41     0           20908  0      20908  1534
           id=10.103.11.105  40     0      40     0           20398  0      20398  13981
09:31:42   id=10.103.11.104  79     0      79     0           40564  0      40564  1085
           id=10.103.11.106  74     0      74     0           38091  0      38091  16159
           id=10.103.11.105  35     0      35     0           17809  0      17809  12770
09:31:43   id=10.103.11.106  87     0      87     0           44384  0      44384  14268
           id=10.103.11.104  58     0      58     0           29589  0      29589  470
           id=10.103.11.105  31     0      31     0           15851  0      15851  10026

$ server_stats server_2 -i 1 -m nfs.user -noresolve
server_2   NFS User  NFS    NFS    NFS    NFS         NFS    NFS    NFS    NFS
Timestamp            Total  Read   Write  Suspicious  Total  Read   Write  Avg
                     Ops/s  Ops/s  Ops/s  Ops         KiB/s  KiB/s  KiB/s  uSec/call
09:32:51   id=0      144    0      144    0           73841  0      73841  11514
           id=550    4      0      1      0           8      0      8      4219
           id=553    2      0      0      0           0      0      0      4696
           id=555    2      0      0      0           0      0      0      4369
           id=558    2      0      0      0           0      0      0      286
           id=563    2      0      0      0           0      0      0      2231
           id=569    2      0      0      0           0      0      0      228
           id=585    2      0      0      0           0      0      0      247
           id=588    2      0      0      0           0      0      0      282
           id=589    2      0      2      0           8      0      8      25418
           id=591    2      0      0      0           0      0      0      214
           id=595    2      0      0      0           0      0      0      3700
           id=551    1      0      1      0           0      0      0      8535
           id=556    1      0      0      0           0      0      0      238
           id=557    1      0      1      0           0      0      0      312
           id=564    1      0      0      0           0      0      0      12921
           id=582    1      0      1      0           0      0      0      5748

EXAMPLE #34
-----------
To display NFS statistics correlated by file system and NFS operation attributes, type:
$ server_stats server_2 -i 1 -m nfs.filesystem
server_2   Filesystem  Client            NFS Op    NFS Op          NFS
Timestamp                                          Avg uSecs/Call  Calls/s
16:50:42   ufs_5       id=10.103.11.18   v3Write   522             1
16:50:43
16:50:44   ufs_5       l18.perf1.com     v3Lookup  13              11
16:50:45   ufs_5       l18.perf1.com     v3Write   1810            17
                                         v3Read    49              133
                                         v3Lookup  10              180
           ufs_4       l17.perf1.com     v3Write   1311            18
                                         v3Read    47              115
                                         v3Lookup  10              220
16:50:46   ufs_5       l18.perf1.com     v3Write   7026            137
                                         v3Read    52              248
           ufs_4       l17.perf1.com     v3Write   6347            131
                                         v3Read    121             354
           ufs_3       id=10.103.11.16   v3Write   297             122
                                         v3Read    91              390
16:50:47   ufs_5       l18.perf1.com     v3Write   6701            161
           ufs_4       l17.perf1.com     v3Write   4754            159
                                         v3Read    47              39
           ufs_3       l16.perf1.com     v3Write   129             160
                                         v3Read    123             38
server_2   Filesystem  Client            NFS Op    NFS Op          NFS
Summary                                            Avg uSecs/Call  Calls/s
Minimum    ufs_5       l18.perf1.com     v3Write   522             1
                                         v3Read    49              133
                                         v3Lookup  10              11
           ufs_4       l17.perf1.com     v3Write   1311            18
                                         v3Read    47              39
                                         v3Lookup  10              220
           ufs_1
           ufs_2
           ufs_0
           ufs_3       l16.perf1.com     v3Write   129             122
                                         v3Read    91              38
           root_fs_common
           root_fs_2
Average    ufs_5       l18.perf1.com     v3Write   4015            79
                                         v3Read    50              190
                                         v3Lookup  12              95
           ufs_4       l17.perf1.com     v3Write   4137            102
                                         v3Read    72              170
                                         v3Lookup  10              220
           ufs_1
           ufs_2
           ufs_0
           ufs_3       l16.perf1.com     v3Write   213             141
                                         v3Read    107             214
           root_fs_common
           root_fs_2
Maximum    ufs_5       l18.perf1.com     v3Write   7026            161
                                         v3Read    52              248
                                         v3Lookup  13              180
           ufs_4       l17.perf1.com     v3Write   6347            159
                                         v3Read    121             354
                                         v3Lookup  10              220
           ufs_1
           ufs_2
           ufs_0
           ufs_3       l16.perf1.com     v3Write   297             160
                                         v3Read    123             390
           root_fs_common
           root_fs_2

EXAMPLE #35
-----------
To display a summary of NFS filesystem statistics correlated by single file system, type:
$ server_stats server_2 -m nfs.filesystem.ufs_4
server_2   Filesystem  Client         NFS Op    NFS Op          NFS
Timestamp                                       Avg uSecs/Call  Calls/s
02:46:00   ufs_4       l23.perf1.com  v3Write   2569            132
                                      v3Create  38              0
02:46:15   ufs_4       l23.perf1.com  v3Write   3313            132
server_2   Filesystem  Client         NFS Op    NFS Op          NFS
Summary                                         Avg uSecs/Call  Calls/s
Minimum    ufs_4       l23.perf1.com  v3Write   2569            132
                                      v3Create  38              0
Average    ufs_4       l23.perf1.com  v3Write   2941            132
                                      v3Create  38              0
Maximum    ufs_4       l23.perf1.com  v3Write   3313            132
                                      v3Create  38              0

EXAMPLE #36
-----------
To display a summary of NFS filesystem statistics correlated by a specific filesystem and specific client, type:
$ server_stats server_2 -i 2 -m nfs.filesystem.ufs_4.client.10.103.11.23
server_2   Filesystem  Client         NFS Op   NFS Op          NFS
Timestamp                                      Avg uSecs/Call  Calls/s
02:41:36   ufs_4       l23.perf1.com  v3Write  2083            120
02:41:38   ufs_4       l23.perf1.com  v3Write  4318            132
02:41:40   ufs_4       l23.perf1.com  v3Write  2660            116
server_2   Filesystem  Client         NFS Op   NFS Op          NFS
Summary                                        Avg uSecs/Call  Calls/s
Minimum    ufs_4       l23.perf1.com  v3Write  2083            116
Average    ufs_4       l23.perf1.com  v3Write  3020            123
Maximum    ufs_4       l23.perf1.com  v3Write  4318            132

EXAMPLE #37
-----------
To display a summary of NFS filesystem statistics for a specific client and operation, type:
$ server_stats server_2 -i 2 -m nfs.filesystem.ufs_4.client.10.103.11.23.op.v3Write
server_2   Filesystem  Client         NFS Op   NFS Op          NFS
Timestamp                                      Avg uSecs/Call  Calls/s
02:42:39   ufs_4       l23.perf1.com  v3Write  2335            123
02:42:41   ufs_4       l23.perf1.com  v3Write  4836            134
02:42:43   ufs_4       l23.perf1.com  v3Write  5093            142
02:42:45   ufs_4       l23.perf1.com  v3Write  2129            142
server_2   Filesystem  Client         NFS Op   NFS Op          NFS
Summary                                        Avg uSecs/Call  Calls/s
Minimum    ufs_4       l23.perf1.com  v3Write  2129            123
Average    ufs_4       l23.perf1.com  v3Write  3598            135
Maximum    ufs_4       l23.perf1.com  v3Write  5093            142

EXAMPLE #38
-----------
To monitor the BranchCache information while an SMB2 BranchCache client is reading a tree, type:
$ server_stats server_3 -i 3 -m cifs.branchcache.basic
server_3   Filtered  Generated  Fail     Hash   Transf  Hits   Miss   Queued  Running
Timestamp  Hash      Hash       Hash     Files  Hash    Req/s  Req/s  Tasks   Tasks
           Files/s   Files/s    Files/s  kB/s   kB/s
16:27:13   0         0          0        0      0       0      0      0       1
16:27:16   0         0          0        0      0       0      2      0       2
16:27:19   0         0          0        0      0       0      0      0       2
16:27:22   0         0          0        0      0       0      0      0       2
16:27:34   0         0          0        0      0       0      2      0       3
16:27:37   0         0          0        0      0       0      1      1       3
16:27:40   0         0          0        0      0       0      2      2       3
16:27:43   0         0          0        0      0       0      2      3       3
16:28:07   0         2          0        35     0       0      0      0       1
16:28:10   0         0          0        0      0       0      0      0       1
16:28:13   0         0          0        30     0       0      0      0       0
16:28:16   0         0          0        0      0       0      0      0       0

$ server_stats server_3 -i 3 -m cifs.branchcache.usage
server_3   Avg Hash  Max Hash  Min Hash  Avg Hash  Max Hash  Min Hash  Max      Count  Rejected
Timestamp  Files B   Files kB  Files B   Files ms  Files ms  Files ms  Threads  Tasks  Tasks
18:16:39   24934     90        1268      20431     71831     132       3        8      0
18:16:42   24934     90        1268      20431     71831     132       3        8      0
18:16:45   24934     90        1268      20431     71831     132       3        8      0

EXAMPLE #39
-----------
To monitor NFS group statistics, type:
$ server_stats server_2 -m nfs.group
server_2   NFS Group  NFS    NFS    NFS    NFS         NFS     NFS    NFS     NFS
Timestamp             Total  Read   Write  Suspicious  Total   Read   Write   Avg
                      Ops/s  Ops/s  Ops/s  Ops diff    KiB/s   KiB/s  KiB/s   uSec/call
02:47:14   id=0       264    0      213    0           108919  0      108919  1683
02:47:29   id=0       416    0      416    0           212821  0      212821  2184
02:47:44   id=0       432    0      432    0           221252  0      221252  5206
server_2   NFS Group  NFS    NFS    NFS    NFS         NFS     NFS    NFS     NFS
Summary               Total  Read   Write  Suspicious  Total   Read   Write   Avg
                      Ops/s  Ops/s  Ops/s  Ops diff    KiB/s   KiB/s  KiB/s   uSec/call
Minimum    id=0       264    0      213    0           108919  0      108919  1683
Average    id=0       370    0      354    0           180998  0      180998  3024
Maximum    id=0       432    0      432    0           221252  0      221252  5206

EXAMPLE #40
-----------
To monitor NFS export statistics, type:
$ server_stats server_2 -m nfs.export
server_2   NFS Export                NFS    NFS   NFS    NFS         NFS    NFS    NFS    NFS
Timestamp                            Total  Read  Write  Suspicious  Total  Read   Write  Avg
                                     Op/s   Op/s  Op/s   Ops         KiB/s  KiB/s  KiB/s  uSecs/call
02:48:14   /server_2/ufs_5/dir00005  157    0     157    0           80213  0      80213  6272
           /server_2/ufs_3/dir00003  139    0     139    0           71305  0      71305  462
           /server_2/ufs_4/dir00004  130    0     130    0           66662  0      66662  3700
02:48:29   /server_2/ufs_5/dir00005  158    0     158    0           80828  0      80828  3454
           /server_2/ufs_3/dir00003  140    0     140    0           71646  0      71646  29
           /server_2/ufs_4/dir00004  133    0     133    0           68233  0      68233  3040
server_2   NFS Export                NFS    NFS   NFS    NFS         NFS    NFS    NFS    NFS
Summary                              Total  Read  Write  Suspicious  Total  Read   Write  Avg
                                     Op/s   Op/s  Op/s   Ops         KiB/s  KiB/s  KiB/s  uSecs/call
Minimum    /server_2/ufs_5/dir00005  157    0     157    0           80213  0      80213  3454
           /server_2/ufs_3/dir00003  139    0     139    0           71305  0      71305  29
           /server_2/ufs_4/dir00004  130    0     130    0           66662  0      66662  3040
Average    /server_2/ufs_5/dir00005  157    0     157    0           80521  0      80521  4863
           /server_2/ufs_3/dir00003  140    0     140    0           71475  0      71475  246
           /server_2/ufs_4/dir00004  132    0     132    0           67447  0      67447  3370
Maximum    /server_2/ufs_5/dir00005  158    0     158    0           80828  0      80828  6272
           /server_2/ufs_3/dir00003  140    0     140    0           71646  0      71646  462
           /server_2/ufs_4/dir00004  133    0     133    0           68233  0      68233  3700

EXAMPLE #41
-----------
To monitor CIFS server statistics, type:
$ server_stats server_2 -m cifs.server
server_2   Server name       CIFS   CIFS  CIFS   CIFS        CIFS   CIFS   CIFS   CIFS
Timestamp                    Total  Read  Write  Suspicious  Total  Read   Write  Avg
                             Op/s   Op/s  Op/s   Ops         KiB/s  KiB/s  KiB/s  uSecs/call
02:50:29   RAVEN-DM2-2       2176   0     1957   0           0      0      0      135
14:38:57   TESTDOMAIN\admin  11     11    0      0           1      1      0      2257
14:39:02
14:39:07   TESTDOMAIN\admin  0      0     0      0           0      0      0      22
14:39:22
server_2   Server name       CIFS   CIFS  CIFS   CIFS        CIFS   CIFS   CIFS   CIFS
Summary                      Total  Read  Write  Suspicious  Total  Read   Write  Avg
                             Op/s   Op/s  Op/s   Ops         KiB/s  KiB/s  KiB/s  uSecs/call
Minimum    TESTDOMAIN\admin  0      0     0      0           0      0      0      22
Average    TESTDOMAIN\admin  4      4     0      0           0      0      0      634
Maximum    TESTDOMAIN\admin  11     11    0      0           1      1      0      2257

EXAMPLE #42
-----------
To monitor FS qtreefile statistics, type:
$ server_stats server_2 -i 1 -c 2 -m fs.qtreefile
server_2   Quota Tree       File         Total  Read   Written  Average Read  Average Write  Average
Timestamp                                KiB/s  KiB/s  KiB/s    uSecs/Call    uSecs/Call     uSecs/Call
02:55:57   ufs_2:/dir00002  id=38:7339   512    0      512      43873         0              43873
                            id=38:10137  1024   0      1024     49557         0              49557
           ufs_0:/dir00000  id=36:10769  1024   0      1024     26188         0              26188
                            id=36:11712  1024   0      1024     45377         0              45377
           ufs_4:/dir00004  id=40:251    2560   0      2560     1538          0              1538
                            id=40:256    2560   0      2560     1280          0              1280
           ufs_1:/dir00001  id=37:17393  1024   0      1024     54210         0              54210
                            id=37:17572  1024   0      1024     39708         0              39708
02:55:58   ufs_2:/dir00002  id=38:10221  1024   0      1024     51350         0              51350
                            id=38:9981   1024   0      1024     37275         0              37275
           ufs_0:/dir00000  id=36:10155  1024   0      1024     60618         0              60618
                            id=36:10453  1024   0      1024     32847         0              32847
           ufs_4:/dir00004  id=40:183    2560   0      2560     3332          0              3332
                            id=40:256    2560   0      2560     1391          0              1391
           ufs_1:/dir00001  id=37:17129  1024   0      1024     77310         0              77310
                            id=37:17453  1024   0      1024     17741         0              17741
           ufs_5:/dir00005  /server_2/ufs_5/dir00005/testdir/iv9_0000055303.tmp
                                         2560   0      2560     1982          0              1982
                            /server_2/ufs_5/dir00005/testdir/KRc_0000008199.tmp
                                         2560   0      2560     2019          0              2019
           ufs_3:/dir00003  id=39:243    2560   0      2560     26            0              26
                            id=39:248    2560   0      2560     29            0              29
server_2   Quota Tree       File         Total  Read   Written  Average Read  Average Write  Average
Summary                                  KiB/s  KiB/s  KiB/s    uSecs/Call    uSecs/Call     uSecs/Call
Minimum    ufs_2:/dir00002  id=38:10063  1024   0      1024     32177         0              32177
                            id=38:10066  1024   0      1024     18897         0              18897
           ufs_2:/dir00008  ufs_2:/dir00014  ufs_2:/dir00020  ufs_2:/dir00026
           ufs_0:/dir00000  id=36:10151  1024   0      1024     42949         0              42949
                            id=36:10155  1024   0      1024     60618         0              60618
           ufs_0:/dir00006  ufs_0:/dir00012  ufs_0:/dir00018  ufs_0:/dir00024
           ufs_4:/dir00004  id=40:183    2560   0      2560     3332          0              3332
                            id=40:194    2560   0      2560     1488          0              1488
           ufs_4:/dir00010  ufs_4:/dir00016  ufs_4:/dir00022  ufs_4:/dir00028
           ufs_1:/dir00001  id=37:15343  1024   0      1024     533           0              533
                            id=37:16235  1024   0      1024     2197          0              2197
           ufs_1:/dir00007  ufs_1:/dir00013  ufs_1:/dir00019  ufs_1:/dir00025
           ufs_5:/dir00005  /server_2/ufs_5/dir00005/testdir/72n_0000028679.tmp
                                         2560   0      2560     1580          0              1580
                            /server_2/ufs_5/dir00005/testdir/74u_0000022535.tmp
                                         2560   0      2560     1547          0              1547
           ufs_5:/dir00011  ufs_5:/dir00017  ufs_5:/dir00023  ufs_5:/dir00029
           ufs_3:/dir00003  id=39:165    2560   0      2560     32            0              32
                            id=39:174    2560   0      2560     29            0              29
           ufs_3:/dir00009  ufs_3:/dir00015  ufs_3:/dir00021  ufs_3:/dir00027
Average    ufs_2:/dir00002  id=38:10063  1024   0      1024     32177         0              32177
                            id=38:10066  1024   0      1024     18897         0              18897
           ufs_2:/dir00008  ufs_2:/dir00014  ufs_2:/dir00020  ufs_2:/dir00026
           ufs_0:/dir00000  id=36:10151  1024   0      1024     42949         0              42949
                            id=36:10155  1024   0      1024     60618         0              60618
           ufs_0:/dir00006  ufs_0:/dir00012  ufs_0:/dir00018  ufs_0:/dir00024
           ufs_4:/dir00004  id=40:183    2560   0      2560     3332          0              3332
                            id=40:194    2560   0      2560     1488          0              1488
           ufs_4:/dir00010  ufs_4:/dir00016  ufs_4:/dir00022  ufs_4:/dir00028
           ufs_1:/dir00001  id=37:15343  1024   0      1024     533           0              533
                            id=37:16235  1024   0      1024     2197          0              2197
           ufs_1:/dir00007  ufs_1:/dir00013  ufs_1:/dir00019  ufs_1:/dir00025
           ufs_5:/dir00005  /server_2/ufs_5/dir00005/testdir/72n_0000028679.tmp
                                         2560   0      2560     1724          0              1724
                            /server_2/ufs_5/dir00005/testdir/74u_0000022535.tmp
                                         2560   0      2560     1627          0              1627
           ufs_5:/dir00011  ufs_5:/dir00017  ufs_5:/dir00023  ufs_5:/dir00029
           ufs_3:/dir00003  id=39:165    2560   0      2560     32            0              32
                            id=39:174    2560   0      2560     29            0              29
           ufs_3:/dir00009  ufs_3:/dir00015  ufs_3:/dir00021  ufs_3:/dir00027
Maximum    ufs_2:/dir00002  id=38:10063  1024   0      1024     32177         0              32177
                            id=38:10066  1024   0      1024     18897         0              18897
           ufs_2:/dir00008  ufs_2:/dir00014  ufs_2:/dir00020  ufs_2:/dir00026
           ufs_0:/dir00000  id=36:10151  1024   0      1024     42949         0              42949
                            id=36:10155  1024   0      1024     60618         0              60618
           ufs_0:/dir00006  ufs_0:/dir00012  ufs_0:/dir00018  ufs_0:/dir00024
           ufs_4:/dir00004  id=40:183    2560   0      2560     3332          0              3332
                            id=40:191    2560   0      2560     1969          0              1969
           ufs_4:/dir00010  ufs_4:/dir00016  ufs_4:/dir00022  ufs_4:/dir00028
           ufs_1:/dir00001  id=37:15343  1024   0      1024     533           0              533
                            id=37:16235  1024   0      1024     2197          0              2197
           ufs_1:/dir00007  ufs_1:/dir00013  ufs_1:/dir00019  ufs_1:/dir00025
           ufs_5:/dir00005  /server_2/ufs_5/dir00005/testdir/72n_0000028679.tmp
                                         2560   0      2560     1867          0              1867
                            /server_2/ufs_5/dir00005/testdir/74u_0000022535.tmp
                                         2560   0      2560     1964          0              1964
           ufs_5:/dir00011  ufs_5:/dir00017  ufs_5:/dir00023  ufs_5:/dir00029
           ufs_3:/dir00003  id=39:165    2560   0      2560     32            0              32
                            id=39:168    2560   0      2560     29            0              29
           ufs_3:/dir00009  ufs_3:/dir00015  ufs_3:/dir00021  ufs_3:/dir00027

EXAMPLE #43
-----------
To monitor NFS VDM client statistics, type:
$ server_stats server_3 -i 1 -m nfs.vdm.*.client -c 5 -te no
server_2   VDM name  Client           NFS    NFS    NFS    NFS         NFS    NFS    NFS    NFS
Timestamp                             Total  Read   Write  Suspicious  Total  Read   Write  Avg
                                      Ops/s  Ops/s  Ops/s  Ops         KiB/s  KiB/s  KiB/s  uSecs/call
10:42:43   vdm_1     id=10.103.11.13  2      0      0      1           0      0      0      18857
           vdm_2     id=10.103.11.14  7      0      1      0           0      0      0      16378
10:42:44   vdm_1     id=10.103.11.13  2      0      0      1           0      0      0      7882
           vdm_2     id=10.103.11.14  8      0      1      0           0      0      0      11784
10:42:45   vdm_1     l13.perf1.com    2      0      0      1           0      0      0      9762
           vdm_2     l14.perf1.com    7      0      1      0           0      0      0      19813
10:42:46   vdm_1     l13.perf1.com    2      0      0      1           0      0      0      69773
           vdm_2     l14.perf1.com    7      0      1      0           0      0      0      8257
10:42:47   vdm_1     l13.perf1.com    2      0      0      1           0      0      0      10473
           vdm_2     l14.perf1.com    8      0      1      0           0      0      0      1835

EXAMPLE #44
-----------
To monitor NFS VDM user statistics, type:
$ server_stats server_3 -i 1 -m nfs.vdm.*.user -c 5 -te no
server_2   VDM name  NFS User  NFS    NFS    NFS    NFS         NFS    NFS    NFS    NFS
Timestamp                      Total  Read   Write  Suspicious  Total  Read   Write  Avg
                               Ops/s  Ops/s  Ops/s  Ops         KiB/s  KiB/s  KiB/s  uSecs/call
10:43:20   vdm_1     id=0      2      0      0      1           0      0      0      18791
           vdm_2     id=0      5      0      1      0           0      0      0      6070
10:43:21   vdm_1     id=0      2      0      0      1           0      0      0      15574
           vdm_2     id=0      3      0      0      0           0      0      0      94
10:43:22   vdm_1     id=0      2      0      0      1           0      0      0      16976
           vdm_2     id=0      2      0      1      0           0      0      0      116061
10:43:23   vdm_1     id=0      2      0      0      1           0      0      0      38150
           vdm_2     id=0      5      0      1      0           0      0      0      19183
10:43:24   vdm_1     id=0      2      0      0      1           0      0      0      63362
           vdm_2     id=0      5      0      1      0           0      0      0      53115

EXAMPLE #45
-----------
To monitor NFS VDM group statistics, type:
$ server_stats server_3 -i 1 -m nfs.vdm.*.group -c 5 -te no
server_2   VDM name  NFS Group  NFS    NFS    NFS    NFS         NFS    NFS    NFS    NFS
Timestamp                       Total  Read   Write  Suspicious  Total  Read   Write  Avg
                                Ops/s  Ops/s  Ops/s  Ops         KiB/s  KiB/s  KiB/s  uSecs/call
10:43:46   vdm_1     id=0       2      0      0      1           0      0      0      6381
           vdm_2     id=0       5      0      1      0           0      0      0      7557
10:43:47   vdm_1     id=0       1      0      0      0           0      0      0      10
           vdm_2     id=0       5      0      1      0           0      0      0      29440
10:43:48   vdm_1     id=0       1      0      0      1           0      0      0      198524
           vdm_2     id=0       6      0      1      0           0      0      0      5877
10:43:49   vdm_1     id=0       2      0      0      1           0      0      0      52406
           vdm_2     id=0       5      0      1      0           0      0      0      34691
10:43:50   vdm_1     id=0       2      0      0      1           0      0      0      13695
           vdm_2     id=0       5      0      1      0           0      0      0      2984

EXAMPLE #46
-----------
To monitor NFS VDM export statistics, type:
$ server_stats server_3 -i 1 -m nfs.vdm.*.export -c 5 -te no
server_2   VDM name  NFS Export  NFS    NFS    NFS    NFS         NFS    NFS    NFS    NFS
Timestamp                        Total  Read   Write  Suspicious  Total  Read   Write  Avg
                                 Ops/s  Ops/s  Ops/s  Ops         KiB/s  KiB/s  KiB/s  uSecs/call
10:44:10   vdm_1     /demo_0     2      0      0      1           0      0      0      16975
           vdm_2     /demo_1     5      0      1      0           0      0      0      4755
10:44:11   vdm_1     /demo_0     2      0      0      1           0      0      0      42104
           vdm_2     /demo_1     5      0      1      0           0      0      0      75083
10:44:12   vdm_1     /demo_0     2      0      0      0           0      0      0      18991
           vdm_2     /demo_1     3      0      0      0           0      0      0      96
10:44:13   vdm_1     /demo_0     2      0      0      1           0      0      0      14357
           vdm_2     /demo_1     2      0      1      0           0      0      0      138779
10:44:14   vdm_1     /demo_0     2      0      0      1           0      0      0      137153
           vdm_2     /demo_1     5      0      1      0           0      0      0      9511
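Interval outputs such as those above are plain whitespace-separated text, so they can be post-processed with standard shell tools once captured to a file. A sketch (the capture file name is hypothetical, and the field positions are assumptions read off the nfs.export sample in EXAMPLE #40) that totals the per-export NFS write throughput for one sample interval:

```shell
# Sketch: sum the NFS Write KiB/s column (8th field after the export path)
# across exports for one interval of 'server_stats server_2 -m nfs.export'.
# The records below are inlined from EXAMPLE #40's 02:48:14 interval.
cat > /tmp/nfs_export_sample.txt <<'EOF'
/server_2/ufs_5/dir00005 157 0 157 0 80213 0 80213 6272
/server_2/ufs_3/dir00003 139 0 139 0 71305 0 71305 462
/server_2/ufs_4/dir00004 130 0 130 0 66662 0 66662 3700
EOF
awk '{total += $8} END {print total " KiB/s written"}' /tmp/nfs_export_sample.txt
```

For the three records above this prints 218180 KiB/s written.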
--------------------------------------
Last Modified: August 24, 2012 11:06 a.m.
server_sysconfig Manages the hardware configuration for the specified Data Mover(s). SYNOPSIS -------- server_sysconfig {
Caution: The speed and duplex settings on both sides of the physical connection must be the same. Mismatched speed and duplex settings can cause errors and impact network performance. For example, if the duplex is set to half on one end and full on the other, there might be network errors and performance issues.

GIGABIT ETHERNET FIBER
----------------------
For Gigabit Ethernet Fiber connections, the speed is automatically set to 1000 and must remain at that setting, so no speed option is required.

linkneg={enable|disable}
Enables or disables autonegotiation on the network adapter card; disable it if the Gigabit switch does not support autonegotiation. The default is enable.

rxflowctl={enable|disable}
Enables the ability to accept and process pause frames. The default is disable.

txflowctl={enable|disable}
Enables pause frames to be transmitted. The default is disable.

GIGABIT ETHERNET COPPER
-----------------------
speed={10|100|1000|auto}
Sets the speed for the port. auto (the default) turns on autonegotiation; setting a fixed speed disables autonegotiation.

duplex={full|half|auto}
Sets the duplex to full, half, or auto. auto (the default) turns on autonegotiation; setting a fixed duplex disables autonegotiation.

Caution: The speed and duplex settings on both sides of the physical connection must be the same. Mismatched speed and duplex settings can cause errors and impact network performance. For example, if the duplex is set to half on one end and full on the other, there might be network errors and performance issues.

rxflowctl={enable|disable}
Enables the ability to accept and process pause frames. The default is disable.

txflowctl={enable|disable}
Enables pause frames to be transmitted. The default is disable.

-virtual -delete [-Force]
device=
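The copper-port options described above can be combined in a single -option string. A sketch (the Data Mover and device names are placeholders, and combining flow-control keywords with speed/duplex in one -option string is an assumption extrapolated from the comma-separated form shown in EXAMPLE #4):

```shell
# Sketch: force 100 Mb/s full duplex and enable pause-frame handling on
# copper port cge0. Placeholder names; verify option keywords and syntax
# against your release before use.
server_sysconfig server_2 -pci cge0 -option "speed=100,duplex=full,rxflowctl=enable,txflowctl=enable"
```

Remember to configure the matching speed and duplex settings on the switch port at the other end of the link.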
INTERFACE OUTPUTS
-----------------
The network interface cards available depend on the type of system used. For the VNX, the following NICs are available: loop, ace, ana, cge, el30, el31, fpa, and fa2. Note that loop, el30, and el31 are for internal use only. For the NS series, the following NICs are available: loop, cge, el30, el31, and fge. VDMs are included in both the CNS and NS series.

EXAMPLE #1
----------
For the NS series, to view the system configuration for a Data Mover, type:
$ server_sysconfig server_2 -Platform
server_2 :
 Processor             = Intel Pentium 4
 Processor speed (MHz) = 3100
 Total main memory (MB)= 4023
 Mother board          = Barracuda XP
 Bus speed (MHz)       = 533
 Bios Version          = 3.30
 Post Version          = Rev. 02.14

For the CNS series, to view the system configuration for a Data Mover, type:
$ server_sysconfig server_2 -Platform
server_2 :
 Processor             = Intel Pentium 4
 Processor speed (MHz) = 1600
 Total main memory (MB)= 3967
 Mother board          = CMB-400
 Bus speed (MHz)       = 400
 Bios Version          = No Ver Info
 Post Version          = No Ver Info

EXAMPLE #2
----------
For the NS series, to view the installed PCI configuration for a Data Mover, type:
$ server_sysconfig server_2 -pci
server_2 : PCI DEVICES:
 On Board:
  Agilent Fibre Channel Controller
   0: fcp-0 IRQ: 22 addr: 50060160006004f0
   0: fcp-1 IRQ: 21 addr: 50060161006004f0
   0: fcp-2 IRQ: 18 addr: 50060162006004f0
   0: fcp-3 IRQ: 20 addr: 50060163006004f0
  Broadcom Gigabit Ethernet Controller
   0: fge0 IRQ: 24 linkneg=enable txflowctl=disable rxflowctl=disable
   0: fge1 IRQ: 23 linkneg=enable txflowctl=disable rxflowctl=disable
   0: cge0 IRQ: 24 speed=auto duplex=auto txflowctl=disable rxflowctl=disable
   0: cge1 IRQ: 23 speed=auto duplex=auto txflowctl=disable rxflowctl=disable
   0: cge2 IRQ: 26 speed=auto duplex=auto txflowctl=disable rxflowctl=disable
   0: cge3 IRQ: 25 speed=auto duplex=auto txflowctl=disable rxflowctl=disable
   0: cge4 IRQ: 28 speed=auto duplex=auto txflowctl=disable rxflowctl=disable
   0: cge5 IRQ: 27 speed=auto duplex=auto txflowctl=disable rxflowctl=disable

For the CNS series, to view the installed PCI configuration for a Data Mover, type:
$ server_sysconfig server_2 -pci
server_2 : PCI DEVICES:
 Slot: 1 Emulex LP9000 Fibre Channel Controller
  0: fcp-0 IRQ: 23 addr: 10000000c92b5a10
  1: fcp-1 IRQ: 24 addr: 10000000c92b5a11
 Slot: 2 Emulex LP9000 Fibre Channel Controller
  0: fcp-2 IRQ: 22 addr: 10000000c92b514e
 Slot: 4 Intel 10/100/1K Ethernet Controller
  0: cge0 IRQ: 18 speed=auto duplex=auto rxflowctl=disable txflowctl=disable
  1: cge1 IRQ: 19 speed=auto duplex=auto rxflowctl=disable txflowctl=disable
  2: cge2 IRQ: 20 speed=auto duplex=auto rxflowctl=disable txflowctl=disable
  3: cge3 IRQ: 21 speed=auto duplex=auto rxflowctl=disable txflowctl=disable
 Slot: 5 Alteon Tigon-2 Gigabit Ethernet Controller
  0: ace0 IRQ: 25 linkneg=enable rxflowctl=disable txflowctl=disable

Where:
 Value      Definition
 On Board   The names of each PCI card installed.
 0          Port number inside the slot the card is plugged into. If the card in the slot has 4 ports, the first port is marked as 0, the second as 1, the third as 2, and the fourth as 3.
 IRQ        Interrupt vector.
 speed      Speed configured. Possible values are: auto, 10, 100, 1000.
 duplex     Duplex setting configured. Possible values are: auto, half, full.
 txflowctl  Transmit MAC flow control. Possible values are: disable, enable.
 rxflowctl  Receive MAC flow control. Possible values are: disable, enable.

EXAMPLE #3
----------
To view the firmware version for all devices, type:
$ server_sysconfig server_2 -pci -fmwr_version
server_2 : PCI DEVICES:
 On Board:
  VendorID=0x1120 DeviceID=0x1B00 Controller
   0: scsi-0 IRQ: 32
   0: scsi-16 IRQ: 33
   0: scsi-32 IRQ: 34
   0: scsi-48 IRQ: 35
  Broadcom 10 Gigabit Ethernet Controller
   0: fxg-8-0 IRQ: 38 Firmware Version: 6.2.11
   0: fxg-8-1 IRQ: 40 Firmware Version: 6.2.11
   0: cxg-9-0 IRQ: 44 Firmware Version: 6.2.17
   0: cxg-9-1 IRQ: 46 Firmware Version: 6.2.17

To view the firmware version for a single device, type:
$ server_sysconfig server_2 -pci -fmwr_version fxg-2-1
server_2 :
 On Board:
  Broadcom 10 Gigabit Ethernet Controller
   0: fxg-2-1 IRQ: 38 Firmware Version: 6.2.11

EXAMPLE #4
----------
To set the speed to 100 Mb/s and the duplex to full for the cge0 interface, type:
$ server_sysconfig server_2 -pci cge0 -option speed=100,duplex=full
server_2 : done

EXAMPLE #5
----------
To display the hardware configuration for network device cge0, type:
$ server_sysconfig server_2 -pci cge0
server_2 :
 On Board:
  Broadcom Gigabit Ethernet Controller
   0: cge0 IRQ: 24 speed=100 duplex=full txflowctl=disable rxflowctl=disable

EXAMPLE #6
----------
To create an Ethernet channel as a virtual device, type:
$ server_sysconfig server_2 -virtual -name trk0_ec -create trk -option "device=cge2,cge3"
server_2 : done

EXAMPLE #7
----------
To display all virtual devices on server_2, type:
$ server_sysconfig server_2 -virtual
server_2 :
 Virtual devices:
  trk0_ec devices=cge2 cge3
  fsn failsafe nic devices :
  trk trunking devices : trk0_ec

Where:
 Value            Definition
 Virtual Devices  All the configured virtual devices (trunking or fail safe) on the server.
 devices          Lists the virtual or physical device names that are in the
standby Standby device in the FSN. EXAMPLE #11 ----------- To create an aggregated link using the LACP protocol with load balancing method set to mac, type: $ server_sysconfig server_2 -virtual -name trk0_la -create trk -option "device=cge2,cge3 protocol=lacp lb=mac" server_2 : done EXAMPLE #12 ----------- To delete an Ethernet channel, trk0_ec, type: $ server_sysconfig server_2 -virtual -delete -Force trk0_ec server_2 : done -------------------------------------- Last Modified: May 12, 2011 01:15 pm
server_sysstat Displays the operating system statistics for the specified Data Movers. SYNOPSIS -------- server_sysstat {
total paged out = 1 page in rate = 0 page out rate = 0 block map memory quota = 1048576(KB) block map memory consumed = 624(KB) Where: ------ Value Definition ----- ---------- total paged in Total number of blockmap pages paged in since the system booted. total paged out Total number of blockmap pages paged out since the system booted. page in rate Number of blockmap pages paged in per second (over last 180 seconds). page out rate Number of blockmap pages paged out per second (over last 180 seconds). block map memory quota Current value of the blockmap memory quota. block map memory consumed Amount of memory consumed for blockmaps. ------------------------------------------------------------------------------ Last modified: April 26 2011, 06:00 pm.
server_tftp Manages the Trivial File Transfer Protocol (TFTP) for the specified Data Movers. SYNOPSIS -------- server_tftp {
$ server_tftp server_2 -service -status
server_2 : Tftp Running

EXAMPLE #3
----------
To modify a path on server_2 for TFTP service with read access for all, and write access for nobody, type:
$ server_tftp server_2 -set -path /ufs1 -readaccess all -writeaccess none
server_2 : done

EXAMPLE #4
----------
To display TFTP information for server_2, type:
$ server_tftp server_2 -info
server_2 : path="/ufs1/" readaccess=all writeaccess=none

EXAMPLE #5
----------
To display statistics for server_2, type:
$ server_tftp server_2 -service -stats
server_2 :
 Attempted Transfers:28
 Successful Transfers:27
 createdthrds:28
 deletedthrds:28
 timedoutthrds:0
 TotalBinds:28
 TotalUnbinds:28
 BindFailures:0
 InvalidAttempts:0
 AttemptedReadTransfers:19
 SuccessfulReadTransfers:19
 AttemptedWriteTransfers:9
 SuccessfulWriteTransfers:8

Where:
 Value                 Definition
 Attempted Transfers   Total TFTP transfers attempted by that time.
 Successful Transfers  Total number of successful TFTP transfers.
 createdthrds          Total number of TFTP threads created (equal to total transfers).
 deletedthrds          Total number of threads deleted (equal to total created threads).
 timedoutthrds         Number of timed-out threads. For TFTP transfers, in case of any failures, each thread will time out and free itself.
 TotalBinds            Total number of binds.
 TotalUnbinds          Total number of unbinds.
 BindFailures          Number of bind failures. If the port we try to bind to is in use, the bind fails, and the transfer retries with a different port.
 InvalidAttempts       Invalid TFTP transfer requests from clients, such as trying to transfer a non-existent file.
AttemptedReadTransfers Total TFTP read transfers initiated. SuccessfulReadTransfers Total TFTP read transfers successfully completed. AttemptedWriteTransfers Total TFTP write transfers initiated. SuccessfulWriteTransfers Total TFTP write transfers successfully completed. EXAMPLE #6 ---------- To stop TFTP service on server_2, type: $ server_tftp server_2 -service -stop server_2 : done EXAMPLE #7 ---------- To delete the settings for the TFTP service on server_2, type: $ server_tftp server_2 -clear server_2 : done -------------------------------------- Last Modified: April 26, 2011 3:00 pm
server_umount Unmounts file systems. SYNOPSIS -------- server_umount {
To temporarily unmount a file system by specifying its mount point as /bin, type: $ server_umount server_2 -temp /bin server_2: done EXAMPLE #3 ---------- To temporarily unmount a file system by specifying its file system name as ufs1, type: $ server_umount server_2 -temp ufs1 server_2: done -------------------------------------- Last Modified: April 27, 2011 12:55 pm
server_uptime Displays the length of time that a specified Data Mover has been running since the last reboot. SYNOPSIS -------- server_uptime {
server_user Manages user accounts for the specified Data Movers. SYNOPSIS -------- server_user {
# /nas/sbin/server_user server_2 -add user1
Creating new user user1
User ID: 100
Group ID: 101
Comment:
Home directory:
Shell:
Note: Comment, Home directory, and Shell are optional; all others are required.

EXAMPLE #2
----------
To create a user account for NDMP connections, with MD5 password encryption and to configure the password, type:
# /nas/sbin/server_user server_2 -add -md5 -passwd user_name
Creating new user user_name
User ID: 101
Group ID: 100
Home directory:
Changing password for user user_name
New passwd:
Retype new passwd:

EXAMPLE #3
----------
To list the user accounts, type:
# /nas/sbin/server_user server_2 -list
server_2:
APM000438070430000_APM000420008180000:LNEa7Fjh/43jQ:9000:9000:ftsQgHsc2oMrdysaOnWeLhN8vB::ndmp_md5
user1:!!:100:101:::
user_name:WX72mBTFp/qV.:101:100:W9z7HIndimdaHs2anCL20EBfNd::ndmp_md5

EXAMPLE #4
----------
To modify account information for user1, type:
# /nas/sbin/server_user server_2 -modify user1
Modifying user account user1
1 User ID (100)
2 Group ID (101)
3 Home directory ()
4 Comment ()
5 Shell ()
Please select a field to modify, "done" to apply your changes or "quit" to cancel: 2
Group ID: 102
Please select a field to modify, "done" to apply your changes or "quit" to cancel: quit

EXAMPLE #5
----------
To lock an account password for ndmp, type:
# /nas/sbin/server_user server_2 -passwd -lock user_name
Changing password for user user_name
Locking password for user user_name

EXAMPLE #6
----------
To disable the password for user1, type:
# /nas/sbin/server_user server_2 -passwd -disable user1 Changing password for user user1 Removing password for user user1 EXAMPLE #7 ---------- To unlock an account password for user1, type: # /nas/sbin/server_user server_2 -passwd -unlock -force user1 Changing password for user user1 Unlocking password for user user1 EXAMPLE #8 ---------- To delete a user account for user1, type: # /nas/sbin/server_user server_2 -delete user1 -------------------------------------- Last Modified: April 26, 2011 3:00 pm
server_usermapper Provides an interface to manage the Internal Usermapper service. SYNOPSIS -------- server_usermapper {
Note: If there is no specific reason to use particular UID and GID ranges for your environment's domains, EMC encourages you to use the automatic mapping method and let Internal Usermapper automatically assign new UIDs/GIDs based on the next available values. If you need to use an existing Usermapper configuration file, you must specify the config option during the upgrade procedure, that is, before Internal Usermapper has begun issuing default UIDs and GIDs. -disable Disables the Usermapper service. -remove -all Removes all entries from the Usermapper databases and destroys the database structure. The Usermapper service must be disabled before you can issue this option. Caution: It is recommended that you consult with Customer Support before issuing the -remove -all option. This option deletes all Usermapper database entries and may result in users losing access to file systems. If you decide to use the -remove -all option, you should first back up your existing Usermapper database file and usrmap.cfg file (if one is in use). -Import {-user|-group} [-force]
Service Class: If the service is a primary or secondary service. Primary The IP address of the primary Usermapper service used by a secondary service. The (c) against the IP address indicates that the primary Usermapper is available and has been connected. EXAMPLE #2 ---------- To enable a secondary Usermapper service, type: $ server_usermapper server_4 -enable primary=172.24.102.238 server_4 : done EXAMPLE #3 ---------- To verify the status of Internal Usermapper for the primary Usermapper, type: $ server_usermapper server_2 server_2 : Usrmapper service: Enabled Service Class: Primary EXAMPLE #4 ---------- To verify the status of Internal Usermapper for the secondary Usermapper, type: $ server_usermapper server_4 server_4 : Usrmapper service: Enabled Service Class: Secondary Primary = 172.24.102.238 (c) See Example #1 for a description of command outputs. EXAMPLE #5 ---------- To export user information from the Usermapper database, type: $ server_usermapper server_2 -Export -user /home/nasadmin/users_server_2.passwd server_2 : done EXAMPLE #6 ---------- To export group information from the Usermapper database, type: $ server_usermapper server_2 -Export -group /home/nasadmin/group_server_2.group server_2 : done EXAMPLE #7 ---------- To import the user file users_server_2.passwd for server_2, type: $ server_usermapper server_2 -Import -user /home/nasadmin/users_server_2.passwd server_2 : done EXAMPLE #8 ---------- To import the group file group_server_2.group for server_2, type: $ server_usermapper server_2 -Import -group /home/nasadmin/group_server_2.group server_2 : done
EXAMPLE #9 ---------- To disable an Internal Usermapper service, type: $ server_usermapper server_2 -disable server_2 : done EXAMPLE #10 ----------- To remove all entries from the Usermapper database, type: $ server_usermapper server_2 -remove -all server_2 : Warning: This operation will erase all user/group mappings. CIFS users may lose access. Continue(Y/N): done -------------------------------------- Last Modified: April 26, 2011 2:30 pm
server_version Displays the software version running on the specified Data Movers. SYNOPSIS -------- server_version {
server_viruschk Manages the virus checker configuration for the specified Data Movers. SYNOPSIS -------- server_viruschk {
$ server_viruschk server_2 server_2 : 10 threads started 1 Checker IP Address(es): 172.24.102.18 ONLINE at Mon Jan 31 18:35:43 2005 (GMT-00:00) RPC program version: 3 CAVA release: 3.3.5, AV Engine: Network Associates Last time signature updated: Thu Jan 27 19:38:35 2005 (GMT-00:00) 31 File Mask(s): *.exe *.com *.doc *.dot *.xl? *.md? *.vxd *.386 *.sys *.bin *.rtf *.obd *.dll *.scr *.obt *.pp? *.pot *.ole *.shs *.mpp *.mpt *.xtp *.xlb *.cmd *.ovl *.dev *.zip *.tar *.arj *.arc *.z No File excluded Share \\DM112-CGE0\CHECK$ RPC request timeout=25000 milliseconds RPC retry timeout=5000 milliseconds High water mark=200 Low water mark=50 Scan all virus checkers every 60 seconds When all virus checkers are offline: Continue to work with Virus Checking and CIFS Scan on read if access Time less than Thu Jan 27 19:38:35 2005 (GMT-00:00) Panic handler registered for 65 chunks Where: Value Indicates threads started The number of threads that have been started. Checker IP Address(es) The number of VC servers defined in /.etc/viruschecker.conf RPC program version The RPC program version that CAVA uses. Share The UNC name used by CAVA to access the Data Mover. RPC request timeout Timeout for the full CAVA request. RPC retry timeout Timeout for one unitary CAVA request. High water mark A log event is generated when the number of files in the request queue becomes greater than 200. Low water mark A log event is generated when the number of files in the request queue becomes less than 50. Panic handler registered for 65 chunks Panic is used to memorize the names of unchecked files. ERROR_SETUP List of errors reported by CAVA. min, max, average Min, max, and average time for CAVA requests.
EXAMPLE #2 ---------- To display the status of the virus checker, type: $ server_viruschk server_2 -audit server_2 : Total Requests : 138 Requests in progress : 25 NO ANSWER from the Virus Checker Servers: 0 ERROR_SETUP : 0 FILE_NOT_FOUND : 0 ACCESS_DENIED : 0 FAIL : 0 TIMEOUT : 0 Total Infected Files : 875 Deleted Infected Files : 64 Renamed Infected Files : 0 Modified Infected Files : 811 min=70915 uS, max=1164891 uS, average=439708 uS 15 File(s) in the collector queue 10 File(s) processed by the AV threads
Read file /.etc/viruschecker.audit to display the list of pending requests Where: Value Definition Total Infected Files The number of files found that contained viruses. This displays only if infected files are found and remains visible until the Data Mover is rebooted or the CAVA viruschecking service has been restarted. Deleted Infected Files The number of files that contained viruses that were deleted. This displays only if infected files are found and remains visible until the Data Mover is rebooted or the CAVA viruschecking service has been restarted. Renamed Infected Files The number of files that contained viruses that were renamed. This displays only if infected files are found and remains visible until the Data Mover is rebooted or the CAVA viruschecking service has been restarted. Modified Infected Files The number of files that contained viruses that were modified. This displays only if infected files are found and remains visible until the Data Mover is rebooted or the CAVA viruschecking service has been restarted. 
EXAMPLE #3 ---------- To update the virus checker configuration file that is resident on the Data Mover, type: $ server_viruschk server_2 -update server_2 : done EXAMPLE #4 ---------- To set the access time for the virus checker configuration file, type: $ server_viruschk server_2 -set accesstime=now server_2 : done EXAMPLE #5 ---------- To start a scan on file system, type: $ server_viruschk server_2 -fsscan ufs1 -create server_2 : done EXAMPLE #6 ---------- To check the scan of a file system, type: $ server_viruschk server_2 -fsscan ufs1 -list server_2 : FileSystem 24 mounted on /ufs1: 8 dirs scanned and 22 files submitted to the scan engine firstFNN=0x0, lastFNN=0xe0f34b70, queueCount=0, burst=10 EXAMPLE #7 ---------- To check the scan status on all file systems, type: $ server_viruschk server_2 -fsscan server_2 : FileSystem 24 mounted on /ufs1: 8 dirs scanned and 11 files submitted to the scan engine firstFNN=0x0, lastFNN=0xe0eba410, queueCount=0, burst=10
FileSystem 25 mounted on /ufs2: 9 dirs scanned and 11 files submitted to the scan engine firstFNN=0x0, lastFNN=0xe0010b70, queueCount=0, burst=10 EXAMPLE #8 ---------- To stop a scan on a file system, type: $ server_viruschk server_2 -fsscan ufs1 -delete server_2 : done -------------------------------------- Last Modified: April 26, 2011 at 12:30 pm
server_vtlu Configures a virtual tape library unit (VTLU) on the specified Data Movers. SYNOPSIS -------- server_vtlu {
Configures the number of slots in the VTLU. If no value is defined, then the default value of 32 is used. [-impexp
of -tapesize
-tape -eject
Value Definition id Unique VTLU identifier that is assigned automatically. slots Number of virtual slots in the VTLU. import/export slots Number of virtual import/export slots in the VTLU. robot vendor Vendor name of the virtual robot; maximum length is eight characters. robot product Product name of the virtual robot; maximum length is 16 characters. robot revision Revision number of the virtual robot; maximum length is four characters. robot serial number Serial number of the virtual robot that is assigned automatically. robot device name Device name of the virtual robot; only the first number, the starting chain, can be modified. drives Number of virtual drives in the VTLU. drive vendor Vendor name of the virtual drive; maximum length is eight characters. drive product Product name of the virtual drive; maximum length is 16 characters. drive revision Revision number of the virtual drive; maximum length is four characters. EXAMPLE #4 ---------- To list all of the VTLUs on server_2, type: $ server_vtlu server_2 -tlu -list server_2 : id vendor product revision serial_number device_name 3 EMCCorp vtluRobot 1.1a P8gIgqs2k5 c1t0l0 Where: Value Definition id Unique VTLU identifier that is assigned automatically. vendor Vendor name of the virtual robot; maximum length is eight characters. product Product name of the virtual robot; maximum length is 16 characters. revision VTLU robot's revision number; maximum length is four characters. serial_number VTLU serial number that is assigned automatically. device_name The device name of the VTLU robot; only the first number, the starting chain, can be modified. EXAMPLE #5 ---------- To display the information for the VTLU on the Data Mover identified by its ID, type:
$ server_vtlu server_2 -tlu -info 3 server_2 : id = 3 slots = 256 import/export slots = 64 robot vendor = EMCCorp robot product = vtluRobot robot revision = 1.1a robot serial number = P8gIgqs2k5 robot device name = c1t0l0 drives = 2 drive vendor = EMCCorp drive product = vtluDrive drive revision = 2.2a EXAMPLE #6 ---------- To modify vendor, product, and revision information for the robot and drive of VTLU 3 for server_2, type: $ server_vtlu server_2 -tlu -modify 3 -robot -vendor EMC -product vRobot -revision 1.1b -drives 3 -drive -vendor EMC -product vDrive -revision 2.2b server_2 : done EXAMPLE #7 ---------- To modify the number of virtual import/export slots and the number of virtual slots of VTLU 3 for server_2, type: $ server_vtlu server_2 -tlu -modify 3 -slots 8 -impexp 4 server_2 : done EXAMPLE #8 ---------- To add new storage for VTLU 3 on server_2, with 5 virtual tapes of 1 GB located in slots, each with barcode prefix dstpre, using the ufs1 file system, type: $ server_vtlu server_2 -storage -new ufs1 -tlu 3 -tapesize 1G -tapes 5 -barcodeprefix dstpre -destination slot server_2 : done EXAMPLE #9 ---------- To extend VTLU 3 on server_2 by adding 2 virtual tapes of 1 GB and placing them in the import/export virtual slots, type: $ server_vtlu server_2 -storage -extend ufs1 -tlu 3 -tapesize 1G -tapes 2 -destination impexp server_2 : done EXAMPLE #10 ----------- To export storage from VTLU 3 stored on ufs1 located on server_2, type: $ server_vtlu server_2 -storage -export ufs1 -tlu 3 server_2 : done EXAMPLE #11 ----------- To import the ufs1 file system to VTLU 3 and place the virtual tapes in the vault, type: $ server_vtlu server_2 -storage -import ufs1 -tlu 3 -destination vault server_2 : done
EXAMPLE #12 ----------- To list the storage on VTLU 3, type: $ server_vtlu server_2 -storage -list 3 server_2 : tlu_id filesystem barcode_prefix 3 ufs1 dstpre Where: Value Definition tlu_id Unique VTLU identifier that is assigned automatically. filesystem Name of the file system associated with the VTLU. barcode_prefix Modifiable prefix assigned to virtual tapes that is constant across a file system. EXAMPLE #13 ----------- To list VTLU information on VTLU 3, type: $ server_vtlu server_2 -tape -list 3 server_2 : barcode filesystem capacity(GB) location source_slot dstpre0001 ufs1 1 vault dstpre0002 ufs1 1 vault dstpre0003 ufs1 1 vault dstpre0004 ufs1 1 vault dstpre0005 ufs1 1 vault dstpre0006 ufs1 1 vault dstpre0000 ufs1 1 impexp:0 Where: Value Definition barcode Virtual tape barcode, consisting of the modifiable barcode prefix and a four-digit number that is assigned automatically. filesystem Name of the file system. capacity (GB) Virtual tape capacity in GB. location Element type and element ID of the virtual tape; possible element types are slot, drive, import/export, robot, and vault. source_slot Slot ID of the tape's previous location. EXAMPLE #14 ----------- To insert the specified tape in a virtual import/export slot on VTLU 3, type: $ server_vtlu server_2 -tape -insert dstpre0001 -tlu 3 server_2 : done EXAMPLE #15 ----------- To eject the specified tape from VTLU 3, type: $ server_vtlu server_2 -tape -eject dstpre0001 -tlu 3
server_2 : done EXAMPLE #16 ----------- To list the storage drive on VTLU 3, type: $ server_vtlu server_2 -drive -list 3 server_2 : drive_id device_name serial_number status tape_barcode 0 c1t0l1 NXB2w4W000 empty 1 c1t0l2 3u0bx4W000 empty 2 c1t0l3 g0pgy4W000 empty Where: Value Definition drive_id Unique VTLU drive identifier that is assigned automatically. device_name The device name of the VTLU drive. serial_number The VTLU serial number that is automatically assigned. status Status of the virtual tape drive; possible values are empty, loaded, and in use. tape_barcode Barcode of the virtual tape if status is not empty. EXAMPLE #17 ----------- To display information for drive 0 on VTLU 3, type: $ server_vtlu server_2 -drive -info 0 -tlu 3 server_2 : id = 0 device_name = c1t0l1 serial_number = NXB2w4W000 status = empty tape_barcode = EXAMPLE #18 ----------- To delete storage from VTLU 3, type: $ server_vtlu server_2 -storage -delete ufs1 -tlu 3 server_2 : done EXAMPLE #19 ----------- To delete VTLU 3 from server_2, type: $ server_vtlu server_2 -tlu -delete 3 server_2 : done -------------------------------------- Last Modified: April 26, 2011 1:30 pm
CS Command
This chapter describes the cs_standby command, including its command line
syntax (Synopsis), a description of the options, and examples of usage.
cs_standby
cs_standby Initiates a takeover and failover of a Control Station on a VNX with dual Control Stations. SYNOPSIS -------- cs_standby {-takeover|-failover} DESCRIPTION ----------- The cs_standby command initiates a Control Station takeover and failover. When a Control Station is activated, the name of the primary Control Station is displayed. su to root and execute this command from the /nas/sbin or /nasmcd/sbin directory. Note: EMC SRDF is not supported on the secondary Control Station. OPTIONS ------- -takeover Executed from the standby Control Station, initiates a reboot of the primary Control Station, then changes the state of the standby to that of the primary. The original primary Control Station now becomes the standby Control Station. The -takeover option can be used to failback Control Station 0 to the role of primary Control Station after a failover, or to set Control Station 1 to the role of primary Control Station on demand. Caution: When executing a takeover or failover, Data Movers performing functions such as RDF, EMC TimeFinder/FS, file system extends, or quotas may be interrupted. Caution: If a primary Control Station fails over to a standby Control Station, for remote replication, service continues to run but replication management capabilities are no longer available. Note: After executing a takeover or failover, a few minutes may be needed to stop Linux and other services active on the Control Station. -failover Executed from the primary Control Station, initiates a reboot of the primary Control Station, then activates the standby to take over the role of the primary Control Station. The -failover option can be used to complete a failback by forcing a failover from Control Station 1 back to Control Station 0 after Control Station 0 had failed over, or to set Control Station 1 to the role of primary Control Station on demand.
To display the primary Control Station, type: $ /nas/sbin/getreason EXAMPLE #1 ---------- To change the state of the standby Control Station to primary, cd to the /nasmcd/sbin directory of the standby Control Station, then type: # ./cs_standby -takeover Taking over as Primary Control Station............done If the takeover command is executed on the primary Control Station, the following error message appears: The -takeover option is only valid on a standby Control Station
EXAMPLE #2 ---------- To initiate a failover from the primary Control Station to the standby Control Station, cd to the /nas/sbin directory of the primary Control Station, then type: # ./cs_standby -failover The system will reboot, do you wish to continue [yes or no]: y Failing over from Primary Control Station -------------------------------------- Last Modified: March 28, 2011 12:30 pm
The migrate Command
This chapter describes the migrate command set provided for managing, configuring, and monitoring
Data Movers. The commands are prefixed with migrate and appear alphabetically.
- migrate_system_conf
migrate_system_conf Migrates Data Mover-level or cabinet-level configurations from a source system. SYNOPSIS -------- migrate_system_conf { -mover -source_system {
Configures services identical to those of the source system, if the source system has services configured. Otherwise, resets the services to the default values, which are customized for the destination VNX system version. -cabinet Migrates cabinet-level configuration. -source_system {
-------------------------------------------------------------------------------- Succeed to copy: [ntp dns] EXAMPLE #3 ---------- To migrate configuration of CAVA from a source Data Mover, when the destination overwritten option is specified, type: migrate_system_conf.pl -mover -source_system 145_16 -source_user nasadmin -source_mover server_2 -destination_mover server_2 -service cava -overwrite_destination Check network connection....................................started Check network connection..................................succeeded CAVA migration...............................................started CAVA migration.............................................succeeded ------------------------------------------------------------------------ Succeed to copy: [cava] [WARNING]: The virus checking rights on the local group of the source data mover has not been migrated to the destination. You will need to reconfigure the virus checking rights on the destination using the MMC Snap-in. EXAMPLE #4 ---------- To migrate the usermapper service from a source cabinet, type: migrate_system_conf.pl -cabinet -source_system id=1 -source_user nasadmin -service usermapper Check network connection...................................started Check network connection.................................succeeded Check USERMAPPER conflict..................................started Check USERMAPPER conflict................................succeeded USERMAPPER migration.......................................started Backup destination usermapper user database on DataMover [server_2] to /tmp/root/ migrate_system_conf/usermapper_backup/server_2_user_db_2013-Apr-25-09:27:23.gz Backup destination usermapper group database on DataMover [server_2] to /tmp/root /migrate_system_conf/usermapper_backup/server_2_group_db_2013-Apr-25-09:27:23.gz Start to import usermapper [USER] database.. Done Start to import usermapper [GROUP] database..
Done USERMAPPER migration.....................................succeeded -------------------------------------------------------------------------------- Succeed to copy: [usermapper]
Roll back script
You can roll back a VDM or FS level migration using a roll back script, which runs the commands for a migration roll back. For information about the commands to run a roll back, see the Using VNX File Migration Technical Notes.
SYNOPSIS migrate_utility -migration {
DESCRIPTION After the Complete process ends and before a Delete process is initiated during a VDM or FS level migration, you can execute a manual roll back of the VDM or FS level migration. To roll back a migration or a usermapper service using the roll back script, use this command syntax:
OPTIONS -migration {
-rollback {
-usermapper -rollback -source_system {
EXAMPLE #1
The following is a sample of running the roll back script: $ /nas/tools/migration_utility -migration vdmMig3001 -rollback
Query information of migration (vdmMig3001) ... succeeded Rollback migration (vdmMig3001) ... Check pre-conditions ... Check migration state ... Check migration dr solution and state ... Check migration dr solution and state ... succeeded Check migration state ... succeeded Check replication status ... - Replication (MIGVDM_vdm3001_3001) at remote: SYNCING (destination->source) - Replication (MIGVDM_vdm3001_3003) at remote: SYNCING (destination->source) - Replication (MIGVDM_vdm3001_3004) at remote: SYNCING (destination->source) - Replication (MIGVDM_vdm3001_3002) at remote: SYNCING (destination->source) - Replication (MIGVDM_vdm3001_3001) at local: SYNCING (destination->source) - Replication (MIGVDM_vdm3001_3003) at local: SYNCING (destination->source) - Replication (MIGVDM_vdm3001_3004) at local: SYNCING (destination->source) - Replication (MIGVDM_vdm3001_3002) at local: SYNCING (destination->source) Check replication status ... succeeded Check pre-conditions ... succeeded Set migration state(ROLLING_BACK) ... Check migration state ... Check migration dr solution and state ... Check migration dr solution and state ... succeeded Check migration state ... succeeded Set migration state ... succeeded Get read-only file systems ... - 2 Source File Systems to become read-only:fs3002, fs3003 Get read-only file systems ... succeeded Refresh replications ... - 4 Replication(s):MIGVDM_vdm3001_3001, MIGVDM_vdm3001_3002, MIGVDM_vdm3001_3003, MIGVDM_vdm3001_3004
Refresh replications ... succeeded Get interfaces attached ... - 2 Interface(s):eth32, eth33 Get interfaces attached ... succeeded Cut over the migration ... --------------------------------------- Cut-over start time: 2014-02-12 02:20:28 Turn down destination interfaces ... - 2 Interface(s):eth32, eth33 Turn down destination interfaces ... succeeded Reverse replications(FS) ... - 3 Replication(s):MIGVDM_vdm3001_3002, MIGVDM_vdm3001_3003, MIGVDM_vdm3001_3004 Reverse replications(FS) ... succeeded Restore FS back to Read-Only ... - 2 Source File Systems:fs3002, fs3003 - Retry 1 ... Restore FS back to Read-Only ... succeeded Reverse replications(VDM) ... - 1 Replication(s):MIGVDM_vdm3001_3001 Reverse replications(VDM) ... succeeded Turn up source interfaces ... - 2 Interface(s):eth32, eth33 Turn up source interfaces ... succeeded --------------------------------------- Cut over the migration ... succeeded - Start time: 2014-02-12 02:20:28 - End time: 2014-02-12 02:21:20 - Duration: 52 secs Set migration state(READY_TO_COMPLETE) ... Check migration state ... Check migration dr solution and state ... Check migration dr solution and state ... succeeded Check migration state ... succeeded Set migration state ... succeeded Check post-conditions ... Check replication status ... - Replication (MIGVDM_vdm3001_3001) at remote: SYNCING (source->destination) - Replication (MIGVDM_vdm3001_3003) at remote: SYNCING (source->destination) - Replication (MIGVDM_vdm3001_3004) at remote: SYNCING (source->destination) - Replication (MIGVDM_vdm3001_3002) at remote: SYNCING (source->destination) - Replication (MIGVDM_vdm3001_3001) at local: SYNCING (source->destination) - Replication (MIGVDM_vdm3001_3003) at local: SYNCING (source->destination) - Replication (MIGVDM_vdm3001_3004) at local: SYNCING (source->destination) - Replication (MIGVDM_vdm3001_3002) at local: SYNCING (source->destination) Check replication status ... succeeded Check post-conditions ... 
succeeded Rollback migration ... succeeded OK
THE GET AND SET COMMANDS
This chapter lists the eNAS Command Set provided for managing, configuring, and monitoring File Movers. The commands are network protocol applications, prefixed with get or set, and appear alphabetically. The command line syntax (Synopsis), a description of the options, and an example of usage are provided for each command.
get_attributes Reads the attributes of the specified file on the primary storage and verifies the eNAS FileMover API configuration.
SYNOPSIS get_attributes [-u
DESCRIPTION get_attributes is executed on the Control Station after starting the eNAS FileMover API service to return CIFS, NFS, and all vendor extended attributes in a text format. get_attributes uses the program digest to actually transport the command over the network. Note: get_attributes is not part of Control Station CLI framework.
PREREQUISITES Start the HTTP server for eNAS FileMover by using server_http, and create a user account for the specified eNAS FileMover using server_user, unless user authentication is set to none.
OPTIONS -u
EXAMPLE #1 To verify offline status using eNAS FileMover API, type: $ /nas/tools/dhsm/get_attributes 10.5.8.111 /fs1/pax.tar
Where: 10.5.8.111 --> Indicates the IP address of the Data Mover which hosts the primary file. /fs1/pax.tar --> Indicates the path to the primary file.
EXAMPLE #2 To verify offline status of a deduped file with -d option, type: $ /nas/tools/dhsm/get_attributes -d 128.221.252.2
New Command length is 65 spawn telnet 128.221.252.2 5080 Trying 128.221.252.2... Connected to server_2 (128.221.252.2). Escape character is ^]. POST /dhsm HTTP/1.0 Content-type: text/xml Content-length: 65
FILE_TYPE="File" />
EXAMPLE #3 To verify offline status of a deduped file without -d option, type: $ /nas/tools/dhsm/get_attributes 128.221.252.2 /afs/3-1.log
New Command length is 52 spawn telnet 128.221.252.2 5080 Trying 128.221.252.2... Connected to server_2 (128.221.252.2). Escape character is ^]. POST /dhsm HTTP/1.0 Content-type: text/xml Content-length: 52
EXAMPLE #4 To read the status of a given primary storage, type: $ /nas/tools/dhsm/get_attributes -u dhsm_user -p bad_password 10.5.8.111 /
Sending 105 bytes *** POST /dhsm HTTP/1.0 Content-type: text/xml Content-length: 38
Server: EMC File Mover service Date: Mon, 01 Oct 2007 17:34:09 GMT basic challenge open_connection: server IP 10.5.8.111 open_connection: streaming socket open open_connection: bind successful open_connection: connect successful open_connection: local port = 55315, local addr = 10.5.8.111 Sending 160 bytes *** POST /dhsm HTTP/1.0 Authorization: Basic ZGhzbV91c2VyOmJhZF9wYXNzd29yZA== Content-type: text/xml Content-length: 38
set_attributes Changes a file on primary storage into a Stub File or a WORM file.
SYNOPSIS set_attributes [-m
DESCRIPTION set_attributes uses the program "digest" to actually transport the command over the network to set EMC-specific attributes, which are not available in CIFS or NFS. Note: set_attributes is not part of Control Station CLI framework.
PREREQUISITES Before running the command, first enable eNAS FileMover operations on a file system by using fs_dhsm, start the HTTP server for eNAS FileMover by using server_http, and create a user account for the specified eNAS FileMover using server_user, unless user authentication is set to none.
OPTIONS -m
-v
SEE ALSO Using VNX FileMover and server_http, server_user, and server_certificate.
EXAMPLE #1 To create a stub file on the primary storage, type: $ /nas/tools/dhsm/set_attributes -v 1191008770 10.5.8.111 /fs1/pax.tar nfs://io2/fs1ata/pax.tar
open_connection: server IP 10.5.8.111 open_connection: streaming socket open open_connection: bind successful open_connection: connect successful open_connection: local port = 55315, local addr = 10.5.8.111 Sending 260 bytes *** POST /dhsm HTTP/1.0 Content-type: text/xml Content-length: 192
Note: Make sure HTTP service for eNAS FileMover is started by using server_http.
EXAMPLE #2 To create a stub file on a secondary server for HTTP connections, type: $ /nas/tools/dhsm/set_attributes -u dhsm_user -p dhsm_user -e f5040c-14a000-c986cd80 -V HTTP/1.1 10.5.8.111 /fs1/pax.tar http://linc57/pax.tar
FILE == /fs1/pax.tar open_connection: server IP 10.5.8.111 open_connection: streaming socket open open_connection: bind successful open_connection: connect successful open_connection: local port = 55315, local addr = 10.5.8.111 Sending 275 bytes *** POST /dhsm HTTP/1.1 Host:10.5.8.111 Content-type: text/xml Content-length: 190
Scripting Guidelines It is recommended that users follow the guidelines outlined below when invoking eNAS commands within scripts.
Scheduling eNAS Database Backups: eNAS backs up the NAS database, which stores specific configuration information required for each Data Mover, every hour, at one minute after the hour. During part of the backup, the database is locked, and some commands that rely on the database might not have access. It is recommended that command scripts avoid starting at one minute after the hour. Note that scripts with complex commands that run for an extended period may overlap the backup period. The duration of the backup may vary. Use the following Linux command to check the state of the backup process prior to executing scripts: ps -ef | grep nasdb_backup. If a lock condition occurs, wait a few minutes and retry.
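The ps check above can be wrapped in a small helper that waits for the hourly backup to finish before a script proceeds. This is a minimal sketch: the function name, retry count, and sleep interval are illustrative choices, not part of eNAS.

```shell
# Sketch: poll for the nasdb_backup process (named in the guideline above)
# and wait until it is gone before running database-dependent commands.
wait_for_nasdb_backup() {
    tries=10
    while ps -ef | grep -v grep | grep -q nasdb_backup; do
        tries=$((tries - 1))
        if [ "$tries" -le 0 ]; then
            return 1    # backup still running after ~5 minutes; give up
        fi
        sleep 30
    done
    return 0            # no backup in progress; safe to proceed
}

# Example usage (hypothetical eNAS command shown as a comment only):
# wait_for_nasdb_backup && nas_fs -list
```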
Command sequencing: Some commands must lock the database in order to execute. If multiple user-entered commands or scripts are active at the same time, some of these commands may lock the database and prevent other commands from executing. To avoid this, arrange commands to run sequentially whenever possible.
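One way to keep several scripts from issuing database-locking commands at the same time is to hold a simple lock around each command. The portable mkdir-based sketch below is an illustrative assumption about how scripts might be serialized, not an eNAS facility; the lock path and retry limits are invented for the example.

```shell
# Sketch: serialize command execution across scripts with a directory lock.
# mkdir is atomic, so only one script can hold the lock at a time.
LOCKDIR=/tmp/enas_script.lock.d

acquire_lock() {
    tries=10
    until mkdir "$LOCKDIR" 2>/dev/null; do
        tries=$((tries - 1))
        if [ "$tries" -le 0 ]; then
            return 1    # another script held the lock too long
        fi
        sleep 1
    done
    return 0
}

release_lock() {
    rmdir "$LOCKDIR" 2>/dev/null
}

# Example usage: run a database-dependent command only while holding the lock
if acquire_lock; then
    echo "lock acquired"    # replace with the eNAS command to serialize
    release_lock
fi
```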
Sleep statements: Some processes within a script can take time to complete. Use proper timing and adequate sleep statements to prevent timing-related issues.
Pipe and grep: Piping script output through grep is a helpful way to check the status of a script. Use periodic checks to grep for file or database locked messages, timeouts, resource unavailable warnings, and other failure or success messages, and use this information to check status, pause the script, or halt it.
Return code check: All commands return a UNIX-style return code (for example, 0 for success or 1 for failure) or a text-based status code (for example, done), which can be used to help determine whether the command completed or whether there was an error or a conflict with the NAS database backup or other commands being run. If a lock condition occurs, wait a few minutes and retry. If you create and run scripts, be sure to incorporate return code checks and verify the return codes from individual operations.
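The return-code guidance above can be sketched as a small retry wrapper. The function, retry count, and delay below are illustrative assumptions rather than part of the eNAS CLI.

```shell
# Sketch: run a command, check its UNIX-style return code, and retry a few
# times before giving up. The command is passed as a single string and is
# split on whitespace, so this suits simple commands without quoting.
run_with_retry() {
    cmd=$1
    tries=3
    while [ "$tries" -gt 0 ]; do
        if $cmd; then
            return 0            # return code 0: command completed
        fi
        tries=$((tries - 1))
        sleep 1                 # brief pause before retrying
    done
    return 1                    # still failing after all retries
}

# Example usage with a hypothetical invocation (command name illustrative):
# run_with_retry "server_df server_2" || echo "command failed after retries"
```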
If you interrupt a command by issuing Ctrl-C, expect the following messages or traces at the console: -> nas_cmd: system execution failed. -> nas_cmd: PANIC: caught signal #11 (Segmentation fault) -- Giving up.
NOTE: Use the eNAS CLI to add IPv6 addresses to the NFS export host list. Enclose the IPv6 address in braces { } or square brackets [ ] in the CLI. Unisphere displays the IPv6 addresses added to the NFS export list via the CLI as read-only fields.
Scripting examples
The RECOVERABLE variable contains the following errors to retry on:
-> Unable to acquire lock
-> Resource temporarily unavailable
-> Unable to connect to host
-> Socket: All ports in use
-> Database resource vanished
-> Connection timed out
-> NAS_DB locked object is stale
An example of what the RECOVERABLE variable looks like is as follows:
RECOVERABLE="unable to acquire lock|Resource temporarily unavailable|unable to connect to host|socket: All ports in use|database resource vanished|Connection timed out|NAS_DB locked object is stale"
The RES variable contains the command output:

#!/bin/sh
########################################################
# File: nas_cmdrcvr.sh
# Created by: NAS Engineering
# Date: Thursday, May 25, 2006
# Version: 1.0
# Notes:
# 1) script will retry commands for specified period of time
# 2) script will log messages to file only if there's available disk space
########################################################
NAS_DB=/nas
export NAS_DB
PATH=$PATH:$NAS_DB/bin:$NAS_DB/sbin:/usr/sbin:.
export PATH
RETRIES=60
SLEEPTIME=30
LOGDIR="/home/nasadmin"
LOGFILE="$0.log"
LOGLAST="$0.last"
DISKFULL=98
RECOVERABLE="Resource temporarily unavailable|\
unable to acquire lock|\
unable to connect to host|\
socket: All ports in use|\
database resource vanished|\
Connection timed out|\
NAS_DB locked object is stale"
#
# function to log messages to a file
#
nas_log() {
    DISKCHK=`df -k $LOGDIR | awk 'NR>1 {print $5}' | sed 's/%//'`
    # if there's enough free disk space, append to log
    if [ $DISKCHK -lt $DISKFULL ]; then
        TDSTAMP=`date '+%Y-%m-%d %T'`
        echo "$TDSTAMP: $LOGMSG" >> $LOGDIR/$LOGFILE
    fi
    # regardless of available space, always write last error
    echo "$TDSTAMP: $LOGMSG" > $LOGDIR/$LOGLAST
}
#
# function to execute (and potentially retry) commands
#
nas_cmd() {
    # initialize variable(s)
    retry_count=0
    # loop until either successful or retry count exceeded
    while [ $retry_count -le $RETRIES ]; do
        # execute command and gather response
        RES=`$CMD 2>&1`
        # check if response means command is recoverable
        if [ `echo "$RES" | egrep -c "$RECOVERABLE"` -ne 0 ]; then
            # check retry count
            if [ $retry_count -ne $RETRIES ]; then
                # retry count has not been exceeded
                LOGMSG="Command ($CMD) failed with ($RES)...retrying in $SLEEPTIME s"
                nas_log
                sleep $SLEEPTIME
            else
                # retry count has been exceeded
                LOGMSG="Command ($CMD) failed with ($RES)...exiting (retry count of $RETRIES exceeded)"
                nas_log
                exit 1
            fi
        else
            # command was either successful or failed for an unknown reason
            LOGMSG="Command ($CMD) successful with ($RES)"
            nas_log
            retry_count=$RETRIES
            exit 0
        fi
        # increment counter for retries
        retry_count=`expr $retry_count + 1`
    done
}
#
# main
#
CMD="nas_volume -d mtv1"
nas_cmd
Using the NAS database and query facility EMC has partially changed the layout or format of eNAS internal databases. This change can impact the use of awk or grep utilities when used in scripts that assume specific positions of fields in databases. To enable searching of the NAS database, eNAS has developed a new query subsystem that appears as a hidden option on some of the nas_commands. This query subsystem enables you to specify the information you are interested in, allows you to format the output, and is independent of the database format.
CAUTION Do not use grep and awk to scan the database files. Database positions may change and substrings may return false matches for database objects.
Following is an example of a query to view unused disks:
nas_disk -query:inuse==n -format:%s\n -Fields:Id
To filter out root disks, refer to the "List all non-root disks that are not in use" example in the table below.
Examples
Use the following commands to view the tags (fields) that you can query:
nas_disk -query:tags
nas_fs -query:tags
nas_volume -query:tags
nas_slice -query:tags
The following table contains a list of examples to help you get started. These commands can be run on the Control Station CLI, and the hardcoded values can be replaced with shell script variables.
Task and query examples
Query the ID of a named file system:
  nas_fs -query:Name==RLL_fs10 -format:%s\n -Fields:Id

Query the ID of a named file system without the new line:
  nas_fs -query:Name==RLL_fs10 -format:%s -Fields:Id

Query the name of the file system that corresponds to a particular ID:
  nas_fs -query:id==20 -format:%s\n -Fields:Name

List all server IDs:
  nas_server -query:* -format:%s\n -Fields:Id

List all server names:
  nas_server -query:* -format:%s\n -Fields:Name

List all the checkpoint file systems:
  nas_fs -query:type==ckpt -fields:name -format:"%s\n"

List type of file system with ID 20:
  nas_fs -query:id==20 -format:%s\n -Fields:Type

List the file systems that are in use:
  nas_fs -query:inuse==y -format:%s\n -Fields:Name
  or
  nas_fs -query:inuse==y -format:%s\n -Fields:Id

Identify the file system of which file system ID 28 is a backup:
  nas_fs -query:id==28 -format:%s -Fields:BackupOf

List the name of the server with ID 2:
  nas_server -query:id==2 -format:%s\n -fields:name

View which volume a file system is built on:
  nas_fs -query:Name==my_fs -format:%d -fields:VolumeID

View the block count of a metavolume:
  nas_volume -query:Name==my_meta3 -format:%d -fields:Blocks

View the block size of a metavolume:
  nas_volume -query:Name==JAH_meta3 -format:%d -fields:BlockSize

Find which server IDs use fs123:
  nas_fs -query:name==fs123 -format:%s\n -fields:ServersNumeric

List all non-root disks that are not in use:
  nas_disk -query:inuse==n:IsRoot==False -format:"%s\n" -fields:name

List unused volumes that contain "dc" in the volume name:
  nas_volume -query:inuse==n:IsRoot==False:name=dc -format:"%s\n" -fields:name

List all available disks on a particular storage device (symm_id is a script/env variable):
  nas_disk -query:inuse==n:SymmID==$symm_id:IsRoot==False -format:"%s\n" -fields:name
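Query output drops straight into shell loops. The snippet below is a sketch: the disk names are hard-coded stand-ins for the output of the "List all non-root disks that are not in use" query, so the loop logic can be shown without a live Control Station.

```shell
# Sketch: iterate over query results in a script. On a Control Station,
# QUERY_OUT would be populated by something like:
#   QUERY_OUT=`nas_disk -query:inuse==n:IsRoot==False -format:"%s\n" -fields:name`
# The disk names below are hard-coded stand-ins for that output.
QUERY_OUT="d7
d8
d9"

for disk in $QUERY_OUT; do
    echo "unused non-root disk: $disk"
done
```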
Query operators Use the operators in the table below when building your queries:
Operator Definition =