Hot! VMware Certified Professional 5 Tips-3

Plan and configure vSphere Storage

  • Identify storage adapters and devices
    • Storage adapters provide connectivity from your ESXi host to a specific storage unit or network
    • ESXi supports different classes of adapter, including SCSI, iSCSI, RAID, FC, FCoE, and Ethernet. ESXi accesses the adapters directly through device drivers in the VMkernel
    • View storage adapters
      • Click host > Configuration > Hardware > Storage Adapters
      • Click Add to add a software iSCSI or software FCoE adapter
      • Under Details you can click Devices and Paths (a CLI equivalent is sketched below)
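      The same information is available from the ESXi shell. A minimal sketch (adapter and device names vary per host):

        # list storage adapters (vmhba names, drivers, link state)
        esxcli storage core adapter list
        # list the storage devices (LUNs) visible to the host
        esxcli storage core device list
        # list the paths to each device
        esxcli storage core path list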
      • Types of physical storage
        • Local storage
          • Can be internal hard disks in the ESXi host, or external storage systems located outside and connected to the host directly through protocols such as SAS or SATA
          • Does not require a storage network
          • Cannot use IDE/ATA or USB devices to store VMs
          • Does not support sharing across multiple hosts; a datastore on a local storage device can be accessed by only one host
      • Network storage
        • consists of external storage systems your esxi host uses to store vm files remotely
        • are shared
        • can be accessed by multiple hosts concurrently
        • Accessing the same storage through different transport protocols (iSCSI, FC) at the same time is not supported
        • FC
          • The host must be equipped with FC HBAs and connected through FC switches; if the host has FCoE adapters, it can connect to shared FC devices over Ethernet
          • datastores use vmfs format
  • iSCSI
    • iSCSI connections (see the CLI sketch after this list)
      • Hardware
        • The host connects to storage through a third-party adapter capable of offloading the iSCSI and network processing; can be independent or dependent
      • Software
        • Uses a software-based initiator in the VMkernel to connect to storage; the host needs only a standard network adapter
        • Can create VMFS datastores
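    A minimal ESXi shell sketch for the software initiator (vSphere 5.x; the vmhba name varies per host):

      # enable the software iSCSI initiator on the host
      esxcli iscsi software set --enabled=true
      # confirm it is enabled
      esxcli iscsi software get
      # list iSCSI adapters (the software initiator typically appears as vmhba33 or similar)
      esxcli iscsi adapter list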
  • NAS
    • Stores VM files on remote file servers accessed over a TCP/IP network
    • Uses NFS version 3 over TCP
    • The host requires only a standard network adapter
  • Shared serial attached SCSI (SAS)
    • Stores VMs on direct-attached SAS storage systems that offer shared access to multiple hosts
    • Houses VMFS datastores
    • Storage naming conventions (see the sketch after this list)
      • Device identifier
        • SCSI INQUIRY identifiers
          • Persistent
            • naa., t10., eui.
        • Path-based identifier: mpx.vmhba1:C0:T1:L3 (not persistent; used for local devices that do not return SCSI INQUIRY data)
      • Legacy identifiers
        • vml
          • vml. number, a series of digits unique to the device
      • Runtime name
        • The name of the first path to the device, in the form vmhba1:C0:T1:L3; created by the host and not persistent
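    One way to see these identifiers side by side is on the ESXi shell (the identifiers shown on a real host will differ):

      # device identifiers (naa/t10/eui/mpx) with their display names
      esxcli storage core device list
      # the same devices exposed as files; vml.* names are symlinked to the naa.* entries
      ls -l /vmfs/devices/disks/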
    • Identify hw/sw/dependent hw iSCSI initiator requirements
      • Software
        • VMware code built into the VMkernel; connects to the storage device through standard network adapters
      • Hardware
        • Third-party adapter that offloads iSCSI and network processing from your host
          • Dependent
            • Depends on VMware networking and on iSCSI configuration and management interfaces provided by VMware
            • Example: an iSCSI-licensed Broadcom NIC
            • Presents a standard network adapter and iSCSI offload functionality on the same port
          • Independent
            • Implements its own networking and iSCSI configuration and management (e.g., a QLogic adapter)
      • Compare and contrast array thin provisioning and virtual disk thin provisioning
        • Virtual disk thin provisioning (see the vmkfstools sketch after this list)
          • You can create virtual disks in a thin format: ESXi provisions the entire space required for the disk's current and future activities, but commits only as much storage space as the disk needs for its initial operations; as the disk requires more space, it can grow
          • If a virtual disk was created in thin format, you can later inflate it to its full size
          • On NFS datastores that do not support hardware acceleration, only the thin format is available
          • NFS datastores with hardware acceleration and VMFS datastores support the following formats
            • Thick provision lazy zeroed
              • Creates the virtual disk in the default thick format; space required for the virtual disk is allocated when the disk is created. Data remaining on the physical device is not erased during creation, but is zeroed out on demand at a later time, on first write from the VM; also known as a flat disk
              • Cannot convert a flat disk to a thin disk
            • Thick provision eager zeroed
              • Supports clustering features such as FT; space required for the virtual disk is allocated and zeroed out at creation time
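          A minimal sketch with vmkfstools (paths and sizes are placeholders):

            # create a 10 GB thin-provisioned virtual disk
            vmkfstools -c 10G -d thin /vmfs/volumes/datastore1/vm1/vm1_data.vmdk
            # inflate an existing thin disk to its full (eager-zeroed thick) size
            vmkfstools --inflatedisk /vmfs/volumes/datastore1/vm1/vm1_data.vmdk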
  • Array thin provisioning (see the reclamation sketch after this list)
    • A VMFS datastore that you deploy on a thin-provisioned LUN can detect only the logical size of the LUN
    • When you use the Storage APIs for Array Integration (VAAI), the host can integrate with the physical storage and become aware of underlying thin-provisioned LUNs and their space usage
    • Space usage monitoring helps you monitor the space usage on thin-provisioned LUNs to avoid running out of space
    • Space reclamation: when you delete VM files from a VMFS datastore, or migrate them away through Storage vMotion, the datastore frees blocks of space and informs the storage array so that the blocks can be reclaimed
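    A sketch of triggering reclamation manually (version dependent; the datastore name and percentage are placeholders):

      # ESXi 5.0 U1 / 5.1: run from inside the datastore; reclaims up to 60% of the free space
      cd /vmfs/volumes/datastore1 && vmkfstools -y 60
      # ESXi 5.5 and later: the same operation moved to esxcli
      esxcli storage vmfs unmap -l datastore1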
    • Describe zoning and LUN masking practices
      • Zones define which HBAs can connect to which SPs (storage processors); devices outside a zone are not visible to the devices inside the zone
      • LUN masking is a process that makes a LUN available to some hosts and unavailable to other hosts
      • With ESXi hosts, use single-initiator zoning or single-initiator-single-target zoning (the latter is preferred)
      • LUN masking can be performed on the host side or the storage side
      • Scan/rescan storage (see the CLI sketch after this list)
        • Can be done from each host, or right-click a datacenter, cluster, or folder and select Rescan for Datastores
        • By default the VMkernel scans LUNs 0–255 (a total of 256 LUNs) for every target
        • When choosing Rescan All on Storage or Storage Adapters, you have the option to scan for new storage devices and for new VMFS volumes (datastores)
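      A minimal rescan from the ESXi shell (the adapter name is an example):

        # rescan all adapters for new devices
        esxcli storage core adapter rescan --all
        # or rescan a single adapter
        esxcli storage core adapter rescan --adapter vmhba33
        # rescan for new VMFS volumes
        vmkfstools -V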
      • Identify use cases for FCoE (see the CLI sketch after this list)
        • Two adapter types can be used
          • Converged network adapter (CNA)
            • Hardware FCoE
          • NIC with FCoE support
            • Software FCoE (uses the native FCoE stack in ESXi)
        • FCoE encapsulates FC frames in Ethernet; the protocols are FC/SCSI, giving block access to data on LUNs
        • Hardware
          • The host detects and can use both CNA components: the networking component appears as a standard network adapter (vmnic) and the FC component as an FCoE adapter (vmhba)
        • Software
          • Used with a NIC that offers Data Center Bridging (DCB); ESXi supports a maximum of 4 software FCoE adapters on one host
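        A minimal software-FCoE sketch from the ESXi shell (the NIC name is a placeholder):

          # list NICs that are capable of software FCoE
          esxcli fcoe nic list
          # activate software FCoE on a DCB-capable NIC
          esxcli fcoe nic discover --nic-name=vmnic4
          # list the resulting FCoE adapters (vmhba)
          esxcli fcoe adapter list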
      • Create an NFS share for use with vSphere
        • Supports the following capabilities
          • vMotion
          • VMware DRS and HA
          • ISO images
          • VM snapshots
        • ESXi does not impose any limits on the NFS datastore size
      • Can use the Add Storage wizard to mount an NFS volume (a CLI sketch follows this list)
        • Enter the server
          • IP address, DNS name, or NFS UUID
          • Folder (path to the export)
          • Can check a box to mount as read-only (if the volume is exported as read-only by the NFS server)
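        A minimal mount from the ESXi shell (server, export path, and datastore name are placeholders):

          # mount an NFS export as a datastore
          esxcli storage nfs add --host=nfs01.example.com --share=/export/vmdata --volume-name=nfs-ds01
          # list mounted NFS datastores
          esxcli storage nfs list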
      • Hardware acceleration on NAS devices (without hardware acceleration, only the thin disk format is available)
        • Allows the host to integrate with NAS devices and use several hardware operations that NAS storage provides
          • Full File Clone: enables the NAS device to clone entire virtual disk files
          • Reserve Space: enables storage arrays to allocate space for a virtual disk in thick format
          • Extended Statistics: enables visibility into space usage on the NAS device
        • Hardware acceleration is implemented through vendor-specific NAS plug-ins (see the sketch after this list)
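      A quick way to check which VAAI plug-ins are present on a host (output varies by vendor plug-in):

        # list plug-ins registered in the VAAI plug-in class
        esxcli storage core plugin list --plugin-class=VAAI
        # NAS VAAI plug-ins ship as vendor VIBs; list installed VIBs to confirm
        esxcli software vib list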
      • Enable/configure/disable vCenter Server storage filters
        • When you perform VMFS datastore management operations, vCenter Server uses default storage protection filters. The filters help you avoid storage corruption by retrieving only the storage devices that can be used for a particular operation; unsuitable devices are not displayed for selection. You can turn the filters off to view all devices (see the settings sketch after this list)
        • Administration > vCenter Server Settings > Advanced Settings
        • VMFS filter: filters out LUNs that are already used by a VMFS datastore, config.vpxd.filter.vmfsFilter
        • RDM filter: filters out LUNs that are already referenced by an RDM, config.vpxd.filter.rdmFilter
        • Same host and transports filter: filters out LUNs ineligible for use as VMFS datastore extents because of host or storage type incompatibility, config.vpxd.filter.SameHostAndTransportsFilter
        • Host rescan filter: automatically rescans and updates VMFS datastores after you perform datastore management operations, config.vpxd.filter.hostRescanFilter
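        A sketch of the key/value pairs to add under Advanced Settings to disable a filter (these are the keys listed above; add only the ones you need):

          config.vpxd.filter.vmfsFilter                    false
          config.vpxd.filter.rdmFilter                     false
          config.vpxd.filter.SameHostAndTransportsFilter   false
          config.vpxd.filter.hostRescanFilter              false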
  • Configure/edit hw/dependent hw initiators
    • Dependent
      • When installed on a host, it presents two components: a standard network adapter (vmnic) and an iSCSI engine (vmhba)
      • Although the iSCSI adapter is enabled by default, to make it functional you must first connect it, through a VMkernel interface, to a physical network adapter
    • Enable/disable the software iSCSI adapter
      • Make sure each VMkernel port used for iSCSI is bound to only one active physical NIC in its port group (a port-binding sketch follows)
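      A minimal port-binding sketch from the ESXi shell (the vmhba and vmk names are placeholders):

        # bind a VMkernel port (vmk1) to the dependent or software iSCSI adapter
        esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
        # verify the binding
        esxcli iscsi networkportal list --adapter=vmhba33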
    • Configure CHAP for iSCSI (see the sketch after this list)
      • ESXi supports one-way CHAP for all types of iSCSI initiators, and mutual CHAP for software and dependent hardware iSCSI
        • One-way CHAP: the target authenticates the initiator
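      A sketch of setting one-way CHAP on an adapter from the ESXi shell (adapter name, CHAP name, and secret are placeholders):

        # require one-way (unidirectional) CHAP on the adapter
        esxcli iscsi adapter auth chap set --adapter=vmhba33 --direction=uni --level=required --authname=iqn.1998-01.com.vmware:esx01 --secret=MyChapSecret
        # review the adapter's CHAP settings
        esxcli iscsi adapter auth chap get --adapter=vmhba33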

 
