VMware Certified Professional 5 Tips – 6


Create and configure VM clusters

  • describe DRS VM entitlement
  • DRS calculates CPU and memory entitlements for VMs. The host-local schedulers are actually responsible for supplying those resources to VMs, and therefore also calculate CPU and memory entitlements for the VMs that reside on them
  • Static entitlement
    • user defined
      • shares
      • limits
      • reservations
    • Memory entitlement backed by a reservation cannot be reclaimed even if it is not being used; CPU entitlement can be “loaned out” to other VMs when not in use (see the configuration sketch after this section)
  • Dynamic entitlement
    • calculated by the DRS cluster and by the host-local schedulers. Dynamic entitlement is flexible and will increase/decrease based on VM demand, but will never grow past the VM’s configured CPU/memory size

 

  • CPU entitlement
  • based on active CPU, which is taken from the %RUN and %RDY metrics of that VM (in MHz)
  • Memory entitlement
  • based on active memory: the working set of the VM (actual physical RAM pages)
    • includes memory overhead
    • includes 25% of idle memory
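Shares, limits, and reservations are per-VM settings, so they can be set programmatically as well as in the client. Below is a minimal sketch using pyVmomi (VMware’s Python SDK); the vCenter address, credentials, VM name, and allocation values are all placeholders, not values from these notes.

```python
from pyVim.connect import SmartConnect
from pyVmomi import vim

# Placeholder connection details; for lab setups with self-signed certs,
# newer pyVmomi versions accept disableSslCertValidation=True.
si = SmartConnect(host='vcenter.example.com', user='administrator', pwd='secret')
content = si.RetrieveContent()

def find(vimtype, name):
    """Return the first inventory object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(o for o in view.view if o.name == name)
    finally:
        view.DestroyView()

vm = find(vim.VirtualMachine, 'web01')        # hypothetical VM name
spec = vim.vm.ConfigSpec(
    cpuAllocation=vim.ResourceAllocationInfo(
        reservation=500,                      # MHz guaranteed (static entitlement)
        limit=2000,                           # MHz cap; -1 means unlimited
        shares=vim.SharesInfo(level='high', shares=0)),   # shares ignored unless level='custom'
    memoryAllocation=vim.ResourceAllocationInfo(
        reservation=1024,                     # MB; reserved memory is not reclaimed
        limit=-1,
        shares=vim.SharesInfo(level='custom', shares=4000)))
task = vm.ReconfigVM_Task(spec=spec)          # returns a Task to monitor
```

The later sketches in these notes reuse this same session and `find()` helper.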

 

  • DRS: automatically balances load across groups of ESXi hosts (clusters)

 

  • vMotion requirements
  • shared storage
  • both source and destination hosts must be configured with identical virtual switches with vMotion-enabled VMkernel ports
  • with distributed switches, both hosts must participate in the same distributed switch
  • all port groups to which the VM being migrated is attached must exist on both ESXi hosts
  • port group naming is case sensitive
  • make sure they plug into the same physical subnets or VLANs
  • processors on both hosts must be compatible
    • CPUs must be from the same vendor
    • same CPU family
    • support the same features, such as NX/XD
    • for 64-bit VMs, virtualization technology must be enabled
  • VM must not be connected to any device physically available to only one ESXi host (includes CD drives, serial ports)
  • VM can’t be connected to an internal-only switch
  • must not have CPU affinity set to a specific CPU
  • VM must have all disk, config, and log files stored on a datastore accessible by both hosts (see the migration sketch after this list)
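Once the requirements above are met, a migration can be triggered through the API as well as the client. A minimal sketch, reusing the session and `find()` helper from the entitlement sketch earlier; the VM and host names are placeholders.

```python
from pyVmomi import vim
# Assumes `find` and the SmartConnect session from the first sketch above.

vm = find(vim.VirtualMachine, 'web01')
dest_host = find(vim.HostSystem, 'esxi02.example.com')

# MigrateVM_Task performs a vMotion; it fails with a compatibility error
# if any of the requirements listed above are not satisfied.
task = vm.MigrateVM_Task(host=dest_host,
                         priority=vim.VirtualMachine.MovePriority.defaultPriority)
```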

 

  • EVC
  • enable at the cluster level
  • a VMware-created software feature that takes advantage of hardware functionality to create a common CPUID baseline for all servers in the cluster
  • DRS
  • set in the properties of the cluster
  • decides which node of a cluster a VM should run on when it is powered on
  • requires vCenter Server
  • by default DRS checks every 5 minutes to see whether the cluster’s workload is balanced; it is also invoked by certain changes in the cluster, such as adding an ESXi host
  • 3 automation levels
    • manual
      • Every time you power on a VM, the cluster prompts you to select the ESXi host on which the VM should run; recommendations are ranked, and the lower the priority number, the stronger the recommendation
      • Suggests vMotion migrations
      • From the DRS tab, the Apply Recommendations button lets you accept any pending DRS recommendations and initiate the migrations
    • partially automated
      • DRS makes an automatic decision about which host a VM should run on when it is initially powered on, but still prompts for all migrations on the DRS tab
    • fully automated
      • makes initial placement decisions without prompting and also makes automatic vMotion decisions based on the selected automation level (slider); see the configuration sketch after this list
      • 5 positions on the slider
      • Range from conservative to aggressive
      • Conservative: automatically applies recommendations ranked priority 1; any other recommendation requires approval
        • next stop: applies priority 1 and 2 automatically
      • Aggressive: any imbalance in the cluster is automatically acted on (even priority-5 recommendations)
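The automation level and migration threshold can also be set via the API. A sketch under the same assumptions as the earlier examples; the cluster name is a placeholder, and the slider appears in the API as a numeric `vmotionRate`.

```python
from pyVmomi import vim
# Assumes `find` and the session from the first sketch.

cluster = find(vim.ClusterComputeResource, 'Prod-Cluster')
spec = vim.cluster.ConfigSpecEx(
    drsConfig=vim.cluster.DrsConfigInfo(
        enabled=True,
        defaultVmBehavior='fullyAutomated',   # or 'manual' / 'partiallyAutomated'
        vmotionRate=3))                       # migration threshold, 1-5 (middle of the slider)
task = cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
```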

 

  • Affinity rules
    • VM affinity rules
      • keep VMs together
    • VM anti-affinity rules
      • separate VMs
    • Host affinity rules: tie VMs to hosts
    • Before you can create a host affinity rule, you must create at least one VM DRS group and at least one host DRS group
  • 4 host affinity rule behaviors
    • must run on hosts in group
    • should run on hosts in group
    • must not run on hosts in group
    • should not run on hosts in group
    • Rules with “must” are honored by HA and DPM as well
  • Ex) a Windows VM is in a group that is a member of 2 different host affinity rules; as a result the VM can only run on hosts that satisfy both rules. In the event of a conflict, the older rule prevails and the newer rule is automatically disabled (a rule-creation sketch follows this list)
  • DRS helps only with balancing CPU and memory load
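As an illustration, a VM anti-affinity rule (e.g., to keep two domain controllers apart) can be created like this; same assumptions as the earlier sketches, with placeholder cluster and VM names.

```python
from pyVmomi import vim
# Assumes `find` and the session from the first sketch.

cluster = find(vim.ClusterComputeResource, 'Prod-Cluster')
rule = vim.cluster.AntiAffinityRuleSpec(
    name='separate-domain-controllers',
    enabled=True,
    vm=[find(vim.VirtualMachine, 'dc01'),    # hypothetical VM names
        find(vim.VirtualMachine, 'dc02')])
spec = vim.cluster.ConfigSpecEx(
    rulesSpec=[vim.cluster.RuleSpec(operation='add', info=rule)])
task = cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
```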
  • Create/delete a DRS/HA cluster
  • right-click the datacenter and create a cluster
    • options
      • turn on HA
      • turn on DRS
      • HA must be turned on to use FT (a cluster-creation sketch follows this list)
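A sketch of creating a cluster with both DRS and HA enabled, under the same assumptions as above; the datacenter and cluster names are placeholders. In the API, HA settings live under `dasConfig`.

```python
from pyVmomi import vim
# Assumes `find` and the session from the first sketch.

dc = find(vim.Datacenter, 'Datacenter01')
spec = vim.cluster.ConfigSpecEx(
    drsConfig=vim.cluster.DrsConfigInfo(enabled=True,
                                        defaultVmBehavior='fullyAutomated'),
    dasConfig=vim.cluster.DasConfigInfo(enabled=True))   # dasConfig = HA
cluster = dc.hostFolder.CreateClusterEx(name='Prod-Cluster', spec=spec)
```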

 

  • HA
    • provides automatic failover of VMs
    • automatic restart of VMs
    • primarily targets ESXi host failures, but can also be used to protect against VM- and application-level failures
    • Uses a new VMware tool called FDM (Fault Domain Manager)
    • Uses a master/slave architecture that does not rely on a primary/secondary host designation
    • Uses both the management network and storage devices for communication
    • Introduces support for IPv6
    • Uses the concept of an HA agent running on each ESXi host
    • When HA is enabled, the HA agents participate in an election to pick a master
    • Master:
      • monitors slave hosts and restarts VMs in the event of a slave host failure
      • monitors the power state of VMs; if a VM fails, it restarts the VM
      • manages the list of hosts that are members of the cluster and manages the process of adding and removing hosts from the cluster
      • manages the list of protected VMs
      • caches the cluster config and notifies/informs slaves of config changes
      • sends heartbeat messages to slave hosts so that they know the master is alive
      • reports state info to vCenter
      • if the master fails, a new master is automatically elected
    • Slave:
      • watches the runtime state of VMs running locally on the host and reports significant changes to the master
      • monitors the health of the master
      • implements features like VM health monitoring
      • in the event the master cannot communicate with a slave across the management network, the master can check its heartbeat datastores
    • Network partition: describes a situation in which one or more slave hosts cannot communicate with the master even though they still have network connectivity
    • Network isolation: slaves lose network connectivity
    • Requirements for HA
      • all hosts in an HA-enabled cluster must have access to the same shared storage locations used by all VMs in the cluster
      • all hosts should have identical virtual networking configs; if a new switch is added to one host, the same switch should be added to all hosts in the cluster. If using a vDS, all hosts should participate in the same vDS
    • disable/enable host monitoring when performing network maintenance
    • Admission control
      • enabled: disallow VM power-on operations that violate availability constraints
      • disabled: allow VM power-on operations that violate availability constraints
      • admission control policy
        • host failures the cluster tolerates
        • percentage of cluster resources reserved as failover spare capacity (CPU, memory pct)
        • specify failover hosts
          • when you select an ESXi host as a failover host, DRS won’t place VMs there
          • can’t manually power on VMs on failover hosts
    • Slots and slot sizes
      • used by the “host failures a cluster tolerates” policy
      • Slot size: HA examines all the VMs in the cluster to determine the largest values for reserved memory and CPU; if no VMs have CPU or memory reservations, it uses a default value of 32 MHz for CPU, and for memory it uses the largest memory overhead
        • then calculates the total number of slots each ESXi host in the cluster can support
        • then determines how many slots the cluster can support if the host(s) with the largest number of slots were to fail; for CPU and memory, the most restrictive wins (a worked example and configuration sketch appear at the end of this HA section)
    • VM options for HA
      • VM restart priority
        • disabled, low, medium, high; for VMs that should be brought up first, choose high
      • host isolation response
        • triggered when network isolation occurs; the default is “leave powered on”, and the other options are “power off” and “shut down”
        • Power off: powers off the VM without a graceful shutdown of the guest OS
        • Shut down: attempts a graceful shutdown of the guest OS if VMware Tools is installed
        • When isolated, the host modifies a special bit in the binary host-X-poweron file on all datastores that are configured for datastore heartbeating
      • HA VM monitoring
        • has the ability to look for guest OS and application failures
        • when a failure is detected, the VM is restarted
        • VMware Tools provides a series of heartbeats from the guest OS up to the ESXi host
      • Options
        • VM monitoring only
        • disabled
        • VM and application monitoring
      • datastore heartbeating
        • vCenter Server selects heartbeat datastores for each host
        • options: select only from my preferred datastores, select any of the cluster datastores, select any of the cluster datastores taking into account my preferences
      • HA advanced runtime info shows things like slot size
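A worked slot-size example: if the largest CPU reservation in the cluster is 512 MHz and the largest memory reservation (plus overhead) is 1,024 MB, a slot is 512 MHz / 1,024 MB. A host with 8,192 MHz and 32,768 MB then holds min(8192/512, 32768/1024) = min(16, 32) = 16 slots, because the most restrictive resource wins. The HA settings themselves can be configured via the API; a sketch under the same assumptions as the earlier examples (HA appears in the API as “das”), with placeholder names and values.

```python
from pyVmomi import vim
# Assumes `find` and the session from the first sketch.

cluster = find(vim.ClusterComputeResource, 'Prod-Cluster')
das = vim.cluster.DasConfigInfo(
    enabled=True,
    hostMonitoring='enabled',            # disable during network maintenance
    vmMonitoring='vmMonitoringOnly',     # or 'vmAndAppMonitoring' / 'vmMonitoringDisabled'
    admissionControlEnabled=True,
    # "Host failures the cluster tolerates"; the percentage-based policy would be
    # vim.cluster.FailoverResourcesAdmissionControlPolicy instead.
    admissionControlPolicy=vim.cluster.FailoverLevelAdmissionControlPolicy(
        failoverLevel=1))
spec = vim.cluster.ConfigSpecEx(dasConfig=das)
task = cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
```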

 

  • add/remove a host from a DRS/HA cluster
    • in order to add a host to an EVC cluster, it must be put into maintenance mode
    • enable maintenance mode, then remove the host from the cluster
    • a host can be disconnected from a cluster without maintenance mode
  • Storage vMotion
    • migrates a running VM’s virtual disks from one datastore to another (see the sketch below)
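A minimal Storage vMotion sketch under the same assumptions as the earlier examples; the VM and datastore names are placeholders.

```python
from pyVmomi import vim
# Assumes `find` and the session from the first sketch.

vm = find(vim.VirtualMachine, 'web01')
target_ds = find(vim.Datastore, 'datastore02')

# RelocateVM_Task with only a datastore set moves the VM's disks and
# config files while the VM keeps running on the same host.
task = vm.RelocateVM_Task(spec=vim.vm.RelocateSpec(datastore=target_ds))
```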
  • Storage DRS
    • Need to create a datastore cluster
    • Datastores of different sizes and I/O capacities can be combined in a datastore cluster, as can datastores from different arrays and vendors; NFS and VMFS datastores cannot be combined in a datastore cluster, and neither can replicated and non-replicated datastores
    • all hosts connected to the datastores must be running ESXi 5.0
    • datastores shared across multiple datacenters are not supported
    • Manual mode
      • makes recommendations only
    • Fully automated
    • I/O metric inclusion
      • automatically enables Storage I/O Control on the datastores; Storage DRS thresholds:
    • Utilized space (default 80%): when a datastore reaches 80% full, Storage DRS recommends a migration
    • I/O latency: default 15 ms
    • VMDK and VM anti-affinity rules (a configuration sketch follows this list)
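A sketch of configuring those thresholds on an existing datastore cluster (a StoragePod in the API), under the same assumptions as the earlier examples; the pod name is a placeholder.

```python
from pyVmomi import vim
# Assumes `find`, `content`, and the session from the first sketch.

pod = find(vim.StoragePod, 'DatastoreCluster01')
sdrs_spec = vim.storageDrs.ConfigSpec(
    podConfigSpec=vim.storageDrs.PodConfigSpec(
        enabled=True,
        defaultVmBehavior='manual',            # or 'automated' for fully automated
        ioLoadBalanceEnabled=True,             # the "I/O metric inclusion" setting
        spaceLoadBalanceConfig=vim.storageDrs.SpaceLoadBalanceConfig(
            spaceUtilizationThreshold=80),     # percent full (the 80% default)
        ioLoadBalanceConfig=vim.storageDrs.IoLoadBalanceConfig(
            ioLatencyThreshold=15)))           # ms (the 15 ms default)
task = content.storageResourceManager.ConfigureStorageDrsForPod_Task(
    pod=pod, spec=sdrs_spec, modify=True)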
  • EVC
    • In order to use EVC, hosts and VMs must meet the following requirements
      • CPUs from a single vendor
      • Hosts at 3.5 U2 or higher
      • Connected to vCenter
      • Configured for vMotion
      • advanced CPU features enabled (see the sketch after this list)
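EVC can be inspected and set through the cluster’s EVC manager; a sketch under the same assumptions as the earlier examples. The mode key shown is only an example of what the supported list might contain.

```python
from pyVmomi import vim
# Assumes `find` and the session from the first sketch.

cluster = find(vim.ClusterComputeResource, 'Prod-Cluster')
evc_mgr = cluster.EvcManager()

# List the CPUID baselines vCenter considers valid for this cluster.
for mode in evc_mgr.evcState.supportedEVCMode:
    print(mode.key)

# Apply a baseline (key is an example; pick one printed above).
task = evc_mgr.ConfigureEvcMode_Task('intel-nehalem')
```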
  • Snapshots
  • point-in-time (PIT) checkpoints of VMs
  • Snapshots are also leveraged by Update Manager
  • FT does not support snapshots
  • When taking snapshots
    • snapshot the VM’s memory
      • specifies whether the RAM of the VM should be included in the snapshot; the current contents of RAM are written to a .vmsn file
    • Quiesce guest file system
      • requires VMware Tools to be installed
  • Disk space must be allocated to delta disks on demand; ESXi hosts must update the VMFS metadata, and to update the metadata, LUNs are briefly locked
  • Snapshot Manager can revert to previous snapshots, but all data written since that snapshot will be lost
  • Snapshots don’t protect the .vmx file
  • Each branch in the snapshot tree can have up to 32 snapshots
  • Preserves the following info
    • VM settings
    • Power state
    • Disk state
  • Each snapshot creates an additional .vmdk delta disk file, with 64 KB granularity
  • Capturing the memory state of the VM lets you revert to a powered-on VM state
  • VMs with independent disks must be powered off before you take a snapshot
  • Deleting a snapshot leaves the current state of the VM and all other snapshots untouched
  • “Revert to current snapshot” goes to the “You are here” point
  • Consolidation merges redundant redo logs (delta disks); see the snapshot sketch below
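A snapshot sketch under the same assumptions as the earlier examples; the VM name and snapshot labels are placeholders. Including memory writes the .vmsn file mentioned above, and quiescing requires VMware Tools.

```python
from pyVmomi import vim
# Assumes `find` and the session from the first sketch.

vm = find(vim.VirtualMachine, 'web01')

# memory=True captures RAM (so a revert returns a powered-on VM);
# quiesce=True would instead ask VMware Tools to quiesce the guest FS.
task = vm.CreateSnapshot_Task(name='pre-patch',
                              description='before applying updates',
                              memory=True, quiesce=False)

# Related one-liners:
# vm.RevertToCurrentSnapshot_Task()   # go back to the "You are here" parent
# vm.ConsolidateVMDisks_Task()        # merge redundant redo logs (consolidation)
```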
  • Host profiles
  • can be exported in .vpf format (VMware profile format)
  • VMware Data Recovery
  • a plug-in for the vSphere Client plus a backup appliance that stores backups on hard disk
  • the client plug-in is installed on a computer that will be used to manage Data Recovery
  • the backup appliance is installed on an ESXi host
  • offers file-level restore within VMs
  • requires vCenter Server

 
