Sunday, June 11, 2017

How to gather a data collect on VNXe


Product:
VNXe

Description:
Gathering a data collect on a VNXe

Resolution:

There are two ways to gather a data collect.
Collecting from Unisphere GUI (Recommended Method):

1) Log in to the Unisphere GUI with admin credentials.
2) Click on Settings and then on Service system.
3) Enter service password.
4) Under "System Components" highlight "Storage System".
5) Select "Collect Service Information" under "Service Actions."
6) Click "Execute service action."
7) This message is displayed: "The service data has previously been collected and is available for download.  Do you want to download this existing service data or start a new process to collect new service data? Click Yes to download the existing service data file or No to start a new collection of service data."
8) Select Yes or No as appropriate to your situation.
9) Click Yes to save the files to your hard drive. 


Collecting logs through SSH:

1) Open an SSH tool (such as PuTTY).
2) Connect to the VNXe management IP.
3) Log in as the service user.
4) Run the svc_dc command. It will take a couple of minutes to complete.
5) Use a third-party SCP/SFTP tool such as WinSCP or FileZilla to connect to the VNXe management IP (log in as service).
6) Browse to /EMC/backend/service/data_collection on the VNXe.
7) Copy the .tar file to your local desktop.
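Taken together, the SSH steps above look like this as a single session (the management IP is a placeholder for your VNXe management address; the archive name varies):

```
ssh service@<mgmt_ip>     (log in to the VNXe management IP as the service user)
svc_dc                    (runs the data collect; takes a few minutes)
exit
scp service@<mgmt_ip>:/EMC/backend/service/data_collection/*.tar .   (pull the archive to your local machine)
```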


Monday, August 22, 2016

NetApp Clustered Commands


In this post I will continually add NetApp Clustered Data ONTAP CLI commands.


MISC

set -privilege advanced (Enter into privilege mode)
set -privilege diagnostic (Enter into diagnostic mode)
set -privilege admin (Enter into admin mode)
system timeout modify 30 (Sets system timeout to 30 minutes)
system node run -node local sysconfig -a (Run sysconfig on the local node)
The symbol ! means "other than" in clustered ONTAP, e.g. storage aggregate show -state !online (show all aggregates that are not online)
node run -node <node_name> -command sysstat -c 10 -x 3 (Run the sysstat performance tool in cluster mode)
system node image show (Show the running Data Ontap versions and which is the default boot)
dashboard performance show (Shows a summary of cluster performance including interconnect traffic)
node run * environment shelf (Shows information about the Shelves Connected including Model Number)
network options switchless-cluster show (Displays if nodes are setup for cluster switchless or switched – need to be in advanced mode)
network options switchless-cluster modify true (Sets the nodes to use cluster switchless, setting to false sets the node to use cluster switches – need to be in advanced mode)

DIAGNOSTICS USER CLUSTERED ONTAP

security login unlock -username diag (Unlock the diag user)
security login password -username diag (Set a password for the diag user)
security login show -username diag (Show the diag user)

SYSTEM CONFIGURATION BACKUPS FOR CLUSTERED ONTAP

system configuration backup create -backup-name node1-backup -node node1 (Create a cluster backup from node1)
system configuration backup create -backup-name node1-backup -node node1 -backup-type node (Create a node backup of node1)
system configuration backup upload -node node1 -backup node1.7z -destination ftp://username:password@ftp.server.com (Uploads a backup file to ftp)

LOGS

To look at the logs within clustered ONTAP, you must log in to the systemshell of a specific node as the diag user:
set -privilege advanced
systemshell -node <node_name>
username: diag
password:
cd /mroot/etc/mlog
cat command-history.log | grep volume (searches the command-history.log file for the keyword volume)
exit (exits the systemshell)

COREDUMP

system coredump status (shows unsaved cores, saved cores and partial cores)
system coredump show (lists coredump files and panic dates)

SERVICE PROCESSOR

system node image get -package http://webserver/306-02765_A0_SP_3.0.1P1_SP_FW.zip -replace-package true (Copies the firmware file from the webserver into the mroot directory on the node)
system node service-processor image update -node node1 -package 306-02765_A0_SP_3.0.1P1_SP_FW.zip -update-type differential (Installs the firmware package to node1)
system node service-processor show (Show the service processor firmware levels of each node in the cluster)
system node service-processor image update-progress show (Shows the progress of a firmware update on the Service Processor)
service-processor reboot-sp -node NODE1 (reboot the sp of node1)

Disk Shelves

storage shelf show (an 8.3 command that displays the loops and shelf information)

AUTOSUPPORT

system node autosupport budget show -node local (In diag mode – displays current time and size budgets)
system node autosupport budget modify -node local -subsystem wafl -size-limit 0 -time-limit 10m (In diag mode – modification as per Netapp KB1014211)
system node autosupport show -node local -fields max-http-size,max-smtp-size (Displays max http and smtp sizes)
system node autosupport modify -node local -max-http-size 0 -max-smtp-size 8MB (modification as per Netapp KB1014211)

CLUSTER

set -privilege advanced (required to be in advanced mode for the below commands)
cluster statistics show (shows statistics of the cluster – CPU, NFS, CIFS, FCP, Cluster Interconnect Traffic)
cluster ring show -unitname vldb (check if volume location database is in quorum)
cluster ring show -unitname mgmt (check if management application is in quorum)
cluster ring show -unitname vifmgr (check if virtual interface manager is in quorum)
cluster ring show -unitname bcomd (check if san management daemon is in quorum)
cluster unjoin (must be run in priv -set admin, disjoins a cluster node. Must also remove its cluster HA partner)
debug vreport show (must be run in priv -set diag, shows WAFL and VLDB consistency)
event log show -messagename scsiblade.* (show that cluster is in quorum)
cluster kernel-service show -list (in diag mode, displays in quorum information)
debug smdb table bcomd_info show (displays database master / secondary for bcomd)

NODES

system node rename -node <node_name> -newname <new_name>
system node reboot -node NODENAME -reason ENTER REASON (Reboot node with a given reason. NOTE: check ha policy)

FLASH CACHE

system node run -node * options flexscale.enable on (Enabling Flash Cache on each node)
system node run -node * options flexscale.lopri_blocks on (Enabling Flash Cache on each node)
system node run -node * options flexscale.normal_data_blocks on (Enabling Flash Cache on each node)
node run NODENAME stats show -p flexscale (flashcache configuration)
node run NODENAME stats show -p flexscale-access (display flash cache statistics)

FLASH POOL

storage aggregate modify -hybrid-enabled true (Change the AGGR to hybrid)
storage aggregate add-disks -aggregate <aggr_name> -disktype SSD (Add SSD disks to the aggr to begin creating a flash pool)
priority hybrid-cache set volume1 read-cache=none write-cache=none (Within node shell and diag mode disable read and write cache on volume1)

FAILOVER

storage failover takeover -ofnode <node_name> (Initiate a failover)
storage failover giveback -ofnode <node_name> (Initiate a giveback)
storage failover modify -node <node_name> -enabled true (Enabling failover on one of the nodes enables it on the other)
storage failover show (Shows failover status)
storage failover modify -node <node_name> -auto-giveback false (Disables auto giveback on this ha node)
storage failover modify -node <node_name> -auto-giveback true (Enables auto giveback on this ha node)
aggregate show -node NODENAME -fields ha-policy (show SFO HA Policy for aggregate)

AGGREGATES

aggr create -aggregate <aggr_name> -diskcount <count> -raidtype raid_dp -maxraidsize 18 (Create an aggr with X amount of disks, raid_dp and raidgroup size 18)
aggr offline <aggr_name> | aggr online <aggr_name> (Take the aggr offline or bring it online)
aggr rename -aggregate <aggr_name> -newname <new_name>
aggr relocation start -node node01 -destination node02 -aggregate-list aggr1 (Relocate aggr1 from node01 to node02)
aggr relocation show (Shows the status of an aggregate relocation job)
aggr show -space (Show used and used% for volume foot prints and aggregate metadata)
aggregate show (show all aggregates size, used% and state)
aggregate add-disks -aggregate -diskcount (Adds a number of disks to the aggregate)
reallocate measure -vserver vmware -path /vol/datastore1 -once true (Test to see if the volume datastore1 needs to be reallocated or not)
reallocate start -vserver vmware -path /vol/datastore1 -force true -once true (Run reallocate on the volume datastore1 within the vmware vserver)

DISKS

storage disk assign -disk 0a.00.1 -owner <node_name> (Assign a specific disk to a node) OR
storage disk assign -count <count> -owner <node_name> (Assign unallocated disks to a node)
storage disk show -ownership (Show disk ownership to nodes)
storage disk show -state broken | copy | maintenance | partner | percent | reconstructing | removed | spare | unfail | zeroing (Show the state of a disk)
storage disk modify -disk NODE1:4c.10.0 -owner NODE1 -force-owner true (Force the change of ownership of a disk)
storage disk removeowner -disk NODE1:4c.10.0 -force true (Remove ownership of a drive)
storage disk set-led -disk Node1:4c.10.0 -action blink -time 5 (Blink the led of disk 4c.10.0 for 5 minutes. Use the blinkoff action to turn it off)

VSERVER

vserver setup (Runs the clustered ONTAP vserver setup wizard)
vserver create -vserver <vserver_name> -rootvolume <vol_name> (Creates a new vserver)
vserver show (Shows all vservers in the system)
vserver show -vserver <vserver_name> (Show information on a specific vserver)

VOLUMES

volume create -vserver <vserver_name> -volume <vol_name> -aggregate <aggr_name> -size 100GB -junction-path /eng/p7/source (Creates a volume within a vserver)
volume move -vserver <vserver_name> -volume <vol_name> -destination-aggregate <aggr_name> -foreground true (Moves a volume to a different aggregate with high priority)
volume move -vserver <vserver_name> -volume <vol_name> -destination-aggregate <aggr_name> -cutover-action wait (Moves a volume to a different aggregate with low priority but does not cutover)
volume move trigger-cutover -vserver <vserver_name> -volume <vol_name> (Trigger a cutover of a volume move in waiting state)
volume move show (shows all volume moves currently active or waiting. NOTE: You can only do 8 volume moves at one time, more than 8 and they get queued)
system node run -node <node_name> vol size <vol_name> 400g (resize <vol_name> to 400GB) OR
volume size -volume <vol_name> -new-size 400g (resize <vol_name> to 400GB)
volume modify -vserver <vserver_name> -volume <vol_name> -filesys-size-fixed false (Turn off fixed file sizing on volumes)
volume recovery-queue purge-all (An 8.3 command that purges the volume undelete cache)

LUNS

lun show -vserver <vserver_name> (Shows all luns belonging to this specific vserver)
lun modify -vserver <vserver_name> -path <lun_path> -space-allocation enabled (Turns on space allocation so you can run lun reclaims via VAAI)
lun geometry -vserver <vserver_name> -path /vol/vol1/lun1 (Displays the lun geometry)

NFS

vserver nfs modify -vserver <vserver_name> -v4.1-pnfs enabled (Enable pNFS. NOTE: Cannot coexist with NFSv4)

FCP

storage show adapter (Show Physical FCP adapters)
fcp adapter modify -node NODENAME -adapter 0e -state down (Take port 0e offline)
node run <node_name> fcpadmin config (Shows the config of the adapters – Initiator or Target)
node run <node_name> fcpadmin config -t target 0a (Changes port 0a from initiator to target – You must reboot the node)

CIFS

vserver cifs create -vserver <vserver_name> -cifs-server <cifs_server_name> -domain <domain_name> (Enable CIFS)
vserver cifs share create -share-name root -path / (Create a CIFS share called root)
vserver cifs share show
vserver cifs show

SMB

vserver cifs options modify -vserver <vserver_name> -smb2-enabled true (Enable SMB2.0 and 2.1)

SNAPSHOTS

volume snapshot create -vserver vserver1 -volume vol1 -snapshot snapshot1 (Create a snapshot on vserver1, vol1 called snapshot1)
volume snapshot restore -vserver vserver1 -volume vol1 -snapshot snapshot1 (Restore a snapshot on vserver1, vol1 called snapshot1)
volume snapshot show -vserver vserver1 -volume vol1 (Show snapshots on vserver1 vol1)

DP MIRRORS AND SNAPMIRRORS

volume create -vserver <vserver_name> -volume vol10_mirror -aggregate <aggr_name> -type DP (Create a destination snapmirror volume)
snapmirror create -vserver <vserver_name> -source-path sysadmincluster://vserver1/vol10 -destination-path sysadmincluster://vserver1/vol10_mirror -type DP (Create a snapmirror relationship for sysadmincluster)
snapmirror initialize -source-path sysadmincluster://vserver1/vol10 -destination-path sysadmincluster://vserver1/vol10_mirror -type DP -foreground true (Initialize the snapmirror example)
snapmirror update -source-path vserver1:vol10 -destination-path vserver2:vol10_mirror -throttle 1000 (Snapmirror update and throttle to 1000KB/sec)
snapmirror modify -source-path vserver1:vol10 -destination-path vserver2:vol10_mirror -throttle 2000 (Change the snapmirror throttle to 2000)
snapmirror restore -source-path vserver1:vol10 -destination-path vserver2:vol10_mirror (Restore a snapmirror from destination to source)
snapmirror show (show snapmirror relationships and status)
NOTE: You can create snapmirror relationships between 2 different clusters by creating a peer relationship
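The peer relationship mentioned in the note is created before the cross-cluster snapmirror relationship; a hedged sketch (addresses and names are placeholders, and the prompts and options vary by ONTAP release):

```
cluster peer create -peer-addrs <remote_intercluster_lif_ip> (Peer the two clusters via an intercluster lif)
vserver peer create -vserver vserver1 -peer-vserver vserver2 -applications snapmirror -peer-cluster <remote_cluster_name> (Peer the vservers for snapmirror use)
```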

SNAPVAULT

snapmirror create -source-path vserver1:vol5 -destination-path vserver2:vol5_archive -type XDP -schedule 5min -policy backup-vspolicy (Create snapvault relationship with 5 min schedule using backup-vspolicy)
NOTE: Type DP (asynchronous), LS (load-sharing mirror), XDP (backup vault, snapvault), TDP (transition), RST (transient restore)

NETWORK INTERFACE

network interface show (show network interfaces)
network interface modify -vserver vserver1 -lif cifs1 -address 192.168.1.10 -netmask 255.255.255.0 -force-subnet-association (Data Ontap 8.3 – forces the lif to use an IP address from the subnet range that has been setup)
network port show (Shows the status and information on current network ports)
network port modify -node * -port <vif_name> -mtu 9000 (Enable jumbo frames on interface <vif_name>)
network port modify -node * -port <data_port_name> -flowcontrol-admin none (Disables flow control on port <data_port_name>)
network interface revert * (revert all network interfaces to their home port)

INTERFACE GROUPS

ifgrp create -node <node_name> -ifgrp <vif_name> -distr-func ip -mode multimode (Create an interface group called <vif_name> on <node_name>)
network port ifgrp add-port -node <node_name> -ifgrp <vif_name> -port <port_name> (Add a port to <vif_name>)
net int failover-groups create -failover-group data__fg -node <node_name> -port <port_name> (Create a failover group – Complete on both nodes)
ifgrp show (Shows the status and information on current interface groups)
net int failover-groups show (Show Failover Group Status and information)

ROUTING GROUPS

network interface show-routing-group (show routing groups for all vservers)
network routing-groups show -vserver vserver1 (show routing groups for vserver1)
network routing-groups route create -vserver vserver1 -routing-group 10.1.1.0/24 -destination 0.0.0.0/0 -gateway 10.1.1.1 (Creates a default route on vserver1)
ping -lif-owner vserver1 -lif data1 -destination www.google.com (ping www.google.com via vserver1 using the data1 port)

DNS

services dns show (show DNS)

UNIX

vserver services unix-user show
vserver services unix-user create -vserver vserver1 -user root -id 0 -primary-gid 0 (Create a unix user called root)
vserver name-mapping create -vserver vserver1 -direction win-unix -position 1 -pattern (.+) -replacement root (Create a name mapping from windows to unix)
vserver name-mapping create -vserver vserver1 -direction unix-win -position 1 -pattern (.+) -replacement sysadmin011 (Create a name mapping from unix to windows)
vserver name-mapping show (Show name-mappings)

NIS

vserver services nis-domain create -vserver vserver1 -domain vmlab.local -active true -servers 10.10.10.1 (Create nis-domain called vmlab.local pointing to 10.10.10.1)
vserver modify -vserver vserver1 -ns-switch nis-file (Name Service Switch referencing a file)
vserver services nis-domain show

NTP

system services ntp server create -node <node_name> -server <ntp_server> (Adds an NTP server to <node_name>)
system services ntp config modify -enabled true (Enable ntp)
system node date modify -timezone <Area/Location> (Sets the timezone, e.g. Australia/Sydney)
node date show (Show date on all nodes)

DATE AND TIME

timezone -timezone Australia/Sydney (Sets the timezone for Sydney. Type ? after -timezone for a list)
date 201307090830 (Sets date for yyyymmddhhmm)
date -node <node_name> (Displays the date and time for the node)

CONVERGED NETWORK ADAPTERS (FAS 8000)

ucadmin show -node NODENAME (Show CNA ports on specific node)
ucadmin modify -node NODENAME -adapter 0e -mode cna (Change adapter 0e from FC to CNA. NOTE: A reboot of the node is required)

PERFORMANCE

statistics show-periodic -object volume -instance volumename -node node1 -vserver vserver1 -counter total_ops|avg_latency|read_ops|read_latency (Show the specific counters for a volume)
statistics show-periodic -object nfsv3 -instance vserver1 -counter nfsv3_ops|nfsv3_read_ops|nfsv3_write_ops|read_avg_latency|write_avg_latency (Shows the specific nfsv3 counters for a vserver)
sysstat -x 1 (Shows counters for CPU, NFS, CIFS, FCP, WAFL)

GOTCHAS

Removing a port from an ifgrp – To remove a port from an ifgrp, you must first shut down any sub-interfaces of that ifgrp. For example, if your ifgrp is named a0a and it carries a VLAN interface called a0a-100, shut down a0a-100 first; you will then be able to remove the port from the ifgrp.
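A hedged sketch of that order of operations (node, ifgrp, VLAN, and port names are placeholders; exact syntax varies by ONTAP release):

```
network port modify -node node1 -port a0a-100 -up-admin false (Shut down the VLAN sub-interface first)
network port ifgrp remove-port -node node1 -ifgrp a0a -port e0a (Then remove the member port from the ifgrp)
```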
FCoE – If you run multiple 10Gb Converged Network Adapters connecting to a Nexus 5k, it is not allowed or supported to run more than one link per port-channel per switch when that port-channel or interface backs a virtual fibre channel interface (VFC). For example, if you have 2 x CNAs in your NetApp Clustered ONTAP FAS, connect e1a and e2a to Nexus switch 1, connect e1b and e2b to Nexus switch 2, and create one port-channel, you will get an error when you try to bind the interface to the VFC: “VFC cannot be bound to Port Channel as it has more than one member”.
FCoE Lif Moves – To move an FCoE lif from its current home-port in clustered ONTAP, you must first offline the FCoE lif, perform the lif move, and then online the lif.

Thursday, March 17, 2016

NETAPP DEDUPLICATION


The first step will be to ensure that your NetApp storage system is licensed for deduplication. As of March 10, NetApp made the NearStore option, which was a prerequisite for deduplication, free. Yes, you read that right: free. Since NearStore is a prerequisite, you’ll need to be sure to license that first:
license add <Code for NearStore>  
license add <Code for Deduplication>
Once deduplication is licensed, then you can enable it on a per-volume basis using the sis on command:
sis on /vol/<volname>
Note, however, that the volume cannot exceed a certain size, based on the storage system model, in order for deduplication to work, and the volume must never have exceeded that limit: you can’t shrink an oversized volume down to the limit and then run deduplication.
Once it’s running, you can check the status with:
sis status /vol/<volname>
After it’s finished running, you can see your space savings like this:
df -s /vol/<volname>
After running deduplication on a small NFS volume that housed only three VMs, the df -s command showed a space savings of 64%. That’s pretty impressive!
Moving forward, deduplication will run automatically every night at midnight, as shown by this command:
sis config /vol/<volname>
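The savings percentage reported by df -s is simply the saved space as a fraction of what the volume would consume without deduplication. A quick sanity check with illustrative numbers (not taken from a real volume):

```shell
# Illustrative numbers: 360 MB still used, 640 MB reclaimed by dedup
used=360
saved=640
# savings% = saved / (used + saved)
echo "$(( saved * 100 / (used + saved) ))%"   # prints 64%
```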

VMware ESXi Interview Q & A

1. What is a Hypervisor?
It is a program that allows multiple operating systems to share a single hardware host. Each operating system appears to have the host's processor, memory, and other resources all to itself. However, the hypervisor is actually controlling the host processor and resources, allocating what is needed to each operating system in turn and making sure that the guest operating systems (called virtual machines) cannot disrupt each other.

2. What is the hardware version used in VMware ESXi 5.5?
Version 10

Below is the table showing the different version of hardware used in different VMware products along with their release version
Virtual Hardware Version    Products
10                          ESXi 5.5, Fusion 6.x, Workstation 10.x, Player 6.x
9                           ESXi 5.1, Fusion 5.x, Workstation 9.x, Player 5.x
8                           ESXi 5.0, Fusion 4.x, Workstation 8.x, Player 4.x
7                           ESXi/ESX 4.x, Fusion 2.x/3.x, Workstation 6.5.x/7.x, Player 3.x
6                           Workstation 6.0.x
4                           ACE 2.x, ESX 3.x, Fusion 1.x, Player 2.x
3 and 4                     ACE 1.x, Player 1.x, Server 1.x, Workstation 5.x, Workstation 4.x
3                           ESX 2.x, GSX Server 3.x

3. What is the difference between the vSphere ESX and ESXi architectures?
VMware ESX and ESXi are both bare metal hypervisor architectures that install directly on the server hardware.
Although neither hypervisor architecture relies on an OS for resource management, the vSphere ESX architecture relied on a Linux operating system, called the Console OS (COS) or service console, to perform two management functions: executing scripts and installing third-party agents for hardware monitoring, backup, or systems management.
In the vSphere ESXi architecture, the service console has been removed. The smaller code base of vSphere ESXi represents a smaller “attack surface” and less code to patch, improving reliability and security.

4. What is a .vmdk file?
This isn't the file containing the raw data. Instead, it is the disk descriptor file, which describes the size and geometry of the virtual disk. This file is in text format and contains the name of the –flat.vmdk file with which it is associated, as well as the hard drive adapter type, drive sectors, heads, cylinders, etc. One of these files exists for each virtual hard drive assigned to your virtual machine. You can tell which –flat.vmdk file it is associated with by opening the file and looking at the Extent description field.
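For illustration, a hypothetical descriptor for a disk named myvm.vmdk might look like this (all values are made up); the Extent description line names the associated -flat.vmdk file:

```
# Disk DescriptorFile
version=1
CID=fffffffe
parentCID=ffffffff
createType="vmfs"

# Extent description
RW 16777216 VMFS "myvm-flat.vmdk"

# The Disk Data Base
ddb.adapterType = "lsilogic"
ddb.geometry.cylinders = "1044"
ddb.geometry.heads = "255"
ddb.geometry.sectors = "63"
```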


5. What are the different types of virtualization?
Server Virtualization – consolidating multiple physical servers into virtual servers that run on a single physical server.

Application Virtualization – an application runs on another host from where it is installed in a variety of ways. It could be done by application streaming, desktop virtualization or VDI, or a VM package (like VMware ACE creates with a player). Microsoft Softgrid is an example of Application virtualization.

Presentation Virtualization – This is what Citrix MetaFrame (and the ICA protocol) as well as Microsoft Terminal Services (and RDP) create. With presentation virtualization, an application actually runs on another host, and all that you see on the client is the screen from where it is run.

Network Virtualization – with network virtualization, the network is “carved up” and can be used for multiple purposes such as running a protocol analyzer inside an Ethernet switch. Components of a virtual network could include NICs, switches, VLANs, network storage devices, virtual network containers, and network media.

Storage Virtualization – with storage virtualization, the disk/data storage for your data is consolidated to and managed by a virtual storage system. The servers connected to the storage system aren’t aware of where the data really is. Storage virtualization is sometimes described as “abstracting the logical storage from the physical storage.”

6. What is VMware vMotion and what are its requirements?
VMware VMotion enables the live migration of running virtual machines from one physical server to another with zero downtime.

VMotion lets you:
·         Automatically optimize and allocate entire pools of resources for maximum hardware utilization and availability.
·         Perform hardware maintenance without any scheduled downtime.
·         Proactively migrate virtual machines away from failing or underperforming servers.
Below are the pre-requisites for configuring vMotion
Each host must be correctly licensed for vMotion
Each host must meet shared storage requirements
vMotion migrates the vm from one host to another, which is only possible when both hosts share common storage, or storage accessible by both the source and target hosts.
Shared storage can be on a Fibre Channel storage area network (SAN), or can be implemented using an iSCSI SAN or NAS.
If you use vMotion to migrate virtual machines with raw device mapping (RDM) files, make sure to maintain consistent LUN IDs for RDMs across all participating hosts.
Each host must meet the networking requirements
Configure a VMkernel port on each host.
Dedicate at least one GigE adapter for vMotion.
Use at least one 10 GigE adapter if you migrate workloads that have many memory operations.
Use jumbo frames for best vMotion performance.
Ensure that jumbo frames are enabled on all network devices that are on the vMotion path including physical NICs, physical switches and virtual switches.
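On an ESXi host, the jumbo-frame settings above can be applied along these lines (the vSwitch and VMkernel interface names are placeholders; verify the options against your ESXi version):

```
esxcli network vswitch standard set --vswitch-name vSwitch1 --mtu 9000   (set MTU 9000 on the vSwitch)
esxcli network ip interface set --interface-name vmk1 --mtu 9000         (set MTU 9000 on the vMotion VMkernel port)
```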

7. What is the difference between clone and template in VMware?
Clone
·         A clone is a copy of virtual machine.
·         You cannot convert a cloned virtual machine back.
·         A clone of a virtual machine can be created while the virtual machine is powered on.
·         Cloning can be done in two ways namely Full Clone and Linked Clone.
·         A full clone is an independent copy of a virtual machine that shares nothing with the parent virtual machine after the cloning operation. Ongoing operation of a full clone is entirely separate from the parent virtual machine.
·         A linked clone is a copy of a virtual machine that shares virtual disks with the parent virtual machine in an ongoing manner. This conserves disk space, and allows multiple virtual machines to use the same software installation.
·         Cloning a virtual machine can save time if you are deploying many similar virtual machines. You can create, configure, and install software on a single virtual machine, and then clone it multiple times, rather than creating and configuring each virtual machine individually.
Template
·         A template is a master copy or a baseline image of a virtual machine that can be used to create many clones.
·         Templates cannot be powered on or edited, and are more difficult to alter than an ordinary virtual machine.
·         You can convert the template back to Virtual Machine to update the base template with the latest released patches and updates and to install or upgrade any software and again convert back to template to be used for future deployment of Virtual Machines with the latest patches.
·         Converting a virtual machine to a template cannot be performed while the virtual machine is powered on. Only Clone to Template can be performed while the virtual machine is powered on.
·         A template offers a more secure way of preserving a virtual machine configuration that you want to deploy many times.
·         When you clone a virtual machine or deploy a virtual machine from a template, the resulting cloned virtual machine is independent of the original virtual machine or template.

8. What is promiscuous mode in VMware?
Promiscuous mode is a security policy which can be defined at the virtual switch or portgroup level
A virtual machine, Service Console or VMkernel network interface in a portgroup which allows use of promiscuous mode can see all network traffic traversing the virtual switch.
If this mode is set to reject, packets are delivered only to their intended port, so only the intended virtual machine can see the communication.
Example: suppose you run a virtual XP instance inside a Windows VM. If promiscuous mode is set to reject, the virtual XP won't be able to reach the network unless promiscuous mode is enabled for the Windows VM.

9. What is the difference between Thick provision Lazy Zeroed, Thick provision Eager Zeroed and Thin provision?
Thick Provision Lazy Zeroed
·         Creates a virtual disk in a default thick format.
·         Space required for the virtual disk is allocated when the virtual disk is created.
·         Data remaining on the physical device is not erased during creation, but is zeroed out on demand at a later time on first write from the virtual machine.
·         Using the default flat virtual disk format does not zero out or eliminate the possibility of recovering deleted files or restoring old data that might be present on this allocated space.
·         You cannot convert a flat disk to a thin disk.
Thick Provision Eager Zeroed
·         A type of thick virtual disk that supports clustering features such as Fault Tolerance.
·         Space required for the virtual disk is allocated at creation time.
·         In contrast to the flat format, the data remaining on the physical device is zeroed out when the virtual disk is created.
·         It might take much longer to create disks in this format than to create other types of disks.
Thin Provision
·         It provides on-demand allocation of blocks of data.
·         Not all of the space allocated at creation time is consumed on the datastore; only the space holding written data is used, and usage grows as data is added to the disk.
·         With thin provisioning, storage capacity utilization efficiency can be automatically driven up towards 100% with very little administrative overhead.
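Thin provisioning behaves much like a sparse file: the logical size is fixed at creation, but blocks are only allocated as data is written. A local illustration of the same idea (plain Linux tools, not ESXi itself):

```shell
# Create a "thin" 1 GiB file: logical size is 1 GiB, almost nothing allocated
truncate -s 1G thin.img
ls -l thin.img    # size field shows 1073741824 bytes (logical size)
du -k thin.img    # shows ~0 KB of blocks actually allocated
rm thin.img
```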

10. What is a snapshot?
A snapshot is a “point in time image” of a virtual guest operating system (VM). That snapshot contains an image of the VMs disk, RAM, and devices at the time the snapshot was taken. With the snapshot, you can return the VM to that point in time, whenever you choose. You can take snapshots of your VMs, no matter what guest OS you have and the snapshot functionality can be used for features like performing image level backups of the VMs without ever shutting them down.

11. What is VDI?
·         VDI stands for Virtual Desktop Infrastructure, where end-user physical machines such as desktops or laptops are virtualized; VMware describes VDI as “delivering desktops from the data center”.
·         With VDI, end users connect to their desktops using a device called a thin client.
·         End users can also connect to their desktops using VMware Horizon View installed on a desktop or mobile device.

12. What is VMware HA?
·         VMware HA (High Availability) works at the host level and is configured on the cluster.
·         A cluster configured with HA will automatically restart all the vms that were running on a failed host on another host in the same cluster.
·         VMware HA continuously monitors all ESX Server hosts in a cluster and detects failures.
·         The VMware HA agent on each host maintains a heartbeat with the other hosts in the cluster using the service console network. Each server sends heartbeats to the other servers in the cluster at five-second intervals. If a server misses heartbeats over three consecutive heartbeat intervals, VMware HA initiates the failover action of restarting all affected virtual machines on other hosts.
·         You can set virtual machine restart priority in case of host failure depending upon the critical nature of the vm.
NOTE: In a host failure, HA will RESTART the vms on a different host, so the vms' state is interrupted; it is not a live migration.
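From those numbers, the worst-case detection window is simply three missed five-second heartbeats (a simplification that ignores election and restart overhead):

```shell
interval=5   # seconds between heartbeats
missed=3     # consecutive missed heartbeats before failover is initiated
echo "$(( interval * missed )) seconds"   # prints 15 seconds
```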

13. What is the difference between VMware HA and vMotion?
VMware HA is used when a host inside a cluster fails: all the virtual machines that were running on it are restarted on a different host in the same cluster.
vMotion, by contrast, performs live migration of vms between hosts and is also used by other functionality such as DRS.

NOTE: HA can work perfectly well without vMotion, as its primary function is to restart vms from the affected host on a working host, but this is service affecting: the vms are 'powered off' and then 'powered on' on the new host.

14. What is storage vMotion?
·         Storage vMotion is similar to vMotion in the sense that "something" related to the VM is moved with no downtime to the VM guest and end users. However, with Storage vMotion the VM guest stays on the server it resides on; it is the virtual disk for that VM that moves.
·         With Storage vMotion, you can migrate a virtual machine and its disk files from one datastore to another while the virtual machine is running.
·         You can choose to place the virtual machine and all its disks in a single location, or select separate locations for the virtual machine configuration file and each virtual disk.
·         During a migration with Storage vMotion, you can transform virtual disks from Thick-Provisioned Lazy Zeroed or Thick-Provisioned Eager Zeroed to Thin-Provisioned or the reverse.
·         You can perform live migration of virtual machine disk files across any Fibre Channel, iSCSI, FCoE, or NFS storage.

15. What is VMware DRS and how does it work?
·         DRS stands for Distributed Resource Scheduler; it dynamically balances resources across the hosts in a cluster or resource pool.
·         VMware DRS allows users to define the rules and policies that decide how virtual machines share resources and how these resources are prioritized among multiple virtual machines.
·         Resources are allocated to the virtual machine by either migrating it to another server with more available resources or by making more “space” for it on the same server by migrating other virtual machines to different servers.
·         The live migration of virtual machines to different physical servers is executed completely transparently to end users through VMware vMotion.
·         VMware DRS can be configured to operate in either automatic or manual mode. In automatic mode, VMware DRS determines the best possible distribution of virtual machines among different physical servers and automatically migrates virtual machines to the most appropriate physical servers. In manual mode, VMware DRS provides a recommendation for optimal placement of virtual machines, and leaves it to the system administrator to decide whether to make the change.
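The automatic-versus-manual behavior can be sketched as a toy balancer in Python. This is a deliberately simplified illustration, not the real DRS algorithm; the host names and the 20% imbalance threshold are assumptions for the example:

```python
# Illustrative sketch (not the actual DRS algorithm): recommend moving load
# from the busiest host to the idlest one; apply it only in automatic mode.

def recommend_migration(load_by_host):
    """Return (source_host, target_host) if imbalance exceeds the threshold."""
    busiest = max(load_by_host, key=load_by_host.get)
    idlest = min(load_by_host, key=load_by_host.get)
    if load_by_host[busiest] - load_by_host[idlest] > 20:  # imbalance threshold (%)
        return busiest, idlest
    return None

def run_drs(load_by_host, mode="manual"):
    """Manual mode only returns the recommendation; automatic mode applies it."""
    rec = recommend_migration(load_by_host)
    if rec and mode == "automatic":
        src, dst = rec
        moved = load_by_host[src] // 4   # pretend one VM carries 1/4 of the load
        load_by_host[src] -= moved
        load_by_host[dst] += moved
    return rec
```

In manual mode the administrator sees the recommendation and decides; in automatic mode the same recommendation is acted on immediately.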
16. What is VMware Fault Tolerance?
·         VMware Fault Tolerance provides continuous availability to applications running in a virtual machine, preventing downtime and data loss in the event of server failures.
·         VMware Fault Tolerance, when enabled for a virtual machine, creates a live shadow instance of the primary, running on another physical server.
·         The two instances are kept in virtual lockstep with each other using VMware vLockstep technology.
·         The two virtual machines play the exact same set of events, because they get the exact same set of inputs at any given time.
·         The two virtual machines constantly heartbeat against each other and if either virtual machine instance loses the heartbeat, the other takes over immediately. The heartbeats are very frequent, with millisecond intervals, making the failover instantaneous with no loss of data or state.
·         VMware Fault Tolerance requires a dedicated network connection, separate from the VMware VMotion network, between the two physical servers.
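The lockstep idea above (identical inputs produce identical state, so the secondary can take over instantly) can be shown with a minimal Python sketch. This models only the concept, not vLockstep itself; the class and event values are invented for the example:

```python
# Illustrative sketch (not vLockstep): primary and secondary apply the same
# ordered input stream, so their state is always identical; if the primary's
# heartbeat stops, the secondary takes over with no loss of state.

class FtVm:
    def __init__(self):
        self.state = 0

    def apply(self, event):
        self.state += event   # deterministic: same inputs -> same state

primary, secondary = FtVm(), FtVm()
for event in [3, 7, 2]:       # identical input stream delivered to both
    primary.apply(event)
    secondary.apply(event)

primary_alive = False         # primary's heartbeat is lost
active = primary if primary_alive else secondary
```

Because both instances replayed the same events, the secondary's state matches the primary's exactly at the moment of takeover.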
17. In a cluster with more than 3 hosts, can you tell Fault Tolerance where to put the Fault Tolerance virtual machine, or does it choose on its own?
You can place the original (or Primary virtual machine). You have full control with DRS or vMotion to assign it to any node. The placement of the Secondary, when created, is automatic based on the available hosts. But when the Secondary is created and placed, you can vMotion it to the preferred host.

18. How many virtual CPUs can I use on a Fault Tolerant virtual machine?
vCenter Server 4.x and vCenter Server 5.x support 1 virtual CPU per protected virtual machine.

19. What happens if vCenter Server is offline when a failover event occurs?
When Fault Tolerance is configured for a virtual machine, vCenter Server need not be online for FT to work. Even if vCenter Server is offline, failover still occurs from the Primary to the Secondary virtual machine. Additionally, the spawning of a new Secondary virtual machine also occurs without vCenter Server.

20. What is the difference between Type 1 and Type 2 Hypervisor?
Type 1 Hypervisor
1.    This is also known as Bare Metal or Embedded or Native Hypervisor.
2.    It works directly on the hardware of the host and can monitor operating systems that run above the hypervisor.
3.    It is completely independent from the Operating System.
4.    The hypervisor is small as its main task is sharing and managing hardware resources between different operating systems.
5.    A major advantage is that any problems in one virtual machine or guest operating system do not affect the other guest operating systems running on the hypervisor.
6.    Examples: VMware ESXi Server, Microsoft Hyper-V, Citrix/Xen Server

Type 2 Hypervisor
1.    This is also known as Hosted Hypervisor.
2.    In this case, the hypervisor is installed on an operating system and then supports other operating systems above it.
3.    It is completely dependent on the host operating system for its operations.
4.    While having a base operating system allows better specification of policies, any problem in the base operating system affects the entire system, even if the hypervisor running above the base OS is secure.
5.    Examples: VMware Workstation, Microsoft Virtual PC, Oracle Virtual Box
21. How does vSphere HA work?
When we configure multiple hosts for HA cluster, a single host is automatically elected as the master host. The master host communicates with vCenter Server and monitors the state of all protected virtual machines and of the slave hosts. When you add a host to a vSphere HA cluster, an agent is uploaded to the host and configured to communicate with other agents in the cluster.

22. What are the monitoring methods used for vSphere HA?
The master and slave hosts use two methods to monitor host status:
·         Datastore Heartbeat
·         Network Heartbeat
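Combining the two heartbeat channels lets the master distinguish a dead host from one that is merely network-isolated. The function below is an assumed illustration of that decision, not a vSphere API:

```python
# Illustrative sketch (assumed names, not vSphere APIs): classify a slave
# host by combining its network heartbeat and datastore heartbeat.

def classify_host(network_hb, datastore_hb):
    if network_hb:
        return "healthy"    # reachable over the management network
    if datastore_hb:
        return "isolated"   # still writing heartbeats to shared storage,
                            # so it is alive but cut off from the network
    return "failed"         # no sign of life: restart its VMs elsewhere
```

A host with no network heartbeat but a live datastore heartbeat is isolated, not failed, so its VMs should not be restarted elsewhere.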
23. What are the roles of master host in vSphere HA?
·         Monitoring the state of slave hosts. If a slave host fails or becomes unreachable, the master host identifies which virtual machines need to be restarted.
·         Monitoring the power state of all protected virtual machines. If one virtual machine fails, the master host ensures that it is restarted. Using a local placement engine, the master host also determines where the restart should be done.
·         Managing the lists of cluster hosts and protected virtual machines.
·         Acting as the vCenter Server management interface to the cluster and reporting the cluster health state.
24. How is a Master host elected in vSphere HA environment?
When vSphere HA is enabled for a cluster, all active hosts (those not in standby or maintenance mode, or not disconnected) participate in an election to choose the cluster's master host. The host that mounts the greatest number of datastores has an advantage in the election. Only one master host typically exists per cluster and all other hosts are slave hosts.

If the master host fails, is shut down, is put in standby mode, or is removed from the cluster, a new election is held.
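The election rule described above (active hosts only, most mounted datastores wins) can be sketched as follows. This is a simplification for illustration, not the real election protocol; the tie-break on host name is an assumption made here for determinism:

```python
# Illustrative sketch (not the real vSphere HA election protocol): elect the
# master from the active hosts, favoring the one mounting the most datastores.

def elect_master(datastores_by_host):
    """datastores_by_host maps host name -> number of mounted datastores.
    Only active hosts (not standby/maintenance/disconnected) should be
    passed in. Ties break on the lexicographically smallest host name."""
    return max(sorted(datastores_by_host), key=datastores_by_host.get)
```

All other hosts then become slaves of the elected master; if that host later fails or is removed, the same election runs again over the remaining active hosts.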

25. If vCenter Server goes down in an environment that was preconfigured with vSphere HA and DRS, will HA and DRS still perform their tasks?
vSphere HA is not dependent on vCenter Server for its operations: when HA is configured, an agent is installed on each host, and that agent does its part without vCenter Server. HA also does not use vMotion; it simply restarts the VMs on another host in case of a host failure.

vSphere DRS, however, is very much dependent on vCenter Server, because it uses vMotion for the live migration of VMs between hosts. If vCenter Server goes down, vMotion won't work, so DRS fails.

26. What is the use of VMware Tools?
VMware Tools is a suite of utilities that enhances the performance of the virtual machine's guest operating system and improves management of the virtual machine. Without VMware Tools installed in your guest operating system, the guest lacks important functionality. Installing VMware Tools eliminates or improves these issues:
·         Low video resolution
·         Inadequate color depth
·         Incorrect display of network speed
·         Restricted movement of the mouse
·         Inability to copy and paste and drag-and-drop files
·         Missing sound
·         Provides the ability to take quiesced snapshots of the guest OS
·         Synchronizes the time in the guest operating system with the time on the host

·         Provides support for guest-bound calls created with the VMware VIX API