Tuesday, January 19, 2016

VFILER LAB How to



Use the FAS3170 Simulator for this LAB.
# Use the console, or telnet/ssh to 10.130.10.102 
1. Create a vfiler called test
•   fas3170> vfiler create test -s ipspace1 -i 10.130.10.105 /vol/fas6080a_vfiler1_root  
/vol/fas6080a_vfiler1_nas /vol/fas6080a_vfiler1_san  
•   use .105 address 
•   use ns1 interface 
•   use “netapp” password 
•   don’t set anything else up: no DNS, no NIS; CTRL-C out of cifs setup
2. Show vfiler status
•   fas3170> vfiler status
•   fas3170> vfiler status -a
3. Rename vfiler
•   fas3170>  vfiler rename test fas3170_vfiler1
•   fas3170> vfiler status
4. VFILER LIMIT (how many vfilers on the system)
NOTE: to increase this value, you must reboot for it to take effect.  Limits are based on memory.  In failover mode, a node can host double the limit so it can take over its cluster partner's vfilers.  The simulator is limited to 5 and defaults to 5. 
–  FAS Controllers with <1GB RAM    11 max vFilers  
–  FAS Controllers with >=1GB RAM  26 max vFilers  
–  FAS Controllers with >=2GB RAM  65 max vFilers  
•   fas3170>  vfiler limit  # show it is set to 5 and 2 are in use 
•   fas3170>  vfiler limit 4 # change to 4 
•   fas3170>  vfiler limit  # show change of 4
5. Allow and Disallow Protocols per vfiler
•   Show all protocols (by default all are enabled, but using a protocol requires its base license in vfiler0)
o   fas3170>  vfiler status -a 
•   Disallow rsh, ftp and http 
o   fas3170>  vfiler disallow fas3170_vfiler1 proto=rsh proto=ftp proto=http 
o   fas3170>  vfiler status -a   # confirm
•   Allow FTP 
o   fas3170>  vfiler allow fas3170_vfiler1 proto=ftp 
o   fas3170>  vfiler status -a   # confirm
6. vfiler run and vfiler context
•   vfiler context – go into the vfiler context itself 
o   fas3170> vfiler context fas3170_vfiler1       # notice the prompt change 
o   fas3170_vfiler1@fas3170> vol status
o   fas3170_vfiler1@fas3170> vfiler context vfiler0     # go back to vfiler0 
•   Run a command directly to a vfiler 
o   fas3170> vfiler run fas3170_vfiler1 vol status 
•   Run a command to ALL vfilers 
o   fas3170> vfiler run * vol status 
7. Stop and Start a vfiler
•   fas3170> vfiler stop fas3170_vfiler1 
•   fas3170> vfiler start fas3170_vfiler1 
8. Destroy a vfiler
•   fas3170> vfiler stop fas3170_vfiler1 
•   fas3170> vfiler destroy fas3170_vfiler1   # NOTE: Volumes go back to vfiler0 – enter “YES” to
confirm
•   fas3170> vfiler status -a 
9. Recreate the vfiler (you only need the root volume name to recreate if it was a vfiler before)
•   fas3170>  vfiler create fas3170_vfiler1 -r /vol/fas6080a_vfiler1_root -b fas3170_vfiler1 
o   NOTE: We didn’t specify the other 2 volumes that were in the vfiler, but they are 
added back 
o   fas3170> vfiler status -a 
10. Create another vfiler (fas3170_vfiler2) and move volumes between vfilers
 •   fas3170> vfiler create fas3170_vfiler2 -s ipspace1 -i 10.130.10.106 
/vol/fas6080b_vfiler1_root  /vol/fas6080b_vfiler1_nas /vol/fas6080b_vfiler1_san   
•   use .106 address 
•   use ns1 interface 
•   use “netapp” password 
•   don’t set anything else up. ..no dns, no nis, CTRL-C cifs setup
11. FlexClone a fas3170_vfiler2 volume and add that clone to fas3170_vfiler1
•   fas3170>  snap create fas6080b_vfiler1_nas clone_snap
•   fas3170>  vol clone create fas6080b_vfiler1_nas_clone -s none -b fas6080b_vfiler1_nas 
clone_snap
•   fas3170> vfiler run * vol status  # show volumes (clone is only in vfiler0) 
•   fas3170>  vfiler status -a  #  show volumes in vfilers (but not vfiler0) 
•   fas3170>  vfiler add fas3170_vfiler1 /vol/fas6080b_vfiler1_nas_clone
•   fas3170>  vfiler status -a  # show volumes…clone is now in vfiler1 
12. Move the source of the cloned Volume from fas3170_vfiler2 to fas3170_vfiler1
•   fas3170>  vfiler move fas3170_vfiler2 fas3170_vfiler1 /vol/fas6080b_vfiler1_nas
•   fas3170>  vfiler status -a
13. Remove a volume from vfiler1 and it goes back to vfiler0, then add it back to vfiler2 (same as move, but to show the remove and add functions)
•   fas3170>  vfiler remove fas3170_vfiler1 /vol/fas6080b_vfiler1_nas_clone 
•   fas3170> vfiler run * vol status 
•   fas3170>  vfiler status -a 
•   fas3170>  vfiler add fas3170_vfiler2 /vol/fas6080b_vfiler1_nas_clone 
•   fas3170> vfiler run * vol status
•   fas3170>  vfiler status -a
14. Destroy vfiler1 and add all its resources to vfiler2 (both IP and volumes)
•   fas3170>  vfiler stop fas3170_vfiler1 
•   fas3170>  vfiler destroy fas3170_vfiler1  # confirm “YES” 
•   fas3170>  vfiler add fas3170_vfiler2 -i 10.130.10.105 /vol/fas6080a_vfiler1_san 
/vol/fas6080a_vfiler1_nas /vol/fas6080b_vfiler1_nas 
•   fas3170>  vfiler status -a 
15. Remove an IP and volume from vfiler2
•   fas3170>  vfiler remove fas3170_vfiler2 -i 10.130.10.105 
•   fas3170>  vfiler remove fas3170_vfiler2 /vol/fas6080a_vfiler1_san
•   fas3170>  vfiler status -a
16. Use "setup" to modify IP/DNS/NIS/administrator host name and IP/root password for a vfiler with the setup command (see alternate methods). Setup will whack many setup files, so be careful to restore from .bak if needed.
•   **** NOTE: You can ifconfig a new interface or ifconfig alias an existing interface from vfiler0, or run “vfiler run vfilername setup -e interface:ip:subnet”.  Setup will whack several setup files (hosts, hosts.equiv, resolv.conf, exports, nsswitch.conf), so be careful to restore from .bak if needed.  Setup -e will create an alias if the interface is already in use.
a.   IP    [-e <ifname>:<ipv4 address | [ipv6 address]>:<netmask | prefixlen>,...]  # alternatively use “ifconfig”, which can do the same without whacking setup files; you must add the IP with “-i” though 
i. fas3170> vfiler add fas3170_vfiler2 -i 10.130.10.225 
ii. fas3170> vfiler status -a  
iii. fas3170> vfiler run fas3170_vfiler2 setup -e ns1:10.130.10.225:255.255.255.0 
1. confirm /etc/rc of vfiler0 has the correct ip
iv. fas3170> vfiler status -a  
b.   DNS  [-d <DNS domain name>:<DNS server ipv4 address | [DNS server ipv6 
address]>:...]  # alternatively…manually set “options dns” and update resolv.conf 
i. fas3170> vfiler run fas3170_vfiler2 setup -d 
insight.com:10.130.10.2:10.130.10.3 
1. confirm “vfiler run fas3170_vfiler2 options dns” and “rdfile /vol/fas6080b_vfiler1_root/etc/resolv.conf”
c.   NIS   [-n <NIS domain name>:<NIS server ipv4 address | [NIS server ipv6 address]>:...] 
# alternatively change “options nis” settings in the vfiler 
i. fas3170> vfiler run fas3170_vfiler2 setup -n 
netapp.nis:10.130.10.2:10.130.10.3 
1. confirm “vfiler run fas3170_vfiler2 options nis”
d.   Administrator Host Name and IP  [-a <ipv4 address | [ipv6 address]> | <name>:<ipv4
address | [ipv6 address]>]  # alternatively update hosts.equiv and exports in vfiler root etc 
directory. 
i.   fas3170> vfiler run fas3170_vfiler2 setup -a 10.130.10.250
1.   Confirm “rdfile /vol/fas6080b_vfiler1_root/etc/hosts.equiv” and “rdfile /vol/fas6080b_vfiler1_root/etc/exports” 
e.   Root Password  [-p <root password>] # alternatively run “passwd” in the vfiler. 
i. fas3170> vfiler run fas3170_vfiler2 setup -p netapp
1. confirm ssh access direct to the vfiler (“ssh root@vfiler” for a non-interactive command)
17. Stop and Destroy vfiler2 and destroy the clone volume for the next labs
•   fas3170>  vfiler stop fas3170_vfiler2
•   fas3170>  vfiler destroy fas3170_vfiler2  # confirm “YES” 
•   fas3170>  vfiler status -a 
•   fas3170> vol offline fas6080b_vfiler1_nas_clone 
•   fas3170> vol destroy fas6080b_vfiler1_nas_clone  # confirm “YES” 
18. View All VFILER Commands
•   fas3170>  man vfiler
•   fas3170>  vfiler
The following commands are available; for more information 
type "vfiler help <command>" 
add                 disallow            migrate             run 
allow               dr                  move                start 
context             help                remove              status 
create              limit               rename              stop 
destroy 
     vfiler help - Help for vfiler command. 
     vfiler context - Set the vfiler context of the CLI. 
     vfiler create - Create a new vfiler. 
     vfiler rename - Rename an existing vfiler. 
     vfiler destroy - Release vfiler resources. 
     vfiler dr - Configure a vfiler for disaster recovery. 
     vfiler add - Add resources to a vfiler. 
     vfiler remove - Remove resources from a vfiler. 
     vfiler migrate - Migrate a vfiler from a remote filer. 
     vfiler move - Move resources between vfilers. 
     vfiler start - Restart a stopped vfiler. 
     vfiler stop - Stop a running vfiler. 
     vfiler status - Provide status on vfiler configuration. 
     vfiler run - Run a command on a vfiler. 
     vfiler allow - Allow use of a protocol on a vfiler. 
     vfiler disallow - Disallow use of a protocol on a vfiler. 
     vfiler limit - Limit the number of vfilers that can be created. 

Monday, January 18, 2016

EMC- NAS Implementation Exam


 EMC  E20-361 

1. A customer's Celerra administrator wants to present several CIFS shares through one file server. The shares are stored on many different physical machines.
What can be used with the Celerra to achieve this goal?
A. Celerra Data Migration Service
B. Distributed File System
C. Nested Mount File System
D. Virtual Data Mover
Answer: B

2. The file system fs01 must be mounted and exported on /mp1 as a CIFS share so only fs01 can be seen by users.
To do this, a system administrator will need to
  mount fs01 to /mp1/fs01
  export /mp1/fs01 as a CIFS share
What other step will the system administrator need to take to present this share so users only see fs01?
A. create the directory /mp1/fs01 from a Windows host
B. move .etc and lost+found to another directory.
C. mark .etc and lost+found as hidden.
D. unmount fs01 from /mp1
Answer: A

3. When you configure a CIFS Server on a Virtual Data Mover, which dynamic configuration information is only
saved to the Virtual Data Mover?
A. Home directory information
B. NFS file system information
C. Usermapper information
D. Virus checker information
Answer: A

4. What is stored in a Virtual Data Mover's root file system?
A. CIFS security tokens
B. Home directory information
C. NAS root directory
D. Usermapper configuration
Answer: B

5. What important function does Kerberos provide in both Active Directory and NFS v4?
A. Access tokens to hold a user's SID
B. Authentication Service
C. Encryption of passwords sent to domain controllers
D. Security encryption used by authentication protocols
Answer: B

6. What is one property of the default CIFS server?
A. Can be used to access all shares on the Data Mover
B. Has a hidden CIFS name of DEFAULT
C. Uses all assigned IP interfaces on the Data Mover
D. Uses all unassigned IP interfaces on the Data Mover
Answer: D

7. What must be done for the CIFS server to appear in Active Directory?
A. Add server to DNS
B. Add to domain using DFS
C. Join CIFS server to domain
D. Manually start the CIFS server
Answer: C

8. What is a way of mapping Windows SIDs to UIDs and GIDs?
A. NDS
B. NetBios
C. NIS
D. Usermapper
Answer: D

9. Which MMC snap-in feature makes the administration of connecting to personal shares on the Celerra simpler?
A. Group Policy
B. Home Directory
C. Shared Folders
D. User Rights Assignment
Answer: B

10. What is the tool used to determine the security event types that are logged in a Data Mover's CIFS security log?
A. audit policy snap-in
B. nas_logviewer
C. server_log
D. User Rights Assignment snap-in
Answer: A

11. Which default authentication mechanism is utilized in a Windows Native Mode environment?
A. Certificate Authentication
B. Kerberos
C. LDAP
D. NTLM
Answer: B

12. You are attempting to join a Celerra CIFS server to a Windows domain. You receive an error: "Failed to complete
command". You are able to ping the DNS server and the domain controller from the Data Mover with no problem.
What will need verification?
A. that the DNS record for the Data Mover is correct
B. that the DNS SOA record for the host is correct
C. that the date and time are set correctly
D. that the share is exported
Answer: C

13. What is the default mount option when implementing an NFS solution?
A. AutoFS
B. Hard mount
C. Soft mount
D. Virtual mount
Answer: B

14. Which type of file locking allows NFS clients to read or write to a file locked by an NFS client?
A. All Lock options
B. No Locking option
C. Read/Write Lock option
D. Write Lock option
Answer: A

15. A Celerra administrator would like to implement NFS. The administrator would like to implement some form of
security while presenting NFS shares to clients.
Which NFS version natively supports strong security?
A. NFS v1
B. NFS v2
C. NFS v3
D. NFS v4
Answer: D

16. After the Celerra file system has been exported to NFS, what should be done on the hosts in order to access the
file system?
A. format the file system
B. mount the file system
C. set export to automount
D. update DNS records
Answer: B

17. Which object resolution can be provided by NIS?
A. Default gateway
B. DNS
C. SAMBA server
D. Username
Answer: D

18. Which component of NFS directs requests to other NFS components?
A. Export
B. Mount
C. Portmapper
D. Remote Procedure Call
Answer: D

19. What is the core component of the NFS protocol?
A. Automounter
B. File Export
C. Remote Procedure Call
D. ypbind
Answer: C

20. Which access policy must be specified when implementing a multi-protocol solution with NFS version 4?
A. Mixed/mixed compat
B. Native
C. NT
D. UNIX
Answer: A

Sunday, January 17, 2016

Some Netapp Basics

Aggregate
An aggregate is the physical storage. It is made up of one or more RAID groups of disks.
I like to think of aggregates as a big hard drive.  there are a lot of similarities in this.  When you buy a hard drive you need to partition it and format it before it can be used.  until then it's basically raw space.  Well, that's an aggregate.  it's just raw space.

Volume
A volume is analogous to a partition.  it's where you can put data.  think of the previous analogy.  an aggregate is the raw space (hard drive), the volume is the partition, it's where you put the file system and data.  some other similarities include the ability to have multiple volumes per aggregate, just like you can have multiple partitions per hard drive.  and you can grow and shrink volumes, just like you can grow and shrink partitions.

Qtree
A qtree is analogous to a subdirectory.  let's continue the analogy.  aggregate is hard drive, volume is partition, and qtree is subdirectory.  why use them? to sort data.  the same reason you use them (at least i hope you use them) on your personal PC.  There are 5 things you can do with a qtree that you can't do with a directory, and that's why they aren't just called directories: oplocks, security style, quotas, SnapVault, and qtree SnapMirror.

LUN
It's a logical representation of space off your SAN.  but the normal question is when do I use a LUN over a CIFS or NFS share/export.  i normally say it depends on the application.  Some applications need local storage; they just can't seem to write data into a NAS (think CIFS or NFS) share.  Microsoft Exchange and SQL are this way.  they require local hard drives.  So the question is, how do we take this network storage and make it look like an internal hard drive?  the answer is a LUN.  it takes a bit of logical space out of the aggregate (actually just a big file sitting in a volume or qtree) and it gets mounted up on the windows box, looking like an internal hard drive.  the file system makes normal SCSI commands against it.  the SCSI commands get encapsulated in FCP or iSCSI and are sent across the network to the SAN hardware, where they're converted back into SCSI commands and then reinterpreted as WAFL reads/writes.  

Some applications know how to use a NAS for storage (think Oracle over NFS, or ESX with NFS datastores) and they don't need LUNs.  they just need access to shared space and they can store their data in it.
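Putting the whole analogy on the command line: a hedged 7-Mode sketch of carving an aggregate into a volume, qtree, and LUN (all names and sizes below are made up for illustration, and exact options vary by Data ONTAP version):

```
fas3170> aggr create aggr1 -r 8 16                        # raw space: an aggregate of 16 disks, RAID groups of 8
fas3170> vol create vol1 aggr1 100g                       # the "partition": a flexible volume inside the aggregate
fas3170> qtree create /vol/vol1/q1                        # the "subdirectory": adds oplocks/security style/quota controls
fas3170> lun create -s 20g -t windows /vol/vol1/q1/lun0   # block space (a big file in the qtree) presented over FCP/iSCSI
```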

Thursday, January 14, 2016

Provision EMC VNX Storage

How to provision storage from VNX Block? / How to assign LUN to Host
and Storage Group from VNX?

Resolution:

Follow the steps below to perform storage provisioning from VNX
(Allocate LUN from VNX to a particular Host):

  1. Make sure the initiators are registered and logged in to the VNX, and that the Failover Mode and Initiator Type are set correctly.
  2. Make sure there are LUNs carved from either a RAID group or a storage pool.
  3. Log in to Unisphere, go to Hosts > Storage Groups. Click the Create button to create a Storage Group.
  4. Provide a Storage Group name and check that the Storage System for which you want to create the Storage Group is correct.
  5. Once you click Apply, click Yes and confirm the operation.
  6. A new pop-up asks whether you wish to add LUNs or connect hosts. Click Yes to add them at the same time; otherwise click No, as the same can be done later using Connect LUNs, Connect Hosts, or the Properties tab.
  7. If you clicked Yes, select the LUNs you wish to present to the server under the Available LUNs window and click Add.
  8. Once the LUN is visible in the Selected LUNs list, select the Host LUN ID (HLU) you wish to assign to the server.
  9. In the Hosts tab of the same Storage Group Properties window, select the host from the Available Hosts list on the left side and move it to the Hosts to be Connected window on the right side.
  10. Click Apply and OK. The new Storage Group will be visible under the Storage Groups list. The Properties tab can be used to make any changes if required.



The same can be done using Command Line Interface (CLI) as well. Here
are the steps:


1. Create Storage Group.


naviseccli -h <SP_IP address> -user <username> -password <password>
-scope 0 storagegroup -create -gname <userdefined_storagegroupname>


2. Connect host to Storage Group.


naviseccli -h <SP_IP address> -user <username> -password <password>
-scope 0 storagegroup -connecthost -host <name_of_host> -gname
<userdefined_storagegroupname>


3. Connect LUN to Storage Group.


naviseccli -h <SP_IP address> -user <username> -password <password>
-scope 0 storagegroup -addhlu -gname <userdefined_storagegroupname>
-hlu <HLU_ID> -alu <ALU_ID>


Note:



If a LUN is already assigned to another server and you want to share
that LUN to the current server, select "All" under the drop-down list
in the Show LUNs tab so that the LUN is visible for assignment.



A Host can be part of only one Storage Group at any point in time.

Fault Tolerance Questions & Answers


1. Which is the most common cause of soft errors in hardware?
a. Thermal Issue
b. Cosmic Rays
c. Alpha Particle
d. Voltage Fluctuation
Answer:b
2. If X is the MTBF of a system and Y is the failure rate of the system then which one is true?
a. X * Y = 1
b. X = Y
c. NX = Y, where N is the life time
d. X/Y = N, where N is the life time
Answer:a
3. Which one of the property is NOT a requirement for Fault Tolerance?
a. Fault Containments
b. Fault Isolation
c. Dynamic Recovery
d. Fail Safe
Answer:d
4. Which of the operating system architecture is suitable for FT based systems?
a. Monolithic Kernel
b. Micro Kernel
c. Real Time Kernel
d. All of the Above
Answer:b
5. The common mechanism used to find latent failure in memory modules:
a. Sniffing
b. Scrubbing
c. Swapping
d. Paging
Answer:a
6. Which one of the availability criteria is optimal for carrier grade class systems?
a. 40 seconds of down time per year
b. 40 minutes of down time per year
c. 10 minutes of down time per year
d. 10 seconds of down time per year
Answer:a

7. MTTR is the best way to characterize
a. Availability
b. Reliability
c. Fault Tolerance
d. Dependability
Answer:a
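The arithmetic behind questions 2 and 6 is easy to check. A small Python sketch (the 50,000-hour MTBF is an arbitrary example value, not from the quiz):

```python
def downtime_seconds_per_year(availability: float) -> float:
    """Yearly downtime implied by an availability fraction."""
    return (1.0 - availability) * 365 * 24 * 3600

# Question 2: MTBF (X) and failure rate (Y) are reciprocals, so X * Y = 1.
mtbf_hours = 50_000.0                 # arbitrary example MTBF
failure_rate = 1.0 / mtbf_hours       # failures per hour
assert abs(mtbf_hours * failure_rate - 1.0) < 1e-12

# Question 6: "five nines" (99.999%) availability allows about 5.26 minutes
# of downtime per year; "six nines" allows about 31.5 seconds.
print(downtime_seconds_per_year(0.99999) / 60)   # ~5.26 minutes
print(downtime_seconds_per_year(0.999999))       # ~31.5 seconds
```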

Deduplication

Data deduplication is a technique to reduce storage needs by eliminating redundant data in your backup environment. Only one copy of the data is retained on storage media, and redundant data is replaced with a pointer to the unique data copy. Dedupe technology typically divides data sets in to smaller chunks and uses algorithms to assign each data chunk a hash identifier, which it compares to previously stored identifiers to determine if the data chunk has already been stored. Some vendors use delta differencing technology, which compares current backups to previous data at the byte level to remove redundant data.
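The chunk-and-hash scheme described above can be illustrated with a toy Python sketch (fixed-size 4 KB chunks and SHA-256 are assumptions for the example; real products typically use variable-size chunking and more elaborate indexes):

```python
import hashlib

CHUNK_SIZE = 4096  # fixed-size chunks for simplicity

def dedupe(data: bytes):
    """Split data into chunks; store each unique chunk once, keep pointers to it."""
    store = {}     # hash -> chunk bytes (the single retained copy)
    pointers = []  # ordered hashes that reconstruct the original stream
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        h = hashlib.sha256(chunk).hexdigest()
        if h not in store:      # new, unique chunk: retain it
            store[h] = chunk
        pointers.append(h)      # redundant chunk: only a pointer is kept
    return store, pointers

def restore(store, pointers) -> bytes:
    """Rebuild the original data from the pointer list."""
    return b"".join(store[h] for h in pointers)

backup = b"A" * 8192 + b"B" * 4096 + b"A" * 4096   # 4 chunks, only 2 unique
store, ptrs = dedupe(backup)
print(len(ptrs), len(store))   # 4 chunks referenced, 2 actually stored
assert restore(store, ptrs) == backup
```

Here four 4 KB chunks are backed up but only two are stored, a 2:1 reduction; repeated full backups of mostly unchanged data are where the 70% to 90% reductions quoted below come from.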
Dedupe technology offers storage and backup administrators a number of benefits, including lower storage space requirements, more efficient disk space use, and less data sent across a WAN for remote backups, replication, and disaster recovery. Jeff Byrne, senior analyst for the Taneja Group, said deduplication technology can have a rapid return on investment (ROI). "In environments where you can achieve 70% to 90% reduction in needed capacity for your backups, you can pay back your investment in these dedupe solutions fairly quickly."
While the overall data deduplication concept is relatively easy to understand, there are a number of different techniques used to accomplish the task of eliminating redundant backup data, and it's possible that certain techniques may be better suited for your environment. So when you are ready to invest in dedupe technology, consider the following technology differences and data deduplication best practices to ensure that you implement the best solution for your needs.

Source Deduplication vs. Target

Deduping can be performed by software running on a server (the source) or in an appliance where backup data is stored (the target). If the data is deduped at the source, redundancies are removed before transmission to the backup target. "If you're deduping right at the source, you get the benefit of a smaller image, a smaller set of data going across the wire to the target," Byrne said. Source deduplication uses client software to compare new data blocks on the primary storage device with previously backed up data blocks. Previously stored data blocks are not transmitted. Source-based deduplication uses less bandwidth for data transmission, but it increases server workload and could increase the amount of time it takes to complete backups.
Source deduplication is well suited for backing up smaller and remote sites because the increased CPU usage doesn't have as big an impact on the backup process. Whitehouse also said virtualized environments are "excellent use cases" for source deduplication because of the immense amounts of redundant data in virtual machine disk (VMDK) files. However, if you have multiple virtual machines (VMs) sharing one physical host, running multiple hash calculations at the same time may overburden the host's I/O resources.
Most well-known data backup applications now include source dedupe, including Symantec Corp.'s Backup Exec and NetBackup, EMC Corp.'s Avamar, CA Inc.'s ArcServe Backup, and IBM Corp.'s Tivoli Storage Manager (TSM) with ProtecTier.
Target deduplication removes redundant data in the backup appliance -- typically a NAS device or virtual tape library (VTL). Target dedupe reduces the storage capacity required for backup data, but does not reduce the amount of data sent across a LAN or WAN during backup. "A target deduplication solution is a purpose built appliance, so the hardware and software stack are tuned to deliver optimal performance," Whitehouse said. "So when you have large backup sets or a small backup window, you don't want to degrade the performance of your backup operation. For certain workloads, a target-based solution might be better suited."
Target deduplication may also fit your environment better if you use multiple backup applications and some do not have built-in dedupe capabilities. Target-based deduplication systems include Quantum Corp.'s DXi series, IBM's TSM, NEC Corp.'s Hydrastor series, FalconStor Software Inc.'s File-interface Deduplication System (FDS), and EMC's Data Domain series.

Inline Deduplication vs. Post-processing Dedupe

Another option to consider is when the data is deduplicated. Inline deduplication removes redundancies in real time as the data is written to the storage target. Software-only products tend to use inline processing because the backup data doesn't land on a disk before it's deduped. Like source deduplication, inline increases CPU overhead in the production environment but limits the total amount of data ultimately transferred to backup storage. Asigra Inc.'s Cloud Backup and CommVault Systems Inc.'s Simpana are software products that use inline deduplication.
Post-process deduplication writes the backup data into a disk cache before it starts the dedupe process. It doesn't necessarily write the full backup to disk before starting the process; once the data starts to hit the disk the dedupe process begins. The deduping process is separate from the backup process so you can dedupe the data outside the backup window without degrading your backup performance. Post-process deduplication also allows you quicker access to your last backup. "So on a recovery that might make a difference," Whitehouse said.
However, the full backup data set is transmitted across the wire to the deduplication disk staging area or to the storage target before the redundancies are eliminated, so you have to have the bandwidth for the data transfer and the capacity to accommodate the full backup data set and deduplication process. Hewlett-Packard Co.'s StorageWorks StoreOnce technology uses post-process deduplication, while Quantum Corp.'s DXi series backup systems use both inline and post-process technologies.
Content-aware or application-aware deduplication products that use delta-differencing technology can compare the current backup data set with previous data sets. "They understand the content of that backup stream, and they know the format that the data is in when the backup application sends it to that target device," Whitehouse said. "They can compare the workload of the current backup to the previous backup to understand what the differences are at a block or at a byte level." Whitehouse said delta-differencing-based products are efficient but they may have to reverse engineer the backup stream to know what it looks like and how to do the delta differencing. Sepaton Inc.'s DeltaStor system and Exagrid System Inc.'s DeltaZone architecture are examples of products that use delta differencing technology.
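Delta differencing can likewise be sketched as a byte-level comparison of the current backup against the previous one. A toy illustration only (it assumes the new backup is at least as long as the old one, and is not any vendor's actual format):

```python
def delta(prev: bytes, curr: bytes):
    """Record only the byte positions where the current backup differs."""
    changes = [(i, b) for i, (a, b) in enumerate(zip(prev, curr)) if a != b]
    tail = curr[len(prev):]   # bytes appended beyond the old backup's length
    return changes, tail

def apply_delta(prev: bytes, changes, tail) -> bytes:
    """Reconstruct the current backup from the previous one plus the delta."""
    out = bytearray(prev)
    for i, b in changes:
        out[i] = b
    return bytes(out) + tail

prev = b"backup-monday--payload"
curr = b"backup-tuesday-payload"
changes, tail = delta(prev, curr)
assert apply_delta(prev, changes, tail) == curr   # only the changed bytes travel
```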

Global Deduplication

Global deduplication removes backup data redundancies across multiple devices if you are using target-based appliances, and across multiple clients with source-based products. It allows you to add nodes that talk to each other across multiple locations to scale performance and capacity. Without global deduplication capabilities, each device dedupes just the data it receives. Some global systems can be configured in two-node clusters, such as FalconStor Software's FDS High Availability Cluster. Other systems use grid architectures to scale to dozens of nodes, such as Exagrid Systems' DeltaZone and NEC's Hydrastor.
The more backup data you have, the more global deduplication can increase your dedupe ratios and reduce your storage capacity needs. Global deduplication also introduces load balancing and high availability to your backup strategy, and allows you to efficiently manage your entire backup data storage environment. Users with large amounts of backup data or multiple locations will gain the most benefits from the technology. Most of the backup software providers offer products with global dedupe, including Symantec NetBackup and EMC Avamar, and data deduplication appliances, such as IBM's ProtecTier and Sepaton's DeltaStor offer global deduplication.

As with all data backup and storage products, the technologies used are only one factor you should consider when evaluating potential deduplication systems. In fact, according to Whitehouse, the type of dedupe technologies vendors use is not the first attribute many administrators look at when investigating deduplication solutions. Price, performance, and ease of use and integration top deduplication shopper's lists, Whitehouse said. Both Whitehouse and Byrne recommend first finding out if your current backup product has deduplication capabilities. If not, analyze your needs long term and study the vendors' architectures to determine if they match your workload and scaling requirements.

Storage Virtualization

Storage virtualization is a concept and term used within computer science. Specifically, storage systems may use virtualization concepts as a tool to enable better functionality and more advanced features within and across storage systems.
Broadly speaking, a 'storage system' is also known as a storage array or Disk array or a filer. Storage systems typically use special hardware and software along with disk drives in order to provide very fast and reliable storage for computing and data processing. Storage systems are complex, and may be thought of as a special purpose computer designed to provide storage capacity along with advanced data protection features. Disk drives are only one element within a storage system, along with hardware and special purpose embedded software within the system.
Storage systems can provide either block accessed storage, or file accessed storage. Block access is typically delivered over Fibre Channel, iSCSI, SAS, FICON or other protocols. File access is often provided using NFS or CIFS protocols.
Within the context of a storage system, there are two primary types of virtualization that can occur:
  • Block virtualization used in this context refers to the abstraction (separation) of logical storage (partition) from physical storage so that it may be accessed without regard to physical storage or heterogeneous structure. This separation allows the administrators of the storage system greater flexibility in how they manage storage for end users.[1]

  • File virtualization addresses the NAS challenges by eliminating the dependencies between the data accessed at the file level and the location where the files are physically stored. This provides opportunities to optimize storage use and server consolidation and to perform non-disruptive file migrations.

Shrink a CLARiiON LUN on Windows Server 2008


Follow these steps:
  1. Install the Host Agent and Naviseccli for FLARE Release 29 on the server.
  2. Install the EMC VDS Provider and Solutions Enabler.  You can download  EMC VDS from EMC Support Site. The VDS provided at EMC Support site conveniently contains Solutions Enabler.
  3. Install the DiskRAID.exe that supports LUN shrinking on Windows 2008. DiskRAID.exe is provided by Microsoft.
  4. Configure the Solutions Enabler and discover CLARiiON within Solutions Enabler.

    a. Add the Solutions Enabler base license:

    C:\program files\EMC\SYMCLI\bin\symlmf
    register License Key (y/[n])?  y
    Enter License Key:  xxxx-xxxx-xxxx-xxxx

    b. Within Solutions Enabler run the authorization for each SP that is connected to the host:

    C:\program files\EMC\SYMCLI\bin > symcfg authorization add -host IP_Address_Of_SPA -username xxxx -password xxx
    C:\program files\EMC\SYMCLI\bin > symcfg authorization add -host IP_Address_Of_SPB -username xxxx -password xxx
    C:\program files\EMC\SYMCLI\bin > symcfg authorization add -host IP_Address_Of_The_Server -username xxxx -password xxx

    Note: Use the Navisphere username and password.

    c. Within Solutions Enabler discover connected arrays:

    C:\program files\EMC\SYMCLI\bin > symcfg discover -clariion
    C:\program files\EMC\SYMCLI\bin > symcfg list -clariion
  5. Shrink the disk in Disk Management of Windows Server.
  6. Use DiskRAID.exe on Windows Server to free up the disk space from the server side.

    a. DISKRAID>  list provider   # iVDS provider is for iSCSI; VDS provider is for FC
    b. DISKRAID>  list subsystems
    c. DISKRAID>  detail provider   # Use this command to check that you are selecting the correct subsystem.
    d. DISKRAID>  list lun   # The LUN numbers listed here are not the LUN numbers in the Navisphere GUI.
    e. DISKRAID>  select lun x   # Select the LUN that you want to shrink.
    f. DISKRAID>  detail lun   # Compare the LUN name and Identifier here with the LUN name and UUID in the Navisphere GUI to make sure you are selecting the correct LUN.
    g. DISKRAID>  shrink lun size=xxGB   # The number must be an integer.

  7. Check the LUN capacity to confirm that the LUN size has been decreased.
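The DiskRAID sequence in step 6 lends itself to a generated script file (DiskRAID, like diskpart, accepts scripted input via a text file — an assumption worth verifying on your DiskRAID version). The sketch below only builds the script text; the LUN index and shrink size are placeholders, and you should always confirm the LUN identifier with `detail lun` against Navisphere before running the shrink:

```python
# Sketch: generate the DiskRAID command script for the shrink
# sequence in step 6. The LUN index and size are placeholders --
# verify the LUN name/UUID against Navisphere before executing.

def diskraid_shrink_script(lun_index, shrink_gb):
    # DiskRAID only accepts whole-GB values for shrink.
    if not isinstance(shrink_gb, int) or shrink_gb <= 0:
        raise ValueError("shrink size must be a positive integer number of GB")
    lines = [
        "list provider",
        "list subsystems",
        f"select lun {lun_index}",
        "detail lun",                      # verify name/UUID by hand first
        f"shrink lun size={shrink_gb}GB",
    ]
    return "\n".join(lines)

print(diskraid_shrink_script(3, 20))
```

Building the script programmatically makes the integer-GB constraint explicit: a fractional size fails fast in Python instead of erroring out inside DiskRAID.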

iSCSI Questions & Answers


This set of multiple-choice SAN storage questions and answers focuses on the iSCSI protocol.
1. iSCSI is a mapping of:
a. SCSI over TCP/IP
b. IP over SCSI
c. FC over IP
d. None of the above
Answer: a
2. iSCSI allows what type of access?
a. Block level
b. File level
c. Both a & b
d. None of the above
Answer: a
3. iSCSI names are:
a. Globally unique
b. Local to the setup
c. Permanent
d. Temporary
Answer: a, c
4. Which of the following is not true of iSCSI names?
a. iSCSI names are associated with iSCSI nodes (targets and initiators).
b. iSCSI names are associated with network adapter cards.
c. iSCSI names are worldwide unique.
d. iSCSI names are permanent.
Answer: b
5. Which of the following is not a valid iSCSI name?
a. iqn.2001-04.com.mystorage:storage.tape1
b. iqn.2001-04.com.mystorage
c. iqn.01-04.com.example.disk
d. None of the above
Answer: c
6. Which of the following is not a valid iSCSI name?
a. eui.1234098769785341
b. eui.4237098769785341
c. eui.12340987697853422.disk
d. None of the above
Answer: c
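The naming rules behind questions 5 and 6 can be checked mechanically: an IQN needs a four-digit year and two-digit month after `iqn.`, and an EUI name is `eui.` followed by exactly 16 hex digits (RFC 3720 naming rules; the regexes below are a simplified sketch, not the full grammar):

```python
import re

# Simplified validators for the two iSCSI name formats:
#   iqn.<yyyy-mm>.<reversed-domain>[:<optional-string>]
#   eui.<exactly 16 hexadecimal digits>
IQN_RE = re.compile(r"^iqn\.\d{4}-\d{2}\.[a-z0-9][a-z0-9.-]*(:.+)?$")
EUI_RE = re.compile(r"^eui\.[0-9A-Fa-f]{16}$")

def is_valid_iscsi_name(name):
    return bool(IQN_RE.match(name) or EUI_RE.match(name))

# Walking through the quiz options:
print(is_valid_iscsi_name("iqn.2001-04.com.mystorage:storage.tape1"))  # True
print(is_valid_iscsi_name("iqn.01-04.com.example.disk"))   # False: 2-digit year
print(is_valid_iscsi_name("eui.1234098769785341"))         # True: 16 hex digits
print(is_valid_iscsi_name("eui.12340987697853422.disk"))   # False: 17 digits + suffix
```

This makes the two "not a valid name" answers concrete: option c in question 5 fails on the truncated date field, and option c in question 6 fails on the digit count and trailing suffix.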
7. A discovery session in iSCSI is used for:
a. Discovering iSCSI targets and their TargetAddresses
b. Probing LUNs on iSCSI targets
c. Either of the above
d. None of the above
Answer: a
8. Which of the following are valid SendTargets commands?
a. SendTargets=iqn.2001-04.com.mystorage:storage.tape1
b. SendTargets=all
c. Both a. and b.
d. None of the above
Answer: c
9. iSCSI targets can be discovered by:
a. SendTargets
b. Static configuration
c. Using SLP/iSNS
d. All of the above
Answer: d
10. Which of the following is false?
a. iSCSI requires login from initiator to target
b. There can be multiple paths between initiator and target
c. Data integrity is ensured using digests
d. None of the above
Answer: d
iSCSI Protocol Interview Questions & Answers
This set of SAN questions and answers helps anyone preparing for NetApp Data ONTAP storage certification exams. One should practice the complete set of questions and answers continuously for 2-3 months for a thorough understanding of SAN fundamentals and for various exams and interviews.
1. Which of the following is false regarding multiple paths?
a. Load balancing can be done
b. Higher throughput can be achieved
c. Path failover can be done
d. None of the above
2. Which of the following is an invalid error recovery level?
a. Session-level recovery
b. SCSI-level recovery
c. Connection-level recovery
d. Digest failure recovery
Answer: b
3. Which of the following statements is true?
a. iSCSI can be implemented in software
b. iSCSI can be implemented in hardware
c. A system can boot from an iSCSI disk over the network
d. All of the above
Answer: d
4. Which of the following statements is false?
a. iSCSI is limited to the local LAN
b. iSCSI can run on both 1Gb and 10Gb networks
c. iSCSI requires a Fibre Channel HBA on the host
d. iSCSI requires a network card on the host
Answer: a, c
5. Which of the following companies have iSCSI products?
a. EqualLogic
b. QLogic
c. EMC
d. All of the above
Answer: d



Netapp Interview Online Test

Q: - Identify the two commands that could be entered on the SnapMirror destination storage system.
(Choose two)
A. options snapmirror.access on
B. options snapmirror.resync on
C. snapmirror initialize
D. snapmirror resync

Correct Answer: CD
Q: - Which two operations can be performed with the SnapDrive for windows graphical user interface?
A. Create volume
B. Create Snapshot copies
C. Create File
D. Create Disk

Correct Answer: BD
Q: - Which storage system command would display the WWPNs of hosts that have logged into storage system
using a Fibre Channel connection?
A. fcp config
B. fcp initiator show
C. fcp show i
D. fcp show initiator

Correct Answer: D
Q: - How can you "throttle" SnapVault updates and baseline transfers so that the primary or secondary is not
transmitting data as fast as it can?
A. Use the -k option in the snapvault start or snapvault modify commands.
B. SnapVault does not support throttling of network throughput.
C. Use the snapvault throttle command.
D. Use the -k option in the snapvault initialize command.
Correct Answer: A
Q: - Node 1 in a clustered pair detects that it has lost connectivity to one of its disk shelves. Node 1 is still up, but it cannot see one of its disk shelves. However, the partner node, Node 2, can see all of the Node 1's disk shelves. Which feature will cause Node 2 to monitor this error condition for a period of three minutes by default, and then forcibly take over Node 1 if the error condition persists?
A. Auto enable of giveback
B. Negotiated Fail Over
C. Takeover on panic
D. cf.quickloop.enable

Correct Answer: B
Q: - In Data ONTAP, the root user is exempt from these two quotas: ______________. (Choose two)
A. User quotas
B. Tree quotas
C. Root quotas
D. Group quotas
E. File quotas

Correct Answer: A D
Q: - Which two Volume SnapMirror (VSM) relationship are supported? (Choose two)
A. Data ONTAP 8.0.2 64-bit -->Data ONTAP 8.1 64-bit
B. Data ONTAP 8.0.2 32-bit --> Data ONTAP 8.0.2 64-bit
C. Data ONTAP 7.3.2 32-bit --> Data ONTAP 8.1 64-bit
D. Data ONTAP 7.3.2 32-bit --> Data ONTAP 8.0.2 64-bit

Correct Answer: A
Q: - An aggregate is composed of twelve 36-Gigabyte disks. A drive fails and only 72-Gigabyte spare disks are
available. Which action will Data ONTAP then perform?
A. Chooses a 72-Gigabyte disk and use it as is.
B. Chooses a 72-Gigabyte disk and right-size it.
C. Halts after 24 hour of running in degraded mode.
D. Alerts you that there are no 36-Gigabyte disks and wait for one to be inserted.

Correct Answer: B
Q: - Which statement is true about expanding an aggregate from 32-bit to 64-bit in place?
A. All aggregates are automatically converted from 32-bit to 64-bit with the Data ONTAP 8.1 upgrade.
B. The expansion is triggered by an aggr convert command.
C. The expansion is triggered by adding disks to exceed 16 TB.
D. The 32-bit aggregates are degraded and must be Volume SnapMirrored to a new 64-bit aggregates with
Data ONTAP 8.1 upgrade.

Correct Answer: C
Q: - What utility on the storage system will allow you to capture network packet information?
A. Snoop
B. Netstats
C. Pktt
D. Traceroute

Correct Answer: C
Q: - The root admin on the UNIX box receives an "Access Denied" message when he attempts to access a newly
mounted qtree. What's the most likely cause of this error?
A. The qtree is missing from the /etc/hosts file.
B. NFS is turned off on the storage system.
C. The qtree is set to ntfs security style.
D. The qtree has not been exported.

Correct Answer: C
Q: - Which two modes support using SnapMirror over multiple paths?
A. Standalone
B. Partner
C. Multi
D. Failover
E. Give back

Correct Answer: CD
Q: - In a Fiber Channel configuration, the host's HBA is referred to as the ___________, and the storage system's
HBA to as the _____________.
A. Target, initiator
B. Primary, secondary
C. Initiator, target
D. Secondary, primary

Correct Answer: C
Q: - Which action will cause a currently in-sync SnapMirror relationship to fail out of sync?
A. Running snapmirror update on the source storage system.
B. Running snapmirror release on the source storage system.
C. Modifying the /etc/snapmirror.conf file for the relationship on the source storage system
D. Modifying the /etc/snapmirror.conf file for the relationship on the destination storage system.

Correct Answer: D
Q: - You are trying to do a single file SnapRestore for a file, but you are receiving an error message that the
directory structure no longer exists. Which is the most likely explanation?
A. Once the directory structure has been deleted, you cannot restore the file using single file SnapRestore.
You must now SnapRestore the volume.
B. Snapshot copies have been created since the original directory structure was deleted.
C. You must recreate the directory structure before trying to restore the file.
D. You cannot restore a file to an alternate location.

Correct Answer: C
Q: - Which mechanism allows you to make LUNs available to some initiators and unavailable to others?
A. LUN masking
B. LUN grouping
C. LUN cloning
D. LUN hiding

Correct Answer: A
Q: - An iSCSI ______________ is established when the host initiator logs into the iSCSI target. Within a
______________ you can have one or more ____________.
A. session, session, connections
B. connection, session, connection
C. connection, connection, sessions
D. session, connection, sessions

Correct Answer: A
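The session/connection relationship tested above can be made concrete with a minimal model: one login establishes a session, and multiple connections per session (MC/S) can be carried inside it. The class and names below are purely illustrative, not a real initiator API:

```python
# Minimal model of the iSCSI session/connection relationship:
# a session is established at login between one initiator and one
# target, and it may carry one or more TCP connections (MC/S).

class ISCSISession:
    def __init__(self, initiator_iqn, target_iqn):
        self.initiator_iqn = initiator_iqn
        self.target_iqn = target_iqn
        self.connections = []  # one or more connections per session

    def add_connection(self, portal):
        self.connections.append(portal)

session = ISCSISession("iqn.2001-04.com.host:init1",
                       "iqn.2001-04.com.mystorage:storage.tape1")
session.add_connection("10.0.0.1:3260")
session.add_connection("10.0.0.2:3260")  # MC/S: second connection, same session
print(len(session.connections))  # 2
```

Note the direction of containment: connections live inside a session, which is why answer A (session, session, connections) is the one that fills the blanks correctly.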
Q: - Which NetApp Virtual Storage Tier component works at the host level?
A. Flash Pool
B. Flash Disk
C. Flash Accel
D. Flash Cache
E. Flash IO

Correct Answer: C
Q: - When using a Protection Manager policy to manage Open Systems SnapVault backups on a UNIX server,
which three are valid objects to include in the data set? (Choose three)
A. The entire client
B. A directory
C. A file
D. A qtree

Correct Answer: ABC
Q: - What does it signify if the disks are "not owned" in a FAS2020 system?
A. The disks are mailbox disks.
B. The disks are spare disks.
C. The disks are data disks.
D. The disks are not used.

Correct Answer: D
Q: - For each Open Systems platform directory to be backed up to the SnapVault secondary storage system you
must execute _____________.
A. An initial baseline copy
B. A temporary copy
C. An incremental copy
D. A scheduled update copy

Correct Answer: A
Q: - What is the maximum distance between a standard clustered pair at 2Gbps?
A. 10 meters
B. 50 meters
C. 500 meters
D. 100 meters

Correct Answer: C
Q: - Which exportfs command will temporally export the resource while ignoring the options specified in the /etc/
exports file?
A. exportfs -v <path>
B. exportfs -u <path>
C. exportfs -a <path>
D. exportfs -i <path>

Correct Answer: D
Q: - Is NetApp storage Encryption supported in Data ONTAP 8.1.1 Cluster-Mode?
A. No, but you can file a PVR to request support.
B. No, it is targeted for a future release of Data ONTAP.
C. Yes, only with a special license installed.
D. Yes, it has been supported since 8.0.1.

Correct Answer: A
Q: - Which two cp types would indicate a busy storage system? (Choose two).
A. cp_from_log_full
B. cp_from_busy
C. cp_from_cp
D. cp_from_timer

Correct Answer: AC
Q: - There are three phases of Non Disruptive Volume Movement (NDVM). What is the correct sequence of these
phases?
A. Setup Phase, Mirror Phase, Cutover Phase
B. Initialization phase, Copy Phase, Migrate Phase
C. Begin Phase, Move Phase, Complete Phase
D. Setup Phase, Data Copy Phase, Cutover Phase

Correct Answer: D
Q: - Which statement describes the results of the SnapMirror resync command?
A. Resynchronization finds the newest common snapshot shared by the two volumes or qtree, and removes
all newer information on the storage system on which the command is run.
B. Resynchronization will cause the loss of all data written to the destination after the original base snapshot
was made.
C. Resynchronization will update the snapshot on the destination filer.
D. Resynchronization will update the snapshot on the source filer.

Correct Answer: A
Q: - If you were troubleshooting and wanted to look at SnapMirror log files, what is the path to these files?
A. /vol/vol0/etc/log/snaplog/
B. /vol/vol0/etc/snapmirror/
C. /vol/vol0/etc/snaplog/
D. /vol/vol0/etc/log/

Correct Answer: D
Q: - Host multi-pathing describes a ____________ solution that has at least two distinct ______________ paths to a LUN.
A. Token ring, physical
B. FC SAN, virtual
C. FC or IP SAN, virtual
D. FC or IP SAN, Physical

Correct Answer: D
Q: - Which two will allow you to read and analyze a packet trace file generated by the storage system? (Choose
two)
A. WireShark
B. Pktt
C. Netmon
D. Eternal View

Correct Answer: AC