NetApp Snapmirror Configuration / Setup How To
Enable snapmirror on the source and destination filer:
filer> options snapmirror.enable on
Setup snapmirror access to the source filer:
There are two ways to control snapmirror access to a source filer. The snapmirror.access option is the preferred method; if it is set to 'legacy', the /etc/snapmirror.allow file defines the access permissions instead. This article uses the legacy method. Set this option on the source filer as follows:
source-filer> options snapmirror.access legacy
To ensure the destination filer is allowed to snapmirror from the source, populate the /etc/snapmirror.allow file on the source filer with the IP address or hostname of the destination filer. If you use the hostname, make sure it resolves via DNS. Otherwise, add the entry into the source filer's local /etc/hosts.
The current values of /etc/snapmirror.allow and /etc/hosts can be viewed with the rdfile command:
source-filer> rdfile /etc/snapmirror.allow
source-filer> rdfile /etc/hosts
It is a good idea to look at the current values (if there are any) before writing to these files. The files can be written directly on the NetApp with the wrfile command. Note that wrfile does NOT append; it overwrites. If existing entries need to be kept, paste them back in before adding new entries. Pressing ctrl-c terminates input and writes the file. Be careful using wrfile, especially when editing /etc/hosts. It is safe to use, but be aware of how it behaves as described here.
A safe alternative is to NFS mount /vol/vol0 somewhere and use vi to make the edits.
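For example, populating /etc/snapmirror.allow with wrfile might look like the following (the destination hostname here is a placeholder; remember to paste back any existing entries first, since wrfile overwrites the file):

source-filer> wrfile /etc/snapmirror.allow
destination-filer
^C
source-filer> rdfile /etc/snapmirror.allow
destination-filer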
Create destination snapmirror volumes:
On the destination filer, create volume(s) of the same name (or different) and of same size (or larger) than the source, and restrict them. For example, say there is a source volume vol1 of 100GB that needs to be snapmirrored:
destination-filer> vol create vol1 aggr1 100g
destination-filer> vol restrict vol1
Initialize the snapmirror:
destination-filer> snapmirror initialize -S source-filer:vol1 destination-filer:vol1
Note: do not preface the volume name with /vol in a volume snapmirror. Only in a qtree snapmirror should /vol be specified in the initialize command.
Performing a volume snapmirror creates a snapshot copy, known as the baseline snapshot copy, before the initial transfer. After the initial copy is performed, the snapmirror sends to the destination only the blocks that have changed since the last successful replication. During each update, a new snapshot copy is created and compared against the previous one to determine the changed blocks; those changed blocks are what is sent as part of the update transfer.
Remember that the snapmirror process is controlled from the destination filer, meaning that once things are setup, all of your snapmirror commands will be issued there (though you can check the status of the snapmirror(s) from either).
Monitoring the snapmirror:
Snapmirrors can be monitored with the snapmirror status command.
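An illustrative example of the output is shown below (the hostnames and lag value are placeholders, and the exact columns can vary by Data ONTAP version):

destination-filer> snapmirror status
Snapmirror is on.
Source                 Destination              State          Lag        Status
source-filer:vol1      destination-filer:vol1   Snapmirrored   00:10:00   Idle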
Updating the snapmirror:
Once the initialization is complete, snapmirrors can be updated on a periodic basis. This can be done manually with the snapmirror update command:
destination-filer> snapmirror update -S source-filer:vol1 destination-filer:vol1
Or updates can be scheduled automatically, using a crontab-like syntax, in /etc/snapmirror.conf. This example runs the snapmirror once a day at 10 PM. Input the desired schedule into /etc/snapmirror.conf on the destination filer:
source-filer:vol1 destination-filer:vol1 - 0 22 * *
<source-filer>:<volume> <destination-filer>:<volume> <arguments> <minute> <hour> <day-of-month> <day-of-week>
Use a "-" (dash) for the arguments field to specify no arguments.
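The arguments field can also carry transfer options instead of a dash. For example, a throttle can be applied with kbs (the value of 2000 kilobytes per second here is purely illustrative):

source-filer:vol1 destination-filer:vol1 kbs=2000 0 22 * *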
For qtree snapmirrors, only the destination volume needs to be created. There is no need to restrict the volume or to create a destination qtree; the snapmirror initialize command creates the qtree for you.
qtrees can be viewed on the source filer with the qtree status command:
otenetapp1> qtree status
Volume   Tree               Style   Oplocks   Status
-------- ------------------ ------  --------  ---------
vol0                        unix    enabled   normal
vol0     hosting4           unix    enabled   normal
vol0     lun                unix    enabled   normal
vol0     prov_control_plc   unix    enabled   normal
In this example, one of two approaches can be taken to snapmirror these qtrees to the destination filer:
- Create a destination volume and snapmirror the qtrees over into that volume
- Create an individual volume for each qtree on the destination filer to break them up into their own volumes
In my case, I went with option 2 and created individual volumes on the destination:
destination> vol create hosting4 aggr4 40g
destination> vol create lun aggr4 40g
destination> vol create prov_control_plc aggr4 40g
destination> vol create otevol0 aggr4 40g
* I also set snap reserve and snap sched to 0 on these volumes, to match the configuration on my source filer (commands not shown above).
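For reference, disabling the snap reserve and the snapshot schedule on a destination volume looks something like this (shown here for hosting4 only; repeat per volume as needed):

destination> snap reserve hosting4 0
destination> snap sched hosting4 0 0 0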
Initialize qtree snapmirrors:
destination> snapmirror initialize -S otenetapp1:/vol/vol0/hosting4 destination:/vol/hosting4/hosting4
destination> snapmirror initialize -S otenetapp1:/vol/vol0/lun destination:/vol/lun/lun
destination> snapmirror initialize -S otenetapp1:/vol/vol0/- destination:/vol/otevol0/root
destination> snapmirror initialize -S otenetapp1:/vol/vol0/prov_control_plc destination:/vol/prov_control_plc/prov_control_plc
Notice the volumes are prefaced with /vol for a qtree snapmirror. This is especially important when utilizing snapmirror.conf to schedule replication:
<source-filer>:/vol/<volume>/<qtree> <destination-filer>:/vol/<volume>/<qtree> <arguments> <minute> <hour> <day-of-month> <day-of-week>
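For example, scheduling the hosting4 qtree snapmirror above to run nightly at 10 PM would look something like this in /etc/snapmirror.conf on the destination filer:

otenetapp1:/vol/vol0/hosting4 destination:/vol/hosting4/hosting4 - 0 22 * *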