Andrew's Storage Administration Notes


Change to Admin mode: priv set admin
Change to Advanced Admin mode: priv set advanced
Change to Diagnostic mode: priv set diag

Find out serial number: sysconfig -a
Find out model number: sysconfig -v
Find out the ONTAP software version: version
Find out CPU Usage: sysstat -M 3
List NFS connections: nfsstat

Show disk sizes:
1. priv set advanced
2. disk_list
3. priv set
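
Run together on the console, the steps above look like this (hostname and prompts are illustrative):

```shell
velma-1> priv set advanced     # disk_list is an advanced-mode command
velma-1*> disk_list            # one line per disk, including capacity
velma-1*> priv set             # drop back to admin mode when done
```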

Show disks that aren't owned: disk show -n

View licences: license
Apply licence: license add KEYHERE

Service Processor:
Find out the status and IP address of the SP: sp status

Show the manual page for a command: man commandhere
Show usage for a command: commandhere help

User Account Administration:
List users: useradmin user list
Create user: useradmin user add duty -c "Duty User" -n "Duty User" -g Administrators

To swap ssh keys: 
1. ssh velma-1
2. priv set advanced
3. ls /etc/sshd
4. on another unix host: mount velma-1:/vol/vol0 /mnt
5. on another unix host: cd /mnt/etc/sshd
6. on another unix host: mkdir -p duty/.ssh
7. on another unix host: copy ssh keys into directory
8. Test: ssh duty@velma-1 "vol status"
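
As a single session, the key swap looks like this (the public key file name and mount point are assumptions; adjust for your client):

```shell
# On the NetApp: confirm the sshd config tree exists
velma-1> priv set advanced
velma-1*> ls /etc/sshd

# On another unix host with root access to vol0:
mount velma-1:/vol/vol0 /mnt
cd /mnt/etc/sshd
mkdir -p duty/.ssh
cat ~/.ssh/id_rsa.pub >> duty/.ssh/authorized_keys   # assumed key file name

# Test key-based login by running a command non-interactively
ssh duty@velma-1 "vol status"
```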

Aggregate Commands:
List aggregates: aggr status
List aggregate options: aggr options
Show aggregate space: aggr show_space -h

Volume Commands:
List volumes: vol status
List volumes with sizes in blocks: vol status -b
Create a volume: vol create volnamehere aggr_3TB_SATA 20g
Create a volume with thin provisioning on: vol create volnamehere -s none aggr_3TB_SATA 20g
Show volume options: vol options volnamehere
Reduce a volume's size by 20GB: vol size volnamehere -20g
Take a volume offline: vol offline volnamehere
Destroy volume: vol destroy volnamehere
Show volume permissions style: qtree status
Set volume permissions style to ntfs: qtree security /vol/volnamehere ntfs

Create a qtree quota: vfiler run vfilernamehere qtree create /vol/cifssharenamehere/qtreedirnamehere

Thin Provisioning:
To enable thin provisioning: vol options volnamehere guarantee none
To disable thin provisioning: vol options volnamehere guarantee volume
To check if thin provisioning is on: vol options volnamehere
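
A quick round trip, assuming a volume named volnamehere:

```shell
velma-1> vol options volnamehere guarantee none     # thin: no space reserved up front
velma-1> vol options volnamehere                    # look for guarantee=none in the output
velma-1> vol options volnamehere guarantee volume   # back to thick (fully reserved)
```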

List snapshots: snap list
Create a snapshot: snap create volumenamehere snapshotnamehere

Deduplication (SIS):
List filesystems that could be deduped and status: df -s
Enable Deduplication: sis on /vol/volnamehere
Disable Deduplication: sis off /vol/volnamehere
Start deduplication scan: sis start -s /vol/volnamehere
Deduplication schedule status: sis status -l
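
The dedupe commands above chain into a simple workflow:

```shell
velma-1> df -s                            # shows space saved per volume by dedupe
velma-1> sis on /vol/volnamehere          # enable dedupe on the volume
velma-1> sis start -s /vol/volnamehere    # -s scans existing data, not just new writes
velma-1> sis status -l                    # long listing: state, schedule, progress
```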

List CIFS shares: cifs shares
Create CIFS share: cifs shares -add staffuserprofiles /vol/staffuserprofiles -comment "test"
Give everyone cifs access: cifs access staffuserprofiles everyone "Full Control"
Give just one user cifs access: vfiler run vfilernamehere cifs access groups STAFF\s1234 "Full Control"
Delete CIFS share: cifs shares -delete staffuserprofiles
Show cifs sessions: cifs sessions
Show cifs sessions verbose: cifs sessions -s
Show cifs AD Domain info: cifs domaininfo
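
Creating, opening up, and removing a test share end to end (share name is illustrative):

```shell
velma-1> cifs shares -add staffuserprofiles /vol/staffuserprofiles -comment "test"
velma-1> cifs access staffuserprofiles everyone "Full Control"
velma-1> cifs sessions                            # who is connected right now
velma-1> cifs shares -delete staffuserprofiles    # clean up the test share
```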

Check what is shared through what interfaces: options interface.blocked

Change Volume Permissions style (unix/ntfs/mixed):
View current style: qtree security /vol/volnamehere
Change style to ntfs: qtree security /vol/volnamehere ntfs

Show NFS shares: rdfile /etc/exports OR exportfs
Show NFS options: options nfs
Stop new volumes (even CIFS-only ones) being added to /etc/exports automatically on creation: options nfs.export.auto_update off

Show all interfaces: ifconfig -a
Show interface groups: ifgrp status

List all options and settings: options

Working with Files:
Display a file: rdfile /etc/hosts

Date and Time:
Show date and time: date
Show time options: options timed

Performance Analysis:
System Utilisation (summary on exit): sysstat -s 5
System Utilisation (summary, utilisation format): sysstat -s -u 5
System Utilisation (extended, 1-second interval): sysstat -x 1
In Depth: priv set advanced; statit -b (statit -e to end)
Stats: stats list
Stats: stats show
Stats: stats show system
Stats Disk: stats show disk
More info:
Check to see if dedupe is running: sis status
Check to see if snapshots are running: snap list (recent snapshots); snap sched shows the schedule
Check to see if replication is running: snapmirror status
Log a support case with NetApp and run perfstat: perfstat -f controller -t 3 -i 5 -F -l username:password > perfstat.out

Browse NAS volume:
1. priv set advanced
2. vol status (shows /vol/directories)
3. ls /vol/directory
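
As a console session (volume name is illustrative):

```shell
velma-1> priv set advanced        # ls is only available in advanced mode
velma-1*> vol status              # note the volume names
velma-1*> ls /vol/volnamehere     # list files at the top of the volume
velma-1*> priv set                # back to admin mode
```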

List vfilers: vfiler status
Show vfiler's resources: vfiler status -r
Show protocols allowed for each vfiler: vfiler status -a
Show CIFS sessions on a vfiler: vfiler run vfilernamehere cifs sessions
Show CIFS shares on a vfiler: vfiler run vfilernamehere cifs shares
Show Quota usage on a vfiler: vfiler run vfilernamehere quota report
Show Qtree Status on a vfiler: vfiler run vfilernamehere qtree status

Change a vfiler quota:
1. rdfile /vol/vfilername/etc/quotas
2. wrfile /vol/vfilername/etc/quotas (wrfile overwrites the whole file: paste the full contents back, press Enter for a final newline, then Ctrl+C to save and exit!)
3. Show the current quota: vfiler run vfilernamehere quota report
4. Force a reread of /etc/quotas: vfiler run vfilernamehere quota resize volumenamehere
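
For reference, an /etc/quotas entry for a 10 GB tree quota looks like this (target path and size are illustrative; columns are target, type, disk limit, file limit):

```shell
## /vol/vfilername/etc/quotas — one rule per line
## target                                   type    disk    files
/vol/cifssharenamehere/qtreedirnamehere     tree    10G     -
```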

Exporting and Mounting /etc from NetApp to a *nix host:
1. Find name of mountpoint on Netapp: df -h
2. Add export to /etc/exports on the NetApp: rdfile then wrfile /vol/vfilername/etc/exports
3. Check that the NetApp export file is correct: rdfile /vol/vfilernamehere/etc/exports
4. If it's correct then export the NFS share/s: vfiler run vfilernamehere exportfs -av
5. Export the share on the NetApp: exportfs -av
6. Check that you can see the share on the *nix client: showmount -e netapphostnamehere/vfilerhostnamehere
7. Mount on the *nix host: mount netapphostname:/vol/mountname /localmountdirectory
OR for vfilers: mount vfilerhostname:/vol/mountname /localmountdirectory
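
A minimal end-to-end example (hostname, volume, and mount point are illustrative):

```shell
# On the NetApp (use vfiler run vfilernamehere ... for a vfiler):
velma-1> exportfs -av              # (re-)export everything in /etc/exports

# On the *nix client:
showmount -e velma-1               # confirm the export is visible
mount velma-1:/vol/vol0 /mnt       # mount it locally
```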

show version: version
show system configuration: sysconfig
Show RAID pool disk membership: sysconfig -r
Show disk ownership: storage show
Show disk ownership: disk show -v
Show model number,version,serial etc..: sysconfig -a


EMC Celerra

Storage Performance Troubleshooting

Common Suspects:
1. Front end port utilisation
2. Cache utilisation
3. Poor architecture (e.g. databases running on SATA)
4. Blocksize mismatches

1. Check zfs blocksize is 8K: zfs get recordsize
Oracle I/O Tuning Guide
Best practices for building Virtualized SPARC
Oracle LUN Alignment guide

1. Check zfs disk scrubbing (cronjobs)
2. Check io stats: iostat -xn 10 ; zpool iostat 10
3. Check zfs and /etc/system tuning (including queue depth)
4. Check ssd_max_throttle (queue depth settings): echo "ssd_max_throttle/D" | mdb -k
   To change on live system: echo "ssd_max_throttle/W 0x20" | mdb -kw
   To change for reboot: vi /etc/system and add: set ssd:ssd_max_throttle = 32
5. Check zfs:zfs_vdev_max_pending (should be the same as ssd:ssd_max_throttle): echo "zfs_vdev_max_pending/D" | mdb -k
   To change on live system: echo "zfs_vdev_max_pending/W 0x20" | mdb -kw
   To change for reboot: vi /etc/system and add: set zfs:zfs_vdev_max_pending = 32
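
The two reboot-persistent settings belong together in /etc/system (0x20 in the live mdb writes is hex for the same value, 32):

```shell
# /etc/system additions (take effect at next reboot)
set ssd:ssd_max_throttle = 32
set zfs:zfs_vdev_max_pending = 32
```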