StorNext Metadata Appliances

Use the links below to access M-Series Qwikipedia content. If you have additional M-Series content to share, please feel free to create a new page under this topic and add your content there, so that others can review, contribute to, and benefit from your information.

 

Includes recorded presentations about StorNext Metadata Appliances

 

This page provides links to documented procedures routinely performed by the Professional Services team. There is an ongoing effort between StorNext Engineering and Information Solutions to document more of these procedures.

 

Overview of metadata array status and troubleshooting methodologies using the M-Series SMcli utility and the SANtricity GUI

Notes

 

Updating IP routes on the MDC

Friday, July 17, 2015

3:38 PM

Problem statement:

APPLE MVP, SR3568266

After the 5.1.0 to 5.2.1 upgrade, it was discovered that the routing table was not carried over.

 

Marc,

The routing files are indeed removed and then replaced by the netcfg.sh script, so we will be updating the route files for eth8 and eth9 below.

 

Here are the contents of the syslog from July 18 showing the commands that were run…

 

Jul 18 12:08:31: INFO: *** /opt/DXi/scripts/netcfg.sh script started with arguments: del --devname bond2 ***

Jul 18 12:08:32: INFO: *** /opt/DXi/scripts/netcfg.sh script completed successfully ***

Jul 18 12:08:32: INFO: *** /opt/DXi/scripts/netcfg.sh script started with arguments: del --devname bond3 ***

Jul 18 12:08:33: INFO: *** /opt/DXi/scripts/netcfg.sh script completed successfully ***

Jul 18 12:08:33: INFO: *** /opt/DXi/scripts/netcfg.sh script started with arguments: del --devname eth1 ***

Jul 18 12:08:35: INFO: *** /opt/DXi/scripts/netcfg.sh script completed successfully ***

Jul 18 12:08:35: INFO: *** /opt/DXi/scripts/netcfg.sh script started with arguments: del --devname eth2 ***

Jul 18 12:08:36: INFO: *** /opt/DXi/scripts/netcfg.sh script completed successfully ***

Jul 18 12:08:36: INFO: *** /opt/DXi/scripts/netcfg.sh script started with arguments: del --devname eth3 ***

Jul 18 12:08:37: INFO: *** /opt/DXi/scripts/netcfg.sh script completed successfully ***

Jul 18 12:08:37: INFO: *** /opt/DXi/scripts/netcfg.sh script started with arguments: del --devname eth4 ***

Jul 18 12:08:38: INFO: *** /opt/DXi/scripts/netcfg.sh script completed successfully ***

Jul 18 12:08:38: INFO: *** /opt/DXi/scripts/netcfg.sh script started with arguments: del --devname eth5 ***

Jul 18 12:08:39: INFO: *** /opt/DXi/scripts/netcfg.sh script completed successfully ***

Jul 18 12:08:39: INFO: *** /opt/DXi/scripts/netcfg.sh script started with arguments: del --devname eth6 ***

Jul 18 12:08:41: INFO: *** /opt/DXi/scripts/netcfg.sh script completed successfully ***

Jul 18 12:08:41: INFO: *** /opt/DXi/scripts/netcfg.sh script started with arguments: del --devname eth7 ***

Jul 18 12:08:42: INFO: *** /opt/DXi/scripts/netcfg.sh script completed successfully ***

Jul 18 12:08:42: INFO: *** /opt/DXi/scripts/netcfg.sh script started with arguments: del --devname eth8 ***

Jul 18 12:08:43: INFO: *** /opt/DXi/scripts/netcfg.sh script completed successfully ***

Jul 18 12:08:43: INFO: *** /opt/DXi/scripts/netcfg.sh script started with arguments: del --devname eth9 ***

Jul 18 12:08:44: INFO: *** /opt/DXi/scripts/netcfg.sh script completed successfully ***

 

Jul 18 12:08:45: INFO: *** /opt/DXi/scripts/netcfg.sh script started with arguments: add --devname eth1 --ipaddr 17.218.188.24 --netmask 255.255.252.0 --gateway 17.218.188.1 --mtu STD ***

Jul 18 12:08:46: INFO: *** /opt/DXi/scripts/netcfg.sh script completed successfully ***

Jul 18 12:08:46: INFO: *** /opt/DXi/scripts/netcfg.sh script started with arguments: add --devname eth2 --ipaddr 10.55.188.24 --netmask 255.255.0.0 --gateway 10.55.188.1 --mtu STD ***

Jul 18 12:08:48: INFO: *** /opt/DXi/scripts/netcfg.sh script completed successfully ***

Jul 18 12:08:48: INFO: *** /opt/DXi/scripts/netcfg.sh script started with arguments: add --devname eth8 --ipaddr 17.218.184.174 --netmask 255.255.252.0 --gateway 17.218.184.1 --mtu JUMBO ***

Jul 18 12:08:49: INFO: *** /opt/DXi/scripts/netcfg.sh script completed successfully ***

Jul 18 12:08:49: INFO: *** /opt/DXi/scripts/netcfg.sh script started with arguments: add --devname eth9 --ipaddr 17.218.188.174 --netmask 255.255.252.0 --gateway 17.218.188.1 --mtu JUMBO ***

Jul 18 12:08:50: INFO: *** /opt/DXi/scripts/netcfg.sh script completed successfully ***

 

 

 

 

Update routing tables for eth8 and eth9

 

There are three files we are concerned with:

/etc/sysconfig/iptables

/etc/sysconfig/network-scripts/route-ethX

/etc/sysconfig/network-scripts/rule-ethX

iptables is the default Linux firewall, so if the routes change, the iptables rules will most likely need to be updated as well to reflect the changes.

route-ethX lists the static routes for that interface.

rule-ethX contains the policy-routing rules that determine which interface carries private metadata traffic and which carries public access traffic (a hypothetical example is sketched below).
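For reference, on RHEL-based systems a rule-ethX file holds arguments that are passed to "ip rule add", one rule per line. The subnet and table number below are purely hypothetical and only illustrate the format; the real values must come from a saved copy of the customer's files or their network plan.

#a hypothetical rule-eth9 might look like this...
from 17.218.188.0/22 table 200 priority 200

If /etc/sysconfig/iptables also references the old addresses or interfaces, review and update it by hand; the route-file changes described below do not touch it.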

 

 

 

 

 

 

 

First save off the current /etc/sysconfig/network-scripts/route-eth8 and /etc/sysconfig/network-scripts/route-eth9 files to /tmp as a backup using mv.
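Something along these lines works; the .bak names are just an arbitrary choice for the backups:

    mv /etc/sysconfig/network-scripts/route-eth8 /tmp/route-eth8.bak
    mv /etc/sysconfig/network-scripts/route-eth9 /tmp/route-eth9.bak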

 

Here are the routes from an old snapshot, when things were OK…

 

                #file route-eth9 looks like this...

               

                17.218.188.171 via 17.218.188.174 dev eth9

                17.218.188.179 via 17.218.188.174 dev eth9

                17.111.120.37 via 17.218.188.1 dev eth9

                17.111.120.165 via 17.218.188.1 dev eth9

                17.218.188.172 via 17.218.188.174 dev eth9

                17.111.120.38 via 17.218.188.1 dev eth9

                17.111.120.166 via 17.218.188.1 dev eth9

                17.218.188.173 via 17.218.188.174 dev eth9

 

               

                #file route-eth8 looks like this...

               

                17.218.184.171 via 17.218.184.174 dev eth8

                17.218.184.179 via 17.218.184.174 dev eth8

                17.111.120.5 via 17.218.184.1 dev eth8

                17.111.120.133 via 17.218.184.1 dev eth8

                17.218.184.172 via 17.218.184.174 dev eth8

                17.111.120.6 via 17.218.184.1 dev eth8

                17.111.120.134 via 17.218.184.1 dev eth8

                17.218.184.173 via 17.218.184.174 dev eth8

               

               

Using the above data and vi (or whatever editor you prefer), recreate the two route files and make sure they end up in the correct directory (/etc/sysconfig/network-scripts/).
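One way to recreate route-eth9 from the snapshot contents above is a here-document (the same approach applies to route-eth8); pasting the lines into vi works equally well:

cat > /etc/sysconfig/network-scripts/route-eth9 << 'EOF'
17.218.188.171 via 17.218.188.174 dev eth9
17.218.188.179 via 17.218.188.174 dev eth9
17.111.120.37 via 17.218.188.1 dev eth9
17.111.120.165 via 17.218.188.1 dev eth9
17.218.188.172 via 17.218.188.174 dev eth9
17.111.120.38 via 17.218.188.1 dev eth9
17.111.120.166 via 17.218.188.1 dev eth9
17.218.188.173 via 17.218.188.174 dev eth9
EOF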

                  

To effect the changes, networking has to be restarted, which means cvfs also needs to be restarted.

If you want to allow a failover, for minimum intrusiveness, stop cvfs on the secondary, restart the network, and start cvfs back up again; then do the same on the primary (a sketch of that sequence follows).
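A minimal sketch of that failover sequence, using the same service commands as the non-failover procedure further down (taking the primary down last causes it to fail over to the secondary while its network restarts):

Secondary:
    service cvfs stop
    service network restart
    service cvfs start

Primary:
    service cvfs stop
    service network restart
    service cvfs start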

 

If you want to keep the primary as the primary, without a failover, then do it as below, but you will lose services for a minute or so…

Shut down StorNext before restarting the network. Putting HA into config mode first is the easiest way (a sketch of that option follows).
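If you take the config-mode route, the usual pattern is to lock the HA cluster before touching networking and to return it to default mode afterwards. The snhamgr subcommands below are the commonly documented ones, but treat this as a sketch and verify the syntax against the HA guide for the installed StorNext release:

    # on the primary MDC: check HA state, then lock the cluster into config mode
    snhamgr status
    snhamgr config

    # with the cluster locked, stop cvfs, restart networking, and start cvfs as described in this procedure

    # when the work is done, return the cluster to default mode
    snhamgr start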

 

OR

 

we can just stop StorNext on the secondary node and then on the primary node, restart networking, and then start StorNext on the primary and then on the secondary. Something like:

 

Secondary:

    service cvfs stop

    service network restart

Primary:

    service cvfs stop

    service network restart

               

    service cvfs start

Secondary:

    service cvfs start
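Once networking is back up on a node, it is worth confirming that the recreated routes (and any policy rules) actually took effect before moving on; standard iproute2 queries are enough for that:

    # verify the static routes and policy rules were applied
    ip route show dev eth8
    ip route show dev eth9
    ip rule show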

 

               

Don't forget to delete the two old/wrong files you put in /tmp when you are done.
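For example, if the backups were moved aside under the hypothetical names used in the backup step above:

    rm /tmp/route-eth8.bak /tmp/route-eth9.bak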

 


 

Note by Chuck Forry on 07/29/2015 01:13 PM

From: Oliver Lemke
Sent: Friday, July 24, 2015 10:06 AM
To: DL-AMER-SPS
Subject: OUTCOME: Anyone ever seen Replication Policies disappearing after upgrade?

 

I just wanted to feed back to y’all on this…

 

Basically it did seem that the policies got wiped out during an upgrade attempt. The FSM cored during the first upgrade attempt, and a later rerun of the upgrade succeeded. Around the time of that core, all of the policies for one FS disappeared. The one policy that was left was in an FS that had only one replication policy. The logs showed no reference to the replication policies getting deleted.

 

Fix:

Just recreate the replication policies and point them at the folders that need to be replicated. The source and target replication keys are derived from the folder’s inode value, so when you recreate a policy its key should be the same as before. The source and target keys for a given policy will not match each other.

 

To check if replication policies are configured:

- check the GUI and snpolicy_gather.out

 

To check what replication policies are running:

# grep "realized" /usr/cvfs/debug/snpolicy.out         - This will indicate data was transferred. The path to the replicated folder is shown

 

# grep "up-to-date" /usr/cvfs/debug/snpolicy.out     - This will indicate the policy is active, but no data was transferred. The path to the replicated folder is shown

 

- Combining the lists from the two outputs above gives the total set of active policies (a single combined grep is sketched below)

- Cross-reference the date/time stamps against the expected policy schedule
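A minimal way to pull both indicators in one pass is simply to merge the two greps already shown above:

    # list both "realized" (data transferred) and "up-to-date" (active, nothing to transfer) entries
    grep -E "realized|up-to-date" /usr/cvfs/debug/snpolicy.out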

 

How to find the original replication policy setup (if gone):

- Check back through any of the old cases on //srscratch, until you find one where the snpolicy_gather.out is populated.

- Check with the customer whether any older snapshots exist on the MDC that could be re-uploaded. If in doubt over which, upload them all, oldest first.

 

Bug:

Not created yet, but Steve Cole will be creating one shortly against SR3559030. Just cross-reference this SR; I will put the bug number in there once it exists.

 

Kind Regards

Oliver


Quantum

Oliver Lemke Software Product Support (SPS)

Working Hours: 8am to 5pm MT
719.536.5642 | Oliver.Lemke@Quantum.com | Quantum.com

Note by Chuck Forry on 07/24/2015 12:25 PM

