[HPADM] SUMMARY: Missing EMC disks after Ignite

From: Ayson, Alison {Info~Palo Alto} (ALISON.AYSON@ROCHE.COM)
Date: Thu Dec 19 2002 - 19:36:23 EST


Thanks to everyone for replying (my original post is at the bottom).

Well...I feel really stupid because it turns out there were no missing disks. However, I got so many good replies that I wanted to summarize anyway.

It turns out the "inq" output taken before the Ignite was incorrect. I'm not sure how or why, but it showed about twice as many disks as we actually had: four paths for each device, even though this server has only two fiber cards, so there should have been just two paths per disk. The "inq" taken after the Ignite was actually the correct one. It sounds really stupid in hindsight, but we had assumed the data from the original configuration was correct (everything had been working fine), and it never occurred to us that the original data might be flawed. It wasn't until about 1:30 in the morning, while I was trying to fall asleep, that I had a sudden brain surge and thought, "wait a minute...how could the original inq command show so many disks? We never had that many disks on that server."

So...ahem...we never had any missing disks.

BUT! I did get some great replies and I wanted to pass them on, as they contain some good troubleshooting information:

1) PowerPath troubleshooting:

-----
When the devices changed c#'s, that information may still be in PowerPath and
be confusing the issue.
Run "powermt display"; if it shows that you're in a degraded state,
run "powermt display dev=all" and look for paths marked closed or UNKNOWN. These will have to be removed.
Then run "powermt save".
Run "ioscan -fnC disk | pg" and see if your devices are back.
-----
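
That sequence boils down to something like the following. This is my own consolidation, not the poster's (the device name is hypothetical, and the "powermt remove" step is my reading of "these will have to be removed"; substitute whatever "powermt display dev=all" actually flags on your system):

     /sbin/powermt display                 # look for a degraded state
     /sbin/powermt display dev=all | pg    # note paths marked closed or UNKNOWN
     /sbin/powermt remove dev=c5t0d0       # hypothetical stale path; remove each one flagged
     /sbin/powermt save                    # persist the cleaned-up configuration
     ioscan -fnC disk | pg                 # see if your devices are back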

What does ioscan show?
1. Check the link:

     # See whether PowerPath still knows about the paths:
     /sbin/powermt display

     # If it does, rebuild the PowerPath configuration:
     /sbin/powermt remove dev=all
     /sbin/powermt config

     # If it doesn't, check the fiber connections...the problem is probably there.

-----

2) Basic EMC/HP troubleshooting:

To start with, make sure the zoning in the switches is correct. A good way to check this is to look for the VCM database, which is used by VolumeLogix (ESN Manager) to determine whether it can show a given LUN to a given HBA (Host Bus Adapter, fibre channel card to the rest of us...). The VCM LUN is about 7600 blocks, and is the only LUN that EMC VolumeLogix can't mask because it has to be there to manage VolumeLogix. It should also be EMC ID 000, but there are cases where it'll be something else.
 
To find it, redirect an inq to /tmp/hostname.inq.txt and page through the file looking for devices around 7600 in size. If you have more than one of these, then you may have two paths. Use lssf to confirm more than one hardware path is in use. Since VolumeLogix won't block this, it has to be blocked at the switch. Check the zoning.
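
Scripting that hunt might look like this (a sketch only; inq output formats vary by version, so eyeball the matches rather than trusting the grep, and the device file below is a made-up example):

     inq > /tmp/`hostname`.inq.txt       # capture the inq output
     grep 7600 /tmp/`hostname`.inq.txt   # flag entries around 7600 in size
     lssf /dev/rdsk/c5t0d0               # hypothetical match; lssf each one to
                                         # confirm how many hardware paths are in use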
 
If it's not the zoning, it may be the device files. I've seen situations where HP-UX becomes "confused" and doesn't realize that an existing device file should be re-used. In these cases, I've had to run lssf against each /dev/dsk, /dev/rdsk, and /dev/rscsi device, and if the hardware path shows as ???, rmsf it. After that, an ioscan and insf usually find more drives.
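
Something along these lines can do that sweep (a sketch; it assumes, per the description above, that lssf shows "???" for a dead hardware path, and you should review the list before removing anything):

     # flag device files whose hardware path no longer resolves
     for f in /dev/dsk/* /dev/rdsk/* /dev/rscsi/*
     do
         lssf $f 2>/dev/null | grep '???' > /dev/null && echo $f
     done
     # then rmsf each flagged file and rescan:
     #   ioscan -fnC disk ; insf -e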
 
If it's not that, then it may be VolumeLogix (sometimes packaged as ESN Manager) blocking the LUNs. Not knowing how your SAN is architected I can't tell you much about how to proceed to fix this. If you want to provide more data, I'd be happy to reply.

3) Procedure to recreate original device numbers

I'm not sure why you aren't seeing the alternate paths. Maybe the FC or
SCSI path is not working. Verify your hardware path is working.

If the h/w is OK, you could change your ioconfig back to the original so that
the c9's return.

A good way to create the original file is to execute:
$ ioscan -kf | grep -e INTERFACE -e btlan | awk '{print $3, $1, $2}' > /tmp/infile
Edit the file so the instance numbers match the hardware paths as they were prior to the Ignite, then save it as /stand/infile.
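
For reference, each infile line is "hardware_path class instance". A made-up example that pins a controller back to instance 9 (so the /dev/dsk/c9 device files return) might look like this; the hardware path shown is hypothetical, and on HP-UX the c# in cXtYdZ corresponds to the ext_bus class instance:

     0/4/0/0.8.0.255.0   ext_bus   9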

Save the original configurations:
$ cp /stand/ioconfig /stand/ioconfig.sav
$ cp /etc/ioconfig /etc/ioconfig.sav

Execute ioinit:
$ /sbin/ioinit -f /stand/infile -r

The -r will reboot the machine immediately.
Sometimes the server will NOT boot all the way up;
it will stop at the (ioinitrc) prompt. If that happens:

1. Execute ioinit -c
(ioinitrc) /sbin/ioinit -c

2. After a few seconds, read in the new infile.
(ioinitrc) /sbin/ioinit -f /stand/infile -r

Additional NOTE: There's some concern that Ignite 3.6 may not support PowerPath. We were using Ignite 3.5, so we were OK there, but it's good to know that Ignite 3.6 may have problems with PowerPath (and thus a reason to stay at 3.5).

Thanks again for the replies. Sorry for the false alarm. I don't feel too bad as there were 3 of us working on this and none of us saw it.

-- Alison Ayson
   Roche Bioscience
   Palo Alto, CA

Original Post:

I'm having some problems seeing some of my EMC disks after using a "Golden Image" Ignite archive.

I created an Ignite archive of one N4000 class server (running HP-UX 11.0) and "ignited" that image onto another N4000 class server. The N4000 class server that I "ignited" has now lost several of its EMC device files.

Comparing an "inq" done before vs. after the Ignite shows that exactly half the disk devices are missing. I believe the missing devices are all the alternate paths.

I've tried:

1) insf -e

2) booting to ODE and running mapper

3) rebooting

Nothing has worked. I think this may be an EMC software issue, but I'm not sure. The device files also changed after the Ignite, but I don't think this is the problem (i.e. all the /dev/dsk/c9's became /dev/dsk/c5's).

We are using PVlinks and EMC's PowerPath software. We are connected to an EMC8380 (SAN) attached via fiber switches.

I realize this is probably more of an EMC question, but I also know that many of you out there are using EMC for storage. Has anyone run into this particular problem?

Thanks for any help/advice....

 


