SUMMARY: multipath support

From: Bugs (bb1@humboldt.edu)
Date: Wed Apr 06 2005 - 14:34:32 EDT


My question:

On Wed, 6 Apr 2005, Bugs wrote:

->
->Hello,
->My systems are:
->es40,ds25
->Tru64 PK4
->
->We have set up 2 hsg80's in dual-redundancy failover
->mode, but we still only have one SAN switch.
->I understand that I have to set the host up with multipath
->support. Does anyone know how to do that?
->
->I checked the docs and could not find anything pertaining
->to it.
->
->
->Thanks, Bugs
->
->Bugs Brouillard Unix system administrator
->Humboldt State Univ. Information Technology Services
->Arcata, Calif.
->
->email bb1@humboldt.edu
->

I received lots of replies, and I'm going to include them all
because they contain very good info and I want to share it ALL.

I failed to mention that I am at version 5.1B, and that I only
had one fiber connection to the host.

What I did was a SHO THIS and SHO OTHER using the hsg80 CLI.
This showed me the ports for each controller.

The WWIDMGR quickset was done long ago, but I needed more.

In WWIDMGR, I did -SHO PORT
then
    -SET PORT -ITEM ii -NODE nn
for all the ports from THIS and OTHER.

I was then able to boot.
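For anyone who hits the same wall, the console session looked roughly like
this. The ii and nn values are placeholders, not real values; use the port
items and node WWNs that SHO THIS and SHO OTHER report on your own HSG80s.
Note that after changing wwidmgr settings the console wants an init before
it will boot:

```
P00>>> wwidmgr -show port
P00>>> wwidmgr -set port -item ii -node nn
        (repeat for each port listed by SHO THIS and SHO OTHER)
P00>>> init
P00>>> boot
```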

Thanks for all the help, and here are the replies:

==========================================
From: Dr. Thomas Blinn

I'm not sure I really understand your question. You are
talking about Tru64 UNIX V5.1B with patch kit 4, right?
(There was a PK4 for V5.0, V5.1, and V5.1A, but never for
V5.0A, which only made it to PK3, so "Tru64 PK4" isn't very
definitive after all -- but I assume you are running the
latest.)

You get multipath support in the OS software if there is
more than one path to a given device (physical or logical).

It is done automatically when the OS recognizes the same
device (usually by WWID matching, which is certainly what
would be involved with your HSG80 setup, the logical disk
volumes have to have the same WWID regardless of which of
the available paths is used to get to them). You won't
have multiple paths unless you have more than one fiber
interface from the host to the fabric, and from the way
you describe this, the switch is both the single point of
failure and perhaps a performance bottleneck (depending
on whether it can deliver full data throughput).

There is a way in the console firmware to tell the
console that the device has more than one name; as
I recall, you list more than one device string in
the "set bootdef_dev" command line, probably with
comma delimiters. I haven't personally had to do
this, so I can't tell you the command syntax off
the top of my head, and it's definitely related to
fibre support in the consoles, and since I don't
have a fibre farm of my own to deal with (maybe I
am lucky in that regard), I haven't really had to
learn all the details... but I think that's what
you need to do, since from the console you can see
both paths to the same device (from the alternate
ports on the HSG80). The OS just takes care of it
with the native multipathing, but the console isn't
that smart, I think.
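For what it's worth, the syntax being recalled here does use a
comma-separated list in bootdef_dev. A sketch of what it might look like at
the SRM console -- the dga/dgb device names below are made-up examples, so
substitute whatever your own 'show device' reports for the two paths:

```
P00>>> show device
P00>>> set bootdef_dev dga100.1001.0.1.1,dgb100.1002.0.2.1
P00>>> show bootdef_dev
```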

And there MAY be something else you need to do in
the HSG80 to make it completely transparent, so do
check the HSG80 support channel too.
================================

From: "Krieg, Bernhard"

You really don't have to set up much!
Assuming that everything is cabled correctly and zoning on the switch is
correct, you should already have multiple paths to your HSGs.
'hwmgr -show scsi' shows you the number of paths to the HSGs.

All you have to do is set the console variable
'bootdef_dev' to all the paths shown when you
issue a 'show device' at the console.
=================================

From: Martin Petder
If your hosts have two or more fibre cards each, then it would make sense
to set the HSGs to multipath failover. And the servers will configure
themselves automagically ;)

I'm assuming you're running a 5.x version of Tru64 :)

=================================

From: Howard Arnold

Tru64 has dual path support already built in. You don't have to do anything
to get it to work.

To see the paths to the disk, run: hwmgr -show scsi -full
==================================

From: Tom Webster

You failed to mention what version of Tru64 you were running. We'll
assume that it is 5.1B for now. You also did not mention if the ES40
and the DS25 are clustered or if they are stand-alone hosts.

I'm also unclear from your note whether you plan on getting a second SAN
switch for use as a redundant fabric; I'll assume that to be the case.

Your SAN switch should be set up to use zoning to separate unrelated
systems into their own zones. In the old days we would put all of the
Tru64 boxes and their storage in one zone. Current thinking seems to be
that non-clustered systems should each be in their own zone with the
storage. Clustered systems could still share a zone.

You do want to separate Windows traffic from the Tru64 traffic as a
minimum.

Assuming that you are setting this up with the anticipation of a second
switch to be used as a redundant fabric.... Each of your HSG80
controllers will have one port plugged into the current fabric (one each
for a total of two from each controller pair). Each system would then
have at least one HBA plugged into the fabric (we will assume one HBA
connection to the current fabric).

If you have a separate zone for the ES40, it would have the two ports
from each of the arrays (assuming that it talks to both arrays) and its
HBA in the zone set aside for it.

On the HSG80 side, you will see two connections on each pair for the
ES40 -- sometimes it takes a while for them to show up. It would be a
good idea to rename these from !newconXX to something that makes sense
for you. We use host_hXpY, where host is a short version of the host
name, hX is the HBA number in the system (i.e. h1) and pY is the array
port number (1&2 for the top controller and 3&4 for the bottom). So
host_h1p1 is the host's first HBA's connection to the top controller and
host_h1p3 is the host's first HBA's connection to the bottom
controller. The even numbered ports would be on the second fabric.
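Using that convention, renaming the connections at the HSG80 CLI would look
something like this. The !NEWCON numbers and the es40 host name are
examples; check SHOW CONNECTIONS for the real names on your pair:

```
HSG80> SHOW CONNECTIONS
HSG80> RENAME !NEWCON01 es40_h1p1
HSG80> RENAME !NEWCON02 es40_h1p3
```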

It is still a good idea to use selective presentation to limit access to
virtual disks, even if you are zoning down to individual systems.
Create your volumes and selectively present them to the host(s) that use
them. As always, for non-clustered systems, more than one system should
never have access to a volume.
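As a sketch of the selective presentation step -- unit D101 and the
connection names here are made-up examples -- you disable access for
everyone, then enable only the connections belonging to the host that owns
the volume:

```
HSG80> SET D101 DISABLE_ACCESS_PATH=ALL
HSG80> SET D101 ENABLE_ACCESS_PATH=(es40_h1p1,es40_h1p3)
```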

OK, now here is the answer to your question: how to set up multipathing?
You already have. Multi-path support is native to Tru64 and OpenVMS.
Assuming that you have your SAN configured as discussed above, you are
already multipathing.

Use the hardware manager command: hwmgr -show scsi

        SCSI                 DEVICE  DEVICE   DRIVER  NUM   DEVICE  FIRST
 HWID:  DEVICEID  HOSTNAME   TYPE    SUBTYPE  OWNER   PATH  FILE    VLD PTH
---------------------------------------------------------------------------
  174:  43        host       disk    none     2       4     dsk7    [12/8/6]

In this example, dsk7 has four paths to it. The first path is 12/8/6
(Bus 12, target 8, LUN 6). This drive can be reached across two
controllers connected to two fabrics. If you do a show full on the
device:

# hwmgr -show scsi -id 174 -full

        SCSI                 DEVICE  DEVICE   DRIVER  NUM   DEVICE  FIRST
 HWID:  DEVICEID  HOSTNAME   TYPE    SUBTYPE  OWNER   PATH  FILE    VLD PTH
---------------------------------------------------------------------------
  174:  43        host       disk    none     2       4     dsk7    [12/8/6]

      WWID:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

      BUS  TARGET  LUN   PATH STATE
      -----------------------------
      12   8       6     valid
      12   7       6     valid
      13   6       6     valid
      13   5       6     valid

As you can see the disk can be reached across two different HBAs (bus 12
and bus 13), which in this case are connected to two different fabrics.
You would see something similar if you had two HBAs plugged into the
same fabric. If you had two fabrics with two HBAs plugged into each,
you would see eight paths.
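If you have a lot of LUNs, a quick awk pass over saved `hwmgr -show scsi`
output will list the path count per disk. A minimal sketch -- run here
against a canned copy of the dsk7 line above, since hwmgr itself only
exists on Tru64:

```shell
# On the Tru64 box you would save the real output once:
#   hwmgr -show scsi > /tmp/scsi.out
# Canned sample line here so the sketch is self-contained:
cat > /tmp/scsi.out <<'EOF'
  174: 43 host disk none 2 4 dsk7 [12/8/6]
EOF

# Field 4 is the device type, field 7 the path count, field 8 the
# device special file; print the path count for every disk row.
awk '$4 == "disk" { printf "%s has %d path(s)\n", $8, $7 }' /tmp/scsi.out
```

Anything reporting fewer than two paths is a single point of failure worth
chasing down.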
======================================

From: Alan Davis

AFAIK, multipath is the default and only behavior in Tru64. The only
setup that I can think of is in WWIDMGR from the SRM console:
configure the paths to the boot LUNs and then set those paths into
bootdef_dev.

The WWIDMGR docs are on the FW CD.

The WWIDMGR quickset command only sets up paths to one boot device;
you have to build the paths to the second device manually.

=====================================

Bugs Brouillard Unix system administrator
Humboldt State Univ. Information Technology Services
Arcata, Calif.

email bb1@humboldt.edu



This archive was generated by hypermail 2.1.7 : Sat Apr 12 2008 - 10:50:17 EDT