[HPADM] SUMMARY: LTO-2 and Omniback/data protector drive setup suggestions

From: Dan Zucker (daniz@netvision.net.il)
Date: Sun Nov 30 2003 - 00:37:13 EST


My thanks to:
Mike Lavery
Mike White
Tom Myers

The query:

 I have just finished setting up a new library with LTO2 drives for use
 with omniback.
 
 I want to ask if anyone has suggestions for buffers, segsize and blksize
 when defining the drives to omniback?
 
 At least for the next 3 months the library, although SAN-attached, will
 work only with the main media server, so I also need suggestions for
 kernel parms - if any - to adjust.

reply 1:
I would look to do the following:

1. make sure you have latest SCSI patches, particularly SCSI tape
2. latest Omniback patches. I hope you are on 4.x?
3. Increase disk agent buffers to 32
4. Keep block size the default.
5. For the LTO2 devices in omniback, increase the concurrency to around 12 to get the best performance. Just keep an eye on the server's resources.
6. Make sure shared memory parms are increased from the default settings
7. Finally, if you are on a SAN, make sure EMS SCSI_tape monitoring is disabled and st_ats_enabled kernel parm is set to 0.
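The kernel-side tweaks in points 6 and 7 might look something like the sketch below. This is a hedged example, not from the original post: it assumes HP-UX 11.x with kmtune (11i v2 and later use kctune instead), and the shared memory values are illustrative placeholders -- size them for your own server. EMS SCSI tape monitoring itself is disabled through the EMS monitoring configuration tool (monconfig), not from the kernel.

```shell
# Hedged sketch (assumed HP-UX 11.x, kmtune syntax; values are examples only)

# Point 7: stop the tape driver arbitrating access across SAN paths
kmtune -s st_ats_enabled=0

# Point 6: raise shared memory limits above the defaults (example values)
kmtune -s shmmax=0x40000000     # max shared memory segment size
kmtune -s shmseg=120            # shared memory segments per process

# Static tunables take effect after a kernel rebuild and reboot:
mk_kernel
shutdown -ry 0
```

On 11i v2+ the equivalent would be kctune, which can set many dynamic tunables without a rebuild; check the man pages for your release before relying on either form.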

Reply 2:
We have been using the default values. No changes. Let us know if you
find out there is a better way. What type of library do you have?
 

Reply 3:
Since they are LTO (gen1 or gen2) the buffer sizes aren't as critical as if
you were using DLT-8000 drives.

Unless you have any Solaris clients to backup, I would set everything
towards the top end. On my LTO (gen1 or gen2) "devices", I set block size
to 256K, leave the segment size at the default of 2000 and raise disk agent
buffers to 20. If all the clients and the media server have plenty of RAM,
you could push the DA buffers all the way up to 32 and kick block size up to
512K or 1024K.
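If you want to apply these block-size and buffer settings to many drives without clicking through the GUI, the Omniback/Data Protector CLI can round-trip a device definition as a text file. A hedged sketch, assuming the DP 4.x/5.x omnidownload/omniupload commands under /opt/omni/bin; the device name "LTO2_Drive1" is a placeholder:

```shell
# Hedged sketch: edit a device definition from the DP/Omniback CLI
cd /opt/omni/bin

# Export the current device definition to a text file:
./omnidownload -device LTO2_Drive1 -file /tmp/LTO2_Drive1.dev

# Edit the block size / buffer settings in the exported file
# (e.g. a 256KB block size), then load the definition back:
./omniupload -modify_device LTO2_Drive1 -file /tmp/LTO2_Drive1.dev
```

Verify the exact option spellings against the CLI reference for your DP version before scripting this across a library.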

I've seen backup speeds up to 59GB/hr using my settings, although 50GB/hr
seems to be typical for high-end clients like RPxxxx servers.
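As a sanity check on those numbers, a quick shell conversion shows 59GB/hr is only about 16MB/s sustained, which is well below what an LTO-2 drive can stream natively -- so the bottleneck at these rates is the clients or the LAN, not the tape:

```shell
# Convert a backup rate in GB/hr to MB/s (integer math, 1 GB = 1024 MB)
gb_per_hr=59
mb_per_s=$(( gb_per_hr * 1024 / 3600 ))
echo "${gb_per_hr} GB/hr is about ${mb_per_s} MB/s"
# prints: 59 GB/hr is about 16 MB/s
```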

Note: For Solaris clients, at least with OB/DP 4.10, I haven't been able to
make it work with block size larger than 128K or DA buffers higher than 6-8.
If you exceed some threshold, the Solaris clients will randomly fail
reporting RPC errors, mostly on Full sessions.

What I have done so far.........

The setup as of today:
1 HBA (2Gb Fibre Channel) for the 7 drives. 2 HBAs are used by DLT7000
    drives on this machine, and 1 HBA on a different machine is also used
    by the DLT7000s.
(I have 9 DLT7000 drives on the media/cell server.)

OB4.1 is limited to 5 DA per MA. DP5.1 permits 32 DA per MA.

I set the drives to 32 buffers, a block size of 64KB, and a segment size of 2000.

I have found that some machines backed up via LAN show only a slight
change in total backup time. Some file systems are running at 200% of
the speed of previous backups, while others show only a 1-2% improvement.

Until now I have not used 'compress' on local file systems, but that will be
my next test.

If I find a magic bullet, I will send a second summary. I am attempting
to set up a test machine with DP5.1 at the DRP site, but that means spending
at least a day in the desert.

Hopefully your mileage will vary.

DZ

--
             ---> Please post QUESTIONS and SUMMARIES only!! <---
        To subscribe/unsubscribe to this list, contact majordomo@dutchworks.nl
       Name: hpux-admin@dutchworks.nl     Owner: owner-hpux-admin@dutchworks.nl
 
 Archives:  ftp.dutchworks.nl:/pub/digests/hpux-admin       (FTP, browse only)
            http://www.dutchworks.nl/htbin/hpsysadmin   (Web, browse & search)


This archive was generated by hypermail 2.1.7 : Sat Apr 12 2008 - 11:02:37 EDT