SUMMARY: Kernel Rebuild after NIC installation

From: Siebert, Aaron (aaron.siebert@nagrastar.com)
Date: Wed Oct 29 2003 - 11:48:33 EST


Thanks to Dr. Tom Blinn, Alan Rollow, Johan Brusche, and Paul Hendersen
for their answers.

I am including their responses.
#######################################
Tom Blinn
It depends. The answer to your last question about included drivers is
that, in general, only the drivers needed to support the known hardware
are included in the system-specific kernel. But you can override this
manually (the details depend on the version; it's a different thing for
V4.0x vs. V5.x, and even in V4.0x it depends on the specific driver,
since the "new" driver configuration model is supported in part in V4.0F
and V4.0G, but not all the drivers were updated to use it...)

As a general rule, if you make any significant configuration changes in
the hardware, it's a good idea to reboot with the /genvmunix as your
kernel, generate a new configuration file (either by running the
doconfig script or by using sizer -n directly if you know what you are
doing), then if there is new hardware found, re-build your
system-specific kernel. In V4.0x there was still a fair amount of stuff
you might have tuned by manually editing the kernel config file, in V5.x
almost everything is done through /etc/sysconfigtab, but if you have
manual edits in the kernel config file, then you have to be careful to
preserve them when you construct a new file using doconfig.
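The procedure Tom describes can be sketched as a command sequence. This is a non-authoritative sketch: exact paths and flags vary by Tru64 version, and HOSTNAME is a placeholder for your system's configuration name under /sys/conf. Check the doconfig(8) and sizer(8) man pages on your own system.

```shell
# At the SRM console, boot the generic kernel so all hardware is probed
>>> boot -file genvmunix

# Generate a fresh configuration file reflecting the hardware now present
# (sizer -n writes the file; HOSTNAME is your config name, a placeholder here)
sizer -n HOSTNAME

# Merge any manual edits you maintain into /sys/conf/HOSTNAME by hand,
# then build a kernel from that existing configuration file
doconfig -c HOSTNAME

# Install the new kernel and reboot on it
cp /sys/HOSTNAME/vmunix /vmunix
shutdown -r now
```

Using doconfig -c against a configuration file you have already reviewed is what lets you preserve manual edits, rather than letting doconfig construct a new file from scratch.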

You also need to run doconfig (unless you really know what you are
doing) if you want to change configurable kernel options (the ones you
get offered in a menu when you run doconfig with no options on the
command line).

Clear as mud? This used to be documented in the system management book,
but maybe it's become less clear over time. And it isn't made simpler
by the fact that in V5.x, for most options, if the kernel doesn't have
the driver built in, but can identify the driver .mod from the PCI
configuration data (or other bus config data) held in the various
databases (notably in /etc/sysconfigtab, which got used as the bus
configuration database for reasons you don't really want to know), then
the kernel will ATTEMPT to dynamically load the driver "on the fly",
and sometimes it will succeed (and sometimes it will panic, or just fail
gracefully).
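On V5.x you can inspect which driver subsystems the kernel has loaded, or can load dynamically, with sysconfig. A hedged sketch (verify the options against sysconfig(8) on your system; "tu" here stands in for a NIC driver subsystem name and may differ on your hardware):

```shell
# List kernel subsystems and whether each is currently loaded
sysconfig -s

# Show whether a given subsystem is static or dynamically loadable
# ("tu" is an assumed example name for a Tulip-family NIC driver)
sysconfig -m tu

# Query the current attribute values of that subsystem
sysconfig -q tu
```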
######################################
Alan Rollow
        It depends on the version you're running. I'm led to
        believe that once V5 knows about the driver for a class
        of device (SCSI adapters using the Qlogic chip set, for
        example: KZPBA, KZPDA, etc.), it will autoconfigure
        any adapter it sees that uses that driver when the system
        boots. If this is true and the new network interface
        is in the same family as one already in the system, then
        installing it and booting should be sufficient to see
        it.

        If the adapter isn't like one already present, then
        you'll probably need to boot the generic kernel and
        let it write a new configuration file and new kernel.

        On V4, the kernel needs explicit knowledge of every
        backplane adapter. So, you have to add the line for
        the adapter by hand or boot /genvmunix and let doconfig
        write a new configuration file. The SCSI driver is
        smart enough going all the way back to V1.2 to recognize
        new devices on a known bus, so device changes don't need
        a kernel rebuild.
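The "line for the adapter" Alan mentions lives in the V4.x kernel configuration file under /sys/conf. A hypothetical example entry for a Tulip-family NIC on a PCI bus follows; the syntax is from memory of the BSD-style config format, so compare against a file sizer actually generated on your system before editing:

```
bus             pci0    at nexus?
controller      tu1     at pci0 slot 9
```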

######################################
Johan Brusche
On V4.0x series: after installation of new HW, boot from genvmunix,
run sizer -n, do a kernel build, and reboot. Might need to install
a NewHwDelivery or patch kit to support the new HW.

On V5.x series: just reboot; new HW is autoconfigured if it is
supported by the current OS version. If not supported, might need
a new NewHwDelivery or patch kit.
######################################
Paul Hendersen
If the new NIC is not the same 'type' of NIC (for example, you are
putting in a 1 Gb Ethernet adapter and all the system has now are
10/100 Mb adapters), then you need to rebuild the kernel. If you are
simply putting in another of the same 'type' of adapter, no kernel
rebuild is needed, just a reboot once the hardware is installed.

All the kernel rebuild does is configure into the kernel the drivers
needed for the devices on the system. If there is already a driver in
the kernel for the 'type' of adapter you are adding, then no rebuild is
required.

If the new adapter is a new type, then shutdown the system, install the
new adapter, boot the system using the genvmunix kernel (e.g. >>> boot
-file genvmunix), then perform a 'doconfig' and copy the new kernel to
/vmunix. Then reboot using the new kernel.
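Paul's steps, end to end, as a hedged transcript. The boot device name and the configuration name are placeholders; the interactive doconfig run (no options) is where you accept the newly probed NIC and any kernel option changes:

```shell
# At the SRM console, after powering down and installing the card:
>>> boot -file genvmunix dka0    # dka0 is a placeholder boot disk name

# Up on the generic kernel, rebuild interactively and install:
doconfig                         # walk the menus; let it pick up the new NIC
cp /sys/HOSTNAME/vmunix /vmunix  # HOSTNAME: the config name doconfig reports
shutdown -r now                  # reboot on the new kernel
```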

-----Original Message-----
From: Siebert, Aaron
Sent: Wednesday, October 29, 2003 9:15 AM
To: tru64-unix-managers@ornl.gov
Subject: Kernel Rebuild after NIC installation

Managers,

I am curious about when a kernel rebuild is necessary. I have read a lot
of messages in the archives concerning this topic, but none that describe
when a kernel rebuild is necessary. I am installing a NIC in a DS10 that
is not the same type as the NICs currently installed.

Does anyone have an understanding of when a rebuild is necessary?
When rebuilding a kernel, does the new kernel include all known drivers,
or only the ones that are needed at rebuild time?

TIA

Aaron Siebert
Nagrastar Customer Support Engineer
303-706-5492 fax 303-706-5719
Aaron.Siebert@NagraStar.com



This archive was generated by hypermail 2.1.7 : Sat Apr 12 2008 - 10:49:41 EDT