Problem: After being Ignited, superman lost most SD-UX functionality.
Note: superman (not its real name) is a vPar running on a Superdome complex. Only swlist works; swreg -l depot, swinstall -i, and swverify all fail with the same error.
ERROR: “superman:/”: You do not have the required permissions to
select this target. Check permissions using the “swacl”
command or see your system administrator for assistance. Or,
to manage applications designed and packaged for nonprivileged
mode, see the “run_as_superuser” option in the “sd” man page.
* Target connection failed for “zrtph0v0:/”.
ERROR: More information may be found in the daemon logfile on this
target (default location is
superman:/var/adm/sw/swagentd.log).
* Selection had errors.
Standard techniques say to check:
/sbin/init.d/swagentd stop
/sbin/init.d/swagentd start
Check that /etc/hosts and networking are consistent.
Make sure /etc/nsswitch.conf is present and makes sense.
Check permissions on /var/tmp and all the swagent files.
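In practical terms, the last three checks translate to commands along these lines (a sketch; exact paths and permissions vary by release):
nslookup `hostname`
# The hostname should resolve, and consistently with /etc/hosts.
grep hosts /etc/nsswitch.conf
# There should be a sane hosts: line, e.g. files first, then dns.
ll -d /var/tmp
# Typically drwxrwxrwt (world-writable with the sticky bit).
ll -d /var/adm/sw /var/adm/sw/swagent.log /var/adm/sw/swagentd.log
# The swagent files should exist and be owned by root.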
None of this worked.
Running swlist -i -s $PWD in a depot generated an error as well. The error text below is taken from ITRC, because the system has already been fixed.
swacl -l host @ superman
Listing the ACLs generates this:
Util_Random internal error: Read of /dev/urandom failed, rv=-1, size=8, No such device (19).
There were a series of other errors, all pointing to /dev/urandom.
lsdev showed that /dev/urandom had not loaded the kernel module rng (Random Number Generator).
Detail root /usr/sam/tui/kc/modulemod.sh rng
Detail root /usr/sbin/kcmodule -a -P ALL
This is normal output. Before the fix, the system did not show the module as loaded.
lsdev | grep rng
138 -1 rng pseudo
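Since kcmodule is already on the box (see the line above), you can also query the module state directly. A quick check looks roughly like this (column layout from memory, so treat it as illustrative):
kcmodule rng
# Module       State       Cause
# rng          loaded      explicit
# On the broken system the state presumably showed as unused rather than loaded.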
The fix was to unload the rng module from the kernel (using SAM SEP cheats), then load it again. In spite of the module being listed as dynamic, a reboot was required to restore SD-UX functionality.
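For reference, the command-line equivalent of the SAM change would be roughly the following (a sketch using kcmodule, not the exact commands we ran):
kcmodule rng=unused
# Unload the module.
kcmodule rng=loaded
# Request a dynamic load. In theory this takes effect immediately; in our case a reboot was still needed.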
Actual source of the problem: the Ignite image of supergirl did not exclude the /dev “files”. This caused the wrong kernel module to be associated with the /dev/urandom “file” as its driver. Normally this is not a problem because /dev is recreated, but for some reason /dev/urandom was not loading the kernel module rng.
The Ignite excludes have been updated to exclude these files, and the system will be re-ignited to make sure nothing else bad happens.
Tags: high capacity volume group, HP-UX, hpux, Ignite, Ignite-UX, patches, SD-UX, SDUX
Quick and Dirty Example here.
In our last example, we created a volume group vg03. It had three disks; we expanded it to four because we planned capacity properly.
Our volume group now consists of 4 disks.
We are asked to create an approximately 10 GB file system in this SAN-based volume group.
vgdisplay /dev/vg03
vgdisplay -v /dev/vg03
< Insert vgdisplay example here>
HP vgdisplay documentation link (Note this tends to change. I can’t help it if HP breaks the links)
This will show an empty volume group, as we have not created any logical volumes yet.
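Until real output gets pasted in, here is roughly what vgdisplay shows on an empty volume group (every number below is invented for illustration):
--- Volume groups ---
VG Name                     /dev/vg03
VG Write Access             read/write
VG Status                   available
Max LV                      255
Cur LV                      0
Open LV                     0
Max PV                      25
Cur PV                      4
Act PV                      4
PE Size (Mbytes)            4
Total PE                    12284
Alloc PE                    0
Free PE                     12284
# The things to look at: Cur LV and Alloc PE are 0, and Free PE equals Total PE.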
pvdisplay /dev/dsk/c10t0d1
… repeat for other disks …
<Insert pvdisplay examples here>
Make sure nothing is on them.
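Again, illustrative output only (numbers invented), but an unused SAN disk should look something like this:
pvdisplay /dev/dsk/c10t0d1
--- Physical volumes ---
PV Name                     /dev/dsk/c10t0d1
VG Name                     /dev/vg03
PV Status                   available
Allocatable                 yes
Cur LV                      0
PE Size (Mbytes)            4
Total PE                    3071
Free PE                     3071
Allocated PE                0
Stale PE                    0
# Cur LV and Allocated PE of 0 tell you nothing is on the disk yet.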
Turns out 10 GB will fit quite nicely on a single disk. Since this is a SAN-based disk, we need not worry here about RAID configuration. If you are hosting an Oracle RDBMS, you should make sure the SAN admin sets up data, index, and rollback as RAID 1 or RAID 10 to ensure good performance.
lvcreate /dev/vg03
# Creates an empty logical volume on vg03. Uses default naming.
You can also do it this way if you like names.
lvcreate /dev/vg03 -n mydata
lvextend -L 10240 /dev/vg03/mydata /dev/dsk/c10t0d1
# This command grows the logical volume to approximately 10 GB (10240 MB) and defines the disk it goes on. Always define the disk. Don’t let LVM or SAM decide where your data is going to go. Plan in advance. Note that LVM for Linux, which is a feature port and not a binary recompile, does let you specify the size as either 10 GB or 10240 MB. Still waiting for that feature in LVM for HP-UX.
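A quick sanity check after the lvextend; the size shown below is what we expect to see, not captured output:
lvdisplay /dev/vg03/mydata
# LV Size (Mbytes)            10240
# Current LE                  <10240 divided by the PE size>
# Allocated PE                <same as Current LE for an unmirrored volume>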
newfs -F vxfs -o largefiles /dev/vg03/rmydata
# Why largefiles? Databases are big, and the default limit on file size in a file system is 2 GB. That is too small. I almost always set up my file systems these days for largefiles, unless the file system itself is less than 2 GB.
# Create a mount point.
mkdir /mydata
# mount it.
mount /dev/vg03/mydata /mydata
# This does not set optimal JFS logging and recovery options, but that is a different article.
bdf
# See if it’s there and the right capacity.
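It will look something like this (numbers invented for illustration):
bdf /mydata
Filesystem            kbytes    used   avail %used Mounted on
/dev/vg03/mydata    10485760   19651 9811985    0% /mydata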
Next article: Edit /etc/fstab and set permanent mount options.
NOTE: This article needs to be checked and have vgdisplay and pvdisplay and other examples inserted into it.
Tags: forums.itrc.hp.com, high capacity volume group, HP-UX, hpux, largefiles, lvcreate, LVM, newfs
Volume group creation, done right, need only be done once to last a long time. A few simple steps can make it a process you do once, then enjoy the long-term benefits.
Step one is a little homework. Make a reasonable estimate of how many physical volumes the volume group is going to contain. Why is this important? Because by default LVM allocates resources as if there will be 255 physical volumes. Most volume groups don’t see that many disks, and the overall capacity is impacted by the default. For this example, we will pick a small volume group that is never anticipated to exceed 10 physical volumes. We will set the maximum physical volumes to 25 to leave a fair amount of extra headroom while more efficiently allocating scarce resources.
Now the fun begins. We will create a volume group called vg03.
Discover the new disks; this is important if new LUNs have been presented to the system.
insf -C disk (may not be needed on HP-UX 11.31)
ioscan -fnC disk
ioscan shows three disks for this example.
/dev/rdsk/c10t0d1 /dev/rdsk/c10t0d2 /dev/rdsk/c10t0d3
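For each disk, the ioscan output looks roughly like this (hardware path and description invented for illustration):
Class     I  H/W Path       Driver  S/W State  H/W Type  Description
=====================================================================
disk     14  0/0/2/0.1.0.1  sdisk   CLAIMED    DEVICE    HP OPEN-V
                            /dev/dsk/c10t0d1  /dev/rdsk/c10t0d1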
cd /dev
mkdir vg03
mknod /dev/vg03/group c 64 0x030000
# We have created a device file for the volume group.
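One habit worth adding here (my suggestion, not part of the original steps): check which minor numbers are already taken before the mknod, so 0x030000 does not collide with an existing group file:
ll /dev/*/group
# crw-r--r--   1 root   sys   64 0x000000 ... /dev/vg00/group
# crw-r--r--   1 root   sys   64 0x010000 ... /dev/vg01/group
# Pick the next free 0xNN0000 for the new volume group.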
We need to pvcreate the disks, which labels them for use by LVM.
pvcreate /dev/rdsk/c10t0d1
pvcreate /dev/rdsk/c10t0d2
pvcreate /dev/rdsk/c10t0d3
vgcreate -p 25 /dev/vg03 /dev/dsk/c10t0d1 /dev/dsk/c10t0d2 /dev/dsk/c10t0d3
# alternative vgcreate -e 65535 -s 16 /dev/vg10 /dev/dsk/c10t0d1 /dev/dsk/c12t0d1 /dev/dsk/c16t0d1 /dev/dsk/c17t0d1
The -s option lets us set a larger PE (physical extent) size, which can also increase capacity.
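A quick back-of-the-envelope on what those numbers buy you (my arithmetic, not from the original post):
# -s 16     -> 16 MB physical extents
# -e 65535  -> up to 65535 extents per physical volume
# 65535 x 16 MB is roughly 1 TB addressable per physical volume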
Now inevitably someone is going to decide to add another disk to this volume group. It may be immediately or it may be down the road. We are prepared.
The SAN admin and project manager want to create a scratch area within the volume group for Oracle backups to disk.
They present a new LUN, disk /dev/rdsk/c16t0d5.
We respond like lightning.
insf -C disk
ioscan -fnC disk
pvcreate /dev/rdsk/c16t0d5
vgextend vg03 /dev/dsk/c16t0d5
The disk is ready for use.
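To confirm the volume group really picked up the new disk, something like:
vgdisplay -v /dev/vg03 | grep "PV Name"
# Expect to see all four disks listed:
# /dev/dsk/c10t0d1  /dev/dsk/c10t0d2  /dev/dsk/c10t0d3  /dev/dsk/c16t0d5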
How we set up logical volumes and a file system is a different article.
Tags: forums.itrc.hp.com, high capacity volume group, hpux, LVM