Script for detecting APA network bonded pairs. It is already built into the cinam21t DRD image, and it will save you 3-5 hours of guesswork on future builds.
Network addresses were changed to protect the innocent.
Here is an example:
[root@cinam21t]:/home/root # ./apanetwork_discover 142.18.1.26 142.18.1.96
---------------------------------------------------------
-This script figures out which NIC cards are APA paired.-
-It has two inputs:.....................................-
-1- The assigned IP address of the APA Group lan90#.....-
-2- The known network address of an HP-UX server on net.-
-ex ./apanetwork_discover 142.18.1.26 142.18.1.96 ......-
- These are cinam21t and stlam31t.......................-
- The system must be OFF network for this to work ......-
- Instruction: ..........................................-
- /sbin/init.d/net stop .................................-
- /sbin/init.d/vlan stop ................................-
- /sbin/init.d/hplm stop ................................-
- /sbin/init.d/hpapa stop (You may need to ctrl-break...-
- netstat -rn (ifconfig lan# down then unplumb any lans.-
- Wash,rinse and repeat for lan901,lan902,lan903 .......-
---------------------------------------------------------
The LAN is lan0
Success lan0 as 142.18.1.26 was able to ping 142.18.1.96
The LAN is lan8
NO JOY lan8 as 142.18.1.26 was NOT able to ping 142.18.1.96
The LAN is lan16
NO JOY lan16 as 142.18.1.26 was NOT able to ping 142.18.1.96
The LAN is lan19
NO JOY lan19 as 142.18.1.26 was NOT able to ping 142.18.1.96
The LAN is lan2
NO JOY lan2 as 142.18.1.26 was NOT able to ping 142.18.1.96
The LAN is lan49
NO JOY lan49 as 142.18.1.26 was NOT able to ping 142.18.1.96
The LAN is lan52
NO JOY lan52 as 142.18.1.26 was NOT able to ping 142.18.1.96
The LAN is lan56
Success lan56 as 142.18.1.26 was able to ping 142.18.1.96
[root@cinam21t]:/home/root #
In this case lan0 and lan56 are the bonded pair (lan900).
Take an nwmgr snapshot before bringing the network down, and run only from the console, since the script takes every LAN interface offline.
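A minimal way to capture that snapshot (plain shell; the path is just a suggestion):

nwmgr > /var/tmp/nwmgr.before.$(date +%Y%m%d)
netstat -rn >> /var/tmp/nwmgr.before.$(date +%Y%m%d)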
Here is the script code
/root/build # cat apanetwork_discover
#!/bin/ksh
#
echo "---------------------------------------------------------"
echo "-This script figures out which NIC cards are APA paired.-"
echo "-It has two inputs:.....................................-"
echo "-1- The assigned IP address of the APA Group lan90#.....-"
echo "-2- The known network address of an HP-UX server on net.-"
echo "-ex ./apanetwork_discover 172.19.1.26 172.19.1.96 ......-"
echo "- These are stlam34t and stlam31t.......................-"
echo "- The system must be OFF network for this to work ......-"
echo "- Instruction: ..........................................-"
echo "- /sbin/init.d/net stop .................................-"
echo "- /sbin/init.d/vlan stop ................................-"
echo "- /sbin/init.d/hplm stop ................................-"
echo "- /sbin/init.d/hpapa stop (You may need to ctrl-break...-"
echo "- netstat -rn (ifconfig lan# down then unplumb any lans.-"
echo "- Wash,rinse and repeat for lan901,lan902,lan903 .......-"
echo "---------------------------------------------------------"
IP2=$2
IPADDY=$1
nwmgr | awk '!/hp_apa/{ printf "%s %s\n", $1,$2 }' | awk '/UP/{print $1}' | while read -r LN
do
sleep 1
echo "The LAN is ${LN}"
ifconfig ${LN} ${IPADDY} netmask 255.255.255.0 up > /dev/null
ping ${IP2} -n 1 -m 5 > /dev/null
rc=$?
if [ $rc -eq 0 ]
then
echo "Success $LN as $IPADDY was able to ping $IP2"
else
echo "NO JOY $LN as $IPADDY was able NOT to ping $IP2"
fi
ifconfig ${LN} down
ifconfig ${LN} unplumb
done
Troubleshooting steps for when Glance will not start:
1- Remove the /var/opt/perf/ttd.pid file and try to start glance again:
#rm /var/opt/perf/ttd.pid
#glance
2- If the above fails to fix it, stop and restart Glance as follows:
# mwa stop
# midaemon -smdvss 4M -kths 1000 -pids 5000 -p
# ps -ef | grep midaemon      (make sure the midaemon is running)
# mwa start
Modify the MWA_START_COMMAND variable in /etc/rc.config.d/ovpa as follows to keep the change across system reboots:
# grep MWA_START_COMMAND /etc/rc.config.d/ovpa
MWA_START_COMMAND="/opt/perf/bin/midaemon -smdvss 4M -kths 1000 -pids 5000 -p ; /opt/perf/bin/mwa start"
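If this comes up often, the steps chain into a small ksh one-shot (a sketch assembled from the commands above, not a supported HP tool; test it on a sandbox first):

#!/usr/bin/ksh
# recovery one-shot: clear stale ttd.pid, restart midaemon with larger tables, restart mwa
rm -f /var/opt/perf/ttd.pid
/opt/perf/bin/mwa stop
/opt/perf/bin/midaemon -smdvss 4M -kths 1000 -pids 5000 -p
# confirm the resized midaemon actually came up before restarting mwa
if ps -ef | grep -q "[m]idaemon"
then
/opt/perf/bin/mwa start
else
echo "midaemon did not start; investigate before running mwa start"
fi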
Tags: HP-UX, hpux, hpux 11.31, ia64, patches
The disk shows as claimed in ioscan.
fuser -c shows it clean.
DRD insists the disk is busy.
A crashed DRD run caused the issue.
Don't want to reboot; that is an admission of defeat.
ERROR: Analysis of file system creation fails.
- Analysis of target fails.
- Analysis of the configuration with disk "/dev/disk/disk143" fails.
- The analysis step for creation of an inactive system image failed.
- The default DRD mount point "/var/opt/drd/mnts/sysimage_001/" cannot be used due to the following error(s):
- The mount point /var/opt/drd/mnts/sysimage_001/ is not an empty directory as required.
* Analyzing For System Image Cloning failed with 1 error.
* DRD operation failed, contents of /var/opt/drd/tmp copied to /var/opt/drd/save.
======= 08/13/18 06:39:16 EDT END Clone System Image failed with 1 error. (user=hcladmin) (jobid=ohonq001)
The fix: empty the stale mount directory, then clear the kernel metric data for the disk:
cd /var/opt/drd/mnts/
rm -rf *
scsimgr clear_kmstat -D /dev/rdisk/disk143
scsimgr: Cleared the Kmetric data successfully
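With the mount point empty and the stale device data cleared, the clone can be retried. A hedged example, assuming disk143 is still the intended target (-x overwrite=true lets DRD reuse the disk; run with -p first to preview):

drd clone -p -v -x overwrite=true -t /dev/disk/disk143
drd clone -v -x overwrite=true -t /dev/disk/disk143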
DRD nirvana.
Tags: drd, hpux 11.31, LVM, scsimgr
On an LVM 1.0 volume group, the task is a no-downtime storage migration from Hitachi to Pure solid state storage. MirrorDisk/UX is required. The disks are not identical in size, but the Pure LUN has enough free extents to hold the mirror:

dbrestore:root > diskinfo /dev/rdisk/disk42
SCSI describe of /dev/rdisk/disk42:
vendor: HITACHI
product id: OPEN-V
type: direct access
size: 16777216 Kbytes
bytes per sector: 512
dbrestore:root > diskinfo /dev/rdisk/disk52
SCSI describe of /dev/rdisk/disk52:
vendor: PURE
product id: FlashArray
type: direct access
size: 10485760 Kbytes
bytes per sector: 512

Add the new disk to the volume group:

pvcreate /dev/rdisk/disk52
vgextend /dev/vgtest /dev/disk/disk52

Before state:

dbrestore:root > vgdisplay -v vgtest
--- Volume groups ---
VG Name                     /dev/vgtest
VG Write Access             read/write
VG Status                   available
Max LV                      255
Cur LV                      1
Open LV                     1
Max PV                      16
Cur PV                      2
Act PV                      2
Max PE per PV               4095
VGDA                        4
PE Size (Mbytes)            4
Total PE                    6654
Alloc PE                    1024
Free PE                     5630
Total PVG                   0
Total Spare PVs             0
Total Spare PVs in use      0
VG Version                  1.0
VG Max Size                 262080m
VG Max Extents              65520

--- Logical volumes ---
LV Name                     /dev/vgtest/lvtest
LV Status                   available/syncd
LV Size (Mbytes)            4096
Current LE                  1024
Allocated PE                1024
Used PV                     1

--- Physical volumes ---
PV Name                     /dev/disk/disk42
PV Status                   available
Total PE                    4095
Free PE                     4095
Autoswitch                  On
Proactive Polling           On

PV Name                     /dev/disk/disk52
PV Status                   available
Total PE                    2559
Free PE                     1535
Autoswitch                  On
Proactive Polling           On

dbrestore:root > ioscan -NfnCdisk /dev/disk/disk42
Class I H/W Path Driver S/W State H/W Type Description
===================================================================
disk 42 64000/0xfa00/0x21 esdisk CLAIMED DEVICE HITACHI OPEN-V
/dev/disk/disk42 /dev/rdisk/disk42
dbrestore:root > ioscan -NfnCdisk /dev/disk/disk52
Class I H/W Path Driver S/W State H/W Type Description
===================================================================
disk 52 64000/0xfa00/0x35 esdisk CLAIMED DEVICE PURE FlashArray
/dev/disk/disk52 /dev/rdisk/disk52
dbrestore:root > bdf | grep test
/dev/vgtest/lvtest 4194304 19544 3913845 0% /test
dbrestore:root > lvdisplay -v /dev/vgtest/lvtest
--- Logical volumes ---
LV Name                     /dev/vgtest/lvtest
VG Name                     /dev/vgtest
LV Permission               read/write
LV Status                   available/syncd
Mirror copies               0
Consistency Recovery        MWC
Schedule                    parallel
LV Size (Mbytes)            4096
Current LE                  1024
Allocated PE                1024
Stripes                     0
Stripe Size (Kbytes)        0
Bad block                   on
Allocation                  strict
IO Timeout (Seconds)        default

--- Distribution of logical volume ---
PV Name            LE on PV  PE on PV
/dev/disk/disk42   1024      1024

--- Logical extents ---
LE    PV1               PE1   Status 1
00000 /dev/disk/disk42  00000 current
00001 /dev/disk/disk42  00001 current
00002 /dev/disk/disk42  00002 current
...
01022 /dev/disk/disk42  01022 current
01023 /dev/disk/disk42  01023 current

Mirror the LV onto the new disk:

dbrestore:root > lvextend -m 1 /dev/vgtest/lvtest /dev/disk/disk52
The newly allocated mirrors are now being synchronized. This operation will take some time. Please wait ....
Logical volume "/dev/vgtest/lvtest" has been successfully extended.
Volume Group configuration for /dev/vgtest has been saved in /etc/lvmconf/vgtest.conf
dbrestore:root > lvdisplay -v /dev/vgtest/lvtest
--- Logical volumes ---
LV Name                     /dev/vgtest/lvtest
VG Name                     /dev/vgtest
LV Permission               read/write
LV Status                   available/syncd
Mirror copies               1
Consistency Recovery        MWC
Schedule                    parallel
LV Size (Mbytes)            4096
Current LE                  1024
Allocated PE                2048
Stripes                     0
Stripe Size (Kbytes)        0
Bad block                   on
Allocation                  strict
IO Timeout (Seconds)        default

--- Distribution of logical volume ---
PV Name            LE on PV  PE on PV
/dev/disk/disk42   1024      1024
/dev/disk/disk52   1024      1024

--- Logical extents ---
LE    PV1               PE1   Status 1  PV2               PE2   Status 2
00000 /dev/disk/disk42  00000 current   /dev/disk/disk52  00000 current
00001 /dev/disk/disk42  00001 current   /dev/disk/disk52  00001 current
00002 /dev/disk/disk42  00002 current   /dev/disk/disk52  00002 current
...
01023 /dev/disk/disk42  01023 current   /dev/disk/disk52  01023 current

dbrestore:root > bdf | grep test
/dev/vgtest/lvtest 4194304 19544 3913845 0% /test

Once the mirror is synced, drop the Hitachi side:

dbrestore:root > lvreduce -m 0 /dev/vgtest/lvtest /dev/disk/disk42
Logical volume "/dev/vgtest/lvtest" has been successfully reduced.
Volume Group configuration for /dev/vgtest has been saved in /etc/lvmconf/vgtest.conf
dbrestore:root > bdf | grep test
/dev/vgtest/lvtest 4194304 19544 3913845 0% /test
dbrestore:root > lvdisplay -v /dev/vgtest/lvtest
--- Logical volumes ---
LV Name                     /dev/vgtest/lvtest
VG Name                     /dev/vgtest
LV Permission               read/write
LV Status                   available/syncd
Mirror copies               0
Consistency Recovery        MWC
Schedule                    parallel
LV Size (Mbytes)            4096
Current LE                  1024
Allocated PE                1024
Stripes                     0
Stripe Size (Kbytes)        0
Bad block                   on
Allocation                  strict
IO Timeout (Seconds)        default

--- Distribution of logical volume ---
PV Name            LE on PV  PE on PV
/dev/disk/disk52   1024      1024

--- Logical extents ---
LE    PV1               PE1   Status 1
00000 /dev/disk/disk52  00000 current
00001 /dev/disk/disk52  00001 current
...
01023 /dev/disk/disk52  01023 current

dbrestore:root > bdf | grep test
/dev/vgtest/lvtest 4194304 19544 3913845 0% /test
dbrestore:root >
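Once every LV in the VG has been migrated the same way and vgdisplay shows Free PE equal to Total PE on the Hitachi disk, the old PV can be dropped from the volume group. A hedged final step:

vgreduce /dev/vgtest /dev/disk/disk42
vgdisplay -v vgtest     # confirm only the Pure disk remains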
Hitachi shops face two annoyances:
1. xpinfo does not work on non-Hitachi storage, for example Pure storage.
2. xpinfo does not work on HPVM guests, depending on how the storage is passed through from the HPVM host.
I now present xpinfonew, which, though raw and unfinished, handles both cases.
The output:
myserv0:root > ./xpinfonew
Device path ldev
==========================================================================
/dev/rdisk/disk111 =:=
/dev/rdisk/disk12 30:86
/dev/rdisk/disk172 03:f3
/dev/rdisk/disk215 46:2c
/dev/rdisk/disk216 46:30
/dev/rdisk/disk217 46:34
/dev/rdisk/disk218 46:38
/dev/rdisk/disk219 46:28
/dev/rdisk/disk220 46:25
/dev/rdisk/disk221 46:27
/dev/rdisk/disk222 46:2a
/dev/rdisk/disk223 46:2e
/dev/rdisk/disk224 46:32
/dev/rdisk/disk225 46:2b
/dev/rdisk/disk226 46:2f
/dev/rdisk/disk227 46:33
/dev/rdisk/disk237 46:37
/dev/rdisk/disk238 46:36
/dev/rdisk/disk239 46:26
/dev/rdisk/disk240 46:29
/dev/rdisk/disk241 46:2d
/dev/rdisk/disk242 46:31
/dev/rdisk/disk243 46:35
/dev/rdisk/disk244 46:39
/dev/rdisk/disk4 aa:bf
/dev/rdisk/disk5 8b:c3
/dev/rdisk/disk6 03:a6
/dev/rdisk/disk9 01:00
myserv0:root > ./xpinfonew raw
Device path ldev
==========================================================================
/dev/rdisk/disk111 =
/dev/rdisk/disk12 3086
/dev/rdisk/disk172 03f3
/dev/rdisk/disk215 462c
/dev/rdisk/disk216 4630
/dev/rdisk/disk217 4634
/dev/rdisk/disk218 4638
/dev/rdisk/disk219 4628
/dev/rdisk/disk220 4625
/dev/rdisk/disk221 4627
/dev/rdisk/disk222 462a
/dev/rdisk/disk223 462e
/dev/rdisk/disk224 4632
/dev/rdisk/disk225 462b
/dev/rdisk/disk226 462f
/dev/rdisk/disk227 4633
/dev/rdisk/disk237 4637
/dev/rdisk/disk238 4636
/dev/rdisk/disk239 4626
/dev/rdisk/disk240 4629
/dev/rdisk/disk241 462d
/dev/rdisk/disk242 4631
/dev/rdisk/disk243 4635
/dev/rdisk/disk244 4639
/dev/rdisk/disk4 aabf
/dev/rdisk/disk5 8bc3
/dev/rdisk/disk6 03a6
/dev/rdisk/disk9 0100
cat xpinfonew
#!/bin/ksh
# Get ldev from any disk regardless of storage provider
#
# 10/26/2017 Steven "Shmuel" Protter steven.protter@hcl.com
#
echo "Device path \t\t ldev "
echo "=========================================================================="
ioscan -NfnCdisk | awk '/rdisk/{ print $(NF) }' | awk -F_ '{ print $1 }' | sort -u | while read -r dv
do
ldev=$(/var/adm/bin/getldev.ksh ${dv} ${1} )
echo "${dv} \t ${ldev}"
done
The code:
cat /var/adm/bin/getldev.ksh
#!/bin/ksh
# Get ldev from any disk regardless of storage provider
#
# 10/26/2017 Steven "Shmuel" Protter steven.protter@hcl.com
#
argies=$#
if [ $argies -eq 0 ]
then
echo "------------ 1 argument required device path ex: /dev/rdisk/disk101 -------------"
exit 1
fi
dv=$1
fmt=$2
## /usr/sbin/scsimgr lun_map -D ${dv} | awk '/World Wide Identifier/{ print $(NF) }'
# the last 4 hex digits of the WWID are the ldev
rldev=$(/usr/sbin/scsimgr lun_map -D ${dv} | awk '/World Wide Identifier/{ print substr($NF, length($NF) - 3, length($NF)) }')
# first two hex digits of the ldev
l1=$(echo ${rldev} | awk '{ print substr($NF, length($NF) - 3, 2) }')
# last two hex digits of the ldev
l2=$(echo ${rldev} | awk '{ print substr($NF, length($NF) - 1, length($NF)) }')
### echo "raw: ${rldev} l1: ${l1} l2: ${l2} ..."
if [ "$fmt" = "raw" ]
then
echo ${rldev}
else
echo "${l1}:${l2}"
fi
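A quick spot check against one of the disks from the listing above; the values line up with the table:

myserv0:root > /var/adm/bin/getldev.ksh /dev/rdisk/disk12
30:86
myserv0:root > /var/adm/bin/getldev.ksh /dev/rdisk/disk12 raw
3086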
It should work on any SAN-based storage.
Tags: HP-UX, hpux, storage ldev, storage ldev works in hpvm guests, xpinfo improvement
Host names need to be descriptive. The trend toward longer names has been going on for years.
As a legacy Unix, HP-UX CAN keep up.
Check the current setting:
mygush0:root > kctune -v expanded_node_host_names
Tunable expanded_node_host_names
Description Enables expanded node and host names (read manpage for warnings)
Module sysconfig
Current Value 0 [Default]
Value at Next Boot 0 [Default]
Value at Last Boot 0
Default Value 0
Constraints expanded_node_host_names >= 0
expanded_node_host_names <= 1
Can Change Immediately or at Next Boot
The hostname itself is set in /etc/rc.config.d/netconf.
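For reference, the entry looks like this (using this host's name):

mygush0:root > grep ^HOSTNAME /etc/rc.config.d/netconf
HOSTNAME="mygush0"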
But to make the system fully support longer names, the kernel tunable needs to be set as well.
mygush0:root > kctune expanded_node_host_names=1
==> Update the automatic 'backup' configuration first? y
* The automatic 'backup' configuration has been updated.
* Future operations will update the backup without prompting.
WARNING: Setting the expanded_node_host_names parameter to 1 will allow
administrators to set node and host names larger than 8 and 64
characters/bytes, respectively. It is strongly recommended
that all related manpages and documentation be understood
before setting larger names. Larger names can cause some
applications which use those names to behave incorrectly or
fail.
* The requested changes have been applied to the currently
running configuration.
Tunable                    Value        Expression  Changes
expanded_node_host_names   (before)  0  Default     Immed
                           (now)     1  1
mygush0:root > kctune -v expanded_node_host_names
Tunable expanded_node_host_names
Description Enables expanded node and host names (read manpage for warnings)
Module sysconfig
Current Value 1
Value at Next Boot 1
Value at Last Boot 0
Default Value 0
Constraints expanded_node_host_names >= 0
expanded_node_host_names <= 1
Can Change Immediately or at Next Boot
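A quick hedged way to prove longer node names now stick (the name is an example; update HOSTNAME in /etc/rc.config.d/netconf so it survives a reboot):

mygush0:root > uname -S mygush0-dmz-sftp-primary
mygush0:root > uname -n
mygush0-dmz-sftp-primary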
Good to go.
Tags: hp-ux long hostnames
Real life story.
A DMZ-based server dedicated to SFTP was configured with these sshd rules in /etc/hosts.allow:
sshd : ALL@16.89.97.*:ALLOW
sshd : ALL@14.251.*:ALLOW
sshd : AAL@208.94.61.*:ALLOW
Should have been:
sshd : ALL@16.89.97.*:ALLOW
sshd : ALL@14.251.*:ALLOW
sshd : ALL@208.94.61.*:ALLOW
That third network was the one coming in through the firewall from the outside world, so its SFTP users were shut out.
The end users were inconvenienced, and the firewall team wasted a lot of time reviewing rules and looking at logs, all over a one-letter typo.
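A one-line sanity check would have caught it. A minimal sketch, assuming every client pattern on the box is written ALL@network like the rules above:

# flag any hosts.allow rule whose client field is not ALL@...
awk '/@/ && !/ALL@/' /etc/hosts.allow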
Starting a series on automation scripting.
This one is meant to be run from a master-of-the-universe host, e.g. a host whose root public key is placed on all the worker servers.
cat searchforid.ksh
#!/usr/bin/ksh
#
# test script
#
. ./.scriptenv
# provides standardization for example SSH_CMD="ssh -q -f -o ConnectionAttempts=3 -o ConnectTimeout=10 -o PasswordAuthentication=no -o BatchMode=yes"
LF="${LOGDIR}/${0}.logfile.txt"
> ${LF}
sc=0
uid=$1
date >> ${LF}
awk '{ print $1 }' $serverlist | while read -r hn
do
echo "################### ${hn} searching for user ${uid} ######################"
echo "################### ${hn} searching for user ${uid} ######################" >> ${LF}
if [ "${hn}" != "mygush0" ]
then
${SSH_CMD} ${hn} "grep ${uid} /opt/iexpress/sudo/etc/sudoers;grep ${uid} /etc/passwd"
sleep 5
${SSH_CMD} ${hn} "grep ${uid} /opt/iexpress/sudo/etc/sudoers;grep ${uid} /etc/passwd" >> ${LF}
else
grep ${uid} /opt/iexpress/sudo/etc/sudoers;grep ${uid} /etc/passwd
grep ${uid} /opt/iexpress/sudo/etc/sudoers >> ${LF};grep ${uid}
/etc/passwd >> ${LF}
echo
"#######################################################################################################"
echo "#######################################################################################################" >> ${LF}
fi
done
echo "Success count: ${sc} " >> ${LF}