Wednesday, July 01, 2009

Calculating the size of an object

Really easy:

select segment_name, segment_type, bytes/1024/1024 as "MEGABYTES"
from dba_segments
where owner = 'SIEBEL'
and segment_name = 'S_EVT_ACT';
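
To get the total size of everything a schema owns rather than a single object, a small variation (same SIEBEL schema assumed here) groups the segments by type:

select segment_type, sum(bytes)/1024/1024 as "MEGABYTES"
from dba_segments
where owner = 'SIEBEL'
group by segment_type
order by 2 desc;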

Friday, June 19, 2009

Logon Trigger Trace

Generate 10046 level 12 traces, when you can't set the event from within the session itself, with a logon trigger:

DROP TRIGGER SYS.ON_LOGON_TRACE;

CREATE OR REPLACE TRIGGER SYS.ON_LOGON_TRACE
AFTER LOGON ON DATABASE
WHEN (
USER = 'AGR55'
)
BEGIN
execute immediate 'alter session set events ''10046 trace name context forever, level 12''';
EXCEPTION
WHEN OTHERS THEN
NULL;
END;
/

ALTER TRIGGER SYS.ON_LOGON_TRACE DISABLE;
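
The resulting trace files end up in the instance's user dump destination; a quick way to find that directory (standard parameter name, pre-11g diagnostic layout assumed) is:

select value from v$parameter where name = 'user_dump_dest';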

Thursday, June 18, 2009

Generating CPU activity

Sometimes you might want to generate some CPU activity, for example to test a new CPU or to check performance scripts.

On the command line:

while :
do
:
done

or:

while true
do
cat /dev/urandom >/dev/null
done

and then use mpsched (on HP-UX) to bind the process to a specific CPU:

mpsched -c <cpu_id> /usr/bin/cat /dev/urandom > /dev/null &

or in C:

#include <stdio.h>
#include <math.h>

int main(void)
{
    int i;
    float X[1024], Y[1024];

    while (1)
    {
        for (i = 0; i < 1024; i++)
        {
            X[i] = 1;
            Y[i] = X[i] * M_PI;  /* M_PI is defined in /usr/include/math.h */
        }
    }
    return 0;  /* never reached */
}
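
To build and run it (the file name here is only an assumption, and the stock cc compiler is used):

cc -o cpuburn cpuburn.c
./cpuburn &

Start one copy per CPU you want to keep busy.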


Tuesday, June 09, 2009

CRS and Oracle10g

PURPOSE
-------

This document is to provide additional information on CRS (Cluster Ready Services)
in 10g Real Application Clusters.

SCOPE & APPLICATION
-------------------

This document is intended for RAC Database Administrators and Oracle support
engineers.

CRS and 10g REAL APPLICATION CLUSTERS
-------------------------------------

CRS (Cluster Ready Services) is a new feature for 10g Real Application Clusters
that provides a standard cluster interface on all platforms and performs
new high availability operations not available in previous versions.

CRS KEY FACTS
-------------

Prior to installing CRS and 10g RAC, there are some key points to remember about
CRS and 10g RAC:

- CRS is REQUIRED to be installed and running prior to installing 10g RAC.

- CRS can either run on top of the vendor clusterware (such as Sun Cluster,
HP Serviceguard, IBM HACMP, TruCluster, Veritas Cluster, Fujitsu Primecluster,
etc...) or can run without the vendor clusterware. The vendor clusterware
was required in 9i RAC but is optional in 10g RAC.

- The CRS HOME and ORACLE_HOME must be installed in DIFFERENT locations.

- Shared Location(s) or devices for the Voting File and OCR (Oracle
Configuration Repository) file must be available PRIOR to installing CRS. The
voting file should be at least 20MB and the OCR file should be at least 100MB.

- CRS and RAC require that the following network interfaces be configured prior
to installing CRS or RAC:
- Public Interface
- Private Interface
- Virtual (Public) Interface

- The root.sh script at the end of the CRS installation starts the CRS stack.
If the CRS stack does not start, troubleshoot it using the log directories
described below.

- Only one set of CRS daemons can be running per RAC node.

- On Unix, the CRS stack is run from entries in /etc/inittab with "respawn".

- If there is a network split (nodes lose communication with each other), one
or more nodes may reboot automatically to prevent data corruption.

- The supported method to start CRS is booting the machine. MANUAL STARTUP OF
THE CRS STACK IS NOT SUPPORTED UNTIL 10.1.0.4 OR HIGHER.
- The supported method to stop CRS is to shut down the machine or to use "init.crs stop".

- Killing CRS daemons is not supported unless you are removing the CRS
installation because flag files can become mismatched.

- For maintenance, go to single user mode at the OS.

Once the stack is started, you should be able to see all of the daemon processes
with a ps -ef command:

[rac1]/u01/home/beta> ps -ef | grep crs

oracle 1363 999 0 11:23:21 ? 0:00 /u01/crs_home/bin/evmlogger.bin -o /u01
oracle 999 1 0 11:21:39 ? 0:01 /u01/crs_home/bin/evmd.bin
root 1003 1 0 11:21:39 ? 0:01 /u01/crs_home/bin/crsd.bin
oracle 1002 1 0 11:21:39 ? 0:01 /u01/crs_home/bin/ocssd.bin


CRS DAEMON FUNCTIONALITY
------------------------

Here is a short description of each of the CRS daemon processes:

CRSD:
- Engine for HA operation
- Manages 'application resources'
- Starts, stops, and fails over 'application resources'
- Spawns separate 'actions' to start/stop/check application resources
- Maintains configuration profiles in the OCR (Oracle Configuration Repository)
- Stores current known state in the OCR.
- Runs as root
- Is restarted automatically on failure

OCSSD:
- OCSSD is part of RAC and Single Instance with ASM
- Provides access to node membership
- Provides group services
- Provides basic cluster locking
- Integrates with existing vendor clusterware, when present
- Can also run without integration with vendor clusterware
- Runs as Oracle.
- Failure exit causes machine reboot.
--- This is a feature to prevent data corruption in event of a split brain.

EVMD:
- Generates events when things happen
- Spawns a permanent child evmlogger
- Evmlogger, on demand, spawns children
- Scans callout directory and invokes callouts.
- Runs as Oracle.
- Restarted automatically on failure

CRS LOG DIRECTORIES
-------------------

When troubleshooting CRS problems, it is important to review the directories
under the CRS Home.

$ORA_CRS_HOME/crs/log - This directory includes traces for CRS resources that are
joining, leaving, restarting, and relocating as identified by CRS.

$ORA_CRS_HOME/crs/init - Any core dumps for the crsd.bin daemon should be written
here.

$ORA_CRS_HOME/css/log - The css logs indicate all actions such as
reconfigurations, missed checkins, connects, and disconnects from the client
CSS listener. In some cases the logger logs messages with the category of
(auth.crit) for the reboots done by Oracle. This can be used for checking the
exact time when a reboot occurred.

$ORA_CRS_HOME/css/init - Core dumps from ocssd, and the pid file for the
css daemon whose death is treated as fatal, are located here. If there are
abnormal restarts for css then the core files have the format core.<pid>.

$ORA_CRS_HOME/evm/log - Log files for the evm and evmlogger daemons. Not used
as often for debugging as the CRS and CSS directories.

$ORA_CRS_HOME/evm/init - Pid and lock files for EVM. Core files for EVM should
also be written here.

$ORA_CRS_HOME/srvm/log - Log files for OCR.
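
As a practical example of using these directories, the eviction/reboot time mentioned above can usually be found by searching the css logs (the exact log file names vary by version, so the wildcard is only an assumption):

cd $ORA_CRS_HOME/css/log
grep -i "reconfiguration" *
grep -i "auth.crit" *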


STATUS FOR CRS RESOURCES
------------------------

After installing RAC and running the VIPCA (Virtual IP Configuration Assistant)
launched with the RAC root.sh, you should be able to see all of your CRS
resources with crs_stat. Example:

cd $ORA_CRS_HOME/bin
./crs_stat

NAME=ora.rac1.gsd
TYPE=application
TARGET=ONLINE
STATE=ONLINE

NAME=ora.rac1.oem
TYPE=application
TARGET=ONLINE
STATE=ONLINE

NAME=ora.rac1.ons
TYPE=application
TARGET=ONLINE
STATE=ONLINE

NAME=ora.rac1.vip
TYPE=application
TARGET=ONLINE
STATE=ONLINE

NAME=ora.rac2.gsd
TYPE=application
TARGET=ONLINE
STATE=ONLINE

NAME=ora.rac2.oem
TYPE=application
TARGET=ONLINE
STATE=ONLINE

NAME=ora.rac2.ons
TYPE=application
TARGET=ONLINE
STATE=ONLINE

NAME=ora.rac2.vip
TYPE=application
TARGET=ONLINE
STATE=ONLINE

There is also a script available to view CRS resources in a format that is
easier to read. Just create a shell script with:

--------------------------- Begin Shell Script -------------------------------

#!/usr/bin/ksh
#
# Sample 10g CRS resource status query script
#
# Description:
# - Returns formatted version of crs_stat -t, in tabular
# format, with the complete rsc names and filtering keywords
# - The argument, $RSC_KEY, is optional and if passed to the script, will
# limit the output to HA resources whose names match $RSC_KEY.
# Requirements:
# - $ORA_CRS_HOME should be set in your environment

RSC_KEY=$1
QSTAT=-u
AWK=/usr/xpg4/bin/awk # if not available use /usr/bin/awk

# Table header:
echo ""
$AWK \
'BEGIN {printf "%-45s %-10s %-18s\n", "HA Resource", "Target", "State";
printf "%-45s %-10s %-18s\n", "-----------", "------", "-----";}'

# Table body:
$ORA_CRS_HOME/bin/crs_stat $QSTAT | $AWK \
'BEGIN { FS="="; state = 0; }
$1~/NAME/ && $2~/'$RSC_KEY'/ {appname = $2; state=1};
state == 0 {next;}
$1~/TARGET/ && state == 1 {apptarget = $2; state=2;}
$1~/STATE/ && state == 2 {appstate = $2; state=3;}
state == 3 {printf "%-45s %-10s %-18s\n", appname, apptarget, appstate; state=0;}'

--------------------------- End Shell Script -------------------------------

Example output:

[opcbsol1]/u01/home/usupport> ./crsstat
HA Resource Target State
----------- ------ -----
ora.V10SN.V10SN1.inst ONLINE ONLINE on opcbsol1
ora.V10SN.V10SN2.inst ONLINE ONLINE on opcbsol2
ora.V10SN.db ONLINE ONLINE on opcbsol2
ora.opcbsol1.ASM1.asm ONLINE ONLINE on opcbsol1
ora.opcbsol1.LISTENER_OPCBSOL1.lsnr ONLINE ONLINE on opcbsol1
ora.opcbsol1.gsd ONLINE ONLINE on opcbsol1
ora.opcbsol1.ons ONLINE ONLINE on opcbsol1
ora.opcbsol1.vip ONLINE ONLINE on opcbsol1
ora.opcbsol2.ASM2.asm ONLINE ONLINE on opcbsol2
ora.opcbsol2.LISTENER_OPCBSOL2.lsnr ONLINE ONLINE on opcbsol2
ora.opcbsol2.gsd ONLINE ONLINE on opcbsol2
ora.opcbsol2.ons ONLINE ONLINE on opcbsol2
ora.opcbsol2.vip ONLINE ONLINE on opcbsol2


CRS RESOURCE ADMINISTRATION
---------------------------

You can use srvctl to manage these resources. Below are syntax and examples.

-------------------------------------------------------------------------------

CRS RESOURCE STATUS

srvctl status database -d <database-name> [-f] [-v] [-S <level>]
srvctl status instance -d <database-name> -i <instance-name>[,<instance-name-list>]
[-f] [-v] [-S <level>]
srvctl status service -d <database-name> -s <service-name>[,<service-name-list>]
[-f] [-v] [-S <level>]
srvctl status nodeapps [-n <node-name>]
srvctl status asm -n <node-name>

EXAMPLES:

Status of the database, all instances and all services.
srvctl status database -d ORACLE -v
Status of named instances with their current services.
srvctl status instance -d ORACLE -i RAC01, RAC02 -v
Status of a named service.
srvctl status service -d ORACLE -s ERP -v
Status of all nodes supporting database applications.
srvctl status node

-------------------------------------------------------------------------------

START CRS RESOURCES

srvctl start database -d <database-name> [-o <start-options>]
[-c <connect-string> | -q]
srvctl start instance -d <database-name> -i <instance-name>
[,<instance-name-list>] [-o <start-options>] [-c <connect-string> | -q]
srvctl start service -d <database-name> [-s <service-name>[,<service-name-list>]]
[-i <instance-name>] [-o <start-options>] [-c <connect-string> | -q]
srvctl start nodeapps -n <node-name>
srvctl start asm -n <node-name> [-i <asm-instance-name>] [-o <start-options>]

EXAMPLES:

Start the database with all enabled instances.
srvctl start database -d ORACLE
Start named instances.
srvctl start instance -d ORACLE -i RAC03, RAC04
Start named services. Dependent instances are started as needed.
srvctl start service -d ORACLE -s CRM
Start a service at the named instance.
srvctl start service -d ORACLE -s CRM -i RAC04
Start node applications.
srvctl start nodeapps -n myclust-4

-------------------------------------------------------------------------------

STOP CRS RESOURCES

srvctl stop database -d <database-name> [-o <stop-options>]
[-c <connect-string> | -q]
srvctl stop instance -d <database-name> -i <instance-name>[,<instance-name-list>]
[-o <stop-options>] [-c <connect-string> | -q]
srvctl stop service -d <database-name> [-s <service-name>[,<service-name-list>]]
[-i <instance-name>] [-c <connect-string> | -q] [-f]
srvctl stop nodeapps -n <node-name>
srvctl stop asm -n <node-name> [-i <asm-instance-name>] [-o <stop-options>]

EXAMPLES:

Stop the database, all instances and all services.
srvctl stop database -d ORACLE
Stop named instances, first relocating all existing services.
srvctl stop instance -d ORACLE -i RAC03,RAC04
Stop the service.
srvctl stop service -d ORACLE -s CRM
Stop the service at the named instances.
srvctl stop service -d ORACLE -s CRM -i RAC04
Stop node applications. Note that instances and services also stop.
srvctl stop nodeapps -n myclust-4

-------------------------------------------------------------------------------

ADD CRS RESOURCES

srvctl add database -d <database-name> -o <oracle-home> [-m <domain-name>]
[-p <spfile>] [-A <name|ip>/netmask]
[-r {PRIMARY | PHYSICAL_STANDBY | LOGICAL_STANDBY}]
[-s <start-options>] [-n <db-name>]
srvctl add instance -d <database-name> -i <instance-name> -n <node-name>
srvctl add service -d <database-name> -s <service-name> -r <preferred-list>
[-a <available-list>] [-P <TAF-policy>] [-u]
srvctl add nodeapps -n <node-name> -o <oracle-home>
[-A <name|ip>/netmask[/if1[|if2|...]]]
srvctl add asm -n <node-name> -i <asm-instance-name> -o <oracle-home>

OPTIONS:

-A vip range, node, and database address specification. The format of the
address string is:
[<name>]/<ip>/<netmask>[/<interface>] [,] [<name>]/<ip>/<netmask>
[/<interface>]
-a for services, list of available instances, this list cannot include
preferred instances
-m domain name with the format "us.mydomain.com"
-n node name that will support one or more instances
-o $ORACLE_HOME to locate Oracle binaries
-P for services, TAF preconnect policy - NONE, PRECONNECT
-r for services, list of preferred instances, this list cannot include
available instances.
-s spfile name
-u updates the preferred or available list for the service to support the
specified instance. Only one instance may be specified with the -u
switch. Instances that already support the service should not be
included.

EXAMPLES:

Add a new node:
srvctl add nodeapps -n myclust-1 -o $ORACLE_HOME -A
139.184.201.1/255.255.255.0/hme0
Add a new database.
srvctl add database -d ORACLE -o $ORACLE_HOME
Add named instances to an existing database.
srvctl add instance -d ORACLE -i RAC01 -n myclust-1
srvctl add instance -d ORACLE -i RAC02 -n myclust-2
srvctl add instance -d ORACLE -i RAC03 -n myclust-3
Add a service to an existing database with preferred instances (-r) and
available instances (-a). Use basic failover to the available instances.
srvctl add service -d ORACLE -s STD_BATCH -r RAC01,RAC02 -a RAC03,RAC04
Add a service to an existing database with preferred instances in list one and
available instances in list two. Use preconnect at the available instances.
srvctl add service -d ORACLE -s STD_BATCH -r RAC01,RAC02 -a RAC03,RAC04 -P PRECONNECT

-------------------------------------------------------------------------------

REMOVE CRS RESOURCES

srvctl remove database -d <database-name>
srvctl remove instance -d <database-name> [-i <instance-name>]
srvctl remove service -d <database-name> -s <service-name> [-i <instance-name>]
srvctl remove nodeapps -n <node-name>

EXAMPLES:

Remove the applications for a database.
srvctl remove database -d ORACLE
Remove the applications for named instances of an existing database.
srvctl remove instance -d ORACLE -i RAC03
srvctl remove instance -d ORACLE -i RAC04
Remove the service.
srvctl remove service -d ORACLE -s STD_BATCH
Remove the service from the instances.
srvctl remove service -d ORACLE -s STD_BATCH -i RAC03,RAC04
Remove all node applications from a node.
srvctl remove nodeapps -n myclust-4

-------------------------------------------------------------------------------

MODIFY CRS RESOURCES

srvctl modify database -d <database-name> [-n <db-name>] [-m <domain-name>]
[-p <spfile>] [-r {PRIMARY | PHYSICAL_STANDBY | LOGICAL_STANDBY}]
[-s <start-options>]
srvctl modify instance -d <database-name> -i <instance-name> -n <node-name>
srvctl modify instance -d <database-name> -i <instance-name> {-s <asm-instance-name> | -r}
srvctl modify service -d <database-name> -s <service-name> -i <instance-name>
-t <new-instance-name> [-f]
srvctl modify service -d <database-name> -s <service-name> -i <instance-name>
-r [-f]
srvctl modify nodeapps -n <node-name> [-A <address>] [-x]

OPTIONS:

-i <instance-name> -t <new-instance-name> the instance name (-i) is replaced by the
instance name (-t)
-i <instance-name> -r the named instance is modified to be a preferred instance
-A <address-list> address-list for VIP application, at node level
-s <asm-instance-name> add or remove ASM dependency

EXAMPLES:

Modify an instance to execute on another node.
srvctl modify instance -d ORACLE -n myclust-4
Modify a service to execute on another node.
srvctl modify service -d ORACLE -s HOT_BATCH -i RAC01 -t RAC02
Modify an instance to be a preferred instance for a service.
srvctl modify service -d ORACLE -s HOT_BATCH -i RAC02 -r

-------------------------------------------------------------------------------

RELOCATE SERVICES

srvctl relocate service -d <database-name> -s <service-name> [-i <instance-name>] -t <new-instance-name> [-f]

EXAMPLES:

Relocate a service from one instance to another
srvctl relocate service -d ORACLE -s CRM -i RAC04 -t RAC01

-------------------------------------------------------------------------------

ENABLE CRS RESOURCES (The resource may be up or down to use this function)

srvctl enable database -d <database-name>
srvctl enable instance -d <database-name> -i <instance-name>[,<instance-name-list>]
srvctl enable service -d <database-name> -s <service-name>[,<service-name-list>] [-i <instance-name>]

EXAMPLES:

Enable the database.
srvctl enable database -d ORACLE
Enable the named instances.
srvctl enable instance -d ORACLE -i RAC01, RAC02
Enable the service.
srvctl enable service -d ORACLE -s ERP,CRM
Enable the service at the named instance.
srvctl enable service -d ORACLE -s CRM -i RAC03

-------------------------------------------------------------------------------

DISABLE CRS RESOURCES (The resource must be down to use this function)

srvctl disable database -d <database-name>
srvctl disable instance -d <database-name> -i <instance-name>[,<instance-name-list>]
srvctl disable service -d <database-name> -s <service-name>[,<service-name-list>] [-i <instance-name>]

EXAMPLES:

Disable the database globally.
srvctl disable database -d ORACLE
Disable the named instances.
srvctl disable instance -d ORACLE -i RAC01, RAC02
Disable the service globally.
srvctl disable service -d ORACLE -s ERP,CRM
Disable the service at the named instance.
srvctl disable service -d ORACLE -s CRM -i RAC03,RAC04

-------------------------------------------------------------------------------


Friday, June 05, 2009

Ubuntu 9.04 NFS Server with Solaris 10 automounter nfs client

1. Download the ubuntu nfs packages and install:

# sudo apt-get install nfs-kernel-server nfs-common portmap

2. Setup the /etc/exports file:

# sudo vi /etc/exports

add the nfs export line, for example:

/software 192.168.0.0/24(rw,fsid=0,no_root_squash,async,no_subtree_check)

and export the share and check the nfs server list:

# sudo exportfs -a
# sudo exportfs
/software 192.168.0.0/24

3. Next, set up the Solaris client. It is useful to first mount the share manually. Update /etc/default/nfs on the client so that the maximum NFS protocol version is 3, so that the client and server can talk to each other:

# vi /etc/default/nfs

and change:

# Sets the maximum version of the NFS protocol that will be used by
# the NFS client. Can be overridden by the "vers=" NFS mount option.
# If "vers=" is not specified for an NFS mount, this is the version
NFS_CLIENT_VERSMAX=3

4. Test that the client can mount the share:

# showmount -e ubuntu
export list for ubuntu:
/software 192.168.0.0/24
# mount ubuntu:/software /mnt
# ls /mnt
oracle
# cd /
# umount -f /mnt

5. Now set up the automounter, as we don't want /mnt mounted all the time. Edit the /etc/auto_master file and add an entry for an auto_direct map:

# cat /etc/auto_master
#
# Copyright 2003 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
# ident "@(#)auto_master 1.8 03/04/28 SMI"
#
# Master map for automounter
#
+auto_master
/net -hosts -nosuid,nobrowse
/home auto_home -nobrowse
/- auto_direct

and create the /etc/auto_direct file:

# cat /etc/auto_direct
/mnt -rw ubuntu:/software

next, restart the autofs service and test:

# svcadm disable autofs
# svcadm enable autofs
# df -h /mnt
Filesystem size used avail capacity Mounted on
auto_direct 0K 0K 0K 0% /mnt
# cd /mnt
# ls
oracle
# df -h /mnt
Filesystem size used avail capacity Mounted on
ubuntu:/software 7.5G 2.9G 4.2G 42% /mnt

Tuesday, June 02, 2009

Flashback table in Oracle 9i/10g

To flash back a table, we can create a copy of the table as of a certain point in time, using syntax such as:

CREATE TABLE daves_restore AS (SELECT * FROM isfts.lrmp_download_today AS OF TIMESTAMP TO_TIMESTAMP('14-NOV-08 13:30:00', 'DD-MON-YY HH24:MI:SS') MINUS SELECT * FROM lrmp_download_today)

which creates a table containing just the delta changes. Or, to copy the entire table as of a point in time:

create table agr55.AHSRELVALUE_BAK as (SELECT * FROM agr55.AHSRELVALUE as of timestamp TO_TIMESTAMP('02-JUN-09 10:45:00', 'DD-MON-YY HH24:MI:SS'));
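
How far back either form can flash back depends on the undo retention configured on the database; a quick check (the value is in seconds) is:

select value from v$parameter where name = 'undo_retention';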

Monday, May 11, 2009

Solaris 10 Link Based IPMP Configuration

This document is a short summary of failure detection types, with additional/typical/recommended configuration examples using link-based failure detection only. Even though link-based failure detection was supported before Solaris 10 (provided DLPI link up/down notifications are supported by the network driver in use), it is now possible to use this failure detection type without any probing (probe-based failure detection).


Steps to Follow
IPMP Link-based Only Failure Detection with Solaris [TM] 10 Operating System (OS)

Contents:

1. Types of Failure Detection

1.1. Link-based Failure Detection
1.2. Probe-based Failure Detection

2. Configuration Examples using Link-based Failure Detection only

2.1. Single Interface

2.2. Multiple Interfaces

2.2.1. Active-Active
2.2.1.1. Two Interfaces
2.2.1.2. Two Interfaces + logical
2.2.1.3. Three Interfaces

2.2.2. Active-Standby
2.2.2.1. Two Interfaces
2.2.2.2. Two Interfaces + logical

3. References

1. Types of Failure Detection

1.1. Link-based Failure Detection

Link-based failure detection is always enabled (provided it is supported by the interface), whether or not the optional probe-based failure detection is used. As per PSARC/1999/225, network drivers send the asynchronous DLPI notifications DL_NOTE_LINK_DOWN (link/NIC is down) and DL_NOTE_LINK_UP (link/NIC is up). The UP and DOWN notifications are used in IP to set and clear the IFF_RUNNING flag, which is, in the absence of such notifications, always set for an interface that is up. Failure detection software will immediately detect changes to IFF_RUNNING. These DLPI notifications were implemented in network drivers over time and are supported by almost all of them as of Solaris 10.

With link-based failure detection, only the link between the local interface and the link partner is checked, at the hardware layer. Neither the IP layer nor any further network path is monitored.

No test addresses are required for link-based failure detection.

For more information, please refer to the Solaris 10 System Administration Guide:
IP Services >> IPMP >> 30. Introducing IPMP (Overview) >> Link-Based Failure Detection

1.2. Probe-based Failure Detection

Probe-based failure detection is performed on each interface in the IPMP group that has a test address. Using this test address, ICMP probe messages go out over this interface to one or more target systems on the same IP link. The in.mpathd daemon determines which target systems to probe dynamically:

  • all default routes on the same IP link are used as probe targets;
  • all host routes on the same IP link are used as probe targets (see Configuring Target Systems);
  • if neither default nor host routes are available, in.mpathd sends an all-hosts multicast to 224.0.0.1 in IPv4 and ff02::1 in IPv6 to find neighbor hosts on the link.

Note: Available probe targets are determined dynamically, so the in.mpathd daemon does not have to be restarted.

The in.mpathd daemon probes all the targets separately through all the interfaces in the IPMP group. The probing rate depends on the failure detection time (FDT) specified in /etc/default/mpathd (default 10 seconds), with 5 probes per timeframe. If 5 consecutive probes fail, in.mpathd considers the interface to have failed. The minimum repair detection time is twice the failure detection time, 20 seconds by default, because replies to 10 consecutive probes must be received.

Without any configured host routes, the default route is used as the single probe target in most cases. In this case the whole network path up to the gateway (router) is monitored at the IP layer. With all interfaces in the IPMP group connected via redundant network paths (switches etc.), you get full redundancy. On the other hand, the default router can be a single point of failure, resulting in 'All Interfaces in group have failed'. Even with the default gateway down, it can make sense not to fail the whole IPMP group and to still allow traffic within the local network. In this case specific probe targets (hosts or active network components) can be configured via host routes. So it is a question of network design which network path you want to monitor.
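
For example, to force a specific probe target, a static host route can be added (192.168.10.1 below is just an assumed target on the local subnet; this is the form described in the Solaris IPMP documentation):

# route add -host 192.168.10.1 192.168.10.1 -static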

A test address is required on each interface in the IPMP group, but the test addresses can be in a different IP test subnet than the data address(es). So private network addresses as specified by RFC 1918 (e.g. 10/8, 172.16/12, or 192.168/16) can be used as well.

For more information, please refer to the Solaris 10 System Administration Guide:
IP Services >> IPMP >> 30. Introducing IPMP (Overview) >> Probe-Based Failure Detection

2. Configuration Examples using Link-based Failure Detection

An IPMP configuration typically consists of two or more physical interfaces on the same system that are attached to the same IP link. These physical interfaces might or might not be on the same NIC. The interfaces are configured as members of the same IPMP group.

A single interface can be configured in its own IPMP group. The single interface IPMP group has the same behavior as an IPMP group with multiple interfaces. However, failover and failback cannot occur for an IPMP group with only one interface.

The following message tells you that this is a link-based-only failure detection configuration. It is reported for each interface in the group.

/var/adm/messages
in.mpathd[144]: [ID 975029 daemon.error] No test address configured on interface ce0; disabling probe-based failure detection on it

So in this configuration it is not an error, but rather a confirmation that probe-based failure detection has been disabled correctly.

2.1. Single Interface

/etc/hostname.ce0
192.168.10.10 netmask + broadcast + group ipmp0 up

# ifconfig -a
ce0: flags=1000843 mtu 1500 index 4
inet 192.168.10.10 netmask ffffff00 broadcast 192.168.10.255
groupname ipmp0
ether 0:3:ba:93:90:fc

2.2. Multiple Interfaces

2.2.1. Active-Active

2.2.1.1. Two Interfaces

/etc/hostname.ce0
192.168.10.10 netmask + broadcast + group ipmp0 up

/etc/hostname.ce1
group ipmp0 up

# ifconfig -a
ce0: flags=1000843 mtu 1500 index 4
inet 192.168.10.10 netmask ffffff00 broadcast 192.168.10.255
groupname ipmp0
ether 0:3:ba:93:90:fc
ce1: flags=1000843 mtu 1500 index 5
inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255
groupname ipmp0
ether 0:3:ba:93:91:35

2.2.1.2. Two Interfaces + logical

/etc/hostname.ce0
192.168.10.10 netmask + broadcast + group ipmp0 up \
addif 192.168.10.11 netmask + broadcast + up

/etc/hostname.ce1
group ipmp0 up

# ifconfig -a
ce0: flags=1000843 mtu 1500 index 4
inet 192.168.10.10 netmask ffffff00 broadcast 192.168.10.255
groupname ipmp0
ether 0:3:ba:93:90:fc
ce0:1: flags=1000843 mtu 1500 index 4
inet 192.168.10.11 netmask ffffff00 broadcast 192.168.10.255
ce1: flags=1000843 mtu 1500 index 5
inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255
groupname ipmp0
ether 0:3:ba:93:91:35

2.2.1.3. Three Interfaces

/etc/hostname.ce0
192.168.10.10 netmask + broadcast + group ipmp0 up

/etc/hostname.ce1
group ipmp0 up

/etc/hostname.bge1
group ipmp0 up

# ifconfig -a
bge1: flags=1000843 mtu 1500 index 3
inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255
groupname ipmp0
ether 0:9:3d:11:91:1b
ce0: flags=1000843 mtu 1500 index 4
inet 192.168.10.10 netmask ffffff00 broadcast 192.168.10.255
groupname ipmp0
ether 0:3:ba:93:90:fc
ce1: flags=1000843 mtu 1500 index 5
inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255
groupname ipmp0
ether 0:3:ba:93:91:35

2.2.2. Active-Standby

2.2.2.1. Two Interfaces

/etc/hostname.ce0
192.168.10.10 netmask + broadcast + group ipmp0 up

/etc/hostname.ce1
group ipmp0 standby up

# ifconfig -a
ce0: flags=1000843 mtu 1500 index 4
inet 192.168.10.10 netmask ffffff00 broadcast 192.168.10.255
groupname ipmp0
ether 0:3:ba:93:90:fc
ce0:1: flags=1000843 mtu 1500 index 4
inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255
ce1: flags=69000842 mtu 0 index 5
inet 0.0.0.0 netmask 0
groupname ipmp0
ether 0:3:ba:93:91:35

2.2.2.2. Two Interfaces + logical

/etc/hostname.ce0
192.168.10.10 netmask + broadcast + group ipmp0 up \
addif 192.168.10.11 netmask + broadcast + up

/etc/hostname.ce1
group ipmp0 standby up

# ifconfig -a
ce0: flags=1000843 mtu 1500 index 4
inet 192.168.10.10 netmask ffffff00 broadcast 192.168.10.255
groupname ipmp0
ether 0:3:ba:93:90:fc
ce0:1: flags=1000843 mtu 1500 index 4
inet 192.168.10.11 netmask ffffff00 broadcast 192.168.10.255
ce0:2: flags=1000843 mtu 1500 index 4
inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255
ce1: flags=69000842 mtu 0 index 5
inet 0.0.0.0 netmask 0
groupname ipmp0
ether 0:3:ba:93:91:35
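
Once a group is configured, failover can be tested without pulling cables by detaching and re-attaching an interface with if_mpadm (ce0 is just the example interface used above):

# if_mpadm -d ce0    (detach - addresses fail over to the other interface in the group)
# if_mpadm -r ce0    (re-attach - addresses fail back)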

Friday, May 08, 2009

Enabling rsh in Solaris 10

If the server has been hardened with SUNWjass, then these steps are required:

Edit /etc/default/login and ensure root can login to more than the console device.
Edit /etc/pam.conf and enable rsh and rlogin for rhosts authentication (sample entries below).
Edit /etc/hosts.allow and enable in.rshd and in.rlogind for access to other network devices.
Enable rsh by executing svcadm enable svc:/network/shell:default.
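
For reference, the rhosts entries in /etc/pam.conf are normally of this form (check them against your own pam.conf rather than pasting blindly):

rlogin  auth sufficient  pam_rhosts_auth.so.1
rsh     auth sufficient  pam_rhosts_auth.so.1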

Test from another host with: rsh <hostname> date

Friday, April 24, 2009

OBP commands

Use the following for setting boot parameters:

ok setenv boot-device disk disk1
ok setenv auto-boot? false/true

Use the following for debugging hardware (after putting the keyswitch into the diagnostics position):

ok setenv diag-level max
ok reset-all
ok probe-scsi-all
ok show-post-results

Wednesday, April 22, 2009

Ksh PS1 prompt

Set the below in .profile (and keep a list of aliases in .kshrc):

PS1='
('`uname -n`')'`/usr/ucb/whoami`':$PWD
$ '

Using parfiles for Oracle export

Always use a parfile for exporting from Oracle; a sample is below:


userid=system/*******
full=y
file=/u01/dbbackupdmp/exports/stage_exp.dmp
log=/u01/dbbackupdmp/exports/stage_exp.log
direct=y
recordlength=65535
statistics=none
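
The export is then run by pointing exp at the parameter file (the .par path is only an assumed name for the file above):

$ exp parfile=/u01/dbbackupdmp/exports/stage_exp.par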

and for importing:

userid=system/b1gkn0b
file=/u01/dbbackupdmp/exports/stage_exp.dmp
log=/u01/dbbackupdmp/exports/stage_impnorows.log
recordlength=65535
statistics=none
ignore=y
rows=n
constraints=n
indexes=n
fromuser=AGRESSO, CSMIG, DMSA
touser=AGRESSO, CSMIG, DMSA

To successfully reimport, follow this procedure:

1. Objects only (ignore=y, rows=n, indexes=n, constraints=n)
2. Data only (ignore=y, rows=y, indexes=n, constraints=n)
3. Constraints and indexes only (ignore=y, rows=n, indexes=y, constraints=y)
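
Each pass is then just an imp run against a parfile with the rows/indexes/constraints switches set as above (the parfile name is an assumption):

$ imp parfile=/u01/dbbackupdmp/exports/stage_imp.par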

Enable IO multipathing

When adding third-party SAN devices, the file /kernel/drv/scsi_vhci.conf needs to be edited to add multipathing support, and then 'stmsboot -e' needs to be executed, which requires a reboot:
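
The edit is usually a device-type-scsi-options-list stanza; the vendor/product string below is only an illustration and must match your array's SCSI inquiry data exactly (including padding):

device-type-scsi-options-list =
"HITACHI OPEN-V", "symmetric-option";
symmetric-option = 0x1000000;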

System Administration Commands stmsboot(1M)

NAME
stmsboot - administration program for the Solaris I/O mul-
tipathing feature

SYNOPSIS
/usr/sbin/stmsboot
[-d | -e | -u | -L | -l controller_number]

DESCRIPTION
The Solaris I/O multipathing feature is a multipathing solu-
tion for storage devices that is part of the Solaris operat-
ing environment. This feature was formerly known as Sun
StorEdge Traffic Manager (STMS) or MPxIO.

The stmsboot program is an administrative command to manage
enumeration of fibre channel devices under Solaris I/O mul-
tipathing. Solaris I/O multipathing-enabled devices are
enumerated under scsi_vhci(7D), providing multipathing capa-
bilities. Solaris I/O multipathing-disabled devices are
enumerated under the physical controller.

In the /dev and /devices trees, Solaris I/O multipathing-
enabled devices receive new names that indicate that they
are under Solaris I/O multipathing control. This means a
device will have a different name from its original name
(following installation) when it is under Solaris I/O mul-
tipathing control. The stmsboot command automatically
updates /etc/vfstab and dump configuration to reflect the
device names changes when enabling or disabling Solaris I/O
multipathing. A reboot is required for changes to take
effect.

RMAN: Restore archivelogs

This is a quick method of restoring RMAN archivelog backups for incomplete database recovery. It is easiest to simply query the RMAN repository (here the target control file is used instead of a recovery catalog):

$ rman target /

Recovery Manager: Release 10.2.0.3.0 - Production on Wed Apr 22 14:02:17 2009

Copyright (c) 1982, 2005, Oracle. All rights reserved.

connected to target database: TSTINET (DBID=1789357865)

RMAN> ALLOCATE CHANNEL FOR MAINTENANCE DEVICE TYPE 'sbt_tape' parms 'ENV=(TDPO_OPTFILE=/opt/tivoli/tsm/client/oracle/bin/tdpo.opt)';

using target database control file instead of recovery catalog
allocated channel: ORA_MAINT_SBT_TAPE_1
channel ORA_MAINT_SBT_TAPE_1: sid=63 devtype=SBT_TAPE
channel ORA_MAINT_SBT_TAPE_1: Tivoli Data Protection for Oracle: version 5.2.0.0

You can now list the SCN range contained in the required archivelog (in this case sequence 648):

RMAN> list backup of archivelog sequence 648;


List of Backup Sets
===================

BS Key Size Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ ---------------
1597 76.75M SBT_TAPE 00:00:06 21-APR-09
BP Key: 1597 Status: AVAILABLE Compressed: NO Tag: TAG20090421T230314
Handle: TSTINET/TSTINET_al_t684802994_s1622_p1 Media: DISKPOOL

List of Archived Logs in backup set 1597
Thrd Seq Low SCN Low Time Next SCN Next Time
---- ------- ---------- --------- ---------- ---------
1 648 2324906766 20-APR-09 2324970620 21-APR-09

The archive log can now be restored:


RMAN> run
2> {
3> allocate channel ch1 type 'sbt_tape' parms
4> 'ENV=(TDPO_OPTFILE=/opt/tivoli/tsm/client/oracle/bin/tdpo.opt)';
5> restore archivelog from scn 2324906766 until scn 2324970620;
6> }

Renaming a database

1. Create a 'create controlfile' script with the command 'alter database backup controlfile to trace;'.
2. Move the database to the new sid in /u02/oradata.
3. Move the admin directory and all relevant $ORACLE_HOME/dbs files.
4. Edit the controlfile trace script so that it contains the following command (change REUSE to SET):

CREATE CONTROLFILE SET DATABASE "TSTOLTP" RESETLOGS ARCHIVELOG

5. Recreate the controlfile and simply run 'alter database open resetlogs;'.
6. Recreate the temporary tablespace.
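
A sketch of step 6 (the tempfile name and size are only placeholders for your environment):

CREATE TEMPORARY TABLESPACE TEMP
TEMPFILE '/u02/oradata/TSTOLTP/temp01.dbf' SIZE 500M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;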

Tuesday, March 31, 2009

Checking Solaris 10 NIC speeds

To check interface speeds on Solaris 10, use:

root@S1DB03 # dladm show-dev
bge0 link: up speed: 1000 Mbps duplex: full
bge1 link: up speed: 1000 Mbps duplex: full
nxge0 link: up speed: 1000 Mbps duplex: full
nxge1 link: unknown speed: 0 Mbps duplex: unknown
nxge2 link: unknown speed: 0 Mbps duplex: unknown
nxge3 link: up speed: 1000 Mbps duplex: full