Tuesday, October 27, 2015

Find out CPU usage for each session

The following query is useful for finding the CPU usage of each database session.

select nvl(ss.USERNAME,'ORACLE USER') username, se.SID, se.VALUE cpu_usage
from v$session ss, v$sesstat se, v$statname sn
where se.STATISTIC# = sn.STATISTIC#
and sn.NAME like '%CPU used by this session%'
and se.SID = ss.SID
order by se.VALUE desc;

Sample Output:

USERNAME        SID  CPU_USAGE
--------------- ---- ---------
ORACLE USER      699      1450
SYS              813       539
ORACLE USER      583       425
ORACLE USER        2       387
ORACLE USER      587       133
ORACLE USER      235         2

Here:

USERNAME - Name of the database user (background sessions show as 'ORACLE USER')
SID - Session identifier
CPU_USAGE - CPU time used by the session, in centiseconds (divide by 100 to get CPU seconds)
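If you prefer the output directly in CPU seconds, a minor variation of the same query can do the division for you (a sketch; same views and joins as above):

```sql
-- VALUE is reported in centiseconds, so divide by 100 for CPU seconds.
select nvl(ss.username, 'ORACLE USER') username,
       se.sid,
       round(se.value / 100, 2) cpu_seconds
from   v$session ss, v$sesstat se, v$statname sn
where  se.statistic# = sn.statistic#
and    sn.name = 'CPU used by this session'
and    se.sid = ss.sid
order  by se.value desc;
```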

Thanks,
Chowdari

Tuesday, October 20, 2015

Overview of ADOP Phase=Prepare

Oracle E-Business Suite Release 12.2 introduces Online Patching (OP), a new feature that greatly reduces the downtime that was needed in previous releases for application of Release Update Packs (RUPs), Critical Patch Updates (CPUs), and other patches and bug fixes of various kinds.

In Release 12.2, all patching operations are carried out while the applications are in use and users are online. Patching is performed using the new adop (AD Online Patching) utility.

ADOP – Regular Patching Cycle:

The online patching cycle consists of a number of phases:

 1) Prepare - Creates a new patch edition in the database.

 2) Apply - Executes patch drivers to update the patch edition.

 3) Finalize - Performs the final operations that can be executed while the application is still online:
    a) Compiles invalid objects.
    b) Generates derived objects.

 4) Cutover - Promotes the patch edition of the database to be the new run edition, then restarts the application tier services.

 5) Cleanup - Deletes obsolete code and seed data to recover space.
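As a quick reference, one complete online patching cycle is driven by the adop utility in this order (run as the applications OS user with the run file system environment sourced; the patch number below is a placeholder, and apply options depend on the patch README):

```shell
adop phase=prepare
adop phase=apply patches=123456
adop phase=finalize
adop phase=cutover      # brief downtime: application tier services restart here
adop phase=cleanup
```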

Overview of adop phase=prepare:

During the prepare phase, adop performs the following steps:

1) Checks whether to perform a cleanup, which will be needed if the user failed to invoke cleanup after the cutover phase of a previous online patching cycle.

2) Validates system configuration to ensure that the system is ready to start an online patching cycle.

3) Performs sanity checks to see if the database is prepared for online patching:

   a) Checks if the database user is edition-enabled. If not, adop immediately exits with an error.

   b) Checks to see if the patch service (ebs_patch) has been created. ADOP requires that a special database service exists for the purpose of connecting to the patch edition.
   
   c) Checks to see if logon trigger exists and is enabled. If the logon trigger is missing or the patch service (ebs_patch) has not been created, adop will automatically try to fix the issue so that it can proceed. If it cannot fix the issue, it will exit with an error message.

   d) Checks the integrity of the database data dictionary. If any corruption is found, adop exits with an error.
 
   e) Checks that enough free space is available in the database (the SYSTEM tablespace should have a minimum of 25 GB free and the APPS_TS_SEED tablespace a minimum of 5 GB free).
 
   f) Checks that the E-Business Suite Technology Codelevel Checker (ETCC) has been run, to verify that all required patches have been applied to the database.
 
4) Submits the ADZDPATCH concurrent request (if it is not already submitted). This request prevents incompatible concurrent programs from starting during the patching cycle; instead they are queued and picked up after the cycle completes. If an incompatible program is already running, ADZDPATCH waits until it finishes.

 5) The next stage depends on whether the concurrent managers are running:

       i) If the concurrent managers are all down, the prepare phase continues, with ADZDPATCH entering a status of pending (with the highest priority) until the managers are started.

      ii) If the concurrent managers are partially up, but there is no manager defined that can run ADZDPATCH, then the prepare phase will exit with an error.

     iii) If the concurrent managers are up, and there is one defined that can run ADZDPATCH, processing will loop until ADZDPATCH changes status from pending to running. The prepare phase then continues.

   Note: ADZDPATCH is cancelled when the cutover phase is complete.
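To see whether ADZDPATCH is pending or running, you can query the standard FND concurrent request tables — a sketch (phase_code: P = Pending, R = Running, C = Completed):

```sql
select r.request_id, r.phase_code, r.status_code
from   fnd_concurrent_requests r, fnd_concurrent_programs p
where  r.concurrent_program_id = p.concurrent_program_id
and    r.program_application_id = p.application_id
and    p.concurrent_program_name = 'ADZDPATCH'
order  by r.request_id desc;
```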

  6) Invokes the TXK script $AD_TOP/patch/115/bin/txkADOPPreparePhaseSynchronize.pl to synchronize the patches that have been applied to the run APPL_TOP but not to the patch APPL_TOP. The script relies on the adop repository to identify those patches.

  7) Checks the database for the existence of a patch edition, and creates one if it does not find one.
 
  8) Calls the $AD_TOP/patch/115/bin/txkADOPPreparePhaseSanityCheck.pl script again to confirm that the database connection to the patch edition is working.
 
  9) If any of the above checks fail, adop will exit with an error.
 
10) The patch file system APPL_TOP is synchronized with the run file system APPL_TOP:
 
      i) During the apply phase of a patching cycle, the online patching tool records each patch applied to the run environment in its repository.

     ii) The synchronization process picks up these patches and re-applies them to the patch file system, performing only the file system actions (copy and generate actions).

11) Any patch previously applied using adop is re-applied to the patch file system during the prepare phase with the nodatabaseportion option. The patch_file_system_base column in ad_adop_session_patches is also updated.

12) If a patch was applied with nocopyportion and nodatabaseportion, the value of patch_file_system_base will be 'skippatch' and adop will skip synchronization for it, as there is nothing to synchronize.
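At any point during or after the prepare phase, the state of the current patching cycle can be reviewed from the command line:

```shell
# Show the status of each phase in the current online patching cycle.
adop -status

# More verbose, per-node detail.
adop -status -detail
```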

That's it. Hope this will help you. :)

Thanks,
Chowdari

Monday, October 19, 2015

Automated Syncup Script For Manual Standby Database

Sometimes we create a manual standby database for a Standard Edition database; in that case archivelogs have to be applied manually to keep the standby database in sync with the production database.
Here you can find an automated script for applying newly generated archivelogs to the standby database.

Note: To copy archivelogs from the production server to the standby server, passwordless SSH connectivity needs to be configured between the two servers.

Find below the list of scripts that need to be copied:

1) main_db_synch_300.sh - This main script calls the scripts listed below.

  a) db_synch_301.sh
  b) db_synch_302.sh
  c) db_synch_303.sh
  d) getlog.sql
  e) getlog_prod.sql
  f) getlog_standby.sql
  g) getname_prod.sql

Find below the detailed scripts:

1) main_db_synch_300.sh - This is the main script to call.

################### Begin Script main_db_synch_300.sh ##################
## Author       : Chowdari Mathukumilli
## Description  : This automated script is useful for syncing up a
##                standby database with the production database, even
##                when the database is Standard Edition.
##                The standby database must be mounted in standby mode.
## Call Syntax  : Schedule the line below in the standby server's crontab.
##                Based on your requirement you can schedule it every
##                15, 30 or 60 minutes etc. TNS entries for both the
##                production and standby databases must also exist on
##                the standby server.
##             00 * * * * /u01/app/oracle/db_scripts/main_db_synch_300.sh
## Script Changes: In the script below you need to change
##             ORACLE_HOME -- change it to your DB home location
##             SCR_HOME    -- change it to your sync-up scripts location
##             ORACLE_SID  -- change it to your DB SID name
##             <PASSWORD>  -- change it to your sys password (primary or standby)
##             hostname    -- change it to your primary database server name
## Total files : 8
##
################################################################
#!/bin/sh
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
export SCR_HOME=/u01/app/oracle/db_scripts
export PATH=$ORACLE_HOME/bin:$PATH
export DDATE=`date +%Y-%m-%d`
export DDATE=`date +%Y-%m-%d`

echo "*************** PROD_DB ***************"
export ORACLE_SID=PROD_DB

echo "Performing switch logfile in Production database"
$SCR_HOME/db_synch_301.sh

echo "Finding current archivelog in PROD_DB production DB"
export FIND_CUR_ARCH_PROD=`sqlplus -s sys/<PASSWORD>@PROD_DB as sysdba @$SCR_HOME/getlog_prod.sql | sed '/^$/d'`
echo $FIND_CUR_ARCH_PROD

echo "Finding last applied archivelog on the standby side"
export LAST_APPL_ARCH_STAND=`sqlplus -s sys/<PASSWORD>@STANDBY_DB as sysdba @$SCR_HOME/getlog_standby.sql`
echo $LAST_APPL_ARCH_STAND

export DR_ARCH_LOCATION=/u01/oracle/STANDBY_DB/FRA/ARCH

# If the standby already has the latest sequence, there is nothing to do.
if [ "$LAST_APPL_ARCH_STAND" -ge "$FIND_CUR_ARCH_PROD" ]; then
   echo "Nothing to do - standby database is already in sync with production: $LAST_APPL_ARCH_STAND = $FIND_CUR_ARCH_PROD"
   exit 0
fi

# Copy every sequence generated after the last one applied on the standby.
for (( i = $LAST_APPL_ARCH_STAND + 1; i <= $FIND_CUR_ARCH_PROD; i++ ))
do
   echo "Finding the file name of newly generated archivelog sequence $i on the primary side"
   export NEW_GENERATED_ARCH_PROD=`sqlplus -s sys/<PASSWORD>@PROD_DB as sysdba @$SCR_HOME/getname_prod.sql $i | tail -2 | head -1`
   echo $NEW_GENERATED_ARCH_PROD
   echo "Copying archive sequence $i from prod to the standby side"
   scp -r oracle@hostname:$NEW_GENERATED_ARCH_PROD $DR_ARCH_LOCATION
done
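The gap-detection logic above can be sketched in isolation with mocked sequence numbers, so the control flow can be verified without a database. Here `fetch_archive` is a hypothetical stand-in for the sqlplus lookup and scp copy, and the loop starts one past the last applied sequence:

```shell
#!/bin/sh
# Mocked sequence numbers standing in for the sqlplus lookups.
LAST_APPL_ARCH_STAND=100   # last sequence applied on the standby
FIND_CUR_ARCH_PROD=103     # current sequence on the primary

# Hypothetical stand-in for looking up and copying one archivelog.
fetch_archive() {
    echo "fetching archivelog sequence $1"
}

if [ "$LAST_APPL_ARCH_STAND" -ge "$FIND_CUR_ARCH_PROD" ]; then
    echo "Standby already in sync: $LAST_APPL_ARCH_STAND = $FIND_CUR_ARCH_PROD"
else
    i=$((LAST_APPL_ARCH_STAND + 1))
    while [ "$i" -le "$FIND_CUR_ARCH_PROD" ]; do
        fetch_archive "$i"       # copies sequences 101, 102, 103
        i=$((i + 1))
    done
fi
```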

echo "Backup archivelogs not backed up in prod"
$SCR_HOME/db_synch_302.sh

echo "Applying archives in backup side"
$SCR_HOME/db_synch_303.sh

sleep 50

#Find latest applied archivelog in Standby Database
export LATEST_APPLIED_ARCH_STAND=`sqlplus -s sys/<PASSWORD>@STANDBY_DB as sysdba @$SCR_HOME/getlog.sql`
echo "Applied arch : $LATEST_APPLIED_ARCH_STAND"

###################### END SCRIPT - main_db_synch_300.sh #############

2) db_synch_301.sh - This script performs a switch logfile in the production database.

###################Begin Script - db_synch_301.sh #######################
## Author       : Chowdari Mathukumilli
## Description  : This file is for switch logfile
##                      
######################################################################
sqlplus /nolog << EOF
connect sys/<PASSWORD>@PROD_DB as sysdba
alter system switch logfile;
disc;
quit;
EOF
###################### END SCRIPT - db_synch_301.sh ##################

3) getlog_prod.sql - This SQL script finds the current archivelog sequence on the production side.

###################Begin Script - getlog_prod.sql #########################
## Author       : Chowdari Mathukumilli
## Description  : This sql script is for finding current archivelog in prod
##                      
######################################################################
set heading off;
select lhmax "Current Seq running/applied :"
from (select thread# thrd, max(sequence#) almax
      from v$archived_log
      where resetlogs_change# = (select resetlogs_change# from v$database)
      group by thread#) al,
     (select thread# thrd, max(sequence#) lhmax
      from v$log_history
      where first_time = (select max(first_time) from v$log_history)
      group by thread#) lh
where al.thrd = lh.thrd;
exit;
###################### END SCRIPT - getlog_prod.sql## #####################

4) getlog_standby.sql - This SQL script finds the last applied archivelog on the standby side.

###################Begin Script - getlog_standby.sql ########################
## Author       : Chowdari Mathukumilli
## Description  : This sql script finds the last applied archivelog on the standby side
##                      
#####################################################################
set heading off;
select max(sequence#) from v$archived_log where applied='YES';
exit;
###################### END SCRIPT - getlog_standby.sql## ###############

5) getname_prod.sql - This SQL script finds the file name of a newly generated archivelog on the production side.

###################Begin Script - getname_prod.sql #######################
## Author       : Chowdari Mathukumilli
##                      
#######################################################################
set heading off;
select name from v$archived_log where sequence# = '&&1';
exit;
###################### END SCRIPT - getname_prod.sql## ##################

6) db_synch_302.sh - Backs up the archivelogs on the production side.

###################Begin Script - db_synch_302.sh ##########################
## Author       : Chowdari Mathukumilli
##                      
########################################################################
rman target sys/<PASSWORD>@PROD_DB << EOF
crosscheck archivelog all;
backup archivelog all not backed up;
quit;
EOF
###################### END SCRIPT - db_synch_302.sh ####################

7) db_synch_303.sh - Catalogs the copied archives and applies them on the standby side.

###################Begin Script - db_synch_303.sh ###########################
## Author       : Chowdari Mathukumilli
##                      
#########################################################################
rman target / << EOF
catalog start with '/u01/oracle/STANDBY_DB/FRA/ARCH';
YES
recover database;
DELETE noprompt COPY OF ARCHIVELOG ALL COMPLETED BEFORE 'SYSDATE-5';
quit;
EOF
###################### END SCRIPT - db_synch_303.sh ####################

Note: This script (main_db_synch_300.sh) should be scheduled on the standby server, and TNS entries for both the production and standby databases need to be added on the standby server.

That's it.. If you face any problems with this script please contact me :-).

Thanks,
Chowdari

Friday, October 16, 2015

Setup Manual Standby Database

As everybody knows, the Oracle Data Guard high availability feature is available with Enterprise Edition only. But sometimes we need a temporary high availability solution for Oracle Standard Edition databases. So here you can find detailed steps for setting up a manual standby database.

Follow below steps to setup manual standby database:

1) Primary database should be in Archivelog mode.

   Follow this post to enable archivelog mode in your database.
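For reference, archivelog mode can be enabled with the following SQL*Plus commands (a sketch; run as SYSDBA, and note that a clean restart of the database is required):

```sql
shutdown immediate;
startup mount;
alter database archivelog;
alter database open;
archive log list;   -- confirm the log mode is now "Archive Mode"
```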
 
2) Create a parameter file on the primary side.

   SQL> Create pfile from spfile;

3) Take a backup of your primary database, and also take a standby controlfile backup.
 
   RMAN> backup database plus archivelog;
   RMAN> copy current controlfile for standby to '/u01/app/oracle/bkp/stndby_ctrl01.ctl';
 
4) Copy the pfile and backup files to the standby server.

   Example: scp -r * oracle@hostname:<<backup_location>>

5) On the standby server, install the same version of Oracle binaries as production and create the same directory structure as the production database.

6) Copy the pfile to the $ORACLE_HOME/dbs location and check that the paths it references exist on the standby server.

7) On the standby side, follow the steps below to restore and recover the database.

   SQL> startup nomount;
   
   RMAN> restore controlfile from '/u01/app/oracle/bkp/stndby_ctrl01.ctl';
   
   SQL> ALTER DATABASE MOUNT STANDBY DATABASE;
   
   RMAN> catalog start with '/u01/app/oracle/bkp';
   
   RMAN> restore database;
   
   RMAN> recover database;
 
8) After the database is restored and recovered, put the standby database in managed recovery mode.

   SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
 
If you want to open the manually created standby database, follow the steps below:

1) Activate the standby database and open it.

   SQL> alter database activate standby database;
   SQL> alter database open;
 
2) Create a temporary tablespace and make it the default.

   SQL> CREATE TEMPORARY TABLESPACE <<TEMP_TABLESPACE_NAME>> TEMPFILE '<<DATAFILE_PATH>>' SIZE 1024M;
   SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE <<TEMP_TABLESPACE_NAME>>;
 
Note: Using DBVisit, you can configure a standby database like Data Guard in Standard Edition, but a separate license is required to use the DBVisit product.

That's it.. Hope this will help you :)

Thanks,
Chowdari

Wednesday, October 14, 2015

Useful EXPDP Commands

Here you can find some useful EXPDP commands:

1) Export FULL database:

    expdp system/manager dumpfile=full.dmp directory=DATA_PUMP_DIR full=y logfile=full.log

2) Export database with multiple dump files:

     When the database is terabytes in size, a single dump file may exceed the operating system's file size limit, and the export will fail. In those situations you can create multiple dump files with the following command.

     expdp  system/manager FULL=Y DIRECTORY=DATA_PUMP_DIR DUMPFILE=full%U.dmp FILESIZE=5G  LOGFILE=myfullexp.log JOB_NAME=myfullJob
   
     Note: This will create multiple dump files named full01.dmp, full02.dmp, full03.dmp and so on. The FILESIZE parameter specifies how large each dump file should be.
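If the export needs to finish faster, the PARALLEL parameter can be combined with the %U substitution variable so each worker writes to its own dump file piece (note that Data Pump parallelism requires Enterprise Edition; a sketch based on the command above):

```shell
expdp system/manager FULL=Y DIRECTORY=DATA_PUMP_DIR \
      DUMPFILE=full%U.dmp FILESIZE=5G PARALLEL=4 \
      LOGFILE=myfullexp.log JOB_NAME=myfullJob
```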

3) Export schema:

     The below command is useful for taking a backup of a single schema (SCOTT) using SCOTT's credentials:

     expdp scott/tiger directory=DATA_PUMP_DIR dumpfile=scottexp.dmp logfile=scottexp.log

4) Export multiple schemas:

     The below command takes a backup of multiple schemas (scott and demo).

     expdp system/manager dumpfile=scott_demo.dmp directory=DATA_PUMP_DIR schemas=(scott,demo) logfile=scottexp.log

5) Export Table: 

     The below command takes a backup of a single table (emp) in the scott schema.

     expdp scott/tiger dumpfile=scott_emp.dmp directory=DATA_PUMP_DIR tables=(emp) logfile=scottexp.log

6) Export multiple tables:

    The below command takes a backup of multiple tables (emp, dept) in the scott schema.

    expdp scott/tiger dumpfile=scott_emp_dept.dmp directory=DATA_PUMP_DIR tables=(emp,dept) logfile=scottexp.log

7) Export Tablespaces:

  The below command takes a backup of a single tablespace (users).
 
    expdp system/manager dumpfile=users_ts.dmp directory=DATA_PUMP_DIR tablespaces=(users) logfile=users_ts.log

8) Export multiple tablespaces:

     The below command takes a backup of multiple tablespaces (users, hr).

    expdp system/manager dumpfile=users_ts.dmp directory=DATA_PUMP_DIR tablespaces=(users, hr) logfile=users_ts.log

9) Export table with data only:

     The below command exports a single table's data only (rows, no metadata).
 
     expdp scott/tiger dumpfile=emp_data.dmp directory=DATA_PUMP_DIR tables=(emp) content=data_only logfile=emp_data.log

10) Export table with metadata only:

       The below command exports a single table's metadata only (DDL, no rows).
 
       expdp scott/tiger dumpfile=emp_metadata.sql directory=DATA_PUMP_DIR tables=(emp) content=metadata_only logfile=emp_metadata.log

11) Export Full database without logfile:

      expdp system/manager dumpfile=full.dmp directory=DATA_PUMP_DIR full=y nologfile=y

Estimate parameter: 

This parameter tells us how much space a new export job is going to consume. The space estimate is always in bytes. We can have the database compute the estimate using either the number of database blocks or optimizer statistics.

12) Export with parameter estimate (blocks and statistics)

       expdp scott/tiger dumpfile=scott_estimate.dmp directory=data_pump_dir  logfile=scott_estimate.log estimate=blocks

       expdp scott/tiger dumpfile=scott_estimate.dmp directory=data_pump_dir  logfile=scott_estimate.log  estimate=statistics

INCLUDE and EXCLUDE parameters:

The INCLUDE and EXCLUDE parameters can be used to limit the export/import to specific objects. When the INCLUDE parameter is used, only those objects specified by it will be included in the export. When the EXCLUDE parameter is used, all objects except those specified by it will be included in the export. The two parameters are mutually exclusive, so use the parameter that requires the least entries to give you the result you require. The basic syntax for both parameters is the same.

13) Export with EXCLUDE and INCLUDE examples:

       The below command excludes the table EMP from the SCOTT schema export.

       expdp system/manager dumpfile=scott_1.dmp directory=data_pump_dir logfile=scott_1.log schemas=scott exclude=table:"in('EMP')"

       The below command excludes procedures, triggers, functions, sequences, indexes, and the table EMP from the SCOTT schema export.

expdp system/manager dumpfile=scott_2.dmp directory=data_pump_dir logfile=scott_2.log schemas=scott exclude=procedure,trigger,function,sequence,index,table:"in('EMP')"

The below command excludes tables whose names start with T from the SCOTT schema export.

expdp scott/tiger directory=data_pump_dir dumpfile=scott_schema.dmp logfile=scott_3.log schemas=scott exclude=table:"like'T%'"

The below command includes only tables whose names start with S from the SCOTT schema.

expdp scott/tiger directory=data_pump_dir dumpfile=scott_schema1.dmp logfile=scott_4.log schemas=scott include=table:"like'S%'"

14) Export with QUERY Option: Predicate clause used to export a subset of a table.

       The below command is useful for exporting only particular records using the QUERY option.

expdp scott/tiger QUERY=emp:'"WHERE deptno = 10 AND sal > 10000"' DIRECTORY=data_pump_dir DUMPFILE=exp1.dmp logfile=scott_5.log
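Shell quoting around QUERY, INCLUDE, and EXCLUDE is error-prone; putting these options in a parameter file avoids shell escaping entirely. A sketch (the parfile name here is arbitrary):

```shell
# Inside a parfile, no shell escaping is needed.
cat > scott_query.par <<'EOF'
DIRECTORY=data_pump_dir
DUMPFILE=exp1.dmp
LOGFILE=scott_5.log
QUERY=emp:"WHERE deptno = 10 AND sal > 10000"
EOF

expdp scott/tiger parfile=scott_query.par
```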
 
Go through my previous articles related to Datapump:

impdp - ORA-31640 ORA-31693 ORA-19505 ORA-27037
ORA-31685 Error while import using IMPDP
Script: Shell Script to export Full DB or Single or Multiple Schemas
Export Logical Database Backup Dumps to ASM Disk

Hope this article will help you.  :-)

Thanks,
Chowdari
