How to kill an IMPDP/EXPDP job:

Suppose an import was started in the background like this:

nohup impdp \"/ as sysdba\" directory=PUMP dumpfile=FULL_EXPORT.dmp logfile=FULL_IMPORT.log &

Check the running Data Pump jobs from the database:

sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Thu Mar 21 10:01:05 2019

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> select JOB_NAME,state from dba_datapump_jobs;

JOB_NAME STATE
------------------------------ ------------------------------
SYS_IMPORT_FULL_02 EXECUTING
SYS_IMPORT_SCHEMA_02 NOT RUNNING
SYS_IMPORT_SCHEMA_01 NOT RUNNING
SYS_IMPORT_FULL_01 NOT RUNNING
SYS_IMPORT_SCHEMA_03 NOT RUNNING

exit

You can see above that the import is running under the job name SYS_IMPORT_FULL_02, in the EXECUTING state.

We can attach to this job as follows:

impdp \"/ as sysdba\" attach=SYS_IMPORT_FULL_02

Check the status of the import job:

IMPORT> status


If you want to kill this import job:


IMPORT> kill_job
Are you sure you wish to stop this job ([yes]/no): yes

Note that KILL_JOB detaches all sessions and drops the job's master table, so the job cannot be restarted. If you only want to pause the job and resume it later, use STOP_JOB instead, then attach again and issue START_JOB.
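The NOT RUNNING jobs in the earlier listing are usually stopped or orphaned jobs whose master tables still exist. A sketch of how to find and clean them up, assuming the usual rule that a job's master table has the same name as the job, in the owning schema (the table name below is an example from the listing; always verify before dropping anything):

```sql
-- List Data Pump jobs that are not running but still have a master table.
SELECT owner_name, job_name, state
FROM   dba_datapump_jobs
WHERE  state = 'NOT RUNNING';

-- The master table has the same name as the job, in the owning schema.
-- Dropping it removes the orphaned job definition (example name, verify first).
DROP TABLE SYS.SYS_IMPORT_SCHEMA_01 PURGE;
```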




***************************************************************************


Datapump architecture:


The expdp/impdp clients are thin front ends to the Data Pump API (DBMS_DATAPUMP). The API in turn uses DBMS_METADATA for object definitions and moves table data using direct path access or external tables.
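Because the command-line clients just drive DBMS_DATAPUMP, the same export can be started from PL/SQL. A minimal sketch, assuming the DATAPUMP directory object from this post and an HR schema (the job and file names here are illustrative):

```sql
DECLARE
  h NUMBER;
BEGIN
  -- Open a schema-mode export job (job name is illustrative).
  h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT',
                          job_mode  => 'SCHEMA',
                          job_name  => 'HR_API_EXP');

  -- Dump file and log file in the DATAPUMP directory object.
  DBMS_DATAPUMP.ADD_FILE(handle => h, filename => 'HR_api.dmp',
                         directory => 'DATAPUMP');
  DBMS_DATAPUMP.ADD_FILE(handle => h, filename => 'HR_api.log',
                         directory => 'DATAPUMP',
                         filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);

  -- Restrict the job to the HR schema.
  DBMS_DATAPUMP.METADATA_FILTER(handle => h, name => 'SCHEMA_EXPR',
                                value  => '= ''HR''');

  DBMS_DATAPUMP.START_JOB(h);
  DBMS_DATAPUMP.DETACH(h);
END;
/
```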


Import can also be done directly over a network link, without an intermediate export.
In other words, a database link is created from the test database to the production database, and impdp pulls the data
straight into the test database without a dump file ever being written on production.

DB LINK: 

CREATE PUBLIC DATABASE LINK DB_LINK_NAME
CONNECT TO username
IDENTIFIED BY password
USING 'PRODUCTION_IP:PORT/SID';

We can import the HR schema directly into the test database, without a prior export, like the following:

impdp \"/ as sysdba\" SCHEMAS=HR DIRECTORY=DATAPUMP LOGFILE=HR.log table_exists_action=replace NETWORK_LINK=DB_LINK_NAME

COMPRESSED EXPORT:

expdp \"/ as sysdba\" TABLES=MEHMET.DEVECI directory=DATAPUMP dumpfile=mehmet_deveci.dmp logfile=mehmet_deveci.log compression=all

ENCRYPTED EXPORT (encryption=all requires either the Advanced Security Option or the ENCRYPTION_PASSWORD parameter):

expdp \"/ as sysdba\" SCHEMAS=HR DIRECTORY=DATAPUMP DUMPFILE=HR.dmp LOGFILE=HR.log compression=all encryption=all

REMAP_SCHEMA: import the MEHMET schema as SALIH.

impdp \"/ as sysdba\" directory=DATAPUMP dumpfile=mehmet_deveci.dmp logfile=mehmet_deveci.log REMAP_SCHEMA=MEHMET:SALIH

EXPORT MULTIPLE SCHEMAS:

expdp \"/ as sysdba\" SCHEMAS=MEHMET,SALIH,DEVECI DIRECTORY=DATAPUMP DUMPFILE=mehmet_salih_deveci.dmp LOGFILE=mehmet_salih_deveci.log compression=all encryption=all

TABLE EXPORT:

expdp \"/ as sysdba\" tables=HR.personel DIRECTORY=DATAPUMP DUMPFILE=personel.dmp LOGFILE=personel.log

TABLE IMPORT:

impdp \"/ as sysdba\" tables=HR.personel DIRECTORY=DATAPUMP DUMPFILE=personel.dmp LOGFILE=personel.log table_exists_action=append

EXPORT AND IMPORT MULTIPLE TABLES:

expdp \"/ as sysdba\" tables=HR.personel,HR.sales DIRECTORY=DATAPUMP DUMPFILE=personel_sales.dmp LOGFILE=personel_sales.log

impdp \"/ as sysdba\" tables=HR.personel,HR.sales DIRECTORY=DATAPUMP DUMPFILE=personel_sales.dmp LOGFILE=personel_sales.log table_exists_action=replace

FULL DB EXPORT:

expdp \"/ as sysdba\" DIRECTORY=DATAPUMP DUMPFILE=FULL_EXPORT.dmp LOGFILE=FULL_EXPORT.log FULL=Y

ADD PARALLELISM: the %U substitution variable in the dump file name lets Data Pump create a separate dump file for each parallel worker.

expdp \"/ as sysdba\" tables=HR.personel DIRECTORY=DATAPUMP DUMPFILE=personel%U.dmp LOGFILE=personel.log PARALLEL=8

REMAP_TABLESPACE:

If the schema's segments are kept in the DATA01 tablespace in the source database, they will be placed in the existing DATA tablespace in the target database.

impdp \"/ as sysdba\" SCHEMAS=HR DIRECTORY=DATAPUMP DUMPFILE=HR.dmp LOGFILE=HR.log table_exists_action=replace REMAP_TABLESPACE=DATA01:DATA

REMAP_TABLE:

The MEHMET.DEVECI table will be named MEHMET.SALIH after the import.

impdp \"/ as sysdba\" TABLES=MEHMET.DEVECI directory=DATAPUMP dumpfile=mehmet_deveci.dmp logfile=mehmet_deveci.log REMAP_TABLE=MEHMET.DEVECI:SALIH


















A user must have the EXP_FULL_DATABASE role to export objects outside their own schema with this command.

A user must have the IMP_FULL_DATABASE role to import such export files with this command.
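Granting these roles is a one-liner; a sketch, assuming a hypothetical HR user:

```sql
-- HR is an example user; grant only to accounts that really need
-- full database export/import capability.
GRANT EXP_FULL_DATABASE, IMP_FULL_DATABASE TO hr;
```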


If you do not create a directory object for Data Pump,
export files are written by default to the operating-system path pointed to by the DATA_PUMP_DIR directory object.
We can create a separate directory for Data Pump as follows.


create directory DATAPUMP as '/backup';

The user performing the export or import must have read and write privileges on the related directory (SYS and SYSTEM already have them). We should grant them like the following:

grant read, write on directory DATAPUMP to hr;

SCHEMAS: Specifies the schema or schemas to be exported or imported.

DIRECTORY: The directory object that maps to the operating-system path where the physical dump file is kept.
The file or files created during the export, and the log file, are written under this path.

DUMPFILE: The name of the physical dump file created by the export, or read by the import, e.g. HR.dmp.

LOGFILE: Logs produced during the operation are written to the file specified by the LOGFILE parameter.


TABLE_EXISTS_ACTION: What should Oracle do if a table being imported already exists in the target database:
replace the existing table, or skip it? This parameter can take the following values.

TABLE_EXISTS_ACTION=[SKIP | APPEND | TRUNCATE | REPLACE]

SKIP is the default value. If SKIP is used, the existing table is left untouched and no rows are loaded into it.

APPEND loads rows from the export files and leaves the existing rows in the target unchanged.

TRUNCATE deletes the existing rows in the target table and then loads the rows from the export.

REPLACE drops the existing table in the target, then re-creates it and loads it from the export.











