Starting with Oracle 10g, Oracle introduced an enhanced version of the EXPORT and IMPORT utilities known as Data Pump. Data Pump is similar to the original EXPORT and IMPORT utilities but has many advantages. Some of the advantages are:
• Most Data Pump export and import operations occur on the Oracle database server, i.e. all the dump files are created on the server even if you run the Data Pump utility from a client machine. This results in increased performance because data is not transferred over the network.
• You can stop and restart export and import jobs. This is particularly useful if you have started an export or import job and after some time you need to do other urgent work.
• The ability to detach from and reattach to long-running jobs without affecting the job itself. This allows DBAs and other operations personnel to monitor jobs from multiple locations.
• The ability to estimate how much space an export job would consume, without actually performing the export (see the sketch after this list).
• Support for an interactive-command mode that allows monitoring of and interaction with ongoing jobs.
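For example, a hedged sketch of the space-estimate feature mentioned above (it assumes the data_pump_dir directory object created in the next section already exists); with ESTIMATE_ONLY=YES no dump file is written and the estimated size is reported in the log:
$expdp scott/tiger FULL=y DIRECTORY=data_pump_dir ESTIMATE_ONLY=YES LOGFILE=estimate.log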
Using Data Pump Export Utility
To use Data Pump, the DBA has to create a directory on the server machine and then create a directory object in the database that maps to the directory created in the file system.
The following example creates a directory in the file system, creates a directory object in the database, and grants privileges on the directory object to the SCOTT user.
$mkdir my_dump_dir
$sqlplus
Enter User: / as sysdba
SQL> create directory data_pump_dir as '/u01/oracle/my_dump_dir';
Now grant access on this directory object to the SCOTT user:
SQL> grant read, write on directory data_pump_dir to scott;
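As a quick check (a minimal sketch; run it as a privileged user), you can confirm that the directory object exists and points to the right path:
SQL> select directory_name, directory_path from dba_directories where directory_name = 'DATA_PUMP_DIR';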
Example of Exporting a Full Database
To export the full database, give the following command:
$expdp scott/tiger FULL=y DIRECTORY=data_pump_dir DUMPFILE=full.dmp LOGFILE=myfullexp.log JOB_NAME=myfullJob
The above command will export the full database and create the dump file full.dmp in the server directory /u01/oracle/my_dump_dir.
In some cases, where the database is terabytes in size, the above command is not feasible since the dump file size will be larger than the operating-system file size limit, and hence the export will fail. In this situation you can create multiple dump files by typing the following command:
$expdp scott/tiger FULL=y DIRECTORY=data_pump_dir DUMPFILE=full%U.dmp FILESIZE=5G LOGFILE=myfullexp.log JOB_NAME=myfullJob
This will create multiple dump files named full01.dmp, full02.dmp, full03.dmp and so on. The FILESIZE parameter specifies the maximum size of each dump file.
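The same wildcard can then be used on the import side; a hedged sketch (the PARALLEL value is illustrative) that lets Data Pump work on several dump files at once:
$impdp scott/tiger FULL=y DIRECTORY=data_pump_dir DUMPFILE=full%U.dmp PARALLEL=4 LOGFILE=myfullimp.log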
Example of Exporting a Schema
To export all the objects of SCOTT's schema you can run the following Data Pump export command:
$expdp scott/tiger DIRECTORY=data_pump_dir DUMPFILE=scott_schema.dmp SCHEMAS=SCOTT
You can omit SCHEMAS since the default mode of Data Pump export is schema mode.
If you want to export objects of multiple schemas you can specify the following command:
$expdp scott/tiger DIRECTORY=data_pump_dir DUMPFILE=scott_schema.dmp SCHEMAS=SCOTT,HR,ALI
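On the import side, a dump can be loaded into a different schema using REMAP_SCHEMA (a hedged sketch; the target user NEW_SCOTT is an illustrative name and the importing user needs the appropriate import privileges):
$impdp system/password DIRECTORY=data_pump_dir DUMPFILE=scott_schema.dmp SCHEMAS=SCOTT REMAP_SCHEMA=SCOTT:NEW_SCOTT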
Exporting Individual Tables using Data Pump Export
You can use the Data Pump Export utility to export individual tables. The following example shows the syntax to export tables:
$expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=tables.dmp TABLES=employees,jobs,departments
Exporting Tables located in a Tablespace
If you want to export the tables located in a particular tablespace you can type the following command:
$expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=tbs.dmp TABLESPACES=tbs_4,tbs_5,tbs_6
The above will export all the objects located in tbs_4, tbs_5 and tbs_6.
Excluding and Including Objects during Export
You can exclude objects while performing an export by using the EXCLUDE option of the Data Pump utility. For example, if you are exporting a schema and don't want to export tables whose names start with "A", you can type the following command:
$expdp scott/tiger DIRECTORY=data_pump_dir DUMPFILE=scott_schema.dmp SCHEMAS=SCOTT EXCLUDE=TABLE:"like 'A%'"
Then all tables in SCOTT's schema whose names start with "A" will not be exported.
Similarly, you can use the INCLUDE option to export only certain objects, like this:
$expdp scott/tiger DIRECTORY=data_pump_dir DUMPFILE=scott_schema.dmp SCHEMAS=SCOTT INCLUDE=TABLE:"like 'A%'"
This is the opposite of the EXCLUDE option, i.e. it will export only those tables of SCOTT's schema whose names start with "A".
Similarly, you can also exclude INDEXES, CONSTRAINTS, GRANTS, USER, SCHEMA and other object types; a sketch follows below.
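For instance, a hedged sketch of excluding whole object types from a schema export (INDEX and CONSTRAINT are standard Data Pump object types; the dump file name is illustrative):
$expdp scott/tiger DIRECTORY=data_pump_dir DUMPFILE=scott_noidx.dmp SCHEMAS=SCOTT EXCLUDE=INDEX EXCLUDE=CONSTRAINT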
Using Query to Filter Rows during Export
You can use the QUERY option to export only the required rows. For example, the following will export only those rows of the emp table whose dept_id is greater than 10 and whose sal is above 10000:
expdp hr/hr QUERY=emp:'"WHERE dept_id > 10 AND sal > 10000"' NOLOGFILE=y DIRECTORY=dpump_dir1 DUMPFILE=exp1.dmp
Suspending and Resuming Export Jobs (Attaching and Re-Attaching to the Jobs)
You can suspend running export jobs and later resume or kill them using Data Pump Export. You can start a job on one client machine, suspend it if other work comes up, and once that work is finished continue the job either from the same client where you stopped it or from another client machine.
For example, suppose a DBA starts a full database export by typing the following command on client machine CLNT1:
$expdp scott/tiger@mydb FULL=y DIRECTORY=data_pump_dir DUMPFILE=full.dmp LOGFILE=myfullexp.log JOB_NAME=myfullJob
After some time, the DBA wants to stop this job temporarily, so he presses CTRL+C to enter interactive mode and gets the Export> prompt, where he can type interactive commands.
To stop the export job he types the following command:
Export> STOP_JOB=IMMEDIATE
Are you sure you wish to stop this job ([y]/n): y
The job is placed in a stopped state and the client exits.
After finishing his other work, the DBA wants to resume the export job, but the client machine from which he originally started the job is not accessible. So he goes to another client machine and reattaches to the job by typing the following command:
$expdp scott/tiger@mydb ATTACH=myfulljob
After the job status is displayed, he can issue the CONTINUE_CLIENT command to resume logging mode and restart the myfulljob job.
Export> CONTINUE_CLIENT
A message is displayed that the job has been reopened, and processing status is output to the client.
Note: After reattaching to the job, a DBA can also kill it by typing KILL_JOB if he doesn't want to continue with the export job.
Advantages:
===> Better control over running jobs – it provides features like start, stop, and restart
===> Improved performance, because it is a server-side technology with a parallel-streams option
===> Using the parallel-streams option, Data Pump can back up large volumes of data quickly
===> Data Pump is 15-50% faster than the conventional export/import
===> It has the ability to estimate job times
===> Failed jobs can be restarted
===> Using the EXCLUDE/INCLUDE options we can perform fine-grained object selection
===> Backup jobs can be monitored
===> It has remapping capabilities
===> It supports export/import operations over the network; the NETWORK_LINK parameter initiates the export using a database link (see the sketch after this list)
===> Using the QUERY parameter the DBA can extract data from tables as with a SELECT
===> The CONTENT parameter gives flexibility over what to export/import, for example metadata only, data only, or both
===> It supports the full range of data types
===> It supports cross-platform compatibility
===> No need to specify a buffer size as in the conventional exp/imp
===> It has its own performance-tuning features
===> The V$SESSION_LONGOPS view can be used for time estimation of Data Pump jobs
===> It supports an interactive mode that allows the DBA to monitor or interact with ongoing jobs
===> Dump files can be compressed
===> Data can be encrypted
===> XML schemas and XMLType are supported by Data Pump
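As referenced in the list above, a minimal NETWORK_LINK sketch (the database link remote_db is an illustrative name and must already exist); the data is pulled over the link and the dump file is written locally:
$expdp scott/tiger DIRECTORY=data_pump_dir DUMPFILE=remote_scott.dmp SCHEMAS=SCOTT NETWORK_LINK=remote_db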
Disadvantages:
===> Export cannot be written directly to tape
===> The dump files can be imported only into Oracle 10g or above
===> It cannot be used with Unix pipes
Related Views:
DBA_DATAPUMP_JOBS
USER_DATAPUMP_JOBS
DBA_DIRECTORIES
DATABASE_EXPORT_OBJECTS
SCHEMA_EXPORT_OBJECTS
TABLE_EXPORT_OBJECTS
DBA_DATAPUMP_SESSIONS
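For example, a quick sketch of checking job states from these views (the listed columns are standard in DBA_DATAPUMP_JOBS):
SQL> select owner_name, job_name, operation, job_mode, state from dba_datapump_jobs;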
Difference between Conventional Path and Direct Path
Conventional path means that SQL INSERT statements are used to load data into tables. Direct path handles data much faster than conventional path; it achieves this performance gain by inserting data directly, bypassing the SQL command-processing layer and saving on data copies whenever possible.
This can mean that little or no redo is generated, so once your load is complete you should take a backup immediately in order to keep your data safe.
Let's elaborate on both:
--Conventional path Export
Conventional path Export uses the SQL SELECT statement to extract data from tables. Data is read from disk into the buffer cache, and rows are transferred to the evaluating buffer. The data, after passing expression evaluation, is transferred to the Export client, which then writes the data into the export file.
--Direct path Export
When using a direct path Export, the data is read from disk directly into the export session's program global area (PGA): the rows are transferred directly to the Export session's private buffer. This also means that the SQL command-processing layer (evaluation buffer) can be bypassed, because the data is already in the format that Export expects. As a result, unnecessary data conversion is avoided. The data is transferred to the Export client, which then writes the data into the export file.
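For illustration, a hedged sketch with the original export utility, where a direct path export is requested explicitly with DIRECT=Y (the table name emp follows the earlier examples):
$exp scott/tiger FILE=emp.dmp TABLES=emp DIRECT=Y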
VERSION:
With the Export Data Pump parameter VERSION, you can control the version of the dump file set and make the set compatible to be imported into a database with a lower compatibility level. Note that this does not mean the dump file set can be used with versions of Oracle Database prior to 10.1, because Export and Import Data Pump only work with Oracle Database 10g release 1 (10.1) or later.
The VERSION parameter can be useful in a downgrade situation, for example. Assume you have a 10.2 database and you export with VERSION=9.2. Only database objects and attributes that are compatible with 9.2 will be exported. You could then import the dump file into a 10.1 database whose compatibility is set to 9.2. A 10.1 database set to 9.2 compatibility could theoretically be downgraded to 9.2.
VERSION={COMPATIBLE | LATEST | version_string}
COMPATIBLE = The version of the metadata corresponds to the database compatibility level. Database compatibility must be set to 9.2 or higher. This is the default value.
LATEST = The version of the metadata corresponds to the database release version.
version_string = A specific database version (e.g. 11.1.0).
If database compatibility is 11.1.*, the Data Pump dump file set compatibility is 2.1.
If database compatibility is 10.2.*, the Data Pump dump file set compatibility is 1.1.
If database compatibility is 10.1.*, the Data Pump dump file set compatibility is 0.1.
Examples:
If you specify a VERSION for an Export Data Pump job that is older than the current database version, certain Export Data Pump features may be unavailable. For example, specifying VERSION=10.1 will cause an error if data compression is also specified for the job (e.g. COMPRESSION=ALL):
SQL> host expdp system/sys@delhi version=10.2 compression=all dumpfile=my.dmp
Export: Release 11.2.0.1.0 - Production on Mon Sep 26 12:28:39 2011
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-39005: inconsistent arguments
ORA-39055: The COMPRESSION feature is not supported in version 10.2.
Export from Oracle database version 11.2:
expdp hr/hr TABLES=hr.employees2 VERSION=10.2 DIRECTORY=data_pump_dir DUMPFILE=emp2.dmp LOGFILE=emp2.log
Import into Oracle database version 10.2:
impdp hr/hr TABLES=hr.employees2 DIRECTORY=data_pump_dir DUMPFILE=emp2.dmp LOGFILE=emp2_imp.log
External tables with Data Pump
External tables are largely used as a convenient way of moving data into and out of the database. They let you query data in a flat file as though the file were an Oracle table.
Oracle uses SQL*Loader functionality, through the ORACLE_LOADER access driver, to move data from a flat file into the database; it uses a Data Pump access driver (ORACLE_DATAPUMP) to move data out of the database into a file in an Oracle-proprietary format, and back into the database from files of that format.
Real-Life use of External Tables:
Suppose you receive a daily .csv file from a reporting department. Instead of writing a SQL*Loader script to import each day's .csv file into your database, you can simply create an external table and write an "insert ... select" SQL query to insert the data directly into your tables. Place the day's CSV file in the location specified in the external table definition, run the query, and you're done (an insert ... select sketch appears after the external table example below).
Let's see how it's done:
Choose or create an OS directory where you will put your .CSV (flat) file. In my case I have taken 'D:\et\'.
My CSV file, named "emp.CSV", will look like this:
001,Hutt,Jabba,896743856,jabba@thecompany.com,18
002,Simpson,Homer,382947382,homer@thecompany.com,20
003,Kent,Clark,082736194,superman@thecompany.com,5
004,Kid,Billy,928743627,billythkid@thecompany.com,9
005,Stranger,Perfect,389209831,nobody@thecompany.com,23
The next step is to create this directory in Oracle, and grant read/write access on it to the Oracle user who will be creating the external table:
SQL> create directory opump as 'D:\et\';
Directory created.
SQL> grant read,write on directory opump to gg;
Grant succeeded.
Now, create the external table corresponding to the .CSV file.
SQL> conn gg/gg
Connected.
SQL> create table xtern_empl_rpt
2 ( empl_id varchar2(3),
3 last_name varchar2(50),
4 first_name varchar2(50),
5 ssn varchar2(9),
6 email_addr varchar2(100),
7 years_of_service number(2,0)
8 )
9 organization external
10 ( default directory opump
11 access parameters
12 ( records delimited by newline
13 fields terminated by ','
14 )
15 location ('emp.csv')
16 );
Table created.
The external table is successfully created. Now let's check the data in Oracle:
SQL> sho user;
USER is "GG"
SQL> select * from xtern_empl_rpt ;
Note: If you add more records to the csv file (emp.csv), the Oracle table "xtern_empl_rpt" will reflect them automatically.
For example, I will add a new record to my csv file:
7,Vishu,DBA,123,itsmevishu82@gmail.com,25
Save the file, then query the Oracle table again to see the new record.
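As mentioned earlier, once the external table is in place, a plain insert ... select moves the data into a regular table. A minimal sketch (the target table empl_rpt is an illustrative regular table assumed to have matching columns):
SQL> insert into empl_rpt select * from xtern_empl_rpt;
SQL> commit;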
Unloading data into an external file using Data Pump
Oracle 10g lets you create a new external table from data in your database, which goes into a flat file pushed from the database using the ORACLE_DATAPUMP access driver. This flat file is in an Oracle-proprietary format that can be read by Data Pump. The syntax is similar to the CREATE TABLE ... ORGANIZATION EXTERNAL statement above, but simpler -- since you can't specify the data format, you can specify very few access parameters. The key difference is that you must specify the access driver, ORACLE_DATAPUMP, since the access driver defaults to ORACLE_LOADER.
Let's see how it's done.
First create a user called "mm" and grant the privileges as below.
SQL> create user mm identified by mm;
User created.
SQL> grant read,write on directory opump to mm;
Grant succeeded.
Now, connect to mm and create the dump file using an external table.
SQL> conn mm/mm
Connected.
SQL> create table import_empl_info
2 ( empl_id varchar2(3),
3 last_name varchar2(50),
4 first_name varchar2(50),
5 ssn varchar2(9),
6 birth_dt date
7 )
8 organization external
9 ( type oracle_datapump
10 default directory opump
11 location ('empl_info_rpt.dmp')
12 ) ;
Table created.
It will create the dump file called "empl_info_rpt.dmp", which can be moved and used in any other database, or in the same one.
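Note that to actually write rows from an existing table into the dump file, an ORACLE_DATAPUMP external table is normally created with an AS SELECT clause. A hedged sketch (the source table xtern_empl_rpt and the column list are illustrative and assume the creating user can select from that table):
SQL> create table export_empl_info
       organization external
       ( type oracle_datapump
         default directory opump
         location ('empl_info_exp.dmp')
       )
       as select empl_id, last_name, first_name, ssn from xtern_empl_rpt;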
IMPORT THE TABLE BACK IN AGAIN
We will now load the data into Oracle again from the dump created through the external table.
First drop the table "import_empl_info":
SQL> drop table import_empl_info;
Table dropped.
Now load the data again:
SQL> create table import_empl_info
2 ( empl_id varchar2(3),
3 last_name varchar2(50),
4 first_name varchar2(50),
5 ssn varchar2(9)
6 )
7 organization external
8 ( type oracle_datapump
9 default directory opump
10 location ('empl_info_rpt.dmp')
11 ) ;
We have now seen how external tables in 9i and 10g provide a convenient, seamless way to move data in and out of the database.
Restrictions imposed on external tables:
1.) External tables cannot be used in many of the ways regular Oracle tables can.
2.) You cannot perform any DML operations on external tables other than populating them at creation time.
3.) You can't create an index on an external table.
STATUS:
Specifies the frequency at which the job status display is updated.
If you supply an integer value, it specifies how frequently, in seconds, the job status should be displayed in logging mode.
DEMO:
SQL> host expdp system/sys dumpfile=EasdXPgjhDAT.dmp directory=ipump full=y status=10
Export: Release 11.2.0.1.0 - Production on Sat Sep 24 12:13:07 2011
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Job: SYS_EXPORT_FULL_05
  Operation: EXPORT
  Mode: FULL
  State: EXECUTING
  Bytes Processed: 0
  Current Parallelism: 1
  Job Error Count: 0
  Dump File: D:\EASDXPGJHDAT.DMP
    bytes written: 4,096
Worker 1 Status:
  State: UNDEFINED
Starting "SYSTEM"."SYS_EXPORT_FULL_05": system/******** dumpfile=EasdXPgjhDAT.dmp directory=ipump full=y status=10
Job: SYS_EXPORT_FULL_05
  Operation: EXPORT
  Mode: FULL
  State: EXECUTING
  Bytes Processed: 0
  Current Parallelism: 1
  Job Error Count: 0
  Dump File: D:\EASDXPGJHDAT.DMP
    bytes written: 4,096
Worker 1 Status:
  Process Name: DW00
  State: EXECUTING
Estimate in progress using BLOCKS method...
Job: SYS_EXPORT_FULL_05
  Operation: EXPORT
  Mode: FULL
  State: EXECUTING
  Bytes Processed: 0
  Current Parallelism: 1
  Job Error Count: 0
  Dump File: D:\EASDXPGJHDAT.DMP
    bytes written: 4,096
Worker 1 Status:
  Process Name: DW00
  State: EXECUTING
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
Job: SYS_EXPORT_FULL_05
Below are some enhancements done in Oracle 11g:
-Compression: the COMPRESSION parameter can now be used to compress both data and metadata with the value ALL:
expdp quest/quest DIRECTORY=quest_pump_dir DUMPFILE=quest_comp.dmp compression=ALL
-Data Pump Encryption Enhancements:
Different encryption modes are available during export, including Dual, Password, and Transparent. The mode that is used is set with the encryption_mode parameter.
Here is an example of using the new encryption parameters for a Data Pump export:
expdp quest/quest DIRECTORY=quest_pump_dir DUMPFILE=quest_comp.dmp encryption=all encryption_password=not_quest encryption_algorithm=AES128 encryption_mode=PASSWORD
-Data Pump Rename Table (or remap_table)
Sometimes it is desirable to rename a table during the import process. In Oracle 11g you can now do this with Oracle Data Pump. To rename a table during import, use the remap_table parameter to define the original schema and table name, and then indicate the new name for the table. Here is an example where the table QUEST.NAMES is renamed to COPY_NAMES:
impdp quest/quest DIRECTORY=quest_pump_dir DUMPFILE=quest.dmp tables=QUEST.NAMES remap_table=QUEST.NAMES:COPY_NAMES
-Data Pump and Partitioned Tables:
If you want to have some control over the partitioning of tables during an Oracle Data Pump import, you can use the partition_options parameter, which is a new impdp feature in Oracle 11g. The following options are available with the partition_options parameter:
None - Tables will be imported so that they look like those on the system on which the export was created.
Departition - Partitions will be created as individual tables rather than partitions of a partitioned table.
Merge - Causes all partitions to be merged into one unpartitioned table.
In this example we use the partition_options parameter to take all partitions of the NAMES table and combine them into one unpartitioned table on the destination side:
impdp quest/quest DIRECTORY=quest_pump_dir DUMPFILE=quest.dmp tables=QUEST.NAMES partition_options=merge
-Overwrite Dump Files
When using expdp in Oracle 11g you can now overwrite any dump file that might already exist by using the new reuse_dumpfiles parameter, as seen in this example:
expdp quest/quest DIRECTORY=quest_pump_dir DUMPFILE=quest.dmp tables=QUEST.NAMES reuse_dumpfiles=y
-Data Pump Data_Options Parameter
Have you ever loaded records into a table with Oracle Data Pump Import, only to have the entire load fail because a few duplicate records were in the import file? Assume that your EMP table has three rows in it. EMPNO is the primary key and here is what a query of the table looks like:
SQL> select empno from emp;
EMPNO
----------
7369
8000
8001
3 rows selected.
When you try to load your Data Pump Export file, the load fails because it contains these three rows and they violate the primary key of the table. The load and the resulting error messages would look like this:
C:\oracle\product\11.1.0\db_1\NETWORK\ADMIN>impdp scott/robert dumpfile=new_emp.dmp directory=data_pump_dir tables=emp table_exists_action=append
Import: Release 11.1.0.6.0 - Production on Tuesday, 01 January, 2008 23:51:32
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SCOTT"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "SCOTT"."SYS_IMPORT_TABLE_01": scott/******** dumpfile=new_emp.dmp directory=data_pump_dir tables=emp table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39152: Table "SCOTT"."EMP" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-31693: Table data object "SCOTT"."EMP" failed to load/unload and is being skipped due to error:
ORA-00001: unique constraint (SCOTT.PK_EMP) violated
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/USER_PREF_STATISTICS
Job "SCOTT"."SYS_IMPORT_TABLE_01" completed with 2 error(s) at 23:51:35
And the resulting table looks just the same:
SQL> select empno from emp;
EMPNO
----------
7369
8000
8001
3 rows selected.
The load of the EMP table has completely failed. Data Pump does not come with a reject-limit clause like SQL*Loader does, so the whole load of the table just fails, even if it's just one record causing the problem.
Oracle 11g solves this problem with a new parameter for use with Data Pump Import: data_options=skip_constraint_errors. When this parameter is used, Data Pump Import will ignore the rows that generate a constraint error (while still not loading them, of course) and will continue to load those rows that do not generate an error. Here is an example of an import using this new parameter:
SQL> select empno from emp;
EMPNO
----------
7369
8000
8001
3 rows selected.
C:\oracle\product\11.1.0\db_1\NETWORK\ADMIN>impdp scott/robert dumpfile=new_emp.dmp directory=data_pump_dir tables=emp table_exists_action=append data_options=skip_constraint_errors
Import: Release 11.1.0.6.0 - Production on Tuesday, 01 January, 2008 23:55:33
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SCOTT"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "SCOTT"."SYS_IMPORT_TABLE_01": scott/******** dumpfile=new_emp.dmp directory=data_pump_dir tables=emp table_exists_action=append data_options=skip_constraint_errors
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39152: Table "SCOTT"."EMP" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "SCOTT"."EMP"    8.187 KB    13 out of 16 rows
3 row(s) were rejected with the following error:
ORA-00001: unique constraint (SCOTT.PK_EMP) violated
Rejected rows with the primary keys are:
 Rejected row #1:
   column EMPNO: 8000
 Rejected row #2:
   column EMPNO: 8001
 Rejected row #3:
   column EMPNO: 7369
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/USER_PREF_STATISTICS
Job "SCOTT"."SYS_IMPORT_TABLE_01" completed with 1 error(s) at 23:55:36
Note in this case that 3 rows were rejected, but the remaining 13 rows were successfully loaded. Here is a select query against the table. This looks much better:
SQL> select empno from emp;
EMPNO
----------
7369
7499
7521
7566
7654
7698
7782
7788
7839
7844
7876
7900
7902
7934
8000
8001
16 rows selected.
There is one main restriction with this feature: the Data Pump import must use the external-table method when loading data. Data Pump can use two methods of importing (or exporting) data, direct path and external tables. When you import using data_options=skip_constraint_errors, Data Pump must use external tables. Normally this will happen by default, so you don't have to worry about it. However, if you do get an error because Oracle Data Pump is not using the external-table method, you can use the undocumented parameter access_method=external_table to try to force Data Pump to use external tables (a hedged example follows).
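A hedged sketch of forcing the external-table method, with all other parameters as in the example above:
impdp scott/robert dumpfile=new_emp.dmp directory=data_pump_dir tables=emp table_exists_action=append data_options=skip_constraint_errors access_method=external_table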
The new Oracle Data Pump parameter data_options also provides for special handling of other types of data-related issues. Use the XML_CLOBS setting of the data_options parameter of the Data Pump Export utility to override the default behavior of Oracle Data Pump, which is to compress the format of the XMLType CLOB. If you use this option, be aware that the XML schemas at the source and destination must be the same.
-The Transportable Parameter:
When the transportable parameter is used in impdp or expdp, only the metadata associated with specific tables, partitions, or sub-partitions is extracted, rather than all metadata. You can then proceed to transport the associated data files as you normally would. You export the data with expdp:
expdp quest/quest DIRECTORY=quest_pump_dir DUMPFILE=quest.dmp tables=QUEST.NAMES transportable=always
You then copy the data files and the dump file set to the destination and plug them into the database. You would use impdp as part of this process to import the metadata into the database, as seen in this example (see also the sketch below):
impdp quest/quest DIRECTORY=quest_pump_dir DUMPFILE=quest.dmp tables=QUEST.NAMES remap_schema=Robert:new_Robert
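A hedged sketch of the plug-in side (the datafile path is illustrative; the source tablespace is typically made read-only before its datafiles are copied, and TRANSPORT_DATAFILES tells impdp where the copied datafiles live):
impdp quest/quest DIRECTORY=quest_pump_dir DUMPFILE=quest.dmp transport_datafiles='/u01/app/oracle/oradata/users01.dbf'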
Enjoy:-)