The WbImport command can be used to import data from text, XML or spreadsheet (ODS, XLS, XLSX) files into a table of the database. WbImport can read the XML files generated by the WbExport command's XML format. It can also read text files created by the WbExport command that escape non-printable characters.
The WbImport command can be used like any other SQL command (such as UPDATE or INSERT), including in scripts that are run in batch mode.
During the import of text files, empty lines (i.e. lines which only contain whitespace) will be silently ignored.
WbImport recognizes certain "literals" to identify the current date or time when converting values from text files to the appropriate data type of the DBMS. Thus, input values like now or current_timestamp for date or timestamp columns are converted correctly. For details on which "literals" are supported, please see the description about editing data.
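For example (using a made-up file and table), a line such as Beeblebrox,Zaphod,now in an input file for a person table would store the current date in the birthday column:
WbImport -file=c:/temp/contacts.txt -table=person -filecolumns=lastname,firstname,birthday;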
The DataPumper can also be used to import text files into a database table, though it does not offer all of the possibilities of the WbImport command.
Archives created with the WbExport command using the -compress=true parameter can be imported using the WbImport command. You simply need to specify the archive file created by WbExport, and WbImport will automatically detect the archive. For an example of how to create and import compressed exports, please refer to compressing export files.
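A minimal sketch (assuming person.zip was created by WbExport with -compress=true and contains a text export for the person table):
WbImport -file=c:/temp/person.zip -type=text -table=person -header=true;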
In order to import Microsoft Excel (XLS, XLSX) or OpenOffice Calc (ODS) files, additional libraries are needed. These are included in the download "Generic package including all optional libraries".
You can tell if the needed libraries are installed by invoking code-completion after typing the -type= parameter. If the types XLS or ODS are presented in the drop down, the libraries are installed.
The Excel import supports XLS and XLSX; it does not support the "SpreadsheetML" format.
Note: to import XLS or XLSX files, the entire file needs to be read into memory. When importing large files this will require a substantial amount of memory.
WbImport supports conditional execution.

WbImport supports the following parameters:

-type
Possible values: text, xml, ods, xls, xlsx
Defines the type of the input file. This is only needed if the input file has a non-standard file extension. If this parameter is not specified, the import type is derived from the input file's extension.
This parameter supports code-completion. If invoked, it will only display available import types.

-mode
Defines how the data should be sent to the database. Possible values are insert, update, insert,update and update,insert.
For some DBMS, the additional modes upsert and insertIgnore are available. Please see the description of the import modes below for details.
This parameter supports code-completion.

-file
Defines the full name of the input file. Alternatively you can also specify a directory (using -sourceDir) from which all files are imported.

-table
Defines the table into which the data should be imported.
This parameter is ignored if the files are imported using the -sourceDir parameter and the target tables are derived from the file names.
This parameter supports code-completion.

-sourceDir
Defines a directory which contains import files. All files from that directory will be imported. If this switch is used with text files and no target table is specified, then it is assumed that each filename (without the extension) defines the target table. If a target table is specified using the -table parameter, all files will be imported into that table.

-extension
When using the -sourceDir parameter, this defines the extension of the files that will be imported.

-ignoreOwner
If the file names imported from the directory specified with -sourceDir contain the owner (schema) information, this owner (schema) information can be ignored using this parameter. Otherwise the files might be imported into a wrong schema, or the target tables will not be found.

-excludeFiles
Using -excludeFiles, files from the source directory (when using -sourceDir) can be excluded from the import. The value for this parameter is a comma separated list of partial names. Each file that contains at least one of the values supplied in this parameter is ignored.

-skipTargetCheck
Normally WbImport checks if the specified target table exists. However, if the check fails even though the table is present (e.g. because the JDBC driver does not report it), the check can be disabled using -skipTargetCheck=true.

-checkDependencies
When importing more than one file (using the -sourceDir parameter) into tables with foreign key constraints, this parameter can be used to import the files in the correct order, based on the foreign key dependencies between the tables.

-commitEvery
If your DBMS needs frequent commits to improve performance and reduce locking on the import table, you can control the number of rows after which a COMMIT is sent to the server.
When using batch import and your DBMS requires frequent commits to improve import performance, the -commitBatch option should be used instead.
You can turn off the use of a commit or rollback during import completely by using the option -transactionControl=false.
Note that using -commitEvery means that rows committed before an error occurred cannot be rolled back, possibly leaving the data in an inconsistent state.

-transactionControl
Possible values: true, false
Controls if SQL Workbench/J handles the transaction for the import, or if the import must be committed (or rolled back) manually. If -transactionControl=false is used, SQL Workbench/J will send neither a COMMIT nor a ROLLBACK at the end of the import.

-continueOnError
Possible values: true, false
This parameter controls the behavior when errors occur during the import. If set to true, the import continues even if an error occurs while importing a row.
The default value for this parameter can be controlled in the settings file and it will be displayed if you run WbImport without any parameters.
With PostgreSQL, -continueOnError will only work if the use of savepoints is enabled (see -useSavepoint).

-emptyFile
This parameter controls the behavior when an empty file (i.e. with a length of zero bytes) is used for the input file.

-useSavepoint
Possible values: true, false
Controls if SQL Workbench/J guards every insert or update statement with a savepoint, to recover from individual errors during the import when -continueOnError is used.
Using a savepoint for each DML statement can drastically reduce the performance of the import.

-keyColumns
Defines the key columns for the target table. This parameter is only necessary if the import is running in update mode.
It is assumed that the values for the key columns will never be NULL.
This parameter is ignored if files are imported using the -sourceDir parameter.

-ignoreIdentityColumns
Possible values: true, false
Controls if identity or auto-increment columns will be included in the import. If this is used, the JDBC driver must correctly report the column to be excluded as an AUTOINCREMENT column. This can be verified in the table definition display of the DbExplorer.

-overrideIdentity
Controls the option to override identity columns, for DBMS that support the standard SQL overriding clause (OVERRIDING SYSTEM VALUE / OVERRIDING USER VALUE) in an INSERT statement.
No check is done if the target DBMS supports the option. The default is to not use any overriding option.

-schema
Defines the schema into which the data should be imported. This is necessary for DBMS that support schemas, if you want to import the data into a different schema than the current one.

-encoding
Defines the encoding of the input file (and possible CLOB files).
If code-completion is invoked for this parameter, it will show a list of encodings defined through a configuration property.

-deleteTarget
Possible values: true, false
If this parameter is set to true, data from the target table will be deleted (using a DELETE statement) before the import is started.
This parameter is ignored for spreadsheet imports.

-truncateTable
Possible values: true, false
This is essentially the same as -deleteTarget=true, but uses the TRUNCATE command to delete the contents of the table. For DBMS that support it, this is usually faster than a regular DELETE.

-batchSize
A numeric value that defines the size of the batch queue. Any value greater than 1 will enable batch mode. If the JDBC driver supports this, the INSERT (or UPDATE) performance can be increased drastically.
This parameter will be ignored if the driver does not support batch updates or if the mode is not a plain insert or update (batching cannot be used with combined modes such as insert,update).

-commitBatch
Possible values: true, false
If using batch execution (by specifying a batch size using the -batchSize parameter) and this parameter is set to true, a COMMIT will be sent to the server after each executed batch.
When you specify -commitBatch, the -commitEvery parameter is ignored.

-updateWhere
When using update mode, an additional WHERE condition can be defined with this parameter. It will be appended (using AND) to the WHERE clause of the generated UPDATE statement. See the examples below for details.

-startRow
A numeric value to define the first row to be imported. Any row before the specified row will be ignored. The header row is not counted to determine the row number. For a text file with a header row, the physical line 2 is row 1 (one) for this parameter.
When importing text files, empty lines in the input file are silently ignored and do not add to the count of rows for this parameter. So if your input file has two lines to be ignored, then one empty line and then another line to be ignored, -startRow=4 has to be specified.

-endRow
A numeric value to define the last row to be imported. The import will be stopped after this row has been imported. When you specify -startRow=10 and -endRow=20, 11 rows will be imported (i.e. rows 10 to 20). If this is a text file import with a header row, this would correspond to the physical lines 11 to 21 in the input file, as the header row is not counted.

-columnFilter
This defines a filter on column level that selects only certain rows from the input file to be sent to the database. The filter has to be defined as column1="regex",column2="regex". Only rows matching all of the supplied regular expressions will be included in the import.
This parameter is ignored when the -sourceDir parameter is used.

-badFile
Defines a file to which rejected rows are written. If a file with that name exists, it will be deleted when the import for the table is started. The file will not be created unless at least one record is rejected during the import. The file will be created with the same encoding as indicated for the input file(s).

-maxLength
With the parameter -maxLength you can truncate data for character columns to a maximum length before they are sent to the database.
The parameter defines the maximum length for certain columns using the following format: -maxLength='firstname=30,lastname=20'

-booleanToNumber
Possible values: true, false
In case you are importing a boolean column (containing "true", "false") into a numeric column in the target DBMS, SQL Workbench/J will automatically convert the literal true to the numeric value 1 (one) and the literal false to the numeric value 0 (zero).
To store different values than 0/1 in the target column, use the parameters -numericFalse and -numericTrue.

-numericFalse -numericTrue
These parameters control the conversion of boolean literals into numbers.
If these parameters are used, any text input that is identified as a "false" literal will be stored with the number specified with -numericFalse, and any text input identified as a "true" literal will be stored with the number specified with -numericTrue.
To use -1 for false and 1 for true, use the following parameters: -numericFalse='-1' -numericTrue='1'
These parameters can be combined with -literalsFalse and -literalsTrue.

-literalsFalse -literalsTrue
These parameters control the conversion of boolean literals into boolean values. The two switches define the text values that represent the (boolean) values true and false in the input file.
The value for these switches is a comma separated list of literals that should be treated as the specified value, e.g.: -literalsFalse='false,0' -literalsTrue='true,1'

-dateFormat
The format for date columns. This is only used for text imports and when using -stringDates=true with spreadsheet imports.

-timestampFormat
The format for datetime (or timestamp) columns in the input file. This is only used for text imports and when using -stringDates=true with spreadsheet imports.

-locale
The locale (language) to be used for parsing date and timestamp values. This is only used for text imports and spreadsheet imports using -stringDates=true.
This parameter supports code-completion.

-illegalDateIsNull
If this is set to true, illegal dates (such as February 31st) or malformed dates inside the input file will be treated as a NULL value instead of causing an error.
This is only used for text and spreadsheet imports.

-trimValues
Possible values: true, false
Controls whether leading and trailing whitespace are removed from the input values before they are stored in the database. When used in combination with -emptyStringIsNull=true, a value that consists only of whitespace will be stored as NULL.
The default value for this parameter can be controlled in the settings file and it will be displayed if you run WbImport without any parameters.
Note that input values for non character columns (such as numbers or date columns) are always trimmed before converting them to their target datatype.
This is only used for text and spreadsheet imports.

-emptyStringIsNull
Possible values: true, false
Controls whether input values for character type columns with a length of zero are treated as NULL or as an empty string.
The default value for this parameter is true.
Note that input values for non character columns (such as numbers or date columns) that are empty or consist only of whitespace will always be treated as NULL.
This is only used for text and spreadsheet imports.

-columnExpression
This parameter can be used to define SQL expressions to be used instead of a plain column reference for the INSERT statement. This is useful to apply SQL functions directly on the server to the value retrieved from the input file.
The format is columnname=expression. The position of the input value is denoted by a question mark. The question mark must not be quoted, even if the input is a string/character value.
As an example, this can be used to convert a comma separated string into a Postgres array: -columnExpression="tags=string_to_array(?, ',')"

-constantValues
With this parameter you can supply constant values for one or more columns that will be used when inserting new rows into the database.
The constant values will only be used when inserting rows (e.g. using -mode=insert).
The format of the values is -constantValues="column1=value1,column2=value2".
To specify a function call to be executed, enclose the function call in ${...}, e.g. -constantValues="id=${nextval('id_seq'::regclass)}".
You can also specify a SELECT statement that retrieves the value for the column, e.g. to look up a value in a different table.
The syntax to specify a SELECT statement is similar to a function call: -constantValues="type_id=$@{SELECT type_id FROM contact_type WHERE type_name = $4}". Here $4 refers to the fourth column of the input file, whose value is passed to the SELECT statement.
The parameters for the SELECT statement do not need to be quoted as internally a prepared statement is used. However the values in the input file must be convertible by the JDBC driver. If the input column from the source file is not part of the target table, the value will be passed as a string to the statement. This means that the SQL query should cast the parameter to the appropriate data type if needed, e.g. cast(? as integer).
In addition to a function call or SELECT statement, the following three variables are supported:
_wb_import_file_path - the full path of the current import file
_wb_import_file_name - the name of the current import file (without the directory)
_wb_import_file_dir - the directory in which the current import file is located
Please refer to the examples for more details on the usage.

-insertSQL
Defines the statement to be used for inserting rows. This can be used to use hints or customize the generated INSERT statement. The parameter may only contain the INSERT INTO part of the statement (including optional hints), e.g.:
-insertSQL='INSERT /*+ append */ INTO'

-adjustSequences
Possible values: true, false
For DBMS that support sequences which are associated with a column, this parameter can be used to adjust the next value for the sequence to the maximum value of the imported data. This can also be used to synchronize identity columns for DBMS that allow overriding the generated values.
Currently this is implemented for PostgreSQL, DB2 (LUW), H2 Database and HyperSQL (aka HSQLDB).

-preTableStatement -postTableStatement
This parameter defines a SQL statement that should be executed before the import process starts inserting data into the target table. The name of the current table (when e.g. importing a whole directory) can be referenced using ${table.name}.
To define a statement that should be executed after all rows have been inserted and have been committed, you can use the -postTableStatement parameter.
These parameters can e.g. be used to enable identity insert for MS SQL Server:
-preTableStatement="set identity_insert ${table.name} on"
-postTableStatement="set identity_insert ${table.name} off"
Errors resulting from executing these statements will be ignored. If you want to abort the import in that case, you can specify -ignorePrePostErrors=false.
These statements are only used if more than one table is processed.

-runTableStatementOnError
Possible values: true, false
Controls the execution of the post-table statement in case an error occurred while importing the data. By default the post-table statement is executed even if the import was not successful. If this should not happen, use -runTableStatementOnError=false.

-ignorePrePostErrors
Possible values: true, false
Controls handling of errors for the SQL statements defined through the -preTableStatement and -postTableStatement parameters. If set to true (the default), errors resulting from executing those statements are ignored.

-showProgress
Valid values: true, false, or a numeric value
Controls the update frequency in the status bar (when running in GUI mode). The default is that every 10th row is reported. To disable the display of the progress, specify a value of 0 (zero) or the value false.

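As an illustration of how several of these parameters can be combined (the file and table names are made up), the following command imports a text file in batches of 100 rows, committing after each batch and aborting on the first error:
WbImport -file=c:/temp/orders.txt -table=orders -batchSize=100 -commitBatch=true -continueOnError=false;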
The following parameters are used for text file imports:

-fileColumns
A comma separated list of the table columns in the import file. Each column from the file should be listed with the appropriate column name from the target table. This parameter also defines the order in which those columns appear in the file.
If the file does not contain a header line, or the header line does not contain the names of the columns in the database (or has different names), this parameter has to be supplied. If a column from the input file has no match in the target table, then it should be specified with the name $wb_skip$.
This parameter is ignored when the -sourceDir parameter is used.

-importColumns
Defines the columns that should be imported. If all columns from the input file should be imported (the default), then this parameter can be omitted. If only certain columns should be imported then the list of columns can be specified here. The column names should match the names provided with the -fileColumns switch. The same result can be achieved by providing the columns that should be excluded as $wb_skip$ columns in the -fileColumns list.
This parameter is ignored when the -sourceDir parameter is used.

-delimiter
Defines the character which separates columns in one line. Records are always separated by newlines (either CR/LF or a single LF character) unless -multiLine=true is used.
The character can also be specified as a unicode escape sequence, e.g. -delimiter='\u0009'.
Default value: \t (a tab character)

-columnWidths
In order to import files that do not have a delimiter but a fixed width for each column, this parameter defines the width of each column in the input file. The value for this parameter is a comma separated list, where each element defines the width in characters for each column, e.g.: -columnWidths='custnr=10,accountid=10,region_code=2'
Note that the whole list must be enclosed in quotes as the parameter value contains the equal sign.
If this parameter is given, the -delimiter parameter is ignored.
If you want to import only certain columns, you have to use -fileColumns and -importColumns to select them.

-quoteChar
The character which was used to quote values in which the delimiter is contained. This parameter has no default value. Thus if this is not specified, no quote checking will take place.
The character can also be specified as a unicode escape sequence, e.g. -quoteChar='\u0022'.

-quoteAlways
Possible values: true, false
WbImport will always handle quoted values correctly if a quote character is defined through -quoteChar.

-quoteCharEscaping
Possible values: none, escape, duplicate
Defines how quote characters that appear in the actual data are stored in the input file. You have to define a quote character in order for this option to have an effect. The character defined with the -quoteChar switch will then be imported according to the setting defined by this switch.
If escape is used, it is assumed that quote characters inside the data are escaped with a backslash. If duplicate is used, it is assumed that quote characters inside the data are duplicated.

-multiLine
Possible values: true, false
Enables support for records spanning more than one line in the input file. These records have to be quoted, otherwise they will not be recognized.
If you create your exports with the WbExport command, it is recommended to encode special characters using the -escapeText switch rather than using multi-line records.
The default value for this parameter can be controlled in the settings file and it will be displayed if you run WbImport without any parameters.

-decimal
The decimal symbol to be used for numbers. The default is a dot.

-header
Possible values: true, false
If set to true, indicates that the file contains a header line with the column names for the target table. This will also ignore the data from the first line of the file. If the column names to be imported are defined using the -fileColumns parameter, setting this to true only means that the first line will be ignored.
This parameter is always set to true when the -sourceDir parameter is used.
The default value for this option can be changed in the settings file and it will be displayed if you run WbImport without any parameters.

-decode
Possible values: true, false
This controls the decoding of escaped characters. If the export file was e.g. written with WbExport's escaping enabled, then you need to set -decode=true so that the escape sequences are decoded correctly.

-lineFilter
This defines a filter on the level of the whole input row (rather than for each column individually). Only rows matching this regular expression will be included in the import.
The complete content of the row from the input file will be used to check the regular expression. When defining the expression, remember that the (column) delimiter will be part of the input string of the expression.

-nullString
Defines the string value that is used in the input file to denote a NULL value. If this value is found in a column, a NULL value will be imported for that column.

-blobIsFilename
Possible values: true, false
This is a deprecated parameter; please use -blobType instead.
When exporting tables that have BLOB columns using WbExport into text files, each BLOB will be written into a separate file. The actual column data of the text file will contain the file name of the external file. When importing text files that do not reference external files into tables with BLOB columns, setting this parameter to false will send the content of the BLOB column "as is" to the DBMS. This will of course only work if the JDBC driver can handle the data that is in the BLOB columns of the text file. The default for this parameter is true.
This parameter is ignored if -blobType is also specified.

-blobType
Possible values: file, ansi, base64
Specifies how BLOB data is stored in the input file. If file is used, it is assumed that the column value in the input file contains the name of an external file that holds the BLOB data. For the other two types, the BLOB data is expected to be stored directly in the input file, encoded as a hex (ansi) or base64 string.
If this parameter is specified, -blobIsFilename is ignored.

-clobIsFilename
Possible values: true, false
When exporting tables that have CLOB columns using WbExport with the parameter -clobAsFile=true, the text file will not contain the actual CLOB values but the names of the files containing the CLOB data. Setting -clobIsFilename=true tells WbImport to read the CLOB content from those external files.

-usePgCopy
This parameter has no value; its presence turns the feature on. If this parameter is specified, the input file is sent to the PostgreSQL server using PostgreSQL's JDBC support for COPY.
The specified file(s) must conform to the format expected by PostgreSQL's COPY command. SQL Workbench/J creates a COPY ... FROM STDIN statement and streams the file contents to the server.
As COPY does not support an "update" mode, this parameter can only be used with -mode=insert.
The options used for the COPY statement cannot be configured. Especially the formatting options for dates/timestamps and numbers will have no effect, so the input file must be formatted properly.
All parameters controlling the target table(s), the columns, the source directory and so on still work, including importing directly from a ZIP archive.

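A sketch of a text file import using some of these options (the file, table and format settings are made up for illustration):
WbImport -file=c:/temp/orders.csv -type=text -table=orders -delimiter=';' -quoteChar='"' -header=true -decimal=',';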
WbImport -file=c:/temp/contacts.txt -table=person -filecolumns=lastname,firstname,birthday -dateformat="yyyy-MM-dd";
This imports a file with three columns into a table named person. The first column in the file is lastname, the second column is firstname and the third column is birthday. Values in date columns are formatted as yyyy-MM-dd.
WbImport -file=c:/temp/contacts.txt -table=person -filecolumns=lastname,firstname,$wb_skip$,birthday -dateformat="yyyy-MM-dd";
This will import a file with four columns. The third column in the file does not have a corresponding column in the table person, so it is specified as $wb_skip$ and will not be imported.
WbImport -file=c:/temp/contacts.txt -table=person -filecolumns=lastname,firstname,phone,birthday -importcolumns=lastname,firstname;
This will import a file with four columns where all columns exist in the target table. Only lastname and firstname will be imported. The same effect could be achieved by specifying $wb_skip$ for the last two columns and leaving out the -importcolumns switch. Using -importcolumns is a bit more readable because you can still see the structure of the input file. The version with $wb_skip$ is mandatory if the input file contains columns that do not exist in the target table.
WbImport -file=cust_data.txt -table=customer -filecolumns=custnr,accountid,region_code -columnWidths='custnr=10,accountid=10,region_code=2';
This will import a file with three columns. The first column named custnr is taken from the characters 1-10, the second column named accountid is taken from the characters 11-20, and the third column region_code is taken from the characters 21 and 22.
If you want to import certain rows from the input file, you can use regular expressions:
WbImport -file=c:/temp/contacts.txt -table=person -filecolumns=lastname,firstname,birthday -columnfilter=lastname="^Bee.*",firstname="^Za.*" -dateformat="yyyy-MM-dd";
The above statement will import only rows where the column lastname contains values that start with Bee and the column firstname contains values that start with Za. So Zaphod Beeblebrox would be imported, but Arthur Beeblebrox would not.
If you want to learn more about regular expressions, please have a look at http://www.regular-expressions.info/
If you want to limit the rows that are updated but cannot filter them from the input file using -columnfilter or -linefilter, use the -updatewhere parameter:
WbImport -file=c:/temp/contacts.txt -table=person -filecolumns=id,lastname,firstname,birthday -keycolumns=id -mode=update -updatewhere="source <> 'manual'"
This will update the table PERSON. The generated UPDATE statement would normally be: UPDATE person SET lastname=?, firstname=?, birthday=? WHERE id=?. The table contains entries that are maintained manually (identified by the value 'manual' in the column source) and should not be updated by SQL Workbench/J. By specifying the -updatewhere parameter, the above UPDATE statement will be extended to WHERE id=? AND (source <> 'manual'), thus skipping records that are flagged as manual even if they are contained in the input file.
WbImport -sourceDir=c:/data/backup -extension=txt -header=true
This will import all files with the extension txt located in the directory c:/data/backup into the database. This assumes that each filename indicates the name of the target table.
WbImport -sourceDir=c:/data/backup -extension=txt -table=person -header=true
This will import all files with the extension txt located in the directory c:/data/backup into the table person, regardless of the name of the input file. In this mode, the parameter -deleteTarget will be ignored.
The following statement will import all .txt files from the directory /data/import and store them in the appropriate tables. Each table that is being imported has to have a column named source_file, and the complete path to the import file will be stored in that column (for each imported row).
WbImport -sourceDir=/data/import -header=true -schema=staging -extension=txt -constantValues="source_file=$[_wb_import_file_path]" -type=text;
When your input file does not contain the actual values to be stored in the target table, but e.g. lookup values, you can specify a SELECT statement to retrieve the necessary primary key of the lookup table.
Consider the following tables:
contact (contact_id, first_name, last_name, type_id)
contact_type (type_id, type_name)
The table contact_type contains: (1, 'business'), (2, 'private'), (3, 'other'). Your input file only contains contact_id, first_name, last_name, type_name, where type_name references an entry from the contact_type table.
To import this file, the following statement can be used:
WbImport -file=contacts.txt -type=text -header=true -table=contact -importColumns=contact_id, first_name, last_name -constantValues="type_id=$@{SELECT type_id FROM contact_type WHERE type_name = $4}"
For every row from the input file, SQL Workbench/J will run the specified SELECT statement. The value of the first column of the first row that is returned by the SELECT will then be used to populate the type_id column. The SELECT statement will use the value of the fourth column ($4) of the row that is currently being inserted as the value for the WHERE condition.
You must use the -importColumns parameter as well to make sure the type_name column is not processed! As an alternative you can also use -fileColumns=contact_id, first_name, last_name, $wb_skip$ instead of -importColumns.
Note: the "placeholders" with the column index must not be quoted (e.g. '$1' for a character column will not work)!
If the column contact_id should be populated by a sequence, the above statement can be extended to include a function call to retrieve the sequence value (PostgreSQL syntax):
WbImport -file=contacts.txt -type=text -header=true -table=contact -importColumns=first_name, last_name -constantValues="id=${nextval('contact_id_seq'::regclass)}" -constantValues="type_id=$@{SELECT type_id FROM contact_type WHERE type_name = $4}"
As the ID column is now populated through a constant expression, it may not appear in the -importColumns list. Again, you could alternatively use -fileColumns=$wb_skip$, first_name, last_name, $wb_skip$ to make sure the columns that are populated through the -constantValues parameter are not taken from the input file.
The XML import only works with files generated by the WbExport command.
The XML import supports the following parameters:

-verboseXML
Possible values: true, false
If the XML was generated with -verboseXML=false (by WbExport), this parameter has to be set to false for the import as well.

-sourceDir
Specify a directory which contains the XML files. All files in that directory ending with ".xml" (lowercase!) will be processed. The table into which the data is imported is read from the XML file, as are the columns to be imported. The -table parameter is ignored in this case.
When importing several files at once, the files will be imported into the tables specified in the XML files. You cannot specify a different table (apart from editing the XML file before starting the import).

-importColumns
Defines the columns that should be imported. If all columns from the input file should be imported (the default), then this parameter can be omitted. When specified, the columns have to match the column names available in the XML file.

-createTarget
Possible values: true, false
If this parameter is set to true, the target table will be created if it doesn't exist.

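A minimal sketch of an XML import (the file name is made up; the file is expected to have been created by WbExport):
WbImport -file=c:/temp/person.xml -type=xml -table=person;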
Both spreadsheet imports (Microsoft Excel, OpenOffice) support a subset of the parameters that are used for flat file imports.
These parameters are:
-header
-fileColumns
-importColumns
-nullString
-emptyStringIsNull
-illegalDateIsNull
The spreadsheet import does not support specifying a date or timestamp format unless -stringDates=true is used. It is expected that those columns are formatted in such a way that they can be identified as dates or timestamps.
The spreadsheet import also does not support importing BLOB files that are referenced from within the spreadsheet. If you want to import this kind of data, you need to convert the spreadsheet into a text file.
The spreadsheet import supports the following additional parameters that are not available for the text imports:

-sheetNumber
Selects the sheet inside the spreadsheet file to be imported. If this is not specified, the first sheet is used. The first sheet has the number 1.
All sheets can be imported with a single command by using -sheetNumber=*.
If all sheets are imported, the parameters -table, -fileColumns and -importColumns are ignored.

-sheetName
Defines the name of the sheet inside the spreadsheet file to be imported. If this is not specified, the first sheet is used.

-stringDates
Possible values: true, false
By default WbImport tries to read "native" date and timestamp values from an Excel worksheet. When this parameter is set to true, date and timestamp values are read as strings and converted according to the formats specified with -dateFormat and -timestampFormat.

-recalculateFormulas
Possible values: true, false
By default WbImport will recalculate all formulas in an Excel workbook after loading it. This can be controlled with this parameter.

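A sketch of a spreadsheet import (the file, sheet and table names are made up):
WbImport -file=c:/temp/contacts.xlsx -table=person -sheetName=Contacts -header=true;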
The -mode parameter controls the way the data is sent to the database. The default is INSERT: SQL Workbench/J will generate an INSERT statement for each record. If the INSERT fails, no further processing takes place for that record.
If -mode is set to UPDATE, SQL Workbench/J will generate an UPDATE statement for each row. In order for this to work, the table needs to have a primary key defined, and all columns of the primary key need to be present in the import file. Otherwise the generated UPDATE statement will modify rows that should not be modified. This can be used to update existing data in the database based on the data from the export file.
To either update or insert data into the table, both keywords can be specified for the -mode parameter. The order in which they appear as the parameter value defines the order in which the respective statements are sent to the database. If the first statement fails, the second will be executed. For -mode=insert,update to work properly, a primary or unique key has to be defined on the table. SQL Workbench/J will catch any exception (=error) when inserting a record and will then try updating the record, based on the specified key columns.
The mode -mode=update,insert works the other way round. First SQL Workbench/J will try to update the record based on the primary keys. If the DBMS signals that no rows have been updated, it is assumed that the row does not exist and the record will be inserted into the table. This mode is recommended when no primary or unique key is defined on the table, and an INSERT would always succeed.
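For example (with made-up file and column names), to update existing rows and insert the ones that are not yet present:
WbImport -file=c:/temp/contacts.txt -table=person -filecolumns=id,lastname,firstname,birthday -keycolumns=id -mode=update,insert -dateformat="yyyy-MM-dd";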
The key columns defined with the -keycolumns parameter don't have to match the real primary key, but they should identify one row uniquely. You cannot use the update mode if the tables in question consist only of key columns (or if only key columns are specified): the values from the source are used to build up the WHERE clause for the UPDATE statement, so there would be nothing left to update.
If you specify a combined mode (e.g. update,insert) and one of the tables involved consists only of key columns, the import will revert to insert mode. In this case database errors during an INSERT are not considered real errors and are silently ignored.
For maximum performance, choose the update strategy that will result in a successful first statement more often. As a rule of thumb:
Use -mode=insert,update if you expect more rows to be inserted than updated.
Use -mode=update,insert if you expect more rows to be updated than inserted.
To use insert/update or update/insert with PostgreSQL, make sure you have enabled savepoints for the import (which is enabled by default).
When using a DBMS that supports an "update or insert" functionality directly, this can be selected using -mode=upsert. In this case SQL Workbench/J will only use a single statement instead of the two statements described in the previous chapter. The advantage of this mode over e.g. insert,update is that fewer statements are sent to the database, and that it supports the use of batching, which is not possible with insert,update.
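For example (with made-up names; the target DBMS must be one of those listed below):
WbImport -file=c:/temp/contacts.txt -table=person -mode=upsert -batchSize=100;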
For the following database systems, native UPSERT is available:
PostgreSQL 9.5, using INSERT ... ON CONFLICT: http://www.postgresql.org/docs/9.5/static/sql-insert.html
Firebird 2.1, using UPDATE OR INSERT: http://www.firebirdfaq.org/faq220/
H2 Database, using MERGE INTO: http://www.h2database.com/html/grammar.html#merge
Oracle, Microsoft SQL Server, HSQLDB 2.x and DB2 (LUW and z/OS), using a MERGE statement
SQL Anywhere, using INSERT ... ON EXISTING UPDATE (this requires a primary key)
SQLite, using INSERT OR REPLACE ... (this requires a primary key)
SAP HANA, using an UPSERT statement
MySQL, using INSERT ... ON DUPLICATE: http://dev.mysql.com/doc/refman/5.1/en/insert-on-duplicate.html. As MySQL does not allow specifying the key columns for the "ON DUPLICATE" part, this is only supported when the table has a primary key.
The mode -mode=insertIgnore will use the built-in feature of the DBMS to (silently) ignore inserts that would result in a violation of a unique key constraint, but not update existing rows. Using -mode=insertIgnore has the same effect as using -mode=insert -continueOnError=true, but will perform better (especially when many collisions are expected) because this mode can be combined with batching and does not require the use of savepoints (e.g. for Postgres).
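For example (with made-up names; the target DBMS must be one of those listed below):
WbImport -file=c:/temp/contacts.txt -table=person -mode=insertIgnore -batchSize=100;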
This mode is supported for the following DBMS:
PostgreSQL 9.5, using INSERT ... ON CONFLICT: http://www.postgresql.org/docs/9.5/static/sql-insert.html
Oracle, using the IGNORE_ROW_ON_DUPKEY_INDEX hint: https://docs.oracle.com/cd/E11882_01/server.112/e41084/sql_elements006.htm#CHDEGDDG
Microsoft SQL Server, HSQLDB 2.x and DB2 (LUW and z/OS), using a MERGE statement without a WHEN MATCHED clause
SQLite, using INSERT OR IGNORE ... (this requires a primary key)
SQL Anywhere, using INSERT ... ON EXISTING SKIP (this requires a primary key)
MySQL, using INSERT ... ON DUPLICATE: http://dev.mysql.com/doc/refman/5.1/en/insert-on-duplicate.html with a dummy update setting one column to its current value. As MySQL does not allow specifying the key columns for the "ON DUPLICATE" part, this is only supported when the table has a primary key.