sqoop-import(1)
===============

NAME
----
sqoop-import - Import a table from an RDBMS to HDFS

SYNOPSIS
--------
'sqoop-import' <generic-options> <tool-options>

'sqoop import' <generic-options> <tool-options>

DESCRIPTION
-----------

include::../user/import-purpose.txt[]

OPTIONS
-------

The +--connect+ and +--table+ options are required.
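
A minimal invocation therefore needs only those two options. A sketch
(the JDBC connect string, host, and table name are placeholders):

----
$ sqoop import --connect jdbc:mysql://db.example.com/corp --table EMPLOYEES
----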

include::common-args.txt[]

Import control options
~~~~~~~~~~~~~~~~~~~~~~

--append::
Append data to an existing HDFS dataset

--as-sequencefile::
Imports data to SequenceFiles

--as-textfile::
Imports data as plain text (default)

--columns (col,col,col...)::
Columns to import from the table

--direct::
Use direct import fast path (MySQL only)

--direct-split-size (n)::
Split the input stream every 'n' bytes when importing in direct mode

--inline-lob-limit (n)::
Set the maximum size for an inline LOB

--num-mappers (n)::
-m::
Use 'n' map tasks to import in parallel

--query (statement)::
Imports the results of +statement+ instead of a table

--split-by (column-name)::
Column of the table used to split the table for parallel import

--table (table-name)::
The table to import

--target-dir (dir)::
Explicit HDFS target directory for the import

--warehouse-dir (dir)::
Tables are uploaded to the HDFS path +/warehouse/dir/(tablename)/+

--where (clause)::
Import only the rows for which _clause_ is true,
e.g.: `--where "user_id > 400 AND hidden = 0"` (see the combined
example after this list)

--compress::
-z::
Uses gzip to compress data as it is written to HDFS
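
Several of these options are often combined in one command. A sketch
(the connect string, table, columns, and target directory are
placeholder values):

----
$ sqoop import --connect jdbc:mysql://db.example.com/corp \
    --table EMPLOYEES --columns "emp_id,first_name,salary" \
    --where "salary > 40000" \
    --split-by emp_id --num-mappers 4 \
    --target-dir /shared/employee-import
----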

include::output-args.txt[]

include::input-args.txt[]

include::hive-args.txt[]

include::codegen-args.txt[]

Database-specific options
~~~~~~~~~~~~~~~~~~~~~~~~~

Additional arguments may be passed to the database manager
after a lone '--' on the command line.

In MySQL direct mode, additional arguments are passed directly to
mysqldump.
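
For instance, to forward an extra option to mysqldump during a direct
import (the connect string and table are placeholders;
`--default-character-set` is a standard mysqldump option):

----
$ sqoop import --connect jdbc:mysql://db.example.com/corp --table EMPLOYEES \
    --direct -- --default-character-set=latin1
----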

ENVIRONMENT
-----------

See 'sqoop(1)'

////
Licensed to Cloudera, Inc. under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
Cloudera, Inc. licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
////