From 7c6b8b3e7d125220724be3dc29d00b8fd36e8172 Mon Sep 17 00:00:00 2001 From: Arvind Prabhakar Date: Mon, 3 Oct 2011 20:55:13 +0000 Subject: [PATCH] SQOOP-355. Improve Sqoop Documentation for Avro data file support. (Doug Cutting via Arvind Prabhakar) git-svn-id: https://svn.apache.org/repos/asf/incubator/sqoop/trunk@1178574 13f79535-47bb-0310-9956-ffa450edef68 --- src/docs/user/basics.txt | 2 +- src/docs/user/import-purpose.txt | 2 +- src/docs/user/import.txt | 16 +++++++++++----- src/docs/user/saved-jobs.txt | 6 +++--- 4 files changed, 16 insertions(+), 10 deletions(-) diff --git a/src/docs/user/basics.txt b/src/docs/user/basics.txt index f355f63c..3c22eaf9 100644 --- a/src/docs/user/basics.txt +++ b/src/docs/user/basics.txt @@ -29,7 +29,7 @@ process is a set of files containing a copy of the imported table. The import process is performed in parallel. For this reason, the output will be in multiple files. These files may be delimited text files (for example, with commas or tabs separating each field), or -binary SequenceFiles containing serialized record data. +binary Avro or SequenceFiles containing serialized record data. A by-product of the import process is a generated Java class which can encapsulate one row of the imported table. This class is used diff --git a/src/docs/user/import-purpose.txt b/src/docs/user/import-purpose.txt index 43690be3..16fc2562 100644 --- a/src/docs/user/import-purpose.txt +++ b/src/docs/user/import-purpose.txt @@ -22,5 +22,5 @@ The +import+ tool imports an individual table from an RDBMS to HDFS. Each row from a table is represented as a separate record in HDFS. Records can be stored as text files (one record per line), or in -binary representation in SequenceFiles. +binary representation as Avro or SequenceFiles. diff --git a/src/docs/user/import.txt b/src/docs/user/import.txt index 24878b43..670c72fa 100644 --- a/src/docs/user/import.txt +++ b/src/docs/user/import.txt @@ -344,11 +344,17 @@ manipulated by custom MapReduce programs (reading from SequenceFiles is higher-performance than reading from text files, as records do not need to be parsed). -By default, data is not compressed. You can compress -your data by using the deflate (gzip) algorithm with the +-z+ or -+\--compress+ argument, or specify any Hadoop compression codec using the -+\--compression-codec+ argument. This applies to both SequenceFiles or text -files. +Avro data files are a compact, efficient binary format that provides +interoperability with applications written in other programming +languages. Avro also supports versioning, so that when, e.g., columns +are added or removed from a table, previously imported data files can +be processed along with new ones. + +By default, data is not compressed. You can compress your data by +using the deflate (gzip) algorithm with the +-z+ or +\--compress+ +argument, or specify any Hadoop compression codec using the ++\--compression-codec+ argument. This applies to SequenceFile, text, +and Avro files. Large Objects ^^^^^^^^^^^^^ diff --git a/src/docs/user/saved-jobs.txt b/src/docs/user/saved-jobs.txt index bff4d9fa..16686771 100644 --- a/src/docs/user/saved-jobs.txt +++ b/src/docs/user/saved-jobs.txt @@ -304,8 +304,8 @@ This would run a MapReduce job where the value in the +id+ column of each row is used to join rows; rows in the +newer+ dataset will be used in preference to rows in the +older+ dataset. -This can be used with both SequenceFile- and text-based incremental -imports. The file types of the newer and older datasets must be the -same. 
+This can be used with SequenceFile-, Avro-, and text-based
+incremental imports. The file types of the newer and older datasets
+must be the same.
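As a minimal sketch of the import behavior this patch documents, a table could be imported as compressed Avro data files with a command along the following lines; the connect string, table name, and target directory are hypothetical placeholders, while +--as-avrodatafile+ and +--compress+ are the options discussed above:

----
$ sqoop import \
    --connect jdbc:mysql://db.example.com/corp \
    --table EMPLOYEES \
    --as-avrodatafile \
    --compress \
    --target-dir /user/example/employees_avro
----

The resulting part files in the target directory are Avro data files that carry an embedded schema generated from the table's column definitions, which is what lets applications written in other languages read them and lets previously imported files be processed alongside newer ones.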
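The merge behavior described in the saved-jobs.txt hunk can likewise be sketched with the standalone merge tool; the directory names, jar file, and class name below are hypothetical, while the flags are the merge tool's documented arguments:

----
$ sqoop merge \
    --new-data /user/example/employees_newer \
    --onto /user/example/employees_older \
    --target-dir /user/example/employees_merged \
    --jar-file EMPLOYEES.jar \
    --class-name EMPLOYEES \
    --merge-key id
----

Rows whose +id+ value appears in both datasets are resolved in favor of the newer import, as described above, and both input datasets must use the same file type.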